Title: Optimizing Noise Distributions for Differential Privacy
Paper Decision: Accept (poster)
Summary: The paper studies non-canonical (i.e., not Laplace or Gaussian) noise distributions for answering $d$ queries under $(\varepsilon, \delta)$-DP. It casts the overall problem as follows: the user provides $\delta$, $d$, and the sensitivity and error constraint $\sigma$ for each query. Then the provided algorithm formulates this as a convex optimization problem over discrete (or "morally" discrete) Renyi DP noise distributions that can be represented as finite real vectors and applies a carefully chosen optimization algorithm to solve for a distribution achieving the minimal possible $\varepsilon$. Depending on the provided user inputs, the resulting algorithm can produce distributions that achieve noticeable improvements over Laplace and Gaussian noise.

## update after rebuttal

I've increased my score to weak reject. The author response resolved my questions about parameters, numerical stability, and the relationship to the staircase mechanism (and I'd suggest that the authors add these discussions to the next version of the paper). I don't think it's clearly wrong for the paper to be accepted. However, to me the small degree of improvement, the opacity of the eventual distributions, and the need to use connect-the-dots accounting for each new problem instance limit how useful the results are theoretically or practically.

Claims And Evidence: The abstract claims "significant[ly]" better privacy guarantees than the Laplace and Gaussian distributions in some parameter settings. "Significant" is subjective, but the improvements appear to range from ~0 to 9%, with the latter occurring in a somewhat narrow portion of the parameter space, and with some further questions about baselines (see boxes below). I think the significance is questionable.

Methods And Evaluation Criteria: The form of the experiments provided, mostly plots involving some combination of $d$, $\varepsilon$, $\delta$, and $\sigma$, seems reasonable.

Theoretical Claims: I didn't check any of the proofs. The basic ideas seem reasonable.

Experimental Designs Or Analyses: The paper omits a few things that I think should be discussed.

1) How is $N$, the support size of the distribution, chosen? What is its value in the experiments? What about $r$, $K$, and $T$?

2) How easy/fast are these algorithms to run relative to their simple Laplace and Gaussian counterparts? Runtime doesn't appear to be discussed anywhere (or I missed it).

3) If I remember correctly, the staircase mechanism dominates the Laplace mechanism, particularly for large $\varepsilon$. The paper claims that its optimization recovers the staircase mechanism in the single composition regime but otherwise omits it. It seems like the staircase mechanism should replace the Laplace mechanism in the experiments, especially since the figures show the largest improvements at large $\varepsilon$, where the staircase mechanism should also improve over the Laplace mechanism. Concretely: is the returned distribution at these settings just the staircase mechanism?

4) Can you say a bit more about possible numerical issues? The optimization problem described in Theorem 3.6 looks like it may have under/overflow issues. This is especially relevant because the DP guarantee of the algorithm is entirely dependent on the optimization working correctly.

Supplementary Material: N/A

Relation To Broader Scientific Literature: The general idea of solving an optimization problem to find a private additive noise distribution is (as the paper notes) not new. To the best of my knowledge, this formulation in terms of ~finite discrete distributions has not been studied before. I think the paper is at least moderately novel.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: (See other responses.)

Other Comments Or Suggestions: (See other responses.)

Questions For Authors: In addition to questions 1)-4) written above, I'll ask: 5) What do the authors think this algorithm "tells us" about DP? It returns what the authors claim is a new and better kind of distribution, but I'm curious what form the distribution takes. In addition to the question above about its relationship to the staircase mechanism, it would be interesting to see approximately what shape the distribution has, or other properties that might explain its improvement over baselines. Without that, it's hard to say how just knowing that the distributions exist improves our understanding of DP.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback. Below are our responses to their concerns. **The abstract claims "significant[ly]":** Since "significant" is subjective, we will clearly specify the gains in the abstract if accepted. We addressed our framework's practicality in our response to reviewer WDdd and kindly refer the reviewer to that discussion (see our responses to all their questions). **Q1)** In our framework, $N$ is the number of bins before the geometric tail begins, resulting in a probability vector of length $N+1$. $\Delta$ is the bin width, so the geometric tails start beyond $-N\Delta$ and $+N\Delta$. Figure 4 provides the values of $\Delta, r$, and $N$ for a specific distribution. Our rule of thumb for selecting these parameters is to choose a small $\Delta$ (around 0.05) and set $N$ such that $N \Delta$ is about 20 times the noise’s standard deviation to capture its behavior effectively. For $r$, we select a value close to 1 (e.g., 0.9999) to ensure the noise has a heavy tail, as a light tail may fail to satisfy the pure DP constraint. In Algorithm 3, $T$ refers to the time step for updating $\alpha$. Specifically, every $T$ iterations, $\alpha$ is updated to optimize the moments accountant. Based on our observations, a $T$ between 10 and 20 is sufficient to determine $\alpha$. We set the total number of iterations, $K$, to 5,000, but with preconditioning, convergence typically stabilizes around 2,000 iterations based on our observations. We will include these details in the final version. **Q2)** Our method is computationally efficient due to the convexity of the optimization and the use of preconditioned gradient descent, achieving an optimal distribution in 18.7 s ± 431 ms (mean ± std). While our method requires additional computation during the optimization phase compared to Laplace or Gaussian, this is a one-time cost. 
Once completed, sampling is fast and straightforward due to its discrete nature using inverse CDF. Specifically, for sampling 50,000 times, our noise takes 4.61 ms ± 94.2 µs, as compared to 3.37 ms ± 980 µs for Gaussian (numpy.random.normal) and 3.2 ms ± 877 µs for Laplace (numpy.random.laplace). All runtime experiments were conducted on Google Colab's CPU environment without GPU acceleration. If accepted, we will include more detailed runtime comparisons. **Q3)** The Staircase mechanism is optimal in the single composition setting for pure DP ($\delta = 0$). Since we did not consider the $\delta = 0$ case in our experiments, we compared our noise to Laplace instead. If accepted, we will compute the composed $(\epsilon, \delta)$ values for Staircase, compare them with our noise, and add figures showing how our distributions recover this mechanism. We have addressed why our framework can recover the Staircase in our response to reviewer TUrb (Q2 and Q3) and kindly ask the reviewer to refer to that rebuttal for more details. **Q4)** Since Gaussian is a straightforward baseline, our algorithm starts from a Gaussian approximation to ensure the initial point is feasible. The main challenge is preventing noise parameters $(p_0, \dots, p_N)$ from approaching zero, as Renyi DP requires full support to avoid instability. This leads to slow convergence since it requires careful steps to stay within the positive orthant. To address this, we introduced a **novel preconditioning approach** in Section 4 (end of page 7, start of page 8) to stabilize the objective, prevent numerical issues, and accelerate convergence. As a result, we do not encounter overflow or underflow issues. In particular, the iterates on the optimization variable stay feasible while continually improving the objective, after starting from a point close to the Gaussian as mentioned above. 
This ensures that, even if the algorithm does not fully converge (which empirically it almost always does), it will achieve a good solution. We hope this addresses the reviewer's concern. If not, we would appreciate further clarification. **Q5)** We appreciate this query. Our distribution is setting-dependent and adapts to approach the optimal shape in each regime. The existence of fundamentally different optimal distributions, such as the monotone Staircase and non-monotone Cactus, highlights the difficulty of assessing a noise distribution’s utility-privacy tradeoff based solely on its appearance. Supporting both types demonstrates our framework’s generality and optimality. The shape of our optimal distribution combines the characteristics of different distributions depending on the setting. Figure 2 implicitly illustrates how the shape of our optimal distribution transitions between Laplace and Gaussian as cost or composition increases; specifically, our distribution resembles Laplace when Laplace is close to optimal; similarly for Gaussian. We already included an example of an optimized distribution in Figure 4. We will add more illustrations of optimal distributions in the final version. --- Rebuttal Comment 1.1: Comment: Q1) Thanks for clarifying the parameter ranges. Q2) Thanks for the timing results. Q3) I understand that the optimization given by this paper can recover the Staircase mechanism (or a close approximation to it). However, to my reading, the current version of the paper is unclear about whether it is recovering the Staircase mechanism in the multiple composition setting. That matters because recovering a known noise distribution is less novel than identifying a new one. 
The suggested experiment computing the $(\varepsilon, \delta)$-DP values and adding it as a comparison would clarify this point -- if the recovered distribution is clearly better than Staircase, that would be pretty good evidence -- but without it, I think the paper is missing a necessary baseline. Q4) I follow how a step like preconditioning is necessary to ensure a Renyi DP guarantee. However, I don't see how it mitigates issues like: based on the answer to Q1, $r \approx 1$ and $N \approx 400\sigma$. AFAIK, the typical range of $\alpha$ used in RDP conversions is $[1, 10]$. The condition in Equation 23 of the optimization problem includes terms like $r^{\alpha N}$. Maybe having an exponent in the thousands in this condition is fine for some reason, but it's not obvious to me at first look. As mentioned in the initial review, since the DP guarantee hinges on the optimization, this seems worth being careful about. Alternatively, if the DP guarantee of the obtained distribution can be verified in a more obviously stable way after the fact, that would mitigate this concern. Q5) I appreciate that the flexibility of this method is a point in its favor. But I think a possible weakness is that, since the distributions are obtained from a fairly opaque optimization process, they don't tell us much about where the improvement "comes from". --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer's feedback. Here is our response to their concern. **Q3)** We appreciate the reviewer’s comment. Our observation of the optimal noise distributions in the multiple composition regime indicates that these distributions do not resemble the Staircase mechanism in this setting (see, for example, Figure 4)—we are indeed discovering new distributions. In the discrete setting with sensitivity 1, the Laplace and Staircase mechanisms are identical, resulting in the same privacy curve. 
The right panel of Figure 3 compares discrete mechanisms for sensitivity 1 (Laplace = Staircase, Gaussian, and our noise) over 10 compositions. In this figure, since our mechanism clearly outperforms Laplace, it follows that our derived noise is fundamentally different from the Staircase in this setting. **For the continuous case, we direct the reviewer’s attention to the plot available at this link: https://drive.google.com/file/d/1-Tv_fsx-82FQZgptxa23BqwPJAcKMUo6/view?usp=sharing. This plot compares the privacy curves of the Laplace and Staircase mechanisms under the same setting as the left panel of Figure 3 in our paper (10 compositions, standard deviation of 5, sensitivity of 1). This plot clearly illustrates why we chose to benchmark against the Laplace mechanism rather than the Staircase mechanism in the multiple composition regime. As shown, Laplace outperforms Staircase in this setting. Since our noise distribution outperforms Laplace, this confirms that our noise is not only different from but also an improvement over the Staircase mechanism. We would also like to highlight that computing the privacy guarantees under composition for the Staircase mechanism cannot be done in closed form and must be done numerically. As far as we can tell, we are the first to consider this.** As mentioned earlier, we will include a direct comparison with the Staircase mechanism across a wider range of parameters in the final version of the paper. **Q4)** As mentioned, $r$ is chosen to be very close to 1, ensuring that raising it to large powers remains manageable. Additionally, we initialize our optimization with an approximation of a Gaussian, which serves as a feasible starting point. As explained in the "Optimization for $\alpha$" part on page 7, we also initialize our $\alpha$ with the optimal $\alpha$ for the Gaussian. 
From this starting point, our optimization continually improves the objective and avoids moving in a direction that would result in an infinite objective value. Moreover, we emphasize that the DP guarantees presented in our paper do not stem from our algorithm itself, but from the state-of-the-art privacy accountant, Connect-the-Dots. We derive an optimal noise distribution from our algorithm and compute its privacy guarantees using this established method, which further confirms that the optimization algorithm is functioning correctly. **Q5)** We would offer the following analogy to illustrate the value of our approach: neural networks have to be trained by solving an optimization problem. Just like our problem, there is simply no way to derive the parameters in an entirely theoretical manner. This is not to say that neural networks take the theory, or human input, out of the picture: the loss function, the architecture of the network, the optimization algorithm, etc. are all crucial pieces that can be understood theoretically and contribute to the impact of neural networks, even though the optimization itself is somewhat opaque. In this analogy, the loss function, architecture, and optimization choices correspond to our objective function being derived from the Moments Accountant bound using Renyi DP, the way we parameterize the distribution (piecewise constant with infinite geometric tails that maintain DP without contributing much to it), and the preconditioned gradient descent algorithm. All of these choices draw from a theoretical understanding of the setting.
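On the sampling procedure discussed in Q2 of this thread: inverse-CDF sampling from a discrete noise distribution reduces to a cumulative sum and a binary search. The sketch below is illustrative only; the grid, bin width, and Gaussian-shaped probability vector are placeholder choices, not the paper's optimized distribution or its actual parameters.

```python
import numpy as np

def sample_discrete_noise(probs, delta, size, rng=None):
    """Inverse-CDF sampling from a discrete distribution supported on the
    symmetric grid {-N*delta, ..., 0, ..., +N*delta} (placeholder setup)."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = (len(probs) - 1) // 2
    support = delta * np.arange(-n, n + 1)
    cdf = np.cumsum(probs)
    u = rng.random(size)
    idx = np.searchsorted(cdf, u)          # binary search: inverse CDF
    idx = np.minimum(idx, len(probs) - 1)  # guard against float round-off
    return support[idx]

# Placeholder: a discretized, truncated Gaussian-shaped probability vector.
delta, n = 0.05, 400
grid = delta * np.arange(-n, n + 1)
p = np.exp(-grid**2 / (2 * 5.0**2))
p /= p.sum()
samples = sample_discrete_noise(p, delta, size=50_000)
```

Drawing 50,000 samples this way takes only milliseconds, consistent with the timings the authors report.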
Summary: This paper addresses the optimization of noise distributions under the RDP framework. Compared to classic approaches, such as the Laplace or Gaussian mechanisms, the derived distribution achieves a lower overall cost.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: Yes.

Experimental Designs Or Analyses: Yes.

Supplementary Material: Yes.

Relation To Broader Scientific Literature: See the pros below.

Essential References Not Discussed: No.

Other Strengths And Weaknesses: Pros: The problem is important, and the proposed approach is compelling. Cons: Please refer to my questions for further details.

Other Comments Or Suggestions: See the questions below.

Questions For Authors:

1. From the numerical results presented in the figures, it appears that there is little difference for smaller values of $\epsilon$ (e.g., $\epsilon < 2$) compared to the Gaussian distribution. Meanwhile, adding Gaussian noise may be more straightforward for statistical inference or uncertainty quantification (for example, when constructing confidence intervals). Could the authors provide further justification for adopting the proposed distribution instead?

2. It might be more illustrative if the authors included an example to demonstrate the advantages of their method. For instance, it would be helpful to see a specific problem where the variance of the new privacy-preserving estimator is clearly lower than that of a traditional estimator.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate the reviewer's thoughtful feedback. Below, we provide detailed responses to their concerns. **From the numerical results presented in the figures, it appears that there is little difference for smaller values of ϵ (e.g., ϵ<2) compared to the Gaussian distribution. Meanwhile, adding Gaussian noise may be more straightforward for statistical inference or uncertainty quantification (for example, when constructing confidence intervals). Could the authors provide further justification for adopting the proposed distribution instead?** We appreciate the reviewer’s careful consideration of our work. If the reviewer is referring to Figure 3 in our paper, where there is a small difference for $\epsilon<2$ , we agree with the reviewer regarding this specific plot. However, we would like to highlight that the results in this plot are sensitive to the cost threshold ($\sigma^2$) and the number of compositions. In certain settings, our noise mechanism can achieve an improvement over Gaussian noise even for $\epsilon$ values less than 2. For example, if we compare our optimized noise, designed for 10 compositions and $\delta = 10^{-6}$, against Gaussian and Laplace distributions, all with the same standard deviation of 8, at the target $\delta = 10^{-6}$, our noise achieves an $\epsilon$ of 1.62, compared to 1.76 for Laplace and 1.74 for Gaussian noise. Replacing the Gaussian or Laplace distributions with our optimized noise yields improvements of 6.89\% and 7.95\%, respectively, in the $\epsilon$ value. So, in general, it is not true that Gaussian noise should always be preferred for $\epsilon$ values less than 2. From the perspective of sampling complexity, we note that classical inverse CDF sampling methods can be applied straightforwardly to our optimal noise distribution. Thus, while our optimal noise distribution provides a better privacy-utility tradeoff than Gaussian noise in appropriate settings, its practical use remains efficient. 
We have included sampling time comparisons between our noise and Gaussian in the rebuttal for reviewer zWjf (response to Q2) and kindly encourage the reviewer to refer to that for further details. We have provided additional justification for why our noise should be adopted in the rebuttal for Reviewer WDdd, and we kindly ask the reviewer to refer to that section for more details (please see our responses to all questions from that reviewer). **It might be more illustrative if the authors included an example to demonstrate the advantages of their method. For instance, it would be helpful to see a specific problem where the variance of the new privacy-preserving estimator is clearly lower than that of a traditional estimator.** We thank the reviewer for this insightful comment. In our rebuttal to reviewer WDdd, we have provided details on the practicality of our framework, including applicable datasets (response to Q2) and an example illustrating how our method reduces estimator variance (response to Q3). We kindly ask the reviewer to refer to that rebuttal for further clarification. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response and their efforts to improve this manuscript. After carefully reading the response to me and other reviewers, I think this manuscript indeed provides some novel results about privacy mechanisms, although it may lack some intuitive or direct motivation/application that can replace the traditional noise, such as Gaussian noise. It is a difficult decision, and for now, I have to keep the scores. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for recognizing the novelty of our work. We appreciate the thoughtful feedback and the opportunity to clarify and elaborate on the contributions and significance of our work. We would like to reiterate that while Gaussian noise is widely used in practice, its popularity stems not from its universal optimality, but from its analytical convenience and tractability. 
However, Gaussian noise is not tailored to specific utility goals or problem constraints. In contrast, our framework provides noise distributions optimized for the specific setting at hand, leading to better privacy-utility tradeoffs. As demonstrated in the example provided in our response to Reviewer WDdd, our optimized noise achieves approximately 10\% improvement in mean squared error compared to the best of Gaussian and Laplace for the same privacy guarantee in certain settings. Furthermore, as shown in Figure 1 of our paper, our method yields about a 5\% improvement in $\varepsilon$ for the same quadratic cost—a considerable gain when working with tight privacy budgets. These results highlight that, in the right settings, our optimized noise offers tangible benefits and is a compelling alternative to standard mechanisms.
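The percentage gains quoted in this thread follow directly from the reported $\epsilon$ values (1.62 for the optimized noise versus 1.74 for Gaussian and 1.76 for Laplace at standard deviation 8, 10 compositions, $\delta = 10^{-6}$). A quick arithmetic check; the small discrepancy with the quoted 6.89% presumably reflects rounding of the $\epsilon$ values:

```python
eps_ours, eps_gaussian, eps_laplace = 1.62, 1.74, 1.76

gain_gaussian = (eps_gaussian - eps_ours) / eps_gaussian
gain_laplace = (eps_laplace - eps_ours) / eps_laplace

print(f"vs Gaussian: {gain_gaussian:.2%}")  # ~6.9%
print(f"vs Laplace:  {gain_laplace:.2%}")   # ~7.95%
```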
Summary: The authors of the paper introduce an optimization framework that optimizes the noise distribution for $\alpha$-RDP, where the optimal distribution can be obtained from a finite-dimensional convex optimization problem. Their main contribution is the proposal of optimized distributions for the moderate composition regime (the single/large cases are already covered by previous works). For their objective, they first formulate an optimization problem for **minimizing the $\alpha$-Renyi divergence under a given constraint (e.g., the variance of the distribution)**. Then, they show that the problem is convex and symmetric.

Claims And Evidence: Throughout the paper, the authors introduce their contributions very clearly. Also, the experimental results clearly show the superiority compared to the Laplace and Gaussian mechanisms.

Methods And Evaluation Criteria: Since the aim is to minimize the value of $\epsilon$ for a given value of $\delta$ and variance bound $\sigma$, the evaluation criteria are proper.

Theoretical Claims: The reviewer checked the correctness of the proofs of the theoretical claims for Theorem 3.1 and Proposition 3.5.

Experimental Designs Or Analyses: The reviewer has a question about the experimental design, especially the baselines. Please refer to the "Questions For Authors" section.

Supplementary Material: I reviewed Appendices A and B.

Relation To Broader Scientific Literature: Previously, noise distribution optimization for differential privacy has been discussed for the single-composition and large-composition regimes, where the optimal strategies are the Staircase and Cactus mechanisms, respectively. The authors propose a general approach that can be used for all regimes.

Essential References Not Discussed: None

Other Strengths And Weaknesses: Throughout the paper, the authors clearly introduce the novelty of their work, as well as their mathematical contributions. The preliminaries part is well-written. The idea of min-max optimization is interesting, and the approach provided in this manuscript is novel.

Other Comments Or Suggestions: Here are my minor concerns; please refer to the next section for more detailed reviews.

- In the preliminaries (lines 120-130), the definition of $\sim$ is duplicated ($X \sim P$ for a probability distribution and $d \sim d'$ for neighboring datasets).
- After Eq. (11), the authors need to explain the constraint with an example (if $c(x)=x^2$, the constraint limits the variance of the additive noise).

Questions For Authors:

- In the contribution parts, the authors mentioned that "the algorithm recovers as special cases noise distributions that are known to be optimal in different regimes, such as the Staircase and Cactus mechanisms." However, the reviewer cannot find the related experimental results.
- In the numerical results, the authors said that for a small number of compositions Laplace is close to optimal, and Gaussian is close to optimal for a large number of compositions, although the optimal ones would be the Staircase and Cactus distributions. Can the authors provide experimental results adding these two baselines (Staircase and Cactus)?
- Are there any additional practical examples regarding moderate composition regimes other than the U.S. Census Bureau?
- Please better introduce the relation between the authors' work and machine learning. For example, how can the moderate composition regime be used in the **machine learning** field?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to thank the reviewer for acknowledging the novelty of our work and its mathematical contributions. Our responses to the concerns are provided below. **Q1) In the preliminaries (lines 120-130), the definition of $\sim$ is duplicated ($X \sim P$ for a probability distribution and $d \sim d'$ for neighboring datasets). After Eq. (11), the authors need to explain the constraint with an example (if $c(x)=x^2$, the constraint limits the variance of additive noise).** We thank the reviewer for pointing out the duplication in the preliminaries and the need for further clarification on the constraint after Eq. (11). We will include an example to clarify the constraint and address the duplication in the final version of the paper. **Q2) In the contribution parts, the authors mentioned that “the algorithm recovers as special cases noise distributions that are known to be optimal in different regimes, such as the Staircase and Cactus mechanisms.” However, the reviewer cannot find the related experimental results.** We thank the reviewer for their valuable feedback. Our distribution class is rich enough that it can closely approximate either the Staircase mechanism or the Cactus mechanism. Thus, by the nature of the optimization problem, our resulting distribution will always be at least as good as either mechanism. Specifically, the Cactus mechanism (which is optimal when the number of compositions tends to infinity) is derived by minimizing the KL-divergence, which corresponds to the limiting case of Renyi DP (RDP) as $\alpha$ approaches 1. On the other hand, the RDP of order $\alpha=\infty$ is exactly pure DP, and our optimization will yield the Staircase mechanism as the optimal noise distribution. As we explained in the introduction, the optimal $\alpha$ value, chosen through the moments accountant formula (used in our algorithm), is closely linked to the number of compositions.
This method effectively determines a large $\alpha$ for a single composition with $\delta$ of 0 (pure DP), and an $\alpha$ close to 1 for larger composition scenarios, leading to the Staircase and Cactus distributions as special cases. In the final version, we will include additional figures comparing the noise distributions derived from the settings optimal for Staircase and Cactus, and show how our approach recovers these distributions. **Q3) In the numerical results, the authors said that for a small number of compositions Laplace is close to optimal, and Gaussian is close to optimal for a large number of compositions, although the optimal ones would be the Staircase and Cactus distributions. Can the authors provide experimental results adding these two baselines (Staircase and Cactus)?** We appreciate the reviewer’s suggestion to compare against these baselines. While we did not include them in the current experimental results (as our plots primarily focused on settings with a non-zero $\delta$ and moderate composition, where Gaussian and Laplace noise were the better choices and Staircase and Cactus were not optimal), we will include detailed plots in the final version to compare the noise obtained from our method with the Staircase and Cactus baselines. **Q4, Q5) Are there any additional practical examples regarding moderate composition regimes other than the U.S. Census Bureau? Please better introduce the relation between the authors’ work and machine learning. For example, how can the moderate composition regime be used in the machine learning field?** We discussed potential datasets for moderate composition regimes in the rebuttal to Reviewer WDdd (Response to Q2) and provided an example in Response to Q3, demonstrating the improvement on a real-world dataset when using our noise. Due to space constraints, we kindly ask the reviewer to refer to those sections.
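The claimed link between the optimal Renyi order $\alpha$ and the number of compositions can be illustrated with the Gaussian mechanism, whose RDP is available in closed form ($\varepsilon_\alpha = \alpha \Delta^2 / (2\sigma^2)$). The sketch below is a standalone illustration using the standard RDP-to-$(\epsilon, \delta)$ conversion, not the paper's algorithm or its Connect-the-Dots accounting:

```python
import numpy as np

def gaussian_eps(k, sigma, delta, sensitivity=1.0):
    """Best (eps, alpha) for k-fold composition of the Gaussian mechanism via
    RDP: eps = k * alpha * Delta^2 / (2 sigma^2) + log(1/delta) / (alpha - 1)."""
    alphas = np.arange(1.01, 200.0, 0.01)
    eps = k * alphas * sensitivity**2 / (2 * sigma**2) + np.log(1 / delta) / (alphas - 1)
    i = int(np.argmin(eps))
    return float(eps[i]), float(alphas[i])

# The optimizing alpha shrinks toward 1 as the number of compositions grows,
# mirroring the Staircase (alpha large) vs. Cactus (alpha -> 1) regimes.
for k in (1, 10, 1000):
    eps, alpha = gaussian_eps(k, sigma=5.0, delta=1e-6)
    print(f"k={k:4d}  eps={eps:6.3f}  best alpha={alpha:.2f}")
```

Running this shows the optimizing $\alpha$ dropping from roughly 27 at a single composition to below 2 at 1000 compositions, in line with the regimes described above.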
Summary: The paper proposes a novel framework for optimizing noise distributions for $(\epsilon, \delta)$-DP using the Renyi differential privacy formulation. Experiments are shown to showcase the benefits of the approach.

Overall: The paper is easy to follow and the main results are well laid out. The experimental results are not particularly convincing in terms of the practicality of the proposed approach.

Claims And Evidence: Yes

Methods And Evaluation Criteria: Yes and no, see below.

Theoretical Claims: Yes, Proposition 3.5 and Theorem 3.6

Experimental Designs Or Analyses: Yes, check below.

Supplementary Material: I skimmed through the material

Relation To Broader Scientific Literature: See below

Essential References Not Discussed: They are mostly covered

Other Strengths And Weaknesses:

Pros: (i) The setting is highly interesting and it answers the question of how to select the optimal distribution for the specific problem at hand. (ii) The utilization of the Renyi DP formulation and the subsequent min-max problem with the proposed convex optimization/solver are interesting. In particular, each of these is rigorously proved by the presented theorems/propositions. (iii) The experiments show how the optimization finds the best of the Gaussian and Laplace regimes and in some cases is better than both.

Cons: (a) The main issue is that the experiments do not convey the power of the proposed approach, and except for very narrow regimes, it would be okay to use the Gaussian or Laplace mechanism directly. (b) The experimental setting is pretty limited and it would be nicer to see further results on the ML/DL models referenced in the introduction, which would have a substantial number of compositions beyond the 10 shown here.

Other Comments Or Suggestions: See below

Questions For Authors: (1) How do you think this framework would apply to real-world applications? Will there be substantial gains over the Gaussian or Laplace distributions?

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: We first would like to thank the reviewer for recognizing the novelty of our work and for their constructive comments. Below, we provide a detailed response to their concerns. **Q1) The main issue is that the experiments do not convey the power of the proposed approach and except for very narrow regimes, it would be okay to use the Gaussian or the Laplacian setting directly.** We appreciate the reviewer's comment. While Gaussian noise is provably optimal only in the asymptotic setting as sensitivity approaches zero, no such proof exists for Laplace noise in general. Our framework allows us to compute the optimal noise distribution for any setting, and by comparing $(\epsilon, \delta)$ guarantees, we can now see that Gaussian and Laplace noise are near-optimal in certain cases. This insight is a direct outcome of our approach. It should be emphasized that, without our approach, it is not clear when it is best to choose Gaussian versus Laplace. Rather than replacing Gaussian or Laplace noise universally, our method provides a principled way to determine when to use them versus our optimized distribution. Figure 2 provides a handy guide that tells a practitioner exactly what to do based on their setting. In the moderate composition regime, highlighted in Figure 2, the improvement achieved by our approach compared to classical distributions is nontrivial. For example, Figure 1 illustrates a setting in which our approach improves the epsilon value by more than 5%. For those on a tight privacy budget, this is significant.
**Q2) The experimental setting is pretty limited and it would be nicer to see further results on ML/DL models referenced in the introduction which would have a substantial number of compositions beyond 10 that are shown here.** While modern ML often focuses on high-dimensional datasets, where training with DP guarantees requires a high number of compositions, developing predictive algorithms for tabular datasets has always been, and continues to be, a key part of the ML repertoire. In particular, sharing statistics for a restricted number of compositions in a private manner is a key application of DP. Some tabular datasets come with predefined SQL queries or are used in environments where a limited number of queries is the norm. For example, the U.S. Census Bureau enforces pre-approved aggregate queries to safeguard sensitive data. Similarly, health datasets like NHANES and the Medical Cost Personal Dataset restrict researchers to approved statistical queries, such as calculating the average medical charges for specific age groups. These constraints highlight that in many real-world scenarios, a small number of queries (e.g., 10-20) is both standard and necessary, reinforcing the practical value of optimized noise mechanisms tailored for such settings. **Q3) How do you think this framework would apply to real-world applications? Will there be substantial gains over Gaussian or Laplacian distributions?** As mentioned above, there are many practical datasets for which a small number of compositions is meaningful. To demonstrate the superior performance of our noise mechanism, **we have an additional experiment that we performed for this review;** it involves 10 queries on the Medical Cost Personal Dataset. These queries include calculating the average medical charges for specific age groups, total charges by smoker status, average BMI by region, and other similar aggregate statistics. We target the $(\epsilon, \delta) = (2.83, 10^{-6})$ setting. 
As illustrated in Figure 1 of our paper, Laplace noise achieves this with a standard deviation of 5, outperforming Gaussian noise. To achieve the same $\epsilon$ of 2.83, our noise mechanism requires a **lower standard deviation** of 4.73. To highlight the utility of our mechanism for the abovementioned dataset, we use the metric of MMSE (minimum mean squared error) as follows: we compute the empirical average (over 10 queries, repeated 100k times) of the squared difference of the true query and the clipped+noisy DP query output. The resulting average MMSE for Laplace noise is 24.98, while our optimized noise achieves an MMSE of 22.37, resulting in a 10.4% improvement. If accepted, we will include these results in the final paper. We hope this example highlights the value of our work and its practical advantages in real-world machine learning applications. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the detailed rebuttal and it clarified the results further. I have read the responses to mine and other comments and for now will keep my score. It's an interesting result for sure.
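The noise-only part of the MMSE comparison described in this rebuttal is easy to reproduce with a quick Monte Carlo estimate. The sketch below is not the authors' pipeline (it ignores clipping and the actual dataset); it only checks that Laplace noise with standard deviation 5 contributes an expected squared error of about 25 per query:

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_noise_mse(noise_std, n_queries=10, n_trials=100_000):
    """Monte Carlo estimate of the expected squared error the noise adds per query."""
    # A Laplace distribution with standard deviation sigma has scale b = sigma / sqrt(2).
    scale = noise_std / np.sqrt(2)
    noise = rng.laplace(loc=0.0, scale=scale, size=(n_trials, n_queries))
    # The true query value cancels out: E[(true + noise - true)^2] = E[noise^2].
    return float(np.mean(noise ** 2))

mse = empirical_noise_mse(5.0)  # should be close to the noise variance, 5^2 = 25
```

This also makes the reported 10.4% improvement plausible: the optimized noise needs a smaller standard deviation (4.73 vs. 5) for the same $\epsilon$, and $4.73^2 \approx 22.4$, consistent with the MMSE values quoted above.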
Beyond the Permutation Symmetry of Transformers: The Role of Rotation for Model Fusion
Accept (spotlight poster)
Summary: In this paper, the authors identify a neural network (NN) parameter symmetry beyond the well-studied permutation symmetry. In particular, they show that the weights of self-attention layers are governed by *rotation symmetry*, i.e. one can transform the query, key, value, output matrices by appropriate rotation matrices/their transposes and maintain the function computed by the attention mechanism intact. They proceed by leveraging this symmetry to improve model fusion for self-attention layers. They devise a new parameter matching algorithm by showing that the optimisation problem of aligning self-attention weights up to rotation admits a closed form solution that requires solving an eigendecomposition problem. They additionally enhance their matching algorithm by another step that accounts for weight scaling symmetries, while in the case of a Transformer, the MLPs are aligned using the parameter matching algorithm of Ainsworth et al., 2023. The method is experimentally tested on fusing language and vision transformers, showcasing improved downstream task performance compared to simple (averaging) fusion and other baselines. Claims And Evidence: - The main claim of the paper is that taking rotation symmetry into account when aligning self-attention weights can improve the downstream performance of fused Transformer models. Although experimentally this generally seems to be true, I think that the authors have overclaimed in their abstract by mentioning that their "matching algorithm substantially improves model fusion", as this is not clear from the experimental results (Tables 1 and 2). Specifically, in several cases the matching algorithm seems to be on par with, or only slightly improve upon, simple fusion; therefore the word "substantially" does not seem to follow from the results. - Additionally, the authors mention multiple times that they "introduce rotation symmetry" (e.g. 
Contribution 1), which I believe is misleading, as it has been discussed before in the work of Tran et al., 2024 for weight space learning (neural functionals). I think the phrasing needs to be changed in the text to clarify that rotation symmetry is not a contribution, rather it is taken advantage of to improve model fusion. Methods And Evaluation Criteria: - **Practicality**. The proposed methodology for self-attention weight alignment is technically sound and can be easily implemented in practice. Additionally, I appreciated the fact that the authors include in their framework the MLP alignment, as well as the scaling symmetries of self-attention weights, making their approach practical for fusion of real-world Transformers. - See Experimental Designs Or Analyses for my comments on empirical evaluation. Theoretical Claims: The optimal solution of the matching algorithm, even though it seems straightforward to derive, is an important result and strengthens the credibility of the algorithm (recall that MLP permutation matching algorithms do not admit optimal closed-form solutions beyond 2 layers). The proof is concise and seems correct. Experimental Designs Or Analyses: - **Weakness**. The experimental section provides at least partial evidence for the claims. However, it was unclear to me how several experimental design choices were made. For example, - Why did the authors choose these particular Transformers (RoBERTa, DeBERTa, ViT)? How would model fusion behave on larger models? Have the authors considered extending their experimental section with more recent architectures, or architectures from different domains? I do not intend to imply that this is necessary, but rather that it should be made clear why those particular choices were made. - Similarly, how did the authors choose these particular baselines (apart from the obvious simple averaging one)? - Why do the authors match only the self-attention layers and do not perform merging on the classifier? 
Perhaps an ablation study on that would help. - Would it be possible to use a variant of the matching algorithm to match more than two models, akin to model soups? This would further strengthen the impact of this work. - How does each component of the overall matching algorithm contribute to the resulting downstream performance (e.g. permutation matching of MLPs compared to rotation matching of self-attention)? Supplementary Material: The entire SM was reviewed. Relation To Broader Scientific Literature: Although I have not followed the entire literature on model merging, the paper is mostly well-contextualised: it mentions naive (non-symmetry) merging algorithms and compares against them, while it also discusses permutation symmetry matching, which is the most well-studied one. Essential References Not Discussed: Regarding permutation symmetry matching, I believe that the paper misses two recent algorithms that improve upon Ainsworth et al., 2023: - Peña et al., Re-basin via implicit Sinkhorn differentiation, CVPR'23 - Navon et al., Equivariant Deep Weight Space Alignment, ICML'24. Additionally, regarding weight space symmetries, I believe that the authors should have dedicated more space to mentioning the ongoing efforts on neural functionals/metanetworks (e.g. Navon et al., ICML'23, Zhou et al., NeurIPS'23, Lim et al., ICLR'24 etc.), while they have missed two important references on scaling symmetries (which is something that is taken into account in the current work): - Kalogeropoulos et al., Scale Equivariant Graph Metanetworks, NeurIPS'25 - Godfrey et al., NeurIPS'22: this is mentioned, but not in the paragraph concerning scaling symmetries. Other Strengths And Weaknesses: **Strengths** - *Importance/Significance*. With the growing availability of trained models, fusing the knowledge embedded in their weights is arguably an important problem, as it reduces the need to train new models (potentially bigger and potentially on new datasets). 
Since Transformers are currently one of the most popular NN technologies, designing an improved and specialised fusion algorithm for them is a crucial step. - *Novelty*. Although the rotation symmetry of Transformers has already been studied, it has not been examined in the context of model fusion. To the best of my knowledge, the rotation symmetry parameter matching algorithm provided by the authors is novel. **Weaknesses** - As previously mentioned, the main weaknesses are that some claims need to be modified and that the experimental choices need to be explained more thoroughly, and potentially extended. Other Comments Or Suggestions: - Questions For Authors: Additional questions that I believe need to be discussed in the paper: 1) When the authors take scaling into account, it seems to me that the way it is done breaks the optimality of the matching algorithm. Could the authors comment on that? Would it be possible to devise a matching algorithm that is optimal w.r.t. both scaling and rotation symmetries? 2) Would it be possible to take into account the symmetries of the rest of the transformer components: e.g. LayerNorm and Softmax? 3) Are rotation symmetries the only ones in self-attention layers? Perhaps there exists a larger group of transformations to which the layer is invariant. Could the authors discuss this? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your constructive feedback. We will include additional results and discussions in the revision. We believe that our paper will be much stronger thanks to your efforts. **Response to claims**: We thank the reviewer for the suggestion, and we will adjust the wording in the next version of our manuscript. Additionally, we would like to clarify that we have included [1] as a concurrent work in the discussion. **Response to W1**: We thank the reviewer for this suggestion. Our backbone model selection follows previous studies on model fusion [2,3]. Specifically, we deliberately selected models representing different transformer architectures. While experiments on larger models would indeed be valuable, they would require substantially more computing resources. Due to computing resource constraints, we focused on models that could be effectively studied within our available infrastructure. **Response to W2**: For clarity, the three model fusion methods (Fisher, Regmean, and OT) are not baselines in our evaluation, but rather backbone methods that we enhance with our parameter matching algorithm. To the best of our knowledge, our work represents the first parameter matching algorithm specifically designed to enhance model fusion performance. We specifically selected these methods because they achieve desirable performance and provide a diverse set of merging strategies to demonstrate the general applicability of our matching algorithm. **Response to W3**: We would first like to clarify that we follow previous studies [2] to leave classification heads unmerged. Unlike attention layers that capture generalizable patterns, classifier layer parameters are highly task-specific in nature. Even minor modifications to downstream tasks (e.g., altering label orders in classification tasks or targeting different output distributions) result in entirely different optimal parameter values for classifier heads. 
This task-specificity makes merging classifier heads conceptually unsound without task alignment information. **Response to W4**: Please refer to our response to reviewer **ydmT**. **Response to W5**: Thank you for this insightful question that helps clarify component contributions. We conducted an ablation study isolating the effects of matching different components:

|           | Fisher    | Regmean   | OT        |
| --------- | --------- | --------- | --------- |
| FFN-only  | 12.21     | 11.89     | 32.08     |
| Attn-only | **20.21** | 12.94     | 28.66     |
| FFN+Attn  | 18.61     | **15.31** | **32.50** |

For Fisher and RegMean, attention matching provides more contributions to model fusion. For OTFusion, FFN matching provides more contributions. These differences highlight how the underlying fusion method interacts with component matching. We will include the results in our revised manuscript. **Response to references**: We are committed to adding the suggested references to Related Work of our next revision. **Response to Q1**: We thank the reviewer for this thoughtful question. You raise an important theoretical point about joint optimization of rotation and scaling. It's true that our sequential approach (first optimizing rotation $R$, then scaling $\alpha$) cannot guarantee global optimality for the joint ($R$, $\alpha$) optimization problem. Proving that sequential optimization yields the global optimum would be non-trivial, as calculating the general solution of Eq. (12) without knowing the specific values of matrices is complex. We can view our approach (sequentially optimizing $R$ and $\alpha$) as a practical approximation to the joint optimization problem. Developing an algorithm that jointly optimizes for both rotation and scaling symmetries would be an interesting extension to our method, though it would likely come with increased computational complexity. Our current method balances theoretical soundness with computational efficiency. 
**Response to Q2**: We believe there might be no symmetries for Softmax and LayerNorm that require matching since these modules do not contain parameter matrices to be rotated. **Response to Q3**: We thank the reviewer for this thoughtful question. While our work focuses on rotation symmetries, we acknowledge that transformers may exhibit additional symmetries beyond the permutation, rotation, and scaling transformations we address. The transformer architecture, with its complex interplay of components, likely possesses a rich symmetry structure that extends beyond what we have explored. Characterizing the complete symmetry group of transformers remains an open question. We identify this as an important direction for future research. [1] Tran, Viet-Hoang, et al. "Equivariant Neural Functional Networks for Transformers." arXiv 2024. [2] Jin, Xisen, et al. "Dataless knowledge fusion by merging weights of language models." ICLR 2023 [3] Imfeld, Moritz, et al. "Transformer fusion with optimal transport." ICLR 2024 --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. - Quick clarification: softmax and normalization layers do induce symmetries (translation and scaling resp., see Neural Mechanics: Symmetry and Broken Conservation Laws in Deep Learning Dynamics, Kunin et al., ICLR'21) that I would expect to affect the overall symmetry group of the Transformer weights (beyond rotations). In addition, if I am not mistaken the rotation matrices can be extended to arbitrary invertible matrices (general linear group). I think the authors should study those more thoroughly and discuss this observation in their paper - perhaps as a limitation/room for improvement, i.e. that not all symmetries are considered for parameter matching. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for these insightful follow-up comments. Regarding softmax and normalization layers, we appreciate you bringing this related work [1] to our attention. 
We agree that the geometric properties of these components indeed enlarge the symmetry set of previous model layers. For the scope of rotation symmetry, we agree that the orthogonal matrices constraint can be extended to general invertible matrices as in our previous response to reviewer **QTZj**, but it comes with practical challenges in parameter matching, including higher computational complexity and suboptimal solution. Please refer to our response to reviewer **QTZj** for more details. Both of these points represent important supplements to the scope of parameter-space symmetries in transformers. We commit to adding a separate limitations section to discuss these broader symmetry considerations and corresponding challenges for future studies. [1] Kunin, Daniel, et al. "Neural Mechanics: Symmetry and Broken Conservation Laws in Deep Learning Dynamics." ICLR 2021
Summary: The paper studies transformer parameter symmetries. Specifically, it explains how to weight space average two attention layers modulo not only permutation symmetries but also rotation symmetries. Experimental results show that considering this extra symmetry leads to better alignment between different trained transformers. ## update after rebuttal The authors have reasonably addressed my concerns, and so I increased my rating to 4. Claims And Evidence: Most claims are well supported. Methods And Evaluation Criteria: The benchmarks seem to align with prior work. Theoretical Claims: I haven't checked the proofs. The results seem plausible. Experimental Designs Or Analyses: The experimental design seems reasonable. Supplementary Material: I have not reviewed the supp mat. Relation To Broader Scientific Literature: The paper considers a further symmetry than prior work. I am not aware of prior work exploring rotation symmetries in attention layers. Essential References Not Discussed: N/A Other Strengths And Weaknesses: 1. The paper considers rotation symmetries and also rotation symmetries + scaling. One could also straightforwardly consider invertible linear transformations $A$ in general since $QA A^{-1} K^T = QK^T$. Other Comments Or Suggestions: 1. Include the baseline scores in the tables. I.e the scores of the models that are merged. 2. Comment on the fact that CIFAR10 accuracies are extremely low after merging. (30%) Questions For Authors: 1. Is there any intuition/reason as to why the merged networks do not perform well? E.g. 30% accuracy on CIFAR10 sounds very low. 2. Would it be possible to extend to general invertible linear transformations $A$ as described under "Other Strengths And Weaknesses" above? Code Of Conduct: Affirmed. Overall Recommendation: 4
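The invariance this review points out, that any invertible $A$ satisfies $QAA^{-1}K^T = QK^T$, is quick to verify numerically. The following is a minimal sketch with arbitrary placeholder dimensions and random weights (not the paper's models), checking that a shared orthogonal $R$, or more generally an invertible $A$ paired with $A^{-\top}$, leaves the attention score matrix unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16                                    # placeholder head dimension
X = rng.standard_normal((8, d))           # a batch of 8 token embeddings
W_q = rng.standard_normal((d, d))
W_k = rng.standard_normal((d, d))

scores = (X @ W_q) @ (X @ W_k).T          # raw attention scores Q K^T

# Orthogonal case: rotate W_q and W_k by the same R; R R^T = I cancels inside.
R = np.linalg.qr(rng.standard_normal((d, d)))[0]   # random orthogonal matrix
scores_rot = (X @ W_q @ R) @ (X @ W_k @ R).T

# General invertible case: W_q -> W_q A and W_k -> W_k A^{-T}, so A A^{-1} cancels.
A = rng.standard_normal((d, d))           # almost surely invertible
scores_inv = (X @ W_q @ A) @ (X @ W_k @ np.linalg.inv(A).T).T

assert np.allclose(scores, scores_rot)
assert np.allclose(scores, scores_inv)
```

The same cancellation applies to the value-output pair, which is why rotation (and more general linear) transformations preserve the attention layer's function.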
Rebuttal 1: Rebuttal: Thank you very much for your thoughtful feedback. We are honored to have this valuable chance to address your raised concerns and questions. We believe that our paper will be much stronger thanks to your efforts. **Response to comments and Q1**: We thank the reviewer for this important question. To clarify, our choice of end ViT models strictly follows previous works [1], resulting in similar performance degradation when merging models trained on different tasks (Table 1 in [1]). We attribute these low metric values to two key factors: 1. Task divergence. The end ViT models were fine-tuned on substantially different tasks, leading to specialized parameters that conflict when naively merged. 2. Non-convexity. ViTs exhibit highly non-convex loss landscapes, making naive interpolation between models challenging compared to simpler models such as MLPs. Actually, merging large models on different tasks remains challenging. This also motivates our parameter matching strategy, which improves model merging without access to any training data. We are committed to including the metric values of the end models in the next version of our paper. **Response to Q2**: We thank the reviewer for providing this insightful comment. We find that all orthogonal matrices in the rotation symmetry can be directly replaced by invertible matrices without any obstacle. However, invertible matrices are not applicable for parameter matching. In our parameter matching algorithm, orthogonality is an essential premise for Theorem 1. With invertible matrices, Eq. (9) can only be solved by gradient descent, losing the guarantee of a global optimum. In addition, parameter matching requires computation of the inverse of every $R$, which is impractical for general invertible matrices. In contrast, the inverse of an orthogonal matrix is just its transpose. 
In summary, if we only consider the rotation symmetry for transformers, then yes, the orthogonal matrices in the rotation symmetry can be replaced by invertible matrices without obstruction; whereas if we consider parameter matching based on rotation symmetry, then no, using invertible matrices results in suboptimal parameter matching and high computational complexity. We extend our best gratitude for your efforts in the rebuttal phase. We highly value the opportunity to improve our paper. We will include the additional results and discussions in the next version of our paper. We sincerely appreciate your time and consideration. [1] Imfeld, Moritz, et al. "Transformer fusion with optimal transport." ICLR 2024 --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. > Task divergence. The end ViT models were fine-tuned on substantially different tasks, leading to specialized parameters that conflict when naively merged. Are not both ViT models trained on CIFAR10? That was my interpretation of [1]. Quote from that paper: > First, we train individual models from scratch on each dataset until convergence. We ensure model diversity by initializing each model with different seed values and different batch randomization. This results in unique models with similar performance but located in diverse parts of the landscape, and whose suitable fusion can improve performance. Further, I would like to ask what the corresponding table in that paper is to the experiments in this paper. Finally, is there a reason for not including the best method from that paper? It seems like "OT-ACTS" outperforms "OT-ACTS (EMD)". --- Reply to Comment 1.1.1: Comment: We thank the reviewer for these additional concerns. Regarding the factors affecting ViT performance: After revisiting the paper [1], we acknowledge our misunderstanding of the experimental settings and agree that the end models were trained on the same dataset with different random seeds and batch selections. 
In this case, the highly non-convex nature of ViT remains the primary explanation for the low performance of model fusion. Additionally, task divergence remains relevant for explaining challenges in our language model fusion settings. Regarding your question about the corresponding table: Our ViT experimental settings align with **Table 1** in [1]. For your question about baseline selections, we selected OT-ACTS (EMD) as our baseline despite OT-ACTS showing better performance because hard alignment guarantees functional equivalence of the matched model, making it a feasible baseline for both model fusion (in Table 2 of our paper) and parameter matching (in Figure 3 of our paper). To address your concern about including the best methods, we have conducted additional experiments using our parameter matching approach with OT-Fusion variants (including soft alignment):

|                  | OT-ACTS (EMD) | OT-WTS    | OT-ACTS   |
| ---------------- | ------------- | --------- | --------- |
| w/o Match        | 32.08         | 57.11     | 61.15     |
| FFN-only         | 32.08         | 57.11     | 61.15     |
| Attn-only        | 28.66         | 56.16     | 60.07     |
| FFN+Attn         | 32.50         | 57.16     | 61.23     |
| FFN+Attn (scale) | **32.53**     | **57.17** | **61.25** |

These results demonstrate that our parameter matching approach consistently improves performance across all model fusion methods. We commit to including these additional results in the next version of our paper. We sincerely appreciate your careful examination of our work and your efforts in improving our paper. [1] Imfeld, Moritz, et al. "Transformer Fusion with Optimal Transport." ICLR 2024
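The closed-form solvability discussed in this thread (an orthogonal constraint yields a one-SVD solution, and the inverse is just the transpose) is the classic orthogonal Procrustes structure. The sketch below illustrates that generic structure on random placeholder weights; it is not the paper's exact Eq. (9) objective:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 32  # placeholder weight dimension

def procrustes_rotation(W_src, W_tgt):
    """Orthogonal R minimizing ||W_src @ R - W_tgt||_F, via a single SVD."""
    U, _, Vt = np.linalg.svd(W_src.T @ W_tgt)
    return U @ Vt

# Build a target that is an exact rotation of the source, then recover the rotation.
W_src = rng.standard_normal((d, d))
R_true = np.linalg.qr(rng.standard_normal((d, d)))[0]  # random orthogonal matrix
W_tgt = W_src @ R_true

R_hat = procrustes_rotation(W_src, W_tgt)
assert np.allclose(R_hat, R_true)               # exact recovery in the noiseless case
assert np.allclose(R_hat.T @ R_hat, np.eye(d))  # the inverse is just the transpose
```

With a general invertible transformation instead of an orthogonal one, no such closed form applies and the objective would need iterative optimization, which is the trade-off the authors describe.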
Summary: The paper introduces rotation symmetry in transformers, extending permutation symmetry from discrete to continuous spaces. It demonstrates theoretically and empirically that rotating query-key and value-output parameter matrices preserves functional equivalence. The main contribution is a theoretically optimal algorithm for matching parameters during model fusion, significantly enhancing performance across NLP and vision benchmarks. Claims And Evidence: The main claim is that rotation symmetry improves transformer model fusion by reducing distances between parameter sets. This is convincingly supported by extensive empirical evidence showing consistent improvement across multiple fusion methods, transformer architectures, and tasks. Methods And Evaluation Criteria: The methods and evaluation criteria (real-world NLP and vision tasks, including Emotion, NER, GLUE benchmarks, CIFAR-10) are appropriate, realistic, and convincingly aligned with demonstrating the advantages of their proposed method. Theoretical Claims: The main theorem (Theorem 4.1) provides a closed-form solution for the rotation symmetry optimization problem. The proof is sound and well-structured. While the derivations in the paper are mathematically elegant, there are concerns regarding their reliance on idealized conditions. In particular, the closed-form solution (Theorem 4.1) leverages perfect orthogonality and well-conditioned eigendecompositions - assumptions that may not hold in high-dimensional, noisy parameter spaces encountered during practical training. Experimental Designs Or Analyses: Experimental designs and analyses appear sound and valid. Model fusion benchmarks, comparisons against well-established methods, and the analysis of distances in parameter space are well-executed. Supplementary Material: I reviewed Appendix A only. 
Relation To Broader Scientific Literature: This work substantially builds upon existing literature on permutation symmetry (e.g., Ainsworth et al., 2023; Entezari et al., 2022) and extends it significantly into continuous rotation symmetry for transformers, providing both theoretical novelty and practical utility. It addresses a clear gap regarding transformer-specific symmetries. Essential References Not Discussed: All related works are cited to my knowledge. Other Strengths And Weaknesses: The introduction of continuous rotation symmetry is original, theoretically insightful, and practically impactful. However, the potential computational overhead in larger transformer models or extremely high-dimensional settings might need further exploration. Other Comments Or Suggestions: N/A Questions For Authors: The correlation between reduced Euclidean distance in parameter space and improved model fusion performance is assumed to be direct. Are there references that provide a theoretical backing for this correlation? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your thoughtful feedback. We are honored to have this valuable chance to address your raised concerns and questions. We believe that our paper will be much stronger thanks to your efforts. **Response to theoretical claims**: Thank you for this thoughtful point. We would like to clarify that the perfect orthogonality isn't an assumption but rather a property that holds for any SVD. Even in noisy parameter spaces, SVD always provides orthogonal singular vectors. Any parameter matrix obtained through practical training, e.g., standard SGD, differentially private SGD, and other optimization methods, can be decomposed this way. **Response to questions**: Yes, the correlation between reduced Euclidean distance and improved model fusion performance is backed by previous studies [1]. Their theoretical results show that strong convexity and closer end models can boost the utility of direct model fusion. Additionally, experimental results in [1] and our paper support the correlation from an empirical perspective. We extend our best gratitude for your efforts in the rebuttal phase. We highly value the opportunity to improve our paper. We will include the additional results and discussions in the next version of our paper. We sincerely appreciate your time and consideration. [1] Wortsman, Mitchell, et al. "Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time." ICML 2022.
Summary: The paper extends the concept of permutation symmetry in MLPs to rotation symmetry for the self-attention layer in transformers. The authors show that due to the inherent design of self-attention layers, each of the query, key, and value vectors can be rotated without changing the functional representation and thus can be used to match two transformer models. Based on this, the authors propose a model merging algorithm for transformer-based models and empirically show that by matching two models, merging can be improved. Claims And Evidence: Most of the claims made in the paper are well-supported; however, the authors suggest that the proposed matching algorithm can be used to merge multiple models, though they only experiment with merging two models. Similar work for CNNs suggests that it may not be possible to merge more than two models simultaneously [1]. [1]. Sharma et al., Simultaneous Linear Connectivity of Neural Networks Modulo Permutation Methods And Evaluation Criteria: Yes, the method is evaluated correctly with the relevant benchmark datasets. Theoretical Claims: Yes, the theoretical claims are correct and easy to understand. Experimental Designs Or Analyses: 1. I think the authors should show that Linear Mode Connectivity improves after the merging. 2. Moreover, it is not clear to me if the fine-tuned models will move out of the loss basin of the original pre-trained models. This could also explain why matching only the first few layers works, as observed by the authors. Supplementary Material: Yes, the derivations for the matching algorithm (part a) Relation To Broader Scientific Literature: The paper introduces a new concept of symmetry --- rotational symmetry --- which could match transformers. Previous work has only looked into the permutation symmetry for MLP/CNNs, which limited its application to transformers. However, rotational symmetry is a more general symmetry, which can be used with transformers. 
The authors theoretically explain and show how to obtain this rotation symmetry for transformers; this could spur future research. Essential References Not Discussed: Related works are cited and discussed. Other Strengths And Weaknesses: ### Strengths: 1. The paper is well-written and easy to follow. 2. The paper studies an important problem of symmetry for transformer-based models and successfully extends the previous work to transformer models. 3. The closed-form solution for obtaining $R$ is interesting. 4. Experiments are well-designed, and empirical results demonstrate the efficacy of the method. ### Weaknesses: 1. The models are fine-tuned, so they may already be in the same basin. As suggested earlier, study the LMC before and after the merging. 2. Experiments on merging more than two models need to be included, or the claim needs to be readjusted otherwise. Other Comments Or Suggestions: Line 226 >and can be solved precisely by Hungarian Algorithm (Martello & Toth, 1987). I believe weight matching and activation matching give an approximate solution; I would refrain from using the word precisely. Questions For Authors: Why do you think fine-tuning the models will move them outside the loss basin of the original model? Previous work suggests they remain in the same basin [2]. [2]. Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your thoughtful feedback. We are honored to have this valuable chance to address your raised concerns and questions. We believe that our paper will be much stronger thanks to your efforts. **Response to claims and W2**: We thank the reviewer for this insightful suggestion. In the two-model case, the optimality of parameter matching does not rely on the choice of anchor models. However, this might not hold for multiple models. When merging multiple end models, a naive extension of our method is to select a single model as an anchor and align all others to it via rotation symmetry, as done in pairwise merging. However, the optimality of this approach depends on how the overall distance measure is defined. An alternative approach could involve iterative pairwise merging, but this introduces path dependency issues where the final result depends on the order of merging. We agree that finding optimal alignments across multiple models is an interesting and important direction and plan to explore it in future work. We will make corresponding adjustments to the claims in the revision of our paper. **Response to experiments 1**: Thank you for this valuable suggestion. We've computed the LMC curves of the ViT model with and without our proposed matching technique on the CIFAR-10 dataset. Our analysis reveals two key observations: 1. For most interpolation coefficients $\lambda\in[0,1]$, models merged with our matching approach exhibit lower loss values than unmatched models, indicating improved connectivity. 2. We observe a loss barrier from the LMC curve between the end models used in our experiments. We attribute this to the strong non-convexity of the ViT loss landscape [1]. Similar results are recorded in previous works [2]. We will include detailed LMC results in our revised manuscript to fully illustrate these findings. **Response to experiments 2 and W1**: We thank the reviewer for this insightful question. 
We've analyzed the LMC curves between end models fine-tuned on different NLP tasks and observed that all model pairs (in our settings) exhibit positive loss barriers, confirming that fine-tuning often does move models into different loss basins. However, the magnitude of these barriers varies across tasks. Importantly, we observe positive loss barriers between the non-anchor original model and its matched counterpart. This provides evidence that our matching technique achieves model rebasin through rotation symmetry. Regarding why matching only the first few layers is often sufficient: this suggests that rotational symmetry divergence primarily occurs in early layers during fine-tuning, while later layers maintain more consistent representations. This aligns with observations that early layers capture more task-specific features [3]. We acknowledge that precisely characterizing loss basin distributions remains an open problem due to the highly non-convex nature of transformer loss landscapes. Even for simpler architectures like MLPs, loss barrier analysis remains challenging. We believe this is a valuable and insightful direction for future research. **Response to comments**: We believe there might be a misunderstanding regarding this sentence. In our manuscript, we mentioned that the **linear assignment problem** can be solved precisely by the Hungarian Algorithm, not the weight matching or activation matching problems themselves. **Response to questions**: We thank the reviewer for this important point. To our knowledge, our experimental setting differs from that of the model soup paper [4]. The model soup paper primarily studies merging models trained on **the same dataset** with different hyperparameters, where models likely remain in the same loss basin as they optimize for the same objective. In contrast, our experiments merge models fine-tuned on different datasets. Our LMC analysis confirms these models occupy different loss basins, as evidenced by positive loss barriers. 
We extend our best gratitude for your efforts in the rebuttal phase. We highly value the opportunity to improve our paper. We will include the additional results and discussions in the next version of our paper. We sincerely appreciate your time and consideration. [1] Park, Namuk, and Songkuk Kim. "How Do Vision Transformers Work?." ICLR 2022. [2] Imfeld, Moritz, et al. "Transformer fusion with optimal transport." ICLR 2024 [3] Tenney, Ian, Dipanjan Das, and Ellie Pavlick. "BERT Rediscovers the Classical NLP Pipeline." ACL 2019. [4] Wortsman, Mitchell, et al. "Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time." ICML 2022. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response and for running additional experiments. **Analysis of LMC** Please add a detailed analysis of LMC for different tasks in the final version. LMC gives a better idea of about the effectiveness of matching. I really enjoyed reading the paper; matching transformers with rotational symmetry could be useful for many applications, and a lot of work done on MLP/ConvNets with permutation matching can be extended to transformers with rotational symmetry. I recommend accepting the paper! Edit: Updated the score! --- Reply to Comment 1.1.1: Comment: We are delighted to learn that our rebuttal addressed your concerns. We commit to adding a detailed analysis of the LMC results for different tasks in the final version, including all discussions during the rebuttal phase. In light of your further feedback, we respectfully hope you can consider updating the overall recommendation of our paper. Thank you again for your constructive feedback and dedicated efforts in improving our paper. Best, Authors
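On the rebuttal's point about exact solvability: the linear assignment subproblem (though not the overall weight/activation matching objective) can be solved to global optimality with a Hungarian-style solver. A hedged sketch with synthetic costs, assuming SciPy is available (the cost matrix here is made up for illustration):

```python
from itertools import permutations

import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
# Hypothetical cost of matching unit i of model A to unit j of model B
# (e.g. negative activation correlation); values here are synthetic.
cost = rng.normal(size=(6, 6))

row_ind, col_ind = linear_sum_assignment(cost)  # Hungarian-style exact solver
best = cost[row_ind, col_ind].sum()

# Brute force over all 6! assignments confirms the solution is exact.
brute = min(cost[np.arange(6), list(p)].sum() for p in permutations(range(6)))
assert np.isclose(best, brute)
```

The approximation the reviewer mentions enters one level up: the choice of cost matrix (from weights or activations) is a heuristic proxy, even though each assignment problem it induces is solved exactly.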
What Makes a Good Feedforward Computational Graph?
Accept (poster)
Summary: The authors are motivated by the recent surge of feedforward networks, and analyze the underlying computational graphs. Their core question is: What characterizes a “good” computational graph? To address this, they propose two metrics: a) Mixing time: Assesses how quickly information from various nodes reaches a designated target node, and b) Minimax fidelity: Evaluates whether information vanishes along the paths from input nodes to the target node (i.e., whether bottlenecks exist). They study different random graph models under these metrics, focusing on their asymptotic behavior. They then train a GAT on graphs derived from these models to empirically illustrate some aspects of their theoretical insights. Claims And Evidence: - Although the paper is motivated by feedforward networks, after the initial setup, there is little tangible connection between these metrics and actual modern neural network architectures. The authors’ own references to empirical performance remain vague, for instance: “As having many paths seems useful for efficient data propagation, mixing time seems a good measure...” - Yet such intuition does not necessarily hold for widely used architectures like deep MLPs or Transformers, which can often be represented by combinations of bipartite graphs (leading to high mixing time and low minimax fidelity) yet still excel in performance. - Much of the argument feels more opinion-based than backed by empirical evidence. For instance, low mixing time might not be inherently desirable. For example, deep ResNets or GPT architectures have ~96 layers to enable the learning of "strong" hidden representations without directly propagating information to output neurons. - Similarly, a high minimax fidelity does not obviously correlate with effectiveness. Modern networks routinely prune away large portions of weights or nodes, indicating that focusing on relatively few “important” nodes/edges is advantageous. 
- In contrast to their claims, the authors do not demonstrate any clear empirical link between these proposed metrics and the predictive performance of real-world neural networks. Methods And Evaluation Criteria: The datasets do not provide meaningful insights into what constitutes a “good” computational graph. They consist of synthetic graphs from the different random graph models, on which three different tasks are tested using a GAT. However, there is no clear connection to feedforward networks or their suitability for different tasks, leaving the practical relevance of these experiments unclear. Theoretical Claims: - Their motivation comes from feedforward networks, but their graph models cannot accommodate them. For example, their line graph model cannot simulate the forward pass of even a simple MLP; one would need edge weights. The same holds for other architectures like CNNs, RNNs, or attention blocks. - There are some unclear parts, e.g., locally connected feedforward graphs with $\kappa=1$ do not have in- and out-degrees equal to $2$ unless every "layer" $n$ (corresponding to order $n$) contains only one node. Please see below for clarifying questions. - The other proofs appear correct. Experimental Designs Or Analyses: Experimental design and analysis appear sound. Supplementary Material: I skimmed through them. Relation To Broader Scientific Literature: The paper is related to the oversquashing literature from GNNs. Some GNN papers working on spectral properties/oversmoothing on directed graphs are missing. Essential References Not Discussed: The authors claim that "many important concepts have not been generalised to the directed case..." I would suggest [1], where concepts like oversmoothing were generalized to directed graphs. I would also recommend [2], which discusses how feedforward networks / their parameters can be written as feedforward graphs. [1] Maskey, S., Paolino, R., Bacho, A., & Kutyniok, G. (2024). 
A fractional graph laplacian approach to oversmoothing. Advances in Neural Information Processing Systems, 36. [2] Lim, D., Maron, H., Law, M. T., Lorraine, J., & Lucas, J. Graph Metanetworks for Processing Diverse Neural Architectures. In The Twelfth International Conference on Learning Representations. Other Strengths And Weaknesses: ### Strengths: - The presentation is clear and straightforward to follow. - Analyzing feedforward networks from a graph perspective is an intriguing approach, particularly if it can spur improvements in sparsity or overall performance. - The theoretical analysis is detailed and appears thorough. Other Comments Or Suggestions: In conclusion, although the authors appear to address their central question, they do not provide sufficient evidence that these metrics correlate with the performance of modern feedforward networks. The authors’ motivation feels overstated: they neither fully analyze existing feedforward architectures nor demonstrate how to enhance computational graphs beyond the original input design. This gap risks misleading readers when considering new architecture development. For example, while mixing time is presented as a “good measure,” it tends to increase graph density and does not correlate with improved performance, thus casting doubt on its practical value. Questions For Authors: - Given that the authors' motivations come from analyzing feedforward networks, what is the relation of their analysis to feedforward networks? In particular, given that the nodes are an ordered set, are nodes allowed to have the same order? This would allow one to model feedforward networks as seen in [2]. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer QmmN, Thank you for your careful review! We hope to provide useful clarifications, and that you may reconsider the relevance of our work: ### **On feedforward networks** Your comments focus on our work not easily representing multi-layer feedforward NNs. For us, the term **“feedforward graph”** is defined more narrowly (cf. Section 3): we study feedforward computation over a _data axis_ (ie. we assume the _input nodes_ have a sequential ordering to them) rather than the _layer axis_! It is hence a _spatial_ constraint rather than one pertaining to multilayer architectures. Prior work on analysing data-axis feedforward graphs is **rare**: there was no established theory even for single-layer models over such inputs. The entirety of our paper aims to fill this gap, and study how to integrate information across the data axis with a _single_ graph. This corresponds to studying multi-layer GNNs which use the same feedforward graph in every layer. **It was never our intention to study arbitrary multi-layer signal propagation in this paper**. This was sufficient to fill the page limit and pose important directions for future work. Progressing into the general case (where graphs may differ across layers) is a natural next step, but it induces a combinatorial explosion of design choices. We wanted to understand the basics well before inviting such analyses. > This gap risks misleading readers when considering new architecture development. We recognise that, due to the established meaning of “feedforward”, our wording might pose a risk of misleading. We are happy to commit to **changing our title** to be more precise, eg.: “What makes a good _one-step_ feedforward computational graph?” We also commit to **discussing in detail** how our setup differs from general multilayer NNs. ### **On metrics** Our notion of **mixing time** measures how quickly _salient information_ (ie. input features) can mix within the model. 
That is, how many feedforward layers (for a given graph) are necessary for a certain level of mixing of salient data to occur. > ResNets or GPT architectures have ±96 layers to enable the learning of "strong" hidden representations without directly propagating information to output neurons. This point is _irrelevant_ to our metric, as intermediate neurons in a multi-layer network **do not hold salient information at the beginning**. If every neuron had salient data at the start, then our mixing time would be particularly high; but in the common scenario, our metric’s one-step approach wouldn’t measure mixing in the output neurons only, but at any intermediate point. On **minimax fidelity**: > Modern networks routinely prune away large portions of weights or nodes, indicating that focusing on relatively few “important” nodes/edges is advantageous. The problem is that modern NNs over feedforward data (eg. GPT) are topologically biased towards **earlier nodes** (proved in Barbero et al., NeurIPS’24), and hence not equally well-prepared when important data is not at the start of the input. This is **precisely** what we try to quantify with minimax fidelity: _“which node is in the worst possible position by this choice of graph structure”_? Fidelity focuses on averaging (rather than pruning) because the mechanisms current models have for pruning, such as the `softmax` function, will provably fail to prune at OOD problem sizes (proved in Veličković et al., 2024). > In particular, given that the nodes are an ordered set, are nodes allowed to have the same order? While not explicitly allowed, our framework supports it: simply choose a random order for equal nodes. One component of the FS graph is a block-wise sequence of bipartite communications of this kind. ### **Further points** > Related work [1, 2] We appreciate these relevant papers and will be sure to discuss them in our revision. 
While [1] generalises oversmoothing to directed graphs, they are not “feedforward”; backwards edges are allowed, making a spectral approach far easier. > The datasets do not provide meaningful insights into what constitutes a “good” computational graph We designed our tasks to allow fine-grained control over which (& how many!) nodes are relevant for computing the answer, allowing us to validate our theory’s prescriptions in an unbiased manner. That said, we agree evaluating real-world setups would make the work more valuable. We have now trained Gemma 2B models – varying the graph used as the attention mask – on the standard Wiki dataset (tensorflow.org/datasets/catalog/wikipedia) containing texts from Wikipedia. After 3,000 batches of training, our model achieves the following perplexities: | **Graph** | **Perplexity** | | :------- | :--------: | | Full | $5.3513$ | | Line | $5.7267$ | | FS | $\mathbf{5.2786}$ | This trend is consistent throughout training, demonstrating the sparse FS graph is competitive with the full feedforward graph, even when nodes are natural language tokens. --- Rebuttal Comment 1.1: Comment: Dear authors, Thanks for the rebuttal. I indeed misunderstood the motivation—apologies for that. The central question of your work is better phrased as "what makes a good causal attention mask?" in Transformer terms. The term feedforward graph could be confusing, given its association with standard feedforward networks (also often represented as DAGs), which led to my earlier comparison with works like [1]. I agree that, in this light, my earlier critique of the metrics no longer applies. I have updated my review accordingly and raised my score. Some further clarifying questions/points: - Might you consider a “copy-last-token” or "last token copies feature of intermediate token x" tasks (or similar long-range memory benchmark) to highlight whether FS improves upon fully causal masks? 
- Have you thought about how learned attention weights might partially override low fidelity? You do note that certain theoretical results (e.g. [Barbero et al., 2024]; [Veličković et al., 2024]) question how well attention can prune in the large-length regime, which justifies looking at your purely topological perspective. However, I believe a brief discussion of when attention can adapt enough to mitigate topological constraints would be valuable. - If each layer has a “locally optimal” feedforward graph, where optimal refers to good mixing time and fidelity, does the whole Transformer have a "globally optimal" feedforward graph? - I would like to point to directed expanders discussed in *On directed analogues of expander and hyperfinite graph sequences* by (Csóka and Grabowski, 2021). Best wishes. --- Reply to Comment 1.1.1: Comment: Dear Reviewer QmmN, Thank you so much for your positive response and reevaluating your assessment of our work! We are delighted that we’ve been able to reach a mutual understanding about our work’s purpose. Your follow-up is full of useful suggestions which are greatly appreciated! Here are our answers: > The central question of your work is better phrased as "what makes a good causal attention mask?" in Transformer terms. Causal masks are indeed a very relevant deployment direction for our work. We wanted to avoid mentioning _“causal masks”_ in the title as we didn’t want to make any connotations with causality; furthermore, several other important settings – such as temporal graph representation learning – rely on similar spatial constraints but do not require the use of Transformers. We will explore the updated wording in the title with careful consideration of all of these points, and in either case the discussion with you was highly informative, and general feedforward graphs are worth discussing in our paper – we will be sure to reference the key takeaways from our exchange prominently in the revision. 
> Might you consider a “copy-last-token” or "last token copies feature of intermediate token x" tasks (or similar long-range memory benchmark) to highlight whether FS improves upon fully causal masks? In some sense, we were already doing this (the maximum task requires isolating the maximal token, and then copying one of its features). Your suggestion, however, inspired us to test more granularly how such tasks relate to fidelity. For our revision, we have now conducted an experiment where we evaluate on the max task (same as before) but _we **stratify** the reported accuracies as a function of where the max token is._ This allows us to visualise a “performance profile” for the various graph choices – and indeed, the full graph has a sharp collapse towards the last few tokens, while the FS graph retains consistent predictive power regardless of where the max token is located. We will gladly display this in the updated paper, since it shows a clear link between what smaller minimax fidelity predicts (that certain tokens’ representation within the final token’s embedding will eventually “fall off a cliff”) and empirical performance on a task. > Have you thought about how learned attention weights might partially override low fidelity? You do note that certain theoretical results (e.g. [Barbero et al., 2024]; [Veličković et al., 2024]) question how well attention can prune in the large-length regime, which justifies looking at your purely topological perspective. However, I believe a brief discussion of when attention can adapt enough to mitigate topological constraints would be valuable. We have thought about this a fair bit, and will add some of our key takeaways in the paper. Certainly, attention would be capable of overcoming some of the topological restrictions when the length distribution shift is not large enough to trigger the Theorems in the prior works. 
However, we believe that there is another tradeoff that must be contended with: due to the renormalising effect of the softmax function, any attention invested by the modules to “sharpen” less represented paths is attention that is not spent for mixing the relevant information between tokens for answering the task. This tradeoff emphasises the challenges of simultaneously solving the problem given to a Transformer and protecting dataflow along its most vulnerable paths, while also pointing at the fact that alternatives (such as a “pure gate” / sigmoidal attention) might allow for more granularity in this tradeoff. > If each layer has a “locally optimal” feedforward graph, where optimal refers to good mixing time and fidelity, does the whole Transformer have a "globally optimal" feedforward graph? This is a very interesting question to ponder, and we will highlight it in the future work section. This does sound plausible to us, although the definition of “locally optimal” may likely induce some interesting combinations of constraints across layers, which might not be trivial to resolve. But we are absolutely certain that research in this space should progress in these kinds of directions, as it is highly unlikely that repeating one layer only will achieve global optimality in the regime without backwards edges. > I would like to point to directed expanders discussed in On directed analogues of expander and hyperfinite graph sequences by (Csóka and Grabowski, 2021). Thank you; this is an excellent reference that we will be certain to discuss in our revision! Best, Authors
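To make the mixing/fidelity discussion in this thread concrete, here is a toy numerical illustration (our own construction, not the paper's formal definitions of mixing time or minimax fidelity) comparing how much of the first input survives at the last node under repeated uniform averaging along two canonical feedforward graphs:

```python
import numpy as np

def row_norm(A):
    # Row-normalize an adjacency matrix into averaging weights
    return A / A.sum(axis=1, keepdims=True)

n, t = 16, 16  # number of nodes, rounds of propagation

# Fully-connected feedforward graph: node i receives from every j <= i.
full = row_norm(np.tril(np.ones((n, n))))
# Line graph: node i receives from itself and from i - 1.
line = row_norm(np.eye(n) + np.eye(n, k=-1))

# Weight the last node's representation places on the first input
# after t rounds of uniform averaging along each graph.
w_full = np.linalg.matrix_power(full, t)[-1, 0]
w_line = np.linalg.matrix_power(line, t)[-1, 0]

# The line graph's contribution from the distant first node decays
# far faster than the full graph's.
assert w_full > w_line > 0
```

Consistent with the discussion above, the fully connected graph heavily over-weights the earliest token (the bias proved in Barbero et al.), while the line graph nearly erases distant tokens altogether; both pathologies are what a well-designed sparse graph tries to avoid.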
Summary: The paper studies the impact of feedforward graph structures on information flow, introducing two metrics — mixing time and minimax fidelity— to assess speed and accuracy respectively. The study reveals a trade-off between fast information propagation and high-fidelity signal propagation among various graph types and proposes the FunSearch (FS) graph generator to design graphs that balance these metrics effectively. Empirical evaluations highlight that neural networks using the FS graph architecture outperform or match those with traditional designs in both in-distribution and out-of-distribution tasks. Claims And Evidence: The paper's claims regarding mixing time and minimax fidelity are well-supported by rigorous theoretical analyses in Lemma 4.1, Proposition 5.1, and evidence obtained from evaluations on canonical graph structures in Figure 3. The introduction of the FS graph generator is backed by theoretical guarantees in Theorem 6.1 and evidence from empirical tests in Figure 3, substantiating the claims of achieving a balance between fast mixing and high fidelity. However, idealized assumptions limit the generality of the claims (please see 'Theoretical Claims'), while broader applicability remains uncertain without more evidence on real-world datasets (please refer to 'Experimental Designs Or Analyses'). Methods And Evaluation Criteria: The introduction of mixing time and minimax fidelity as complementary metrics measures information propagation efficiency and input detail preservation in graphs. Theoretical analyses, featuring rigorous derivations and asymptotic studies for various graph constructions, provide a foundation and clarify graph design trade-offs. The three synthetic tasks — maximum retrieval, second maximum retrieval, and parity computation — test information propagation efficiency and sharpness, effectively stressing different graph aspects and aligning well with the study's objectives. 
Theoretical Claims: Lemma 4.1 and Proposition 5.1 establish stationary distribution and signal decay in feedforward graphs, with proofs demonstrating valid reasoning under specific assumptions. However, Theorem 6.1's proof relies on strong assumptions, such as integer-valued log n, perfect divisibility, and fixed minimum fraction of outgoing edges crossing blocks. Although the overall strategy is plausible and the recurrence analysis leads to a polylogarithmic upper bound, the proof depends on somewhat idealized conditions, raising questions about how robust the result is when these assumptions are relaxed. Experimental Designs Or Analyses: The experiments use synthetic tasks that lack real-world complexity, with graph attention network conditions limited by identity vertex features and synthetic graph structures. Testing on larger, real datasets with authentic vertex features and real-world graphs would better validate the findings. Additionally, providing more detailed information on the experimental settings—such as hyperparameter tuning, hyperparameter details of baselines, and statistical significance tests—would enhance the robustness of the findings. Supplementary Material: The lack of access to the code as part of the supplementary material limits the ability to verify the experimental results. Without the code, independent reproduction and validation of the findings become challenging for other researchers. However, since this submission relies on synthetic datasets and tasks, independent reproduction of the results might be easier than in code-heavy projects with real datasets and specific train-test splits. Relation To Broader Scientific Literature: This submission extends the extensive literature on graph rewiring to feedforward (directed acyclic) graphs — a setting where traditional spectral methods fall short due to the lower-triangular structure. 
The introduction of mixing time and minimax fidelity metrics adapts classical Markov chain and expander graph theories to feedforward architectures. The FS graph generator, inspired by FunSearch, contributes a practical algorithm for designing graphs that balance rapid information mixing with high signal fidelity. Essential References Not Discussed: Including discussions of random walks on directed acyclic graphs, or connections to causal inference frameworks, could further highlight the novelty of extending these analyses to feedforward graphs. Discussing recent work on sparse transformer mechanisms, e.g. [1], would enhance this paper by highlighting parallels in balancing computational efficiency and signal preservation, akin to mixing time and minimax fidelity metrics. A related work [2] presents a novel method that improves long-range connectivity in directed graphs using stochastic rewiring with graph attention. [1] Big Bird: Transformers for Longer Sequences, In NeurIPS 2020. [2] Graph Attention with Random Rewiring, arXiv 2407.05649v1. Other Strengths And Weaknesses: Strengths: [+] The paper is clearly written and well-organized. [+] Graph rewiring in the context of feedforward and directed (acyclic) computation graphs is largely unexplored. Weaknesses: [-] Experiments are weak; rigorous experiments and ablation studies, e.g., adding/removing self-edges and experimenting with various in-degrees, would strengthen the submission. Other Comments Or Suggestions: The submission would benefit from a discussion on limitations and future research directions. For example, integrating the proposed metrics with modern design paradigms, such as residual connections and transformer variants, could lead to advanced models. These models would dynamically adapt their computational graphs based on the specific task. Questions For Authors: 1. Was there a consideration for testing the results on real-world datasets with real-world vertex features? 
This could demonstrate the practical applicability of the metrics beyond synthetic tasks, increasing confidence in their real-world relevance and utility. 2. Were there considerations for how Theorem 6.1 might adapt to less ideal conditions without strict assumptions like integer-valued log n? This would increase the theorem's applicability to real-world scenarios, enhancing the paper's theoretical impact and generality. 3. Was there a consideration for conducting comprehensive ablation studies, such as adding/removing self-edges? Such studies can improve understanding of experimental design choices, indicating areas for refinement and strengthening future research outcomes. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer y3Tf, We are delighted that you have found our foundations to be strong and our results to be interesting. We hope that our responses will strengthen your view of our contributions even further! > However, Theorem 6.1's proof relies on strong assumptions Thank you for raising this! We have now improved the proof of Theorem 6.1 so that all simplifying assumptions have been removed. This turned out to be fairly straightforward, and mostly required the judicious use of ceiling and floor functions; eg. the correct value of the number of levels, $D$, is now: $$D = \left \lfloor \frac{\log n}{\log \lceil \log n \rceil} \right \rfloor,$$ but the key arguments of our random walk analysis are unchanged. The proof is unwieldy to paste here but we will of course update it in our revision, and we are happy to discuss further aspects if you would like to know more! > The experiments use synthetic tasks that lack real-world complexity We designed our tasks to allow fine-grained control over which (and how many!) nodes are relevant for computing the answer; this allows us to validate our theory’s prescriptions in the most unbiased manner. That said, we agree that evaluating on real world data would make the work more valuable. We have now trained Gemma 2B models – varying the graph used as the attention mask – on the standard Wiki dataset (https://www.tensorflow.org/datasets/catalog/wikipedia) containing natural language texts scraped from Wikipedia. After 3,000 batches of training, our model achieves the following perplexities: | **Graph** | **Perplexity** | | :------- | :--------: | | Fully connected | $5.3513$ | | Line | $5.7267$ | | FS | $\mathbf{5.2786}$ | This trend is consistent throughout training. This demonstrates that the sparse FS graph can remain competitive with the full feedforward graph, even when nodes are natural language tokens. 
> Providing more detailed information on the experimental settings Thank you; we will include more details. We used default hyperparameters in most places, and tuned the order-of-magnitude of the learning rate and the weight decay coefficient using the fully-connected graph experiments only (then reused the tuned parameters everywhere else). > The lack of access to the code as part of the supplementary material limits the ability to verify the experimental results. We appreciate your remark, and thank you for acknowledging that the paper’s findings should be easier to reproduce since tasks are synthetic. We understand the positive impact of open-sourcing, and would aim to release our code for the synthetic tasks upon acceptance. > Including discussions of random walks on directed acyclic graphs, or connections to causal inference frameworks, could further highlight the novelty of extending these analyses to feedforward graphs. We agree that these points are useful, and will add further discussions! To give a concrete example, since our original submission, we managed to make our statement in Section 4.4. (mixing time is ‘small’ iff there are ‘lots’ of paths from ‘most’ vertices to $\tau$) precise: **Proposition:** _Suppose that the outdegree of every vertex other than $\tau$ is at least $2$. Let $t$ be the average mixing time. Then for some $s \leq t$, the average number of paths from vertex $i$ to $\tau$ with length $s$ is at least $(3/4t)2^s$, where the average is taken over all vertices $i$ between $0$ and $n-1$._ We will include this in the next revision – it provides a relevant connection between our metrics and paths on a DAG and its proof is only a few paragraphs long. > Related works ([1, 2]) Thank you for bringing these to our attention. They are highly relevant to discuss and we will incorporate them. The key difference is that both Big Bird and GRASS optimise for _bidirectional_ graphs, where back edges are allowed. 
> Rigorous experiments and ablation studies, e.g., add/remove self-edges, experimenting with various in-degrees This is an excellent suggestion! We have now performed both of these ablations, and found that removing self-edges led to significant performance regressions on all tasks – especially on the Parity task where multiple elements need to interact meaningfully for the final answer computation. We also evaluated our graph generators at different levels of in-degree OOM: $O(1)$, $O(\log n)$ and $O(\sqrt{n})$. We find that, as the orders are increased, the models get more performant in-distribution, without a significant effect out-of-distribution. We will incorporate both of these analyses in our revision! > The submission would benefit from a discussion on limitations and future research directions. We’ll explicitly include the directions you discussed (integration with Transformer variants, dynamically adapting the computation graph) and we’ll also suggest additional ones (connecting the mixing and fidelity metrics and finding further connections between them and spectrally-motivated analyses!).
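As a small addendum, the corrected expression for the number of levels, $D = \lfloor \log n / \log \lceil \log n \rceil \rfloor$, quoted earlier in this rebuttal can be sanity-checked numerically. The snippet below is purely illustrative; taking the logarithm to be natural is an assumption of this sketch.

```python
import math

def num_levels(n):
    # D = floor(log n / log(ceil(log n)));
    # natural log assumed here for illustration (an assumption of this sketch).
    return math.floor(math.log(n) / math.log(math.ceil(math.log(n))))

print([num_levels(n) for n in (100, 1000, 1_000_000)])
```

This confirms the very slow (sub-logarithmic) growth of the number of levels with $n$.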
Summary: The paper proposes two metrics to measure the quality of a computational graph; experiments demonstrate the correlation between those metrics and actual performance. ## update after rebuttal The authors' response addresses my question. The paper looks good to me; I will keep my rating. Claims And Evidence: YES Methods And Evaluation Criteria: YES Theoretical Claims: YES Experimental Designs Or Analyses: YES Supplementary Material: NO Relation To Broader Scientific Literature: Related to model optimization, general model training Essential References Not Discussed: NO Other Strengths And Weaknesses: Strengths: 1. the paper is well written and easy to follow 2. the experimental part is good and can support the claims The paper addresses the challenge of what constitutes a good feedforward graph structure, especially in terms of information propagation quality. The two metrics (mixing time and minimax fidelity) provide a principle for graph design that could inspire further research in this area, crucial for understanding how effectively information flows through the graph. The experiments show that the new graph generator outperforms existing models, showcasing the potential for improved model sparsity and out-of-distribution generalization. Other Comments Or Suggestions: NO Questions For Authors: 1. The paper discusses the asymptotic behaviour of various graphs. Why do certain structures perform better from an ML perspective? 2. The paper presents empirical results on tasks like maximum retrieval and parity. However, the tasks seem relatively simple. Could the authors expand their evaluation to more complex tasks, such as those involving real-world datasets (e.g., natural language processing or graph-based tasks)? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer GmC2, We are very thankful for your kind review and recognising the strengths of our work! To address your questions: > The paper discusses the asymptotic behaviour of various graphs. why certain structures perform better from an ML perspective. This is an excellent question. We believe that we can relate at least some of our metrics to established approaches in graph machine learning that study which graphs will have better information propagation from a machine learning perspective. To name a few examples: * **High mixing time** implies that it takes a long time for information to travel to the final vertex (in the worst case – when there is more than one sink – mixing time is infinite). If there are insufficiently many layers in a model to cover these long paths, this directly relates to the well-known _under-reaching_ phenomenon in graph neural networks (Barceló et al., ICLR’20). * **Low fidelity** for a particular node implies that this node’s information is not reaching the final node in a way that is not “drowned out” by others, especially as more model layers are added – this effect is well-studied in the GNN literature as the _over-smoothing_ effect, and was recently also studied in the domain of Transformers as well (Barbero et al., NeurIPS’24). * High fidelity in certain nodes, but **low _minimax_ fidelity**, implies that, while some nodes are able to sharply send their information into the final node, this is done at the expense of other nodes, which fail to arrive at the sink quickly enough to avoid being overwhelmed. This kind of biased communication is related to the _over-squashing_ effect, which Barbero et al. (NeurIPS’24) have also studied from the perspective of Transformers. Since our original submission, we have also managed to make our statement in Section 4.4. 
(that mixing time is ‘small’ if and only if there are ‘lots’ of paths from ‘most’ vertices to $\tau$) precise, as follows: **Proposition:** _Suppose that the outdegree of every vertex other than $\tau$ is at least $2$. Let $t$ be the average mixing time. Then for some $s \leq t$, the average number of paths from vertex $i$ to $\tau$ with length $s$ is at least $(3/4t)2^s$, where the average is taken over all vertices $i$ between $0$ and $n-1$._ The proof is only a couple of paragraphs long, and it uses the intuition given in Section 4.1. More precisely, $(\mathbf{W}^t)_{\tau i}$ represents the probability of a random walk starting at $i$ and reaching $\tau$ by time $t$. This is equal to the sum, over all $s \leq t$, of the probability that the random walk first reaches $\tau$ at time $s$. For any such $s$, this probability is equal to the sum, over all paths from $i$ to $\tau$ with length $s$ avoiding the loop based at $\tau$, of the probability of taking that path. This latter probability is at most $1/ 2^{s}$ because at each step along the path there were at least two options that could have been taken. So, for some $s \leq t$, the average number of paths from vertex $i$ to $\tau$ with length $s$ is at least $(3/4t)2^s$. We intend to include this new result in the appendices. > The paper presents empirical results on tasks like maximum retrieval and parity. However, the tasks seem relatively simple. Could the authors expand their evaluation to more complex tasks, such as those involving real-world datasets (eg. natural language processing or graph-based tasks)? Thank you for your comment! We have designed our tasks to allow us fine-grained control over which (and how many!) nodes are relevant for computing the final answer, as this would allow us to validate our theory’s prescriptions in the most unbiased manner. That being said, we fully agree that evaluating our proposal on real world datasets would make the work more valuable. 
To this end, we have now trained Gemma 2B language models – varying the graph used as the attention mask – on the standard Wiki dataset (https://www.tensorflow.org/datasets/catalog/wikipedia) containing natural language texts scraped from Wikipedia. After 3,000 batches of training, our model achieves the following perplexities: | **Graph** | **Perplexity** | | :------- | :--------: | | Fully connected | $5.3513$ | | Line | $5.7267$ | | FS | $\mathbf{5.2786}$ | This trend is consistent throughout training. This demonstrates that the sparse FS graph can remain competitive with the full feedforward graph, even when nodes are natural language tokens.
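Returning to the proposition stated earlier in this rebuttal: the key identity used in its proof, namely that $(\mathbf{W}^t)_{\tau i}$ decomposes as a sum of first-arrival path probabilities, can be verified numerically. The toy graph below is chosen by hand purely for illustration (it does not satisfy the outdegree $\geq 2$ hypothesis; the check targets only the decomposition identity, not the bound).

```python
import numpy as np

# Toy feedforward graph with sink tau = 3 (absorbing self-loop);
# chosen by hand purely for illustration.
adj = {0: [1, 2], 1: [2, 3], 2: [3], 3: [3]}
tau, n, t = 3, 4, 2

# Random-walk transition matrix: W[j, i] = probability of stepping i -> j.
W = np.zeros((n, n))
for i, outs in adj.items():
    for j in outs:
        W[j, i] += 1 / len(outs)

reach = np.linalg.matrix_power(W, t)[tau]  # (W^t)_{tau, i}

def first_arrival_prob(i, t):
    """Sum, over walks from i that first hit tau at some step s <= t,
    of the product of 1/outdegree along the walk."""
    total = 0.0
    frontier = [(i, 1.0)]
    for _ in range(t):
        nxt = []
        for v, p in frontier:
            for w in adj[v]:
                q = p / len(adj[v])
                if w == tau:
                    total += q
                else:
                    nxt.append((w, q))
        frontier = nxt
    return total

for i in range(n):
    if i != tau:
        assert abs(reach[i] - first_arrival_prob(i, t)) < 1e-12
print("first-arrival path decomposition matches (W^t)_{tau, i}")
```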
From Thousands to Billions: 3D Visual Language Grounding via Render-Supervised Distillation from 2D VLMs
Accept (poster)
Summary: This paper proposes an approach for 3D vision-language understanding by leveraging rendered RGB images, grounding masks, and 2D feature loss for model training, rather than incorporating explicit 3D supervision. The model follows a pretrain-finetune paradigm, with evaluations conducted on open-vocab 3D instance segmentation and 3D referential grounding. Ablation studies demonstrate that the pretraining stage enhances fine-tuning efficiency by reducing the amount of required data. ## Post rebuttal I appreciate the authors’ clarifications regarding the model architecture, pretraining strategy, and the paper’s claims. Although this direction is promising, I cannot currently recommend acceptance due to several issues: (1) the omission of critical technical details in the methods section; (2) numerous typos and grammatical errors; (3) overstatements of the model’s capabilities; and (4) insufficient explanation as to why pretraining on ScanNet is beneficial compared to ScanNet++. Instead, the authors only compare pretraining on ScanNet++ followed by fine-tuning on ScanNet. Concerns (1) and (2) were also raised by Reviewer eYEH, and (3) was flagged by Reviewer zvKK. The authors acknowledge these points in the rebuttal. In the paper’s current form, there is more confusion than clarity: for instance, the model details remain unclear, it is not stated which data the model is pretrained on, how exactly that pretraining is conducted, or why ablations and data scaling are evaluated on a combined set of ScanRefer, SR3D, and NR3D. Furthermore, the remaining grammatical errors and overstated claims may mislead readers, and I encourage the authors to revise and polish the paper to improve its coherence. Claims And Evidence: - The paper proposes a framework for pretraining and fine-tuning on 3D VL tasks, and its performance is verified by the experiments and ablations. - However, "The approach is new for vision-language understanding." 
(Line 16) is not accurate, as using rendered features/images as supervision is not new in 3D VL understanding, as also pointed out by the paper's related work section, e.g., PonderV2, Point Cloud Unsupervised Pre-training via 3D Gaussian Splatting (Liu et al.). This paradigm is the first in the 3D referential domain, to the best of my knowledge. - The claim in the introduction that the "render supervised framework can be used with essentially any 3D/4D task or model, provided the results are renderable" is quite bold to make. For example, extending the proposed method to tasks requiring image inputs (e.g., fine-grained captioning) instead of sparse point clouds, or to 4D dynamic scenes, is inherently non-trivial. Methods And Evaluation Criteria: The paper lacks sufficient details of the proposed methodology, from both the model and data perspectives. - The paper lacks a detailed explanation of the model architecture and its operational flow. While it specifies the inputs and outputs of each module, it does not clarify how inputs are processed within the model or how outputs, such as the correspondence matrix C, are subsequently utilized. Furthermore, Figure 4 and the accompanying text are not well-aligned, leading to significant confusion for readers trying to understand the overall framework. - The paper does not mention how (much) pretraining data is generated or used in the pretraining. Are all the images used for each scene? Is the sensor PC organized by the RGB-D point clouds for all the frames? Then how are the increasingly large point clouds handled as model input? Theoretical Claims: No theoretical claims in the paper. Experimental Designs Or Analyses: The experiment results generally validate the effectiveness of the proposed model. I have two questions: 1) In Tab. 4 Loss Ablation, what is the performance when only L_{RGB} is excluded? It seems the L_{RGB} loss plays a negligible role in the performance. 
2) Could the author provide insights into why adding Scannet++ only marginally improves the performance in Tab. 6? ScanNet++ dataset, while smaller than ScanNet, is roughly 1/3 of the ScanNet scenes. Is using half of the ScanNet scenes realizing similar performance, similar to the data scaling of the fine-tuning data? Supplementary Material: Supp contains more training details, the performance of data scaling on open-vocab segmentation, and discussion of limitations. Relation To Broader Scientific Literature: I believe the proposed method aims to address the scarcity of data in 3D vision language understanding domain, by leveraging the benefits of neural rendering, e.g., Gaussian Splatting, and its strong relationship with point clouds. The paper presents evidence of improving 3D referential grounding through this paradigm, which I believe can bring potential insights for the community. Essential References Not Discussed: Not that I'm aware of. Other Strengths And Weaknesses: The writing of the paper could be further refined. Currently, it contains typographical errors, unjustified hypotheses in the introduction, and a method section that lacks sufficient details. Other Comments Or Suggestions: Typos, for example, on Line 22-23 "For training, only need images and camera pose, and 2D labels.", Line 25-26 "We demonstrate this to pretrain a network", Line 90 "Specifically, render supervised framework can be used". Questions For Authors: While I appreciate the results demonstrated in the experiment section, as well as the data scaling performance in the ablation, I cannot give a positive rating at the moment primarily due to my concerns over the method and pretraining data as outlined above. Especially, how do the readers comprehend the effectiveness of pretraining when the details of pretraining data are not presented? The authors are encouraged to address my concerns above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate that the reviewer likes our results and data scaling performance. We answer the questions below and will improve the writing given the valuable suggestions. --- > Model architecture and its operational flow… **Encoder Backbone**: is a SparseConv UNet [1] (Sec. 3.4), following PonderV2 implementation. **Gaussian Head** is a lightweight MLP (2 layer, 128 dim) mapping each point from the Encoder Backbone to Gaussian parameters (L182-L190), including scaling, color, etc. The **Mask Decoder Transformer** follows the MaskFormer pipeline [2], using m learnable tokens (mask proposal tokens in Figure 4) to predict potential 3D masks. Each token corresponds to a binary 3D instance mask over the Gaussians. The **correspondence matrix** C (m x |Q|) grounds the m instance masks into the Q language tokens. Each element indicates the probability that the mask corresponds to that particular language token. The **mask decoder** is a Transformer decoder, where the visual and language tokens serve as the Key and Value, while the proposed mask tokens act as the Query inputs to the decoder. To further aid clarity and reproducibility, we will release the code and improve the final version. Lastly, we note that LIFT-GS is model-agnostic, imposing minimal architectural constraints and being readily adaptable to other architectures (Lines 224–230). > Pretraining Data We used the training scenes from ScanNet and ScanNet++ for pretraining, except for the ablation study in Table 6. For images, we sampled frames from the original video trajectories with a frame skip of 30 (i.e., at 1 Hz). Each selected RGB-D frame was unprojected using the provided camera intrinsics and poses, then voxel-pooled at a 5 cm resolution, so the total point cloud number is controlled. ### Claims > Using rendered features/images as supervision is not new in 3D VL understanding... This paradigm is the first in the 3D referential domain, to the best of my knowledge. 
We agree and will make the claim more precise by specifying that we mean grounding with complex language (3D referential grounding). > The claim "render supervised framework…" is quite bold to make. For example, extending... or to 4D dynamic scenes is inherently non-trivial. We appreciate the reviewer’s suggestion and will revise it to make it more accurate. We appreciate any recommendations, and suggest the following: “The render-supervised framework provides a general and extensible design. LIFT-GS shows how to use it for highly structured tasks, such as 3D referential grounding and object detection”. We genuinely believe that our pipeline presents a general and extensible design. While adapting it to new tasks may involve non-trivial effort, we argue that such extensions are both feasible and conceptually straightforward. For image inputs, one could use methods like Dust3R [3] to regress point maps from images as point clouds. For dynamic scenes, it is possible to regress motion basis coefficients for each point, as Shape-of-Motion[4]. These examples illustrate our belief that the proposed pipeline can be extended to support a wide range of applications beyond the current scope. ### Experimental analysis > Role of L_{RGB} loss We keep L_{RGB} loss because it is necessary to supervise the reconstruction of the 3D Gaussian fields. We agree that this loss may not significantly benefit downstream tasks, as it provides limited semantic information. > Improvement of adding Scannet++ in Table6? Excellent point. In Tab. 5, increasing the finetuning data by 100% yielded about a 5% Acc@0.5 improvement. Increasing the pretraining data by 30% yielded a 1% improvement. Based on the curve in Figure 6 (we show the real and estimated values based on the curve below), the performance gain from adding ScanNet++ data (30% more) for pretraining is roughly equivalent to adding 15% more ScanNet finetuning data with 3D annotations. 
It shows that the effective transfer ratio is roughly 1 / 2; i.e., collecting twice the number of raw videos alone can yield improvements comparable to building a fully annotated 3D dataset. Therefore, **pretraining on ScanNet++ is highly efficient and cost-effective**, considering that annotating 3D referential grounding data requires orders of magnitude more effort than collecting raw video alone. |Finetune Data Ratio| 10% | 20% | 50% | 100% | 115%| 130% | Pretraining on Scannet++(130%)| |-|-|-|-|-|-|-|-| | Acc@0.25 | 24.96 | 36.11 | 43.80 | 47.53 | 48.28 | 48.94| 48.29| | Acc@0.5 | 14.70 | 23.03 | 28.89 | 33.75 | 34.72 | 35.59 | 34.35| | Acc@0.75 | 4.89 | 8.26 | 11.42 | 13.49 | 13.91 | 14.27 | 14.06| [1] https://github.com/facebookresearch/SparseConvNet/blob/main/examples/ScanNet/unet.py [2] MaskFormer: Per-Pixel Classification is Not All You Need for Semantic Segmentation [3] DUSt3R: Geometric 3D Vision Made Easy [4] Shape of Motion: 4D Reconstruction from a Single Video
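To make the decoder outputs described above concrete, here is a schematic, shape-level sketch: m mask proposal tokens each induce a binary 3D mask over the Gaussians and a row of the correspondence matrix C (m x |Q|). All tensors and sizes below are random placeholders of our choosing, not our actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder sizes (illustrative assumptions, not the paper's values):
N, m, Q, d = 1000, 8, 12, 128  # gaussians, mask tokens, language tokens, feature dim

mask_tokens = rng.normal(size=(m, d))   # learnable queries of the mask decoder
gauss_feats = rng.normal(size=(N, d))   # per-Gaussian features from the encoder
lang_feats = rng.normal(size=(Q, d))    # language token embeddings

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Each mask token induces a binary 3D mask over the Gaussians...
mask_logits = mask_tokens @ gauss_feats.T           # (m, N)
masks = mask_logits > 0                             # m binary instance masks

# ...and a row of the correspondence matrix C: the probability that
# the mask grounds to each language token.
C = softmax(mask_tokens @ lang_feats.T, axis=-1)    # (m, Q)

assert masks.shape == (m, N) and C.shape == (m, Q)
assert np.allclose(C.sum(axis=-1), 1.0)
```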
Summary: The work addresses the problem of open-vocabulary 3D segmentation, that is, predicting 3D masks for an RGB point cloud that adhere to a language-based query. In order to do so, the authors propose a feedforward architecture that predicts 3D Gaussians which carry information about their membership in segmentation masks. These masks are predicted via a transformer decoder from the language query and the encoded point cloud (with learnt mask tokens similar to the decoder of MaskFormer). A key point of the paper is the supervision via 2D pseudo ground truth (which is plentiful) rather than 3D labels. This is achieved by rendering the Gaussians and their masks and comparing them against 2D segmentations built from SAM masks and corresponding CLIP language embeddings. The model is optimized on (i) RGB reconstruction, (ii) mask and language consistency, and (iii) a feature loss. Experimental results demonstrate the effectiveness of the approach for open-vocab instance segmentation (ScanNet200) and 3D referential grounding (ScanRefer, SR3D, NR3D). Further ablation studies provide insights into the losses, the impact of pretraining, and the importance of good 2D foundational models for pseudo ground truth generation. Claims And Evidence: The main claim of the paper is distilling knowledge from 2D foundation models for supervision rather than relying on 3D labels. This argument is well accepted in the community and underlined by the scarcity of 3D labels. Methods And Evaluation Criteria: The method is well presented and builds upon existing building blocks (differentiable rendering via Gaussian splatting, mask prediction via learnt proposal tokens, SAM-based segmentation, CLIP for language embeddings), but the usage of 2D ground truth has also been well explored in the community (which is appropriately cited in the related work section). 
Theoretical Claims: N/A Experimental Designs Or Analyses: A comparison between PQ3D and the proposed approach is missing on the very same input data. The authors argue that they didn't succeed in retraining PQ3D on "Sensor PC" input data. Though, LIFT-GS could have been trained and evaluated on "Mesh PC" data. Even though this input data might not be available in real-world applications, it would allow for an apples-to-apples comparison to the SoTA method. Given the present analysis it appears that PQ3D might outperform the proposed method. A comparison would provide clarity. However, the novelty of the approach lies within the generic (pre)training from 2D ground truth rather than beating the top performer on any benchmark. Supplementary Material: No, not in depth. Relation To Broader Scientific Literature: Open-vocabulary 3D instance segmentation is a very active research field. As such the proposed 3D supervised pre-training task is valuable to the wider community. Essential References Not Discussed: Liu et al., "Weakly Supervised 3D Open-vocabulary Segmentation", NeurIPS'23 Qin et al., "LangSplat: 3D Language Gaussian Splatting", CVPR'24 Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: - How are the different scales of masks that are generated by SAM handled in the loss? Segmentations in different images that see the same object will contradict each other, such that a consistent 3D segmentation is not easily possible. He et al., "View-Consistent Hierarchical 3D Segmentation Using Ultrametric Feature Fields", ECCV'24, address this problem via a hierarchical segmentation. - Which model is used to compute the ground truth feature map F_2D? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s recognition of LIFG-GS, including “the method is well presented and builds upon existing building blocks”, and believing “As such the proposed 3d supervised pre-training task is valuable to the wider community.”. We answer the question below and will make them more clear in the final version. --- >How are the different scales of maks that are generated by SAM handled in the loss? Segmentations in different images that see the same object will contradict such that a consistent 3d segmentation is not easily possible. He. et al. "View-Consistent Hierarchical 3D Segmentation Using Ultrametric Feature Fields", ECCV'24 addresses this problem via a hierarchical segmentation. That’s an excellent question, and we appreciate the opportunity to elaborate on this point. A key advantage of our approach is that it does **not** require 3D view-consistent masks, as our method is fully learning-based and relies solely on 2D supervision. In contrast, existing methods that leverage SAM often rely on hand-crafted hierarchical structures or empirically designed mask-merging heuristics to improve consistency across views. These procedures are typically complex and heuristic-driven, introducing additional sources of noise and making the pipeline less robust. Our framework takes a different route: the 3D model is queried using 2D CLIP embeddings, and training is guided only by corresponding 2D mask and grounding losses. As a result, there is no requirement for the 2D masks to be consistent across views. Instead, we encourage the Transformer decoder to **learn how to produce coherent 3D masks** that align with these independently supervised 2D views, in a fully data-driven manner—without relying on manually designed post-processing steps. To extract masks from SAM, we sample a grid of points across each image and generate masks accordingly. 
We apply Non-Maximum Suppression (NMS) to reduce overlapping masks and discard extremely small ones. Aside from these minimal filtering steps, we do not apply any further merging or heuristic post-processing. --- >Which model is used to compute the ground truth feature map F_2D? We use the same pipeline as in Figure 3 to compute the ground-truth 2D feature map F_2D​. Specifically, we apply the CLIP image encoder to extract features for each segmented region, assigning the resulting feature vector to all pixels within that region. For segmentation, we adopt the SAM-H model, and we use the CLIP-L model to extract features. This pipeline is flexible and can also accommodate alternative backbones, such as DINO-v2. --- >A comparison between PQ3D and the proposed approach is missing on the very same input data…. However, the novelty of the approach lies within the generic (pre)training from 2d ground truth rather than beating the top performer on any benchmark. We sincerely appreciate the reviewer’s recognition that the novelty of our work lies in proposing a generic (pre)training framework using 2D supervision, rather than merely achieving state-of-the-art results on specific benchmarks. Regarding PQ3D, we note that the reason why our method does not use exactly the same input data—i.e., mesh point clouds—as PQ3D, is that mesh point clouds are impractical in real-world applications due to the need for time-consuming meshing and human annotations (L298-303). To ensure a fair comparison, we made every effort to retrain all baselines under our unified setting for an apples-to-apples comparison. However, we were unable to successfully reproduce PQ3D due to its complex, multi-stage training pipeline that spans across multiple datasets. We greatly appreciate the reviewer’s suggestions and agree that more rigorous, apples-to-apples evaluations—using identical input settings as prior works—would further improve the clarity and fairness of our comparisons. 
While we are actively working on this, the time and computational constraints during the rebuttal period prevent us from including those results here. We plan to incorporate them in a future revision of the paper. --- > Additional Reference Thanks for pointing out, and we will include them in the final version.
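Returning to the F_2D question answered above: the per-region feature scattering can be sketched as follows. This is a minimal illustrative sketch with placeholder shapes; in our pipeline the per-region features come from the CLIP image encoder applied to each SAM-segmented region.

```python
import numpy as np

def build_feature_map(region_ids, region_feats):
    """Assign each region's feature vector to all pixels in that region.
    region_ids: (H, W) integer map of region labels.
    region_feats: (R, d) one feature per region (from CLIP in practice)."""
    H, W = region_ids.shape
    F_2D = np.zeros((H, W, region_feats.shape[1]))
    for r, feat in enumerate(region_feats):
        F_2D[region_ids == r] = feat
    return F_2D

# Tiny example: two regions, 2-dim placeholder features.
ids = np.array([[0, 0], [1, 1]])
feats = np.array([[1.0, 0.0], [0.0, 1.0]])
F_2D = build_feature_map(ids, feats)
```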
Summary: This paper presents LIFT-GS, a scalable pretraining approach for 3D vision-language grounding. Specifically, the model takes in a point cloud of the scene along with the language query embeddings to produce 3D Gaussians with features, together with the predicted masks for grounding. For training LIFT-GS, a reconstruction loss on RGB, a feature loss on image features from 2D foundation models, and a grounding loss on the predicted masks are applied. The method does not require ground truth 3D annotations or even 2D annotations. Instead, it leverages 2D foundation models to generate pseudo-labels for training. Experimental results on 3D language grounding demonstrate the decent performance of the proposed method. ## Update after rebuttal My concerns about comparing with more recent state-of-the-arts and the training cost comparison are mostly solved. However, since the high-level idea is somewhat similar to LangSplat, which makes the Gaussians predict other features and properties beyond the original ones, I would regard the method as a good direction to try on 3D language grounding, but the concept itself is not super novel. Therefore, I would like to maintain my score of weak accept. Claims And Evidence: Yes, the claim that the proposed LIFT-GS acts as a pretraining solution for 3D language grounding without 3D supervision is supported by the experiments. Methods And Evaluation Criteria: Yes, the proposed methods and the evaluation criteria make sense. Theoretical Claims: The paper does not have any proofs or theoretical claims. Experimental Designs Or Analyses: Yes, the experimental designs and analyses are sound and valid, except that I have some concerns about whether there are stronger and more recent state-of-the-arts for comparison. I will elaborate on this point in the "Other Strengths And Weaknesses" section. Supplementary Material: Yes, I have reviewed all parts of the supplementary material. 
Relation To Broader Scientific Literature: The paper is focusing on 3D vision language grounding, which is broadly related to 3D multi-modal language models. It can be a primitive step towards artificial general intelligence. Essential References Not Discussed: There might be some more recent works that can be used for comparison. For example, SceneVerse [1] and D-LISA [2]. Also, LangSplat [3] should be discussed in the literature as it is the first to enable open-vocabulary language grounding in 3D scenes with Gaussian Splatting, although LangSplat is per-scene optimized instead of training across scenes in a feed-forward way as the proposed method. Other than that, there are no essential references that are not discussed. [1] Jia et al. SceneVerse: Scaling 3D Vision-Language Learning for Grounded Scene Understanding. ECCV 2024. [2] Zhang et al. Multi-Object 3D Grounding with Dynamic Modules and Language-Informed Spatial Attention. NeurIPS 2024. [3] Qin et al. LangSplat: 3D Language Gaussian Splatting. CVPR 2024. Other Strengths And Weaknesses: **Strengths:** - The proposed method achieves training a 3D vision language grounding model with no ground truth 3D supervision. Moreover, the 2D labels can also be pseudo-labels generated by 2D foundation models. It can be very useful for scaling up the training data as the training process does not require 3D annotations. - The proposed training pipeline of cross-scene render-supervision is innovative to me in the context of 3D vision language grounding. Nevertheless, I have to admit that both (1) cross-scene 3D Gaussians / neural rendering and (2) language embedding in 3D Gaussians with additional features are not completely new ideas. For (1), the related works include pixelSplat [1], MVSplat [2], etc. For (2), the related works include LangSplat [3]. 
**Weaknesses:** - For the experimental comparisons, the most recent state-of-the-art for fair comparison seems to be 3D-VisTA [4], which seems to be a relatively out-of-date work to me. I think there are more recent advancements for 3D vision language grounding, including SceneVerse [5], D-LISA [6], etc. The authors seem to miss discussing these related works, and therefore miss the experimental comparisons with them. - The training cost seems to be huge, requiring 32 A100 GPUs. Compared to other methods like SceneVerse [5] (which is already a method for scaling up data), which only requires 8 A100 GPUs for 2 days, the training cost of the proposed method seems to be very intimidating. [1] Charatan et al. pixelSplat: 3D Gaussian Splats from Image Pairs for Scalable Generalizable 3D Reconstruction. CVPR 2024. [2] Chen et al. MVSplat: Efficient 3D Gaussian Splatting from Sparse Multi-View Images. ECCV 2024. [3] Qin et al. LangSplat: 3D Language Gaussian Splatting. CVPR 2024. [4] Zhu et al. 3D-VisTA: Pre-trained Transformer for 3D Vision and Text Alignment. ICCV 2023. [5] Jia et al. SceneVerse: Scaling 3D Vision-Language Learning for Grounded Scene Understanding. ECCV 2024. [6] Zhang et al. Multi-Object 3D Grounding with Dynamic Modules and Language-Informed Spatial Attention. NeurIPS 2024. Other Comments Or Suggestions: There are some typos and strange grammar in some parts of the paper. I list some of them here: - Line 22: "For training, only need images and camera pose, and 2D labels." The sentence is weird. - Line 25-26: "We demonstrate this to pretrain a network ..." The expression of this sentence is strange. - Line 163: "ganulariry" -> "granularity" - Line 182: "Lmask" -> "$L_{\rm mask}$" - Line 205: "$C\sigma(i)$" -> "$C_{\sigma(i)}$" The correspondence matrix $C$ in Equation 1 has not been explained until Lines 194-195, which made me feel very confused when I first read the paper. 
Questions For Authors: - How is the proposed method compared with the more recent state-of-the-arts like SceneVerse [1] and D-LISA [2]? Are there any specific reasons for the authors not discussing these works and not comparing with them in the experiment section? If not, I think comparing with them would strengthen the paper. - What is the training cost comparison between the proposed method and the compared baselines? The paper only states that the method needs to run 76K steps on 32 A100 GPUs. However, there is no specific training time / cost comparison between LIFT-GS and the other baselines. [1] Jia et al. SceneVerse: Scaling 3D Vision-Language Learning for Grounded Scene Understanding. ECCV 2024. [2] Zhang et al. Multi-Object 3D Grounding with Dynamic Modules and Language-Informed Spatial Attention. NeurIPS 2024. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback and appreciate the recognition that using 2D supervision is “very useful for scaling up the training data,” that “the claim… is supported by the experiments,” and that “cross-scene render-supervision is innovative…”. We address the questions below and will incorporate improvements in the final version. --- > Compared to baselines, which output language embedding in 3D Gaussians with additional features, like Langsplat. Thanks for the suggestion; we will clarify this in the final version. Both LIFT-GS and LangSplat use render-supervised distillation. However, LangSplat's per-scene optimization method is primarily designed for object segmentation (nouns only), while LIFT-GS is a cross-scene method supporting 3D referential grounding, which involves more complex language. LangSplat and its variants rely on **language-vision feature dot products for open-vocabulary segmentation**: reconstructing 3D RGB and CLIP feature fields and computing dot products with the text embedding for segmentation (**Encoder only**). However, contrastive language-vision models (e.g., CLIP) tend to behave like bag-of-words models [2], making them struggle with even moderately long language expressions with relational structure, such as spatial relationships, which are key to referential grounding. In contrast, LIFT-GS employs structured supervision from MDETR, training a **transformer decoder** over visual and language tokens to directly predict 3D masks and groundings through learned attention and a referential loss. **We illustrate the failure modes of dot-product-based methods on both 3D and 2D with language inputs in the figures at** https://postimg.cc/bd6QKTYk, https://postimg.cc/47Y9ZCxc. We compare LIFT-GS and LangSplat variants on the 3D referential grounding benchmark ScanRefer. 
As LangSplat operates on rendered 2D images instead of 3D, we use a variant, Semantic Gaussians [1], that reports much higher performance and directly segments in 3D; as well as LERF (which both papers compare to). | Method | Acc@0.1 | Acc@0.25 | Acc@0.5| |-----------|---------------|-----------------|-------------| | Semantic Gaussians | 18.2% | 8.2% | 3.0%| | LERF | - | 4.4% | 0.3% | | LIFT-GS | - | 49.7% | 36.4%| --- > Comparison and discussion about SceneVerse [5], D-LISA [6]. Thanks for your suggestions; we will include comparisons and discussions in the final version. The submission includes comparisons to PQ3D (ECCV 2024), which reports stronger performance than both SceneVerse and D-LISA. For open-vocabulary segmentation, we compare against the numbers reported in PQ3D and show that LIFT-GS outperforms it. For the 3D referential grounding task, we made our best effort to retrain baselines under the Sensor Point Cloud setting for a fair comparison, adopting PQ3D, 3D-VisTA, and BUTD-DETR as our main baselines. Despite our best efforts, we weren't able to train PQ3D. However, we believe that 3D-VisTA is the appropriate comparison here, since the requested references perform comparably to that method on mesh point clouds. We compare their reported results as published below. Finally, we emphasize that LIFT-GS is both **model- and data-agnostic**, making it orthogonal to advances in data (e.g., SceneVerse) or architecture (e.g., D-LISA). Stronger models or larger 3D datasets can be seamlessly incorporated into our framework for better performance, as indicated in Figure 6. | Method | SR 0.5@Acc-Multiple | SR 0.5@Acc-Unique| Contribution| |-----------------|-------------------------------|----------------------------| ----------------| | PQ3D |46.2| 78.2 | - | | 3D-VisTA | 39.1| 75.1 | -| | D-LISA | 40 | 75.5 | Architecture| | SceneVerse| 42.7| 77.9| New Data| --- > Computation Resource Comparison. 
The difference in computational cost primarily arises from the **type of point cloud data** used during training. We follow the setting of BUTD-DETR, using sensor point clouds, unprojected from posed RGB-D frames (frame skip of 30, voxel size 0.05 cm), resulting in ~30k points per scene. This setup reflects real-world use cases more closely (L305). In contrast, most other methods, including SceneVerse, use ScanNet mesh segments, which are pre-annotated via face clustering, reducing scenes to ~300–1500 segments (∼100× fewer elements). This significantly lowers computational cost. However, meshing is not viable for real-world or real-time applications: it is computationally expensive, and the mesh segments contain human-annotated information. That said, for fairness, LIFT-GS can also operate on mesh segments, and under this setting, training requires comparable compute (8×A100 GPUs for 2–3 days). --- > Typos: We greatly appreciate these catches and will fix them in the final version. [1] Semantic Gaussians: Open-Vocabulary Scene Understanding with 3D Gaussian Splatting [2] [When and Why Vision-Language Models Behave like Bags-Of-Words, and What to Do About It?](https://openreview.net/forum?id=KRLUvxh8uaX) --- Rebuttal Comment 1.1: Comment: Dear authors, Thanks for providing the rebuttal! My concerns about comparing with more recent state-of-the-art methods and the training cost comparison are mostly resolved. However, since the high-level idea is somewhat similar to LangSplat, which makes the Gaussians predict other features and properties beyond the original ones, I would regard the method as a good direction for 3D language grounding, but the concept itself is not super novel. Therefore, I would like to maintain my score of weak accept.
Summary: The paper presents LIFT-GS, a feedforward 3D vision–language grounding model that accepts a point cloud and a language query as inputs. It converts the point cloud into 3D Gaussians and uses differentiable rendering to supervise training with only 2D losses. The system is distilled from 2D foundation models to achieve 3D mask predictions for language-described objects. Claims And Evidence: - Supervision Without 3D Labels This is not a novel concept nor a new problem; similar ideas have been explored via NeRF-based per-scene optimization (LERF, NeRF-DFF), Gaussian splatting (LangSplat), feedforward transformers (Large Spatial Model, SAB3R), and even 3D LLMs (3D-LLM). Moreover, the choice to use point clouds rather than multi-view images appears as a mere setting difference. It remains unclear why the authors focus exclusively on point clouds. - Technical Novelty The method sections (3.1–3.4) describe a task formulation, standard 2D-supervised training for 3D reconstruction, and a conventional network for lifting point clouds into 3D Gaussians. The contributions in terms of problem and model novelty are not sufficiently distinguished from prior work. A stronger claim on what specific challenges are uniquely solved by focusing on point clouds and 3D Gaussian representations is needed. Methods And Evaluation Criteria: - Input Modality and Framework Choice The paper focuses solely on point cloud input. Since differentiable rendering with known camera poses is well established for RGB images, the choice to restrict the input to point clouds raises questions. It would strengthen the work to clarify why point clouds are preferred in this context and to discuss how the framework might extend (or why it might not extend) to settings with multiple images. Theoretical Claims: The manuscript does not propose any novel theoretical claims; its contributions are primarily empirical and architectural. 
Experimental Designs Or Analyses: Clarify how the model’s performance scales with data volume (how many scenes are used) and whether the absence of RGB images (as inputs) impacts the supervision via differentiable rendering. Supplementary Material: N/A Relation To Broader Scientific Literature: The paper’s positioning relative to prior 3D reconstruction and vision–language grounding work should be discussed in greater depth. The literature shows many similar approaches; hence, a clear delineation of the unique aspects of this work is essential. Essential References Not Discussed: While the references are adequate, the paper does not make a clear case for a distinct problem being solved or a novel method proposed. A more critical comparison to similar recent methods would help clarify its contributions. Other Strengths And Weaknesses: ### Strengths: - The paper leverages a scalable pipeline using differentiable rendering and distillation from 2D foundation models. - It demonstrates state-of-the-art performance on downstream 3D vision–language grounding tasks. ### Weaknesses: - The choice of point clouds over multi-view images is not sufficiently justified. - The overall novelty in both problem setting and technical method is not clearly established beyond what prior works have already addressed. Other Comments Or Suggestions: The authors should clarify why focusing on point clouds is beneficial and what new challenges are addressed by their specific combination of 3D Gaussian representations and 2D supervision. Strengthening the discussion on how their approach differs from existing feedforward or per-scene optimization methods would improve the paper’s impact. Questions For Authors: See the questions above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: First, we would like to thank the reviewer for taking the time and effort to engage with the paper, and we look forward to a productive discussion. Before addressing the individual questions, we would like to clarify the core focus of our work. While our method involves 3D scene reconstruction as an intermediate step, the primary goal of this paper is **3D vision-language grounding**: specifically, localizing 3D instances (i.e., point cloud masks) based on complex language queries. 3D spatial understanding is a known weakness of existing VLMs ([1; Fig. 5]), but it is a critical capability for applications across robotics and embodied AI. For example, it can identify pick-and-place locations in long-horizon manipulation and rearrangement tasks. The key challenge and contribution of our work lies in developing an effective and scalable pipeline for (pre-)training large 3D visual-language grounding models (the **Decoder**), and in doing so via differentiable rendering with 2D supervision (Line 97). --- > similar ideas have been explored via … feedforward transformers (Large Spatial Model, SAB3R) Could the reviewer kindly provide a citation or reference for SAB3R? Despite our best efforts, we have been unable to find a corresponding paper or preprint. We found only an archived draft [3]; the paper link on the project website returns a 404, and the only other reference we found is on a personal website [2], which does not link to any publication or arXiv submission. Moreover, none of the other referenced papers evaluate on any of the standard 3D visual-language grounding benchmarks (ScanRefer, SR3D, NR3D). --- > Why point cloud inputs?... why focusing on point clouds is beneficial As mentioned in the paper, the losses used in LIFT-GS are largely agnostic to inputs and could work with multi-view image inputs. 
The fact that the introduced problem formulation is independent of the input type highlights the primary focus of our paper, which is **a new method for training 3D visual-language grounding models (especially the Decoder)**, and we believe this is a strength of the method. We chose to focus primarily on point cloud inputs, since they are currently one of the most widely adopted 3D representations in robotics and the return type of SLAM and SfM. Point clouds offer flexibility and modularity, as they decouple the 3D reconstruction process from the grounding task, allowing us to leverage a wide range of input sources, including single-view, sparse-view, or long RGB(-D) videos, as well as LiDAR scans or other sensors. This makes point cloud-based methods especially well-suited to diverse real-world conditions. Point cloud reconstructions can be incrementally updated and are largely independent of the specific sensor package (camera, depth sensor, IMU, etc.). These properties make point clouds a practical choice for real-world robotic systems. For better or worse, they have become a de facto representation for 3D language grounding. We will make this point much clearer in the final version. --- > Technical Novelty and compared to prior works. The method sections (3.1–3.4) describe a task formulation, standard 2D-supervised training for 3D reconstruction, and a conventional network for lifting point clouds into 3D Gaussians. The contributions in terms of problem and model novelty are not sufficiently distinguished from prior work. A stronger claim on what specific challenges are uniquely solved by focusing on point clouds and 3D Gaussian representations is needed. Reconstructing 3D Gaussian splats from point clouds is neither the focus nor a claimed contribution of our work. 
Our primary contribution lies in **(pre-)training a large Transformer-based mask decoder for 3D visual-language grounding without requiring 3D labels**, made possible through the use of differentiable rendering. In contrast, the mentioned prior methods—whether based on per-scene optimization or feedforward models—focus on reconstructing the **RGB and feature fields** of the 3D scene and perform language grounding by computing **dot products between language CLIP features and 3D feature fields**. These approaches primarily consist of encoder-only architectures, whereas our work emphasizes the use of a powerful Transformer-based decoder for mask prediction. We discuss it in L96-109, L153-164. Due to the limitations of CLIP embeddings, dot-product-based grounding methods struggle with handling **slightly complex language expressions** and cannot reliably **reason about relative spatial relationships**. This restricts their effectiveness in realistic grounding tasks. Because of character limits, we provide quantitative comparisons and discussions in the reply for Reviewer eYEH. [1] https://open-eqa.github.io/ [2] https://tianx-ia.github.io/ [3] https://web.archive.org/web/20250501000000*/https://uva-computer-vision-lab.github.io/sab3r/static/pdf/Semantic_Augmented_3D_Foundation_Models__Writing.pdf
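To make the contrast concrete, the encoder-only, dot-product grounding paradigm that this rebuttal critiques can be sketched in a few lines. This is an illustrative sketch only; the function name, feature shapes, and threshold are assumptions, not code from any of the cited methods:

```python
import numpy as np

def dot_product_grounding(point_feats, text_emb, threshold=0.5):
    """Encoder-only grounding sketch: each 3D point carries a CLIP-space
    feature, and a mask is obtained by thresholding the cosine similarity
    between each point feature and the query's text embedding."""
    p = point_feats / np.linalg.norm(point_feats, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb)
    sims = p @ t                 # one independent score per point
    return sims > threshold      # boolean mask over the point cloud
```

Because the score factorizes into a per-point dot product, relations between points (e.g., "the chair next to the table") never enter the computation, which is consistent with the bag-of-words failure mode described above; a learned decoder attending jointly over visual and language tokens does not have this structural limitation.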
A Cross Modal Knowledge Distillation & Data Augmentation Recipe for Improving Transcriptomics Representations through Morphological Features
Accept (poster)
Summary: This paper proposes a cross-modal knowledge distillation framework (Semi-Clipped) and a biologically inspired data augmentation method (PEA). The aim is to enhance the biological significance and predictive power of transcriptomic representations using weakly paired multimodal data (microscopy images + transcriptomics). By freezing the pre-trained morphological feature encoder and training a lightweight adapter, unidirectional knowledge transfer from morphological features to transcriptomics is achieved. Meanwhile, PEA enhances data diversity through randomized batch correction techniques, preserving biological information. Claims And Evidence: "PEA preserves biological information": There is a lack of direct evidence for the biological validity of the augmented data (e.g., comparison with real perturbations). Methods And Evaluation Criteria: Semi-Clipped avoids modality drift by freezing the teacher model, which is a reasonable design; PEA transforms batch correction into an augmentation strategy, demonstrating strong innovation. Theoretical Claims: There is no theoretical proof that the optimization objective of modality alignment maximizes cross-modal information transfer. Experimental Designs Or Analyses: Reasonable design: 1. 15 random seed tests to improve statistical significance 2. Controlled variable studies. Problems: lack of hyperparameter experiments (learning rate, batch size, temperature, etc.). Supplementary Material: The supplementary materials include detailed experimental settings, evaluation metric calculations, batch processing descriptions, and other experimental results. Relation To Broader Scientific Literature: 1. Semi-Clipped Framework and Cross-Modal Distillation Research ① Adaptation of CLIP: Semi-Clipped is based on the CLIP framework but solves the problem of requiring a large amount of paired data in traditional CLIP by freezing the pre-trained encoder of the teacher modality (microscopy images). 
This contrasts with methods like XKD and C2KD, which require online adjustment of dual-modal encoders and are prone to modality drift with weakly paired data. ② Advantage of Unsupervised Alignment: Compared to distillation methods like KD and SHAKE that rely on label supervision, Semi-Clipped achieves unsupervised alignment through CLIP loss, improving biological relationship recall by 23% on the HUVEC-KO dataset, validating the limitations of biological labels. ③ Breakthrough in Single-Modality Inference: Unlike multimodal fusion methods such as VICReg and DCCA, this framework allows single-modality (transcriptomics) inference, inheriting the predictive power of the microscopy modality while maintaining the interpretability of transcriptomics. 2. PEA Data Augmentation and Bioinformatics Methods ① Creative Transformation of Batch Correction: Traditional batch correction techniques like TVN are redefined as random augmentation operations. Compared to conventional image augmentations (rotation, scaling), PEA introduces controlled variations while preserving biological signals, improving performance on the LINCS dataset by 69%. ② Innovation in Biological Data Augmentation: Unlike general augmentations such as scVI denoising and MDWGAN-GP, PEA achieves a Spearman correlation of 37.56 on the single-cell dataset SC-RPE1 by randomly controlling sample sampling and reweighting PCA variance, demonstrating its adaptability to complex biological noise. 3. Expansion of Multimodal Learning Paradigms ① Basic Model Adaptation: Adopting the pre-trained adapter approach of Fradkin et al., but avoiding the error accumulation of bidirectional distillation through unidirectional knowledge binding, improving the biological relationship recall of the scGPT adapter by 40%. 
② Balance of Interpretability: Under the transcriptomics interpretability evaluation framework proposed by Bendidi et al., this is the first work to achieve dual optimization of structural integrity (93.15) and biological discovery (39.84 recall) in cross-modal distillation, overcoming the information loss dilemma of traditional distillation methods. Essential References Not Discussed: No Other Strengths And Weaknesses: Advantages: For the first time, batch correction is restructured as data augmentation, addressing the scarcity of biological multimodal data while ensuring performance improvement with interpretability. Disadvantages: 1. The specific definition of weakly paired data is not clearly provided. 2. Computational costs (time complexity, space complexity) are not discussed. 3. Hyperparameter experiments for adapter temperature, learning rate, batch size, etc., are not conducted. Other Comments Or Suggestions: The visualization is not clear, and the y-axis of Figure 1 is not labeled. Questions For Authors: 1. Biological fidelity of PEA: How can it be proven that randomized batch correction does not disrupt key biological signals? Providing an analysis of TF activity changes could strengthen the conclusion. 2. Computational efficiency: What is the training time for Semi-Clipped on a dataset with 1.3 million samples? This information affects the evaluation of the method's practicality. 3. Negative results disclosure: Have other modality combinations (e.g., proteomics + transcriptomics) been attempted? Discussing failed cases could enhance the rigor of the method. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer DZgQ for their review, and for acknowledging the robustness of our experimental design and the innovation behind our approach. We respond below to the comments of the reviewer: - __*“The visualization is not clear, and the y-axis of Figure 1 is not labeled.”:*__ We thank the reviewer for their feedback. For the final manuscript, we will ensure that the visualizations are clear and more zoomed in, and we will add the label to the y-axis of Figure 1 (Recall of known relationships). - __*“The specific definition of weakly paired data is not clearly provided.”:*__ We thank the reviewer for pointing this out. While we have provided short definitions for weakly paired data (L32, L138, L214), we agree that this work would benefit from a clear and self-contained definition of weakly paired data (pairs of samples from two modalities that are not paired at the sample level, but share the same labels and conditions; in our case, the common condition is the same cell type and same perturbation, even if the cell samples differ in other factors). We will provide such a definition in the final manuscript, in the Introduction section as well as in the Experimental setup section (section 4). - __*“Biological fidelity of PEA: How can it be proven that randomized batch correction does not disrupt key biological signals?”:*__ We have already established this in the submitted manuscript, as we proposed two different and complementary ways of evaluating empirically whether the representation learnt using the augmentations is biologically faithful: i) Both in Figure 2.b and Table 1, we evaluate PEA on the transcriptomics interpretability task, which is a published benchmark for predicting gene expression counts of real unseen perturbations, as the reviewer has proposed, through both reconstruction tasks and conservation of structural integrity measures of the data. 
ii) In Figure 3, we focus on downstream tasks, and show that for discovering target relationships, our distilled transcriptomics representations uncover the same relationships the original transcriptomics data uncovers, while improving further by uncovering imaging and other new relationships. This shows that we have evaluated the comprehensiveness of our transcriptomics representations thoroughly, and have shown that they conserve the original transcriptomics information even after distillation. - __*“Hyperparameter experiments for adapter temperature, learning rate, batch size, etc., are not conducted.”:*__ We thank the reviewer for pointing this out. We have added an overview of the hyperparameter experiments and their results to the final manuscript (Results section), which we also show at this anonymized link: https://imgur.com/a/0Oekn3T. - __*“Computational efficiency: What is the training time for Semi-Clipped on a dataset with 1.3 million samples? This information affects the evaluation of the method's practicality.”:*__ We thank the reviewer for pointing this out. While there is no publicly available weakly paired (transcriptomics-imaging) dataset of more than 300k samples, if we stack our existing datasets on themselves to reach 1.3 million samples, Semi-Clipped requires 19 hours of training on a single H100 GPU. Our distillation method requires extremely minimal computing resources, as it makes use of trainable adapters and frozen backbones, while the additional PEA data augmentation only multiplies the distillation training time by 1.3×. We will update our final manuscript with detailed benchmarking of the time and computing resources of our approach. - __*“Negative results disclosure: Have other modality combinations (e.g., proteomics + transcriptomics) been attempted?”:*__ We thank the reviewer for proposing this idea. 
While other modalities could potentially add interesting information, these modalities (e.g., proteomics) suffer from even lower availability of perturbational data than transcriptomics, and even less so in the case of weakly paired data; thus, no, other combinations have not been attempted.
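For readers following the thread, the frozen-teacher, CLIP-style distillation objective discussed in this review and rebuttal can be illustrated with a minimal sketch. Names, dimensions, and the temperature are assumptions for illustration; in Semi-Clipped the teacher embeddings would come from the frozen morphology encoder, and only the transcriptomics adapter would receive gradients:

```python
import numpy as np

def clip_distillation_loss(student, teacher, temperature=0.07):
    """Symmetric InfoNCE between weakly paired batches: the i-th student
    (transcriptomics) embedding should match the i-th teacher (morphology)
    embedding and repel the rest of the batch."""
    s = student / np.linalg.norm(student, axis=1, keepdims=True)
    t = teacher / np.linalg.norm(teacher, axis=1, keepdims=True)
    logits = s @ t.T / temperature              # (B, B) scaled cosine similarities
    diag = np.arange(len(s))

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)    # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[diag, diag].mean()         # diagonal entries = correct pairs

    return 0.5 * (xent(logits) + xent(logits.T))
```

Because the teacher is frozen, the gradient of this loss would flow only into the student side, which is the mechanism the paper credits for preventing student-to-teacher drift.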
Summary: Understanding how cells respond to stimuli such as genetic perturbations or chemical compounds forms a crucial part of drug discovery. This work proposes a method to enrich representations of transcriptomic data with paired morphological data. Measuring paired transcriptomic and morphological features of cells is complex, and hence datasets of this type are rare, even when considering datasets in which these measurements are only weakly paired by sharing the same metadata attribute, such as the applied perturbagen. This necessitates a method that enriches transcriptomic representations with morphological features using only a limited number of paired measurements. This work proposes using a CLIP-style loss, applied to the morphological representations from a frozen pretrained model, and transcriptomic representations from a pretrained transcriptomic model with an additional trainable adapter. In doing so, this work demonstrably shows that this method, SemiClipped, allows morphological features to enrich the transcriptomic representations, and that this particular method outperforms other cross-modal distillation methods. Crucially, this allows the model to retrieve a greater number of known biological relationships from transcriptomic data, whilst also maintaining a high level of interpretability at the level of genes. In addition to proposing SemiClipped, this work also proposes a novel augmentation method for transcriptomic data. Transformations which apply batch correction, in which samples are aligned to unperturbed control measurements, are applied to transcriptomic data to yield augmented samples and simulate greater variance in the changes in experimental conditions under which transcriptomic data may be measured. This augmentation is shown to improve performance in biological retrieval tasks and maintains strong interpretability, a weakness of more common augmentation methods. 
## Update after rebuttal After reading all reviews and rebuttals, I have decided to leave my score unchanged. I welcome the additional figures and tables the authors have shared during rebuttal. To raise my score from a 4 to a 5, my assessment of the overall significance of the paper would have to change such that I felt this paper would have a major impact on the community. This is more a question of the scope of the paper, and hence has not changed during rebuttal. Claims And Evidence: There are three key claims made in the article, namely: 1. that SemiClipped, the proposed method of cross-modal distillation, provides SOTA performance in the data-limited regime 2. that by freezing the encoder of the teacher modality (in this case morphological features) they prevent drift from student to teacher. 3. that they devise a novel biologically inspired data augmentation, PEA, that is capable of improving cross-modal distillation, and outperforms existing augmentations on benchmarks related to uncovering known biological relationships, and transcriptomic interpretability. Each of these claims is well supported by evidence. Three datasets were used for OOD evaluations, reflecting generalisation to cell types, experimental conditions and gene expression quantification technologies. Each of these datasets represents a distribution-shift that is expected between training and inference in production, hence providing a realistic evaluation of performance. The claim that SemiClipped provides SOTA performance is demonstrated throughout the paper, as it is shown to perform better than a number of strong baseline models from the literature. 
By comparing, in Figure 6, the biological relationships that are recalled by a unimodal pretrained transcriptomic model with those recalled by a transcriptomic model trained via cross-modal distillation, the authors demonstrate clearly that i) cross-modal distillation can allow transcriptomic models to recall biological relationships typically found in microscopy representations ii) that current methods recall more relationships than unimodal models, but can lose some relationships captured before distillation and iii) that SemiClipped, which freezes the microscopy imaging encoder, recalls the most relationships overall and from those recalled by the unimodal transcriptomic model. In combination with Figure 1, which shows that including a trainable adapter for both the imaging and transcriptomic models leads to poorer biological recall than using a frozen imaging module and trainable adapter for transcriptomics only, this supports the claim that SemiClipped provides SOTA performance and prevents drift from student to teacher. Furthermore, Figure 2 demonstrates clearly that PEA i) provides consistent superior performance in the recall of biological relationships, and ii) in the transcriptomic interpretability of the representations for a number of benchmark augmentations in isolation and combination. The evaluation metrics for the biological recall task and transcriptomic interpretability are well explained in the Appendices. Including both of these evaluations provides a clear insight into how cross-modal distillation affects performance and interpretability. Methods And Evaluation Criteria: As mentioned above, the work focuses on two tasks, recall of known biological relationships and transcriptomic interpretability. 
The biological recall task is motivated by the problem at hand - in drug discovery a model that can predict changes in biological relationships from transcriptomic data is crucial for the automation of assessing the impact of the many compounds that exist in the space of all possible small molecules. This work focusses on cross-modal distillation with microscopy imaging, which provides rich insight into cell state, but lacks gene level interpretability. This motivates the transcriptomic interpretability task. Additionally, by including three benchmark datasets that simulate real changes that one would expect between model training and deployment (changes in cell types, experimental conditions and gene expression quantification technologies) the authors provide a realistic measure of model performance. Theoretical Claims: This is not applicable for this work. Experimental Designs Or Analyses: I specifically checked the experiment for which results are shown in Figure 2, since these results support the key claims of this work. By using the three OOD evaluation datasets there were no concerns of data leakage between training and inference, which made comparisons between the proposed model and baseline models fair. I therefore see no issues with the design of the experiments used to form this figure and have confidence in these results. Supplementary Material: I reviewed the appendices describing the evaluation metrics and batch correction techniques used to form PEA. Relation To Broader Scientific Literature: It is well known that microscopy imaging can be combined with deep learning to extract useful features that describe cell phenotype, for example see [1]. However, while these models can be used to infer relationships between genes via comparing embeddings of cells perturbed by different gene KO, these models do not provide direct transcriptomic interpretability in the way that a model utilising transcriptomic data would. 
By combining these modalities, the growing power of microscopy models can be leveraged to create strong transcriptomic prediction models. [1] Kraus, O., et al. Masked autoencoders for microscopy are scalable learners of cellular biology. In CVPR, 2024. Essential References Not Discussed: I think it is worth mentioning [1] for demonstrating that encoders trained in SSL fashion with multi-modal data can outperform unimodal encoders in the limited data regime, with imaging and tabular data (which is somewhat related to transcriptomic data). [1] Hager, P., Menten, M.J. and Rueckert, D., 2023. Best of both worlds: Multimodal contrastive learning with tabular and imaging data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 23924-23935). Other Strengths And Weaknesses: This is a strong paper, with high-quality authorship, clear and concise results, and well-thought-out experiments. The novel augmentation method for transcriptomics data could have impact alone, and it would be interesting to see how this could be adapted outside of a preclinical setting. The main weakness of this work is the presentation of some of the results; there are a few cases where figures are not well labelled. This is overcome by the clarity of the main text, but is a point that could be improved on. Other Comments Or Suggestions: - Add y-labels to Figure 1 and Figure 2. - Figure 3 took some time to parse; could this data be more clearly represented in a tabular format? I suppose the quantity of interest to highlight is the number of additional relationships gained via cross-modal distillation, and how many have been lost, and focussing on just these two may be enough to demonstrate the success of the method in a more immediate manner. Questions For Authors: - Can you foresee the impact of PEA beyond transcriptomics data used in a preclinical setting, where datasets do not typically have a well defined notion of unperturbed controls? 
Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank reviewer T8LP for all their comments, and for their acknowledgement of the quality of the paper and the contribution. We answer the reviewer's questions and suggestions below:

- __*“Additional reference”:*__ We thank the reviewer for pointing us toward this publication. As tabular data shares many common points with transcriptomics data, this merits an additional discussion of this paper in the final manuscript.
- __*“there a few cases where figures are not well labelled”:*__ We agree with the reviewer, and while this was a limitation of the 8-page format, we will take advantage of the extra space in the final manuscript to ensure that all labels and captions are self-contained and self-explanatory.
- __*“Add y-labels to Figure 1, and Figure 2.”:*__ We have added y-labels to both figures in the final manuscript.
- __*“Figure 3 took some time to parse, could this data be more clearly represented in a tabular format?”:*__ We thank the reviewer for this suggestion, which could further clarify the impact of our method. We have added to the final manuscript a new table that quantifies, for each approach, the gain and loss of relationships through distillation compared to other approaches. The added table is available at this anonymized link (https://imgur.com/a/qZB7q9l). We see that Semi-Clipped recalls the highest number of known relationships, preserves the original transcriptomic information best, and minimizes the loss of relationships.
- __*“Can you foresee the impact of PEA beyond transcriptomics data used in a preclinical setting, where datasets do not typically have a well defined notion of unperturbed controls?”:*__ PEA’s augmentation principle is especially promising for biological data acquired in batches with controls, such as proteomics and metabolomics, where batch correction techniques are well established to mitigate technical variability.
Although its impact may be more limited in contexts lacking a clear notion of unperturbed controls or where robust domain adaptation methods are already well established, PEA can still broaden the scope of existing solutions. PEA’s strategy of leveraging small and stochastic technical variations could extend and be adapted to specific post-processing methodologies in areas such as signal processing (ECG, wearable devices) or astronomy. We will add a discussion on this question to the conclusion of the final manuscript.
Summary: The paper introduces Semi-Clipped, a method that transfers morphological information from microscopy images to transcriptomic data through cross-modal knowledge distillation. The authors adapt the CLIP loss by freezing a pretrained teacher encoder (for images) and learning a trainable adapter for transcriptomics. The authors also introduce PEA (Perturbation Embedding Augmentation), a data augmentation technique based on batch correction methods that introduces biologically plausible variation into the transcriptomic profile. Experiments on multiple out-of-distribution datasets (e.g., HUVEC-KO, LINCS, and SC-RPE1) are presented. The paper claims improved performance and enhanced biological signal relative to unimodal baselines, existing cross-modal distillation approaches, and augmentation techniques. Claims And Evidence: - The claim that Semi-Clipped achieves improved cross-modal knowledge distillation is convincingly demonstrated through comprehensive experiments and comparisons with multiple competitive baseline methods (including KD, SHAKE, VICReg, and others). - PEA augmentation is interesting and generally improves the performance of the multiple methods tested, which the authors validated through statistically significant improvements. Methods And Evaluation Criteria: - Although Figure 1 motivates Semi-Clipped over vanilla CLIP, the paper could benefit from including the CLIP loss in the distillation benchmark of Figure 2(a). The reviewer is concerned that Semi-Clipped outperforms CLIP in a context-dependent way, as scGPT + Tx Adapter + Image Adapter is omitted from Figure 1. - There is a potential distribution shift issue, since scVI was pretrained on single-cell data while this study uses arrayed bulk sequencing data. Theoretical Claims: N/A Experimental Designs Or Analyses: The choice of benchmark metrics is interesting. The experimental designs and analyses appear sound.
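The frozen-teacher CLIP-style objective described in the summary can be sketched in a few lines. The following is a minimal NumPy illustration of that description (frozen image-teacher embeddings, trainable transcriptomics adapter producing student embeddings); the function name and temperature value are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def semi_clipped_loss(student_emb, teacher_emb, temperature=0.07):
    """CLIP-style symmetric InfoNCE loss between trainable student
    embeddings (e.g., from a transcriptomics adapter) and frozen teacher
    embeddings (e.g., from a microscopy encoder). Row i of each matrix
    is assumed to form a positive pair."""
    # L2-normalise both sets of embeddings
    s = student_emb / np.linalg.norm(student_emb, axis=1, keepdims=True)
    t = teacher_emb / np.linalg.norm(teacher_emb, axis=1, keepdims=True)
    logits = s @ t.T / temperature  # (n, n) cosine-similarity logits
    n = logits.shape[0]

    def xent(l):
        # cross-entropy with targets on the diagonal (matched pairs)
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[np.arange(n), np.arange(n)].mean()

    # average both matching directions (student->teacher, teacher->student)
    return 0.5 * (xent(logits) + xent(logits.T))
```

In the setting the review describes, only the adapter producing `student_emb` would receive gradients; the teacher embeddings stay fixed.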
Supplementary Material: The appendix has been reviewed; no additional supplementary materials were attached to the paper. Relation To Broader Scientific Literature: The paper adequately discusses relevant works in the field. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: The paper adequately covers relevant literature but contains numerous citation errors; the reviewer did not do an exhaustive check, but the following are obviously wrong: - Geneformer is incorrectly cited as "Chen, T. K., Wang, Z., Li, X., Li, Y., and Huang, K. Geneformer: A foundation model for generalizable gene expression learning. bioRxiv, pp. 2023.01.14.524028, 2023." instead of "Theodoris, Christina V., Ling Xiao, Anant Chopra, Mark D. Chaffin, Zeina R. Al Sayed, Matthew C. Hill, Helene Mantineo et al. "Transfer learning enables predictions in network biology." *Nature* 618, no. 7965 (2023): 616-624." - scGPT is wrongly cited as "Wang, Z., Song, B., Zhu, T., Li, B., Hu, Q., Tao, X., Chen, F., Wang, L., and Xie, P. scGPT: Transformer-based single-cell RNA-seq data analysis. bioRxiv, pp. 2023.02.24.529891, 2023.", which should be "Cui, Haotian, Chloe Wang, Hassaan Maan, Kuan Pang, Fengning Luo, Nan Duan, and Bo Wang. "scGPT: toward building a foundation model for single-cell multi-omics using generative AI." Nature Methods 21, no. 8 (2024): 1470-1480." - Similar errors appear for scBERT, Drug-seq, and others. Questions For Authors: Can you please share your thoughts on the previous sections? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank reviewer rL4s for their review, and for acknowledging the robustness demonstration of our claims. We respond below to all their comments:

- __*“The reviewer is concerned that Semi-Cipped outperforms Clip in a context dependent way”:*__ We thank the reviewer for pointing this out. To improve the demonstration of our claims, we add to Figure 1 of the final manuscript the performance of scGPT + Tx Adapter + Image Adapter. The revised figure can be found at this anonymized link (https://imgur.com/a/CDmKd3v). This result shows that Semi-Clipped still outperforms CLIP even in the scGPT context. To further illustrate this, we will add CLIP as an additional approach in Figure 2.a of the final manuscript, to show more clearly how Semi-Clipped compares to vanilla CLIP and other approaches.
- __*“There's a potential distribution shift issue since scVI was pretrained on single-cell data, while this study uses arrayed bulk sequencing data”:*__ We thank the reviewer for pointing this out, and apologize for the confusion in the submitted manuscript. There is no distribution shift issue, as we pretrain scVI on arrayed bulk sequencing data (the HUVEC-CMPD dataset), which is also used for the distillation training. While scVI was originally developed for single-cell data, recent published benchmarks (e.g., [Bendidi et al. 2024](https://arxiv.org/abs/2410.13956)) have shown that scVI still outperforms all models even when trained on bulk sequencing data, which motivated our choice. We have updated the final manuscript to clarify this point, given the additional page of main content available.
- __*“The paper adequately covers relevant literature but contains numerous citation errors”:*__ We sincerely thank the reviewer for pointing this out. We have now fixed all errors in the paper’s citations; this will be reflected in the final manuscript.
Summary: This paper aims to extract representations of transcriptomics by distilling knowledge from microscopy images. The authors introduce (1) Semi-Clipped, for cross-modal distillation from pretrained foundation models, and (2) perturbation embedding augmentation, to help transcriptomics data generalize. Claims And Evidence: The concept of 'Semi-Clipped' is not essentially different from vanilla CLIP; it simply fixes the pre-trained microscopy image features and uses weakly paired samples. Methods And Evaluation Criteria: One significant concern: although microscopy images and transcriptomics data may share information, their feature spaces may not be highly overlapping. In other words, these two modalities carry complementary information and cannot be mutually replaced. Forcing the transcriptomics data to extract only the features that are similar to the image data may cause the loss of information distinct to transcriptomics. This paper doesn't show empirically that the learnt representation of transcriptomics is relatively comprehensive. Theoretical Claims: No theoretical contribution is made in this work. Experimental Designs Or Analyses: * No t-SNE to show the distribution of learnt features. * Lacks an ablation study where teacher representations are not frozen. Supplementary Material: The supplementary material provides more detailed implementation and more results. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: [1] Spatially Resolved Gene Expression Prediction from Histology Images via Bi-modal Contrastive Learning. This paper may be relevant, as it uses CLIP for spatial transcriptomics learning. Other Strengths And Weaknesses: Well written and easy to follow, yet the methodological contribution is limited. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank reviewer rGLV for their review. We address the reviewer's comments below, in order to clarify the contribution of our submission:

- __*"This paper doesn't show empirically that the learnt representation of transcriptomics is relatively comprehensive.":*__ We respectfully disagree. In the submitted manuscript, we proposed two different and complementary ways of empirically evaluating whether the representation of transcriptomics learnt through our approach is comprehensive: i) In both Figure 2 and Table 1, we evaluate on the transcriptomic interpretability task, a published benchmark for evaluating whether the representation of transcriptomics is comprehensive and includes all information of the original transcriptomics data, through both reconstruction tasks and structural measures of the data. ii) In Figure 3, we focus on downstream tasks and show that, for discovering target relationships, our distilled transcriptomics representations uncover the same relationships as the original transcriptomics data, while improving on it by uncovering imaging and other new relationships. Additionally, reviewer T8LP describes our evaluation as _“that SemiClipped, which freezes the microscopy imaging encoder, recalls the most relationships overall and from those recalled by the unimodal transcriptomic model.”_. This shows that we have evaluated the comprehensiveness of our transcriptomics representations thoroughly, and have shown that they conserve the original transcriptomic information even after distillation.
- __*“t-SNE for representation distribution”:*__ We add at this anonymous link (https://imgur.com/a/GCi2FYj) a UMAP projection of the learnt features of the distillation model (bottom), compared to the transcriptomics-only features (top). We compare both models on batch effect reduction (left; mixed clusters are better) and perturbation separation (right; separated clusters are better).
While both approaches are good at reducing batch effect, the distillation approach is markedly better at separating different perturbations. This is also reflected in the NMI metric for clustering the different perturbations with K-Means (0.128 for transcriptomics only vs. 0.481 for the distillation model). We had previously hesitated to add UMAP/t-SNE views due to their anecdotal nature; we will however add this figure to the final manuscript for enhanced clarity.

- __*"Lack of ablation study when teacher representations are not frozen.":*__ We respectfully disagree. In the submitted manuscript, we show in Figure 1 that unfreezing the teacher representations by adding an Image Adapter leads to a drop in performance, likely due to modality drift. We add an additional evaluation of scGPT used with an Image Adapter in our response to reviewer rL4s for more comprehensiveness.
- __*“Proposed reference”:*__ We thank the reviewer for the reference; we will include it in the related works section (Biologically Relevant Representations) of the final manuscript.
- __*“The methodology contribution is limited”:*__ We wish to point out that, in addition to being the first to show that distillation from imaging to transcriptomics is possible, we have proposed PEA, a completely novel approach for data augmentation on omics data that preserves biological information for multimodal learning, tackling one of the main problems of the omics field: limited paired data and a lack of biology-preserving data augmentations. We also want to point out that reviewer rL4s described our claim and approach as *“PEA augmentation is interesting and generally improves the performance”*, while reviewer T8LP describes our approach as *“This is a strong paper, with high quality authorship, clear and concise results, and well thought out experiments.
The novel augmentation method for transcriptomics data could have impact alone.”* and *“By leveraging these modalities together, the growing power of microscopy models can be leveraged to create strong transcriptomic prediction models.”*. Reviewer DZgQ described our work as *“For the first time, batch correction is restructured as data augmentation, addressing the scarcity of biological multimodal data while ensuring performance improvement with interpretability.”*. As this is an application-track submission for ICML, where submissions are judged on their real-world relevance and the robustness of their claims and experiments, we hope this clarifies the reviewer's assumptions about our work.
OD³: Optimization-free Dataset Distillation for Object Detection
Reject
Summary: This work presents an optimization-free dataset distillation framework for object detection. It addresses the challenges of training large neural networks on large-scale datasets by synthesizing compact datasets. The framework consists of two main stages: candidate selection, where object instances are iteratively placed in synthesized images, and candidate screening, where a pre-trained observer model removes low-confidence objects. Experiments on MS COCO and PASCAL VOC datasets with compression ratios ranging from 0.25% to 5% show that the proposed method outperforms existing methods, achieving significant improvements in accuracy. ## update after rebuttal While many studies report AP50 for the VOC dataset, mAP remains a valid and comprehensive metric for evaluating object detection models. The authors did not address my concern regarding why mAP is higher than mAP50 on VOC. The authors responded that mAP is redundant and AP50 is the standard practice for VOC, but this does not directly resolve my question. Additionally, soft labels at the feature level would require significant storage. Traditional soft labels for images already demand substantial storage—often exceeding the storage of the entire dataset itself. Thus, applying soft labels at the feature level further exacerbates the unfairness in dataset distillation. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: The theoretical analysis of Theorem 2 shows that the proposed method maintains information density and diversity. Experimental Designs Or Analyses: 1. The comparison on the benchmarks is unfair for the baseline methods, including random, uniform, k-center, herding, and DCOD. All results are directly copied from DCOD (NeurIPS 2024). These methods use YOLOv3 as the base detector while the proposed method uses Faster R-CNN 50, maybe even with FPN. Therefore, the experimental comparison in Table 1 and Figure 1 is unfair. 2.
In Table 2, for the PASCAL VOC dataset, why do the authors only present the results of the proposed method OD3 and not include the baselines? Supplementary Material: The supplementary contains some examples of the condensed dataset and the code. Relation To Broader Scientific Literature: This work is highly related to the area of data efficiency and extends dataset distillation into object detection, beyond classification. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The proposed method is optimization-free. By avoiding computationally intensive iterative optimization processes, the proposed method is more efficient. The data generation process is highly efficient, as it does not require training, and the main time overhead comes from the screening by the observer model. 2. The distilled datasets can generalize well across different object detection architectures. Weaknesses: 1. The experimental comparison is unfair, as discussed in the comments under "Experimental Designs Or Analyses". 2. In Table 2, the results are very surprising. The proposed method achieves impressive mAP on the PASCAL VOC dataset. When IPD=0.5%, mAP is larger than mAP50, which is the opposite of the traditional observation; usually, mAP is much smaller than mAP50. 3. Lack of an ablation study on soft-label generation. 4. What about the storage of the soft labels? In many dataset distillation works for classification, researchers found that soft labels take too much storage. If the structure of the detector is different, how do we use the feature-based soft labels? Other Comments Or Suggestions: 1. The algorithm needs to be clearer: how can a bounding box $b_{ir}$ add $\ell_{ir}$? $\hat{x}$ is not defined. 2. In Eq. (2), why can the proposed method represent information density? Questions For Authors: 1. The function 'a' in Eq. (2) and Eq. (3) conflicts. 2.
In Eq. (4), the Information Diversity is the number of distinct objects on the canvas. How do the authors define distinct objects? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your valuable suggestions and for giving us the opportunity to address the points you raised!

>**Q1: The experiment comparison is unfair for the baseline methods (random, uniform, k-center, herding, and DCOD).**

We appreciate this concern. Below are our own runs of the core-set selection methods using our framework with 1% IPD:

| Method | mAP | mAP@50 | mAP@75 | mAP$_{s}$ | mAP$_{m}$ | mAP$_{l}$ |
|----------|--------|--------|--------|--------|--------|--------|
| **Random** | 8.8 | 19.6 | 6.8 | 3.6 | 10.0 | 11.3 |
| **Herding** | 8.3 | 18.0 | 6.5 | 4.4 | 10.1 | 9.8 |
| **K-center** | 9.20 | 20.20 | 7.10 | 4.30 | 10.20 | 12.30 |
| **Uniform** | 8.5 | 19.4 | 6.1 | 3.1 | 9.7 | 10.8 |

**DCOD [1]**: Since their codebase is not publicly available, we contacted the authors to share the codebase or the datasets to facilitate better comparison with their method and exact reproduction of their results. However, they have declined to share them. Also, YOLOv3-SPP (their baseline) outperforms Faster R-CNN 101 (our observer baseline), with reported performance of 44.3% vs. 39.8%, respectively. We believe that the comparison remains fair, as our approach is evaluated under a more challenging setting (lower-performing baseline).

>**Q2: For the PASCAL VOC dataset, why do the authors only present the results of the proposed method and not include the baselines?**

For the PASCAL VOC dataset, we are unable to directly compare against DCOD since they used COCO metrics instead of the standard PASCAL metrics, which does not facilitate a direct comparison with training on the uncompressed dataset. However, we will include the results for the baseline core-set selection methods as follows (1.0% IPD):

| Method | mAP@50 |
|-------|------|
| **Random** | 38.20 |
| **Herding** | 35.00 |
| **K-center** | 36.60 |
| **Uniform** | 37.25 |

>**Q3: In Table 2, the results are very surprising.
Usually, the mAP metric is much smaller than mAP50.**

We appreciate the reviewer's keen observation. Upon further inspection, the reported metrics are redundant, and we will keep the mAP@50 column, which reflects the default VOC metric using the area method.

>**Q4: Lack of ablation study on soft-label generation.**

As suggested, we will add the below results of the soft-label ablation:

| Method | mAP | mAP@50 | mAP@75 | mAP$_{s}$ | mAP$_{m}$ | mAP$_{l}$ |
|-----------|------|--------|--------|------|------|------|
| **Without soft label** | 17.1 | 31.4 | 17.0 | 6.0 | 19.0 | 24.1 |
| **With soft label** | 22.5 | 39.6 | 22.9 | 10.6 | 28.0 | 29.8 |

>**Q5: How is the storage of soft labels handled? In many dataset distillation works for classification, researchers found that soft labels require significant storage.**

Following the approach of RDED [2], we avoid explicitly storing soft labels by using a knowledge distillation framework with a teacher model.

>**Q6: The algorithm needs clarification. How can a bounding box $b$ add $\ell$? Also, $\hat{x}$ is not defined in the algorithm.**

We understand the confusion and will re-define the bounding box modification as follows: $b' = (x - \ell_x, y - \ell_y, w + 2\ell_x, h + 2\ell_y)$. As for $\hat{x}$, we will define it as the final reconstructed image after adding the objects to the canvas.

>**Q7: In Eq. (2), how does the proposed method represent information density?**

The object confidence score normalized by area effectively captures the information density of the canvas. Based on this formulation, we select objects with higher confidence scores. A higher value indicates that the canvas contains more object-related information, while the object size ensures that confidence is measured per unit area.

>**Q8: The function 'a' in Eq. (2) and Eq. (3) conflicts.**

Originally, we used a different symbol in Eq. (3), so to resolve the conflict, we will change 'a' in Eq. (2) to uppercase or adopt an alternative notation for clarity.
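To make the rebuttal's redefinition concrete, here is a minimal Python sketch of the context expansion $b' = (x - \ell_x, y - \ell_y, w + 2\ell_x, h + 2\ell_y)$ together with a confidence-per-area density score in the spirit of the Q7 answer; the function names, the margin fraction, and the clamping to image bounds are illustrative assumptions, not the paper's implementation:

```python
def expand_box(box, img_w, img_h, margin_frac=0.1):
    """Context expansion b' = (x - lx, y - ly, w + 2*lx, h + 2*ly).
    Here the margins lx, ly are taken as a fraction of the box size
    (illustrative choice), and the result is clamped to the image."""
    x, y, w, h = box
    lx, ly = margin_frac * w, margin_frac * h
    nx, ny = max(0.0, x - lx), max(0.0, y - ly)
    nw = min(img_w - nx, w + 2 * lx)
    nh = min(img_h - ny, h + 2 * ly)
    return (nx, ny, nw, nh)

def density_score(confidence, box):
    """Object confidence normalised by box area: higher values mean
    more object-related information per unit area of canvas."""
    _, _, w, h = box
    return confidence / (w * h)
```

Under this reading, candidate selection would prefer objects with a high `density_score`, while `expand_box` supplies the extra visual context around each pasted object.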
>**Q9: In Eq. (4), the Information Diversity metric counts distinct objects on the canvas. How are distinct objects defined?**

As objects are added to the canvas, the method tracks each object $o_r$ separately, even if it intersects with other objects. This ensures that distinct objects are accounted for. We appreciate your thoughtful feedback and hope these clarifications address your concerns.

**References:**

[1] Qi, Ding, Jian Li, Jinlong Peng, Bo Zhao, Shuguang Dou, Jialin Li, Jiangning Zhang, Yabiao Wang, Chengjie Wang, and Cairong Zhao. "Fetch and forge: Efficient dataset condensation for object detection." Advances in Neural Information Processing Systems 37 (2024): 119283-119300.

[2] Sun, Peng, Bei Shi, Daiwei Yu, and Tao Lin. "On the diversity and realism of distilled dataset: An efficient dataset distillation paradigm." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9390-9399. 2024.

---

Rebuttal Comment 1.1: Comment: Thanks for the detailed response. The response only addressed part of my concerns.

1. For Q3, what is the meaning of "the reported metrics are redundant"? mAP is also an important metric for evaluating detection performance beyond mAP50.
2. The soft label claimed in this paper is actually the features, not the labels.

---

Reply to Comment 1.1.1: Comment: We thank you for your response! We would like to clarify a few points regarding the concerns raised.

**Q1: For Q3, what is the meaning of "the reported metrics are redundant"? mAP is also an important metric for evaluating detection performance beyond mAP50.**

The official Pascal VOC evaluation protocol strictly reports box AP at 50% IoU (mAP@50) rather than the COCO-style mAP averaged over IoUs from 0.5 to 0.95. The VOC dataset is relatively simpler than COCO, as it typically contains fewer objects per image and those objects tend to be larger.
In this context, evaluating detection performance using AP at 50% IoU (mAP@50) is more appropriate and meaningful. Consequently, most prior studies on VOC do not report the COCO-style mAP (averaged over IoUs from 0.5 to 0.95), as it offers limited practical value for this dataset. **Q2: The soft label claimed in this paper is actually the features, not the labels.** We have experimented with both traditional soft labels and feature-based/channel-wise soft labels and found that the current approach yields the most significant performance improvement for object detection tasks. We have discussed and elaborated on this in detail in Section 3.2 from lines 219 to 266. We hope this clarification addresses the concerns raised, and we appreciate the constructive feedback that helps improve our work.
Summary: The work proposes a new framework called OD3 (Optimization-free Dataset Distillation for Object Detection), specifically designed for dataset distillation in object detection tasks. It aims to reduce training time and computational resources by selecting and generating a high-quality compact dataset from a large dataset to replace it. The distillation process consists of two stages: 1. Candidate Selection, where potential candidates with high information density and diversity are chosen, representing the most valuable information in the original dataset; 2. Candidate Screening, where the selected candidates are further screened to ensure that the final generated dataset effectively represents the spatial and semantic characteristics of the original dataset. Information density and information diversity are used as selection criteria to ensure that the selected samples maximally represent the original dataset. SA-DCE is proposed to address the scale variation and spatial layout issues specific to object detection. Experiments were conducted on two widely used datasets, MS COCO and PASCAL VOC. Claims And Evidence: The OD3 framework distills a small-scale dataset from a large dataset while maintaining model performance. In the experimental section, experiments were conducted with different compression ratios, the SA-DCE module, and various overlap thresholds. The experiments demonstrate that detectors trained with the OD3 algorithm outperform other algorithms, essentially proving the argument. In addition, the right column at line 102 mentions that this work does not sacrifice performance significantly, but the experimental results suggest the performance drop is somewhat significant. Methods And Evaluation Criteria: OD3 is a novel optimization-free dataset distillation framework designed specifically for object detection. It involves two main stages: candidate selection and candidate screening.
In the first stage, object instances are randomly placed on a blank canvas, ensuring minimal overlap. In the second stage, a pre-trained observer model evaluates and filters out low-confidence objects. The framework also uses scale-aware dynamic context extension to enhance small object detection by expanding the bounding areas based on object size. This approach allows OD3 to generate compact, high-fidelity datasets efficiently, significantly reducing the dataset size while maintaining or even improving detection performance compared to existing methods. Theoretical Claims: This work contains few proofs or theoretical claims. Experimental Designs Or Analyses: 1. How does this method perform on the latest transformer-based detectors? 2. Additional ablation experiments regarding the confidence threshold $\eta$ are missing. 3. The dataset distillation technique claims to speed up training while maintaining model performance. Could the authors provide a performance comparison with models trained on uncompressed datasets? Supplementary Material: Supplementary appendix and code have been reviewed. The appendix of the paper provides comprehensive details and supporting results, including the distribution of images and objects in the distilled datasets, various ablation studies to analyze the impact of different components and settings on OD3's performance, and a proof demonstrating the effectiveness of the iterative add-then-remove process in maintaining high object representation and information density. This additional information reinforces the validity and robustness of the OD3 framework. Relation To Broader Scientific Literature: The main contribution of the OD3 framework is closely related to the broader scientific literature, addressing the specific needs of dataset distillation for object detection, a task often overshadowed by image classification in previous work.
Traditional methods heavily rely on optimization, while OD3 introduces an optimization-free approach, leveraging the concepts of information density and diversity from active learning to ensure a representative and compact dataset. Additionally, the introduction of Scale-Aware Dynamic Context Expansion (SA-DCE) addresses the challenges of scale variation in object detection, which were previously tackled by multi-scale techniques. Essential References Not Discussed: All key references have been cited. Other Strengths And Weaknesses: The proposed method significantly improves training efficiency but requires further experiments to confirm that model performance is maintained before and after distillation. In the dataset distillation, Faster R-CNN 101 was used as the observer model; however, the mAP of the model trained on the distilled dataset was only 30.1, which does not even reach the accuracy of the observer model and does not clearly demonstrate the significance of the distilled data. Moreover, the performance of Faster R-CNN 101 and Faster R-CNN 50 on COCO is not reported in the paper. Other Comments Or Suggestions: No other comments. Questions For Authors: No other questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your thorough review and for recognizing our contributions!

>**Q1: How does this method perform on the latest transformer-based detectors?**

We agree that evaluating performance on transformer-based detectors is important. To further demonstrate the generalizability of our approach, we have conducted additional experiments and included the results accordingly.

| IPD | Observer Model | Target Model | mAP | mAP@50 | mAP@75 |
|------|------------------|--------------|------|--------|--------|
| 0.25% | Deformable DETR | Faster RCNN | 11.90 | 22.60 | 11.10 |
| 0.5% | Deformable DETR | Faster RCNN | 16.20 | 29.50 | 16.00 |
| 1.0% | Deformable DETR | Faster RCNN | 22.00 | 38.00 | 22.90 |
| 0.5% | DETR | Faster RCNN | 12.10 | 26.40 | 9.40 |
| 1.0% | DETR | Faster RCNN | 16.40 | 33.90 | 13.90 |

>**Q2: Additional ablation experiments regarding the confidence threshold are missing.**

Thank you for your suggestion. The ablation study on the confidence threshold is indeed provided in Table 8 of the appendix.

>**Q3: The dataset distillation technique claims to speed up training while maintaining model performance. Could the authors provide a performance comparison with models trained on uncompressed datasets?**

Certainly! The following is the performance reported by mmdetection for the models used, on the full version of MS COCO:

| Model | mAP | mAP@50 | mAP@75 |
|--------------------|------|--------|--------|
| Faster R-CNN 50 | 38.4 | 59.0 | 42.0 |
| Faster R-CNN 101 | 39.8 | 60.1 | 43.3 |
| RetinaNet 50 | 37.4 | 56.7 | 39.6 |
| RetinaNet 101 | 38.9 | 58.0 | 41.5 |

>**Q4: The performance drop is somewhat significant between distilled training and full-scale training.**

The performance drop is 8.9%, from 39% (full-scale with 100% of the original dataset) to 30.1% (distilled with only 5% IPD).
With these results, we have successfully bridged around 10% of the gap between the previous SOTA and the theoretical upper bound at a 1.0% compression rate. For comparison, a recent work from the more thoroughly explored image classification task is RDED [1]. It achieves 33.9% accuracy at IPC=10 on ImageNet-100 with ResNet-101, versus 78.25% when training on the full dataset (a gap of 44.35%). Thus, we believe $OD^3$ represents a substantial step toward fully bridging the gap between distilled and full-scale training. We appreciate your valuable feedback and hope this addresses your concerns. **References:** [1] Sun, Peng, Bei Shi, Daiwei Yu, and Tao Lin. "On the diversity and realism of distilled dataset: An efficient dataset distillation paradigm." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9390-9399. 2024.
Summary: The paper proposes a dataset distillation method for object detection datasets, aiming at condensing the number of training images down to 0.25 - 5% of the original training dataset. This is achieved by first copy-pasting objects from the training set onto blank backgrounds. In a second step, objects that are assigned low confidence by a pre-trained model are removed. Finally, a target model is distilled from soft labels output by the larger pre-trained model. The effectiveness of the method is evaluated on COCO and Pascal VOC and outperforms prior work on these benchmarks. Claims And Evidence: The paper's claims are supported by sufficient evidence. The evaluation demonstrates substantial improvements in effectiveness over prior methods and the ablation studies clearly demonstrate the contributions of label types, the candidate selection, and the candidate screening components. Methods And Evaluation Criteria: While in essence a simple extension of copy-paste combined with knowledge distillation, the proposed method makes sense overall. To the best of my knowledge, the paper compares the method's effectiveness to the (few) relevant prior works, which are outperformed by a significant margin. However, the evaluation is limited to a single backbone architecture (ResNet-50) and two detectors (RetinaNet & Faster R-CNN), which severely limits its generality. Since the proposed method is in principle maximally general and the key promise is to reduce training effort, I think the paper should additionally evaluate the method on more modern backbones (such as ViTs) and detectors (such as DETR). Theoretical Claims: The main theoretical claim is that the proposed add-then-remove scheme of step-wise removing low-confidence pasted objects achieves a greater objective value than the add-only strategy. I have not checked this claim in detail but it makes intuitive sense. 
Experimental Designs Or Analyses: The experimental design is in line with prior work and uses (distilled versions of) COCO and Pascal VOC as standard benchmark datasets for object detection. The metrics used (mAP at different IoU thresholds) make sense as well. Supplementary Material: I appreciate that the supplemental material includes code and one example dataset. I have not run the code but it corresponds to the method's description. It furthermore contains additional dataset statistics, ablations, and a proof for the main theoretical claim. Relation To Broader Scientific Literature: The paper is concerned with the underexplored area of dataset distillation for object detection. The method is maximally simple, combining proven ideas such as copy-paste augmentation and knowledge distillation. Essential References Not Discussed: None. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: 1. Eq. 6 has weird formatting 2. x_{i+1} is missing superscripts in Eq. 10 & Eq. 12 3. mmrazor reference renders as "(Contributors, 2021)" Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 3
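The screening step the review describes (pasting candidate objects, then discarding those the pre-trained observer model scores with low confidence) can be sketched minimally as follows. This is an illustrative reading of the pipeline, not the paper's code; `candidates`, `confidence`, and `tau` are hypothetical stand-ins for the pasted object crops, the observer model's confidence function, and the confidence threshold.

```python
# Hedged sketch of the add-then-remove screening idea: paste candidate
# objects, then keep only those the observer model scores above `tau`.
# `candidates` and `confidence` are hypothetical placeholders, not the
# paper's actual data structures.
def screen_candidates(candidates, confidence, tau=0.3):
    return [c for c in candidates if confidence(c) >= tau]

# Toy usage with a dict standing in for the observer model's scores:
kept = screen_candidates(
    ["car", "dog", "blur"],
    {"car": 0.9, "dog": 0.7, "blur": 0.1}.get,
    tau=0.3,
)
# kept == ["car", "dog"]
```

In the actual method this filter would run once per synthesis iteration, so low-confidence pastes are removed before the next batch of candidates is placed.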
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and valuable suggestions!

>**Q1: The evaluation is limited to a single backbone architecture (ResNet-50) and two detectors (RetinaNet & Faster R-CNN), which severely limits its generality. The paper should additionally evaluate the method on more modern backbones (such as ViTs) and detectors (such as DETR).**

We appreciate this insight and agree that evaluating performance on transformer-based models is important. To further demonstrate the generalizability of our approach, we have conducted additional experiments and included the results accordingly.

| IPD | Observer Model | Target Model | mAP | mAP@50 | mAP@75 |
|-------|-----------------|--------------|-------|--------|--------|
| 0.25% | Deformable DETR | Faster RCNN | 11.90 | 22.60 | 11.10 |
| 0.5% | Deformable DETR | Faster RCNN | 16.20 | 29.50 | 16.00 |
| 1.0% | Deformable DETR | Faster RCNN | 22.00 | 38.00 | 22.90 |
| 0.5% | DETR | Faster RCNN | 12.10 | 26.40 | 9.40 |
| 1.0% | DETR | Faster RCNN | 16.40 | 33.90 | 13.90 |

>**Q2: Eq. 6 has formatting issues.**

We will redefine the equation as follows: $\mathbf{z}_i = f^\textrm{fpn}(f^\textrm{backbone}(\mathbf{x}_i))$, with the updated format:

$\mathcal{L}_\textrm{mse} = \mathbb{E}_{(\mathbf{x}_i,\mathbf{y}^\textrm{feat}_i)} \Big\Vert \mathbf{y}^\textrm{feat}_i - \frac{\mathbf{z}_i - \textrm{mean}(\mathbf{z}_i)}{\textrm{std}(\mathbf{z}_i) + \epsilon} \Big\Vert_2^2.$

>**Q3: $x_{i+1}$ is missing superscripts in Eq. 10 & Eq. 12.**

The superscripts $^{a}$ and $^{ar}$ will be added to $x_{i+1}$ in the revised version.

>**Q4: The mmrazor reference renders as "(Contributors, 2021)".**

As suggested, we will correct this reference in the revised version.

We appreciate your valuable feedback and hope these clarifications address your concerns.

--- Rebuttal Comment 1.1: Comment: > We appreciate this insight and agree that evaluating performance on transformer-based models is important.
To further demonstrate the generalizability of our approach, we have conducted additional experiments and included the results accordingly.

I appreciate the additional experiments with DETR detectors. I assume these results are using ResNet backbones? I still think it would be a meaningful improvement to add experiments using ViT backbones instead.

--- Reply to Comment 1.1.1: Comment: Thank you for the suggestion. We agree that evaluating ViT-based backbones provides meaningful insights. Accordingly, we have included additional experiments using ViTDet [1] with a ViT-B backbone, and we will report the results in the revised manuscript.

| IPD | Observer Model | Target Model | mAP | mAP@50 | mAP@75 |
|---------|--------------------|----------------|------|--------|--------|
| 0.25% | ViTDet (ViT-B) | Faster RCNN | 11.0 | 21.3 | 10.1 |
| 0.5% | ViTDet (ViT-B) | Faster RCNN | 15.9 | 29.2 | 15.6 |
| 1.0% | ViTDet (ViT-B) | Faster RCNN | 21.7 | 38.3 | 22.1 |

**References:** [1] Li, Yanghao, et al. "Exploring plain vision transformer backbones for object detection." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.
Summary: Dataset distillation for object detection is an under-explored task. This paper proposes a new optimization-free dataset distillation method tailored for object detection, named OD$^3$. OD$^3$ consists of two steps: (1) an iterative candidate selection process that strategically places object instances in synthesized images; and (2) a candidate screening process powered by a pre-trained observer model, which discards low-confidence objects. Experiments on the MS COCO and Pascal VOC datasets demonstrate the effectiveness of the proposed method. Claims And Evidence: The claims in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed method is heuristic and somewhat trivial. It is hard to find theoretical contributions or fresh insights in it. Theoretical Claims: N/A Experimental Designs Or Analyses: - In Table 1, the proposed method achieves significant performance improvements. However, the AP performance for different object sizes is omitted. It would be better to add a more detailed comparison. - In Table 5, the authors only evaluate RetinaNet and Faster R-CNN with ResNet backbones. All these models are somewhat outdated. How do detection transformers and more recent detectors perform? Supplementary Material: All parts of the supplementary material are reviewed. Relation To Broader Scientific Literature: The idea of candidate screening is somewhat relevant to the context classifier in prior WSOD work [a]. The proposed object detection dataset distillation method may benefit the broader community. [a] Object-aware instance labeling for weakly supervised object detection, ICCV'19 Essential References Not Discussed: N/A Other Strengths And Weaknesses: - This paper explores an under-explored task and achieves significant performance improvement. - The overall paper is well organized and written. - The proposed optimization-free approach shows promising efficiency.
Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your constructive feedback and for giving us the opportunity to address your concerns!

>**Q1: The AP performance for different object sizes is omitted.**

The AP performance for different object sizes is reported in Table 3 and Table 5 of the main paper, as well as Table 7 and Table 8 of the appendix. Since these metrics were not reported for the SoTA method DCOD [1], we did not include them in Table 1 for direct comparison.

>**Q2: How do detection transformers and more recent detectors perform?**

We appreciate this suggestion and agree that evaluating transformer-based models is valuable. To further highlight the generalizability of our approach, we have conducted additional experiments and included the results accordingly.

| IPD | Observer Model | Target Model | mAP | mAP@50 | mAP@75 |
|------|------------------|--------------|------|--------|--------|
| 0.25% | Deformable DETR | Faster RCNN | 11.90 | 22.60 | 11.10 |
| 0.5% | Deformable DETR | Faster RCNN | 16.20 | 29.50 | 16.00 |
| 1.0% | Deformable DETR | Faster RCNN | 22.00 | 38.00 | 22.90 |
| 0.5% | DETR | Faster RCNN | 12.10 | 26.40 | 9.40 |
| 1.0% | DETR | Faster RCNN | 16.40 | 33.90 | 13.90 |

We hope this clarifies your concerns, and we appreciate your thoughtful review! **References:** [1] Qi, Ding, Jian Li, Jinlong Peng, Bo Zhao, Shuguang Dou, Jialin Li, Jiangning Zhang, Yabiao Wang, Chengjie Wang, and Cairong Zhao. "Fetch and forge: Efficient dataset condensation for object detection." Advances in Neural Information Processing Systems 37 (2024): 119283-119300.

--- Rebuttal Comment 1.1: Comment: Thanks for your response. The response has addressed my concerns. I would like to raise my rating.
Efficient Robust Conformal Prediction via Lipschitz-Bounded Networks
Accept (poster)
Summary: This paper explores how to leverage Lipschitz-bounded networks (LipNets) together with a novel robust conformal score algorithm for robust prediction. Previously proposed robust conformal prediction methods each have their own limitations, such as high computational complexity, making them difficult to scale to large datasets like ImageNet. By utilizing the properties of LipNets, this paper introduces a simple conformal score estimation method with lower computational complexity. The effectiveness of the proposed approach is validated on large-scale datasets, including CIFAR and ImageNet, in the latter part of the paper. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: I have checked the supplementary material. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. The paper is clearly written, with well-designed illustrations and elegant notation. Even readers unfamiliar with LipNets and conformal prediction can quickly grasp the key ideas presented. 2. This paper contributes to two fields. In the domain of conformal prediction, it introduces a new perspective on robustness and proposes a new approach to enhance the robustness of vanilla CP. In the field of Lipschitz neural networks, it leverages existing LipNet techniques to estimate the robust CP score. 3. The experiments in this paper are conducted on various real-world datasets, ensuring a high level of reliability. Weaknesses/suggestions: Based on my knowledge, this paper does not exhibit any obvious weaknesses in its technical parts. Although the proposed method is relatively simple and may lack significant technical novelty, I believe the new insight this paper provides is valuable. According to the ICML 2025 requirements, the authors should add an impact statement section.
Other Comments Or Suggestions: This paper is clear and well-structured, with substantial theoretical contributions and a comprehensive set of experiments. The proposed method has the potential to attract significant interest from various fields, such as conformal prediction and Lipschitz neural networks. Hence, I am inclined to accept it. ## update after rebuttal: The authors have addressed my concerns, so I recommend acceptance. Questions For Authors: See above Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
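The mechanism the review credits for lip-rcp's efficiency — a globally Lipschitz-bounded non-conformity score lets one certify how far an $\ell_2$ attack of budget $\varepsilon$ can shift each score, so a conservative set only needs a threshold inflation — can be sketched as follows. This is a hedged illustration of the general idea, not the paper's exact procedure; `scores`, `qhat`, `lip`, and `eps` are assumed names for the per-label scores, the calibrated quantile, the score's Lipschitz bound, and the attack budget.

```python
import numpy as np

def robust_prediction_set(scores, qhat, lip, eps):
    """Conservative prediction set sketch: an eps-bounded l2 attack can
    move a lip-Lipschitz non-conformity score by at most lip * eps, so
    any label whose score is within that margin of the calibrated
    threshold must be included in the set."""
    scores = np.asarray(scores, dtype=float)
    return set(np.flatnonzero(scores <= qhat + lip * eps).tolist())
```

With `eps = 0` this reduces to the vanilla conformal set, which matches the review's point that the robustness overhead is essentially a single scalar margin rather than a smoothing or verification pass.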
Rebuttal 1: Rebuttal: **Common response:** First of all, we would like to thank our reviewers for their time spent reviewing our paper along with the insightful comments they provided. Their reviews highlight that our method is “highly efficient and scalable” while underlining its innovative nature and our comprehensive experiments. Moreover, reviewer X44i points out the novelty of the theoretical framework we developed to ensure worst-case coverage bounds of vanilla CP. We appreciate all these comments. To address common questions shared among reviewer AUox and reviewer 3iVr, we add the following experiments which support our initial results: - **Shared model comparison:** we run a comparison of robust CP methods on an identical model, see answer to reviewer AUox. - **Lipschitz bound tightness estimation:** we empirically evaluate the Lipschitz constant of our network by running adversarial attacks, see answer to reviewer 3iVr. **Reviewer response:** We thank the reviewer for their positive feedback and strong endorsement of our work. We have incorporated an impact statement, as well as additional experiments (shared model comparison and empirical tightness estimation) to further highlight the strengths of our approach. We hope these enhancements further clarify and strengthen our contribution. --- Rebuttal Comment 1.1: Comment: Thank you for your response and the new experimental results. I have decided to keep my score and am inclined to recommend acceptance.
Summary: - This paper proposes a novel method, lip-rcp, for efficient robust conformal prediction (CP) by leveraging Lipschitz-bounded neural networks. The key contributions include: - Theoretical analysis: Deriving worst-case coverage bounds for vanilla CP under l2 adversarial attacks, valid simultaneously for all perturbation budgets. - Efficient robust CP: Introducing a method to compute robust prediction sets using globally Lipschitz-constrained networks, achieving state-of-the-art performance in terms of set size and computational efficiency. - Scalability: Demonstrating applicability to large-scale datasets (e.g., ImageNet) with negligible computational overhead compared to vanilla CP. - The experiments validate the method on CIFAR-10, CIFAR-100, TinyImageNet, and ImageNet, showing superior results over existing approaches like VRCP and CAS. ## update after rebuttal After rebuttal, I still worry about the potential limitations in terms of the method's scope of applicability and the soundness of the theory. I maintain my original score, and am inclined to recommend a Weak Reject. Claims And Evidence: The claims are generally supported by theoretical proofs and empirical results, but some problems also exist. For example: - Claim 1: The authors call their method a CP method, but it relies heavily on training a specific classifier, which contradicts the model-independence of the CP method. - Claim 2: Although the authors state "We derive the first sound coverage bounds for vanilla CP that are valid simultaneously across all attack levels", the paper does not give an explicit quantitative form, only a definition of the coverage bound. - Claim 3: The worst-case coverage bounds (Theorem 3.3) are rigorously proven in Appendix B, assuming input space convexity and continuity of the non-conformity score. But the convexity assumption is questionable for the image data used in the experiments.
- Claim 4: Although they provide a method to compute lower and upper bounds on CP, a discussion of the tightness of the bounds produced by the method is missing; - Claim 5: The efficiency of lip-rcp is evidenced by Table 1 and Figure 2, showing O(1) complexity for non-conformity scores. However, the training overhead of Lipschitz networks (10–20% longer, per Appendix E) is not compared to baseline models. - Claim 6: ImageNet scalability (Table 2) is demonstrated, but CAS is evaluated on only 500 samples versus lip-rcp’s 50,000, raising concerns about fairness. Methods And Evaluation Criteria: Strengths: - The use of Lipschitz networks to bound adversarial score variations is innovative and aligns well with the goal of robust CP. The evaluation on standard benchmarks (CIFAR, ImageNet) is appropriate. Weaknesses: - The method is highly dependent on the specific Lipschitz network, but estimating the Lipschitz constant of common SOTA models is NP-hard, and Lipschitz networks are hard to obtain, so the practical applicability of the method is limited. - The experimental focus on $\ell_2$-bounded attacks limits practical relevance, as $\ell_\infty$ and $\ell_1$-attacks are more common in adversarial ML. Theoretical Claims: The proofs in Appendix B assume convex and closed input spaces (Assumption 3.2). But pixel-space images are not verified to form a convex and closed set. Experimental Designs Or Analyses: Strengths: - Comprehensive experiments across datasets and comparison to VRCP/CAS. Weaknesses: - The experimental comparison uses different backbone models (ResNet-50 in CAS vs. ResNeXt in lip-rcp), potentially biasing results. - The ImageNet comparison uses unequal evaluation sizes (500 vs. 50k), potentially biasing results. Supplementary Material: Appendices A–G provide detailed proofs, implementation details, and additional experiments. However, training hyperparameters (e.g., optimizer settings) are omitted, hindering reproducibility.
Relation To Broader Scientific Literature: The work effectively bridges robust ML (Lipschitz networks) and uncertainty quantification (conformal prediction). It appropriately cites foundational CP works (Vovk et al., 2005) and recent robust CP methods (Gendler et al., 2022; Jeary et al., 2024). Essential References Not Discussed: Recent work on CP vulnerabilities under adaptive attacks (Liu et al., 2024) is missing. Other Strengths And Weaknesses: No. Other Comments Or Suggestions: No. Questions For Authors: - The CP method is a model-independent and data distribution-free technique, as the authors state in the introduction: “the CP approach has the advantage of being applicable to any model, even pre-trained on a different dataset”. But the authors' proposed method relies on training a specific 1-Lipschitz network, which contradicts the model-independence of the CP approach. How does it perform against other CP methods based on the same backbone model, such as ResNet-50, etc.? How much influence does the structure of the 1-Lipschitz network have on the results? For example, what is the result of the CAS/RSCP methods combined with a 1-Lipschitz network? - The authors mention that Eq. 19 is an estimate of Def. 3.1; however, Def. 3.1 is defined in terms of a supremum (infimum). As the authors state on Page 5, "the informativeness of the bound is directly linked to the tightness of the estimate". An in-depth discussion of the tightness of Eq. 19 is missing. - Despite the authors providing a definition of the Conservative/Restrictive Prediction Set in Def. 3.1, further clarification is required regarding the practical application of these prediction sets. It is necessary to elucidate the conditions under which the Conservative Prediction Set and the Restrictive Prediction Set are employed. Additionally, the methods employed by the authors to calculate the prediction set in the experiments must be thoroughly explained.
- The experiments focus on $\ell_2$ attacks; $\ell_1$ and $\ell_\infty$ results, which are common in adversarial attacks, are missing. The performance of the method in these scenarios should be shown. - Why is CAS evaluated on only 500 ImageNet samples while lip-rcp uses 50,000 samples in Table 2? Would the results hold with balanced sample sizes? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: **Common response:** First of all, we would like to thank our reviewers for their time spent reviewing our paper along with the insightful comments they provided. Their reviews highlight that our method is “highly efficient and scalable” while underlining its innovative nature and our comprehensive experiments. Moreover, reviewer X44i points out the novelty of the theoretical framework we developed to ensure worst-case coverage bounds of vanilla CP. We appreciate all these comments. To address common questions shared among reviewer AUox and reviewer 3iVr, we add the following experiments which support our initial results: - **Shared model comparison:** we run a comparison of robust CP methods on an identical model, see answer to reviewer AUox. - **Lipschitz bound tightness estimation:** we empirically evaluate the Lipschitz constant of our network by running adversarial attacks, see answer to reviewer 3iVr. **Reviewer response:** Foremost, we would like to thank Reviewer AUox for their comprehensive comments. We appreciate the comments about our method's novelty and empirical validation. In the following paragraphs, we address their main concerns. **About comparing with different models (experimental weakness 1):** “The experiments comparison use different backbone model [...] potentially biasing results” To alleviate these concerns, we train a VGG-like 1-LipNet with 10.7M parameters from Boissin et al. 2025 on CIFAR-10. Then, we evaluate robust CP methods on this network under the conditions of Figure 3.

|Method|Coverage|Set size|Runtime (1 run)|
|---|---|---|---|
|CAS n_mc=1024|94.6%|2.302|2615s|
|CAS n_mc=10000|93.8%|2.057|5+ hours|
|lip-rcp|**92.83%**|**1.889**|**10s**|
|VRCP-I/C (CROWN)|OOM|OOM|N/A|

CROWN does not scale to such a deep network due to its inner complexity. Therefore: - Using a deeper network further improves our results (10.7M parameters compared to 4.5M parameters for the ResNeXt).
- LipNets allow for tighter conservative score estimations than CAS since its certificate is deterministic and does not require finite sample corrections. - This LipNet makes the CAS method perform better than with a ResNet50 due to its robustness (cf. Figure 3). These results will be added to the final version of the paper. We thank the reviewers for their comments which prompted this evaluation highlighting the performance and efficiency of our method. **About the ImageNet split sizes (experimental weakness 2):** To validate lip-rcp's performance without calibration set size differences we evaluate our method using only 500 calibration samples as done in the CAS article due to the method's high computational demands: Set sizes: 118.5 (111.0 originally) Coverage : 97.6% (97.4% originally) The performance gap between methods remains. **Theoretical questions (Claims 2 & 3):** While the manifold of images encountered in practice is typically nonconvex, Theorem 3.3 only requires that the input distribution and the score $x \mapsto s(x,y)$ be defined on a subset of a convex space such as $\mathcal{X}=[0,1]^{3 n_{\mathrm{pix}}}$, where $n_{\mathrm{pix}}$ is the number of pixels. Theorem 3.3 is then valid for any such distribution, even with finite support included in that space $\mathcal{X}$. We will add this fact within footnote 3 of Page 5. Also, our coverage bounds (15) are not closed-form, but (as mentioned after (15)) they can be quickly and tightly computed via a binary search. See also Langford and Schapire (2005, after Def 3.2) for closed-form yet looser bounds. **About model independence (Claim 1):** As correctly pointed out by the reviewer, we use additional information on the model to improve over black-box (model-free) approaches. We propose to insert the following clarification in Section 4: ``In practice, our method uses Lipschitz-by-design networks. 
This limits the model independence of our robust CP method, but it was key to obtaining efficient and competitive robust CP metrics for the first time. Interestingly, our methodology, which applies more generally to any network and score that are Lipschitz continuous, can also benefit from future research on Lipschitz constant estimation.'' **About computational efficiency (Claim 5):** With several recent developments mentioned in the related works of our article, Lipschitz-by-design networks have become easier to train and well-performing, and libraries exist to train LipNets with minimal effort (cf. geotorch, deel-lip, etc…). Training overheads are described in Appendix E; for an exhaustive study of the overheads of LipNets, we refer the reviewer to (Boissin et al. 2025, Table 3). For reference, our ResNeXt model of Figure 3 introduces a 9.6% runtime overhead on TinyImageNet compared to an identical unconstrained model. **Details:** The article by Liu et al. (2024) mentioned by the Reviewer is already cited in Page 1 L46. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. Unfortunately, the rebuttal has reinforced, rather than resolved, my concerns. - Limited scope of practice: First, CP methods are known to be model-independent and data-distribution-free. But the authors' proposed method relies on training a specific 1-Lipschitz network, which contradicts the model independence of the CP approach. Second, although the authors state in the contributions that "We derive ... across all attack levels.", the experiments focus on $\ell_2$ attacks; $\ell_1$ and $\ell_\infty$ results, which are common in adversarial attacks, are missing. Therefore, its scope of application is limited. - Limitations of the Theory: The authors state that "Theorem 3.3 only requires that the input distribution and the score be defined on a subset of a convex space", but image data does not obviously satisfy this condition either.
Although the method performs well in practice, its theory cannot fully explain the reason for its effectiveness. --- Reply to Comment 1.1.1: Comment: First of all, we would like to thank the reviewer for their reactivity. **About our scope of applicability:** As previously argued in our rebuttal, our methodology applies more generally to most SOTA networks that are Lipschitz continuous, not only *“on training a specific 1-Lipschitz network”*. Furthermore, our method can also benefit from future research on Lipschitz constant estimation. Perhaps more importantly, all competing approaches suffer from strong implicit requirements on the model. The high memory overhead of smoothing methods limits their applicability to small to medium models in practice at inference time. Also, verification methods often do not scale to deep networks. **About $\ell_1$ and $\ell_\infty$ robustness:** Although we do not benchmark against $\ell_1$ or $\ell_\infty$ adversarial attacks (as stated in our limitations), it is clearly stated throughout the paper that our worst-case vanilla CP bounds are valid for any verifiable network (see Figure 4 left). This includes networks for $\ell_1$ and $\ell_\infty$ attacks. We will add relevant references. **About theoretical limitations:** We are quite confused by the reviewer's concern. Indeed, our theorem holds true for any convex space $\mathcal{X}$ that **contains** the image space and on which the score $x \mapsto s(x,y)$ is continuous. This property is straightforward in our case (with $\mathcal{X} = [0,1]^{3n_{pix}}$).
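The rebuttal notes that the coverage bounds in (15) are not closed-form but can be computed quickly and tightly via a binary search. A generic bisection sketch of that evaluation strategy, assuming a monotone feasibility predicate `pred` (False below the crossing point, True above it — the actual bound function from the paper is not reproduced here):

```python
def binary_search_inverse(pred, lo, hi, tol=1e-9):
    """Generic bisection for a monotone predicate: returns (up to tol)
    the smallest x in [lo, hi] with pred(x) True. This is the standard
    way to evaluate a non-closed-form bound defined implicitly by a
    monotone condition."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if pred(mid):
            hi = mid
        else:
            lo = mid
    return hi
```

Each iteration halves the interval, so roughly 30–60 predicate evaluations suffice for machine-precision answers, consistent with the rebuttal's claim that the bounds are cheap to compute.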
Summary: This paper addresses the limitations of robust conformal prediction (CP) under adversarial attacks. Traditional robust CP methods typically generate prediction sets that are either excessively large or computationally expensive for large-scale scenarios. To tackle these challenges, the authors introduce lip-rcp, which leverages Lipschitz-bounded neural networks to estimate robust prediction sets. By utilizing 1-Lipschitz constrained models, the proposed approach provides tighter and computationally efficient robust conformal prediction sets compared to existing methods. Claims And Evidence: Although the authors propose using Lipschitz-bounded neural networks to efficiently compute conservative and restrictive conformal scores, they acknowledge that accurately estimating the Lipschitz constant for deep neural networks remains computationally challenging and can result in overly conservative bounds if the estimates are loose. Thus, while their method claims efficiency and scalability, the precise tightness of these Lipschitz bounds and its impact on CP is not discussed in this paper. Methods And Evaluation Criteria: Equation (20) modifies the standard LAC conformity score by replacing softmax with sigmoid. Since softmax has no simple Lipschitz bound, sigmoid is used to maintain tractability. However, does this change affect the calibration of the non-conformity score? Could this lead to overly conservative or loose prediction sets depending on the logit scaling? Theoretical Claims: 1. Your method relies on 1-Lipschitz constrained networks. How does the setting of the Lipschitz constant (e.g., different values of $L_n$) impact the robustness and efficiency of the prediction sets? Could you provide theoretical analysis? 2. Figure 5 is a good illustration of the proof. Could you provide more explanation of Figure 5 in the caption or appendix?
Could you provide additional results on how accurate the estimation of the Lipschitz constant of the neural network is? 2. How does lip-rcp perform in terms of robustness and accuracy trade-offs under different attack models? Supplementary Material: I did not review the supplementary code as part of my evaluation. My review is based on the theoretical justifications, experimental results, and clarity of the main paper. Relation To Broader Scientific Literature: The key contributions of this paper build upon and extend several lines of research in robust conformal prediction (CP), adversarial robustness, and Lipschitz-bounded neural networks. Prior work on robust CP has primarily focused on randomized smoothing methods (Gendler et al., 2022; Yan et al., 2024) and formal verification-based approaches (Jeary et al., 2024), both of which provide robustness guarantees but suffer from either high computational costs or excessively large prediction sets. The paper improves upon these methods by introducing lip-rcp, inspired by work in certifiable adversarial robustness (Anil et al., 2019; Boissin et al., 2025). Essential References Not Discussed: This paper has discussed the essential works. Other Strengths And Weaknesses: The key strength of this paper is its development of a highly efficient and scalable method, lip-rcp, that integrates Lipschitz-bounded neural networks into robust CP. Unlike previous robust CP methods, the proposed approach provides robust prediction sets with minimal computational cost. By leveraging networks designed with Lipschitz constraints, the authors achieve precise and certifiable bounds on prediction scores under adversarial perturbations. Other Comments Or Suggestions: Some typos: untractable $\rightarrow$ intractable; valid simultaneously for $\rightarrow$ valid simultaneously across; Tiny ImageNet $\rightarrow$ Tiny-ImageNet Questions For Authors: 1. How tight are the Lipschitz bounds in practice?
Have you quantified the potential gap between the estimated and actual Lipschitz constants? 2. Equation (20) modifies the standard LAC conformity score by replacing softmax with sigmoid. Since softmax has no simple Lipschitz bound, sigmoid is used to maintain tractability. However, does this change affect the calibration of the non-conformity score? Could this lead to overly conservative or loose prediction sets depending on the logit scaling? 3. Could you provide additional results on how accurate the estimation of the Lipschitz constant of the neural network is? 4. Do different choices of Lipschitz-parametrized networks affect the coverage and efficiency of lip-rcp? Could you provide empirical results? 5. How does lip-rcp perform in terms of robustness and accuracy trade-offs under different attack models? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: **Common response:** First of all, we would like to thank our reviewers for their time spent reviewing our paper along with the insightful comments they provided. Their reviews highlight that our method is "highly efficient and scalable" while underlining its innovative nature and our comprehensive experiments. Moreover, reviewer X44i points out the novelty of the theoretical framework we developed to ensure worst-case coverage bounds of vanilla CP. We appreciate all these comments. To address common questions shared between reviewers AUox and 3iVr, we add the following experiments which support our initial results:

- **Shared model comparison:** we run a comparison of robust CP methods on an identical model, see answer to reviewer AUox.
- **Lipschitz bound tightness estimation:** we empirically evaluate the Lipschitz constant of our network by running adversarial attacks, see answer to reviewer 3iVr.

**Reviewer response:** We would like to thank Reviewer 3iVr for their insightful comments. Importantly, we appreciate their comments about our method's efficiency, scalability and performance. Below we answer the reviewer's main concerns.

**About the tightness of Lipschitz bound estimations (Questions 1 & 3):** We run a direct tightness estimation of our Lipschitz upper bound with Lipschitz-by-design networks. We compute the maximum ratio between logit variations under attack and adversarial attack budgets on the CIFAR-10 test set. This quantity offers a lower bound to the actual Lipschitz constant of our network. Using PGD attacks of budget $\epsilon=0.05$ we get a Lipschitz constant lower bound of **0.917** when our by-design Lipschitz bound is **1**. This demonstrates that our bound is relatively tight in practice. Under these same attacks, the empirical coverage of our $\epsilon$-robust CP sets on the test split is 92.06% (94.7% under AutoAttack attacks) which approaches the desired robust coverage of 90% under $\alpha=0.1$.
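The tightness check described in this rebuttal (maximum ratio between output variation and perturbation budget) can be sketched in a few lines. This is a toy illustration, not the authors' code: the linear "network", the inputs, and the budget are placeholder assumptions, and random perturbations stand in for the PGD attack used in the rebuttal.

```python
import numpy as np

def empirical_lipschitz_lower_bound(f, xs, budget=0.05, trials=20, seed=0):
    """Lower-bound the Lipschitz constant of f by maximizing
    ||f(x + d) - f(x)|| / ||d|| over perturbations with ||d|| = budget.
    A gradient-based attack (e.g. PGD, as in the rebuttal) would find
    larger ratios; random directions give a cheaper, weaker bound."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for x in xs:
        fx = f(x)
        for _ in range(trials):
            d = rng.normal(size=x.shape)
            d *= budget / np.linalg.norm(d)        # project onto the budget sphere
            best = max(best, np.linalg.norm(f(x + d) - fx) / np.linalg.norm(d))
    return best

# Toy check: for a linear map, the true l2 Lipschitz constant is the
# spectral norm of W; here W is orthogonal, so it equals 1 exactly.
W = np.array([[0.6, 0.8], [-0.8, 0.6]])
lb = empirical_lipschitz_lower_bound(lambda x: W @ x,
                                     [np.array([1.0, 0.0]), np.array([0.3, -0.7])])
```

Because any measured ratio is achieved by an actual perturbation, the estimate can never exceed the true constant, which is what makes it a certificate-free sanity check of a by-design bound.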
**About the choice of $L_n$ (Theoretical Claim 1):** To avoid any ambiguities, we address your question from two different angles.

- Setting any $L_n \neq 1$ as the network constraint: this would have limited impact, since constraining the Lipschitz constant of a classifier is not a limitation when using an appropriate optimization objective (Béthune et al. 2022). Moreover, $L_n = 1$ avoids gradient vanishing or explosion (Béthune et al. 2024, Thm 1).
- Computing robust prediction sets using the l.h.s. of (19) and other values for $L_n$: the approximation quality for the Lipschitz constant of the network is crucial: under-approximating it leads to uncertifiable results, while strongly over-estimating it would lead to pathologically large prediction sets. Theory-wise, the impact of $L_n$ can also be analyzed in a toy setting, with data points $(X_i,Y_i)$ drawn i.i.d. from a mixture of two Gaussians, and a model given by the Bayes rule.

**About the LAC sigmoid score (Question 2):** Upon further inspection, the LAC sigmoid score yields a slight degradation of vanilla CP. CIFAR-10 / ResNet50 / $\alpha=0.1$:

- LAC softmax coverage & set size: 90.04% / 1.088
- LAC sigmoid coverage & set size: 90.28% / 1.144

With similar tendencies for lower $\alpha$. We will add the following sentence to our paper: "This ablation study results in marginally bigger vanilla CP set sizes for LAC sigmoid scores compared to softmax ones with scaled temperature. Investigating Lipschitz conformal scores represents a promising direction for future research to further enhance robust CP performance."

**Regarding the tradeoff between accuracy and robustness (Theoretical claim 2 & Question 5):** Interestingly, the trade-off between accuracy and robustness for robust CP is not straightforward. Indeed, robust networks exhibit poorer accuracy, which penalizes vanilla CP performance in small $\alpha$ regimes, consequently impacting robust CP performance.
Similarly, accurate but brittle classifiers exhibit smaller vanilla CP set sizes yet the robust CP sets are large given the fine margins between conformal scores. To keep our method as simple, reliable and reproducible as possible we used standard hyperparameter values (which will be detailed in the Appendix) since our method exhibits SOTA behaviour without tuning. We would like to thank the reviewer for pointing out this promising aspect. **Details:** We will fix the typos that Reviewer 3iVr kindly pointed out. Furthermore, we propose the following caption for Figure 5: "Illustration of the proof. On the ball $\mathcal{B}\_{\epsilon + \delta}(x) $, the score $x \mapsto s(x,y)$ is minimized at some $\tilde{x}$. On the smaller ball $\mathcal{B}_{\epsilon}(x)$, the minimum can only be larger, but not larger than $s(x'',y)$, which is close to $s(\tilde{x},y)$ by continuity of $s(\cdot,y)$".
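The Lipschitz argument at the heart of this kind of certification can be illustrated generically. The sketch below is not the paper's Eq. (19): the scores, the split-CP threshold rule, and the budget are illustrative placeholders; the only load-bearing fact is that an $L$-Lipschitz nonconformity score can move by at most $L\epsilon$ under an $\ell_2$ perturbation of norm $\le \epsilon$.

```python
import numpy as np

def split_cp_threshold(nonconf_cal, alpha=0.1):
    """Standard split-CP threshold: the ceil((n+1)(1-alpha))/n empirical
    quantile of the calibration nonconformity scores."""
    n = len(nonconf_cal)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(nonconf_cal, level)

def lipschitz_robust_set(nonconf_test, tau, lip=1.0, eps=0.0):
    """An L-Lipschitz nonconformity score changes by at most lip * eps
    under an l2 perturbation of norm <= eps, so inflating the threshold
    by lip * eps keeps every label the clean input would have kept."""
    return {y for y, s in enumerate(nonconf_test) if s <= tau + lip * eps}

cal = np.array([0.1, 0.3, 0.2, 0.5, 0.4, 0.25, 0.15, 0.35, 0.45])
tau = split_cp_threshold(cal, alpha=0.1)
vanilla = lipschitz_robust_set([0.2, 0.55, 0.9], tau, eps=0.0)
robust = lipschitz_robust_set([0.2, 0.55, 0.9], tau, lip=1.0, eps=0.1)
```

By construction the robust set is a superset of the vanilla one, which is the size/robustness trade-off both the review and the rebuttal discuss.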
Summary: This paper uses 1-Lipschitz networks to estimate robust conformal prediction (CP) sets, leading to the new lip-rcp method. The proposed method achieves SOTA results in the size of the robust CP sets and computational efficiency. In addition, the authors also study vanilla CP under attack, and derive new worst-case coverage bounds of vanilla CP sets.

Claims And Evidence: The authors claim that their proposed lip-rcp method achieves SOTA in robust CP. It seems to me that this claim is supported by their study.

Methods And Evaluation Criteria: The empirical evaluation in this paper makes sense to me.

Theoretical Claims: The theorems look plausible to me, though I have not checked all the details carefully.

Experimental Designs Or Analyses: The experiments are solid, and seem to support the claimed contribution.

Supplementary Material: I reviewed Appendix D, which makes sense to me.

Relation To Broader Scientific Literature: The topic studied in this paper seems quite relevant to the deep learning community. There are a lot of previous results on Lipschitz networks. This paper seems to be the first in leveraging such Lipschitz network results for studying robust CP. To me, this connection is novel and interesting, worth being known to the deep learning community.

Essential References Not Discussed: I do not have a particular paper in mind that this paper misses citing.

Other Strengths And Weaknesses: The paper is well written. The work is quite solid. The connection between Lipschitz networks and robust CP seems to be simple yet novel.

Other Comments Or Suggestions: One thing which is not that clear to me is how to compare lip-rcp with other robust learning methods which are not based on CP in the first place.

Questions For Authors: Can the authors clarify what is the technical novelty of their theoretical contribution other than just merging Lipschitz networks with robust CP?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: **Common response:** First of all, we would like to thank our reviewers for their time spent reviewing our paper along with the insightful comments they provided. Their reviews highlight that our method is "highly efficient and scalable" while underlining its innovative nature and our comprehensive experiments. Moreover, reviewer X44i points out the novelty of the theoretical framework we developed to ensure worst-case coverage bounds of vanilla CP. We appreciate all these comments. To address common questions shared between reviewers AUox and 3iVr, we add the following experiments which support our initial results:

- **Shared model comparison:** we run a comparison of robust CP methods on an identical model, see answer to reviewer AUox.
- **Lipschitz bound tightness estimation:** we empirically evaluate the Lipschitz constant of our network by running adversarial attacks, see answer to reviewer 3iVr.

**Reviewer response:** We would like to thank Reviewer X44i for the time dedicated to reviewing our paper and the suggestions they provided. Also, we appreciate the reviewer's comments regarding the soundness and novelty of our work. We develop answers to the reviewer's questions below:

**About the connection with robust learning:** The field of Robust Conformal Prediction is quite specific, as it studies the robustness of *guaranteed prediction sets* under i.i.d. conditions. It is not directly related to robust learning beyond reusing properties of a neural network, as the (split) Conformal procedure is post-hoc, and conducted on an already trained model. However, a comparison to any certifiably (or not) robust prediction set is feasible, based on other theories or heuristics. Yet, to our knowledge, robust prediction sets have only been studied in the Conformal Prediction setting.
**About our theoretical contributions:** While one aspect of our work concerns developing a highly efficient method to compute certifiably robust prediction sets (lip-rcp) in the classic Robust CP setting of RSCP (Gendler et al., 2022), our theoretical work of Section 3 introduces a novel, complementary theoretical approach. In essence, we conduct a vanilla CP procedure (without robust score computations) and use an additional holdout data split to estimate maximum coverage variations for worst-case attacks. Those estimates are guaranteed for any budget simultaneously (uniformly). As opposed to Robust CP, this implies that under normal conditions, our method allows us to retain the superior informativeness of vanilla CP sets while having an associated guarantee on how significantly their coverage under attack may evolve. As mentioned in the article, two previous works have proposed a weaker although incorrect guarantee, but with a similar intuition. Finally, the general formulation of our theoretical work allows us to extend our method to formal verification solvers (cf. Fig 4 - left).
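The holdout-based idea sketched in this rebuttal can be illustrated with a toy estimator. This is not the paper's actual estimator or its uniform-in-budget guarantee; the margins below are hypothetical values, and the point is only the mechanism: a covered holdout point can lose coverage under an attack of budget $\epsilon$ only if its score margin to the CP threshold is smaller than $L\epsilon$.

```python
import numpy as np

def worst_case_coverage_drop(margins, lip, eps):
    """Fraction of covered holdout points whose margin to the CP threshold
    is below lip * eps. By the Lipschitz bound on the score change, only
    these "fragile" points can flip under an attack of budget eps, so the
    fraction upper-bounds the coverage drop."""
    margins = np.asarray(margins, dtype=float)
    return float(np.mean(margins < lip * eps))

# margins = tau - s(x, y_true) for covered holdout points (hypothetical)
margins = [0.02, 0.30, 0.12, 0.05, 0.50]
drop_bound = worst_case_coverage_drop(margins, lip=1.0, eps=0.1)
```

Note that evaluating this for every budget at once is what the rebuttal means by estimates "guaranteed for any budget simultaneously": the bound is a monotone step function of `eps`.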
Outlier-Aware Post-Training Quantization for Discrete Graph Diffusion Models
Accept (poster)
Summary: This paper introduces **Bit-DGDM**, a post-training quantization framework for **Discrete Graph Diffusion Models (DGDMs)**, addressing the long inference times caused by huge computational load and the presence of outliers in weights and activations. It proposes decomposing activations into dense, easily quantizable parts and sparse, non-quantizable parts based on precomputed thresholds. For weights, it introduces a decomposition algorithm based on the assumption that weights can be split into a sparse part with $\alpha$-Sparsity and a low-rank dense part. The approach ultimately enables low-bit dense computation and high-bit sparse computation, and the implementation of computational kernels achieves significant practical acceleration.

Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense.

Theoretical Claims: All proofs for theoretical claims are correct.

Experimental Designs Or Analyses: Yes.

Supplementary Material: Yes, the kernel code is valid.

Relation To Broader Scientific Literature: The paper designs an ill-conditioned low-rank weight decomposition algorithm inspired by [1] and [2].

[1] Candès, E. J., Li, X., Ma, Y., & Wright, J. (2011). Robust principal component analysis?. *Journal of the ACM (JACM)*, *58*(3), 1-37.
[2] Tong, T., Ma, C., & Chi, Y. (2021). Accelerating ill-conditioned low-rank matrix estimation via scaled gradient descent. *Journal of Machine Learning Research*, *22*(150), 1-63.

Essential References Not Discussed: There are some quantization methods that should be discussed and compared [a,b,c,d]:

[a] QVD: Post-training Quantization for Video Diffusion Models, in ACM MM
[b] PTQ4SAM: Post-training Quantization for Segment Anything, in CVPR
[c] Post-training Quantization on Diffusion Models, in CVPR
[d] Towards Accurate Post-training Quantization for Diffusion Models, in CVPR

Other Strengths And Weaknesses:

Strengths:
1. The paper is well-written and clearly presents its contributions. The reader can easily follow the authors' logic.
2. This article actually proposes a general quantization framework that can accelerate the model when there are outliers in both the activation values and weights.
3. Compared with many other LLM-oriented baseline methods, the advantages of this method are demonstrated.
4. The supplementary material is provided in detail, which enhances the reproducibility and transparency of the research.

Weaknesses:
1. The proposed method has no advantage in reducing the runtime memory footprint compared to the BF16 baseline, although the memory usage is indeed relatively small.
2. The idea of dividing weights and activations into difficult-to-quantize parts and easy-to-quantize parts is common.
3. Whether DGDM has practical and wide application is worth discussing.

Other Comments Or Suggestions: No.

Questions For Authors:
1. I'm curious about what theoretical advancements the ill-conditioned low-rank decomposition proposed in this article has compared to previous work.
2. A discussion of work on low-rank factorization should be added, along with other work that divides weights into quantized and unquantized (or higher bit precision) parts by importance, e.g., PB-LLM [1].
3. What is the basis of rank selection for the low-rank components? Is there any experimental verification?

[1] Shang, Y., Yuan, Z., Wu, Q., & Dong, Z. (2023). PB-LLM: Partially binarized large language models. *arXiv preprint arXiv:2310.00034*.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
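The sparse-dense activation split described in this review's summary can be sketched generically. This is a toy illustration, not the paper's CUDA kernels: the threshold, bit-width, per-tensor scaling, and shapes are all illustrative assumptions.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor int8 quantization; returns int codes and a scale."""
    m = float(np.max(np.abs(x)))
    scale = m / 127.0 if m > 0 else 1.0
    return np.round(x / scale).astype(np.int8), scale

def sparse_dense_matmul(X, W, threshold):
    """Split activations at |x| > threshold: the dense (inlier) part goes
    through an int8 matmul, the few outlier entries stay in full precision,
    and the two partial products are summed."""
    mask = np.abs(X) > threshold
    X_dense = np.where(mask, 0.0, X)
    X_sparse = np.where(mask, X, 0.0)
    Xq, sx = quantize_int8(X_dense)
    Wq, sw = quantize_int8(W)
    dense_out = (Xq.astype(np.int32) @ Wq.astype(np.int32)) * (sx * sw)
    return dense_out + X_sparse @ W           # outlier part in full precision

# One activation row with a single large outlier (hypothetical values).
X = np.array([[0.1, -0.2, 50.0, 0.05]])
W = np.array([[0.5, -1.0, 0.2],
              [1.5, 0.3, -0.7],
              [0.01, 0.02, -0.03],
              [-0.4, 0.8, 1.1]])
out = sparse_dense_matmul(X, W, threshold=1.0)
```

Quantizing the whole row instead would stretch the int8 scale to the outlier (here 50) and wipe out the small entries, which is exactly the failure mode such splits avoid.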
Rebuttal 1: We very much appreciate your positive comments on our paper.

**Q1:** Some quantization methods [a,b,c,d], low-rank decomposition, and importance-based weight dividing methods (e.g., PB-LLM) should be discussed.

**A1:** We sincerely appreciate your valuable suggestion. Due to the page limit of the rebuttal, we have included the comparison with these related works in this [`link`](https://anonymous.4open.science/r/ICML_Rebuttal_9670/comparion_svd_impotance.md). We will further expand our related work section to incorporate a detailed discussion of these studies.

**Q2:** The proposed method shows no advantage over the BF16 baseline in runtime memory.

**A2:** Thank you for your valuable comment. Our Bit-DGDM shows notable memory usage advantages compared to the BF16 baseline. In Table 1, on the QM9 dataset, Bit-DGDM only requires 2.3GB of memory compared to BF16's 3.4GB, a 32% reduction. The results are consistent on other datasets. Since DGDM is primarily a computation-intensive model rather than a large-parameter model, our main focus remains on improving inference speed while maintaining performance. We will incorporate memory analysis in our experimental results.

**Q3:** The idea of dividing weights and activations into difficult-to-quantize parts and easy-to-quantize parts is common.

**A3:** Thank you for your feedback. Though the idea of separating weights and activations has been explored before, our method introduces several key innovations that distinguish it from prior work. (i) For easy-to-quantize weights, we are the first to propose ill-conditioned low-rank weight decomposition, enabling stable decomposition into low-rank components even in the presence of significant outliers. (ii) For difficult-to-quantize weights, we propose the use of $\alpha$-Sparsity, a structured sparsity property that facilitates subsequent inference acceleration.
(iii) For activation outliers, our approach eliminates the need for calibration data to select thresholds, making it more practical.

**Q4:** Whether DGDM has practical and wide application is worth discussing.

**A4:** Thank you for your insightful question. Compared to Gaussian denoising diffusion models, DGDM possesses unique capabilities in modeling **discrete** data, enabling multiple real-world applications, particularly in biological and chemical tasks. In this work, we validate DGDM's effectiveness through two critical applications: molecular generation and inverse protein folding. Furthermore, our proposed Bit-DGDM is specifically designed to facilitate the practical deployment of DGDM under real-world computational constraints.

**Q5:** What theoretical advancements does the proposed ill-conditioned low-rank decomposition have compared to previous work?

**A5:** The key theoretical advancement of our ill-conditioned low-rank decomposition lies in its fundamental difference from conventional SVD-based methods when handling matrices with significant outliers. In standard SVD, the presence of outliers introduces several critical limitations [1,2]. They distort the singular value spectrum by amplifying small singular values associated with high-frequency noise components, consequently degrading the signal-to-noise ratio in the resulting low-rank approximation. This distortion manifests most prominently in the impaired rank selection capability, where the outlier-contaminated singular value spectrum exhibits slowed decay, making traditional truncation criteria unreliable. Building on prior ill-conditioned decomposition works [3,4], our method establishes rigorous recovery guarantees for the ill-conditioned low-rank decomposition of weights. The theoretical framework ensures stable outlier isolation even under significant outliers, while simultaneously maintaining better control over the dense matrix condition number throughout the decomposition process.
[1] Estimating the number of hidden neurons in a feedforward network using the singular value decomposition. 2006.
[2] Robust PCA via outlier pursuit. NeurIPS, 2010.
[3] Accelerating ill-conditioned low-rank matrix estimation via scaled gradient descent. JMLR, 2021.
[4] Learned robust PCA: A scalable deep unfolding approach for high-dimensional outlier detection. NeurIPS, 2021.

**Q6:** What is the basis of rank selection for the low-rank components?

**A6:** Thank you for raising this important question. The selection of rank is based on a systematic trade-off analysis between computational efficiency and precision. Through experiments on the CATH dataset, we observed that low ranks (rank=8) led to significant degradation in graph generation quality, while higher ranks (rank=32) provided marginal quality improvements at the cost of substantial latency overhead. We will include detailed studies on rank selection in the revised manuscript.

||Perplexity|Recovery(%)|Speedup|Mem.(GB)|
|-|-|-|-|-|
|Rank=8|5.1|46.5|2.6|4.7|
|Rank=16|4.5|51.6|2.5|4.9|
|Rank=32|4.5|51.8|2.2|5.4|
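A plain alternating-projections sketch of the $W \approx L R^\top + S$ decomposition discussed in this rebuttal, in the spirit of classic robust PCA. This is not the paper's scaled-gradient algorithm or its recovery guarantees; the matrix, rank, and outlier count below are toy placeholders.

```python
import numpy as np

def lowrank_plus_sparse(W, rank, n_outliers, iters=30):
    """Alternate between (a) putting the n_outliers largest-magnitude
    residual entries into the sparse part S and (b) a truncated SVD of
    W - S for the dense low-rank part. A textbook alternating-projections
    recipe for low-rank + sparse splits, not the paper's method."""
    Lr = np.zeros_like(W)
    for _ in range(iters):
        R = W - Lr
        S = np.zeros_like(W)
        idx = np.unravel_index(
            np.argsort(np.abs(R), axis=None)[-n_outliers:], W.shape)
        S[idx] = R[idx]                                 # sparse outlier part
        U, s, Vt = np.linalg.svd(W - S, full_matrices=False)
        Lr = (U[:, :rank] * s[:rank]) @ Vt[:rank]       # dense low-rank part
    return Lr, S

# Rank-1 background plus one large outlier (hypothetical weights).
W0 = np.outer([1.0, 2.0, 3.0], [1.0, 1.0, 1.0])
W = W0.copy()
W[0, 0] += 100.0
Lr, S = lowrank_plus_sparse(W, rank=1, n_outliers=1)
```

On this toy input the iteration isolates the single outlier into `S` and recovers the rank-1 background in `Lr`, illustrating why separating the outlier first keeps the dense factor well-conditioned.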
Summary: This paper focuses on the quantization of discrete diffusion models for graph data. To achieve this, the authors introduce sparse-dense activation quantization and low-rank decomposition with hardware support. Experimental results demonstrate that the proposed method enhances quantization performance while improving speed.

Claims And Evidence: 1. In lines 60--65, the authors argue that existing quantization methods are insufficient to address the challenges posed by computation boundaries, citing several quantization techniques for LLMs. However, prior work specifically targeting quantization for diffusion models [1,2,3] also addresses these challenges. These methods should be discussed in this context to provide a more comprehensive comparison. 2. In lines 275--277, the authors state that the matrix is ill-conditioned but provide limited explanation of its impact. It would be helpful to clarify how this ill-conditioning affects the convergence of SGD and whether it introduces optimization difficulties. Additional theoretical or empirical analysis would strengthen this claim.

[1] Li, Xiuyu, et al. "Q-Diffusion: Quantizing diffusion models." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
[2] Shang, Yuzhang, et al. "Post-training quantization on diffusion models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[3] So, Junhyuk, et al. "Temporal dynamic quantization for diffusion models." Advances in Neural Information Processing Systems 36 (2023): 48686-48698.

Methods And Evaluation Criteria: 1. The proposed method can be viewed as a combination of existing techniques, and its components lack sufficient novelty. As I understand it, the authors decompose both weights and activations into outliers and a central part. For weight decomposition, the approach closely resembles techniques used in LoRA [1]. Additionally, the concept of decomposed outliers has been previously introduced in [2]. Given these similarities, the paper should better highlight its unique contributions beyond existing work. While the method primarily builds on existing techniques, I acknowledge its contribution, as the implementation of this combination is non-trivial. The approach requires careful design and integration, which adds value despite the lack of fully novel components. 2. The formulation introduces additional computational cost. The proposed method relies on multiple integer multiplications to compensate for quantization errors, which may introduce additional computational overhead. It would be beneficial for the authors to provide a more detailed analysis of the computational complexity and conduct experiments to evaluate the actual impact on efficiency.

[1] Hu, Edward J., et al. "LoRA: Low-rank adaptation of large language models." ICLR, 2022.
[2] Dettmers, Tim, et al. "GPT3.int8(): 8-bit matrix multiplication for transformers at scale." Advances in Neural Information Processing Systems 35 (2022): 30318-30332.

Theoretical Claims: Correct.

Experimental Designs Or Analyses:

## Strength ##
1. The experimental evaluation is fair. The authors compare both memory usage and computation cost in terms of real acceleration, which requires a CUDA implementation. This adds credibility to the experiments and strengthens their impact.
2. The proposed method demonstrates good performance, as it effectively improves quantization accuracy. While there is a minor trade-off between memory and computation cost, such trade-offs are expected in this field and do not detract from the overall contribution.

## Weakness ##
While there are existing quantization techniques for diffusion models, they are not included in the experimental comparisons [1,2,3]. A direct comparison with these methods would provide a clearer understanding of the proposed approach's advantages and limitations.

[1] Li, Xiuyu, et al. "Q-Diffusion: Quantizing diffusion models." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
[2] Shang, Yuzhang, et al. "Post-training quantization on diffusion models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[3] So, Junhyuk, et al. "Temporal dynamic quantization for diffusion models." Advances in Neural Information Processing Systems 36 (2023): 48686-48698.

Supplementary Material: Checked the implementation of CUDA.

Relation To Broader Scientific Literature: None.

Essential References Not Discussed: None.

Other Strengths And Weaknesses:

## Strength ##
1. The paper addresses an important but underexplored problem: quantization for graph discrete diffusion models. It also highlights the issue of weight outliers, which is crucial for improving quantization performance.
2. The paper is well-structured, with a clear presentation of the motivation, background, methodology, and experiments, making it easy to follow.
3. The inclusion of a CUDA implementation enhances the practical applicability of the method by supporting hardware acceleration.
4. The paper provides a theoretical guarantee for the proposed method, strengthening its validity and reliability.

Other Comments Or Suggestions: None.

Questions For Authors: None.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: We highly appreciate your positive reviews and constructive suggestions.

**Q1:** These methods [1,2,3] should be discussed in this context to provide a more comprehensive comparison.

**A1:** Thank you for your constructive suggestion. We note that these methods [1,2,3] were designed for image diffusion models (IDMs). Specifically, [1,2] optimize quantized weights by minimizing the MSE between quantized and full-precision continuous outputs. [3] introduces time-step specific encoding for image diffusion models. However, these approaches cannot be directly applied to Discrete Graph Diffusion Models (DGDMs) due to fundamental incompatibilities. As shown in Remark 3.1, in DGDMs, the intermediate node attributes and graph structures are obtained through discrete sampling from categorical distributions, while IDMs rely on Gaussian noise and continuous denoising processes. Our method advances existing methods through three innovations. (i) Recognizing the significant outliers in model weights, we first propose an ill-conditioned low-rank weight decomposition. This contrasts with the basic SVD approach, allowing our method to achieve better numerical stability. (ii) For the residual component derived from the raw weight and low-rank decomposition, our method enforces $\alpha$-sparsity, enabling efficient sparse matrix multiplication during inference. (iii) For activation outliers, our approach eliminates the need for calibration data to select thresholds, making it more practical. We will add a detailed discussion of these works to our related work section. We sincerely appreciate your valuable feedback.

[1] Q-Diffusion: Quantizing diffusion models. In ICCV, 2023.
[2] Post-training quantization on diffusion models. In CVPR, 2023.
[3] Temporal dynamic quantization for diffusion models. In NeurIPS, 2023.

**Q2:** In lines 275--277, the authors state that the matrix is ill-conditioned but provide limited explanation of its impact.
It would be helpful to clarify how this ill-conditioning affects the convergence of SGD and whether it introduces optimization difficulties. Additional theoretical or empirical analysis would strengthen this claim. **A2:** We sincerely appreciate the reviewer's insightful suggestion. To better illustrate how ill-conditioning affects the optimization of Eqn. (9) ($\|LR^\top + S_W - W\|_F^2$), we have conducted additional **empirical analysis** examining the relationship between the number of outliers and the reconstruction error (measured by Frobenius norm between the reconstructed matrix and the original matrix). These results are provided in this [`link`](https://anonymous.4open.science/r/ICML_Rebuttal_9670/outlier_effect.png). We observe that the reconstruction error (Frobenius norm) increases monotonically with the number of outliers. This correlation indicates that stronger ill-conditioning (caused by more outliers) leads to greater optimization difficulties in matrix reconstruction. Moreover, in this [`link`](https://anonymous.4open.science/r/ICML_Rebuttal_9670/derivation.png), we present a comprehensive derivation of these two scale terms $(R^\top R)^{-1}$ and $(L^\top L)^{-1}$ in Eqn.(13). We will include a more detailed description of this mechanism in the revised version to better clarify these points. Thank you for this valuable suggestion to improve the clarity of our work. **Q3:** It would be beneficial for the authors to provide a more detailed analysis of the computational complexity and conduct experiments to evaluate the actual impact on efficiency. **A3:** We sincerely thank the reviewer for raising this important point regarding the computational cost of our method. We agree that the additional multiplications introduced to compensate for quantization errors incur some computational overhead, but this overhead is marginal in practice. To evaluate this, we conducted an ablation study (Figure 4 and Sec 5.4). 
The results show that the ablation variant (iii) Bit-DGDM-$S_XS_W$, which removes multiplications for sparse components, achieves only a modest speed improvement (from 2.5× to 2.6×) but suffers significant degradation in generation quality, as evidenced by the perplexity increase (4.5 → 4.7). This performance decline highlights the critical role of these operations in mitigating high-magnitude outliers. Furthermore, compared to threshold-based alternatives (variant (ii)), our method not only preserves competitive generation quality but also delivers better inference speed (2.5× vs. 2.2×). This efficiency gain stems from our use of ill-conditioned low-rank decomposition. Collectively, these experiments demonstrate that the additional computational overhead is negligible and does not substantially impact inference speed. In the revised version, we will expand our empirical analysis to better demonstrate the impact of each proposed component in our work. Once again, we deeply appreciate this valuable feedback, which will undoubtedly help improve the clarity and rigor of our work.

---

Rebuttal Comment 1.1: Comment: Thanks for the authors' reply. I will keep the positive rating for now.

---

Reply to Comment 1.1.1: Comment: Thank you for your response. If you have any further suggestions, feel free to share them. Thank you again for your constructive comments.
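The preconditioned update discussed in A2, with the scale terms $(R^\top R)^{-1}$ and $(L^\top L)^{-1}$ in the style of ScaledGD (Tong et al., 2021), can be sketched as follows. The objective here is the plain factorization loss $\|LR^\top - M\|_F^2$, not the paper's Eqn. (13), and the data, step size, and initialization are illustrative assumptions.

```python
import numpy as np

def scaled_gd(M, rank, eta=0.5, iters=150, jitter=0.0, seed=0):
    """ScaledGD-style factorization of M ~ L @ R.T: gradient steps on the
    factors, right-preconditioned by (R^T R)^{-1} and (L^T L)^{-1}, so the
    local rate does not degrade with the condition number of M. Spectral
    initialization, optionally perturbed by `jitter`."""
    rng = np.random.default_rng(seed)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    L = U[:, :rank] * np.sqrt(s[:rank]) + jitter * rng.normal(size=(M.shape[0], rank))
    R = Vt[:rank].T * np.sqrt(s[:rank])
    for _ in range(iters):
        E = L @ R.T - M                                    # residual
        L, R = (L - eta * E @ R @ np.linalg.inv(R.T @ R),  # preconditioned steps
                R - eta * E.T @ L @ np.linalg.inv(L.T @ L))
    return L, R

# Rank-2 matrix with well-separated scales (condition number ~100).
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 2)) * np.array([100.0, 1.0])
B = rng.normal(size=(5, 2))
M = A @ B.T
L, R = scaled_gd(M, rank=2, jitter=1e-3)
```

Without the two inverse-Gram preconditioners this reduces to vanilla factored gradient descent, whose step size and rate are throttled by the smallest singular value, which is the ill-conditioning issue the rebuttal analyzes.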
Summary: This paper presents Bit-DGDM, an advanced post-training quantization framework developed for Discrete Graph Diffusion Models. The proposed framework introduces two innovations, (1) a sparse-dense activation quantization mechanism and (2) an ill-conditioned low-rank weight decomposition technique, to effectively address two key challenges: the computational bottleneck in DGDMs and the outliers in weights and activations. The efficacy of the framework is rigorously validated through extensive experiments across a variety of graph generation tasks, demonstrating superior performance compared to existing quantization baselines.

Claims And Evidence: The paper's claims are supported by clear evidence.

Methods And Evaluation Criteria: The framework proposed in this study shows a high degree of alignment with the challenges intrinsic to DGDMs. The authors present an incisive analysis of prevailing issues, particularly emphasizing the outliers and computational intensity. The evaluation is rigorously designed, incorporating well-established benchmark datasets and scientifically validated metrics, thereby substantiating the efficacy of the proposed method.

Theoretical Claims: I have checked the proofs and theoretical claims presented in the paper. The authors provide clear references for the work they rely on and explicitly delineate the underlying premises and assumptions that validate their theoretical propositions. The theoretical exposition is systematically organized.

Experimental Designs Or Analyses: I have reviewed the paper's experimental design and results. The selected baselines are the most advanced methods in the fields of LLMs and image diffusion models, and the datasets utilized for validation are the mainstream benchmarks for graph generation. The experimental design is convincing, and the validation procedures demonstrate the efficacy of the proposed method.

Supplementary Material: I have checked the theoretical proofs, the analysis of time complexity, and the implementation details of the kernel. The proofs are well-organized, leveraging basic principles from robust PCA, with each lemma and theorem appropriately referenced, thereby ensuring academic integrity. The time complexity analysis is exceptionally thorough. Besides, the kernel implementation is clear, with well-documented pseudocode. The supplementary materials are comprehensive and provide robust support for the paper's claims.

Relation To Broader Scientific Literature: The contributions of this paper significantly extend the scope of prior quantization research by uncovering critical insights tailored to discrete graph diffusion models. The authors prove that computations, instead of memory loading, constitute the primary bottleneck in DGDMs. Besides, the study reveals the presence of outliers in both activations and weights within DGDMs. To solve this, the authors innovatively leverage robust PCA for weight decomposition and employ sparse matrices to handle outliers.

Essential References Not Discussed: The important references I am familiar with have been mentioned in this paper.

Other Strengths And Weaknesses: The other strengths of the paper are as follows: (1) The paper is well-structured and easy to follow. (2) The proposed quantization method is effective in solving specific challenges in DGDMs. (3) The experimental results show the effectiveness of the proposed method in both generation performance and inference speed.

Weaknesses: (1) The font in the figures of the paper should be consistent to make it easier to read. (2) The time complexity analysis should be extensively introduced in the main paper.

Other Comments Or Suggestions: I suggest that the authors use a consistent font in the figures of the paper to make it easier to read.

Questions For Authors: See above. Besides, I would like to know the time cost for the quantization of DGDMs.

Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: We very much appreciate your constructive comments. For your concerns: **Q1:** The font in the figures of the paper should be consistent to make it easier to read. **A1:** Thank you for your helpful feedback. We have updated all figures to use Times New Roman font, ensuring formatting consistency. The updated figures are available here: [https://anonymous.4open.science/r/ICML_Refined_Figures_9670](https://anonymous.4open.science/r/ICML_Refined_Figures_9670) **Q2:** The time complexity analysis should be extensively introduced in the main paper. **A2:** We appreciate your valuable suggestion. Due to space limitations in the main manuscript, we have included the detailed complexity analysis in Appendix D. In our future version, we will add a concise summary of the key complexity results in Sec 4.3 of the main paper. **Q3:** The time cost for the quantization of DGDMs. **A3:** Thank you for your insightful question. As detailed in Appendix F.3, the quantization process demonstrates practical efficiency. Specifically, for the DiGress model on the QM9 dataset, the ill-conditioned decomposition requires 13.6 minutes and memory usage remains manageable at 2.1 GB. We emphasize that this computational overhead is highly acceptable given the significant inference acceleration achieved through quantization.
Summary: This paper proposes post-training quantization (PTQ) methods to quantize discrete graph diffusion models (DGDM). The paper first analyzes outlier distributions of weights and activations in DGDM. For activations, the proposed method split activation matrices into high-precision sparse matrix (outliers) and low-precision dense matrix. Then, weights are split into sparse matrices and low-rank matrices which represent dense matrices. By utilizing low-bit and sparse-dense matrix multiplication CUDA kernels, the proposed method not only achieves better accuracy and perplexity, but also fast computation. Claims And Evidence: The paper’s claim is convincing and supported by both empirical and theoretical evidence. Methods And Evaluation Criteria: The proposed method makes sense and remedies the computation bottlenecks of DGDM. Also, the motivational studies show that the proposed method is valid and aims to solve realistic problems. The paper also shows extensive evaluation, ablation, and sensitivity studies. However, the reviewer finds that SVDQuant (Li et al., 2024) shares a similar idea regarding outliers and low-rank computation. Therefore, the reviewer suggests adding more comparisons with SVDQuant. Theoretical Claims: No issues. Experimental Designs Or Analyses: The experimental results show clear advantages of the proposed method in terms of accuracy and perplexity. However, the proposed method shows lower speedup and higher memory usage compared to some of the baselines. Therefore, as there exists some trade-off between accuracy and efficiency, the reviewer suggests adding more analysis regarding the tradeoff. Supplementary Material: The reviewer appreciates algorithms, detailed experimental settings, and more analyses. Relation To Broader Scientific Literature: The contribution of the paper is related to neural network quantization as the paper analyzes and reduces the impact of outliers in target matrices. 
Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper is well-written and provides a detailed explanation of the preliminary and related work section. Other Comments Or Suggestions: The reviewer suggests that using Times font in the figures would help make the paper more consistent in format. Questions For Authors: The questions of the reviewer can be summarized as follows: 1. What are the differences between the proposed method and SVDQuant? 2. How significant is the tradeoff between the efficiency and effectiveness of the proposed method? This question is related to the design of the method as it contains high-precision multiplications in addition to the low-bit quantized operations. Code Of Conduct: Affirmed. Overall Recommendation: 4
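For concreteness, the sparse-dense activation split described in this review (high-precision sparse outliers plus a low-bit dense part) can be sketched as follows. This is a minimal illustration with an assumed outlier threshold and naive symmetric 4-bit quantization, not the paper's actual implementation or kernels:

```python
import numpy as np

def sparse_dense_split(x, threshold=3.0, bits=4):
    # Outliers (|x| > threshold) go to a full-precision sparse matrix;
    # the remaining dense part is quantized with naive symmetric uniform
    # quantization to `bits` bits.
    mask = np.abs(x) > threshold
    sparse = np.where(mask, x, 0.0)          # high-precision outliers
    dense = np.where(mask, 0.0, x)           # inliers, to be quantized
    qmax = 2 ** (bits - 1) - 1
    scale = max(float(np.abs(dense).max()), 1e-8) / qmax
    q = np.clip(np.round(dense / scale), -qmax, qmax)
    return sparse, q * scale                 # sparse part + dequantized dense part

x = np.array([[0.1, -0.5, 12.0],
              [0.3, -8.0, 0.2]])
s, d = sparse_dense_split(x)
# s + d approximates x; the outliers 12.0 and -8.0 are kept exactly in s
```

Because the outliers never pass through the quantizer, the quantization step size is set by the inlier range only, which is the point of the split.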
Rebuttal 1: Rebuttal: We very much appreciate your positive comments and constructive suggestions. **Q1:** What are the differences between the proposed method and SVDQuant? **A1:** Thank you for your insightful question. Our method introduces two key innovations over SVDQuant. (i) Recognizing the presence of significant outliers in model weights, we propose an ill-conditioned low-rank weight decomposition method. This contrasts with SVDQuant's conventional SVD approach, allowing our method to achieve better numerical stability. (ii) For the residual component derived from the raw weight and low-rank decomposition, while SVDQuant maintains a dense structure, our method enforces $\alpha$-sparsity. This constraint ensures that no more than $\alpha$ proportion of elements in each row and column are non-zero, enabling efficient sparse matrix multiplication during inference. Consequently, our approach not only preserves precision but also achieves consistent acceleration, whereas SVDQuant’s dense residuals incur higher computational costs. **Q2:** How significant is the tradeoff between the efficiency and effectiveness of the proposed method? This question is related to the design of the method as it contains high-precision multiplications in addition to the low-bit quantized operations. **A2:** Thank you for your question. The trade-off between computational efficiency and model effectiveness is a central consideration in our model design. Our experimental results (Figure 4) demonstrate that while the high-precision multiplications do introduce some computational overhead, they are essential for maintaining the quality of generated graphs in precision-critical applications like molecular generation and protein inverse folding. 
Specifically, while removing the multiplication operations for sparse components (as in variant (iii) Bit-DGDM-$S_XS_W$) yields a slight speed improvement (2.5× → 2.6×), it comes at a significant cost in generation quality, as seen in the perplexity degradation (4.5 → 4.7). This confirms that the additional operations play a crucial role in handling high-magnitude outliers effectively. Moreover, compared to the threshold-based variant (ii), which removes weight outliers through thresholds, our method not only maintains competitive generation quality but also achieves better inference speed (2.5× vs. 2.2×). This gain stems from our use of ill-conditioned low-rank decomposition, which remains computationally efficient and effective in scenarios with significant outliers. **Q3:** The reviewer suggests using Times font in the figures helps to make the paper more consistent in format. **A3:** Thank you for your constructive suggestion. We have revised the font in all figures to Times font to ensure consistency with the paper's format. The updated figures are available at the following link: [https://anonymous.4open.science/r/ICML_Refined_Figures_9670](https://anonymous.4open.science/r/ICML_Refined_Figures_9670). Please let us know if you have any additional feedback. We would be happy to make further improvements. --- Rebuttal Comment 1.1: Comment: The author's rebuttal successfully resolved the reviewer's concerns, so this reviewer stays in a positive rating for this paper. I appreciate the authors' efforts and time in preparing the rebuttal. --- Reply to Comment 1.1.1: Comment: Thank you for your response. If you have any further suggestions, feel free to tell us. Thank you again for your constructive comments and suggestions.
ReverB-SNN: Reversing Bit of the Weight and Activation for Spiking Neural Networks
Accept (poster)
Summary: This paper introduces a novel binary design in SNN termed ReverB, which uses real-valued activations and binary weights, merging the characteristics of both BNNs and SNNs. This innovative approach retains the energy efficiency advantages of SNNs in inference and their temporal properties. Additionally, the paper proposes a reparameterization method to enhance the performance of the ReverB network. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. The method is evaluated across various network architectures and datasets. Methods And Evaluation Criteria: The method proposed in this paper is effective. The new network, positioned between BNNs and SNNs, is expected to fully leverage the advantages of both, and potentially start a new field in lightweight networks. The extensive experimental validation across diverse architectures and datasets convincingly demonstrates the superiority of this method over current state-of-the-art approaches. Theoretical Claims: Not applicable. This paper does not involve complex theoretical proofs. Experimental Designs Or Analyses: The experimental design is generally reasonable. The method is evaluated across various network architectures and datasets. The authors also conducted a series of ablation experiments to evaluate the effectiveness of the proposed method. Supplementary Material: The supplementary materials provide the code for the paper. Relation To Broader Scientific Literature: This paper introduces a novel binary design in SNN termed ReverB, which uses real-valued activations and binary weights, merging the characteristics of both BNNs and SNNs. It potentially starts a new field in lightweight networks. Essential References Not Discussed: I think the paper has cited enough relevant literature. Other Strengths And Weaknesses: The paper is well-written, the idea is interesting, and the results are impressive.
Other Comments Or Suggestions: None Questions For Authors: 1. Does the ReverB network have a reset mechanism, and how does it function? 2. Does the ReverB network use the Batch Normalization (BN) layer, and can ReverB eliminate it during inference similar to SNNs? 3. Why does ReverB exhibit performance advantages over traditional SNNs, and can the authors provide an explanation for this? Code Of Conduct: Affirmed. Overall Recommendation: 4
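The neuron dynamics the reviewer's questions probe — real-valued spikes combined with binary weights — can be sketched as follows. This is a hedged reconstruction from the descriptions in this thread (emit the membrane potential on firing, reset to zero, decay otherwise); the update order, decay form, and parameter values are assumptions for illustration, not the paper's exact equations:

```python
import numpy as np

def reverb_layer(inputs, W, v_th=0.0, tau=0.5):
    # Binary weights in {-1, 1}; a neuron whose membrane potential exceeds
    # the threshold emits the real-valued potential itself and resets to 0,
    # otherwise the potential decays (leak factor tau) into the next step.
    Wb = np.where(W >= 0, 1.0, -1.0)           # binarized weights
    u = np.zeros(W.shape[0])
    outputs = []
    for x_t in inputs:                          # iterate over timesteps
        u = tau * u + Wb @ x_t                  # integrate; Wb is +/-1, so this is addition-only
        fire = u > v_th
        outputs.append(np.where(fire, u, 0.0))  # emit the membrane potential
        u = np.where(fire, 0.0, u)              # reset fired neurons
    return np.array(outputs)

# constant positive drive: with all-positive weights, both neurons fire
# every step and emit the integrated value 2.0
out = reverb_layer(np.ones((3, 2)), np.array([[0.3, 1.2], [2.0, 0.7]]))
```

Note that because the weights are ±1, the integration step needs no multiplications, while the spike values themselves remain real-valued.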
Rebuttal 1: Rebuttal: Thanks for your efforts in reviewing our paper and your recognition of our novel method and effective results. The responses to your questions are given piece by piece as follows. **Question 1**: Does the ReverB network have a reset mechanism, and how does it function? **A1**: Thanks for the question. The ReverB network has a reset mechanism. When the membrane potential exceeds the firing threshold, it will emit itself and then reset to 0; otherwise, it will decay to a new value and then be updated with the next-time input. **Question 2**: Does the ReverB network use the Batch Normalization (BN) layer, and can ReverB eliminate it during inference similar to SNN? **A2**: Thanks for the question. We use the BN layer in the network and it can be eliminated at inference. At inference time, the batch statistics (mean and variance) are no longer needed. Instead, we can fold the normalization, scaling, and shifting operations into the weights and biases of the previous layer. Then $W' = \frac{\gamma}{\sigma} W$ and $b' = \frac{\gamma}{\sigma}(b - \mu) + \beta$. In our paper, we propose the re-parameterization method to fold the $\gamma/\sigma$ into the previous layer activation (see Eq. 16). Thus the BN can still be removed in ReverB. **Question 3**: Why does ReverB exhibit performance advantages over traditional SNNs, and can the authors provide an explanation for this? **A3**: Thanks for the question. This is because ReverB quantizes the weights while traditional SNNs quantize the activations. Prior work highlights greater accuracy degradation from quantizing activations than from quantizing weights. This is due to several key reasons: **First**, activations often have a much wider and more varied dynamic range than weights. Due to the large dynamic range and possibly non-uniform distribution of activations, quantizing them requires compressing a broader set of values into a smaller bit-width, which can result in greater precision loss.
**Second**, activations change frequently during the forward pass, as they are directly influenced by the input data and the weights. The values can vary significantly between different layers, and this variability can be much greater in deeper layers. Such frequent and complex changes make activations more sensitive to quantization errors. Weights, on the other hand, are stable and unrelated to the input, making them less prone to large errors when quantized. **Third**, we use the concept of entropy to show that ReverB-SNN has a higher information capacity than the vanilla SNN, which explains its performance advantage over traditional SNNs. The representational capability $ \mathcal{C}(\mathbf{X}) $ of a set $ \mathbf{X} $ is determined by the maximum entropy of $ \mathbf{X} $, expressed as: $\mathcal{C}(\mathbf{X}) = \max \mathcal{H}(\mathbf{X}) = - \sum_{x \in \mathbf{X}} p_{\mathbf{X}}(x) \log p_{\mathbf{X}}(x),$ where $ p_{\mathbf{X}}(x) $ is the probability of a sample $ x $ from $ \mathbf{X} $. The following proposition is then clear: for a set $ \mathbf{X} $, when the probability distribution of $ \mathbf{X} $ is uniform, i.e., $ p_{\mathbf{X}}(x) = \frac{1}{N} $, where $ N $ is the total number of samples in $ \mathbf{X} $, the entropy $ \mathcal{H}(\mathbf{X}) $ reaches its maximum value of $ \log(N) $. Hence, we conclude that $ \mathcal{C}(\mathbf{X}) = \log(N) $. Using the proposition, we can evaluate the representational capacity of the vanilla SNN and our model. Let us consider two connected neuron layers. Since the connectivity of the neurons is the same for both the vanilla SNN and our model, we focus on an arbitrary pair of connected neurons from different layers. For the vanilla SNN, the values output from one neuron to another are $\{0, 1\} \times v$, where $v$ is the fixed weight between the two neurons.
Thus one neuron in the vanilla SNN can only transmit two distinct values, and $ \mathcal{C}(\mathbf{X}) = \log(2) = 1$. For our model, the values output from one neuron to another are $u \times \{-1, 1\}$, where $u$ is the real-valued spike and can change. $u$ requires 32 bits, so the number of possible values of $u$ is $2^{32}$ and the number of possible values sent from one neuron to another is $2^{32+1}$, giving $ \mathcal{C}(\mathbf{X}) = \log(2^{32+1}) = 33$. This highlights the limited representational capacity of the vanilla SNN compared to our model. --- Rebuttal Comment 1.1: Comment: Thanks for the author's responses. My concerns have been addressed.
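The BN folding stated in A2 can be checked numerically. Below is a minimal sketch for a linear layer followed by BN with fixed inference statistics, using the standard convention $y = \gamma(x-\mu)/\sigma + \beta$; the shapes and values are assumed for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))
b = rng.standard_normal(4)
gamma = rng.standard_normal(4)
beta = rng.standard_normal(4)
mu = rng.standard_normal(4)
sigma = rng.uniform(0.5, 2.0, 4)    # running std, fixed at inference

x = rng.standard_normal(3)
y_bn = gamma * ((W @ x + b) - mu) / sigma + beta   # linear layer followed by BN

W_fold = (gamma / sigma)[:, None] * W              # fold scale into weights
b_fold = gamma * (b - mu) / sigma + beta           # fold shift into bias
y_fold = W_fold @ x + b_fold                       # single linear layer, BN removed
```

The two outputs agree exactly, which is why BN costs nothing at inference once the statistics are frozen.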
Summary: The paper proposes an SNN design with real-valued activations and binary weights to boost information capacity while keeping energy efficiency. Its novel bit-reversal strategy and adaptive weight scaling are key innovations. However, the paper’s motivation and presentation lack clarity and could benefit from visual aids. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: The paper doesn't provide formal proofs but offers partial derivations that seem generally sound, though some steps lack full justification, leaving a degree of uncertainty. Experimental Designs Or Analyses: The overall experimental design appears sound and appropriate for the problem at hand. While some aspects could be clarified further, these do not undermine the main findings. Supplementary Material: No supplementary material was provided for review. Relation To Broader Scientific Literature: The paper builds on established SNN research, particularly findings that quantizing activations causes more accuracy loss than weights. Its contributions extend prior work on efficient, multiplication-free SNNs and adaptive parameterization. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths + The approach preserves the multiplication-free, event-driven nature of SNNs, ensuring that energy efficiency remains a strong point despite the introduction of real-valued activations. + Incorporating a trainable factor for binary weights and using re-parameterization during inference allows the network to learn optimal weight magnitudes while still converting to a standard binary format. This offers a good balance between learning flexibility and inference efficiency. Weaknesses - The paper does not clearly explain the rationale behind reversing the bits of weight and activation. This lack of clarity could make it challenging for readers to fully grasp why this approach is beneficial. 
- The section detailing the contributions is somewhat lengthy and could be streamlined. Consolidating similar ideas might improve readability and focus. - The comparisons in Table 2 and Table 4 appear to rely on methods from 2022, missing more recent literature. - The paper could benefit from additional visualizations. Graphs or schematic diagrams illustrating the architecture and the re-parameterization process would enhance understanding and provide clearer insights into the proposed method. Other Comments Or Suggestions: N/A Questions For Authors: 1. Could you clarify the theoretical motivation and advantages behind "Reversing Bit"? 2. The paper lacks visualizations; could you add diagrams of the network architecture or re-parameterization process for clarity? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your efforts in reviewing our paper and your recognition of our novel bit-reversal strategy and adaptive weight scaling. The responses to your weaknesses and questions are given piece by piece as follows. **Weakness 1**: The paper does not clearly explain the rationale behind reversing the bits of weight and activation. This lack of clarity could make it challenging for readers to fully grasp why this approach is beneficial. **R1**: Thanks for the advice. Here, we use the concept of entropy to show that ReverB-SNN has a higher information capacity than the vanilla SNN. The representational capability $ \mathcal{C}(\mathbf{X}) $ of a set $ \mathbf{X} $ is determined by the maximum entropy of $ \mathbf{X} $, expressed as: $\mathcal{C}(\mathbf{X}) = \max \mathcal{H}(\mathbf{X}) = - \sum_{x \in \mathbf{X}} p_{\mathbf{X}}(x) \log p_{\mathbf{X}}(x),$ where $ p_{\mathbf{X}}(x) $ is the probability of a sample $ x $ from $ \mathbf{X} $. The following proposition is then clear: for a set $ \mathbf{X} $, when the probability distribution of $ \mathbf{X} $ is uniform, i.e., $ p_{\mathbf{X}}(x) = \frac{1}{N} $, where $ N $ is the total number of samples in $ \mathbf{X} $, the entropy $ \mathcal{H}(\mathbf{X}) $ reaches its maximum value of $ \log(N) $. Hence, we conclude that $ \mathcal{C}(\mathbf{X}) = \log(N) $. Using the proposition, we can evaluate the representational capacity of the vanilla SNN and our model. Let us consider two connected neuron layers. Since the connectivity of the neurons is the same for both the vanilla SNN and our model, we focus on an arbitrary pair of connected neurons from different layers. For the vanilla SNN, the values output from one neuron to another are $\{0, 1\} \times W$, where $W$ is the fixed weight between the two neurons.
Thus one neuron in the vanilla SNN can only transmit two distinct values, and $ \mathcal{C}(\mathbf{X}) = \log(2) = 1$. For our model, the values output from one neuron to another are $o \times \{-1, 1\}$, where $o$ is the real-valued spike and can change. $o$ requires 32 bits, so the number of possible values of $o$ is $2^{32}$ and the number of possible values sent from one neuron to another is $2^{32+1}$, giving $ \mathcal{C}(\mathbf{X}) = \log(2^{32+1}) = 33$. This highlights the limited representational capacity of the vanilla SNN compared to our model. **Weakness 2**: The section detailing the contributions is somewhat lengthy and could be streamlined. Consolidating similar ideas might improve readability and focus. **R2**: Thanks for the advice. We will further polish our contributions in the final version. **Weakness 3**: The comparisons in Table 2 and Table 4 appear to rely on methods from 2022, missing more recent literature. **R3**: Thanks for the advice. We have added more comparisons as below. It can be seen that our method also performs on par with or better than state-of-the-art methods.
| Dataset | Method | Architecture | Timestep | Accuracy |
| --- | --- | --- | --- | --- |
| CIFAR10 | Q-SNNs(ACMMM 2024) | ResNet19 | 2 | 95.54% |
| | AGMM(AAAI 2025) | ResNet19 | 2 | 96.33% |
| | FSTA-SNN(AAAI 2025) | ResNet20 | 4 | 94.72% |
| | FSTA-SNN(AAAI 2025) | ResNet19 | 2 | 96.52% |
| | TAB(NeurIPS 2024) | ResNet19 | 2 | 94.73% |
| | **Our method** | ResNet20 | 4 | **94.96%** |
| | **Our method** | ResNet19 | 2 | **96.62%** |
| CIFAR100 | SSCL(AAAI 2024) | ResNet20 | 2 | 72.86% |
| | SSCL(AAAI 2024) | ResNet19 | 2 | **78.79%** |
| | TAB(NeurIPS 2024) | ResNet19 | 2 | 76.31% |
| | **Our method** | ResNet20 | 4 | **73.28%** |
| | **Our method** | ResNet19 | 2 | 78.46% |
| CIFAR10-DVS | SSCL(AAAI 2024) | ResNet19 | 10 | 80.00% |
| | SpikeFormer(ICLR 2023) | SpikeFormer | 10 | 78.90% |
| | **Our method** | ResNet19 | 10 | **80.50%** |

**Weakness 4**: The paper could benefit from additional visualizations. Graphs or schematic diagrams illustrating the architecture and the re-parameterization process would enhance understanding and provide clearer insights into the proposed method. **R4**: Thanks for the advice. We have added the visualizations for the re-parameterization process. Please see it at https://imgur.com/GiJYTke **Question 1**: Could you clarify the theoretical motivation and advantages behind "Reversing Bit"? **A1**: Thanks for the question. Please see our response to **Weakness 1.** **Question 2**: The paper lacks visualizations; could you add diagrams of the network architecture or re-parameterization process for clarity? **A2**: Thanks for the advice. We have added the visualizations for the re-parameterization process. Please see it at https://imgur.com/GiJYTke --- Rebuttal Comment 1.1: Comment: The authors have addressed most of my concerns, which makes me relatively satisfied with the revisions. I lean towards a weak accept.
However, the paper still requires further improvements in several areas: (1) my primary interest was in the visualization of experimental results rather than inference visualization, a focus that might not have been clearly communicated in the submission; (2) the qualitative and quantitative comparisons could be enhanced by including more recent work, such as [a] Towards Low-latency Event-based Visual Recognition with Hybrid Step-wise Distillation Spiking Neural Networks; (3) some formatting and typesetting issues need to be corrected.
Summary: This paper addresses the issue of information loss in Spiking Neural Networks (SNNs) due to the binarization of activations. The main contribution in the paper is to use binary weights (in $\\{-1,1\\}$) and real-valued spikes, instead of binary spikes and real-valued weights. This initial contribution is extended by allowing the binary weights to take their values in $\\{-\alpha, \alpha\\}$, where $\alpha$ is a trainable parameter; then, a re-parametrization technique is proposed to take $\alpha$ into account at inference while going back to binary weights in $\\{-1,1\\}$. Experiments on three datasets (CIFAR-10, CIFAR-100, ImageNet, with two architectures tested per dataset) are reported. Results suggest that the proposed contributions improve the classification accuracy over the baseline architectures, and perform on par with or better than state-of-the-art architectures. ## Update after rebuttal The rebuttal of the authors addressed my concerns about the rationale and design of the contribution. However, I still have concerns about the way hyperparameters were chosen/optimized, and the validity of the model used to estimate energy consumption. So, I increase my score to _2. Weak reject_. Claims And Evidence: The paper makes the following claims: 1. Weight binarization is less detrimental to performance than activation binarization. This statement seems to be supported by experimental results. 2. The proposed method (binarized weights, non-binarized spikes) maintains the event-based nature of the network. This claim seems correct, by construction. 3. The proposed method only requires additions. It seems to only apply to inference, not training, although this is no explicitly stated in the paper. While this claim is true for the first version of ReverB, it looks like this is not the case for the learnable version, which requires multiplications, according to Equation 18. Methods And Evaluation Criteria: 1. 
The paper proposes to solve the issue of information loss due to binary spikes in SNNs by binarizing weights instead of spikes. On the one hand, I find it interesting to change the perspective on this issue by moving the quantization issue from one variable of the model to another. On the other hand, I wonder whether this approach could be applied in practice. SNNs are valuable when they can be implemented on low-power neuromorphic hardware. Low power consumption is made possible thanks to the use of binary spikes by the model, and neuromorphic hardware is designed with this in mind. Switching binarization from activations to weights may not have the same benefits. This question is not addressed in the paper. 2. The evaluation criteria are the accuracy of the model and its energy consumption, which are relevant criteria in this context. However, the evaluation of the energy consumption of the model is based on the method from (Hu et al., 2021), who base their estimation on the specifications of particular hardware devices. These devices may not be able to run natively networks with real-valued spikes like the ones proposed in this paper. The relevance of this methodology for the network described here is not demonstrated in the paper. In addition, I believe memory consumption should also be considered as an evaluation criterion, as, here again, it may be very different with the proposed model. Theoretical Claims: 1. The paper provides the formulas for gradient computation in the proposed models (Equations 11, 14, and 15), but does not provide the derivation of these results, so I could not check whether they are sound. The authors should provide (for instance, as supplementary material) the complete derivation that leads to these equations, as it is not straightforward considering the changes they made to the initial model (continuous weights, binary spikes) used in the STBP paper (Wu et al., 2018). 2.
Equations 14 and 11 are the same, whereas Equation 14 should show how to compute the gradients with $\mathbf{W}^b_{\mathrm{trainable}}$. I believe there might be an error here. 3. The authors state (Section 3.2) that "the firing activity of spiking neurons becomes differentiable" in their model. This statement should be further justified. To my understanding, although the spikes take real values, they are still local in time and generated through a thresholding function, so there are discontinuities in the activation function that should be problematic in terms of differentiability. Experimental Designs Or Analyses: I reviewed Section 4 entirely. My comments about the experiments are detailed below. 1. The firing threshold $V_{\mathrm{th}}$ is said to be set initially to 0. Is this the actual value or a typo? Also, can it change over time? This was not mentioned previously in the paper. 2. A number of elements are missing from the experimental settings, which prevents reproducing the experiments: - the optimizer used to train the network and its hyperparameters (e.g., number of epochs), - the protocol used to determine hyperparameters, - data pre-processing (if any), - preparation of the data (train/validation/test splits, and the size of mini-batches). 3. In Table 5, the figures for learnable ReverB are missing. Supplementary Material: The paper does not include supplementary material. Relation To Broader Scientific Literature: 1. The problem addressed in this paper is relevant to the community of neuromorphic machine learning. It has been addressed in a number of previous papers, as mentioned in Section 2. To the best of my knowledge, there is no previous work that proposes the same contribution as the one in this paper. 2. The use of binary weights has been explored in standard ANNs, for instance in (Courbariaux et al., 2015). The paper does not mention this line of work. - (Courbariaux et al., 2015) M. Courbariaux, Y. Bengio, J.P. David.
BinaryConnect: Training Deep Neural Networks with binary weights during propagations. Neural Information Processing Systems (NeurIPS) (2015). Essential References Not Discussed: I did not identify any essential references that are not cited. Other Strengths And Weaknesses: 1. After Equation 12, it is stated that $\alpha \in \mathbb{R}^{C \times 1 \times 1}$. This dimension is not quite clear to me, since I assumed from the beginning that the previous equations were about fully-connected layers. It seems that the dimensions of tensors do not match here. The dimensions of tensors should be provided to make this clearer. 2. Some elements in Algorithm 1 are not clear. - It is not clear whether the `for` loop in the training algorithm loops over mini-batches or epochs. - In line 2 of the re-parameterization algorithm, why does $\alpha_i$ unfold into $i-1$ functions? - Why are labels used for inference? They should be used for evaluation only, not inference. Other Comments Or Suggestions: 1. Figure 1 is not very informative, as the principle of the contribution is simple enough and clearly stated in the paper. This figure could be removed. 2. In Equation 3, $T$ is not defined. 3. (Rathi & Roy, 2020) is cited to motivate the choice of the surrogate gradient, however a different surrogate gradient function is used in that paper. 4. In Section 3.2, paragraphs "Event-based Advantage Retaining" and "Addition-only Advantage Retaining" could be significantly shortened as they state straightforward properties of their model. 5. Parentheses are not typeset correctly in several equations (`\left(` and `\right)` should be used). 6. Before Equation 16, I think the equation number in "Eq. 16 can be further written as" is not the right one. 7. In Section 4.2, it is stated that top-1 accuracy and the mean accuracy and standard deviation are presented. I guess what is meant is that mean (top-1) accuracy is reported? 8. 
Percentages (%) are used instead of percentage points (pp) when presenting differences in accuracies. 9. Some typos should be corrected: - "trainable" is misspelled "trainble" in equations, - "binarization" is misspelled "binaration" after Equation 11, - "will converted": "will be converted", - before Equation 17, I guess $\alpha_1$ should actually be $\alpha_l$, - in the description of the experimental settings (Section 4), $\tau$ becomes $\tau_{\mathrm{decay}}$, - "peak accuracies of 95.51%": "peak accuracies of 95.53%", - "does not enjoy the multiplication-free" needs to be rephrased (Section 5), - "obviously" -> "Obviously", - the title of Section 4 should be spelled "Experiments". 10. The paper contains some subjective over-statements, which should be avoided. For instance: - "the well-trained SNN" (in Algorithm 1), - "Remarkably, our method achieves [...]", - "achieved impressive accuracies". Questions For Authors: 1. Why consider binary weights in $\\{-1,1\\}$ and not ternary weights in $\\{-1,0,1\\}$? The latter could enhance the learning capacity of neurons. 2. SNNs are valuable when they can be deployed on low-power neuromorphic hardware. Such hardware is typically designed with binary spikes in mind. Can the authors elaborate on the compatibility of their approach with current neuromorphic hardware, and what changes (if needed) should be applied to neuromorphic architectures to run this type of model? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for your efforts in reviewing our paper. We will try to make the work clearer for you. The responses to your concerns and questions are given as follows. **Concern 1**: The addition-only property does not hold for the learnable version. **R1**: Sorry for the confusion. Since $\alpha$ is fixed after training, it can be folded into the activation function before inference. Then inference also requires only additions. **Concern 2**: The model may not have the low-power-consumption benefits. **R2**: Thanks for the question. We agree that on much low-power neuromorphic hardware, the low power consumption comes from the use of binary spikes. However, there are also many hardware platforms that realize low power consumption by replacing multiplications with additions, like [1,2]. On such hardware, our model can keep low power consumption too. What’s more, there is also no available hardware that supports SNN-based transformer architectures, which have become popular recently. Our model and SNN-based transformer architectures could drive further hardware development, much as earlier models drove the emergence of hardware like Loihi and TrueNorth. [1] A systolic SNN inference accelerator and its co-optimized software framework. [2] Tianjic chip. **Concern 3**: The evaluation of memory consumption. **R3**: Thanks for the question. For memory consumption, the vanilla SNN’s weights use 32 bits, while our model’s weights use only 1 bit. Thus the memory consumption of our model is much less than that of the vanilla SNN. **Concern 4**: Provide the derivation for Equations 11, 14, and 15. **R4**: Thanks for the question. For Equation 11, from Equation 5 and $\mathbf{W}_l^b = {\rm sign}(\mathbf{W}_l)$, by the chain rule we have $\frac{\partial {L}}{\partial {\mathbf{W}_l}} = \sum_t \frac{\partial {L}}{\partial {U^t_l}}\frac{\partial {U^t_l}}{\partial {\mathbf{W}^b_l}}\frac{\partial \mathbf{W}^b_l}{\partial {\mathbf{W}_l}}$. 
From Equation 4 and Equation 1, we know that $U^t_l$ will affect $O^t_l$ and $U^{t+1}_l$, thus $\frac{\partial {L}}{\partial {U^t_l}} = \frac{\partial {L}}{\partial {O^t_l}} \frac{\partial {{O^t_l}}}{\partial {{U^t_l}}} + \frac{\partial {L}}{\partial {{{U^{t+1}_l}}}} \frac{\partial {{U^{t+1}_l}}}{\partial {{U^t_l}}}$. Combining all these, we obtain Equation 11. The proof of Equation 14 is the same as Equation 11. For Equation 15, $\alpha$ only affects $U^t$, thus based on the derivation of Equation 11, it is easy to derive it. **Concern 5**: Equations 14 and 11 are the same? **R5**: Sorry for the confusion. $\mathbf{W}^b_{\rm trainable} = \alpha \mathbf{W}^b$, and we calculate $\alpha$ and $W$ separately in Equations 14 and 15. Thus Equations 14 and 11 are the same in form. **Concern 6**: "The firing activity becomes differentiable" should be further justified. **R6**: Thanks for the advice. Compared to binary activation, our real-valued activation becomes differentiable in more intervals. We overclaimed this in the paper. We will correct this in the final version. Thanks. **Concern 7**: The firing threshold is said to be set to 0? **R7**: Sorry for the confusion. In these static datasets, it is 0 all the time, since static datasets cannot provide timing information. For neuromorphic datasets, we set it to 0.25. **Concern 8**: The experimental settings are missing. **R8**: Thanks for the question. We will clarify the code settings in detail in the final version. **Concern 9**: In Table 5, the figures for learnable ReverB are missing. **R9**: Thanks for the question. We add the results for learnable ReverB below. | Accuracy | #Flops | #Sops | Energy | | --- | --- | --- | --- | | 94.45% | 3.54M | 75.10M | 50.03uJ | **Concern 10**: Why $\alpha \in \mathbb{R}^{C \times 1 \times 1}$? **R10**: Sorry for the confusion. $\alpha$ is applied in a channel-wise manner for convolution layers. **Concern 11**: Some elements in Algorithm 1 are not clear. 
**R11**: Sorry for the confusion. The `for` loop is over mini-batches. With $\alpha_i$ folded into the $(i-1)$-th function, we obtain a standard ReverB-SNN. We will change "Inference" to "Evaluation" in the Algorithm. Thanks. **Concern 12**: Other Comments Or Suggestions. **R12**: Very many thanks for these kind reminders. We will carefully correct these in our final version. **Question 1**: Why consider binary weights rather than ternary weights? **A1**: Thanks for the question. Using ternary weights is better than binary weights in our experiments. However, we only report the binary-weight, real-valued-activation results for a fair comparison with vanilla real-valued-weight, binary-activation SNNs. **Question 2**: How can the proposed model be deployed on neuromorphic hardware? **A2**: Thanks for the question. Please see our response for **Concern 2**. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their answers. The rebuttal addresses my concerns only partially. In particular, some elements in the theoretical development (R5, R6, R10) and experimental design (R3 + concerns with energy consumption model, R8) are still unclear. So, I have to maintain my initial score. --- Reply to Comment 1.1.1: Comment: Since the first reply was limited to 5000 characters, some issues were not explained in depth. Here, we provide further detailed responses. **Concern 5**: Eq 14 and 11 are the same? **R5**: Thanks. Similarly to the derivation of Eq 11 in R4, we can derive that $\frac{\partial {L}}{\partial {\mathbf{W}_{l,trainable}}} = \sum_t (\frac{\partial {L}}{\partial {O^t_l}} \frac{\partial {{O^t_l}}}{\partial {{U^t_l}}} + \frac{\partial {L}}{\partial {{{U^{t+1}_l}}}} \frac{\partial {{U^{t+1}_l}}}{\partial {{U^t_l}}} )\frac{\partial {{U^t_l}}}{\partial {\mathbf{W}\_{l,trainable}}}$. 
We also have $\mathbf{W}^b_{l,trainable} = \alpha \mathbf{W}^b_l = \alpha \cdot {\rm sign}(\mathbf{W}_l)$, thus $\frac{\partial {L}}{\partial {\mathbf{W}_{l}}} = \frac{\partial {L}}{\partial {\mathbf{W}\_{l,trainable}}} \frac{\partial {\mathbf{W}\_{l,trainable}}}{\partial {\mathbf{W}^b\_l}}\frac{\partial \mathbf{W}^b_l}{\partial {\mathbf{W}\_l}}$. Combining the two equations, we have $\frac{\partial {L}}{\partial {\mathbf{W}_{l}}} = \sum_t (\frac{\partial {L}}{\partial {O^t_l}} \frac{\partial {{O^t_l}}}{\partial {{U^t_l}}} + \frac{\partial {L}}{\partial {{{U^{t+1}_l}}}} \frac{\partial {{U^{t+1}_l}}}{\partial {{U^t_l}}} )\frac{\partial {{U^t_l}}}{\partial {\mathbf{W}\_{l,trainable}}}\frac{\partial {\mathbf{W}\_{l,trainable}}}{\partial {\mathbf{W}^b\_l}}\frac{\partial \mathbf{W}^b\_l}{\partial {\mathbf{W}\_l}}$. Then, folding $\frac{\partial {{U^t_l}}}{\partial {\mathbf{W}\_{l,trainable}}}\frac{\partial {\mathbf{W}\_{l,trainable}}}{\partial {\mathbf{W}^b\_l}}$ as $\frac{\partial {{U^t_l}}}{\partial {\mathbf{W}^b\_l}}$, we have Eq 14. **Concern 6**: "The firing activity becomes differentiable" should be justified. **R6**: Sorry for the confusion. Compared to binary activation, where the gradient is infinite at $V_{th}$ and otherwise 0, our real-valued activation behaves similarly to a ReLU function, as shown in Equation 4. We agree that its gradient is zero when $U<V_{th}$, akin to ReLU for $x<0$. However, for $U>V_{th}$, the function is differentiable, similar to the behavior of ReLU for $x>0$. In the context of ANNs, ReLU is generally regarded as differentiable, so we stated that our real-valued activation could similarly be considered differentiable. We recognize that this statement may still be somewhat imprecise, and we will make the necessary corrections in the final version. **Concern 10**: Why $\alpha \in \mathbb{R}^{C \times 1 \times 1}$? **R10**: Sorry for the confusion. 
In the work, we adopt CNN models, and $\alpha$ is set to be learnable for convolution layers. For convolution layers, the weights $W \in \mathbb{R}^{C_{out} \times C_{in} \times K \times K}$. We set $\alpha$ in a $C_{in}$ channel-wise manner as $\alpha \in \mathbb{R}^{1 \times C_{in} \times 1 \times 1}$, thus it can be folded into the previous activation layers. We mistakenly wrote $\alpha \in \mathbb{R}^{1 \times C \times 1 \times 1}$ as $\alpha \in \mathbb{R}^{C \times 1 \times 1}$; we will correct it in the final version. **Concern 3**: The evaluation of energy and memory consumption. **R3**: Thanks. I agree that the hardware from (Hu et al., 2021) may not support our models, which makes the theoretical energy consumption somewhat imprecise. However, it is challenging to run the SNN model on hardware to directly compute energy consumption, as model development is often detached from hardware capabilities. Given this limitation, we have adopted the general theoretical energy-consumption approach, which is consistent with other works presented at top conferences, such as the ternary spike model [1] and the SNN-based transformer model [2], which also cannot run on existing hardware. These works similarly rely on theoretical evaluation methods, like ours, to show their advantages until specialized hardware becomes available. [1] Ternary Spike, AAAI 2024 [2] SPIKE-DRIVEN TRANSFORMER V2, ICLR 2024 For memory consumption, the vanilla SNN uses full-precision weights, requiring 32 bits per weight, whereas our model uses binary weights, requiring only 1 bit per weight. Taking ResNet20 as an example, which has 11.25M parameters, the vanilla SNN model would require $11.25 \times 4 = 45$ MB of memory, while our SNN model only requires 1.41 MB of memory. **Concern 8**: The experimental settings are missing. **R8**: Thank you for the question. We used the SGD optimizer to train our models with a momentum of 0.9 and a learning rate of 0.1, which decays to 0 following a cosine schedule. 
For the CIFAR10(100) and CIFAR-DVS datasets, we trained the models for 400 epochs with a batch size of 128. On ImageNet, we trained for 300 epochs with the same batch size. Data augmentation was performed using only a flip operation. The train and test splits follow the settings provided by the official dataset. For the static datasets, $V_{th}$ is 0 all the time, since static datasets cannot provide timing information. For neuromorphic datasets, we set it to 0.25.
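The sign-binarization with straight-through gradients assumed throughout R4 and R5, and the bit arithmetic behind the memory figures in R3, can be illustrated with a short sketch. This is our own NumPy illustration, not the authors' code: the function names are ours, and the clipping window in the backward pass is a common BinaryConnect-style choice, not a detail confirmed by the paper.

```python
import numpy as np

def binarize_forward(W):
    # Forward pass: W^b = sign(W), binary weights in {-1, +1}
    return np.where(W >= 0.0, 1.0, -1.0)

def binarize_backward(grad_out, W, clip=1.0):
    # Straight-through estimator: treat dW^b/dW as 1 where |W| <= clip
    # and 0 elsewhere, since the true derivative of sign() is 0 a.e.
    return grad_out * (np.abs(W) <= clip)

def weight_memory_mb(num_params, bits):
    # Memory footprint of the weights in megabytes
    return num_params * bits / 8 / 1e6

# ResNet20 example from R3: 11.25M parameters
full = weight_memory_mb(11.25e6, 32)   # 45.0 MB at 32-bit precision
binary = weight_memory_mb(11.25e6, 1)  # ~1.41 MB at 1-bit precision
```

Running the memory helper on the 11.25M-parameter example reproduces the 45 MB (32-bit) and roughly 1.41 MB (1-bit) figures quoted in R3.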
Summary: In this paper, the authors propose to make the weights of a spiking neural network ternary and the spikes of the units in the network real-valued, essentially swapping what is usually done in SNNs. This swap preserves the advantages of SNNs while improving their expressivity. The authors demonstrate their method in a number of benchmarks. Claims And Evidence: The claims made in the paper are supported by clear and convincing evidence -- both for their method preserving the advantages of SNNs in terms of energy efficiency (Table 5) and its ability to outperform existing SNNs in various benchmarks (Table 2-4). Methods And Evaluation Criteria: The benchmarks on which the method is evaluated are pretty standard in the field for feed-forward SNNs. But it would have been useful to also see evaluations on sequence tasks since SNNs are inherently recurrent. It is not completely clear why the authors choose to use STBP to train the network rather than standard BPTT. This decision could be explained a bit better. Theoretical Claims: No major theoretical claims. Experimental Designs Or Analyses: The ablation studies, evaluation on standard benchmarks, and energy efficiency are the analyses that the authors perform. Did not find any major issues with any of them. Supplementary Material: Didn't check. Relation To Broader Scientific Literature: This paper builds on existing SNNs, improving their expressivity while keeping their other advantages. The use of ternary weights and real-valued spikes [1] has been done before, but not in this specific context. [1] Subramoney, A., Nazeer, K. K., Schöne, M., Mayr, C. & Kappel, D. Efficient recurrent architectures through activity sparsity and sparse back-propagation through time. in The Eleventh International Conference on Learning Representations (2023). Essential References Not Discussed: The use of real-valued activations in spiking (i.e. event-based) networks has been done before in [1], and should be discussed. 
But the specific combination of ternary weights and real-valued activation is novel to my knowledge. [1] Subramoney, A., Nazeer, K. K., Schöne, M., Mayr, C. & Kappel, D. Efficient recurrent architectures through activity sparsity and sparse back-propagation through time. in The Eleventh International Conference on Learning Representations (2023). Other Strengths And Weaknesses: The key concept in this paper is novel to my knowledge. The end to end training setup and evaluation on challenging benchmarks is also a strength. One minor weakness is the framing of the method within current literature, which is not done thoroughly. For example many of the papers referenced in the introduction seem very arbitrary or is missing references to the seminal papers. E.g. The usual reference used for SNNs is [1]. Quantization was known well before 2019. Ditto knowledge distillation etc. [1] Maass, W. Networks of spiking neurons: The third generation of neural network models. Neural Networks 10, 1659–1671 (1997). Other Comments Or Suggestions: - It's not clearly described if a value of $\alpha$ is shared across an entire layer. Questions For Authors: - What is the activation used in between layers? In Fig. 1, it looks like it uses ReLU activations? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your efforts in reviewing our paper and your recognition of our novel method and notable results. The responses to your concerns and questions are given piece by piece as follows. **Concern 1**: It would have been useful to also see evaluations on sequence tasks since SNNs are inherently recurrent. **R1**: Thanks for the advice. Our method also performs well in sequence tasks. We have added the results on CIFAR10-DVS and DVS-Gesture below. It can be seen that our method also performs on par with or better than state-of-the-art methods. | Dataset | Method | Architecture | Timestep | Accuracy | | --- | --- | --- | --- | --- | | CIFAR10-DVS | SSCL (AAAI 2024) | ResNet19 | 10 | 80.00% | | CIFAR10-DVS | SpikeFormer (ICLR 2023) | SpikeFormer | 10 | 78.90% | | CIFAR10-DVS | **Our method** | ResNet19 | 10 | **80.50%** | | DVS-Gesture | ASA-SNN (ICCV 2023) | 5 layer SCNN | 20 | 97.70% | | DVS-Gesture | SpikeFormer (ICLR 2023) | SpikeFormer | 20 | 96.90% | | DVS-Gesture | TCJA (TNNLS 2024) | 5 layer SCNN | 20 | 97.56% | | DVS-Gesture | **Our method** | 5 layer SCNN | 20 | **98.23%** | **Concern 2**: Why do the authors choose to use STBP to train the network rather than standard BPTT? **R2**: Sorry for the confusion. Considering the similarity in computational mechanisms between SNNs and Recurrent Neural Networks (RNNs), SNN researchers transferred the Back-propagation Through Time (BPTT) method from RNNs to the supervised learning field of SNNs, where it is also called the STBP training algorithm. Thus BPTT is the same as STBP in SNNs. **Concern 3**: One minor weakness is the framing of the method within the current literature. **R3**: Thanks for the advice. We will add more related literature and reframe them in the final version, such as: [1] Subramoney, A., Nazeer, K. K., Schöne, M., Mayr, C. & Kappel, D. Efficient recurrent architectures through activity sparsity and sparse back-propagation through time. 
in The Eleventh International Conference on Learning Representations (2023). [2] Maass, W. Networks of spiking neurons: The third generation of neural network models. Neural Networks 10, 1659–1671 (1997). **Concern 4**: It's not clearly described if a value of $\alpha$ is shared across an entire layer. **R4**: Sorry for the confusion. $\alpha$ is not shared across an entire layer. It is applied in a channel-wise manner across our models. We will make this description clearer in the final version. **Question 1**: What is the activation used in between layers? **A1**: Sorry for the confusion. In addition to being inherently recurrent, our activation produces a ReLU-like output defined as follows: $\mathrm{O} = \mathrm{U}$ if $\mathrm{U} \ge V_{\rm th}$, otherwise $\mathrm{O} = 0$. In the ReLU activation, $V_{\rm th}$ is 0, while in our activation, $V_{\rm th}$ can be adjusted across different datasets or scenes. Our activation also decays through time, while ReLU does not.
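The activation described in A1 can be sketched as a thresholded identity on a leaky membrane potential. The following is our own minimal NumPy illustration, not the authors' code; the decay constant and the absence of a membrane reset are simplifying assumptions of ours, not details confirmed by the paper.

```python
import numpy as np

def reverb_activation_step(U_prev, I_t, v_th=0.25, decay=0.5):
    """One timestep of the ReLU-like real-valued activation from A1:
    O = U if U >= V_th, otherwise 0. The membrane potential U leaks
    over time (decay), which is what distinguishes it from plain ReLU."""
    U_t = decay * U_prev + I_t             # leaky integration of input current
    O_t = np.where(U_t >= v_th, U_t, 0.0)  # real-valued output, not a binary spike
    return U_t, O_t
```

With `v_th=0` this reduces to ReLU applied to the membrane potential, matching the static-dataset setting described in R7, while `v_th=0.25` corresponds to the neuromorphic-dataset setting.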
Rethinking Confidence Scores and Thresholds in Pseudolabeling-based SSL
Accept (poster)
Summary: This paper proposes a method for selection of points to be pseudolabeled in pseudolabeling-based semi-supervised learning. Contrasting previous works which use confidence-based thresholding, PaBlo trains a selector function with an optimization objective which balances coverage with pseudolabeling error, using a subset of the held-out validation data. Update after rebuttal: Apologies for the delay. Thanks for the additional results and new experiments. I'm raising my score to a Weak accept (3). However, I think the paper should incorporate an extensive discussion on the data availability assumptions that this method makes (validation data). Claims And Evidence: Claim 1: the adaptations of popular pseudolabeling-based SSL methods with PabLO output models with better test accuracy. This claim is verified on 3 datasets (CIFAR 10, CIFAR 100 and SVHN) with baselines Freematch and FixMatch. It would be interesting to include additional baselines which do not solely rely on simple confidence thresholds, such as uncertainty-based methods (e.g. [1], [2]). [1] Rizve, M. N., Duarte, K., Rawat, Y. S., & Shah, M. (2021). In defense of pseudo-labeling: An uncertainty-aware pseudo-label selection framework for semi-supervised learning. arXiv preprint arXiv:2101.06329. [2] Nguyen, V., Husain, H., Farfade, S., & Hengel, A. V. D. (2022). Confident sinkhorn allocation for pseudo-labeling. arXiv preprint arXiv:2206.05880. My main question about this experimental section is about the data splits. Given the statistics in Table 1, it seems that the validation set (which is labeled) used for the training of the selector function is much bigger than the labeled dataset used for training (6K samples, while the labeled data is only 250 samples for CIFAR 10), which contradicts the main purpose of semi-supervised learning, where labeled data is scarce. 
This brings a natural question: what happens if you use the validation data as labeled training data, and just use one of the baseline methods? I expect the performance of the baselines to increase significantly by using this validation data naively. Furthermore, the results in Table 6, for N_cal = N_th = 250 (approximately 82% top-1 accuracy), seem much lower than the results of the baselines Fixmatch and Freematch reported in Table 2 for CIFAR 10 (90.8% and 92.26%). Claim 2: a lower error tolerance is preferable. It would be interesting to report the number of points which are pseudolabeled in Figure 3, and smaller values of epsilon. Indeed, with 0 error tolerance, we should recover classical supervised learning, which should be worse than pseudo-labeling. Hence it would be more intuitive to see an "inverse" U shape in Figure 3. Claim 3: accumulation is not always helpful. This claim is supported with results on CIFAR-10. Methods And Evaluation Criteria: The datasets are commonly used in the PL literature. Other baselines could be included to compare with Pablo. Theoretical Claims: NA Experimental Designs Or Analyses: See above: my main question about this experimental section is about the data splits (the labeled validation set used to train the selector function is much bigger than the labeled training set, 6K vs. only 250 samples for CIFAR 10), which contradicts the main purpose of semi-supervised learning, where labeled data is scarce, and raises the question of what happens if this validation data is used directly as labeled training data with one of the baseline methods. 
Furthermore, the results in Table 6 for N_cal = N_th = 250 (approximately 82% top-1 accuracy) seem much lower than the baselines reported in Table 2. Supplementary Material: I read the appendix. Relation To Broader Scientific Literature: This paper positions itself as an improvement over the confidence-based PL methods which use thresholding. Essential References Not Discussed: The related work section covers the main strands of prior works related to this paper. Other Strengths And Weaknesses: The paper is well-written and easy to follow. Other Comments Or Suggestions: NA Questions For Authors: See above on the data splits and the results in Table 6. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the feedback and questions. Our response is as follows. **On baselines that do not rely on confidence scores and thresholds.** While this would be interesting, our paper's focus is on pseudolabeling methods based on confidence scores and thresholds. For this reason, we chose baselines necessary to evaluate our claim — using our scores and thresholds in the common methods (FixMatch, FreeMatch) can improve the performance significantly, in contrast to using them with ad-hoc choices of scores and thresholding techniques. We also compared the variations of these with recent methods (BAM, MR) designed to encourage calibrated scores in SSL settings. **On validation data requirements.** We clarify as follows. * *It is common practice in SSL methods to use validation data to find the best model checkpoint by evaluating validation accuracy*. The baselines use all $N_{val}$ validation samples for this purpose. * In our method, we split this $N_{val}$ data into three parts, $N_{val}'$, $N_{cal}$, and $N_{th}$, and use $N_{val}'$ for the usual model checkpoint evaluation and $N_{cal}, N_{th}$ for learning the confidence function and estimating thresholds, respectively. **Note $N_{cal}$ and $N_{th}$ are much smaller than $N_{val}$**. * The idea of using the validation data for training is natural. Common benchmarks have been extensively studied in the literature, for which there is a reasonable understanding of hyperparameters and expected performance. However, this overlooks the generality of SSL methods. In general, most SSL methods need validation data for model selection, tuning hyperparameters, etc. This cost is often overlooked in the prior works. Our work mentions it transparently. **Clarification on Table 6 results.** These results are to study the effect of the sizes of the calibration set $N_{cal}$ and the threshold-estimation set $N_{th}$. 
When these are too small, there is high variance in the learned scores and thresholds, resulting in unreliable pseudolabeling (high excess pseudolabeling error). Using uniform convergence results, it can be shown that the excess pseudolabeling error will scale as $O(\frac{1}{\sqrt{N_{cal}}} +\frac{1}{\sqrt{N_{th}}})$. Thus we see that as $N_{cal}$ and $N_{th}$ increase, the performance gets better. **Inverse U Shape curve for error tolerance.** In general, we expect an inverse U shape for the test accuracy vs. epsilon curve. In the Fig. 3 setting, we did not see this. We have run this experiment in one more setting where we see the inverse U shape curve. We summarize our findings in two points: * We note that zero error tolerance is not equivalent to classical supervised learning. We see sizeable pseudolabeling taking place even with 0 error tolerance in certain settings, e.g., as in Figure 3, CIFAR-10 with 250 labeled points for training. **[Please see the new results here](https://anonymous.4open.science/r/icml-rebuttal-2024-anon-E247/cifar10_low_high_epsilon.png)**. This means reasonable models are found early in training that are 100% accurate on part of the space, and our scores and thresholds are able to find that space. * Second, in settings with an even smaller amount of labeled data for training, we start to see the expected inverse U shape curve. We ran the procedure in the CIFAR-10 setting with 40 labeled points for training with various choices of $\epsilon \in$ { 0.1%, 1%, 5%, 20%, 40% }. **[See new results](https://anonymous.4open.science/r/icml-rebuttal-2024-anon-E247/cifar10_epsilon.png)** for the plot of test accuracy corresponding to each $\epsilon$. The test accuracies are 74.6%, 91.5%, and 83.8% corresponding to $\epsilon$ 0.1%, 1%, and 5%, respectively. We hope our response resolves your queries. We are happy to answer any further questions. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed rebuttals. 
I appreciate the clarification on Table 6 and the new results for the inverse U shape curve. I still have questions regarding the other 2 points. **Regarding validation data requirements**: I acknowledge that, as you mention, validation data is widely used in the SSL literature. While it is quite unrealistic in real-world settings to have such a big labeled validation dataset, it is understandable from a benchmarking perspective. However, I am not entirely convinced in your case about this assumption of directly using such held-out data in training the pipeline. Indeed, based on Table 1, in CIFAR 10, the calibration dataset is 4 times bigger than the labeled dataset (1K vs 250). And such a calibration dataset (which is labeled) *directly impacts the learning process*, as it is used to train the surrogate models. This is quite different from the use of validation data for *benchmarking purposes*, where the validation data is only used at the end to select the model (or do HPT tuning). Hence a natural question is: what happens if you directly leverage this calibration data (1K samples) as labeled examples in FixMatch/FlexMatch? In practice, this would require the same number of labeled samples at training time as your proposed method, hence making it as applicable as your method. **Regarding baselines that do not rely on confidence scores and thresholds**: I acknowledge the contribution of this paper as an improvement over confidence-based pseudo-labeling. However, I still think it is important to get results for at least one competitive PL method that also uses uncertainty (which claims to be better than confidence-based PL). This will allow us to see the performance gap, and whether your method narrows this gap. Otherwise, it is hard to see the value of using a confidence-based PL method (even improved with your method) vs other baselines. --- Reply to Comment 1.1.1: Comment: Thanks for the comment, and we are glad that some of your concerns are resolved. 
We answer the remaining queries below. **Note on our goal and experimental setup.** Our goal is to address the *problem of ad-hoc choices of confidence scores and thresholds in pseudolabeling-based SSL*. To this end, we proposed a *principled solution to learn scores and thresholds* that can directly achieve any specified pseudolabeling error while maximizing the number of pseudolabeled points. Given the focus of our paper, our experiments are designed to study whether using our learnable scores and thresholds can benefit the baselines. To keep our solution statistically sound, we used part of the validation data to learn the scores and thresholds. **General comments on validation data.** We make two points on the role of validation data. a) The real cost of validation data is grossly underestimated (often overlooked) in the prior works since the focus has been on common benchmark datasets where the hyperparameters have been tuned extensively over a long period of time. b) In general, for a new application (dataset), one would require a non-trivial amount of validation data for model selection and hyperparameter tuning. Note, these are all part of the model training process. Introducing novel datasets and benchmarks accounting for the cost of validation data in the overall labeled data can be useful for SSL research in general and would be a fruitful direction for future work. **Experiment with baselines using calibration data for training.** As suggested, we run the baselines where the amount of training data for the baseline is increased by the amount of calibration data used in our method. So, for the CIFAR-10 setting in the paper with 250 labels, we run the baseline now with 250 + 1000 = 1250 labeled points for training. Similarly, for the CIFAR-100 setting with 2500 labels, we run with 2500 + 3000 = 5500 labels for training. The results are reported in the table below. The baselines using calibration data for training are annotated with (train + cal). 
We can see that even with more labeled data in training, the baselines still fall short significantly in the CIFAR-100 setting, while the performance gap in the CIFAR-10 (easier) setting narrows down. | Dataset | CIFAR-10 | CIFAR-100 | |------------------|--------------|--------------| | Fixmatch (train + cal) | 92.68 ± 0.31 | 64.77 ± 0.10 | | Fixmatch + Ours | 92.69 ± 0.74 | **69.10 ± 0.45** | | Freematch (train + cal) | 93.03 ± 0.03 | 67.69 ± 0.12 | | Freematch + Ours | 93.10 ± 0.28 | **68.76 ± 1.38** | **Comparison with the suggested UPS [1] baseline.** We compare against the suggested baseline UPS [1] and additional baselines, Softmatch [3], and Adamatch [2]. The results (below tables) remain consistent with the claims in the main paper, i.e., using our scores and thresholds in the baselines improves their performance, and we see UPS falls short in comparison to other baselines. We will include these results in the paper. The following table corresponds to $N_l = 250$ for Cifar-10 and $2500$ for CIFAR-100. | Dataset | CIFAR-10 | CIFAR-100 | |------------------|--------------|--------------| | Softmatch | 91.74 ± 0.78 | 61.43 ± 0.34 | | Softmatch + Ours | **93.14 ± 0.33** | **68.74 ± 0.72** | | Adamatch | 91.35 ± 0.66 | 58.08 ± 0.44 | | Adamatch + Ours | **93.06 ± 0.19** | **68.12 ± 0.48** | | UPS | 64.32 | 37.33 | The following table corresponds to $N_l = 40$ for Cifar-10 and $400$ for CIFAR-100. | Dataset | CIFAR-10 | CIFAR-100 | |------------------|---------------|--------------| | Softmatch | 83.60 ± 7.09 | 40.73 ± 1.46 | | Softmatch + Ours | **89.96 ± 4.74** | **67.84 ± 0.33** | | Adamatch | 75.00 ± 1.10 | 31.61 ± 1.92 | | Adamatch + Ours | **86.62 ± 10.54** | **67.00 ± 1.02** | | UPS | 20.30 | 9.34 | ----- [1] Rizve et al., In Defense of Pseudo-labeling: An Uncertainty-Aware Pseudo-Label Selection Framework For Semi-Supervised Learning, ICLR, 2021. 
[2] Berthelot et al., AdaMatch: A Unified Approach to Semi-Supervised Learning and Domain Adaptation, ICLR, 2022. [3] Chen et al., Softmatch: Addressing The Quantity-Quality Trade-Off In Semi-Supervised Learning, ICLR, 2023.
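The threshold-estimation principle discussed in this rebuttal — pseudolabel as many points as possible while keeping the empirical pseudolabeling error on held-out data below the tolerance $\epsilon$ — can be sketched as follows. This is our illustrative NumPy implementation of the thresholding step only: the function name is ours, and the learning of the confidence function itself is not shown.

```python
import numpy as np

def estimate_threshold(scores, correct, eps):
    """Given held-out confidence scores and whether the model's predicted
    label was correct on each point, return the lowest threshold such that
    the empirical error among points scoring at or above it is <= eps.
    Lower thresholds pseudolabel more points (higher coverage)."""
    order = np.argsort(-scores)                 # highest confidence first
    errors = np.cumsum(~correct[order])         # mistakes among the top-k points
    k = np.arange(1, len(scores) + 1)
    admissible = errors / k <= eps              # error tolerance satisfied at k
    if not admissible.any():
        return np.inf                           # no point can be pseudolabeled
    k_best = np.max(np.nonzero(admissible)[0])  # largest admissible coverage
    return float(scores[order][k_best])
```

For example, with scores [0.9, 0.8, 0.7, 0.6], correctness [True, True, False, True], and eps = 0.2, the threshold lands at 0.8, so only the top two (both correct) points are pseudolabeled; relaxing eps to 0.3 lowers the threshold to 0.6 and pseudolabels all four points at 25% empirical error.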
Summary: This paper introduces a principled framework for improving pseudolabeling-based semi-supervised learning (SSL) by explicitly controlling confidence scores and thresholds to manage pseudolabel quality and quantity. The approach addresses limitations of heuristic-driven methods, offering a systematic way to balance pseudolabel accuracy and coverage. Extensive experiments including ablation studies confirm the effectiveness of the optimization framework and threshold estimation. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Not applicable, as this paper does not put forward any theoretical claims or provide corresponding proofs. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. This submission does not provide supplementary materials in zip format. The Appendix section contains the process of the proposed algorithm and some additional experimental results, both of which I have checked. Relation To Broader Scientific Literature: The key contributions of the paper are closely related to the fields of semi-supervised learning and pseudolabeling. To leverage unlabeled data to train the predictive model under semi-supervised learning, the authors propose an approach to learn confidence scores and thresholds via an optimization problem in pseudolabeling unlabeled data that maximizes pseudolabel coverage while ensuring error rates remain below a target tolerance. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The paper proposes an approach to learn confidence scores and thresholds via an optimization problem that maximizes pseudolabel coverage while ensuring error rates remain below a target tolerance (ϵ), which addresses limitations of heuristic-driven methods, offering a systematic way to balance pseudolabel accuracy and coverage. 2. 
The framework can be flexibly combined with popular SSL methods (e.g., Fixmatch, Freematch), enhancing their performance by leveraging high-quality pseudolabels. 3. Ablation studies confirm the effectiveness of the optimization framework and threshold estimation. Weaknesses: 1. My main concern is that the core technique used in this paper, which estimates thresholds on the test set to filter pseudolabels, closely resembles that of [1], despite the paper citing [1] and acknowledging this in Line 231 “Similar procedures have been used in the context of creating reliable datasets and are backed by theoretical guarantees for the quality of pseudolabels produced”. Since the auto-labeling method proposed in [1] can also be interpreted as a form of pseudolabeling, the contributions of this work appear incremental. The paper should further discuss the differences in insights and techniques compared to [1], such as in their objectives, algorithms, or validation strategies. [1] Vishwakarma, H., Lin, H., Sala, F., and Vinayak, R. K. Promises and pitfalls of threshold-based auto-labeling. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. Other Comments Or Suggestions: Please refer to the weaknesses above. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 3
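The coverage-maximizing threshold step that the review summarizes can be made concrete with a small sketch (not the paper's exact algorithm; the function and parameter names here are hypothetical): on a held-out calibration set, pick the smallest threshold whose empirical pseudolabel error stays below the tolerance ϵ, since smaller thresholds admit more pseudolabels.

```python
import numpy as np

def pick_threshold(scores, correct, eps, grid_size=100):
    """Return the smallest threshold t such that the empirical error among
    calibration points with score >= t is at most eps; smaller t means
    higher pseudolabel coverage. Returns None if no threshold qualifies."""
    candidates = np.quantile(scores, np.linspace(0.0, 1.0, grid_size))
    for t in np.sort(candidates):  # ascending: first feasible t maximizes coverage
        covered = scores >= t
        if covered.sum() == 0:
            continue
        if 1.0 - correct[covered].mean() <= eps:
            return float(t)
    return None

# Toy calibration data where accuracy grows with the confidence score.
rng = np.random.default_rng(0)
scores = rng.uniform(size=1000)
correct = rng.uniform(size=1000) < scores
t = pick_threshold(scores, correct, eps=0.1)
```

By construction the returned threshold satisfies the empirical error constraint on the calibration data; how tightly that transfers to the unlabeled data is exactly the generalization question raised in the reviews.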
Rebuttal 1: Rebuttal: Thanks for the careful review and positive feedback. We appreciate the recognition of the strengths of our work — *a flexible and principled approach for learning confidence scores and thresholds for pseudolabeling and its empirical effectiveness*. Our response to the queries is as follows, **Differences in insights and techniques compared to [1]** The prior work [1] studies a procedure to create labeled datasets with accuracy guarantees. Thus, [1] is only concerned with labeling the data correctly and not with how good the end model is. In contrast, our work is on semi-supervised learning, where **our goal is to learn a classifier with good generalization error**. In this procedure, the pseudolabels are never "committed", i.e., in each iteration, the assigned pseudolabel to a point can change. In contrast, in [1], the label assigned to a point is never changed. Furthermore, our approach and insights are novel within the area of pseudolabeling-based semi-supervised learning (SSL) [4,5]. We provide two points on this: First, our work settles the question of the right choices of confidence functions for SSL. Recent works [2,3] have highlighted the problem of miscalibrated confidence functions, leading to inefficiencies in pseudolabeling-based SSL. Although these works offer solutions to this problem, they still fall short of addressing the core issue: the confidence function is not specifically tailored to the needs of SSL. To provide a principled solution to this problem, we adapt the framework for learning confidence functions in the TBAL setting from [1] to the SSL setting. Second, we show how this framework for learning confidence functions can work in concert with popular SSL methods such as Fixmatch [4], Freematch [5], etc., and conduct an extensive empirical evaluation demonstrating that using confidence function learned from our method can yield significant improvements in the test accuracy. 
The flexibility to be integrated into a variety of techniques is novel to this work; works like [1] did not need to offer it. As a result, our work provides a flexible solution to practitioners using SSL, freeing them from the trial and error of selecting various confidence functions or hand-crafting one. [1] Vishwakarma, H., Lin, H., Sala, F., and Vinayak, R. K. Promises and pitfalls of threshold-based auto-labeling. NeurIPS, 2023 [2] Loh et al., Mitigating confirmation bias in semi-supervised learning via efficient bayesian model averaging. TMLR, 2023. [3] Mishra et al., Do not trust what you trust: Miscalibration in semi-supervised learning, arXiv, 2024. [4] Sohn et al., Fixmatch: Simplifying semi-supervised learning with consistency and confidence, NeurIPS, 2020. [5] Wang et al., Freematch: Self-adaptive thresholding for semi-supervised learning, ICLR, 2023. We hope our response resolves your queries. We are happy to answer any further questions.
Summary: This paper proposes PabLo, a novel method for semi-supervised learning. The authors conceive their approach through noting that the threshold for selecting pseudolabels from the teacher model should be both permissive enough to allow for a large degree of supervision, while not being so permissive as to introduce low-quality labels. PabLo proposes solving a surrogate optimization problem during each SSL training iteration, in which pseudolabeling thresholds are chosen to maximize pseudolabel coverage while constrained by a maximum error bound. Additionally, PabLo introduces a pseudolabel accumulation step, in which pseudolabels from previous training iterations can be brought into the current update if it is otherwise missed by the current teacher model. The authors validate their approach on three different image classification datasets, demonstrating an improvement when used in conjunction with common SSL approaches. Claims And Evidence: The claims presented in the paper are all valid. Methods And Evaluation Criteria: The proposed experimental methods are valid. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental design is sound. Supplementary Material: I reviewed the entirety of the supplementary material. Relation To Broader Scientific Literature: This paper investigates thresholding in the context of pseudolabeling beyond the level of simply using heuristic techniques, as has been tried previously in work such as FixMatch. The more principled approach of choosing thresholds improves upon these approaches and can be integrated directly with them. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - The method is lightweight and can easily be integrated with currently existing SSL approaches. - Improvement over baselines in the provided experiments is convincing. - The theoretical framework which motivates PabLo is intuitive, and the careful analysis of the pseudolabeling threshold is principled. 
Weaknesses: - The experiments are limited to a single labeled fraction for each dataset; the results would be more convincing if multiple labeled data fractions were evaluated for each dataset. - The pseudolabel accumulation step, although introduced as a possible improvement, does not meaningfully improve the test accuracy. - PabLO includes several hyperparameters chosen heuristically (e.g. $N_{cal}$, $N_{th}$, $\epsilon$). The complexity of selecting all of these limits the practical usability of the method. - The authors introduce two algorithms for selecting thresholds, either selecting class-wise thresholds or global thresholds. However, it is not made clear which is adopted in the final algorithm, and no comparison between the two is included in the ablation studies. Other Comments Or Suggestions: I believe the paper will flow better if the related work section were moved to the beginning of the paper, after the introduction. Questions For Authors: - See weaknesses: does the final algorithm use class-wise or global thresholds (e.g. algorithm 1 or 2)? - Have the authors made plots visualizing the pseudolabel threshold as a function of training iteration (similar to Fig 5)? It would be interesting to see how the value of the threshold evolves during training, as this is subtly different from the pseudolabel coverage percent plotted in Fig 5. - What is the authors' intention with the analysis of pseudolabel accumulation, as the approach ultimately does not improve PabLO's performance? Have the authors investigated more sophisticated strategies for pseudolabel accumulation (e.g. downweighting the contribution of pseudolabels accumulated from past training iterations)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and the noted strengths — *a lightweight, intuitive, and theoretical framework to learn scores and thresholds that can be integrated with existing SSL approaches to improve their performance*. Our response to the queries is as follows, **More labeled-data settings.** We have evaluated the methods on settings with smaller $N_l$. Specifically, we use $N_l =$ 40 for Cifar-10 and SVHN and 400 for Cifar-100. **[The results are available here](https://anonymous.4open.science/r/icml-rebuttal-2024-anon-E247/cifar10_cifar100_svhn_low_label.png)**. The results are consistent with the claims in the main paper and even more pronounced than those in Table 2. **On pseudolabel accumulation.** For the accumulation procedure, our hypothesis is that it may help the high-precision pseudolabeling methods, but it may not be useful for methods with noisy pseudolabels. We do not expect it to help baselines, and the results in Table 4 are for completeness — to show the performance of baselines with accumulation as well. The accumulation method is generally helpful when used in concert with our confidence scores and thresholds that can ensure high-precision pseudolabels. More sophisticated accumulation strategies accounting for the staleness of pseudolabels might be interesting to explore for future works. **On hyperparameters.** While several of the SSL baselines also require choosing hyperparameters, we provide guidance on selecting the hyperparameters $N_{cal}$, $N_{th}$, $\epsilon$. The results in Table 6 suggest that setting $N_{cal}, N_{th}$ as high as possible is favorable. For $\epsilon$, a small value ($\le$ 1%) is favorable when $N_l$ is not too small (i.e. when the initial models are not expected to be too bad); when we expect the initial models to be bad (or $N_l$ is too small), then using a slightly higher $\epsilon$, around 5%, is favorable. We will include a detailed discussion on these recommendations in the paper. 
**Class-wise vs global thresholds.** Class-wise threshold estimation is suitable when the number of classes is small, and global threshold estimation is suitable when the number of classes is large. We use class-wise thresholds for the CIFAR-10 and SVHN settings and a global threshold for the CIFAR-100 cases. The rationale is that when the number of classes is large, it is hard to have sufficient samples for each class to learn meaningful thresholds. Thus, in such settings, it is more useful to learn a global threshold. **Moving the related works section earlier.** We will move it in the camera-ready version. **Thresholds over iterations.** Since we learn new scores in each pseudolabeling iteration, the scale of the scores is not consistent across iterations, and thus we do not see a pattern in thresholds over iterations. However, Figures 4 and 5 provide insights into how the quantity and quality of pseudolabels evolve over time. We hope our response resolves the queries. We are happy to answer any further questions you may have.
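The class-wise vs. global choice discussed in the rebuttal can be sketched as a simple fallback rule (hypothetical helper names; `min_per_class` is an illustrative knob, not a value from the paper): use per-class thresholds only when every class has enough calibration samples, otherwise estimate a single global threshold.

```python
import numpy as np

def smallest_feasible(scores, correct, eps):
    """Smallest threshold whose empirical error on the covered points is <= eps."""
    for t in np.sort(np.unique(scores)):
        sel = scores >= t
        if sel.any() and 1.0 - correct[sel].mean() <= eps:
            return float(t)
    return float("inf")  # infeasible: pseudolabel nothing

def estimate_thresholds(scores, preds, correct, eps, num_classes, min_per_class=50):
    counts = np.bincount(preds, minlength=num_classes)
    if counts.min() >= min_per_class:  # enough samples: class-wise thresholds
        return np.array([smallest_feasible(scores[preds == k], correct[preds == k], eps)
                         for k in range(num_classes)])
    g = smallest_feasible(scores, correct, eps)  # fallback: one global threshold
    return np.full(num_classes, g)
```

This mirrors the stated rationale: with many classes the per-class calibration samples become too sparse for meaningful class-wise estimates, so a global threshold is the safer choice.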
Summary: The paper proposes PabLO, a framework for improving pseudolabeling-based semi-supervised learning (SSL) by learning confidence scores and thresholds with explicit control over pseudolabeling error tolerance. The core idea is to formulate pseudolabeling as an optimization problem that maximizes coverage while bounding error, replacing heuristic thresholding strategies. PabLO integrates with existing SSL methods and introduces pseudolabel accumulation to reuse high-confidence labels. Experiments on CIFAR-10/100 and SVHN demonstrate significant accuracy improvements over baselines. ## update after rebuttal The author's response has partially addressed my concerns. After reviewing the other reviewers' comments and the author's rebuttal, I believe this paper still requires further improvements. Therefore, I maintain my rating. Claims And Evidence: Yes. Methods And Evaluation Criteria: 1. The authors' method essentially uses the validation set to identify confidence functions and thresholds that can be transferred to the unlabeled training set. It seems to implicitly assume a strong alignment between the distributions of these two sets. Is there any theoretical proof that guarantees the optimal solution on the validation set can be transferred to the unlabeled training set? Moreover, have the authors considered whether the method would still be effective in cases where the distributions are not aligned? 2. The pseudolabel accumulation method proposed by the authors appears to be immature. In Table 4, it fails to consistently improve the results across different methods, which significantly limits the generalizability of this approach. 3. In the evaluation, the authors used three datasets, among which CIFAR-10 and SVHN are relatively simple. It is recommended that the authors introduce stronger benchmarks. 
Furthermore, the authors only conducted experiments on two SSL methods (Fixmatch, Freematch), which is too limited to demonstrate the generalizability of their approach. It would be beneficial to include more baselines to enhance the persuasiveness of their method. Theoretical Claims: The paper does not provide formal theoretical guarantees. Experimental Designs Or Analyses: 1. In the paragraph titled "Adjusted iterations for baselines," the authors limited the number of iterations for the methods. Have the authors considered the convergence of the method? In the CIFAR100 experiment plots, it appears that the methods have not yet reached the convergence iterations. 2. Regarding the failure of the method in the Fixmatch + SVHN experiment, the authors did not provide a detailed explanation for the cause. 3. The impact of the confidence function architecture (2-layer NN) is unexplored. Supplementary Material: All content has been reviewed. Relation To Broader Scientific Literature: The work builds on SSL and confidence calibration. It extends prior SSL methods by replacing heuristic thresholds with learned, error-bounded ones. Essential References Not Discussed: There are many SSL methods that the authors should ideally introduce and compare experimentally, such as SimMatch (CVPR 2022), SoftMatch (ICLR 2023), AdaMatch (ICLR 2022), and so on. Other Strengths And Weaknesses: PabLO integrates seamlessly with existing SSL frameworks, but training confidence functions and calculating thresholds increase runtime, and the discussion regarding runtime is not sufficiently comprehensive. Other Comments Or Suggestions: Nothing. Questions For Authors: 1. How does the actual pseudolabeling error during training compare to the target ϵ? Could you provide per-iteration error measurements? 2. Why use a 2-layer NN? Have you explored alternatives? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate the feedback and the noted strengths of our paper. Our work is well-positioned in the literature on SSL and confidence calibration. Our principled methods to learn confidence scores and thresholds with error bounds replace the heuristic-based choices and enhance the prior SSL methods. Our response to the queries is as follows, **Generalization/transfer to unlabeled data.** Yes, we assume that the validation data and the unlabeled data are independent and identically distributed (i.i.d.). Under this assumption, it is easy to show using standard uniform convergence results that the optimal solution on the validation set will transfer (generalize) to the unlabeled set. More specifically, our method uses parts of the validation data: $N_{cal}$ samples for learning the confidence scores and $N_{th}$ samples for estimating thresholds. Using the uniform convergence results, we can show that the excess pseudolabeling error when transferring the solution to the unlabeled data will be $O(1/{\sqrt{N_{cal}}} + 1/{\sqrt{N_{th}}})$. Thus, as long as $N_{cal}$ and $N_{th}$ are sufficiently large and the i.i.d. assumption is satisfied, the solution will generalize to the unlabeled data. The setting where the distributions of unlabeled and validation data are not aligned is interesting, and adapting our methods to these settings would be a fruitful direction for future work. **On pseudolabel accumulation.** For the accumulation procedure, our hypothesis is that it may help the high-precision pseudolabeling methods, but it may not be useful for methods with noisy pseudolabels. We do not expect it to help baselines, and the results in Table 4 support this. The accumulation method is generally helpful when used in concert with our confidence scores and thresholds that can ensure high-precision pseudolabels. 
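The $O(1/\sqrt{N_{cal}} + 1/\sqrt{N_{th}})$ claim in the rebuttal follows the standard concentration pattern; as a sketch (for a single fixed score function $g$ and threshold $t$, ignoring the uniform-convergence step over the class of score functions), Hoeffding's inequality gives, with probability at least $1-\gamma$ over $N_{th}$ i.i.d. threshold-estimation samples,

```latex
\left| \widehat{E}_{N_{th}}(g, t) - E(g, t) \right|
\;\le\; \sqrt{\frac{\log(2/\gamma)}{2\, N_{th}}},
```

where $E(g,t)$ is the population pseudolabeling error at threshold $t$ and $\widehat{E}_{N_{th}}$ its empirical estimate; a uniform-convergence bound over the learning of $g$ from $N_{cal}$ samples contributes the analogous $O(1/\sqrt{N_{cal}})$ term. One caveat (our reading, not stated in the rebuttal): since the error is conditioned on coverage, the effective sample size is the number of covered calibration points rather than $N_{th}$ itself.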
**More baselines and datasets.** Our focus is on principled choices for confidence scores and thresholds for pseudolabeling-based SSL, and we proposed solutions to this end. The goal of our experiments is to show that modern and commonly used methods, such as Fixmatch and Freematch, can be adapted to use our scores and thresholds, and the performance of the resulting method is better in comparison to using their standard versions. Since our focus is on confidence functions, we included the baselines Margin Regularization (MR) and Bayesian Model Averaging (BAM), which are aimed at improving the calibration of confidence scores for pseudolabeling. We used common benchmark datasets in SSL, which are sufficient for our empirical analysis. Introducing stronger benchmarks for SSL would be interesting future work. **Fixmatch + SVHN Case.** We point the reviewer to Figure 2 (top, right), where we plot the test accuracy over iterations for this case. This plot suggests that all the methods for the Fixmatch + SVHN case have similar performance. **Related works.** We have updated the related works with a discussion on the shared references. **Discussion on runtime.** It is correct that additional training for confidence functions and calculating thresholds increases runtime. For this reason, we adjust the training iterations of the baselines so that all the methods are run for the same amount of time. We have included the details on runtime in Appendix C. We are happy to provide any specific details the reviewer is interested in. **On running the CIFAR-100 experiment longer.** For a fair comparison, we fixed our experiment protocol, i.e., we run the variations with our method for 25K iterations and the baselines for the equivalent number of iterations in the same amount of time. Thus, the evaluation is fair. Prior works run for up to 1 million iterations to reach near convergence. 
Note that this takes an enormous amount of time, and one of the advantages of using our method is that it can achieve high accuracy earlier and does not require running for very long. **Observed pseudolabeling errors.** We plot the pseudolabeling error and coverage for the Cifar-10 setting. **[Please see the plot on this link](https://tinyurl.com/j2hb77fs)**. We see that the observed pseudolabeling error is very close to the target $\epsilon$ when it starts to give non-trivial pseudolabeling coverage, and it approaches $\epsilon$ as the training progresses. **Impact of confidence function architecture and its alternatives.** Our framework to learn confidence scores is flexible enough to work with any choice of $\mathcal{G}$. While the choice of the function class $\mathcal{G}$ is up to the user, in general, it makes sense to use a flexible non-linear function class. A multi-layer neural network with any activation function could be a good fit here. We chose a 2-layer NN since the classification model is doing the heavy lifting of learning the features, so for $g$ we do not need a highly complex network. We hope our response resolves the queries. We are happy to answer any more questions you may have.
Fast Min-$\epsilon$ Segmented Regression using Constant-Time Segment Merging
Accept (poster)
Summary: This paper provides a heuristic method to compute segmented regression. Instead of looking for the best segments directly, the algorithm finds as many segments as possible and then merges them until only k segments are left. The authors evaluate the algorithm on synthetic datasets. The authors' method shows runtime or performance improvement compared to previous methods. Claims And Evidence: The authors' claims are proved by runtime analysis and experimental results. Methods And Evaluation Criteria: The MSE used in the evaluation is very standard. Theoretical Claims: The runtime analysis looks good to me. Experimental Designs Or Analyses: N/A Supplementary Material: N/A Relation To Broader Scientific Literature: I think this paper proposes an interesting heuristic method for segmented regression Essential References Not Discussed: I think it would be nice for the authors to discuss a bit about the relation between segmented regression and isotonic regression in the literature review. Other Strengths And Weaknesses: I am wondering if the authors can provide any theoretical guarantee for accuracy. I think this would definitely make the paper more interesting. Other Comments Or Suggestions: N/A Questions For Authors: I am wondering if the authors can mention a bit more about why segmented regression is an interesting problem, like what applications it has. Code Of Conduct: Affirmed. Overall Recommendation: 3
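The "merge segments until only k are left" idea in the summary relies on cheap merges; a minimal sketch (assuming piecewise-constant segment models, which is simpler than the paper's general setting, with hypothetical names) shows why merging two adjacent segments and evaluating the resulting RSS is O(1) once each segment carries sufficient statistics:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    n: int      # number of samples
    sy: float   # sum of y
    syy: float  # sum of y^2

    def rss(self):
        # RSS of the best constant fit: sum (y - mean)^2 = syy - sy^2 / n
        return self.syy - self.sy * self.sy / self.n

def merge(a, b):
    """O(1) merge of two adjacent segments' sufficient statistics."""
    return Segment(a.n + b.n, a.sy + b.sy, a.syy + b.syy)

def merge_cost(a, b):
    """RSS increase incurred by merging, usable as a greedy priority."""
    return merge(a, b).rss() - a.rss() - b.rss()
```

A min-heap keyed on `merge_cost` over adjacent pairs then yields a greedy $\mathcal{O}(n \log n)$ merge loop; linear segment models need the extra sums $\sum x$, $\sum x^2$, $\sum xy$, but the same constant-time update applies.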
Rebuttal 1: Rebuttal: Thank you for reviewing our work, for the constructive improvement ideas and for pointing out the interesting direction of isotonic regression. **Relation to isotonic regression** (we assume that the review refers to isotonic regression): While isotonic regression is also based on an ordered sample set, it results in a predicted increasing constant per sample, using linear interpolation between the sample positions. The resulting regression is a (not necessarily strict) monotonically increasing piecewise linear function, where the number of pieces (i.e. segments) can be very high ($k \le n$). This problem setting is quite different from ours, due to (a) the dynamic number of segments, (b) the enforced continuity, (c) the limitation to constant (or linear) models and (d) the restriction to monotonically increasing regressions. While the relation of isotonic regression to our approach is very interesting, we consider it to be a different type of problem. **Theoretical guarantees**: Indeed, based on the way our algorithm works, we expect the accuracy guarantees of Acharya et al. (2016) to also hold true for our algorithm. At the same time, it is only possible to prove these statements for data with a very specific noise distribution. We instead focus on experiments with known distributions and real-world data with very unusual attributes to show the accuracy of our algorithm relative to the competing approaches. **Applications of segmented regression**: Beyond the use cases listed in our introduction (ranging from ecology to econometrics to clinical guidelines), multiple approaches use these regressions for more efficient data structures [2,3]. A Nature Methods paper from this year [1] uses segmented regression as a step to model genetic data of tissue slices. In this example, it is not only necessary to have a small MSE, but to be able to get valuable breakpoint positions. 
We see regression in general as a fundamental building block in statistical analysis and machine learning. This can also be seen in the work of Diakonikolas et al. (2020), where segmented regression (Acharya et al. (2016)) was used to present a more efficient alternative to the CART-algorithm. [1] Chitra, Uthsav, et al. "Mapping the topography of spatial gene expression with interpretable deep learning." Nature Methods (2025): 1-12. [2] Galakatos, Alex, et al. "FITing-Tree: A data-aware index structure." Proceedings of the 2019 international conference on management of data. 2019. [3] Dai, Yifan, et al. "From WiscKey to bourbon: A learned index for Log-Structured merge trees." 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20). 2020.
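For readers unfamiliar with the contrast drawn in the rebuttal, isotonic regression is typically solved with the Pool Adjacent Violators algorithm; a compact sketch (standard textbook PAVA, not related to the paper's method) makes the differences visible: the fit is forced to be monotone, and the number of constant pieces is data-driven rather than fixed at $k$.

```python
def isotonic_fit(y):
    """Pool Adjacent Violators: least-squares non-decreasing fit to y.
    Each block stores [sum, count]; violating adjacent blocks are pooled."""
    blocks = []
    for v in y:
        blocks.append([float(v), 1])
        # pool while the last two block means violate monotonicity
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fit = []
    for s, c in blocks:
        fit.extend([s / c] * c)
    return fit
```

For example, `isotonic_fit([1, 3, 2])` pools the violating pair (3, 2) into their mean 2.5, yielding `[1.0, 2.5, 2.5]`; nothing fixes the number of resulting pieces in advance, unlike min-ε k-segment regression.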
Summary: The paper addresses min-epsilon segmented regression, where the goal is to minimize the mean squared error (MSE) for a given number of segments. While the optimal solution has O(n^2) complexity (Bai & Perron, 1998), heuristics like Acharya et al. (2016) improve efficiency to O(n) but often introduce significant errors. The authors propose a method that merges segments using precomputed matrices, achieving 1,000 times lower MSE and 100 times faster runtime on large datasets. While promising, further clarification on theoretical guarantees and comparisons with recent heuristics would strengthen the evaluation. Claims And Evidence: The claims are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed method and evaluation criteria make sense for the problem considered in the paper. Theoretical Claims: I reviewed them, but I didn't check the details. Experimental Designs Or Analyses: More comprehensive comparison results with the existing methods should be provided and discussed. Supplementary Material: Yes, I have reviewed the entire appendix. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths - The paper appears to be novel. - The paper is well-organized. Weaknesses - A more comprehensive comparison with existing methods should be provided and discussed, as the current results are relatively weak. - The authors state that the state-of-the-art (SOTA) method is Acharya et al. (2016), which seems unusual given that it was proposed nine years ago. Why have no more recent approaches been considered? Are there other baselines that could be included? Other Comments Or Suggestions: Please see Weaknesses. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your feedback. **Theoretical guarantees:** We would like to highlight that, contrary to the reviewer's summary, the approach by Acharya et al. (2016), for a fixed value of $d$, improves the runtime to $\mathcal{O}(n\log{n})$, not $\mathcal{O}(n)$. This can be seen in Table 1 of our paper, and is stated in Section 3 of the paper by Acharya et al. (2016). **Related work and state of the art:** We discuss multiple different approaches in our related work. As discussed there, we believe that it would not be fair to compare against those, because they solve related, yet different problems. This is clarified in further detail in our answer to another review (please see our answer to Reviewer iMfW). While the approach of Acharya et al. (2016) is used and extended to solve further problems (e.g., segmentation with multidimensional breakpoints), it represents the state of the art for min-$\epsilon$ segmented regression, together with the DP baseline. This is further underlined by it still being used in recent applications, for instance in a work [1] on gene expression analysis published in Nature Methods, which employs segmented regression to distinguish between different cell types, which we will mention and cite in the camera-ready paper. [1] Chitra, Uthsav, et al. "Mapping the topography of spatial gene expression with interpretable deep learning." Nature Methods (2025): 1-12.
Summary: This paper proposes a new heuristic method for the $\min$-$\epsilon$ segmented regression problem. Some prior works propose two types of algorithms for this problem. One line of work (Bai & Perron, 1998; Yamamoto & Perron, 2013) gives optimal solutions for this problem with computational complexity $\mathcal{O}(n^2)$, where $n$ is the number of samples. Another work (Acharya et al., 2016) focuses on the case where $n$ is large and provides a heuristic algorithm with computational complexity $\mathcal{O}(n \log n)$, though it can result in large errors. This paper proposes a new heuristic method that achieves: (1) computational complexity $\mathcal{O}(n \log n)$, and (2) solution quality (empirically) comparable to the optimal solutions. This paper provides computational complexity analysis for the proposed method and experiments showing the effectiveness of the proposed methods in terms of solution quality and running time compared to prior works. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. However, the theoretical side of this paper is minimal. Experimental Designs Or Analyses: Yes. This paper contains two main experiments: (1) one with synthetic data, where they generate piecewise continuous functions with the number of pieces $k = 6$, and create data sets of $n$ points using the generated functions plus Gaussian noise. The results of this experiment are shown in Figure 3. (2) Another experiment with real data of 43 hours of CPU usage, measured every two seconds, which resulted in 70,607 samples. The results of this experiment are shown in Table 2 and Figure 5. Supplementary Material: Yes. There are two sections in the Appendix. Section A describes the details of the matrices used in the proposed algorithm (mentioned in Section 4.2). Section B describes how to use the matrix $C$ to calculate the residual sum of squares (RSS). 
Relation To Broader Scientific Literature: Some prior works propose two types of algorithms for this problem. One line of work (Bai & Perron, 1998; Yamamoto & Perron, 2013) gives optimal solutions for this problem with computational complexity $\mathcal{O}(n^2)$, where $n$ is the number of samples. Another work (Acharya et al., 2016) focuses on the case where $n$ is large and provides a heuristic algorithm with computational complexity $\mathcal{O}(n \log n)$, though it can result in large errors. This paper proposes a new heuristic method that achieves: (1) computational complexity $\mathcal{O}(n \log n)$, and (2) solution quality (empirically) comparable to the optimal solutions. This paper provides computational complexity analysis for the proposed method and experiments showing the effectiveness of the proposed methods in terms of solution quality and running time compared to prior works. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Firstly, I am not an expert in this field, so my opinion should be taken lightly. ## Strengths The proposed method for $\min$-$\epsilon$ $k$-segment regression is computationally efficient (comparable to the prior heuristic approach) and achieves good performance (shown empirically, comparable to the optimal solution). I also appreciate that the paper presentation is straightforward to follow. ## Weaknesses To me, the contribution of this paper is somewhat niche, and the contribution in the "machine learning" aspect does not meet the bar of a top ML conference. Here are the reasons. ### The problem considered is (somewhat) niche 1. This paper considers the problem of $\min$-$\epsilon$ $k$-segmented regression. Though dealing with multi-dimensional data, the algorithm first has to choose a single axis/coefficient of the input and then perform segmenting along that particular axis. I know that this is not an issue with some specific types of data (e.g., time series), but it restricts the application of this method. 2. 
Many other methods can deal with this general case (e.g., classification and regression tree (CART), multivariate adaptive regression splines (MARS), or regression tree, to name but a few), which is segmenting using multiple axes. I know that those methods have their own limitations, but the point I am making here is that the setting this paper considers is just a sub-case of a much broader problem. 3. Besides, even if one restricts themselves to choosing only one axis to perform segmenting, choosing the axis itself is also a challenge. ### The main contribution lies in the computational aspect; contribution in the machine learning aspect is limited 1. As far as I understand, the general idea of performing redundant segmentations along an axis and then merging was introduced by Acharya et al. (2016). I know it is controversial, but I think that the novelty of this paper lies in introducing a smarter way of merging segments that leads to improved solution quality (empirically). 2. Moreover, though claiming that this method works well on large datasets, it is only in terms of the number of samples $n$. As said, there is another critical factor in their computation cost, which is $\mathcal{O}(nd^2)$, where $d$ is the dimension of the input. This can scale up to $\mathcal{O}(nd^3)$ if using a standard implementation for matrix inversion, which is bad for high-dimensional data. Maybe that is why the experiments demonstrated in this paper were only conducted with data with small $d$. 3. Most importantly, the machine learning contribution of this paper is limited. For example, I want to see how the predicted (segmented) functions perform on unseen data or extrapolate to data that lies outside the interval. I know that this is the limitation of the considered problem itself, not necessarily of the proposed method, and it is just my taste. Other Comments Or Suggestions: I reiterate that I am not an expert in this field. 
Though I do not underestimate the contribution of this paper, I believe that the contribution in the ML aspect is not enough for an ICML publication. Since the main contribution lies in the algorithmic/computational aspect, I believe this paper should be submitted to other venues focusing on those aspects (e.g., STOC, SODA, or (somewhat) lesser conference like ITCS, etc.), or venues for data-mining (ICDM), or a more generic one like AAAI. **Though my overall recommendation is Weak Accept, my opinion is actually borderline, and I believe that the AC and other reviewers will have better judgement than me.** ### Other minor comments 1. At the beginning of page 2, please use itemize for the first paragraph (since the two contribution paragraphs also use itemize). Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for reviewing our work and for the suggestions for improvement. **Broader subject relevance:** We consider the topic of regression to be a fundamental building block for statistical analysis and machine learning. As mentioned in Section 7, Diakonikolas et al. (2020) have shown that an approach for multidimensional breakpoints based on the algorithm published by Acharya et al. at ICML 2016 outperforms CART, even when using constant segment models (for fairness against CART's limitations). This suggests that fundamentally improving the basic segmented regression approach can lead to an improvement for those other problems. **Contribution:** From our perspective, our algorithm first and foremost drastically increases accuracy, rather than aiming to reduce computational complexity. If, for a given use case, sufficient resources are available, it is already possible to use the dynamic program. Our evaluation shows that this is not favorable for large datasets, as the number of samples would need to be reduced. In this case, the heuristics perform better. While our solution seems to be faster than Acharya's solution, the main contribution is its much higher accuracy and the focus on finding the correct breakpoints of the underlying data distribution. Therefore, this work presents the first suitable algorithm if a time complexity of $O(n^2)$ is not acceptable and the exact positions of the breakpoints are important. It is also important to mention that our algorithm does not scale with $\mathcal{O}(nd^3 + n\log{n})$. This is true only for the specific implementation used in the evaluation, and it was mentioned only for transparency reasons. As stated in Section 5, our algorithm scales with $\mathcal{O}(nd^2+n\log{n})$. This is at least as good as the runtime complexity of Acharya's heuristic (cf. $\mathcal{O}(nd^2\log{n})$ in Section 3 of their paper).
The limitation regarding the number of dimensions is based on the OLS and affects all competing algorithms in the same way. The reasons for the small value of $d$ in our evaluation are that (a) linear segmented regression is one of the most common use cases and the best to illustrate, (b) $d$ and $k$ are typically constants that are chosen to model the data, which do not change, and (c) this allows us to perform a comparable evaluation to Acharya et al. (2016). **Relevance for ICML conference:** As stated above, we consider regression a fundamental aspect of machine learning that can be used on its own, but is also a building block to efficiently solve other problems in the domain of machine learning (as shown by Diakonikolas et al.). This is further underlined by the fact that the most relevant state-of-the-art approach, Acharya et al. (2016), was also published at ICML.
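For readers less familiar with the setting, the exact dynamic program (DP) referenced throughout this exchange can be sketched in a few lines. The version below is a minimal piecewise-constant illustration (hypothetical helper names, not the paper's or Acharya et al.'s code) showing where the $\mathcal{O}(n^2)$ cost in $n$ comes from:

```python
import numpy as np

def k_segment_sse(y, k):
    """Exact min-error k-segment (piecewise-constant) regression via DP.

    dp[s][j] = minimal SSE of covering y[:j] with s segments, where the
    cost of one constant segment over y[i:j] is evaluated in O(1) using
    prefix sums.  Total time O(k * n^2) -- the quadratic-in-n baseline
    that the heuristics trade accuracy against.
    """
    n = len(y)
    s1 = np.concatenate(([0.0], np.cumsum(y)))             # prefix sums
    s2 = np.concatenate(([0.0], np.cumsum(np.square(y))))  # prefix sums of squares

    def cost(i, j):  # SSE of the best constant fit to y[i:j]
        tot = s1[j] - s1[i]
        return (s2[j] - s2[i]) - tot * tot / (j - i)

    INF = float("inf")
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for seg in range(1, k + 1):
        for j in range(1, n + 1):
            dp[seg][j] = min(dp[seg - 1][i] + cost(i, j) for i in range(j))
    return dp[k][n]
```

Replacing the constant-fit cost with an incrementally updated OLS fit gives the linear-segment variant at correspondingly higher cost in $d$.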
Summary: The authors present a new method and algorithm for min-$\epsilon$ segmented regression. The main contributions are primarily algorithmic but also related to software engineering, as the authors implement highly efficient programming techniques to enhance their implementation. The greedy algorithm they propose merges neighboring segments in constant time. The paper includes one real-data example and several synthetic benchmarks that demonstrate the efficiency of the method. ## Update After Rebuttal I stand by my original, high, score of the paper. I think it is valuable and interesting work and that the paper is well-written and showcases strong results for the method. The authors have also clarified some of my initial minor concerns. Claims And Evidence: The claims are generally convincing and the results are strong. The problem is well-motivated and the method is clearly explained. I would like to offer a couple of constructive suggestions: - On line 431, first column, you mention that your algorithm performs better than certain alternatives from R and Python. However, I don't see evidence of direct comparisons to these alternatives in the paper. If such comparisons were conducted, including these experimental results would strengthen your claims. If not, it might be helpful to clarify this statement and perhaps explain why these comparisons weren't included. **Edit: I understand that these might not be warranted after all, so please disregard this comment.** - It would be valuable to see an analysis of how well the theoretical complexity of your method is reflected in practical performance. Methods And Evaluation Criteria: The benchmarks presented are appropriate for the problem at hand. Additional benchmarks would further strengthen the paper, and perhaps the supplementary material could be utilized to provide these. Theoretical Claims: I briefly checked the complexity analysis and could not find any issues. 
Experimental Designs Or Analyses: The experiments are well-designed and the results are convincing. Supplementary Material: I reviewed all of the supplementary material and found it to be well-presented. The code is well-documented and well-engineered. I was able to compile the code and successfully run the tests, though I encountered some issues with `juliacall` on my machine that prevented me from running the benchmarks. It would be beneficial to include additional results in the supplementary material, perhaps exploring other real-world datasets. Relation To Broader Scientific Literature: The relevant preceding papers appear to be appropriately cited and discussed. Table 1 effectively summarizes previous related contributions. The contribution of the current paper is straightforward, aiming to improve complexity and practical performance of this method. Essential References Not Discussed: None that I could identify. Other Strengths And Weaknesses: The paper is well-written and easy to follow, with a clear structure and helpful illustrations that aid the reader in understanding the method. The plots are well-designed and clearly convey the relevant information. Other Comments Or Suggestions: - Figure 5 is large (in file size) and could benefit from being rasterized to improve loading times of the PDF. - On line 307, second column, you suggest that results should not change by orders of magnitude due to the implementation language. While this is likely true when comparing Julia to C++, the statement may not hold for all language comparisons. A slight rephrasing might more accurately reflect this nuance. Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you very much for your valuable and constructive feedback regarding our paper, including the code and experiment setting. **Evaluation design and constraints:** The alternatives in our 'related work' section solve a slightly different problem, e.g., by enforcing continuity of the resulting piecewise function. While we did not conduct a full evaluation, we first tried to analyze the shown real-world data using these algorithms, but the runtimes were intractably high for our use case (5-10 minutes for pwlf, compared to the sub-second runtimes of the presented heuristics, see Table 2). At the same time, the results were worse than the regression functions from the Acharya et al. heuristic, sometimes aborting because they did not converge to a solution at all. Directly comparing these runtimes or errors to our work in the evaluation seemed unfair, since the additional constraints change the problem. For the use case considered here, it does not make sense to choose these algorithms over any of the evaluated ones. We will rephrase the corresponding paragraph to express this setting more clearly. **Figure 5 and minor clarification:** We appreciate the additional feedback and the constructive suggestions to improve our work very much. Our goal was to plot the dataset in Figure 5 as accurately as possible. Still, we will change this by rasterizing the data or the whole image since the raw data is available digitally. Furthermore, we will definitely rephrase our statement regarding the programming language performance. It was meant with regards to the rest of the paragraph, i.e., in the context of our evaluation. We see that this sentence can be misleading if it is quoted separately. --- Rebuttal Comment 1.1: Comment: Thanks for the reply! I will make some minor updates to the review regarding related work. I am somewhat disappointed, however, that you haven't indulged any of the reviewer's requests (mine included) for additional experiments. 
I think reviewer 9p1E raises some valid concerns as well regarding this point and the lack of investigation with respect to the dimension $d$. I have trouble seeing why *every* comparison would need to be comparable with respect to Acharya et al. (2016). Could you not just include an experiment to investigate the performance of your method by itself? --- Reply to Comment 1.1.1: Comment: Thank you for the elaboration. It was not our intention to ignore additional evaluation settings. Our main concern in the answer to reviewer 9p1E was to clarify the theoretical runtime complexity of our solution. Of course, equal (or even better) runtime complexity does not necessarily result in better practical compute performance for the chosen parameters. We did not consider $d$ the most important metric in our evaluation strategy since it needs to be much smaller than $n$ anyway, and the main advantage over the competing accurate solution is the runtime relative to the number of samples. Still, we do see that evaluations regarding parameter $d$ are also relevant, even if it is a parameter that is chosen when deciding on the modeling function. **Evaluation Update:** In the meantime, we made small changes to the already supplied source code and evaluated the runtime relative to parameter $d$ in the range $[2..256]$. The experiment aligns with Figure 3 of our paper; we generated 100 random curves for every setting with a specific number of dimensions $d$ and $k=4$. In terms of accuracy, our algorithm is still on par with DP and outperforms Acharya et al., despite not using more segments and not using knowledge about the noise distribution in the data. We believe that this further strengthens our claims on the performance and accuracy of our algorithm, and we also hope that it increases the value proposition for the ICML community. We will gladly share this analysis with a more detailed explanation supplementary to our paper (together with the updated code and data).
A preliminary version of the evaluation figure is available at: https://icml-segreg-fig-eval-dim.tiiny.site/ **Notes on $n$ when Evaluating the Parameter $d$:** It is important to note that using a fixed $n$ is somewhat unrepresentative at high values of $d$. Given that Acharya 4k places up to 16 segments in this case, at $d=256$ and $n=4096$ there would only be one plausible solution left (16 segments just barely fit, as no segment model should be underdefined, so the result is a perfect (over-)fit). That is, the solution space - the size of the set of sane breakpoint positions - is drastically reduced for high values of $d$, and there is nothing really left to decide or optimize, resulting in increasingly similar runtimes. This can be seen in the figure (see above): the relative speedup of Acharya starts to increase again at higher values of $d$ for constant $n$, and the increase in speedup is much weaker for $n=8192$ than for $n=4096$. In another experiment, we set $n = 64 \cdot d$. This prevents the solution space from collapsing as we scale up $d$. Since Acharya's approach scales similarly to our solution in terms of $n$, we can thereby analyze the scaling in terms of $d$. The analysis shows that we are always faster in the evaluated range, but start to scale worse for $d \ge 32$ without the Sherman–Morrison rank-1 update (R1U), when the cost of $d$ starts to dominate the runtime. This corresponds to our theoretical analysis, which states the $\mathcal{O}(d^3)$ runtime for our old implementation. Using R1U as described in the paper results in a line parallel to Acharya et al.'s, which indicates that both algorithms scale identically (with $\mathcal{O}(d^2)$ according to our complexity analysis as well as Acharya et al.'s) when $d$ starts to dominate the runtime cost (compared to the number of samples, $\mathcal{O}(n\log{n})$). This corroborates our theoretical analysis of the computation time relative to Acharya et al.'s.
**Notes on the Implementation of DP:** The DP relies heavily on precomputing an inverse matrix and then using R1U to achieve a time complexity of $\mathcal{O}(d^2)$. Acharya et al.'s implementation of the DP sometimes struggles with accuracy for higher $d$, even failing early on in case of $n=8192$. We made minimal modifications to directly solve the linear equations instead of using the inverse matrix to have a reasonable baseline in these settings. While this results in a theoretical time complexity of $\mathcal{O}(d^3)$, this did not change the practical runtime in a substantial way for DP. Using R1U on our algorithm does not impact accuracy, but it does reduce the runtime, especially for $d \ge 32$.
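For context on the R1U discussed above: the Sherman–Morrison identity updates the inverse Gram matrix $G^{-1}$ after appending one sample $x$ in $\mathcal{O}(d^2)$, avoiding a fresh $\mathcal{O}(d^3)$ inversion. A minimal sketch (illustrative only, not the authors' implementation):

```python
import numpy as np

def r1u_inverse(G_inv, x):
    """Sherman-Morrison rank-1 update: returns (G + x x^T)^{-1}.

    Given G^{-1}, the update costs O(d^2), versus the O(d^3) needed
    to re-invert G + x x^T from scratch.
    """
    Gx = G_inv @ x
    return G_inv - np.outer(Gx, Gx) / (1.0 + x @ Gx)
```

In an incremental OLS, $G = X^\top X$ is the Gram matrix, and the identity used is $(G + xx^\top)^{-1} = G^{-1} - \frac{G^{-1}x\,x^\top G^{-1}}{1 + x^\top G^{-1} x}$.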
MTL-UE: Learning to Learn Nothing for Multi-Task Learning
Accept (poster)
Summary: This paper introduces MTL-UE, the first unified framework for creating unlearnable examples tailored for multi-task data and models. By leveraging a generator-based structure with label priors and class-wise embeddings, MTL-UE enhances attack robustness through intra-task and inter-task regularization. It supports dense prediction tasks and integrates seamlessly with existing unlearnable methods, demonstrating superior performance across diverse datasets, models, and task-weighting strategies. ## update after rebuttal Thanks for the author's reply. All my concerns have been addressed, so I still recommend that this paper be accepted. Claims And Evidence: The paper presents MTL-UE as an effective framework for generating unlearnable examples in multi-task learning (MTL), and its claims are well-supported by extensive experiments. The results across multiple datasets, architectures, and task-weighting strategies demonstrate that MTL-UE consistently outperforms existing unlearnable example methods and that its embedding-based perturbation strategy effectively reduces intra-class variance, strengthening its core contributions. While the evidence is strong, a few areas could be further clarified. For example, the paper highlights MTL-UE’s plug-and-play compatibility with existing UE methods, but additional discussion on potential implementation challenges would be helpful. Overall, I find it sufficiently clear; even though I encountered some concerns while reading, I later discovered that the authors addressed and discussed these issues in the subsequent text. Methods And Evaluation Criteria: The proposed method and evaluation criteria are well-suited for protecting MTL data from unauthorized use. The evaluation is rigorous, covering various task complexities with datasets for binary classification (CelebA, ChestX-ray14) and multi-class/dense prediction tasks (UTKFace, NYUv2). 
The inclusion of multiple MTL task-weighting strategies strengthens the study, as different weightings impact task interactions. The paper also assesses transferability across architectures, though results show slightly weaker performance with ViTs, warranting further exploration. Theoretical Claims: The paper leans on intuitive justifications rather than formal theoretical claims. The key argument is that embedding regularization (Intra-ER and Inter-ER) enhances perturbation effectiveness by controlling feature space alignment. This is reasonable and supported mainly by empirical evidence rather than formal proofs. A theoretical perspective on why MTL-UE improves attack transferability across tasks and architectures could strengthen the work. The generator-based approach is argued to enable stronger feature-level control, but a formal analysis would reinforce this claim. Additionally, bounding the perturbation space with embeddings is a novel choice, and further discussion on its impact on optimization dynamics could add depth. Experimental Designs Or Analyses: The experimental setup is thorough, evaluating MTL-UE across datasets, architectures, and task configurations. The inclusion of both MTL and STL models ensures a comprehensive assessment, and the choice of task-weighting strategies is relevant for understanding shared representations. The ablation study on intra-task and inter-task embedding regularization clearly clarifies their contributions. However, further analysis of how perturbations affect different task types within the same dataset (e.g., classification vs. regression in NYUv2) would add insight. This paper also includes results for smaller perturbations, which enhance imperceptibility while maintaining the unlearnability. Supplementary Material: I have reviewed the supplementary material and appendix, which contain additional results, analyses, ablation studies, and visualized findings. All of these are appropriately referenced in the main paper. 
Relation To Broader Scientific Literature: This paper primarily focuses on benchmarking datasets such as CIFAR-10. While the methods presented could be applied to other scientific problems, their applicability would require further exploration and discussion. Essential References Not Discussed: I believe that the significant references have already been encompassed by the authors within the discussion. Other Strengths And Weaknesses: Strengths: This paper provides extensive analysis and explores additional scenarios. Beyond the main results, it includes feature visualizations, with Fig. 7 illustrating how the embedding regularizations enhance feature separation. Additionally, the paper evaluates its robustness against SOTA defenses. The concept of partial data and partial task protection is particularly intriguing, as it enhances the method's practicality and applicability in real-world scenarios. This work covers classification and dense prediction tasks across multiple datasets. Weakness: Additional discussion on scenarios involving fine-tuning (using a pretrained feature encoder) and advanced augmentations would be valuable. A line-by-line description of Algorithm 1 would provide readers with a better understanding. Other Comments Or Suggestions: Including more examples of partial task protection, where only one task needs to be protected, would be beneficial. It would also be insightful to explore how MTL performs when the number of tasks to be protected is fewer than the unprotected tasks. Overall, I think this manuscript is good and fully deserves to be accepted by the ICML community. Tables 11 and 12 are missing references in Sections B.2 and B.3, which should be addressed. Questions For Authors: See weakness and comments as above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Q1**. Scenarios involving fine-tuning (using a pretrained feature encoder) and advanced augmentations. **A1**. Thanks for the suggestions. We conducted experiments on the proposed scenarios. The table presents results of training MTL models on UTKFace with ImageNet-pretrained-encoder fine-tuning or UEraser [1] as advanced augmentations. Fine-tuning showed no impact on any method, while strong augmentations affected all, with our attacks still outperforming baselines.

| | None | Finetuning | UEraser |
| ---------- | --------- | ---------- | --------- |
| Clean | 78.97 | 79.10 | 79.84 |
| LSP (P) | 28.58 | 27.37 | 69.31 |
| AR (P) | 27.41 | 23.64 | 54.23 |
| LSP (A) | 28.16 | 33.99 | 79.21 |
| AR (A) | 36.79 | 36.98 | 74.96 |
| EM | 31.74 | 34.03 | 73.19 |
| TAP | 39.29 | 40.21 | 61.80 |
| SEP | 40.70 | 40.87 | 61.29 |
| MTL-UE-EM | 25.84 | 30.37 | 52.50 |
| MTL-UE-TAP | **19.84** | 21.08 | **32.36** |
| MTL-UE-SEP | 21.19 | **20.18** | 38.47 |

**Q2**. Line-by-line description of Algorithm 1. **A2**. Thanks for your suggestions! We’ll add a detailed description in the updated version. **Q3**. Examples of partial task protection, where only one task is protected. **A3**. Following Table 8, we show single-task protection results. We can see that for STL models, unprotected tasks match clean data accuracy, while protected tasks perform like those trained to protect all tasks. On MTL models, even unprotected tasks show somewhat degraded performance, likely due to the shared encoder learning both benign and spurious features.
| Model $\rightarrow$ | MTL | STL | | ------------------------------------------------ | ------------------- | ------------------- | | Protected task $\downarrow$; Metric $\rightarrow$ | Age, Race, Gender | Age, Race, Gender | | None | 60.32, 84.07, 92.51 | 60.46, 84.45, 91.86 | | Age | 20.84, 80.51, 89.60 | 13.90, 82.78, 90.30 | | Race | 55.70, 31.35, 91.35 | 57.47, 17.45, 90.57 | | Gender | 56.31, 81.14, 60.59 | 57.28, 82.51, 50.38 | | All | 7.28, 16.20, 40.08 | 7.26, 21.84, 55.61 | **Q4**. Tables 11 and 12 are missing references in Sections B.2 and B.3, which should be addressed. **A4**. Thanks for your suggestions. We will fix it in the updated version. [1] Learning the unlearnable: Adversarial augmentations suppress unlearnable example attacks. ICCVW 2023 --- Rebuttal Comment 1.1: Comment: Thanks for the author's reply. All my concerns have been addressed, so I still recommend that this paper be accepted. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate the reviewer’s positive feedback and recognition of the contributions in our paper.
Summary: This paper introduces MTL-UE, the first framework for generating unlearnable examples (UEs) tailored for multi-task learning (MTL) models. While existing UE methods focus on single-task learning (STL) to prevent unauthorized training on personal data, modern AI increasingly relies on generalist MTL models. This work addresses this gap by introducing a generator-based approach with class-wise feature embeddings and embedding regularization, improving attack effectiveness and robustness. It supports dense prediction tasks, integrates seamlessly with existing surrogate-dependent UE methods, and enables partial task and data protection. Extensive experiments demonstrate its effectiveness over baseline UE methods across multiple backbones and task-weighting strategies. Claims And Evidence: Overall, the claims in the paper are well-supported by extensive empirical evidence, including evaluations across multiple datasets, model architectures, and task-weighting strategies. The authors make several key claims, such as (1) MTL-UE improves attack effectiveness for MTL models, (2) it reduces intra-class variance, (3) it is robust across different datasets and architectures, and (4) it can generalize well to dense prediction tasks. These claims are backed by thorough experiments on four MTL datasets (CelebA, ChestX-ray14, UTKFace, NYUv2), comparisons against five baseline UE methods, and experiments with five different MTL task-weighting strategies. Methods And Evaluation Criteria: The proposed methods and evaluation criteria align well with generating unlearnable examples (UEs) for multi-task learning (MTL). The evaluation spans four diverse MTL datasets (CelebA, ChestX-ray14, UTKFace, NYUv2), covering both classification and dense prediction tasks. Additionally, multiple model backbones from CNNs to Transformers are evaluated to strengthen the claim of transferability. This work also benchmarks several previous UE methods on multi-task learning.
Theoretical Claims: Some theoretical justifications are provided in this work for MTL-UE. The embedding regularization techniques (Intra-ER and Inter-ER) are based on the idea that reducing intra-class variance and increasing inter-class separation enhances perturbation effectiveness. While no formal proofs are given, empirical evidence supports this claim. MTL-UE’s perturbation generation utilizes class-wise embeddings to constrain the search space, which is theoretically intuitive. While its strong empirical results support the approach, a more in-depth mathematical analysis of its impact on robustness across tasks could further reinforce the findings. Experimental Designs Or Analyses: The paper’s experimental design is well-structured for evaluating UEs in MTL. It includes four datasets (CelebA, ChestX-ray14, UTKFace, NYUv2), five task-weighting strategies, and five model architectures, ensuring broad applicability. Meanwhile, the comparison with multiple baselines (LSP, AR, EM, TAP, SEP) provides a fair assessment of MTL-UE. While the impact of task numbers is analyzed (Figure 5), a deeper exploration of why UEs perform better in MTL than STL would add clarity. Additionally, a direct runtime comparison with baselines would better support claims of computational efficiency. Supplementary Material: The appendix presents additional experimental results and findings, all referenced within the main paper, offering further evidence to support the conclusions. Relation To Broader Scientific Literature: NA Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: 1. The paper is well-structured and easy to follow, with clear notations and a well-explained methodology. 2. The proposed method is well-justified, with clear relevance to real-world scenarios. This work is interesting and stands out as the first to explore unlearnable examples in the context of multi-task learning.
3. This work initially built a benchmark and identified key weaknesses in existing STL UE methods when applied to MTL. 4. The experimental results are comprehensive, demonstrating effectiveness across multiple datasets and advanced application scenarios. Weakness: 1. As mentioned in the paper, the performance of MTL-UE on STL could be further enhanced when the number of tasks is very large. It would be helpful if this paper could provide more analysis on the influence of the number of tasks on UE performance. Other Comments Or Suggestions: 1. The paper mentions that MTL-UE-TAP and MTL-UE-SEP exhibit lower transferability to models with ViT-B as the backbone because they use ResNet-18 as the surrogate model, resulting in a significant gap between the architectures. How would these methods perform if ViT-B were used as the backbone for the surrogate models? 2. Among the 40 tasks in CelebA, are there specific tasks that are significantly easier or harder to degrade? Questions For Authors: Please refer to the comments and suggestions above. Conducting additional experiments is recommended to further validate the findings. Code Of Conduct: Affirmed. Overall Recommendation: 5
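To make the Intra-ER/Inter-ER intuition discussed in this review concrete, here is a toy numpy sketch of the two quantities (a hypothetical form for illustration only; the paper's actual regularizers and their weighting may differ):

```python
import numpy as np

def embedding_regularizers(emb, labels):
    """Toy Intra-ER / Inter-ER quantities (illustrative only).

    intra: mean within-class squared distance to the class centroid
           (Intra-ER pushes this down, reducing intra-class variance).
    inter: mean squared distance between distinct class centroids
           (Inter-ER pushes this up, so it would enter a loss negated).
    """
    classes = np.unique(labels)
    centroids = np.stack([emb[labels == c].mean(axis=0) for c in classes])
    intra = float(np.mean([
        np.sum((emb[labels == c] - centroids[i]) ** 2) / (labels == c).sum()
        for i, c in enumerate(classes)
    ]))
    diffs = centroids[:, None, :] - centroids[None, :, :]
    k = len(classes)
    inter = float(np.sum(diffs ** 2) / max(k * (k - 1), 1))
    return intra, inter
```

Minimizing `intra` while maximizing `inter` corresponds to the reviewers' reading that tight, well-separated class-wise perturbation embeddings make the spurious features easier for the victim model to latch onto.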
Rebuttal 1: Rebuttal: **Q1**. How would these methods perform if ViT-B were used as the backbone for the surrogate models? **A1**. In addition to the results in Table 6, we conducted experiments using ViT-B as the backbone for the surrogate models on the CelebA dataset. The results in the table below show that this choice improves performance when the victim models also use ViT-B but can negatively impact performance when the victim models are CNN-based. This suggests a trade-off in selecting CNNs or ViTs for the surrogate models, depending on the target victim model's backbone. |Model $\rightarrow$|MTL|STL| |-| - | - | |Backbone $\rightarrow$|RN-18, RN-50, VGG-16, DN-121, ViT-B, Avg.|RN-18, RN-50, VGG-16, DN-121, ViT-B, Avg.| |EM|88.86, 88.75, 84.22, 80.71, 88.67, 86.24|90.11, 89.93, 89.46, 86.42, 86.03, 88.39| |TAP|85.55, 86.32, 83.28, 79.38, 86.32, 84.17|88.42, 88.14, 87.97, 87.46, 85.28, 87.45| |SEP| 78.67, 81.24, 79.41, 76.55, 79.33, 79.04|87.06, 87.67, 83.36, 83.50, 85.93, 85.50| |MTL-UE-EM| 73.68, 73.65, 74.77, 75.85, 74.01, 74.39|78.06, 77.87, 78.72, 77.85, 78.98, 78.29 | |MTL-UE-TAP| 68.91, 69.30, 69.40, **66.36**, 73.56, 69.50|76.32, 77.27, 75.13, 70.08, 80.92, 75.94| |MTL-UE-SEP| **53.79**, **59.63**, **63.37**, 71.49, **64.01**, **62.45** | **73.12**, **74.45**, **72.71**, **64.45**, **78.66**, **72.67** | **Q2**. Among the 40 tasks in CelebA, are there specific tasks that are significantly easier or harder to degrade? **A2**. We compute the accuracy drops for all 40 tasks when training STL models on MTL-UE compared to models trained on clean data. In MTL-UE-EM, tasks 38, 35, and 24 experience the largest drops (-58.74, -58.03, -44.39), while tasks 23, 15, and 39 are least affected (-0.48, -0.70, -1.00). In MTL-UE-TAP, tasks 25, 1, and 40 degrade the most (-80.22, -67.02, -61.90), whereas tasks 36, 38, and 27 show minimal impact (+0.02, 0.00, -0.32). 
In MTL-UE-SEP, tasks 15, 37, and 2 suffer the highest degradation (-61.94, -49.69, -43.03), while tasks 26, 27, and 14 are least affected (+0.09, +0.02, 0.00). These results suggest that the susceptibility of tasks to degradation varies significantly across different base UE strategies. Tasks experiencing the largest drops may rely on more vulnerable feature representations, making them easier to disrupt. In contrast, tasks with minimal impact likely depend on more robust or less perturbed features. **Q3**. More analysis on the influence of the number of tasks on the UEs performance. **A3**. Thank you for the suggestion! We provide a detailed analysis of how the number of tasks affects UE performance. As shown in Figure 5, when training MTL models, increasing the number of tasks to 10 causes only a slight performance drop (minor accuracy increase). From 10 to 40 tasks, MTL-UE remains stable, maintaining ~60% accuracy for both MTL-UE-TAP and MTL-UE-SEP. This stability is likely due to the shared encoder in MTL models (for both the UE generation and victim model training), which facilitates learning spurious features across all tasks, preventing performance degradation. However, when training STL models with varying task numbers, MTL-UE performance gradually declines as the number of tasks increases from 10 to 20 and 20 to 40. This may result from a mismatch between the UE generation and victim model training, as the former uses MTL surrogate models while the latter adopts STL models. We will include this analysis in the updated version. **Q4**. A direct runtime comparison with baselines. **A4**. Thanks for your suggestions. The table below provides a direct runtime comparison for generating perturbations across all competing methods in terms of hours. The generation process was conducted on a single RTX 4090 using the CelebA dataset (with ViT as the backbone of the surrogate MTL model) and the NYUv2 dataset. 
Notably, LSP and AR, which use predefined patterns, require almost no time for perturbation generation. The results indicate that our MTL-UE has a comparable computational cost to the base UE methods and is sometimes more efficient. |Method $\downarrow$|CelebA dataset $\downarrow$|NYUv2 dataset $\downarrow$| |-|-|-| |LSP|Almost 0|-| |AR|Almost 0|-| |EM|7.35|3.43| |TAP|4.81|3.36| |SEP|25.01|26.14| |MTL-UE-EM|8.55|3.75| |MTL-UE-TAP|4.55|2.90| |MTL-UE-SEP|22.55|24.87| --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. After carefully reviewing the comments from the other reviewers and the authors' rebuttal, my concerns have been well addressed. As a result, I have decided to raise my score to 5. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate the reviewer’s positive feedback and recognition of the contributions in our paper.
Summary: This paper proposes an effective method for generating unlearnable samples for multi-task learning (MTL), which uses a generator to produce perturbations instead of the traditional iterative method. The paper analyzes and validates the method's effectiveness in terms of both accuracy and robustness, and demonstrates it on several datasets. Claims And Evidence: The authors show the accuracy of existing UE methods in STL and MTL scenarios in Fig. 2 and find that directly migrating existing UE methods to MTL leads to some degree of accuracy degradation as k increases. This indicates the poor applicability of the existing methods, thus providing motivation for the method proposed in this paper. Methods And Evaluation Criteria: The proposed method improves the accuracy of UE to some extent. The test datasets include face and medical images, which have practical significance. Theoretical Claims: The author does not make explicit theoretical claims, so there are no relevant questions. Experimental Designs Or Analyses: Experiments are presented on multiple datasets and the results show that the proposed method outperforms existing methods, supporting the authors' conclusions. Supplementary Material: The complexity analysis in the supplementary material is not professional enough. The authors only provide the rounds of optimization, but seem to confuse the time complexity of the algorithm with the computational complexity. I am not quite sure of the exact relevance of the optimization steps to the computational complexity and would appreciate a clearer elaboration. Relation To Broader Scientific Literature: The authors found experimentally that UE performs poorly in multi-task learning and propose improvements accordingly. Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths: The authors identified the problem of low UE accuracy in MTL scenarios and provided an explanation. The authors proposed an effective UE method and validated it through extensive experiments. Weaknesses: 1. Insufficient coverage of existing literature. The authors' literature review of data poisoning methods is rather limited, covering work only up to 2023, while there are a number of newer studies in the field worth including. In addition, the experimental comparisons focus mainly on UE methods; the limited comparison with other data-poisoning methods fails to clarify the necessity of UE. 2. Challenges of the new issue are not significant across datasets. Figure 2 illustrates the task differences between MTL and STL on the CelebA dataset, but on ChestX-ray14 this difference does not seem to be significant. Especially on the clean data, where 0.75% and 91.1% accuracy are not in the same order of magnitude, it is not quite clear to me why there is such a large difference in task accuracy between the different datasets. 3. Questioning of method validity. The authors provide variants of MTL-UE on three different methods, but the impact of these changes varies dramatically across datasets. For example, the MTL-UE variant on TAP has an even higher average accuracy of the STL model than TAP when testing the ChestX-ray14 dataset. Some of the data suggest that the improvements of MTL-UE are limited and even not always effective in some cases. This contradicts the authors' claim of enhanced attack performance. 4. Insufficient technical novelty. The authors' main improvement is to replace labels with feature embeddings, which is simple and effective but weak in terms of novelty. I expect the authors to analyze the effectiveness of this improvement from a theoretical perspective, rather than just performing technical implementation and experimental evaluation. 5. The applicability of MTL tasks is unclear.
Although the authors include the performance of STL in their discussion, their motivation for the study seems to be based on the fact that there is some kind of generic challenge between the two. It is not clear to me exactly how this challenge relates to MTL, and I would appreciate a clearer clarification. 6. The question of the plausibility of attack scenarios. Could the authors please elaborate on the unlearning needs of MTL in real-life scenarios and the scope of application of this paper's methodology? This would help to better understand its practical application value. Other Comments Or Suggestions: No. Questions For Authors: See weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1**. Insufficient coverage of existing literature. A1. Thank you for the advice! We’ll add recent works on **data poisoning** in the related work. As we focus on UE, we add a comparison between UE and other poisoning attacks (the table below) in the updated paper to broaden the literature coverage and emphasize the need for UE. |Aspect|UE|Label Flipping|Backdoor Attacks|Availability Attacks (Partial Poisoning)|Targeted Data Poisoning| |-|-|-|-|-|-| |**Attack Objective**|Significantly reduce accuracy on clean data, causing near-random guesses|Decrease accuracy for all classes or specific classes|No impact on clean data, but manipulate predictions to a specific class for data with predefined triggers|Reduce accuracy on clean data, but to a lesser degree compared to UE|Misclassify specific target classes or instances| |**Nature of Poisoning**|Add subtle, imperceptible perturbations across the entire training dataset and maintain true labels|Modify a portion of dataset by changing labels only|Modify a portion of dataset with triggers (perturbations) and potential label changes|Modify a portion of dataset with unbounded perturbations and potential label changes|Modify a portion of dataset with unbounded perturbations and potential label changes| **Q2**. Challenges of new issues are not significant across datasets. A2. Following the original settings, we report accuracy (%) for CelebA and UTKFace, while for ChestX-ray14 the metric is AUC-ROC (see Sec. 5.1). The drop in AUC-ROC from 0.7577 to near 0.5 is significant. **Q3**. Questioning of method validity. A3. MTL-UE works well with surrogate-dependent UE methods, improving performance across most datasets. The exception is ChestX-ray14 with TAP and SEP, where the clean model does not perform well (AUC-ROC=0.75), limiting the success of adversarial attacks and, consequently, the effectiveness of adversarial-attack-based UE methods like TAP and SEP.
However, MTL-UE still improves on EM, boosts TAP and SEP in MTL, and matches STL results for TAP and SEP. **Q4**. Insufficient technical novelty. A4. MTL-UE innovates in UE for MTL by solving model misalignment with shared task embeddings, boosting STL performance. It optimizes at the distribution level using class-wise embeddings and allows flexible task protection without retraining. Theoretically, we introduce embedding regularizations (Intra-ER & Inter-ER) to maximize intra-task separation and promote inter-task independence, enhancing the learning of spurious features. See A5 for more details on the technical novelty to solve challenges. We’ll refine the theoretical analysis and revise the paper to clarify these. **Q5**. The applicability of MTL tasks is unclear. A5. MTL is vital in real-world applications like autonomous vehicles, which handle multiple tasks simultaneously. Research has advanced multi-task datasets and MTL models. While UE has been studied for STL, we are the first to tackle unauthorized training on multi-task data and protect data privacy. To bridge the gap, we adapt UE methods from STL to MTL and highlight key challenges: - Model Alignment Issue: GPU constraints limit UE generation to use one MTL model instead of multiple STL models (e.g., 40 for CelebA), causing misalignment when training STL victim models. In MTL-UE, task-wise embeddings align UE within each task, reducing performance degradation when training STL models compared to MTL ones. In CelebA, MTL-UE-EM shows minor MTL gains but major STL improvements. - Lack of Distribution-Level Optimization: Prior UE methods optimize samples independently, ignoring distribution-level effects. In MTL settings like CelebA (40 tasks), this issue worsens. MTL-UE uses class-wise embeddings to poison distributions per task, reducing intra-class variance and reinforcing spurious correlations, enhancing attack effectiveness.
- Re-optimize for different protected task sets: Prior sample-wise UE methods require re-optimizing perturbations for each combination of tasks to protect. MTL-UE, once optimized on all tasks, allows flexible task selection to protect without re-optimizing (Sec. 5.3). We'll add these in the paper. **Q6**. The question of the plausibility of attack scenarios. A6. Unlearning in MTL protects sensitive multi-task data from unauthorized training due to privacy concerns. In medical AI, MTL models use X-rays, MRIs, and patient histories for diagnosis, where unauthorized training risks privacy and ethics violations. Facial recognition models (e.g., CelebA) can enable surveillance and profiling. Social media analysis (e.g., Weibo-20) enables large-scale surveillance. Smart surveillance models (e.g., DukeMTMC) risk privacy infringement. MTL-UE prevents unauthorized use by introducing unlearnable perturbations, degrading MTL and STL model performance, thus enhancing data privacy and intellectual property protection. We'll add these in the paper. **Q7**. Evaluation of computational complexity. A7. Please refer to A4 for reviewer AfLP.
Summary: This work studies unlearnable examples (UE) for multi-task learning (MTL). The authors first evaluated baseline UE methods in the MTL scenario, showing that existing UE methods are not effective on MTL when more tasks are involved. Motivated by this observation, MTL-UE is proposed, taking both single-task and multi-task learning into account. Comprehensive MTL experiments are provided. Claims And Evidence: The intra-class variance of optimization-based sample-wise approaches (e.g., EM, TAP, and SEP) is high. The explanation makes sense to me. But would it be possible to modify, let’s say, sample-wise EM to fit the MTL objective by limiting the intra-class variance? EM is designed for STL, so it makes sense that it performs badly on MTL. It would be great if the authors could articulate the potential of modifying sample-wise optimization-based approaches to fit MTL. I think the class-wise noise makes the optimization of MTL UE easier, but it's not essential. Methods And Evaluation Criteria: The proposed methods and evaluations make sense. However, the exact evaluation criteria need to be further clarified. See the comments above regarding STL performance. Theoretical Claims: Theoretical analysis is not provided in this work. Experimental Designs Or Analyses: The experiments shown in Figure 2 can be further explained. How should the results on STL with task 40 be interpreted? Does it mean that the UEs are generated on one task and tested on all tasks? In particular, the claim that STL models are more robust to UE than MTL models can be further clarified. This result is important since patch-based AR outperforms other approaches, which is used as the basis of analysis in the rest of the paper. The baseline performance in Table 2 can be further clarified. Why is the baseline STL performance also limited? Does it follow the same manner as Figure 2? It would be great if the authors could clarify. I would anticipate that the baseline STL performance should be better than reported.
Supplementary Material: Yes. All of it. Relation To Broader Scientific Literature: None. Essential References Not Discussed: No. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: None. Questions For Authors: This work explores the application of unlearnable examples in multi-task learning. The observation is interesting, since STL unlearnable examples do not generalize in the MTL scenario. My concerns listed above can mainly be summarized as the following two points: 1. Can original sample-wise unlearnable examples also be modified to work under the MTL scenario. Except for class-wise noises making the optimization easier, what is the potential obstacle that makes sample-wise unlearnable examples not work? 2. The provided experimental results can be further clarified and interpreted. For example, what does STL mean in Figure 2? Why does baseline perform not well even on STL tasks? It would be great if the authors could clarify these questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1**. Further clarify the experimental results. What does STL mean in Figure 2, and why do the baseline methods perform poorly, even on STL? A1. The pipeline for UE has two stages: - **Stage 1**: UE generation process (Section 4.2). - **Stage 2**: UE performance evaluation, where generated UEs train victim models, and these models are tested on clean data. Stage 1 focuses on generating UE to protect **all tasks simultaneously** using a surrogate MTL model. Stage 2 evaluates in two ways: a) Train one MTL model for all tasks. b) Train individual STL models for each task, and average the evaluation results across all tasks. Figure 2 shows UE performance vs. the task number. For each x-axis point, only the first $K$ tasks are selected for both stages, and UE are generated to protect them. Stage 2 evaluates a) MTL (left) and b) STL (right). Other tables use full task sets with the above stages for a complete UE assessment. All baseline methods, designed for STL, perform poorly in Stage 2 for both MTL and STL, as Stage 1 generates UE for MTL models, not STL settings. Figure 2 also shows that when $K$ is small, these baselines perform well but degrade as $K$ increases due to greater misalignment between Stage 1 and Stage 2. We will clarify these points in the paper. --- **Q2**. Can original sample-wise UE be modified to work under the MTL scenario? Except for class-wise noises making the optimization easier, what is the potential obstacle that makes sample-wise UE not work? A2. To reduce the high intra-class variance in optimization-based methods, we add additional loss terms. As these methods use a surrogate MTL model during optimization, we define $L_{std}=[L_{std}^1,\ldots,L_{std}^D]$, where $L_{std}^d$ is defined in line 201 of the paper. We use two loss terms, $L_{std}^{mean}$ and $L_{std}^{max}$: the mean and maximum of $L_{std}$. In the perturbation optimization (Eqs.
(1) & (2)), we add $\lambda_1 L_{std}^{mean}+\lambda_2 L_{std}^{max}$ to the original loss. If the original optimization is to maximize, we apply a negative sign. We experiment on CelebA with a batch size of 1024. The table below shows MTL model results with various hyperparameters. As $L_{std}^{mean}\approx5$ and $L_{std}^{max}\approx100$, these choices are reasonable. Despite different settings, the new losses don't outperform the original one. $L_{std}^{max}$ largely degrades performance, and $L_{std}^{mean}$ has a smaller, but still negative, effect. |$(\lambda_1,\lambda_2)$|(0,0) in paper|(0.5,0.01)|(0.05,0.001)|(0.05,0)|(0,0.001)|(0.005,0.0001)|(0.005,0)|(0,0.0001)|MTL-UE| |-|-|-|-|-|-|-|-|-|-| |EM|75.66|90.45|90.57|89.70|90.4|88.05|74.73|87.14|74.38| |TAP|85.24|90.84|89.95|88.53|89.79|87.82|85.82|87.73|59.51| |SEP|84.25|90.34|90.15|89.25|90.64|89.57|85.37|88.97|58.73| **Challenges in Adopting These Loss Terms** - Batch Size Limitation: With 160k images in CelebA and a 1k batch size, variance estimates are unreliable, and increasing batch size is infeasible due to GPU limits. - Surrogate Model Misalignment: TAP and SEP use a surrogate MTL model trained on clean data, misaligning with the victim model trained on UE. EM partially addresses this by training on UE data, but weak early-stage perturbations lead to suboptimal results. - Conflict with Original Loss: Minimizing $L_{std}^{mean}$ and $L_{std}^{max}$ conflicts with the original loss. $L_{std}^{max}$ degrades UE effectiveness, and $L_{std}^{mean}$ with small $\lambda_1$ has minimal impact. - Computational Overhead: These losses increase computation by $\times28$. **Potential Obstacles for Existing Sample-Wise UE in MTL** - Model Alignment Issue: Due to GPU limits, UE generation uses one MTL model instead of multiple STL models (e.g., 40 for CelebA), causing misalignment with STL victim models. 
In MTL-UE, a surrogate MTL model is used, and images with the same label $y^k$ for the $k$-th task share the embedding $e_{y^k}^k$. Thus, MTL-UE suffers less degradation when training STL victim models than MTL models. CelebA results show MTL-UE-EM improves slightly in MTL but shows gains in STL, highlighting its alignment effectiveness.
- Lack of Distribution-Level Optimization: Perturbations mislead the victim model by mapping $x_i+\delta_i$ to labels, poisoning the data distribution.
  - Previous UE methods optimize samples individually, missing distribution-level effects.
  - EM trains on batched poisoned data, implicitly considering the distribution, and is better than TAP and SEP.
  - In MTL settings like CelebA (40 tasks), the lack of distribution-level optimization is problematic.
  - MTL-UE creates poisoned distributions with class-wise embeddings, reducing intra-class variance and effectively misleading victim models.
- Re-optimizing for different protected task sets: Prior sample-wise UE methods require re-optimizing perturbations for each combination of tasks to protect. MTL-UE, once optimized on all tasks (Sec. 5.3), allows flexible task selection to protect without re-optimizing.
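For concreteness, the variance penalty discussed in A2 could be sketched as follows. This is a hypothetical NumPy illustration, not the authors' implementation: the paper's exact definition of $L_{std}^d$ (its line 201) is not reproduced here, so the per-task intra-class standard deviation below, along with all shapes, names, and default weights, are assumptions.

```python
import numpy as np

def variance_penalty(features, labels, num_tasks, lam1=0.05, lam2=0.001):
    """Hypothetical sketch of the lam1*L_std^mean + lam2*L_std^max term:
    for each task, measure the intra-class standard deviation of
    (perturbed) surrogate-model embeddings, then penalize the mean and
    max of that quantity over tasks.

    features: (N, D_feat) array of embeddings for a batch
    labels:   (N, num_tasks) integer labels, one column per task
    """
    l_std = []
    for d in range(num_tasks):
        per_class = []
        for c in np.unique(labels[:, d]):
            group = features[labels[:, d] == c]
            if len(group) > 1:
                # std of each feature dimension within the class, summed
                per_class.append(group.std(axis=0).sum())
        l_std.append(np.mean(per_class) if per_class else 0.0)
    l_std = np.asarray(l_std)  # L_std = [L_std^1, ..., L_std^D]
    return lam1 * l_std.mean() + lam2 * l_std.max()
```

This penalty would simply be added to (or subtracted from, for maximization objectives) the original perturbation-optimization loss, as described in the rebuttal.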
Mitigating Object Hallucination in Large Vision-Language Models via Image-Grounded Guidance
Accept (spotlight poster)
Summary: This paper proposes the MARINE framework to address the object hallucination issue in Large Vision-Language Models (LVLMs). The framework introduces visual guidance from image-grounded models to effectively reduce hallucinations during inference. Experiments show that MARINE outperforms baseline methods on multiple LVLMs and balances latency and accuracy. Claims And Evidence: Most claims are supported by clear evidence. The authors conduct extensive experiments on five LVLMs using multiple metrics such as CHAIR, POPE, and GPT-4V-aided evaluation. They compare MARINE with various baselines and perform ablation studies. However, the experiments are limited to specific datasets and tasks, which can be problematic; I give my reason in the Methods And Evaluation Criteria section. Methods And Evaluation Criteria: Overall, the paper is well-written and easy to follow. The technical routing makes sense to me. My primary concern is the implementation of the vision models, such as DETR. As far as I know, both CHAIR and POPE contain samples selected from MS COCO, and using a DETR trained on MS COCO can surely improve the method's performance. In this case, the results on CHAIR and POPE can be somewhat unfair. Note that the compared method, VCD, does not introduce additional information and achieves comparable performance. Can the authors provide more explanation about this point? Theoretical Claims: The theoretical claims are right. Experimental Designs Or Analyses: See Evaluation Criteria. Supplementary Material: Yes, I have reviewed the S. M. Relation To Broader Scientific Literature: The paper's key contribution, MARINE, for mitigating object hallucination in LVLMs builds on using knowledge from vision models. MARINE offers a training-free and API-free approach. By leveraging image-grounded models, the root causes of hallucination are addressed well.
Essential References Not Discussed: I think most of the listed related works are essential for understanding this paper's contributions. While there are some other works in Large vision language models (LVLMs) that seem to apply feature steering to mitigate hallucinations, such as: [1] Reducing hallucinations in vision-language models via latent space steering, 2024. [2] Nullu: Mitigating Object Hallucinations in Large Vision-Language Models via HalluSpace Projection, 2024. These are suggested to be discussed in related works. Other Strengths And Weaknesses: Generally, the paper is well-written and easy to follow, and the results seem to be good. Using 2D features from pre-trained models to prompt downstream tasks such as 3D detection and 2D few-shot detection is not a new concept. It would be beneficial to have a more in-depth explanation of how this method differs from direct prompting. Other Comments Or Suggestions: I am willing to adjust the score if the issues are satisfactorily addressed. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your insightful feedback and acknowledgement of our extensive experiments and the overall clarity and structure of our paper. We detail our response as follows. ### Q1: Using the DETR trained on MSCOCO may be unfair. MSCOCO (train) is a widely-used open-source image-caption dataset frequently utilized for pre-training various vision encoders, including CLIP, the backbone of current LVLMs (e.g., LLaVA). While all vision encoders leverage data that includes the MSCOCO training split, the difference in how this information is processed and leveraged determines model performance. This variability is evident in MARINE’s superior performance compared to LVLMs utilizing only the CLIP encoder. Additionally, our manuscript's Table 6 illustrates instances where MARINE with only DETR underperforms compared to MARINE employing only RAM++. Nonetheless, aggregating information from multiple visual encoders (as in MARINE) consistently achieves the highest performance. Moreover, the vocabulary derived from MSCOCO effectively encompasses frequent objects common across diverse natural-image datasets, demonstrating strong generalizability to other evaluation data. **This is confirmed by the substantial improvements in POPE results on A-OKVQA shown in Table 15**, highlighting our method's capability to generalize effectively and outperform baselines beyond MSCOCO. ### Q1.1: Comparison with VCD. We acknowledge that VCD is a valuable work, but our approach addresses the hallucination problem from a different and effective perspective. VCD aims to reduce LVLMs’ over-reliance on language priors from LLM pre-training data by contrasting distorted visual inputs. In contrast, MARINE focuses on hallucinations arising from insufficient visual context. Thus, VCD and MARINE approach hallucination from complementary angles, and integrating both methods has the potential to achieve further performance improvements.
### Q2: Relevant works on feature steering for hallucination reduction. Thank you for pointing to these relevant lines of research on feature steering. We summarize them as follows and will include the discussion in our next revision. Nullu identifies a “HalluSpace” by comparing truthful and hallucinated features, then projects model weights to the null space of those hallucination-prone directions, reducing object hallucinations with no extra runtime cost. VTI (Visual and Textual Intervention) learns “shift vectors” by analyzing how vision features change under corruption and how text features differ between hallucinated and correct outputs. It then applies these shifts at inference to stabilize LVLMs and reduce hallucinations. ### Q3: More in-depth explanation of how this method differs from direct prompting. We clarify key conceptual and empirical differences between MARINE and prompting-based methods, which we believe directly address this concern: - Direct prompting relies solely on the model’s textual instruction-following capabilities, which can exacerbate hallucination issues in less capable models (e.g., LLaVA). Conversely, MARINE directly enhances visual understanding by integrating additional visual information, making it effective even for weaker models without incurring training overhead. - Prompting methods cannot introduce new visual data and thus remain constrained by the original vision model’s capabilities. MARINE introduces novel visual information through an improved vision encoder, fundamentally enhancing the model's observational accuracy. - Direct prompting and additional prompt methods require careful crafting and tuning specific to each task or dataset. MARINE, however, exhibits strong generalization capabilities across diverse models and datasets, effectively reducing hallucinations and improving reliability without additional training or manual prompt optimization. Appendix B.1.1 further elaborates on these distinctions. 
In particular, we evaluated the direct prompting baseline using a highly detailed instruction explicitly guiding the model to describe only observable visual characteristics. As shown below, MARINE consistently outperforms the prompting baseline by significantly reducing hallucinations while maintaining or improving recall. Although prompting can improve recall in some cases, it often worsens hallucination metrics. MARINE achieves better overall reliability across all models. | Method | LLaVA-$C_s\downarrow$ | LLaVA-$C_i\downarrow$ | LLaVA-$Recall\uparrow$ | LLaVA-v1.5-$C_s\downarrow$ | LLaVA-v1.5-$C_i\downarrow$ | LLaVA-v1.5-$Recall\uparrow$ | mPLUG-Owl2-$C_s\downarrow$ | mPLUG-Owl2-$C_i\downarrow$ | mPLUG-Owl2-$Recall\uparrow$ | |-----|------|------|-----|------|-----|-----|-----|-----|------| | Original | 26.6 | 10.5 | 47.4 | 8.8 | 4.6 | 41.1 | 6.2 | 3.4 | 38.8 | | Direct Prompting | 27.2| 11.0| 46.4| 19.6 | 8.3 | **52.3** | 9.0 | 5.1 | **42.0** | | **MARINE** (ours) | **17.8** | **7.2** | **50.8** | **6.2** | **3.0** | 44.3| **4.2** | **2.3** | 41.4|
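For reference, the $C_s$ and $C_i$ columns in this rebuttal's table are the standard CHAIR metrics (Rohrbach et al., 2018): the fraction of captions containing at least one hallucinated object, and the fraction of hallucinated object mentions among all object mentions. A simplified sketch, omitting the synonym-to-MSCOCO-category mapping used by the official implementation:

```python
def chair_scores(captions_objects, ground_truth_objects):
    """Simplified sketch of the CHAIR metrics.

    captions_objects:     list of sets, objects mentioned in each caption
    ground_truth_objects: list of sets, objects actually in each image
    Returns (CHAIR_s, CHAIR_i), both in [0, 1]; lower is better.
    """
    hallucinated_captions = 0
    hallucinated_mentions = 0
    total_mentions = 0
    for mentioned, truth in zip(captions_objects, ground_truth_objects):
        bad = mentioned - truth  # objects mentioned but not present
        if bad:
            hallucinated_captions += 1
        hallucinated_mentions += len(bad)
        total_mentions += len(mentioned)
    chair_s = hallucinated_captions / len(captions_objects)
    chair_i = hallucinated_mentions / max(total_mentions, 1)
    return chair_s, chair_i
```

For example, if one of two captions mentions a non-existent object, $C_s = 0.5$ while $C_i$ depends on how many total objects were mentioned.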
Summary: The paper proposes the MARINE method for mitigating object hallucination in LVLMs. The method uses results from external object detection models and adds them in the form of an extra textual prompt into the LVLM’s generation. The method is compared with several baselines on object hallucination benchmarks, as well as on VQA and image captioning tasks. Several ablation studies are also presented in the paper. Claims And Evidence: The paper makes the following claims: 1. “MARINE mitigates insufficient visual context provided by the visual encoder and misalignment between the vision and text domains”: I think this claim is too general. First, MARINE cannot be used with an arbitrary visual encoder but rather with object detection models. Second, it is unclear to me where in the experimental section the misalignment between the vision and text domains is investigated and which experiments demonstrate that MARINE mitigates it. 2. “MARINE does not require additional training resources or access to advanced LLMs.”: This is true, but it instead requires access to advanced object detection models, which should be made more clear in the paper. 3. “MARINE outperforms the baselines in hallucination mitigation while maintaining overall performance across multiple tasks (image captioning, VQA)”: I believe that this claim has been demonstrated in the experimental section, though there are cases where MARINE does not outperform the baselines (e.g., in Tables 1 and 2). Methods And Evaluation Criteria: While the chosen benchmarks make sense, it would be great to compare MARINE against more powerful VLMs that similarly incorporate more fine-grained visual information, like SILC: Improving Vision Language Pretraining with Self-Distillation, ECCV 2024 and BRAVE: Broadening the Visual Encoding of Vision-Language Models, ECCV 2024. Especially BRAVE is conceptually similar to MARINE in that it incorporates information from multiple encoders.
Theoretical Claims: I do not understand what MARINE-Truth is, nor can I find any details in Appendix A as stated in the main paper. I also do not understand the point of this part. Please explain in the main paper what it is and provide a better explanation of why it is important to look at it. Experimental Designs Or Analyses: 1. In the Appendix, you state that “For decoding methods such as VCD, OPERA and our method, we measured the latency of LLaVA generating captions directly”. I think this is unfair, as MARINE additionally requires forward passes through multiple external models, which in my opinion should be counted towards the latency calculation. 2. See also my comment above regarding the misalignment analysis (under claims). Supplementary Material: I went through the Appendix. Relation To Broader Scientific Literature: Mitigating object hallucinations is an active area of research. Essential References Not Discussed: See my comment above about SILC: Improving Vision Language Pretraining with Self-Distillation, ECCV 2024 and BRAVE: Broadening the Visual Encoding of Vision-Language Models, ECCV 2024. Other Strengths And Weaknesses: 1. The novelty of the paper is a bit limited and the claims made in the introduction are too general. In particular, I would like the authors to rephrase the paper and remove any occurrences where it is claimed that information from a general visual encoder is used, because this is not true (at least it is not shown in the paper). In addition, MARINE is only useful on images where the pretrained object detectors give meaningful outputs, which should also be stated in the paper. 2. Despite the limited novelty, I find the simplicity of the method and extensive experimental evaluation a strength. Other Comments Or Suggestions: Please label the y-axes and change the colors in Figures 3, 8, 14 and 15, as the lines are hard to distinguish. Questions For Authors: Please see my questions in the sections above. Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your insightful review and valuable feedback. We sincerely appreciate your recognition of the simplicity and thorough experimental validation of our MARINE approach. ### Q1.1 Clarify the claim regarding misalignment. In the original claim, by "visual encoder," we referred to the LVLM's visual encoder component (e.g., CLIP), highlighting that MARINE supplements its initially insufficient visual context. The misalignment issue refers to the fact that LVLMs typically employ a trainable linear alignment layer, which can potentially lose or distort important object-level information during the mapping from visual to textual representations. In contrast, MARINE directly extracts and utilizes explicit object-level details, thereby preserving critical visual information intact. We will clarify this point explicitly in the revised manuscript. ### Q1.2. Clarify the claim regarding access to advanced models. Thank you for pointing this out. We will clarify in our revision that MARINE inherently requires access to additional vision models to enrich the visual inputs. Additionally, the "advanced LLMs" that we referenced refer to closed-source models that are accessible only via paid APIs (e.g., GPT-4o), while the vision models are open-source. We will explicitly state this distinction in our revision. ### Q2. Could you explain the concept and importance of "MARINE-Truth" clearly in the main paper? We consider using the ground-truth object list as a variant of MARINE and denote it as MARINE-Truth. The performance of MARINE-Truth serves as a reference for MARINE's best achievable performance. However, the ground-truth object list may also contain noise and therefore occasionally underperforms MARINE. We will explain this in our revision. ### Q3. In your latency measurements, why did you exclude the latency of additional forward passes through external models required by MARINE? Thank you for raising this point.
To clarify, we did include the latency of additional forward passes through external vision models in our measurements. These external vision models have negligible inference overhead compared to autoregressive models (e.g., LLMs, LVLMs). To illustrate clearly the impact of including or excluding the latency introduced by these external models, we present a detailed latency comparison in the table below. Specifically, the table includes scenarios both excluding visual prompt generation latency (Offline MARINE) and including it (Online MARINE): | | Greedy | LURE | Woodpecker* | VCD | OPERA | **Offline MARINE** | **Online MARINE** | |------|-----|-----|-----|-----|------|------|------| | **Training Cost** | 0 | 10min on A100 80G | 0 | 0 | 0 | 0 | 0 | | **Inference Latency (ms/token)** | 26.3 (×1.0) | 179.9 (×6.84) | 94.5 (×3.59)* | 53.4 (×2.03) | 185.1 (×7.0) | **52.2 (×1.98)** | **52.23 (×1.985)** | *Woodpecker requires GPT API key access, and the latency may depend on the OpenAI API. As shown, processing images via additional vision models adds only a negligible overhead to the overall latency. ### Q4. Can you compare MARINE against other SOTA VLM methods, especially BRAVE? Thank you for pointing out these related works; we will include them in the discussion in our next revision. In particular, BRAVE indeed shares a very similar intuition to ours of ensembling diverse visual information sources to improve model faithfulness, confirming the motivation of our work. Here, we compare MARINE against BRAVE on the POPE benchmark. As shown below, MARINE achieves comparable performance to BRAVE, while introducing no additional trainable parameters.
| Model | Total Params | Trainable Params | Rand | Pop | Adv | POPE$_\text{avg}$ |
|--------------|--------------|------------------|-------|-------|-------|-------------------|
| LLaVA-v1.5 (7B) | 7B | 7B | 87.3 | 86.1 | 84.2 | 85.9 |
| BRAVE | 10.5B | 3B | – | – | – | 87.6 |
| **MARINE (ours)** | 7B | 0 | **87.9** | **86.5** | **86.7** | 87.0 |

*Note: We report BRAVE's POPE$_\text{avg}$ score as stated in their paper. Their model and detailed evaluation results have not been open-sourced. For fair comparison, we adopt the same test set as LLaVA-v1.5.*

### Q5: Labels in figures.

Thank you for catching these. We will update them in our next revision.
Summary: The paper presents a novel method called MARINE to reduce hallucination in large vision-language models (LVLMs). The method can be applied to LVLMs without any training. When auto-regressively generating individual tokens, logits are computed twice: once with the normal LVLM input ("unconditional"), and once with an augmented "conditional" input containing tokens from visual guidance models (DETR and RAM++). The logits of the unconditional and conditional inputs are then combined for sampling the next token (similar to classifier-free guidance in diffusion models). The paper compares the method with 5 LVLMs against 5 different baselines and shows improvements of both CHAIR and POPE scores, as well as an improvement of caption metrics. MARINE is also less compute intensive than the baseline methods.

Claims And Evidence: 1. The paper claims that the MARINE method is effective for mitigating object hallucinations in LVLMs. The paper gives good evidence that the method indeed works well compared to previous methods. Theoretically, the method is based on classifier-free guidance, which was introduced in diffusion models (Ho and Salimans, 2021) and then adapted for text sampling (Sanchez, 2023).

1a. The paper always uses greedy sampling, whereas previous works such as (Sanchez, 2023) have sampled at different temperatures. The paper simply assumes it works equally well with greedy sampling, without referring to previous literature, experimental evidence, or theoretical considerations.

1b. In Section 5.3 (Line 428) the paper states that the best guidance strength is between 0.3 and 0.7. Figure 3 shows that object hallucinations decrease with increasing guidance strength for LLaVA, but not for mPLUG-OWL2. It would be interesting to see how this affects other models; maybe mPLUG-Owl2 is an outlier.
Figure 8 in the appendix shows that increased guidance strength improves captioning metrics on captioning tasks (I'm less convinced of the value of the captioning metrics on the LLaVA-QA90 task). Then the only evidence for guidance strength 0.7 being better than guidance strength 1.0 is Table 22 in the appendix. This seems somewhat scarce evidence for this central parameter (note that with guidance strength 1.0 the method reduces to something much simpler).

1c. In Section 5.3 the paper claims that the "intersection-based method outperforms the union", based on Table 6. I find it surprising that using a single model only *increases* sentence-level hallucinations for LLaVA models. Note that Table 9 in the appendix lists different prompts for MARINE-intersec and MARINE-union. Which models to use and how to combine their output is also a central part of the method, so I think this should be explained in some more detail with better evidence.

2. The paper claims that MARINE is training-free and compares favorably with existing methods. This claim is well supported by the fact that MARINE can be applied to a variety of models without re-training (Tables 1, 2) and by the inference latency measurements (Table 5).

Methods And Evaluation Criteria: The benchmark datasets are appropriate. The method is evaluated on CHAIR and POPE (original MSCOCO, as well as A-OKVQA and GQA) to measure the mitigation of hallucinations. Additionally, the method is evaluated with a GPT-4V-aided evaluation (Yin, 2023), which also measures hallucinations, but at the same time gives information about the usefulness of the outputs. Finally, the method is also evaluated on captioning metrics, again with the goal of verifying that the output quality other than hallucinations does not deteriorate.

Theoretical Claims: There are no theoretical claims in the paper. Note above comments about greedy sampling and guidance strength.
Experimental Designs Or Analyses: The method is compared with four previous methods (LURE, Woodpecker, VCD, OPERA – all from 2023). This evaluation seems robust and fair.

Supplementary Material: I checked the supplementary material in its entirety.

Relation To Broader Scientific Literature: The work is well anchored in existing literature about object hallucinations in LVLMs and controllable generation. Section 2.1 mentions some more recent work from 2024, but all baselines (Section 5.1) are from 2023. Why is the work not compared to newer methods?

Essential References Not Discussed: None.

Other Strengths And Weaknesses: Strengths and weaknesses are discussed in above sections.

Other Comments Or Suggestions: Typos:
1. Line 102 (right column): "studies(Li"
1. Line 137: "2023a)and"
1. Line 223: "InstructBLIP (Liu et al., 2023c)"
1. Line 382 (right column): experimental evidence for guidance strength is provided in Appendix B.4 (not C)
1. Line 682: "ROUGH-L"
1. Lines 360-362: missing spaces before "("

Questions:
1. Line 262 (right column): what is the "noise intensity" for DETR?
1. Line 264 (right column): why only greedy sampling? In (Sanchez, 2023) different sampling temperatures were explored, but the method was never applied with greedy sampling.
1. Lines 716-748 (Table 9): why use different prompts in MARINE-intersec and MARINE-union?

Nits:
1. Line 289: consider mentioning that these results are on MSCOCO – this would make it clearer how Table 2 relates to Table 4
1. Line 1110: awkward placement of orphan line

Questions For Authors: See above sections.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
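[Editor's note] The classifier-free-guidance-style logit combination that the review above summarizes (unconditional and conditional logits blended before token selection, with greedy sampling) can be sketched as follows. This is a minimal illustrative sketch, not MARINE's actual implementation: the function names and the exact combination rule `uncond + γ·(cond − uncond)` are assumptions based on the standard classifier-free guidance formulation (Ho and Salimans, 2021; Sanchez, 2023).

```python
def guided_logits(cond_logits, uncond_logits, gamma):
    """Blend unconditional and conditional logits with guidance strength gamma.

    gamma = 0 recovers the unconditional model; gamma = 1 keeps only the
    conditional (visually guided) logits, which is why the method "reduces to
    something much simpler" at guidance strength 1.0.
    """
    return [u + gamma * (c - u) for c, u in zip(cond_logits, uncond_logits)]


def greedy_next_token(cond_logits, uncond_logits, gamma=0.7):
    """Greedy (temperature-0) selection over the guided logits."""
    g = guided_logits(cond_logits, uncond_logits, gamma)
    return max(range(len(g)), key=g.__getitem__)
```

With gamma between 0 and 1 the guided distribution interpolates between the two forward passes; the paper's reported sweet spot of 0.3–0.7 corresponds to a partial pull toward the visually guided pass.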
Rebuttal 1: Rebuttal: Thank you very much for your thoughtful and constructive review. We appreciate your recognition of MARINE's effectiveness and comprehensive evaluation. We provide detailed responses to your questions below:

### Q1. Effect of sampling temperatures.

In our paper, we opted for greedy sampling (temperature = 0) to ensure deterministic behavior in LLMs, eliminating randomness and thereby facilitating more reliable comparisons. This setup is also consistent with our main baseline VCD, and is widely used in existing benchmark evaluations to ensure reproducibility. Here, we conducted experiments using temperature = 0.6 and report the mean ± standard deviation in the table below. As shown, MARINE consistently improves object hallucination metrics regardless of the sampling strategy.

| Method | LLAVA-$C_s\downarrow$ | LLAVA-$C_i\downarrow$ | LLAVA-$Recall\uparrow$ | mPLUG-Owl2-$C_s\downarrow$ | mPLUG-Owl2-$C_i\downarrow$ | mPLUG-Owl2-$Recall\uparrow$ |
|----|----|----|-----|-----|-----|-----|
| Original | 26.1 ± 1.6 | 10.8 ± 0.5 | 46.0 ± 0.8 | 4.9 ± 0.6 | 2.8 ± 0.3 | 37.7 ± 0.6 |
| MARINE (ours) | 19.3 ± 0.8$_{-6.8}$ | 7.6 ± 0.1$_{-3.2}$ | 50.6 ± 0.2$_{+4.6}$ | 4.5 ± 0.6$_{-0.4}$ | 2.4 ± 0.2$_{-0.4}$ | 41.1 ± 0.4$_{+3.4}$ |

### Q2. Questions on guidance strength. Is mPLUG-OWL2 an outlier in Figure 3?

Increasing guidance strength generally improves model faithfulness across all evaluated models, indicated by a notable decrease in CHAIR. However, the optimal guidance strength varies by model. mPLUG-OWL2 serves as an example of more advanced models that begin with an inherently lower CHAIR score, suggesting it captures visual information better than LLaVA. Thus, it benefits from guidance strengths other than strictly 1.0, where the visual guidance effectively complements rather than dominates its internal visual encoder.
In contrast, earlier models like LLaVA rely much more heavily on the introduced visual guidance, as it can largely dominate their intrinsic visual grounding capabilities. However, excessively strong visual guidance can harm a model's ability to follow instructions accurately. This negative effect is illustrated in Figures 9 and 13 of our manuscript, where a guidance strength of 1 reduces the quality of model generations for tasks beyond visual grounding. For simplicity and consistency, we adopted a universal strength of 0.7 across experiments. Nonetheless, tuning this hyperparameter for each base LVLM using a validation set could yield optimized results tailored to each model. We will include this discussion in our next revision.

### Q3.1. Why does the intersection-based method outperform the union-based method?

The intersection-based method retains only visual signals consistently grounded across different vision models, while the union-based method includes all signals, even conflicting or incorrect ones. Intersection outperforming union indicates that precision is currently more critical than recall for LVLMs. In other words, intersection reduces false positives from visual guidance, whereas union increases true positives at the cost of more false positives. For example, LLaVA, one of the earliest LVLMs, is particularly prone to hallucinations with complex or noisy instructions and thus benefits greatly from intersection-based methods. Newer models, such as LLaVA-v1.5 and mPLUG-Owl2, are more robust but remain sensitive to partially incorrect inputs.

### Q3.2 Why use different prompts in MARINE-intersec and MARINE-union?

We evaluated MARINE-union using the exact same prompt template as MARINE-intersec, which underperformed.
We hypothesize this is because the original version provides more detailed information, whereas reusing the intersec-style prompt combined reliable grounding with potentially misleading false positives, thus weakening the visual guidance.

| **Model** | **LLaVA** | | **LLaVA-v1.5** | | **mPLUG-Owl2** | |
|-----|-----|----|-----|------|-----|------|
| **CHAIR** | $C_S \downarrow$ | $C_I \downarrow$ | $C_S \downarrow$ | $C_I \downarrow$ | $C_S \downarrow$ | $C_I \downarrow$ |
| MARINE-union | **30.4** | 9.7 | **5.4** | **2.7** | **4.8** | **2.7** |
| MARINE-union (new) | 32.6 | 9.7 | 7.8 | 3.9 | 6.2 | 3.5 |

*New: Same prompt template as MARINE-intersec. Note: Greedy and Intersec results are reported in Table 7 of the main paper.

### Q4. More recent works included in related work but not baselines.

The more recent works were concurrent to our project development, and thus we included them in discussion but not empirical comparison. In the following, we additionally include

### Q5. Typos

Thank you for the catch. We will ensure to correct them in our next revision.

### Q6. What is the "noise intensity" for DETR?

DETR outputs a confidence score for each object. We filter predictions using a threshold: lower thresholds allow noisier detections, while higher ones yield fewer, more precise results. This threshold defines the noise intensity.

---

Rebuttal Comment 1.1: Comment: Thank you for the additional details! Some follow-up comments:

### Q2. Questions on guidance strength. Is mPLUG-OWL2 an outlier in Figure 3?

While I agree that a higher guidance strength γ will potentially lead to worse instruction following (but improved recall and hallucination metrics), I think Figures 9 and 13 are insufficient to motivate the used value of 0.7. Ideally, some quantitative metric would be added that more clearly motivates the chosen setting.
Alternatively, better highlighting this in the discussion and adding more models to Figure 3 (to see how much of an outlier mPLUG-OWL2 really is) would go some way in this regard.

### Q3.2 Why use different prompts in MARINE-intersec and MARINE-union?

Thank you for the additional table. It begs the question how MARINE-intersec would have performed with the "old" prompt.

### Q4. More recent works included in related work but not baselines.

Your answer seems truncated: `In the following, we additionally include`

### Q6. What is the "noise intensity" for DETR?

Then maybe call this "score threshold"? I was not aware that this threshold is called "noise intensity".

---

Reply to Comment 1.1.1: Comment: Thank you for your thoughtful comments and suggestions. Please find our responses below:

### Q2. Guidance Strength – Is mPLUG-OWL2 an outlier in Figure 3?

We added experiments on LLaVA-v1.5 for Figure 3 and present the numbers below, which show a similar trend to mPLUG-OWL2: higher γ improves recall but slightly degrades CHAIRs/CHAIRi. This consistency supports γ = 0.7 as a balanced choice. We will update the figure and discussion accordingly in our revision. Thanks again for the helpful suggestion.

| Guidance Strength γ | CHAIRs | CHAIRi | Recall |
|-----------|--------|--------|--------|
| 0.0 | 0.088 | 0.0457 | 0.4114 |
| 0.1 | 0.076 | 0.0382 | 0.4187 |
| 0.2 | 0.074 | 0.0375 | 0.4260 |
| 0.3 | 0.066 | 0.0343 | 0.4333 |
| 0.4 | 0.064 | 0.0311 | 0.4419 |
| 0.5 | 0.058 | 0.0287 | 0.4501 |
| 0.6 | 0.058 | 0.0288 | 0.4647 |
| 0.7 | 0.062 | 0.0300 | 0.4430 |
| 0.8 | 0.050 | 0.0259 | 0.4706 |
| 0.9 | 0.056 | 0.0289 | 0.4779 |
| 1.0 | 0.062 | 0.0325 | 0.4834 |

### Q3.2. Why use different prompts in MARINE-intersec and MARINE-union?

Thank you for raising this point. We include MARINE-intersec with the other prompt version originally used in MARINE-union.
The updated results are as follows:

| **Model** | **LLaVA** | | **LLaVA-v1.5** | | **mPLUG-Owl2** | |
|--------------|-----------|-------|----------------|-------|----------------|-------|
| **CHAIR** | $C_S \downarrow$ | $C_I \downarrow$ | $C_S \downarrow$ | $C_I \downarrow$ | $C_S \downarrow$ | $C_I \downarrow$ |
| Greedy | 26.6 | 10.5 | 8.8 | 4.6 | 6.2 | 3.4 |
| MARINE-Intersec | 17.8 | 7.2 | 6.2 | 3.0 | 4.2 | 2.3 |
| MARINE-Intersec (*) | 24.4 | 8.3 | 7.0 | 3.5 | 6.0 | 3.0 |

*: Same prompt template as MARINE-union.

As shown, the original prompt for MARINE-intersec leads to consistently better CHAIR scores across all models. We will include this comparison in the appendix and clarify our prompt design choice in the main text.

### Q4. Related work but no baselines

Apologies for the typo in our previous response. We include below our original response: The more recent works were concurrent to our project development, and thus we included them in discussion but not empirical comparison. These works [1-2] share a similar intuition to ours of leveraging classifier-free guidance to enhance LVLMs. However, their primary objectives and evaluation benchmarks differ from ours, making direct comparisons unsuitable without specific adaptation.

[1] Prompt Highlighter: Interactive Control for Multi-Modal LLMs
[2] Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training

### Q6. "Noise intensity" in DETR

Thank you for pointing this out. We will revise the terminology in the next version to avoid confusion.
Summary: This paper proposes a framework (MARINE) that aggregates a VLM and traditional vision tools such as object detection and image-text alignment. Concretely, given an input image, MARINE uses vision tools as guidance models, achieved through a linear combination of unconditional and conditional logits over the vocabulary. This framework can adopt any VLM whose logits are accessible together with various vision tools, and it does not require fine-tuning or any processing via an API for the vision tools. This approach is primarily evaluated using two automatic metrics for hallucination detection in text (CHAIR and POPE), and it's compared against various decoding methods. On average, across 5 different base VLMs, MARINE outperforms the baselines in terms of CHAIR and POPE metrics, with the exception of CHAIR recall. Additionally, a GPT-4V-based evaluation is conducted (following the LLaVA paper), along with assessments on other vision-language tasks (VQA) and ablation studies.

Claims And Evidence: The main claim of this paper is that using vision tools as guidance effectively and efficiently mitigates hallucination in image-to-text generation. This paper provides evidence for this claim (CHAIR/POPE results and latency analysis). However, I feel the choice of automatic metrics for hallucination detection can be improved (see my comments in "Methods And Evaluation Criteria").

Methods And Evaluation Criteria:

**Methods**

The proposed approach is clearly explained in this paper, and the motivations behind it seem reasonable. The design choices appear well-justified given the primary goals: flexibility (training/API-free), simplicity, and effectiveness.

**Evaluation**

I feel CHAIR is limited for evaluating VLMs that can generate long and detailed image captions/descriptions. CHAIR uses MSCOCO captions and their object annotations, but these captions are considerably shorter than what current VLMs can generate.
POPE relies on a segmentation tool to obtain ground truth objects, and some negative sampling methods could make a single question too easy (e.g., random negatives could be too easy). Although hallucination detection metrics are beyond the scope of this paper, the scope of the metrics used should be clearly explained. Newer automatic metrics for hallucination such as ALOHa (https://aclanthology.org/2024.naacl-short.30.pdf) and Visual Fact Checker (https://arxiv.org/pdf/2404.19752) have been introduced. Including results from these newer and more robust metrics would strengthen this paper. In addition to automatic metrics, performing human evaluation to verify their robustness would also be helpful.

Theoretical Claims: N/A

Experimental Designs Or Analyses: See my comments in the "Methods And Evaluation Criteria" section.

Supplementary Material: I mainly read A.5 to understand the details of the evaluation setting.

Relation To Broader Scientific Literature: A line of research on image-text alignment might be related. It would be nice to mention some key papers as related work (e.g., TIFA, DSG, Gecko, VQAScore). Also, newer metrics for detailed image captions such as CAPTURE (https://arxiv.org/pdf/2405.19092) could be related.

Essential References Not Discussed: See the "Relation To Broader Scientific Literature" section.

Other Strengths And Weaknesses:
- Overall, this paper is well-written and easy to follow. Supplementary materials typically provide additional information to clarify ambiguous points in the main text.
- Just to clarify my position, my only concern is the choice of automatic metrics. I had hoped that this paper would explain the limitations and scope of those metrics or use more up-to-date ones.

Other Comments Or Suggestions: See my comments in "Methods And Evaluation Criteria"

Questions For Authors:
- A.5 says "Besides, we employed the synonym list from Lu et al.
(2018) to align synonymous words in the generated text with MSCOCO object categories." Is this a common practice with CHAIR? If not, how does this affect the final numbers?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your thoughtful review and encouraging feedback. Thank you for recognizing the clarity and practical design of our approach, our emphasis on mitigating hallucinations, and the strong empirical support we provided. Below, we provide detailed responses to your comments:

### Q1. Discuss the limitations and the scope of the metrics used in the paper.

Thank you for your suggestion and for pointing out recent developments in hallucination evaluation. CHAIR and POPE are widely adopted metrics for evaluating hallucination, and we believe they offer reliable assessments within their respective scopes. Nonetheless, we acknowledge their inherent limitations: CHAIR depends on a predefined list of object classes and synonyms, which may struggle to detect uncommon objects or very nuanced attributes. POPE's reliability, in turn, can be influenced by the segmentation tool used to obtain ground-truth objects. We will include a brief discussion of these limitations in the revised paper and also report results using ALOHa to complement our evaluation.

### Q2. Use up-to-date automatic metrics such as ALOHa.

Thank you for pointing out the newer evaluation metrics; we will include them in the discussion. We have incorporated ALOHa into our evaluation to better assess localizable object hallucinations. Specifically, we report both the object-level hallucination score ($ALOHa_0$) and the caption-level aggregated score ($ALOHa$). As shown in the table below, MARINE consistently reduces object hallucinations and outperforms baseline generations across all settings and metrics.
| Method | LLaVA-$ALOHa$ $\uparrow$ | LLaVA-$ALOHa_0$ $\uparrow$ | LLaVA-v1.5-$ALOHa$ $\uparrow$ | LLaVA-v1.5-$ALOHa_0$ $\uparrow$ | mPLUG-Owl2-$ALOHa$ $\uparrow$ | mPLUG-Owl2-$ALOHa_0$ $\uparrow$ |
|-------|--------|------|--------|--------|--------|--------|
| Greedy | 40.1% | 70.1% | 61.9% | 83.1% | 70.2% | 87.0% |
| MARINE | 48.7%$_{+8.6}$ | 76.1%$_{+6.0}$ | 66.7%$_{+4.8}$ | 85.6%$_{+2.5}$ | 72.9%$_{+2.7}$ | 88.2%$_{+1.2}$ |

Note: For implementation details, we use MSCOCO ground-truth captions as references and enable reference object detection for more localizable and generalizable object hallucination detection.

### Q2.1. Evaluation in addition to automatic metrics

Thanks for the suggestion. In Table 3 of our manuscript, we did include a GPT-4V evaluation, which compares the outputs of two LVLM assistants using GPT-4V as a judge. This "LLM-as-a-Judge" evaluation protocol has become widely accepted as a reliable proxy for human evaluation, particularly when large-scale human assessments are costly. Although conducting extensive human evaluations is beyond the scope of our rebuttal timeline, we fully acknowledge its importance. We consider incorporating comprehensive human evaluations as a future extension of this research.

### Q3. Is using a synonym list a common practice with CHAIR?

Yes, this is standard practice in the original CHAIR paper [1] and its official implementation. Specifically, they use a synonym list from Lu et al. (2018) [2] to map words (e.g., "player") to MSCOCO object categories (e.g., "person").

[1] Object Hallucination in Image Captioning
[2] Neural Baby Talk

### Q4. Suggested related work.

Thank you for your suggestion on this line of research on faithfulness evaluation for text-to-image generation, parallel to our focus on image-to-text generation. We gave the mentioned research a careful read and summarized it below.
TIFA focuses on evaluating text-to-image generation by automatically generating and answering questions derived from prompts, measuring faithfulness along several categories (objects, actions, attributes). DSG (Davidsonian Scene Graph) also assesses text-to-image alignment but emphasizes a structured decomposition of prompts into atomic propositions. VQAScore introduces a VQA-based metric that better captures compositional semantics (such as object relationships and attributes), providing insights for attribute- and relationship-level object hallucination evaluation. Finally, Gecko addresses text-embedding tasks: it uses LLMs to synthesize query–passage pairs (and carefully select hard negatives) in order to train a compact yet powerful universal representation model. CAPTURE introduces a structured metric for evaluating detailed image captions by extracting and aligning objects, attributes, and relations across captions. We will include this in the related work discussion in our next revision.

---

Rebuttal Comment 1.1: Comment: Thank you for your responses to my questions/concerns. Since my major concerns have been addressed (up-to-date metrics), I updated my assessment.

---

Reply to Comment 1.1.1: Comment: Thank you for your encouraging feedback on our rebuttal! We're delighted to hear that we've addressed your main concerns.
Adaptive Sensitivity Analysis for Robust Augmentation against Natural Corruptions in Image Segmentation
Accept (poster)
Summary: This work proposes a sensitivity-guided method to improve model robustness against image corruptions. The sensitivity measure enables a selection of proper model-free augmentation policies. The experiments show that the method improves robustness of models on both real and synthetic datasets, compared to SOTA augmentation methods in image segmentation tasks.

Claims And Evidence:
* LN191-192: The authors claim that the set of α values that fulfills Q are at equal intervals along the function g. However, there is no proof of why they should be equally spaced, and Figure 1 does not appear to show equal spacing.
* LN160-161: The authors claim that they seek to find a set of increasing, nontrivial augmentation intensities α1 < α2 < . . . < αL that maximize sensitivity. However, it is stated abruptly that it is important to find α1 - αL, without explaining why.
* Similarly, the point of having 'adequate spacing' is not explained well (LN162-163 right).

Methods And Evaluation Criteria:
* The metrics: standard metrics like mean average precision should be used.
* The sensitivity analysis only considers the impact of augmentation intensity (strength), without considering the types of augmentation.
* Confusing Equation 4: it is unclear where the max-min is applied. The authors claim to maximize the minimum value, but it is still uncertain what is being optimized. The objective function should be better formulated.

Theoretical Claims: N/A

Experimental Designs Or Analyses:
* Experiments seem to use only one random seed. There should be multiple random seeds used, and the results should provide the average and standard deviation.
* Missing recent SOTAs: the authors miss the augmentation operations that are applied in the frequency domain of images, like AFA, VIPAug, HybridAugment++, where the intensities of the operations also matter.
The authors can also consider adding Fourier-basis functions as one of the augmentation options when carrying out the sensitivity analysis.
* Figure 4: unclear meaning of the different lines in one subplot. The authors mention 'recency', but I cannot find a definition of recency in the paper. Also, the different lines in Lighter/Darker H have high variations. The authors only explain the changing tendency but not the high variations.
* Concern regarding fine-tuning a foundation model: the model was trained on vast amounts of data, which means it might have seen corrupted images with adverse weather effects. This makes the experiments less convincing, as: 1. the authors claim that the proposed method is robust to unseen corruptions (LN061-062), 2. the authors did not disclose what exact augmentation operations are used, and also whether they use the same set of operations in the other SOTAs for comparison is unclear.

Supplementary Material: N/A

Relation To Broader Scientific Literature: The sensitivity-based augmentation technique is an interesting topic regarding efficient data augmentation, but the insights brought by this sensitivity analysis method are unclear.

Essential References Not Discussed: More recent SOTA augmentation techniques:
* Hendrycks, et al., PixMix: Dreamlike Pictures Comprehensively Improve Safety Measures, 2022.
* Yucel, et al., HybridAugment++: Unified Frequency Spectra Perturbations for Model Robustness, 2023.
* Wang, et al., Fourier-basis Functions to Bridge Augmentation Gap: Rethinking Frequency Augmentation in Image Classification, 2024.
* Lee, et al., Domain Generalization with Vital Phase Augmentation, 2024.

Other Strengths And Weaknesses: The contribution points are not strong, especially the third point, which mostly describes what has been done. There should be more emphasis on the benefits and insights brought by the proposed method.
Other Comments Or Suggestions:
* LN145-146: strange line spacing
* Table 5: the subscripts for Ours∼g and Ours∼p seem to be reversed
* LN1128: 'Figure ??'
* Table 1: too far from where it is referred to

Questions For Authors:
* The results of AutoAugment on ACDC (Fig. 3) are surprisingly low. Is the augmentation policy the default one, or computed on ACDC and IDD separately?
* Is it a model-free augmentation policy or a model-agnostic method? Since the sensitivity analysis is highly related to a model, it is hard to call it model-free.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your review! We clarify some misunderstandings below.

- "Figure 1 does not support equal spacing along function g, and there is no proof." By equal spacing, we meant that given the set of α values that fulfills Q, the set of g(α_i) are at equal intervals along the y-axis of the function g. We have included a proof for equal spacing here: https://drive.google.com/file/d/1oOqCZXiSmLV4k4T5YyJqZ2NDmS91U2RE/view?usp=sharing We will be sure to include this proof and clarification in the final revision.

- "Authors explain that α1 < α2 < . . . < αL, without explaining why." Thanks for pointing this out. The intuitive explanation is that solving for "adequate spacing" amounts to solving for "uniformly difficult augmentation levels" with respect to model sensitivity. For example, it could be that a model is robust to a wide range of intensity values for a particular perturbation, but robustness quickly degrades past a certain value.

- "Standard metrics like mAP should be used." While mAP is standard for object recognition and instance segmentation, it isn't commonly used for semantic segmentation, which classifies per-pixel.

- "Sensitivity analysis only considers augmentation intensity, without the type." Our work considers only intra-augmentation sensitivity; currently, all augmentation types have an equal chance to be sampled at train time, whereas models may be more sensitive to one type of augmentation than another (e.g., geometric types and photometric types). Implementation-wise, the change is simple: the augmentation distribution sampler can be modified to account for the absolute sensitivity of all augmentations together. Note: we indeed considered such weighting in practice, but this distribution may skew too heavily towards certain types of augmentations, resulting in unintended overfitting.

- "Experiments only use one fixed random seed. There should be multiple random seeds used."
We agree that running multiple iterations of each experiment under different random seeds is important to validate that improvements in performance are not attributable to randomness. While we conduct all experiments under one fixed seed in this work, previous work (AdvSteer), which our work compares against, has validated the consistency of sensitivity analysis results across multiple random seeds. Additionally, we reduce the role of randomness in experiments by initializing ALL models with the same initialization weights.

- "Authors miss the augmentation operations that are applied in the frequency domain of images." The mentioned works on frequency-space augmentation primarily deal with classification tasks, while our work focuses on segmentation. Direct translation of these works to segmentation is nontrivial, as segmentation is likely much more dependent on high-frequency details. This is apparent in that some works explore frequency-based domain adaptation specifically for this task (https://openreview.net/pdf?id=b7hmPlOqr8). However, this idea is very interesting and we believe it valuable to explore in future work! Currently, the set of augmentation operations we use is consistent with the other baselines we benchmark against. We choose to augment in the image space for consistency with previous work, as well as for interpretability/explainability and intuition w.r.t. sensitivity curves. Our framework can be directly applied in the frequency space, although the results in frequency space would no longer be directly comparable to the methods used in our current experiments.

- "The fine-tuning experiment results may be questionable since the baseline model may have already seen corrupted examples drawn from the same distribution as the evaluation set."
While the foundation models we initialize weights from are trained on much larger datasets, which may have included adverse weather samples at some point, we still observe improvements over baseline fine-tuning when applying augmentation at fine-tuning time. Considering that downstream fine-tuning does NOT involve adverse weather samples, we find the improvement in generalization to natural corruptions valuable, since they were not involved in the fine-tuning process whatsoever. - "Did not disclose exact augmentation operations used, and whether they use the same set of operations in other methods for comparison." In Section 4 (Experiments), the "Experiment Setup" excerpt mentions that all methods use the same set of augmentation operations, with the exception of IDBH, which includes two additional augmentations. As explained in the main text, hyperparameter tuning details can be found in Appendix Section A5, which describes all the augmentation operations. - "Figure 4: unclear meaning of different lines in one subplot" We evaluate color channel sensitivity several times during training. This plot is meant to show how model sensitivity changes across color channels over the course of training. --- Rebuttal Comment 1.1: Comment: Thank you for the response, which addressed most of my concerns. Hence, I decided to raise the score to weak accept. --- Reply to Comment 1.1.1: Comment: Thank you very much for re-visiting our submission; we are glad the response clarified most concerns! We greatly appreciate your review. Again, improvements per your feedback will be included in future revisions.
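As an illustration of the "equal spacing along g" construction discussed in the rebuttal thread above, here is a minimal numerical sketch: given a monotone degradation function, it solves for intensity levels whose g-values are at equal intervals along the y-axis. The specific function `g` and the helper `equally_spaced_levels` below are hypothetical, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import bisect


def g(alpha):
    # Hypothetical monotone degradation curve on [0, 1];
    # stands in for the paper's function g.
    return alpha ** 2


def equally_spaced_levels(g, L, alpha_max=1.0):
    """Return alpha_1 < ... < alpha_L with g(alpha_i) at equal intervals."""
    # Equally spaced targets on the y-axis of g, excluding the endpoints.
    targets = np.linspace(g(0.0), g(alpha_max), L + 2)[1:-1]
    # Invert g numerically for each target via bisection.
    return [bisect(lambda a, t=t: g(a) - t, 0.0, alpha_max) for t in targets]


levels = equally_spaced_levels(g, L=4)
# The gaps g(alpha_{i+1}) - g(alpha_i) come out (approximately) equal.
```

This is only a sketch of the spacing idea; in the paper, the g-values come from measured degradation rather than a closed-form curve.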
Summary: The paper addresses a practical challenge of enhancing model robustness to natural corruptions in semantic segmentation, a critical area for real-time perception applications. It proposes a novel, computationally efficient online adaptive sensitivity analysis approach (10x faster and 200x less storage than existing sensitivity analysis methods), facilitating practical deployment during training (lines 110-118). Evaluation includes real-world adverse conditions (Fog, Rain, Night, Snow) on the ACDC dataset (lines 275-288) and synthetic benchmarks such as ImageNet-C and AdvSteer across six datasets. Claims And Evidence: 1) The authors claim significant efficiency improvements such as "10x faster computation and 200x less storage" compared to existing methods (abstract lines 010-014). However, these claims lack explicit evidence in the form of tables or figures within the main text. While Table 1 briefly touches upon this (comparing against only one method), crucial details like memory benchmarks or clear runtime comparisons during inference are missing, making these claims difficult to verify against, for example, AutoAug (Cubuk et al., 2019) or Basis Perturbations (Shen et al., 2021). 2) The paper claims to achieve state-of-the-art performance; however, the reported results (Table 2, lines 275-288, and Table 3, lines 330-363) suggest otherwise. For example, according to Table 2, AugMix and IDBH demonstrate better performance under rainy or snowy weather conditions, highlighting the dependency on adaptive sensitivity. Specifically, the authors' augmentation approach improves by less than 1.6% over existing SOTA methods for rainy conditions. Even for foggy conditions, their improvement is only marginal (0.6%) over IDBH. Additionally, there is no comparison provided regarding computational savings in terms of time, speed, or memory cost. 3) In Table 3, which compares results across a broader set of six datasets, this limitation becomes more apparent.
The authors' method achieves top performance only in basic augmentation scenarios, while in the Clean, AdvSteer, and IN-C scenarios, TrivialAug and IDBH outperform the authors' method on 2/6, 4/6, and 5/6 datasets, respectively. Furthermore, in unseen scenarios, IDBH consistently leads the performance metrics. This raises concerns about the actual benefits of the proposed method. The authors should therefore clearly articulate the unique advantages or novel contributions of their augmentation approach. Methods And Evaluation Criteria: The evaluation procedure lacks clarity and comprehensiveness, particularly concerning memory and runtime costs, as mentioned earlier. Furthermore, the experimental analyses provided (Figure 3 and Tables 2-4) are insufficient to justify claims such as "10x faster" and "200x less storage". Theoretical Claims: 1) The presentation and explanation of critical algorithmic details (Algorithm 1, lines 110-164) lack clarity. For instance, abbreviations such as "pf" and "PDF" appear without clear definitions or context within the main text. Terms like "BetaBinom," "Levels append," and "Metrics append" (lines 206, 179) are introduced without proper explanation or motivation. 2) Additionally, equations (5) and (6) (lines 206-219) require clearer descriptions and justification to enhance reader understanding and reproducibility. Experimental Designs Or Analyses: 1) Please consider providing explicit computational and memory storage comparisons, including clear tables and figures, supporting the claims of "10x faster" and "200x less storage", not only relative to AdvSteer. 2) Given the moderate to negligible improvement over existing methods like IDBH on certain benchmarks, could the authors clarify what distinct advantages their augmentation approach offers, particularly in practical deployment scenarios?
3) It is highly suggested to provide clearer explanations or intuitive descriptions for the choices of parameters and methods used in Algorithm 1, particularly "BetaBinom," "Levels append," and "Metrics append" (lines 179, 206). Why Gmax = 2? Supplementary Material: Yes, we have gone through the supplementary part. Relation To Broader Scientific Literature: The authors compared against the Adverse Conditions Dataset with Correspondences (ACDC) (Sakaridis et al., 2021) with four weather scenarios: Fog, Rain, Night, Snow (Table 2); ADE20K (Zhou et al., 2019); the ImageNet-C (IN-C) synthetic corruption benchmark (Hendrycks & Dietterich, 2019); and the AdvSteer synthetic augmentation benchmark (Shen et al., 2021). Essential References Not Discussed: Several essential references in data augmentation and explainable AI are missing. Notably, key datasets and methods such as CVPR2020 Cityscapes, ICLR2022 Image-9, CVPR2024 XimageNet-12, BDD100K from Berkeley, and the Rain100L / Rain100H / Rain800 real rainy images for deraining and robustness studies provide important contextual background (including on color channels), are highly relevant, and have not been cited or discussed. Other Strengths And Weaknesses: Please see the review feedback mentioned above. Other Comments Or Suggestions: Please see the review feedback mentioned above. Questions For Authors: Please see the review feedback mentioned above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you! We appreciate the suggestions and will improve clarity in the revision, adding more references and notation. We address the points below: - "Efficiency improvement claims are missing benchmarks like inference runtime benchmarks and memory usage." We would like to clarify that our efficiency improvements are strictly compared to previous works in sensitivity analysis like AdvSteer, as presented in Table 1. Previously, sensitivity analysis (SA) was NOT computationally feasible to compute during training; it required a huge amount of local storage and long computation times dependent on dataset sizes. Our work makes SA feasible for online training, and opens many possibilities for future work along this direction (see response above to mfiq). Claiming efficiency improvements in inference time or memory usage would not be a fair comparison against other augmentation techniques. Thus, our efficiency claims are benchmarked purely for sensitivity analysis. In terms of training complexity, our experiments re-compute SA only 4x during training, amounting to merely 38.4 additional GPU minutes per experiment. Dataset sizes do not influence the sensitivity analysis runtime due to the use of KID, whose data efficiency we verify against Frechet Inception Distance (FID) in Appendix Section A.15. Otherwise, the total train time and memory of our adaptive SA method are comparable to randomized augmentation approaches. - "The results suggest that this approach is not SOTA, as shown in some results of Tables 2 and 3." We respectfully disagree, and would like to clarify some interpretations of the results. In Table 2, our method occasionally performs worse on the absolute accuracy (aAcc) metric compared to other methods in the rain and snow scenarios.
As we mention in Section 4.1, which discusses the table results, higher numbers on aAcc but lower numbers on mIoU may be indicative of poor generalization to class imbalances, as aAcc measures the total # of correct pixels. In much of the Cityscapes data, there are disproportionately many pixels classified as "sky". Our method achieves a higher mIoU than the other method (47.53 -> 49.36 compared to AugMix for Rain, and 45.35 -> 48.16 compared to IDBH for Snow). While more PIXELS are correctly classified, lower values on mIoU may suggest that underrepresented classes are poorly classified. We include BOTH metrics in the table results to provide an interpretable metric (absolute pixel acc) as well as a class-balanced metric (mIoU) for a more complete comparison. As for Table 3, while our method does not perform best on the AdvSteer benchmark compared to the next-best method, we note that it outperforms other methods in most other scenarios across multiple datasets. The AdvSteer benchmark involves heavily altered synthetic data, as shown in Appendix Section A.10; these alterations are not meant to reflect real-world corruptions, but should rather be interpreted as a synthetic limit test. In contrast, the ImageNet-C benchmark reflects transformations meant to replicate real-world effects such as frost, snow, etc. We interpret the boost in performance of our results as applying primarily to realistic corruptions, which aligns with our goals. Additionally, we include the results for AdvSteer for transparency. - "Could the authors clarify what distinct advantages their augmentation approach offers, particularly in practical deployment scenarios?" Our method estimates model sensitivity to various augmentation transformations and samples augmentations with uniform difficulty based on this sensitivity.
Unlike randomized approaches (e.g., IDBH, TrivialAug, RandomAug, AugMix) that ignore model state, or model-based methods (e.g., AutoAugment) requiring pre-trained policies, our approach is a middle ground that uses a generic image classifier trained on ImageNet for KID computations. Although our method incurs additional computational cost compared to randomized approaches (adding ~38.4 GPU minutes to training: 4 sensitivity evaluations at ~9.6 minutes each on an RTX A4000 GPU), it scales efficiently across datasets since KID evaluation requires only a fixed number of samples. Sensitivity analysis currently runs on a single GPU; further optimization and acceleration are possible. Also, our method affects only training time and has no impact on inference speed. Sensitivity analysis is also useful for interpretability of robustness, as shown in Figure 4. In summary, our method 1) provides a middle ground between model-based and randomized augmentation, 2) adds a negligible amount of fixed overhead that is agnostic to dataset size due to the use of KID, and 3) provides interpretability and explainability of failure cases for practical deployment. To show an inference comparison with competing methods, we include timing results on experiments with Segment Anything: https://docs.google.com/spreadsheets/d/1KF0lY8iyv7Uo8K53EJmvfZqVqGVfKbuItcu-6iXXjNM/edit?usp=sharing
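The sensitivity estimate this thread keeps referring to (the ratio of the change in model accuracy to the change in KID across intensity levels) can be illustrated with a minimal, self-contained sketch. All accuracy and KID values below are made up for illustration; `sensitivity_curve` is a hypothetical helper, not the paper's code.

```python
import numpy as np


def sensitivity_curve(accuracies, kids):
    """Finite-difference sensitivity: -d(accuracy) / d(KID) per level step."""
    d_acc = np.diff(accuracies)
    d_kid = np.diff(kids)
    # Guard against zero-sized KID steps; accuracy drops give positive values.
    return -d_acc / np.maximum(d_kid, 1e-8)


# Hypothetical measurements at increasing augmentation intensities.
acc = np.array([0.80, 0.78, 0.70, 0.55, 0.50])
kid = np.array([0.00, 0.01, 0.03, 0.06, 0.10])
sens = sensitivity_curve(acc, kid)
# High entries flag intensity ranges where accuracy degrades fastest
# per unit of measured image degradation.
```

The peak of such a curve is what an adaptive sampler would concentrate on: the intensity range where a small amount of extra degradation costs the most accuracy.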
Summary: This paper introduces an adaptive, sensitivity-guided augmentation method to improve the robustness of image segmentation models against natural corruptions. The idea is to perform a lightweight, online sensitivity analysis during training to identify the most impactful perturbations. This approach aims to bridge the gap between the efficiency of random augmentation techniques and the effectiveness of policy-based augmentations guided by sensitivity analysis. The authors claim their sensitivity analysis runs significantly faster and requires less storage than previous methods, enabling practical online estimation during training. Claims And Evidence: - The paper claims a 10x speedup, but to be precise it is more like 9.3x according to Table 1, which provides a runtime and storage comparison with AdvSteer (Shen et al., 2021). - The paper mentions in the introduction that the proposed approach is general and can be applied to other tasks, architectures, or domains, which is mainly shown in the appendix: - Different domains: medical domains in Appendix A.6 - Different architectures: Appendix A.11 - Different tasks: classification, Appendix A.12 Methods And Evaluation Criteria: - The authors propose an adaptive sensitivity analysis method that iteratively approximates model sensitivity curves. They use Kernel Inception Distance (KID) to measure image degradation and define sensitivity as the ratio of change in model accuracy to change in KID. They optimize an objective function (Equation 4) to find optimal augmentation intensities. The method includes a training loop (Algorithm 1) that incorporates the sensitivity analysis. - The paper uses absolute pixel accuracy (aAcc), mean pixel accuracy (mAcc), and mean Intersection-over-Union (mIoU) to evaluate segmentation performance. They evaluate on real-world corrupted datasets (ACDC, IDD) and synthetic benchmarks (ImageNet-C, AdvSteer).
- The use of KID for measuring image degradation is well-motivated, and the evaluation metrics are standard for segmentation tasks. The choice of datasets covers both real-world and synthetic corruptions. Theoretical Claims: - Not applicable as this is not a theory paper. Experimental Designs Or Analyses: - The paper contains several experiments to evaluate their method such as evaluation on real-world corruptions (ACDC), synthetic datasets (ADE20K, VOC2012, etc), ablation studies to analyze the contribution of different components of the proposed method. - The analysis of color channel sensitivity is an interesting addition that showcases the potential of sensitivity analysis for interpretability. Supplementary Material: Not in details. Relation To Broader Scientific Literature: - Addressing the problem of robustness against natural corruptions, which is a well-explored area in image classification, for semantic segmentation is an important real-world consideration such as self-driving cars etc. - Contrasting their adaptive sensitivity analysis with previous methods like AdvSteer (Shen et al., 2021), emphasizing the improvements in efficiency and practicality. - Building upon existing data augmentation techniques (e.g. AutoAugment, DeepAug) and highlighting their limitations. Essential References Not Discussed: The related work section seems well-written. Other Strengths And Weaknesses: - The proposed adaptive sensitivity-guided augmentation method is novel, and the 10x speedup boost compared to the previous sensitivity analysis is a strong point. - Improving the robustness of segmentation models is crucial for real-world applications, and the paper demonstrates significant improvements on challenging datasets. - The paper is well-written and easy to follow. Other Comments Or Suggestions: p8. 
sec5: "Our model can complements" -> "Our model can complement" Questions For Authors: - What's your take on the fact that uniform augmentation of computed sensitivity analysis values (alpha) is almost as good as beta-binomial sampling? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your comments; we are grateful to hear that you find our work impactful for real-world robotics applications and our analyses interesting for model sensitivity! We will be sure to add the writing fixes regarding Table 1 in the paper revision and increase the visibility of results related to our claims, many of which were in the appendix. Regarding the question, "why is uniform augmentation of computed sensitivity analysis values (alpha) almost as good as beta-binomial sampling?", we believe this may be because the optimal sampling in the basis augmentation spaces is already close to uniform. In future revisions, we will include an experiment with such a scenario to emphasize the advantage of not needing corruption gradients.
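To make the uniform vs. Beta-Binomial comparison in this exchange concrete, here is a small sketch of the two ways of sampling from a set of solved intensity levels. The levels and the Beta-Binomial shape parameters (a, b) below are hypothetical, chosen only to show the qualitative difference.

```python
import numpy as np
from scipy.stats import betabinom

rng = np.random.default_rng(0)
L = 5                               # number of solved intensity levels
levels = np.linspace(0.2, 1.0, L)   # hypothetical solved alpha values

# Uniform sampling: every level index is equally likely.
uniform_idx = rng.integers(0, L, size=10_000)

# Beta-Binomial sampling over indices 0..L-1; a = b = 2 concentrates
# probability mass on the middle intensity levels.
bb_idx = betabinom.rvs(L - 1, 2.0, 2.0, size=10_000, random_state=0)

uniform_draws = levels[uniform_idx]
bb_draws = levels[bb_idx]
```

With symmetric shape parameters the Beta-Binomial mean sits at the middle index, so when the "right" distribution over levels happens to be close to flat, the two schemes draw very similar training batches, which is consistent with the rebuttal's explanation.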
Summary: This paper proposes an adaptive, on-the-fly sensitivity analysis approach to design data augmentation for increasing the robustness of semantic segmentation models under naturally occurring corruptions. The proposed approach attempts to bridge the gap between choosing random augmentations, as in TrivialAug, and learning policies through RL. They do this by solving an optimization problem on the fly. The problem is essentially posed as finding the right set of intensities/parameters for the chosen augmentations. The objective is set through the lens of sensitivity analysis, i.e., the change in model accuracy with respect to the change in intensities, and this guides the augmentation. The change in intensities is captured through a Kernel Inception Distance, which measures the difference in Inception-net features between the dataset under one augmentation and the dataset under another. By adaptively sampling from the intensity levels to which the model is most sensitive, the authors reduce overhead compared to prior sensitivity-analysis-based methods. They present extensive experiments on real-world driving datasets, as well as generic and domain-specific segmentation benchmarks, demonstrating notable improvements over several augmentation baselines. They also highlight the applicability of their approach to foundation-model fine-tuning (e.g., DinoV2). Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Not Applicable Experimental Designs Or Analyses: Yes Supplementary Material: Yes; All of it Relation To Broader Scientific Literature: It relates well to the current literature on designing augmentation strategies for complex computer vision tasks that extend beyond classification Essential References Not Discussed: NA Other Strengths And Weaknesses: The idea of focusing on where the model is most vulnerable (high-sensitivity intensities) is interesting and grounded.
Adapting this process for on-the-fly training is a good contribution. Furthermore, the paper's evaluation is comprehensive, and the results are compelling both in terms of accuracy and efficiency. Weaknesses and Questions: The approach adaptively selects corruptions, somewhat similar to meta-learning's inner/outer loops (adaptation on a "task" and validation on a target). It would be helpful to discuss whether meta-learning techniques (e.g., using a gradient-based measure of how each corruption influences final performance) could offer a more direct optimization. Furthermore, the authors assume, quite reasonably, that they have an understanding of how the test environment behaves - a small discussion here could help in the rebuttal phase. Next, while the authors examine a broad set of "basis" transformations (RGB, HSV, blur, etc.), there may be domain-specific or more complex corruptions that aren't captured. The paper could clarify how one might extend the method to less parameter-friendly corruptions. Also, while they consider generic augmentations, a lot of domain-specific augmentations have been developed - for example, this paper (https://ieeexplore.ieee.org/document/10350672) designs a context-aware augmentation protocol for object detection; it is not clear how this approach will scale to those kinds of augmentations. Also, it would be interesting to compare with parameter-efficient or partial fine-tuning approaches (e.g., LoRA) that might mitigate overfitting to specific corruptions. The paper mentions partial or efficient adaptation in passing, but an actual baseline or experiment (especially for smaller datasets) would be better. Finally, given that deep networks can exhibit "grokking" or double descent (e.g., https://arxiv.org/abs/2201.02177), the authors' reliance on short adaptation intervals for measuring sensitivity could be problematic. Could there be cases where the model's sensitivity at a short training horizon is misleading at full convergence?
I would like to hear the authors' thoughts on this. Of course, it was surprising that there was no discussion of Segment Anything, which is now almost the de facto segmentation model. Can the authors comment on it too? Other Comments Or Suggestions: $\alpha_{max}$ is not defined. Is it $\alpha_{l}$? Questions For Authors: Please see strengths and weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you! We appreciate your feedback, and address the points below: - "Could meta-learning offer a more direct optimization?" Yes. NOTE: the difference is mostly w.r.t. the choice between data augmentation vs. meta-learning as the training approach, rather than an alternative for the sensitivity analysis (SA). In MAML, we can interpret each parameterized augmentation type as a synthetic task. Then, during bilevel optimization, inner-loop updates would optimize performance on specific augmentations (support) while the outer loop maintains overall performance on all augmentations (query). Our SA modeling can still contribute additional information in this case when choosing the task data subsets; the way task data is selected or parameterized is largely open-ended in meta-learning. [1] strengthens model robustness by adding a learned adversarial noise to query (outer-loop) data. The difference between this approach and our proposed technique is the paradigm in which sensitivity analysis is applied to add robustness to models (data augmentation sampling vs. noise-sampling query attacks in meta-learning). We may also add adversarial noise in our framework. - "How to extend to less parameter-friendly corruptions?" One of our motivations is to utilize parameterized corruptions to improve generalization to natural corruptions that may not be easily parameterized. Prior art [1] has shown that many natural corruptions can be replicated with a composition of "basis augmentations", by which this work is inspired and which it generalizes. In cases where we don't have access to a parameter 'alpha', the problem becomes similar to AdvSteer [1], which samples sensitive augmentations from a fixed set of augmentation values (instead of a continuous range). However, this approach has increased computational complexity, since we need to test all values from a selected subset of augmentations. - "How does it scale to domain-specific augmentations?"
In InterAug's case, the primary contribution appears to be a re-contextualization around the subjects in the image, s.t. spurious co-occurrences between subjects and background do not occur throughout training. Ours is a direct complement to the context-area extraction. Instead of considering entire images in our SA computation, we can consider the context bounding box only. The two concepts applied together may produce context-specific sensitivities. As for other domains, the adaptation may be case-by-case. For example, lesion augmentation in medical applications involves synthetically increasing the diversity of lesion shapes, locations, intensities, and load distributions [2] - all of these can be considered as augmentation types within our SA framework. - "How might parameter-efficient approaches like LoRA mitigate overfitting to specific corruptions?" Training to increase robustness is often accompanied by degradation in clean accuracy, which may suggest either conflicting gradients or overfitting to corruptions. Using LoRA layers to mitigate this has been shown to work in AutoLoRA [3]. Since LoRA layers work very well when trained on small datasets, we may use our sensitivity analysis approach to select "uniformly difficult" augmentations for each class to generate the task dataset for LoRA training. Then, a routing approach similar to Polytropon [4] or MHR [5] can be used for inference on unseen data. This is an interesting extension of our work and a valuable future direction. - "A benchmark for efficient adaptation would be nice, especially with a small dataset." We show results on fine-tuning for the ACDC Snow dataset at the bottom of the following spreadsheet: https://docs.google.com/spreadsheets/d/1KF0lY8iyv7Uo8K53EJmvfZqVqGVfKbuItcu-6iXXjNM/edit?usp=sharing and plan to include more small-dataset fine-tuning experiments in future revisions. - "Relying on short adaptation intervals might be problematic given grokking is common."
We observe in practice that increasing the number of training iterations (thus increasing the number of iterations per interval, since the number of intervals is fixed) has very little effect on the performance outcome. This may suggest that the current interval values are sufficient for generalization of the sensitivity curves within intervals. We can include an analysis of this in future revisions. - "How does this work perform relative to Segment Anything?" We show downstream fine-tuning results and inference time/memory benchmarks on SegmentAnything in the same spreadsheet as above. We'll also include these in the updated revision. - "$\alpha_{max}$ is not defined. Is it $\alpha_{l}$?" $a_{max} \neq a_L$: $a_L$ is a level we solve for, but $a_{max}$ is the max intensity of the parameter range, which for our operations is 1. [1] https://proceedings.neurips.cc/paper/2020/file/cfee398643cbc3dc5eefc89334cacdc1-Paper.pdf [2] https://arxiv.org/abs/2308.09026 [3] https://openreview.net/forum?id=09xFexjhqE [4] https://arxiv.org/abs/2202.13914 [5] https://arxiv.org/abs/2211.03831
No-Regret is not enough! Bandits with General Constraints through Adaptive Regret Minimization
Accept (poster)
Summary: The authors study the BwK setting where a learner is tasked with repeatedly performing actions to gain high cumulative reward while also satisfying multiple general long-term constraints. Specifically, they consider a best-of-both-worlds objective in which a given algorithm has to perform optimally whether the environment is stochastic or adversarial, and show that if a primal-dual scheme is applied with weakly adaptive regret minimization algorithms, such best-of-both-worlds guarantees are achievable without prior knowledge of the Slater parameter $\rho$ characterizing the problem instance, which is the main contribution of this work over previous works. They establish the fact that OGD with a specific choice of learning rate is indeed such a weakly adaptive regret minimizer, which gives an explicit algorithm for the problem. The authors also provide explicit scenarios where their results can be applied, specifically contextual bandits with constraints. ## update after rebuttal: After reading the other reviews and the authors' comments, my assessment of the paper remains as is. Claims And Evidence: The claims made in the submission are supported by rigorous proofs provided in the supplementary material. Methods And Evaluation Criteria: N/A Theoretical Claims: I did not check the correctness of the proofs presented in the submission; however, given my relative familiarity with the research topic, the claims seem sound and I didn't find any soundness issues. Experimental Designs Or Analyses: N/A Supplementary Material: I reviewed the proof of Theorem 4.1 in the first part of the supplementary material, mostly because of its somewhat confusing statement in the main paper. Relation To Broader Scientific Literature: The authors provide adequate references to relevant previous works in the context of BwK, with a particular focus on recent works by Castiglioni et al.
which are the most relevant to this work in studying best-of-both-worlds objectives for BwK. The authors make clear how, in those previous works, either the Slater parameter $\rho$ has to be known or the constraints have a specific structure, thus emphasizing the generality of their contribution. The authors also sufficiently cite papers relevant to the LagrangeBwK primal-dual framework, which they heavily use in this work. Essential References Not Discussed: I am not familiar with essential references that were not discussed in this work. Other Strengths And Weaknesses: Strengths: * The authors present what seems to be the first optimal best-of-both-worlds guarantees for BwK with general constraints, with an algorithm that doesn't require knowledge of the problem's Slater parameter. * The contributions are presented clearly. * The observation that plain regret minimizers for the primal and dual algorithms do not suffice for the adversarial setting seems interesting. * The upper bounds apply generally in the sense that they are black-box upper bounds which only require the input primal and dual algorithms to be weakly adaptive regret minimizers, with the primal algorithm being scale-free, thus allowing for a wide range of algorithms. Weaknesses: * The paper contains numerous typos and confusing wording in some places. * I would have appreciated an analysis sketch in the main text, to give some intuition and also highlight any technical novelties in the analysis. Other Comments Or Suggestions: As mentioned earlier, the paper contains quite a few typos, some of which are listed in the following: * In Algorithm 1 and Algorithm 2, I believe $\mathbf{c}_t$ should be replaced with $\mathbf{g}_t$. * In the statement of Theorem 4.1, I believe the last part should have $<$ instead of $>$, as the current statement does not make sense for a lower bound. * Lines 224-225 - "... it is not required adaptive regret minimization" - wording is confusing here.
Questions For Authors: My main question to the authors concerns the technical novelties of this work when compared to previous works. Specifically: * Is the observation that adaptive regret minimizers are necessary (rather than regret minimizers) a novel observation (referring to the construction in Example 5.2)? * Is the fact that the benchmark in the adversarial setting isn't required to satisfy the constraints novel for this work? Or is such a benchmark used in previous works of adversarial BwK? * I would appreciate it if the authors could briefly summarize the technical challenges and novelties of their analysis. Is it mostly the self-bounding lemma? If so, what are the challenges in proving it? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback about our paper. * The weak adaptivity property was used in a simplified setting in the very recent paper by Castiglioni et al. (2024). There, the authors only study the case in which they have a single budget constraint and a single consumption constraint (ROI). However, in our paper, we have to deal with an arbitrary number of constraints. In particular, proving Lemma 6.1 in this setting is far more challenging. Indeed, the multiple Lagrangian multipliers "move" independently but affect the primal utility "jointly". This gives rise to difficult technical challenges, which we highlight in more detail in the answer to your third question. * Some prior works, such as Balseiro et al. (2020), could, in principle, accommodate stronger benchmarks through a refinement of their analysis. However, this was not explicitly pointed out by the authors, likely due to their adherence to the conventions of the online allocation literature and its standard benchmarks. Some very recent works also highlighted this stronger benchmark (see, e.g., Bernasconi, Martino, et al. (2024)). * The proof relies on the key observation that the Lagrangian multipliers jointly affect the primal utility but evolve independently. Thus, we need to reason about the joint behavior of the Lagrangian multipliers and look at the L1 norm. In particular, our proof proceeds by contradiction: if the Lagrangian multipliers exceed a certain threshold, then they must have remained "large" for an extended period of time (Equation 3 in the appendix). We can then exploit the scale-freeness of the primal regret minimizer and its regret property with respect to the action that satisfies the constraints (Equation 4) to claim that the primal utility is large in such an interval (Equation 6). However, this is in contradiction with the fact that the dual utility is also large (because of the growth of the Lagrangian multipliers, see Claim B.4).
The proof of Claim B.4 is particularly involved, as it needs to analyze the separate behavior of the Lagrangian multipliers relative to the different constraints. Indeed, it is possible for the $\ell_1$ norm of the multipliers to increase without all individual components growing. This makes it nontrivial to conclude that the dual utility must have increased as well. **References:** - Castiglioni, Matteo, et al. "Online learning under budget and ROI constraints via weak adaptivity." ICML 2024 - Balseiro, Santiago, Haihao Lu, and Vahab Mirrokni. "Dual mirror descent for online allocation problems." ICML 2020 - Bernasconi, Martino, et al. "Beyond Primal-Dual Methods in Bandits with Stochastic and Adversarial Constraints." NeurIPS 2024
Summary: The paper addresses the problem of bandits with general constraints, extending beyond the traditional bandits with knapsacks (BwK) framework. The authors generalize the setting where the learner does not know the Slater's parameter $\rho$ and give an algorithm following the primal-dual framework. Previous works (Balseiro et al., 2022; Castiglioni et al., 2022a) need to know the exact value of $\rho$ to find the boundedness of dual multipliers. In this paper, one key contribution is the "self-bounding" lemma for bounding dual variables if both the primal and dual algorithms are weakly adaptive. The authors provide best-of-both-worlds guarantees (sublinear regret and constraint violations) and applications to contextual bandits with linear constraints (CBwLC). Theoretical results show competitive ratios of $\rho/(1+\rho)$ in adversarial settings and near-optimal regret in stochastic settings. Claims And Evidence: The claims made in this paper are clear and supported by proof in the appendix. However, there are some typos in the proof, and I listed them in the Theoretical Claims. If the authors can answer them, I will be convinced that the evidence is clear. Methods And Evaluation Criteria: The authors use the Regret or Reward gap to the optimal strategy, which is standard in the Bandit area. Theoretical Claims: I have some questions on the Theoretical proof and I list them below: - line 660, missing reference in Lemma B.2 - line 711, constant used in $c_2$ is 12, which does not match with 13 used in the lemma statement? - line 725 - 727, in the proof of Self-bounding lemma (Lemma 6.1), one step shown in the mentioned line is upper bounding $ \| \lambda_{t_1 - 1} \|_1 + m \eta \leq c_1/\rho + m \eta$ because the inequality $ \| \lambda_{t_1 - 1} \|_1 \leq c_1/ \rho $. 
I am not aware of any properties of $\|\lambda_{t}\|_1$ and if it is not monotonically increasing with $t$, $\| \lambda_{t_1 - 1}\|_1$ can be larger than $c_1/\rho$ since the definition of $t_1$ is the largest time between $0$ and $t_2$ for which $\|\lambda_{t_1}\|_1 \in [c_1/\rho, c_2/\rho]$ and we cannot assume that when $t \in [0, t_1]$, $\|\lambda_t\|_1$ is always smaller than $c_1/\rho$. Correct me if I am wrong. Experimental Designs Or Analyses: No experimental designs are found in this paper. Supplementary Material: I reviewed the Lemma statement and proof. Relation To Broader Scientific Literature: The assumption used in this paper is weak and can be applied to many specific cases, such as multi-armed bandit (MAB) and contextual bandit (CB). The weakly adaptive regret bound for primal and dual problems is reasonable since many algorithms satisfy this property, like *EXP3-SIX* for MAB and the *Vovk forecaster* for the finite function class. This paper can be used as a fundamental work to develop more realistic algorithms. Essential References Not Discussed: The key contribution of this paper focuses on developing a near-optimal algorithm for the bandit problem with general constraints. I would like to know how this work is related to Bandits with budgets [Ding et al., 2013] published in AAAI 2013. Other Strengths And Weaknesses: This work removes the strong assumption, used by prior works, that Slater's parameter is known to the learner. This can be considered a more realistic setting. Other Comments Or Suggestions: This paper is well-written, and it is smooth to go through from the beginning to the end. Questions For Authors: Please look at the Theoretical claims. Ethical Review Concerns: No ethical concern. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **On the questions about the Theoretical proofs:** Thanks for taking the time to carefully read our proofs. We really appreciate the effort. * Thanks. We meant to cite the following works: - At line 660: Hazan, Elad. "Introduction to online convex optimization." Foundations and Trends® in Optimization (2016). - At line 1100: Foster, Dylan, and Alexander Rakhlin. "Beyond UCB: Optimal and efficient contextual bandits with regression oracles." International Conference on Machine Learning. PMLR, 2020. * Yes, you are right; $c_2$ should be $13m$. * The confusion likely stems from our informal definition of $t_1$, which we agree could be improved upon. We will update it with the following more precise definition: $t_1$ is the largest time smaller than $t_2$ such that $\|\lambda_{t_1}\|_1 \ge \frac{c_1}{\rho}$ and $\|\lambda_{t_1-1}\|_1 \le \frac{c_1}{\rho}$. Therefore, $\|\lambda_{t_1-1}\|_1 \le \frac{c_1}{\rho}$ holds by definition. Remember that the Lagrangian multipliers are initialized to $0$, and at time $t_2$ they reach $\frac{c_2}{\rho}$, which is strictly larger than $\frac{c_1}{\rho}$. **On the paper by Ding et al. (2013):** We appreciate the reviewer for bringing this paper to our attention. We will include it in our discussion of related work on the stochastic BwK model. While the paper explores a variant of the stochastic BwK model, its connection to our work is relatively limited. Several key differences set the two settings apart: their setting assumes rewards and costs are generated i.i.d., whereas we focus on algorithms that remain robust even in adversarial environments; their costs are discretized; and their regret baseline is defined with respect to the stopping time of the algorithm rather than a fixed time horizon $T$. As a result, the guarantees they achieve (i.e., a regret bound of $O(\log B)$) are not comparable to those in our setting.
In particular, in the stochastic case, our framework aligns with the standard lower bound of $\Omega(\sqrt{T})$ from the (unconstrained) multi-armed bandit problem, making a direct comparison challenging.
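The refined definition of $t_1$ in the rebuttal above (the last up-crossing of $c_1/\rho$ before $t_2$) can be made concrete with a small sketch. The following minimal Python illustration is hypothetical (the trajectory and constants are invented); it shows that the backward-scan definition guarantees $\|\lambda_{t_1-1}\|_1 \le c_1/\rho$ even when the norm is not monotone, which was the reviewer's concern:

```python
# Hypothetical illustration of the refined definition of t_1:
# t_1 is the largest time t <= t_2 with ||lambda_t|| >= c1/rho and
# ||lambda_{t-1}|| <= c1/rho, so ||lambda_{t_1 - 1}|| <= c1/rho by construction.

def find_t1(norms, t2, threshold):
    """Scan backwards from t2 for the last up-crossing of `threshold`."""
    for t in range(t2, 0, -1):
        if norms[t] >= threshold and norms[t - 1] <= threshold:
            return t
    return None

# A non-monotone trajectory starting at 0 (multipliers are initialized to 0)
# that reaches c2/rho = 2.0 at t2 = 7; c1/rho = 1.0 in this toy example.
norms = [0.0, 0.5, 1.2, 0.9, 1.1, 0.8, 1.3, 2.1]
t1 = find_t1(norms, t2=7, threshold=1.0)
print(t1)                 # the last up-crossing before t2, here t1 = 6
print(norms[t1 - 1])      # 0.8 <= c1/rho, as the definition guarantees
```

Note that the trajectory crosses the threshold twice (at $t=2$ and $t=6$); the definition deliberately picks the later crossing, so the interval $[t_1, t_2]$ on which the norm stays large is well defined.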
Summary: This paper studies the general constrained optimization problem where the reward and cost functions can either be stochastic or adversarial. By extending the LagrangeBwK framework to require the primal and dual algorithms to be weakly adaptive in addition to being no-regret, the authors design a best-of-both-worlds algorithm which does not require knowledge of the Slater's condition constant $\rho$. The results are exemplified for two constrained online learning problems. Claims And Evidence: Looks convincing, although I didn't verify the correctness of every claim Methods And Evaluation Criteria: Yes, they're consistent with previous ones in the literature Theoretical Claims: I checked the proof of Lemma 6.2 in Appendix B and it looks correct. I intuitively understood Proposition 5.3 via Figure 1. Didn't check Theorem 4.1 and those in Sections 7 & 8. Experimental Designs Or Analyses: N/A Supplementary Material: Went through Appendix B. Relation To Broader Scientific Literature: Looks like the observation that weak adaptivity can result in bounded decision variables can be used in other constrained optimization problems. Essential References Not Discussed: Didn't see any obvious missing references Other Strengths And Weaknesses: Strengths: 1. The baseline for the adversarial case is stronger than previous ones. The upper bound is complemented with a lower bound, showing that the seemingly bad $1+\rho^{-1}$ competitive ratio is already the best one can hope for if we do not want to incur $\Omega(T)$ violations. -- but see also Q1. 2. The idea of why we need weakly adaptive properties in addition to no-regret is clearly depicted via Example 5.2. Weakness: 1. The technical results are not explained, making it hard to understand why these technical results are of importance. 2. Because of 1, it is unclear what the main technical contribution of this paper is.
It seems to me that everything is based on the observation that "a direct application of LagrangeBwK may result in super large dual variables; but if we additionally require the plug-in algorithms to be weakly adaptive, it works". See Q2. Other Comments Or Suggestions: There's a typo in the citation of Lemma B.2. Questions For Authors: 1. Regarding the Strength 1, is the worse competitive ratio due to the stronger baseline? That is, if the baseline is instead the standard one in the literature (which requires average cost <= ...), what's the best competitive ratio one can hope? If applicable, does the performance of your algorithm have a better guarantee in this case? 2. Is the understanding in W2 correct? What are the main technical contributions? --- Both resolved in authors' rebuttals. Updated score accordingly. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ### **On the competitive ratio** Thank you for raising this possible source of confusion about the competitive ratio. The issue here is primarily one of nomenclature rather than our choice of a stronger benchmark. We agree that adding further clarification in the final version will be beneficial. The key distinction lies in the interpretation of $\rho$ in our work versus its meaning in the BwK literature. In our setting, we normalize the constraint so that costs must remain below a fixed threshold of 0, whereas in the BwK literature, the costs are constrained by a per-round budget of $\rho_{BwK}$. To see how our approach translates to the BwK model, consider how we would redefine the costs $g_t$ to solve BwK via our framework. Specifically, we could set $g_t(x) = \frac{x^\top c_t - \rho_{BwK}}{1 - \rho_{BwK}}$. Under this transformation, playing the void ("free") action in the BwK setting results in $g_t(0) = \frac{-\rho_{BwK}}{1 - \rho_{BwK}} = -\rho_{Adv}$, where $\rho_{Adv}$ is what we called $\rho$ in our paper. From this, it becomes evident that our competitive ratio of $1 + 1/\rho_{Adv}$ is equivalent to $1/\rho_{BwK}$. We thank the reviewer again for highlighting this potential source of confusion, and we will incorporate this clarification, including this short proof sketch, in the final version of the paper. ### **On technical contributions** Due to space constraints, we have deferred all proofs to the appendix and instead focused on providing a clear and intuitive explanation of the key ideas in the main paper. We are pleased that the reviewer appreciated our efforts to make these concepts more intuitive and accessible (especially how Example 5.2 highlights the necessity of using adaptive regret minimizers). However, we respectfully disagree with the idea that our contributions lack technical depth. Several key components of our proof are far from straightforward.
In particular, in Section 6, we discuss the self-bounding lemma, which demonstrates that the Lagrangian variables remain automatically bounded when using scale-free primal algorithms. We find the proof of the main lemma (Lemma 6.2) highly nontrivial, and we will include a sketch of the proof in the main paper using the extra page available in the camera ready. See the answer to Reviewer Ldsh for more details on the proof (which we will include in the final version of the paper). Moreover, we believe that handling general constraints is a highly non-trivial task. Indeed, many previous works on BwK have attempted to achieve similar results under assumptions as weak as ours. However, they have only partially succeeded, addressing the problem only in the stochastic setting [1,2,3,4,5]. Removing the knowledge of the Slater parameter is both important on the practical side and interesting from a technical standpoint, even more so when it comes from the elegant idea of using adaptivity to self-bound the Lagrangian multipliers. These results are not only technically challenging but also highly relevant, as they enable significant applications. In the paper, we highlight two key examples: bandits with general constraints and the contextual bandits with linear constraints problem recently introduced by Slivkins et al. (2023b). Both models have practical implications, including applications in autobidding for first-price auctions. Given the generality of our approach, it is likely that there are additional applications we have yet to explore. **References:** [1] Agrawal, Shipra, and Nikhil R. Devanur. "Bandits with concave rewards and convex knapsacks." Proceedings of the fifteenth ACM conference on Economics and computation. 2014. [2] Agrawal, Shipra, and Nikhil R. Devanur. "Bandits with global convex constraints and objective." Operations Research 67.5 (2019): 1486-1502. [3] Yu, Hao, Michael Neely, and Xiaohan Wei. "Online convex optimization with stochastic constraints."
Advances in Neural Information Processing Systems 30 (2017). [4] Wei, Xiaohan, Hao Yu, and Michael J. Neely. "Online primal-dual mirror descent under stochastic constraints." Proceedings of the ACM on Measurement and Analysis of Computing Systems 4.2 (2020): 1-36. [5] Castiglioni, Matteo, et al. "A unifying framework for online optimization with long-term constraints." Advances in Neural Information Processing Systems 35 (2022): 33589-33602. --- Rebuttal Comment 1.1: Comment: Thank you for your response to my Q1. Yes, the claim of "stronger result" definitely makes more sense if the two competitive ratios are in fact identical. Thanks for listing the technical contributions. When I looked at Appendix B I felt it was fairly standard, for example the following paper (which studies a completely unrelated problem) also uses a similar argument that "since $\eta$ is small, it takes many rounds for a coordinate to be large". Zihan Zhang, Wenhao Zhan, Yuxin Chen, Simon S Du, Jason D Lee. "Optimal Multi-Distribution Learning". COLT 2024. But now I realize they are actually pretty different, because in the above paper they are studying something over a simplex, but here the dual variables have unbounded regions. I agree that the results regarding almost-independent coordinates are far from intuitive. I encourage the authors to include more discussions regarding the technical difficulties in the revision. I am now leaning more towards accept. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to read our responses. We appreciate your reassessment of the contributions of our paper, and hope this can be reflected in your final score.
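The ρ-translation between the adversarial and BwK conventions sketched in the rebuttal above is a one-line algebraic identity, and it can be checked numerically. A minimal Python sketch, assuming an arbitrary per-round budget $\rho_{BwK} \in (0,1)$ (the function name `g` mirrors the rebuttal's cost rescaling):

```python
# Numeric check of the identity from the rebuttal: with the rescaled cost
# g_t(x) = (x^T c_t - rho_bwk) / (1 - rho_bwk), the void action x = 0 gives
# g_t(0) = -rho_bwk / (1 - rho_bwk) = -rho_adv, and the two competitive
# ratios coincide: 1 + 1/rho_adv == 1/rho_bwk.

def g(raw_cost, rho_bwk):
    """Rescaled cost of an action whose raw per-round cost is `raw_cost`."""
    return (raw_cost - rho_bwk) / (1.0 - rho_bwk)

for rho_bwk in (0.1, 0.25, 0.5, 0.9):
    rho_adv = -g(0.0, rho_bwk)  # Slater parameter induced by the void action
    assert abs(rho_adv - rho_bwk / (1.0 - rho_bwk)) < 1e-12
    assert abs((1.0 + 1.0 / rho_adv) - 1.0 / rho_bwk) < 1e-12
print("competitive ratios match")
```

The algebra behind the assertions: $\rho_{Adv} = \rho_{BwK}/(1-\rho_{BwK})$, so $1 + 1/\rho_{Adv} = 1 + (1-\rho_{BwK})/\rho_{BwK} = 1/\rho_{BwK}$.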
Feature out! Let Raw Image as Your Condition for Blind Face Restoration
Accept (poster)
Summary: This paper proposes the Pseudo-Hashing Image-to-Image Schrödinger Bridge (P-I2SB) framework to enhance the restoration potential of the Schrödinger Bridge (SB) by correcting data distributions and effectively learning the optimal transport path between any two data distributions. The approach preprocesses HQ images during training by hashing them into pseudo-samples according to a rule related to the LQ images. This guarantees optimal and reversible solutions in SB, enabling the inference process to learn effectively and allowing P-I2SB to achieve state-of-the-art results in BFR. ## update after rebuttal: The rebuttal has addressed certain formatting issues in the manuscript. I respectfully maintain my original rating. Claims And Evidence: Yes. Methods And Evaluation Criteria: It is the first work to introduce the Pseudo-Hashing Module (PHM) and the Schrödinger Bridge Module (SBM) theories into blind face restoration. Experimental results demonstrate superior restoration performance on both synthetic and real-world datasets. Theoretical Claims: Yes. Experimental Designs Or Analyses: The authors conducted extensive experiments to validate their proposed method. Supplementary Material: In the supplementary material, the authors provide proofs of the theorems and qualitative comparisons on the CelebA-Test and real-world datasets. Relation To Broader Scientific Literature: This paper complements the exploration of blind face restoration, making a certain contribution. Essential References Not Discussed: None. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: In Table 1, the reference numbers for SoTA methods should be listed. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your careful review of the paper structure and formatting, which is greatly appreciated. > Q1. In Table 1, the reference numbers for SoTA methods should be listed. - Due to the ICML citation format not using numbers, the references were too lengthy to include directly in the table. Instead, we noted the references in the Comparison Methods section of Sec. 5.1. We have now supplemented the references and adjusted the font size in the table to ensure that they can be properly displayed in the main paper in future versions. > Sec.5.1 (Line 361 - Line 370) **Comparison Methods** We compare Pseudo-Hashing with recent BFR methods, including PSFRGAN (Chen et al., 2021a), GFPGAN (Wang et al., 2021), GPEN (Yang et al., 2021), VQFR (Gu et al., 2022), CodeFormer (Zhou et al., 2022), RestoreFormer (Wang et al., 2022), DMDNet (Li et al., 2022), DAEFR (Tsai et al., 2023), DifFace (Yue & Loy, 2022), DR2 (Wang et al., 2023), PGDiff (Yang et al., 2024), DiffBIR (Lin et al., 2023), PMRF (Ohayon et al., 2024), FlowIE (Zhu et al., 2024) and I2SB (Liu et al., 2023) . 
| Metrics | Input | GPEN | GFP | Restore | DMDNet | DAEFR | DifFace | DiffBIR | DR2 | PGDiff | PMRF | FlowIE | I2SB | **P-I2SB** | |----------------|--------|------|-----|---------|--------|-------|---------|---------|-----|--------|------|--------|------|------------| | | | *CVPR* | *CVPR* | *CVPR* | *TPAMI* | *ICLR* | *TPAMI* | *ECCV* | *CVPR* | *NIPS* | *ICML* | *CVPR* | *ICML* | | | | | (Yang et al., 2021) | (Wang et al., 2021) | (Wang et al., 2022) | (Li et al., 2022) | (Tsai et al., 2023) | (Yue & Loy, 2022) | (Lin et al., 2023) | (Wang et al., 2023) | (Yang et al., 2024) | (Ohayon et al., 2024) | (Zhu et al., 2024) | (Liu et al., 2023) | | | SSIM ↑ | 0.6460 | 0.6777 | _0.6827_ | 0.6219 | 0.6727 | 0.5892 | 0.6494 | 0.6570 | 0.6554 | 0.6220 | 0.6815 | 0.6479 | **0.7047** | 0.6581 | | PSNR ↑ | 24.921 | 25.423 | 25.401 | 24.206 | 25.318 | 22.439 | 24.055 | 25.297 | 24.194 | 22.920 | _26.001_ | 24.594 | **26.174** | 25.405 | | FID ↓ | 93.564 | 22.508 | 20.676 | 17.080 | 22.790 | 18.295 | 19.654 | 19.288 | 32.628 | 22.547 | _14.248_ | 21.393 | 25.6026 | **13.910** | | NIQE ↓ | 9.1407 | 6.7775 | 6.7324 | _5.3440_ | 6.7038 | 5.3992 | 6.1638 | 6.4053 | 8.1487 | 5.4556 | 5.6228 | 6.3571 | 6.5709 | **5.3300** | | LPIPS ↓ | 0.5953 | 0.2956 | 0.2823 | 0.2702 | 0.2965 | 0.2695 | 0.3052 | 0.2689 | 0.3447 | 0.3011 | _0.2413_ | 0.2623 | 0.2851 | **0.2395** | |
Summary: The authors present Pseudo-Hashing Image-to-Image Schrödinger Bridge (P-I2SB), a novel framework inspired by optimal mass transport. By correcting data distributions and effectively learning the optimal transport path between them, it enhances the restoration capabilities of Schrödinger Bridge (SB). Experimental results demonstrate that P-I2SB achieves state-of-the-art performance in blind face restoration (BFR), producing more natural textures compared to previous methods. ## Update after rebuttal The authors have addressed most of my concerns. However, one point remains unclear: whether commonly used data augmentation techniques in image restoration—when applied alone, without the involvement of SwinIR or other external components—can help the Vanilla-I2SB model better identify the optimal path. Clarifying this would further illuminate the distinction between the effects of the PHM module and those of data augmentation. I will keep my score unchanged. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes, the proposed method achieves state-of-the-art performance on the BFR task. Theoretical Claims: Yes, The theoretical claims and proofs appear similar to those of I2SB. As I'm not an expert in this area, I did not notice any specific issues. Experimental Designs Or Analyses: Yes, I have carefully reviewed the experimental design and the corresponding ablation studies, and I did not encounter any issues. Supplementary Material: Yes, I reviewed the supplementary material, particularly the appendix text, which offered additional analysis and experimental results. Relation To Broader Scientific Literature: This paper builds upon the I2SB (ICML 2023) framework by integrating a novel pseudo-hashing preprocessing strategy. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The paper is well-structured and clearly presented. 2. The experiments are comprehensive, with thorough ablation studies. 3. 
The authors provide ample discussion, addressing both the method’s advantages and limitations. Weaknesses: 1. While the authors claim that blind tasks do not align with the typical Monge optimal transport framework—due to the non-uniqueness of “optimal transport” between images—this purportedly contradicts the optimality principle of a reversible SB and complicates constructing a consistent I2SB model. However, their proposed approach appears more akin to a data augmentation technique. Similar to the data augmentation method in DiffBIR, pairing I2SB with such an approach could potentially achieve similar results, suggesting the method may be overly simplistic. 2. Although the authors assert that their approach learns more optimal transport paths than existing methods, they offer limited analyses or evidence to validate that the paths are indeed optimal. Additional experiments or analyses, such as those found in Appendix Figure 6, would help substantiate these claims. Other Comments Or Suggestions: 1. Lines 73 and 147 contain incorrect double quotation marks. 2. Line 164 references “problem (2),” but the text does not provide a corresponding explanation. Questions For Authors: What is the significance of designing Cat-I2SB? After performing the cat operation, the resulting number of channels differs from other methods. If only the first three channels are retained, equation (12) can be simplified to $(I_{hq}, I_{lq}^{d})$. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your comprehensive review and valuable insights. > Q1. Their approach resembles a data augmentation technique, like in DiffBIR, suggesting that pairing I2SB with it might yield similar results, indicating potential simplicity. - **Our P-I2SB is not a data augmentation method.** Data augmentation involves using certain properties of images, such as rotation and brightness, to augment the actual numerical distribution of training data without altering the semantic information of the images. In contrast, the strategy design in pseudo-hashing module (PHM) is intended to satisfy the boundary constraints in SB models, requiring the direct or indirect involvement of degradation representations, and this hashing design is reversible. - **DiffBIR** consists of two stages: first, the removal of degradation information, followed by generating a high-quality image that balances quality and identity. This is **completely different** from PHM. In our first stage (PHM), we do not remove any degradation information; instead, we rely on an SB-based model to learn indirect restoration relationships. > Q2. They provide limited evidence that the paths are optimal; additional analyses, like those in Appendix Figure 6, would help substantiate these claims. - **Existence of Optimal Path in P-I2SB**: We first establish the existence of the optimal path in our P-I2SB. Our model is constructed based on SB theory, as outlined in the Preliminaries (Sec.3). The consistency between Equation (7) and the optimal transport problem in Equation (2) reveals why the solution of the SB model is the optimal transport path. - **Monge's Problem**, linked to image restoration, seeks the optimal mapping $T$ between two distributions under a certain cost metric. While it can theoretically map low-quality to high-quality images, its practical scope is limited by mapping constraints. 
Therefore, the optimal transport problem introduces a version that finds the optimal joint distribution, ensuring a solution always exists. This is depicted in Equation (2). In BFR problems, the relationship between low- and high-quality images is more complex. To leverage the existing I2SB model, as discussed in Preliminaries (Sec.3), we propose PHM to address these complexities. - **Additional Experiments and Analyses**: To analyze the superiority of the path found by our P-I2SB, we compared P-I2SB with the baseline method on the loss analyses along the path $\{x_t\}, t \in [0,1]$. The results can be found at the following link: [https://anonymous.4open.science/r/P-I2SB](https://anonymous.4open.science/r/P-I2SB). $$ \text{forward: } X_t\sim q(X_t|X_0, X_1)=\mathcal{N}(X_t;\mu_t(X_0, X_1), \Sigma_t), \qquad \text{inverse: } \hat{X}_t \sim p(X_t|X_0^\epsilon,X_{t+1})=\mathcal{N}(X_t;\mu_\sigma(X_0^{\epsilon}, X_{t+1}), \Sigma_\sigma). $$ > Q3. Lines 73 and 147 contain incorrect double quotation marks. Thank you for pointing this out. Due to compilation issues, we have used **bold** and *italic* formatting instead to avoid errors related to the use of double quotation marks. The revised version is as follows: - the ***optimal transport*** between images is not unique - a ***relaxed*** version introduced by Kantorovich > Q4. Line 164 references “problem (2),” but the text does not provide a corresponding explanation. - Thank you for your insightful question, which helped us identify an error in Equation (7). The corrected version is as follows, where $\bar{v}(t,x):=f_t(x)-\frac{\sqrt{\beta_t}}{2}\nabla\log\bar{p}(t,x)$ and $p(t,x)\mathrm{d}x = \mu_t(\mathrm{d}x)$. $$ \inf_{(p,v)} \int_0^1\int_{\mathbb{R}^{d}} [\frac{1}{2}||v(t,x)-\bar{v}(t,x)||^2 + \frac{\beta_t}{8}||\nabla\log\frac{p(t,x)}{\bar{p}(t,x)}||^2]p(t,x)\mathrm{d}x\mathrm{d}t. \tag{7} $$ - **The text "as shown in problem (2)" means it is actually the same as equation (2)**.
This implies that the SB problem is linked to the optimal transport problem. Furthermore, $p(t,x), (t\in[0,1])$ obtained by solving the SB model represents a form of optimal transport path. The phrase "as shown in problem (2)" is incorrect, and we will correct it to "just like equation (2)". $$ \inf_{(\mu,v)} \int_0^1{\int_{\mathbb{R}^{d}} ||v(t,x)||^2 \mu_t(\mathrm{d}x)}\mathrm{d}t. \tag{2} $$ > Q5. What is the significance of designing Cat-I2SB? If only the first three channels are retained, equation (12) can be simplified to $(I_{hq}, I_{lq}^d)$. - The Cat-I2SB is designed to **implement PHM strategies by hashing training image pairs** from $(I_{hq}, I_{lq}^d)$ to $(I_{hq} \oplus I_{lq}^d, I_{lq}^d \oplus I_{lq}^d)$, enabling the creation of varied training pairs for different degradations $d$. - In Equation (12), only the first three channels are used in the final data. However, this hashed data is utilized during P-I2SB training, where the diffusion network's input and output channels are adjusted. The original pairs $(I_{hq}, I_{lq}^d)$ remain unhashed. This modification increases the parameter size of Unet, approximately 558.6MB. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for the detailed rebuttal, which has resolved most of my questions. However, I am still curious about the effect of combining Vanilla-I2SB with data augmentation (such as the data augmentation methods used in DiffBIR) and look forward to further understanding the different impacts of the PHM module and data augmentation methods in the model. --- Reply to Comment 1.1.1: Comment: > Q1. However, I am still curious about the effect of combining Vanilla-I2SB with data augmentation (such as the data augmentation methods used in DiffBIR) and look forward to further understanding the different impacts of the PHM module and data augmentation methods in the model. 1. 
***Understanding Data Augmentation in DiffBIR***: The restoration process in DiffBIR [1] is divided into two stages: degradation removal (Stage 1) and information regeneration (Stage 2). In Stage 1, DiffBIR employs various restoration modules to address degradations specific to each BIR task. For BFR tasks, DiffBIR utilizes SwinIR as a pre-trained model, allowing it to generalize effectively to unknown degradations by leveraging the performance of SwinIR across different tasks. 2. ***Combining Vanilla-I2SB with Data Augmentation***: Based on the pre-trained degradation removal in Stage 1 of DiffBIR, Vanilla-I2SB can achieve improved performance. This "data augmentation" involves using a lightweight restoration model to initially process low-quality images, reducing the restoration difficulty for Vanilla-I2SB by removing most degradations. However, this approach presents two challenges: first, the large parameter size of SwinIR allows it to function as an independent restoration model, potentially skewing fairness when comparing the capabilities of the combined Vanilla-I2SB and SwinIR model with other SOTA methods; second, altering the low-quality input contradicts our exploration of SB models addressing joint distribution issues between complex data distributions, as SwinIR's pre-restoration simplifies the distribution complexity of low-quality images. 3. ***Different Impacts*** of the PHM module and data augmentation: **The primary difference lies in PHM targeting high-quality data distributions, while DiffBIR focuses on low-quality ones**. This distinction arises from the different objectives of PHM and DiffBIR. Our PHM applies pseudo-hashing on the high-quality marginal distribution to assist SB models in finding optimal paths, addressing challenges in constructing optimal solutions for blind image restoration tasks.
Conversely, DiffBIR uses a pre-trained SwinIR model for initial low-quality image restoration, as non-degraded images serve as better conditions for diffusion-based models, aiming to simplify the detail generation in Stage 2. 4. ***Future Directions***: We appreciate your suggestions, which have prompted us to further explore the differences between PHM and SwinIR pre-restoration. We are investigating and experimenting with ways to improve the impact of low- and high-quality marginal distributions on SB models, aiming for better progress in future work. _reference: [1]Lin X, He J, Chen Z, et al. DiffBIR: Toward blind image restoration with generative diffusion prior[C]//European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2024: 430-448._
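The "Cat"-style pseudo-hashing discussed in the rebuttals above (hashing a training pair $(I_{hq}, I_{lq}^d)$ to $(I_{hq} \oplus I_{lq}^d, I_{lq}^d \oplus I_{lq}^d)$, with Inverse-PHM keeping only the first channels) can be illustrated with a toy sketch. The following Python illustration is purely hypothetical: it models images as flat lists rather than image tensors, but it captures the concatenation rule and why the hashing is reversible:

```python
# Toy sketch of Cat-style pseudo-hashing: a training pair (hq, lq) is hashed
# to (hq ++ lq, lq ++ lq), so both bridge endpoints share the same
# degradation-dependent second half, and the HQ image is recovered by an
# inverse step that simply drops the appended part.
# Images are modeled as flat lists here purely for illustration.

def hash_pair(hq, lq):
    """Pseudo-hash a (HQ, LQ) pair via concatenation with the LQ image."""
    return hq + lq, lq + lq

def inverse_hash(hashed, n):
    """Inverse-PHM: keep only the first n entries (the original channels)."""
    return hashed[:n]

hq = [0.9, 0.8, 0.7]          # toy "high-quality" image
lq = [0.4, 0.3, 0.2]          # toy "low-quality" image

hashed_hq, hashed_lq = hash_pair(hq, lq)
# Reversibility: dropping the appended part recovers the original HQ image.
assert inverse_hash(hashed_hq, len(hq)) == hq
print("hashing is reversible")
```

The design point this sketch makes concrete is that the hashing is deterministic and invertible given the LQ image, which is what lets the bridge's HQ-side marginal be corrected without discarding any degradation information.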
Summary: This paper proposes the Pseudo-Hashing Image-to-Image Schrödinger Bridge (P-I2SB), a novel framework for blind face restoration (BFR). The key insight of this paper is that using raw LQ images directly as the starting point for the reverse diffusion process is theoretically optimal. The authors argue that Schrödinger Bridge (SB)-based approaches offer a better alternative to conventional diffusion-based BFR methods by explicitly learning the optimal transport path between the HQ and LQ distributions. To address limitations in existing SB models (such as optimality and reversibility issues), the paper introduces a Pseudo-Hashing Module (PHM) that preprocesses HQ images into pseudo-samples, ensuring a structurally similar distribution to LQ images. This facilitates an optimal and reversible transformation in the SB framework. Extensive experiments demonstrate that P-I2SB outperforms prior BFR methods in terms of texture realism and preservation of facial details. Claims And Evidence: The authors claim: 1. Schrödinger Bridge-based approaches can offer theoretically optimal transport paths for image restoration. 2. The Pseudo-Hashing Module (PHM) improves SB-based restoration by ensuring reversibility and distribution alignment between LQ and HQ samples. 3. P-I2SB achieves state-of-the-art (SOTA) results in BFR, outperforming existing methods in quality and efficiency. These claims are supported by theoretical analysis and quantitative results. Methods And Evaluation Criteria: The paper presents a strong methodological foundation, leveraging optimal transport theory to redefine BFR as a Schrödinger Bridge problem. The proposed PHM preprocessing step ensures that LQ and HQ distributions align better, improving reversibility in the transformation. Theoretical Claims: The paper provides a mathematical justification for using Schrödinger Bridge models in BFR.
The derivations appear correct, but the paper could include more ablation studies to quantify the impact of PHM on transport efficiency explicitly. Experimental Designs Or Analyses: The experimental setup is solid. Supplementary Material: The paper provides additional implementation details. Relation To Broader Scientific Literature: This work builds upon diffusion-based image restoration and Schrödinger Bridge methods in generative modeling. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: (1) Theoretical novelty: Reformulates BFR as an optimal transport problem, leading to a more principled approach. (2) Practical improvements: Achieves better texture details and inference efficiency than prior methods. (3) Conceptually elegant: The pseudo-hashing mechanism is a clever solution to the non-reversibility issue in SB models. Weaknesses: (1) The computational complexity of PHM preprocessing is not well analyzed. (2) More ablations on different hashing strategies are required. (3) Limited real-world evaluations: most experiments use synthetically degraded images. Other Comments Or Suggestions: No Questions For Authors: 1. Could adaptive pseudo-hashing improve restoration in real-world scenarios? 2. What is the computational cost of PHM preprocessing relative to standard diffusion models? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your detailed feedback and constructive suggestions. Your input is crucial to refining and enhancing the quality of our paper. > Q1. The computational complexity of PHM preprocessing is not well analyzed. What is the computational cost of PHM preprocessing relative to standard diffusion models? - **Computational Complexity**: In comparison to the baseline methods, our approach incorporates an additional pseudo-hashing module (PHM). The computational complexity of this module is as follows: for Cat-I2SB, it is $\mathcal{O}(n)$; for Res-I2SB, it is $\mathcal{O}(n)$; and for Noise-I2SB, it is $\mathcal{O}(nT)$, where $T=10$ denotes the local number of steps in DDIM, and $n$ denotes the number of input images. This computational complexity is calculated with respect to processing $n$ input images. During inference, our method only adds the Inverse-PHM process relative to the baseline methods, without significantly increasing the complexity of this model or inference time. > Q2. More ablations on different hashing strategies are required. The paper could include more ablation studies to quantify the impact of PHM on transport efficiency explicitly. - We compared the ablation experiments of different strategies in **Table 3** of the main paper and **Figure 7** in the appendix. The related comparison results are provided here again. Additionally, we considered combining all three strategies. Due to time constraints, this was only validated in a toy experiment, as shown in **Sec.4.3 (Toy Exploration and Analysis)**, and more results are shown at the following link: [https://anonymous.4open.science/r/P-I2SB](https://anonymous.4open.science/r/P-I2SB). - **Table.3 Ablation studies** on CelebA-Test. (a) denotes Vanilla-I2SB as the baseline, (b)-(c) compare different condition guidance, and (d)-(f) compare different Pseudo-Hashing strategies.
"lq" donates LQ images as the condition in forward and reverse process and "da" donates degradation-aware representation as condition. | Method | Condition | Condition | Hashing | Metrics | Metrics | Metrics | |-----------------|:---------------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------------:| | | lq | da | strategy | NIQE↓ | FID↓ | LPIPS↓ | | (a) | | | | 25.6026 | 6.5709 | 0.2851 | | (b) | ✓ | | | 25.9522 | 6.4849 | 0.2904 | | (c) | ✓ | ✓ | | 25.4932 | 6.3898 | 0.2945 | | **(d) ours** | ✓ | ✓ | Noise | 18.2646 | 5.7595 | 0.2678 | | **(e) ours** | ✓ | ✓ | Cat | 13.9109 | 5.3300 | 0.2395 | | **(f) ours** | ✓ | ✓ | Res | 14.2941 | 5.4401 | 0.2431 | > Q3. Limited real-world evaluations—most experiments use synthetically degraded images. Could adaptive pseudo-hashing improve restoration in real-world scenarios? - We conducted tests on four datasets in total. Among them, CelebA-Test consists of synthetically degraded images, while ***LFW, CelebChild, WebPhoto-Test, and Wider*** are real-world image datasets. **Table 2** and **Figure 5** in main paper quantitatively and qualitatively compare the restoration performance on these four real-world datasets. Additionally, in **Appendix Sec. J**, we provide a detailed comparison of the restoration effects across the four real-world datasets using Figures 9, 10, 11, and 12. - The reason for selecting these four datasets for testing is that previous SOTA methods have been validated on these publicly available datasets. They include various low-quality images with different levels of degradation found in real-world scenarios. This allows for a fairer comparison between our method and the SOTAs.
Summary: This paper proposes P-I2SB, a novel framework for blind face restoration that leverages a pseudo-hashing strategy to preprocess image pairs and a Schrödinger Bridge Module (SBM) to learn optimal transport paths between LQ and HQ distributions. The key innovation lies in directly using raw LQ images as endpoints in the diffusion process, addressing limitations in existing Schrödinger Bridge (SB) methods related to solution optimality and reversibility. Experiments demonstrate state-of-the-art performance on synthetic and real-world datasets. ## update after rebuttal Thank you for your reply. After considering the comments from the other reviewers, I have decided to increase the score. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes. Mainly the visual effects. Relation To Broader Scientific Literature: n/a Essential References Not Discussed: n/a Other Strengths And Weaknesses: Strengths: 1. The pseudo-hashing module (PHM) is a novel and theoretically grounded approach to ensure distribution alignment, enabling direct use of LQ images without complex feature extraction. 2. Comprehensive theoretical analysis justifies the framework’s design and improvements over vanilla SB methods. Weaknesses: The paper does not thoroughly analyze the computational overhead of the pseudo-hashing strategies (Cat/Res/Noise-I2SB) compared to baseline methods, despite claiming retained inference speed. Other Comments Or Suggestions: n/a Questions For Authors: - What about the sensitivity to degradation types and the scalability to non-face domains? - Res-I2SB assumes that the degradation process can be modeled as a linear residual from HQ to LQ, but in reality, degradation (such as JPEG compression and non-uniform blurring) is mostly a nonlinear transformation, which may lead to insufficient fitting ability of the model for complex degradation. 
- Cat-I2SB, Res-I2SB, Noise-I2SB still require manual prior selection. I wonder if it would be better to introduce pre-trained degeneration classifiers to guide strategy selection? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the thorough review and insightful comments you have provided. > Q1. The paper does not thoroughly analyze the computational overhead of the pseudo-hashing strategies (Cat/Res/Noise-I2SB) compared to baseline methods, despite claiming retained inference speed. - **Computational Complexity**: In comparison to the baseline methods, our approach incorporates an additional pseudo-hashing module (PHM). The computational complexity of this module is as follows: for Cat-I2SB, it is $\mathcal{O}(n)$; for Res-I2SB, it is $\mathcal{O}(n)$; and for Noise-I2SB, it is $\mathcal{O}(nT)$, where $T=10$ denotes the local number of steps in DDIM, and $n$ denotes the number of input images. This computational complexity is calculated with respect to processing $n$ input images. During inference, our method only adds the Inverse-PHM process relative to the baseline methods, without significantly increasing the complexity of this model or inference time. > Q2. What about the sensitivity to degradation types and the scalability to non-face domains? - **Sensitivity to Degradation Types**: The pseudo-hashing module (PHM) is specifically designed for image restoration under various unknown degradation. The three hashing strategies presented in the paper introduce degradation feature representations either directly or indirectly. This enables the module to perceive both the type and degree of degradation, thereby effectively handling low-quality image restoration across different degradation types. - **Scalability to Non-Face Domains**: Our method possesses scalability, and we have verified its effectiveness primarily in blind face restoration (BFR), where the demand for naturalness is exceedingly high. As noted in *"Limitations of Face Image Generation" [1]*, face restoration is more challenging than non-face tasks due to the need for naturalness and identity consistency. 
The restoration of identity, micro-expressions and other details is crucial, significantly validating the effectiveness of our method. Meanwhile, non-face models emphasize robustness, and currently, no general model can be directly applied to face restoration. We plan to extend our method to general restoration tasks in the future. The advancement of BFR is also critical to the development of the face restoration field, and we will continue to advance BFR and expand our research into general domains. _reference: [1] Rosenberg H, Ahmed S, Ramesh G, et al. Limitations of face image generation. In AAAI, 2024, 38(13): 14838-14846._ > Q3. Res-I2SB assumes that the degradation process can be modeled as a linear residual from HQ to LQ, but in reality, degradation (such as JPEG compression and non-uniform blurring) is mostly a nonlinear transformation, which may lead to insufficient fitting ability of the model for complex degradation. - This comment is instructive for us. The hashing module in Res-I2SB is not designed to construct a linear residual from LQ to HQ. Instead, it aims to model the transformation relationship between two data distributions using an SB-based model. This transformation is often a complex nonlinear one, which necessitates the use of the SB model for resolution. In Res-I2SB, $(I_{lq}, I_{lq} - I_{hq})$ is treated as a new image pair. We do not further explore its linear relationship but require that $I_{lq} - I_{hq}$ serve as a new boundary distribution to help find the optimal path. - In this context, the SB model aims to find the optimal joint distribution between given distributions, not to map low-quality to high-quality images. This joint distribution perspective is a broader generalization. Our goal is to establish the optimal dynamic diffusion path between these images, without assuming any specific mapping relationship, such as a linear constraint.
The Preliminaries (Sec. 3) in the main paper show the evolution from a mapping to a joint distribution perspective in optimal transport problems. > Q4. Cat-I2SB, Res-I2SB, Noise-I2SB still require manual prior selection. I wonder if it would be better to introduce pre-trained degeneration classifiers to guide strategy selection? - Since we are dealing with a blind task where the degradation parameters and types are random and uncertain, potentially involving multiple combinations, classifiers cannot be used to clearly distinguish each category. - In terms of strategy design, Noise-I2SB handles differences caused by various degradations using degradation representation. We employ the formula $I_{hq}^{noise} = I_{lq}^d + \lambda_d \epsilon$, where $\lambda_d$ represents the noise level, which is directly related to $I_{lq}^d$. Here, we obtain $\lambda_d$ as a degradation representation feature of $I_{lq}^d$ via an MLP layer. - Looking forward, we will focus on researching how to design PHM strategies without relying on manual prior selection. This will require further exploration of the relationship between the conditions for the optimal solution in SB and the boundary constraints.
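For intuition, the Noise-I2SB pseudo-hashing step $I_{hq}^{noise} = I_{lq}^d + \lambda_d \epsilon$ can be sketched in a few lines of NumPy. This is a toy illustration under our own assumptions: `predict_noise_level` is a hypothetical stand-in for the MLP layer that maps $I_{lq}^d$ to $\lambda_d$, not the actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_noise_level(lq):
    # Hypothetical stand-in for the MLP that maps degraded-image
    # statistics to a scalar noise level lambda_d.
    return 0.1 + 0.4 * float(np.std(lq))

def noise_pseudo_hash(lq):
    # Noise-I2SB pseudo-hashing: I_hq_noise = I_lq_d + lambda_d * eps.
    # The pseudo-sample stays structurally aligned with the LQ input,
    # giving the SB model compatible boundary distributions.
    lam = predict_noise_level(lq)
    eps = rng.standard_normal(lq.shape)
    return lq + lam * eps, lam

lq = rng.random((3, 64, 64))  # toy degraded image, CHW layout
pseudo_hq, lam = noise_pseudo_hash(lq)
assert pseudo_hq.shape == lq.shape
```

Only the forward construction is shown here; the Inverse-PHM step described in the rebuttal handles the reverse mapping at inference.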
EraseAnything: Enabling Concept Erasure in Rectified Flow Transformers
Accept (poster)
Summary: This paper highlights the limitations of existing concept-erasing methods, such as CA, ESD, and UCE, which were developed for Stable Diffusion models utilizing U-Net, cross-attention, and CLIP text encoders. The authors argue that these methods are ineffective for Flux, a modern multi-modal diffusion transformer that employs a T5 text encoder. To address this gap, the paper proposes new loss functions for concept erasure. Recognizing that concept erasure involves a bi-level optimization problem balancing both concept removal and irrelevant concept preservation, the authors integrate multiple loss terms: the original ESD loss (Equation 2), an attention-attenuation loss (Equation 3), a diffusion loss (Equation 4), and a reverse self-contrastive loss (Equation 5) to preserve irrelevant concepts. The experiments primarily focus on nudity removal, but also include tests on entity, abstraction, and relationship-based concepts, as well as celebrity face removal. Claims And Evidence: This paper argues that existing text-to-image concept-erasing methods, such as CA, ESD, and UCE, which were originally developed for Stable Diffusion architectures, fail to generalize to the Flux architecture. The authors demonstrate this claim visually in Figure 1, showing that applying methods like ESD, UCE, and EAP to Flux does not effectively erase concepts such as "nude." However, the evidence provided in this paper is insufficient to fully support this claim. Firstly, according to line 363, the authors conducted experiments that only fine-tune “add_k_proj” and “add_q_proj” within the dual-stream blocks of Flux. This limited approach raises concerns because it excludes other potentially crucial layers, such as all attention layers across both the 19 dual-stream and 38 single-stream blocks ("to_k", "to_q", "to_v", "to_out.0"), which could significantly influence concept erasure.
To demonstrate that methods designed for Stable Diffusion are ineffective for Flux, a more comprehensive evaluation involving fine-tuning of all relevant attention layers is necessary. In the appendix, while the authors mention excluding "add_v_proj" and "to_v" due to numerical sensitivity, fine-tuning these layers is common practice in the community [A]. Moreover, the authors primarily rely on LoRA-based fine-tuning, which inherently preserves the original concepts within pre-trained model weights. Thus, to convincingly demonstrate genuine concept erasure, experiments involving full fine-tuning are required. Secondly, Flux employs not only the T5 encoder but also the CLIP text encoder. Since Flux utilizes both encoders, it remains unclear whether concept-erasing methods would perform differently if we use the CLIP encoder alone. [A] huggingface, https://github.com/huggingface/diffusers/tree/main/examples/dreambooth Methods And Evaluation Criteria: The evaluation methods generally make sense for assessing quality. However, the claim in Section 5.2 that FlexControl provides an "erase anything" solution requires additional supporting evidence. Specifically, authors should demonstrate the method's effectiveness through diverse and challenging cases, such as erasing color (i.e., red rose, green bag) or object count (i.e., two oranges, three cats). Theoretical Claims: The target in the loss function is a scalar value using the L2 norm. However, ESD’s objective function regresses the model toward the guided prediction of the pre-trained model. Experimental Designs Or Analyses: As mentioned previously, the experimental analysis should include results from LoRA and full fine-tuning across all attention layers. Supplementary Material: Misleading argument on trainable parameters in Section A. See “Claims And Evidence” Relation To Broader Scientific Literature: Concept erasure in generative models is a key research challenge in text-to-image generation. 
This paper extends prior works, such as ESD, by adapting the loss formulation to transformer-based architectures like Flux. Essential References Not Discussed: No issues found. Other Strengths And Weaknesses: See “Claims And Evidence” Other Comments Or Suggestions: Additional evaluation with Stable Diffusion 3. Questions For Authors: No additional questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed comments and interest in our work! - **(A) Limited Fine-Tuning due to VRAM Constraints** We acknowledge the reviewer's concern regarding limited fine-tuning. Due to 80GB VRAM constraints on our single A100, full fine-tuning was infeasible. We opted for LoRA, prioritizing layers with the most significant impact on text-to-image generation. Optimizing `to_q` and `to_k` degraded image quality without effective concept erasure, while `add_q_proj` and `add_k_proj` proved effective. Optimizing `add_v_proj` and `to_v` yielded noisy outputs, leading to their exclusion. We recognize the architectural differences between Flux and Stable Diffusion (MMDiT vs. U-Net, Rectified Flow vs. DDPM/DDIM) and base our conclusions on empirical observations. - **(B) T5 Dominance in Flux Generation** Our experiments demonstrate that the T5 encoder significantly influences Flux's generation. As shown [here](https://imgur.com/a/047aypl), `prompt_embeds` from T5 acts as `encoder_hidden_states`, similar to CLIP in SD models. Conversely, `pooled_prompt_embeds` from CLIP primarily affects time embeddings, with minimal impact on final output. Adding noise to T5 features drastically altered the output, while changes to CLIP features were negligible. Therefore, we focused on T5. - **(C) EraseAnything: Quantity and Color Validation** We appreciate the reviewer's request for diverse examples. To further validate EraseAnything's robustness, we provide examples demonstrating erasure of quantity and color: **"green" from "green bag," "red" from "red rose," "five" from "five pencils," and "three" from "three cats"** ([image](https://imgur.com/a/TIxXi9u)). Combined with supplementary material, this reinforces EraseAnything's effectiveness. - **(D) Rationale for Excluding SD3** We excluded SD3/SD3.5 due to their comparatively lower general image generation performance. 
Given Flux's status as a flagship model developed by **Robin Rombach's team** (Black Forest Labs, founded by creators of Stable Diffusion), we believe our findings on Flux sufficiently demonstrate EraseAnything's capabilities. - **Image URLs:** * **T5 vs. CLIP:** [https://imgur.com/a/047aypl](https://imgur.com/a/047aypl) * **Erase Quantity & Color:** [https://imgur.com/a/TIxXi9u](https://imgur.com/a/TIxXi9u) --- Rebuttal Comment 1.1: Comment: I appreciate the authors’ responses to points (B) and (C) in the rebuttal. However, I still have concerns regarding (A) and (D). I understand that full fine-tuning may be infeasible due to VRAM limitations. That said, regarding the use of LoRA, I remain unconvinced that fine-tuning only the “add_k_proj” and “add_q_proj” layers in Flux is sufficient, as these exist in only 19 of the 57 transformer blocks. In this context, I believe it would be meaningful to evaluate the proposed method on SD3, where all blocks are dual-stream and contain “add_k_proj” and “add_q_proj” layers. This would help clarify whether the issue is specific to Flux’s architecture or generalizable to DiT-based models. Since the core motivation of this work is that methods effective on U-Net-based models do not transfer to DiT-based models, a deeper examination of the trainable parameter choices is central to the paper's contribution. --- Reply to Comment 1.1.1: Comment: Thank you for the insightful comments. Regarding point (A) and the choice of `add_q_proj` and `add_k_proj` for LoRA fine-tuning: this is partially illustrated by the code snippet below. Specifically, `encoder_hidden_states` (carrying the text conditioning) are projected via `encoder_hidden_states_query_proj` and `encoder_hidden_states_key_proj` (these correspond to the `add_q/k_proj` in the `diffusers` implementation).
These projections are then concatenated with the image features' query and key vectors, respectively:

```python
# Assuming encoder_hidden_states_*_proj correspond to add_q/k_proj layers
query = torch.cat([encoder_hidden_states_query_proj, query], dim=2)
key = torch.cat([encoder_hidden_states_key_proj, key], dim=2)
# value projection is typically separate and not targeted here
...
# Attention scores are calculated using these combined representations
attn_weight = query @ key.transpose(-2, -1) * scale_factor
```

By applying LoRA to add_q/k_proj, we are directly modifying the weights that project the text conditioning before it influences the attention scores (`attn_weight`) calculated w.r.t. the image features. This provides a targeted way to modulate how the text concept influences the image generation process at these specific cross-attention points. We acknowledge the reviewer's observation that these layers exist in only 19 of the 57 blocks in Flux [schnell/dev]. While tuning only a subset seems incomplete, we hypothesize that these particular dual-stream blocks are critical junctions for integrating text-based conceptual information. Our empirical results suggest that modifying the text injection mechanism even within these key blocks provides a sufficiently strong and targeted signal to guide the model effectively for concept manipulation tasks. The visualization linked below, which shows the image impact of altering attention weights (related to the output of these layers), offers some qualitative support for the sensitivity of the generated output to modifications within this mechanism: `https://imgur.com/a/TSTnMBO`. Regarding the suggestion to evaluate on SD3 and SD3.5: We agree with the reviewer and will add those experiments and incorporate the results into the final version of the paper to verify the adaptability of EraseAnything.
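To make the localization point concrete, here is a self-contained toy sketch of MMDiT-style joint attention. It is our own illustration: the shapes, variable names, and plain softmax attention are assumptions for exposition, not Flux's exact implementation. It shows where the text-image interaction lives after text and image tokens are concatenated.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy shapes: 4 text tokens, 6 image tokens, dim 8 (illustrative only).
n_txt, n_img, d = 4, 6, 8
q_txt = rng.standard_normal((n_txt, d))
k_txt = rng.standard_normal((n_txt, d))
q_img = rng.standard_normal((n_img, d))
k_img = rng.standard_normal((n_img, d))

# Joint attention: concatenate text and image queries/keys, mirroring
# how add_q/k_proj outputs are prepended to the image Q/K.
query = np.concatenate([q_txt, q_img], axis=0)
key = np.concatenate([k_txt, k_img], axis=0)
attn = softmax(query @ key.T / np.sqrt(d))

# "Attention localization": the image-rows / text-columns sub-block is
# the analogue of an explicit cross-attention map in U-Net-based SD.
img_to_txt = attn[n_txt:, :n_txt]
assert img_to_txt.shape == (n_img, n_txt)
```

Attenuating the `img_to_txt` sub-block at a target concept's token positions is, under this view, the joint-attention analogue of attenuating a cross-attention map in U-Net-based SD.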
Summary: Given that current text-to-image models can generate inappropriate content related to pornography, violence, or copyright violations, the problem of effective concept erasure has become a critical research topic. Existing methods have proven effective for Stable Diffusion but are challenging to directly adapt to SD3 and FLUX. This paper investigates concept erasure algorithms on FLUX, highlighting the differences between FLUX and SD in terms of model structure and encoder properties. The authors propose a robust concept erasure algorithm leveraging bi-level optimization techniques, integrating Forget Me Not, Erasing Stable Diffusion, and Reverse Self-Contrastive approaches. Experimental results demonstrate that their method achieves superior qualitative and quantitative erasure performance on the FLUX architecture, surpassing other existing concept erasure algorithms. ## update after rebuttal The authors show a comparison between bi-level and multi-objective optimization and explain why attention localization contributes to the FLUX-based architecture and the generative field, which addresses my concern. However, the attention localization and the bi-level optimization are kind of simple and borderline to me, so I keep my rating. Claims And Evidence: 1. The paper appears to overclaim the contribution of bi-level optimization. In the introduction, the authors present bi-level optimization as a core contribution. However, as described in Algorithm 1, the proposed method simply alternates between optimizing the erasure and preservation losses, which is essentially a trivial multi-objective optimization implementation without notable innovation. 2. The paper seems to overclaim the contribution of attention localization.
In the introduction, the authors treat Attention Localization as a core contribution and refer to it as “a depth analysis.” However, as later sections describe, MMDiT applies a concat operation on Q and K, followed by a self-attention-like structure. Identifying the corresponding positions of image and text tokens in the attention matrix is straightforward and does not qualify as “a depth analysis.” Methods And Evaluation Criteria: Yes. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: The paper employs well-established techniques such as Attention, Erasing Stable Diffusion (ESD), and LoRA, which are already mature methods for concept forgetting and fine-tuning. The authors merely make slight modifications to the loss function and attention matrix representation to apply them to FLUX. The fact that these methods can be easily adapted suggests that transferring SD-based forgetting algorithms to FLUX is not particularly challenging—contradicting the authors’ claim that adapting SD-based methods to FLUX presents fundamental difficulties. Instead, it seems that the authors achieve better performance simply by stacking existing methods. Supplementary Material: I read all of the supplementary. Relation To Broader Scientific Literature: Flow-matching-based diffusion models represent the cutting edge of generative models in current research. Investigating their forgetting mechanisms is of great importance for the future of AI safety. The method proposed in this paper lays a foundational groundwork for future research in this direction, offering significant value for advancing both the capabilities and responsible use of such models. Essential References Not Discussed: No. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: Typo: In line 197, nude nude should be nude. 
Questions For Authors: My main concern lies in the authors' overstatement regarding the contribution of their method, as well as the claimed ease of adapting it to flow-matching-based diffusion models. The authors should answer the following questions to show the novelty and contribution of their method. 1. What's the difference between bi-level optimization and multi-objective optimization? 2. What's the depth analysis of attention localization? It seems to be a very straightforward idea. 3. What's the real difficulty of adapting unlearning to flow matching? The method the authors used is just the combination of existing methods with slight modifications. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your insightful feedback. We will revise the manuscript to ensure a balanced and objective narrative, avoiding any exaggeration or overstatement. - **(A) Difference between bi-level optimization and multi-objective optimization** We frame unsafe concept erasing as a bi-level optimization problem, rather than a multi-objective one. While multi-objective optimization balances competing goals equally (e.g., erase unsafe concepts and preserve irrelevant ones), it lacks a clear prioritization. In contrast, bi-level optimization explicitly models a hierarchy: • Lower-level erases unsafe concepts. • Upper-level evaluates whether irrelevant concepts are preserved. This reflects the asymmetric nature of our goals: erasure is primary; preservation is a constraint. Bi-level optimization allows for more precise control, better mirrors real-world usage (apply first, evaluate second), and is well-suited for safety-critical tasks where minimizing unintended harm is essential. - **(B) attention localization** Attention localization analysis in UNet-based Stable Diffusion is intuitive and well-understood due to the presence of explicit cross-attention mechanisms. However, in the joint attention architecture of MMDiT, explicit cross-attention is absent. Our work demonstrates that, despite this absence, the joint attention mechanism in MMDiT still retains attention localization properties similar to those found in UNet's explicit cross-attention. Leveraging this insight, we introduce a framework to concept erasure within the MMDiT architecture. Although this insight may seem straightforward, it is beneficial for the research community, particularly benefiting future FLUX-based erasure studies. - **(C) real difficulty in adapting unlearning to flow matching?** Achieving effective concept erasure in Flow Matching models presents a significant challenge, as direct application of existing methods like UCE and ESD proves inadequate. 
To address this, we conducted a comprehensive structural analysis of Flux, meticulously probing its intricacies to identify viable improvement strategies. Through extensive experimentation, we found that precise adjustments to the `to_q_proj` and `to_k_proj` projections within the dual transformer block are essential for successful erasure. --- Rebuttal Comment 1.1: Comment: I admit that this paper is the first one that points out attention localization in FLUX, although it is extremely straightforward, so, fine. As for bi-level optimization, please show the comparison with multi-objective optimization, or it cannot be argued as a core contribution. --- Reply to Comment 1.1.1: Comment: Thank you for the positive feedback and recognition! For the multi-objective optimization evaluation, we adopted the experimental settings from Table 3 of the original paper, focusing on specific categories: `Entity` (e.g., `soccer`) and `Abstraction` (e.g., `artistic style`). The table below compares Bi-level Optimization (BO) with multi-objective optimization. We report CLIP classification accuracies (%) for each erased category across three metrics: - Acc_{e} (Efficacy): Accuracy on the erased category itself (lower is better ↓). - Acc_{ir} (Specificity): Accuracy on remaining, unaffected categories (higher is better ↑). - Acc_{g} (Generality): Accuracy on synonyms of the erased class (lower is better ↓).

| METHOD | ACCe ↓ | ACCir ↑ | ACCg ↓ |
|---------------------|--------|---------|--------|
| Bi-level (ENTITY) | **12.5** | **91.7** | **18.6** |
| Multi-objective (ENTITY) | 12.7 | 79.3 | 28.5 |
| Bi-level (ABSTRACTION) | **21.1** | **90.5** | **24.7** |
| Multi-objective (ABSTRACTION) | 22.3 | 77.4 | 31.2 |

These results demonstrate that BO has merit in this task, and we will add them to the final version of the paper.
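The asymmetry between the two schemes can be illustrated with a toy 1-D example. This is our own sketch with quadratic stand-in losses and hand-picked step sizes, not the paper's actual objectives: equal-weight multi-objective descent settles midway between the two optima, while the alternating bi-level scheme lets the erasure objective dominate and treats preservation as a weaker corrective step.

```python
# Toy 1-D illustration: quadratic stand-ins for the erasure and
# preservation objectives (hypothetical losses, for intuition only).
def grad_erase(theta):     # gradient of (theta - 1)^2, optimum at 1
    return 2.0 * (theta - 1.0)

def grad_preserve(theta):  # gradient of (theta - 0.2)^2, optimum at 0.2
    return 2.0 * (theta - 0.2)

def multi_objective(theta=0.0, lr=0.1, steps=200):
    # Equal-weight joint loss: both goals pull on every update.
    for _ in range(steps):
        theta -= lr * (grad_erase(theta) + grad_preserve(theta))
    return theta

def bi_level(theta=0.0, lr=0.1, steps=200):
    # Alternating scheme: the lower level takes the erasure step first,
    # then the upper level applies preservation as a weaker constraint
    # (the 0.5 factor is an illustrative choice, not a tuned value).
    for _ in range(steps):
        theta -= lr * grad_erase(theta)
        theta -= 0.5 * lr * grad_preserve(theta)
    return theta

theta_mo = multi_objective()  # converges to 0.6, the midway compromise
theta_bo = bi_level()         # converges to 5/7, closer to erasure
assert abs(theta_bo - 1.0) < abs(theta_mo - 1.0)
```

The quadratic losses and the 0.5 weighting are purely illustrative; the point is only that sequencing the preservation step as a constraint biases the solution toward the primary erasure objective instead of an equal-weight compromise.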
Summary: In this paper, the authors propose a methodology for concept unlearning while ensuring the preservation of unrelated concepts in the latest text-to-image (T2I) models based on Flow Matching and Transformer-based diffusion models such as Flux. The authors introduce a bi-level optimization (BO) framework. The lower-level optimization focuses on concept removal, while the upper-level optimization ensures the preservation of unrelated concepts. The proposed method is evaluated through quantitative experiments, outperforming state-of-the-art techniques in nudity erasure and output preservation, except for UCE in the “nudity” concept. Claims And Evidence: All the claims made in this paper are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed method and the evaluation criteria make sense for the problem. The authors follow standard evaluation criteria that evaluate the method on the benchmark. However, adversarial attack-based benchmarking is lacking. For instance, Ring-A-Bell [1] and UnlearnDiff [2] should be used, as with the many baselines, for detailed comparisons. [1] “Ring-A-Bell! How Reliable are Concept Removal Methods for Diffusion Models?,” Tsai et al. [2] “To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images ... For Now,” Zhang et al. Theoretical Claims: The derivation for the loss is sound and justifies the claim (the embedding of the target concept is aligned with the embeddings of irrelevant concepts and pushed away from the synonym of the target concept). Experimental Designs Or Analyses: Yes, the experimental designs presented by the authors are sound. Supplementary Material: Yes, I reviewed all the supplementary sections. Relation To Broader Scientific Literature: The proposed approach targets recent transformer-based models, which might interest a broader community.
Essential References Not Discussed: In the related work, the FMN [1] paper could be added, as it performs attention (cross-attention) regularization between the attention map and text embeddings for concept erasure. [1] “Forget-Me-Not: Learning to Forget in Text-to-Image Diffusion Models,” Zhang et al. Other Strengths And Weaknesses: Strengths:
- The paper is written in a concise and clear way.
- The analysis of why the current state-of-the-art methods do not work for models such as Flux is presented very well.
- The paper explores the potential of scaling concept erasure to multiple concepts and presents results in Appendix F2 (Multiple Concept Erasure).
- The User Study analysis shown in Figure 4 and in Appendix E is extremely helpful in assessing the effectiveness of the method under various metrics.

Weaknesses:
- Adversarial-attack evaluations, such as Ring-A-Bell [1] and UnlearnDiff [2], have not been presented. They could be a useful evaluation of the effectiveness of the attention-map regularization loss.
- Tables 2/3/4 could include more baseline methods (e.g., AdvUnlearn) for an in-depth analysis. See the [UnlearnDiffAtk benchmark](https://huggingface.co/spaces/Intel/UnlearnDiffAtk-Benchmark).
- Additionally, Flux-based baselines are missing. For instance, many of the prior works (AdvUnlearn, UCE, ESD, etc.) could be extended to Flux and treated as baselines to truly assess the proposed approach's impact.

[1] “Ring-A-Bell! How Reliable are Concept Removal Methods for Diffusion Models?,” Tsai et al. [2] “To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images ... For Now,” Zhang et al. Other Comments Or Suggestions: N/A Questions For Authors: - Can you elaborate on the last paragraph in Appendix A.
Flux Architecture: “For a fair comparison, we have adapted traditional methods such as…conducted under a consistent and relevant framework.” Specifically, why does the said modification ensure a consistent comparative analysis?
- Instead of the ESD loss function, can we utilize the UCE loss function, since you established a linear relationship between the text embeddings and the attention map? How would it affect the performance of the model?

Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your kind words and recognition!
- **(A) Adversarial attack experiments** Thank you for the suggestion to include adversarial attack experiments, which we consider very important. Following the paper's methodology, we used `NudeNet` (Bedapudi, 2019) with a detection threshold of **0.6** to test the **Attack Success Rate (ASR)** on the [RingABell-Nudity](https://huggingface.co/datasets/Chia15/RingABell-Nudity) `[1]` dataset (comprising 285 Ring-A-Bell revised prompts focused on nudity). Since the prompts in RingABell-Nudity are already processed according to the standard procedure, we did not reapply the Ring-A-Bell method. The table below shows the results of our tests on ESD, CA, and our proposed method using this dataset. We also included the attack results from MU-Attack `[2]`. "Step 0" means attacking only the very first `velocity` of Flux; "Steps 0,1,2" means attacking the initial three `velocity` predictions. According to our experiments, attacking too many steps yields images irrelevant to the prompt.

| Concept | Methods | Flux[dev] | ESD (Flux[dev]) | CA (Flux[dev]) | EraseAnything (Flux[dev]) |
| :------------------ | :---------------------- | :-------- | :-------------- | :------------- | :------------------------ |
| Nudity (RingABell Nudity) | Original (Org) | 59.65% | 7.36% | 3.16% | 2.46% |
| | MU-Attack (step 0) | 64.56% | 11.57% | 15.44% | 8.77% |
| | MU-Attack (steps 0,1,2) | 65.96% | 14.74% | 16.49% | 11.93% |

- **(B) More baselines** Thank you for your advice. We will conduct a thorough survey of relevant papers and incorporate more baseline methods, such as `AdvUnlearn`, into the final version.
- **(C) Flux baseline missing?** We have implemented all relevant methods, including ESD, CA, and MACE, within the Flux[dev] framework. Therefore, all concept erasing methods reported in the tables were conducted on Flux[dev].
- **(D) Questions** Firstly, this signifies that we have adapted the previous SD 1.5 methods to the Flux[dev] architecture. Secondly, given that UCE led to poor visual results ([https://imgur.com/a/at2lkh8](https://imgur.com/a/at2lkh8)) `[3]`—a finding consistent with its SD 1.5 implementation after careful review—we opted to use ESD as our baseline.
- **References** [1] ["Ring-A-Bell! How Reliable are Concept Removal Methods for Diffusion Models?," Tsai et al.](https://github.com/chiayi-hsu/Ring-A-Bell) [2] ["To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images ... For Now," Zhang et al.](https://github.com/OPTML-Group/Diffusion-MU-Attack) [3] [https://imgur.com/a/at2lkh8](https://imgur.com/a/at2lkh8)

--- Rebuttal Comment 1.1: Comment: I thank the authors for providing more clarifications.
* (A) I appreciate the authors performing the additional experiments on these benchmarks. Given their importance, if accepted, I hope this will be added to the camera-ready draft.
* (B) I hoped to get more apples-to-apples comparisons during the rebuttal phase, as AdvUnlearn is a very strong baseline, and a comparison is missing.
* (C & D) Thanks for the clarification.

I am inclined to increase the score to 3 (weak accept) or even 4 (accept) if a comparison with AdvUnlearn is provided during the discussion phase and shows the improvement. --- Reply to Comment 1.1.1: Comment: Thank you for your kind words! To promptly address point (B), we provide a comparison between the fast version and the standard version, using the same experimental setup as previously described and the same optimization practice as defined in `https://github.com/OPTML-Group/AdvUnlearn/tree/main`. **Response to (B)**: As demonstrated by the T5 vs. CLIP comparison `https://imgur.com/a/047aypl`, optimizing CLIP embeddings within Flux yields a negligible impact on the final output.
Therefore, to respond to your inquiry, we have applied the same optimization method (AdvUnlearn) to the T5 model, utilizing the previously mentioned experimental settings.

| Concept | Methods | Flux[dev] | ESD (Flux[dev]) | CA (Flux[dev]) | AdvUnlearn AT (Flux[dev]) | AdvUnlearn Fast-AT (Flux[dev]) | EraseAnything (Flux[dev]) |
|-------------------------|------------------------|-----------|-----------------|----------------|---------------------------|---------------------------|---------------------------|
| Nudity (RingABell Nudity) | Original (Org) | 59.65% | 7.36% | 3.16% | 6.67% | 9.82% | 2.46% |
Summary: This paper introduces EraseAnything, a Flux-based concept-erasing method designed to selectively remove target concepts while preserving irrelevant ones. The authors employ a bi-level optimization strategy to mitigate overfitting and catastrophic forgetting—key challenges in concept erasure. Experimental evaluations across diverse tasks demonstrate the method’s effectiveness and versatility, highlighting its potential impact in controlled information removal and model robustness.
## update after rebuttal
The authors’ rebuttal has clarified my previous concerns. Taking into account the other reviewers' feedback and the authors' response, I choose to maintain my original evaluation (Weak accept). Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No, as there are no theoretical claims made. Experimental Designs Or Analyses: I reviewed the experimental results presented in Tables 2, 3, and 4, and they appear to be correct. Supplementary Material: The Supplementary Material includes code for both training and testing. However, I did not run the code myself. Relation To Broader Scientific Literature: This paper explores the problem of targeted concept erasure in deep learning models, aligning with broader discussions in the machine learning community on model interpretability, unlearning, and mitigating biases. The proposed Flux-based approach builds upon rectified flow transformers, contributing to existing literature on concept erasure and catastrophic forgetting. The work is relevant to ongoing discussions in ICLR, NeurIPS, and ICML regarding responsible AI and controllable generation in large models. Essential References Not Discussed: None Other Strengths And Weaknesses: **Paper Strengths:** The paper is well written. The main motivation is clear and easy to understand.
**Major Weaknesses:** In Table 2, UCE outperforms the proposed EraseAnything in terms of the number of DETECTED NUDITY instances but performs worse in terms of FID and CLIP on the MS-COCO 10K dataset. What accounts for this discrepancy between these metrics? Does this imply that UCE is superior to EraseAnything overall? Other Comments Or Suggestions: None Questions For Authors: See weakness. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your kind words and review! To be concise:
* **UCE**'s aggressive nudity removal significantly distorts images.
* **EraseAnything** prioritizes image quality and text alignment, offering a better trade-off.

As shown in [this image](https://imgur.com/a/at2lkh8), optimizing **'K'** in our UCE implementation on **Flux[dev]** reduces nudity but degrades image quality, highlighting this inherent trade-off. (Optimizing **'Q'** had no effect, and **'V'** yielded noisy images.) While UCE removes more nudity, EraseAnything maintains superior image quality (better performance on FID and CLIP on the MS-COCO 10K dataset), which is crucial for practical use.
* **UCE Optimization** [https://imgur.com/a/at2lkh8](https://imgur.com/a/at2lkh8)

--- Rebuttal Comment 1.1: Comment: The authors’ rebuttal has clarified my previous concerns. Taking into account the other reviewers' feedback and the authors' response, I choose to maintain my original evaluation (Weak accept). --- Reply to Comment 1.1.1: Comment: Thank you for your valuable time and insights. We truly appreciate your support.
Mahalanobis++: Improving OOD Detection via Feature Normalization
Accept (poster)
Summary: The paper proposes a simple fix to the post-hoc OOD detection technique based on the Mahalanobis distance computed on the feature space of the neural network of interest. This simple fix consists of normalizing the features by their $l_2$ norm before computing the distance. The authors emphasize how the samples violate the assumptions underlying the Mahalanobis distance in the feature space:
- Assumption 1: the class-wise features follow a multivariate normal distribution.
- Assumption 2: the class-conditional covariance matrices are the same.

They do so by analyzing the magnitude of the feature norm, emphasizing how the fix can alleviate this problem. Experiments on a comprehensive benchmark of various models empirically demonstrate the effectiveness of this method. Claims And Evidence: The problem with the feature norm is clearly illustrated with experiments based on Lemma 3.1, the expected squared variance deviation, and QQ plots. However, I have some concerns with the link between the fix and the assumptions. The fix intends to alleviate the difference between the feature norms of samples from different classes, but I do not see how it makes the features satisfy Assumptions 1 and 2. Specifically, nothing ensures that after the fix, which is just a normalization, the features follow a Gaussian distribution and that their covariance matrices are equal.
- QQ plots are here to show that the obtained features are closer to a normal Gaussian, but they might still be non-Gaussian.
- I do not see the relation between Assumption 2 and the expected squared variance deviation, which I think is not a standard metric. Two very different covariance matrices could have a deviation of zero with this metric. Why not conduct statistical tests, or use standard probability distribution divergence measures that can be estimated?
Methods And Evaluation Criteria:
- The experiments to emphasize the problem with the feature norm are thorough and theoretically grounded.
- The evaluation benchmark is extensive.

Theoretical Claims: I checked the proof of Lemma 3.1, which is OK. I skimmed through Appendix C (proof of the expected squared variance deviation) but did not check it thoroughly. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Relation To Broader Scientific Literature: The contributions are closely related to (Lee et al., 2018b) and (Ren et al., 2021), which are appropriately discussed. Essential References Not Discussed: All essential references are discussed to the best of my knowledge. Other Strengths And Weaknesses:
### Strengths
- The problem with Assumptions 1 and 2 is clearly emphasized
- The method is simple
- It consistently improves the performance of the Mahalanobis method
### Weaknesses
- What is called a "fix" might not be an actual "fix" but just a tool to make the Mahalanobis method better
- Concerns with the expected squared variance deviation

Other Comments Or Suggestions:
- In the proof of Lemma 3.1, $X$ is not introduced (the lemma is about $\Phi(X)$)
- Eq. 5 uses "trace"; please use the same notation as Lemma 3.1 ("tr")

Questions For Authors: - Could you plot the distribution of normalized features in a plot similar to Figure 3? Code Of Conduct: Affirmed. Overall Recommendation: 4
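As background for the Lemma 3.1 discussion above: for a Gaussian with mean $\mu$ and covariance $\Sigma$, $\mathbb{E}\lVert X\rVert^2 = \mathrm{tr}(\Sigma) + \lVert\mu\rVert_2^2$, and in high dimension the norm concentrates near the square root of that value. The sketch below (dimension, mean, and sample count are illustrative choices, not values from the paper) verifies this numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 512, 2000                  # illustrative dimension and sample count
mu = np.zeros(d)
mu[0] = 10.0                      # class mean with ||mu||_2^2 = 100
X = rng.normal(size=(n, d)) + mu  # samples from N(mu, I_d)

norms = np.linalg.norm(X, axis=1)
expected = np.sqrt(d + 100.0)     # sqrt(tr(I_d) + ||mu||^2)

rel_err = abs(norms.mean() - expected) / expected
rel_spread = norms.std() / norms.mean()
print(rel_err, rel_spread)        # both small: the norm concentrates
```

Heavy tails or a large spread in the empirical feature norms, as the paper reports for real networks, are therefore evidence against the Gaussian model rather than something a Gaussian could produce.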
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful comments and appreciate the positive feedback. Below we address the reviewer's remarks:
- __“The fix intends to alleviate the difference between the feature norms of samples from different classes”__ We would like to clarify that different feature norms for samples from different classes are not a problem per se. For instance, if the mean vectors of different classes were of different magnitude (which is typically not the case), the Gaussian assumption could still be satisfied. However, our analysis shows that the observed feature norm distribution and the one that we would expect under the Gaussian model are significantly different, for instance showing very heavy tails. We take this as an indication that the Gaussian assumption is violated, and substantiate this with further analysis (QQ plots, etc).
- __“nothing ensures that after the fix, which is just a normalization, the features follow a normal gaussian, and that their covariance matrices are equal.”__ and __“QQ plots are here to show that the obtained features are closer to normal gaussian, but they might still be non gaussian.”__ We agree that we cannot guarantee that the features follow a normal distribution. In fact, we believe that there is no reason to believe that the features follow any particular distribution. However, we provide strong empirical evidence that _modelling_ the feature distribution with a normal distribution with shared covariance is _more appropriate after normalization_. In particular, 1) the QQ plots are less skewed, 2) the shared covariance assumption is better satisfied, and 3) the feature norm does not act as a confounder for OOD detection anymore.
- __"What is called a 'fix' might not be an actual 'fix'"__ In addition to the above, we are happy to rephrase from "fix" to e.g.
"remedy".
- __“Could you plot the distribution of normalized features in a plot similar to Figure 3?”__ The feature norms of the normalized features would show as a straight line at 1 with no deviation. Please let us know in case this does not clarify the question.
- We thank the reviewer for the remarks about the trace notation and for noting that $\Phi(X)$ has not been properly introduced. We will adjust the notation and clarify that $\Phi(X)$ is a random variable representing the feature distribution for input $X$.
- __”two very different covariance matrices could have a deviation of zero with this metric.“ (Eq. 5)__ We respectfully disagree with the reviewer. In particular, we can write $$\mathbb{E}_u \left[ \left( \frac{u^T \hat{\Sigma}_i u}{u^T \hat{\Sigma} u} - 1 \right)^2 \right]=\mathbb{E}_u \left[ \left( \frac{u^T (\hat{\Sigma}_i-\hat{\Sigma}) u}{u^T \hat{\Sigma} u}\right)^2 \right]$$ Since $\hat{\Sigma}$ is positive definite, $u^T \hat{\Sigma} u>0$, and the only way for the expectation to be zero is that $u^T (\hat{\Sigma}_i-\hat{\Sigma}) u=0$ for all $u$, which is only the case when $\hat{\Sigma}=\hat{\Sigma}_i$.
- __"expected squared variance deviation ... is not a standard metric"__ We agree that this metric is not commonly evaluated, but we argue that it is the right one to look at. In particular, the Mahalanobis distance performs a whitening by the variances: deviations in a certain direction are measured _relative_ to the sample variance in this direction. Small absolute deviations can thus result in large distances when they are along a direction of small variance. We therefore need a measure that can capture _relative_ deviations instead of absolute deviations, since absolute deviations would be dominated by directions of large variance. Our proposed measure computes the _relative deviation_ of the variance of $\Sigma_i$ from $\Sigma$ in _every direction_ $u$ and averages this deviation over all directions.
This is a natural way to assess whether $\Sigma_i$ and $\Sigma$ are similar in all possible directions in the feature space. A similar measure that is commonly used to compare covariance matrices is the Riemannian metric (see e.g. [1,2]) $d(\Sigma_1, \Sigma_2) := \sqrt{\mathrm{tr}\left(\ln^2\left({\Sigma_1^{-0.5}}\Sigma_2{\Sigma_1^{-0.5}}\right)\right)}$. It is also possible to compute an appropriate measure with divergences like the KL divergence: $KL_{\text{normal}}(I_n,\Sigma_1^{-0.5}\Sigma_2\Sigma_1^{-0.5})$. We evaluate both, confirming that normalization aligns the covariance structure in a meaningful way (lower is better):

| | Riemann (unnormalized) | Riemann (normalized) | KL (unnormalized) | KL (normalized) |
|:-------|------:|------:|-------:|------:|
| mean | 98.2 | **88.2** | 1090.6 | **982.0** |
| median | 93.6 | **84.7** | 1011.0 | **908.3** |

We are happy to discuss any of the points further!

[1] Förstner & Moonen (2000). A Metric for Covariance Matrices. 10.1007/978-3-662-05296-9_31.
[2] Pennec, Fillard, & Ayache. A Riemannian Framework for Tensor Computing. Int J Comput Vision 66, 41–66 (2006).

--- Rebuttal Comment 1.1: Comment: I appreciate the author's response and would like to increase my rating. --- Reply to Comment 1.1.1: Comment: We are glad that the reviewer appreciates our rebuttal response and would like to thank them for raising the score!
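For readers wanting to reproduce such comparisons, both covariance-similarity measures from the rebuttal above admit short NumPy implementations. This is a sketch under our own naming; the Monte Carlo direction count is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def riemannian_dist(S1, S2):
    """Affine-invariant metric: sqrt(tr(ln^2(S1^{-1/2} S2 S1^{-1/2})))."""
    w, V = np.linalg.eigh(S1)
    S1_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
    lam = np.linalg.eigvalsh(S1_inv_sqrt @ S2 @ S1_inv_sqrt)
    return float(np.sqrt(np.sum(np.log(lam) ** 2)))

def expected_sq_var_dev(S_i, S, n_dirs=10000):
    """Monte Carlo estimate of E_u[(u^T S_i u / u^T S u - 1)^2] over random directions u."""
    U = rng.normal(size=(n_dirs, S.shape[0]))
    num = np.einsum('nd,de,ne->n', U, S_i, U)  # u^T S_i u for each direction
    den = np.einsum('nd,de,ne->n', U, S, U)    # u^T S u for each direction
    return float(np.mean((num / den - 1.0) ** 2))
```

Both quantities vanish exactly when the two matrices coincide, and scaling $\Sigma_i = 2\Sigma$ gives a squared variance deviation of exactly 1 in every direction, which makes the estimator easy to sanity-check.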
Summary: This submission focuses on the OOD detection task and proposes a simple yet effective method for improving the Mahalanobis distance approach.
## update after rebuttal
The authors' rebuttal has largely addressed my concerns and I thus maintain my positive rating. Claims And Evidence: Through a mixture of theoretical and empirical arguments, the reviewer believes that the submission is supported by clear evidence. Methods And Evaluation Criteria: Yes, the reviewer believes that the proposed method makes sense. Theoretical Claims: The reviewer hasn't carefully checked the proof of Lemma 3.1. Yet, Lemma 3.1 at least does not contradict the reviewer's intuition. Experimental Designs Or Analyses: Yes, the reviewer has gone through the experimental settings in the main paper and finds that they largely make sense. Supplementary Material: The reviewer has quickly gone through the experimental parts in the supplementary, but not line by line for the theoretical part. Relation To Broader Scientific Literature: The reviewer believes that the Gaussian distribution assumption is very common across different scientific literatures. From this perspective, it is very interesting for a method to appear that can help mitigate the violation of this assumption in a relatively simple way. Essential References Not Discussed: N.A. Other Strengths And Weaknesses: I really appreciate the authors' effort in performing an extensive experimental analysis, and I believe it strengthens this submission to a large extent. Below, I still have several queries about this submission that I hope can be addressed to further improve its quality. 1. I hope the related work section can be better organized and the differences between the proposed method and existing methods better elaborated. For example, when the paper names a subsection "Mahalanobis distance", it actually wants to review existing Mahalanobis-distance-based methods, from my understanding.
Thus, it is important for this to be clarified. Meanwhile, it would be appreciated if the differences between the proposed method and existing similar methods were better elaborated. 2. When the authors present Lemma 3.1, if I am not wrong, it is only a property of Gaussian-distributed features but not a sufficient condition. If this is the case, I would appreciate the authors making this clearer to avoid readers' misunderstanding. 3. The authors claim that "we expect this to be negligible due to the large dataset size". I would first appreciate more explanation of this negligibility. Meanwhile, the authors seem to require the size to be very large (>10^6). What if in some cases this is not the case? Does the negligibility still hold? 4. Finally, if I am not wrong, the key motivation seems to be concentrating the feature norm. I am thus a bit curious: what happens if we not only normalize as in Eq. 6 but concentrate the features even further? Meanwhile, while I admit its naturalness, is there any specific reason for the authors to choose to perform concentration via normalization? I still have these queries, yet I remain positive on this submission. I thus vote for weak accept now. Other Comments Or Suggestions: N.A. Questions For Authors: (see above in other weaknesses.) Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback, and for appreciating our work. We address the remarks below:
1. __"organize related work section"__ and __"elaborate difference to existing similar methods"__ We will extend the discussion about related work, and emphasize the differences to previous work that used feature normalization [3,4] or the Mahalanobis distance, or both [1,2]. Most importantly, other works have investigated _train-time_ methods that involve normalization, either implicitly through contrastive losses (CIDER [1], SSD [2]), or explicitly to improve OOD detection [3,4]. It is then natural to also apply normalization at inference time. For instance, CIDER applies KNN, and SSD performs k-means and then Mahalanobis. Those methods thus normalize their features for OOD detection _because_ they also normalize during training. This is orthogonal to our work: the standard Mahalanobis method for OOD detection is a _post-hoc_ method, where adjusting the pretraining scheme is not feasible. We show that in this setting, the Gaussian assumption underlying this method is often severely violated, and that normalizing the features better aligns with this assumption, consistently improving OOD detection across architectures and pretraining techniques. We will clarify this distinction and expand the discussion of other approaches in the paper (see the answer to reviewer jfEM for a more thorough discussion and quantitative comparisons to SSD). If there is a specific reference the reviewer would like us to discuss, please let us know.
2. __Lemma 3.1, not sufficient condition__ We will clarify that a concentrated feature norm is not a sufficient, but a necessary condition for a Gaussian distribution. Lemma 3.1 only shows that - under the assumption of a Gaussian distribution in feature space - we expect some concentration of the feature norm.
To illustrate this, we sample from class-specific Gaussian distributions with the estimated means and shared covariance matrix (Figure 3-left), noting that in practice (Figure 3-right) the feature norms deviate strongly from the Gaussian model (e.g. via heavy tails). This suggests severe violations of the Gaussian assumption, which we substantiate by QQ plots and the variance alignment analysis. Our remedy - normalization - aligns the features better with the premise of normally distributed data with shared covariance matrix.
3. __"elaborate on negligibility" (in QQ plot analysis)__ In QQ-plots, we compare empirical quantiles against a theoretical standard normal distribution. Since normalized and unnormalized features have different variances, their QQ-plots would have different slopes, making direct comparison difficult. To align comparisons, we divide both samples by their empirical standard deviation—this ensures both are evaluated against the same reference slope (black line in Figure 4). Dividing by the empirical standard deviation technically transforms a normal distribution to a Student's *t*-distribution with *n*-1 degrees of freedom. As the reviewer pointed out correctly, this matters for small $n$. However, the *t*-distribution converges to a Gaussian as $n\to\infty$, and for $n>30$, the difference is typically negligible [5]. We use all ImageNet train features ($n>10^{6}$) in our QQ plots, making the *t*-distribution practically Gaussian, allowing for the analysis we performed in the paper. We would like to stress that all of this is only a technicality in the analysis of the features via QQ plots, and irrelevant for Maha++ as an OOD detection method.
4. __concentration of feature norm is "key motivation"__ Our key motivation is not to concentrate the feature norm. Instead, feature norm concentration is a necessary condition IF the features were indeed normally distributed.
As we find, the feature norms are, however, not concentrated, but for instance show extremely heavy tails. We take this as an indication that the Gaussian assumption is violated, and further validate it via QQ plots and our variance analysis. Regarding the reviewer's question about __concentrating even further__: We are not sure we understand what the reviewer means by this. One could, in principle, normalize by a different norm (e.g. $\ell_1$ or $\ell_\infty$), but this would change the direction of the features. We therefore opted for $\ell_2$ normalization. Does this answer the question? We are happy to clarify any of the points further!

[1] Ming et al. How to exploit hyperspherical embeddings for out-of-distribution detection? ICLR 2023
[2] Sehwag et al. SSD: A unified framework for self-supervised outlier detection, ICLR 2021
[3] Regmi et al. T2FNorm: Train-time feature normalization for OOD detection in image classification, CVPR 2024 workshop
[4] Haas et al. Linking neural collapse and l2 normalization with improved out-of-distribution detection in deep neural networks, TMLR 2023
[5] https://www.jmp.com/en/statistics-knowledge-portal/t-test/t-distribution
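For concreteness, the post-hoc procedure discussed in this thread (fit class means and a single shared covariance on $\ell_2$-normalized features, then score a test point by its minimal class-wise Mahalanobis distance) can be sketched as follows. This is a minimal illustration on synthetic features; the function names and the `eps` regularizer are our own choices, not the paper's code.

```python
import numpy as np

def fit_normalized_mahalanobis(feats, labels, eps=1e-6):
    """Fit class means and a shared covariance on l2-normalized features."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    classes = np.unique(labels)
    means = np.stack([f[labels == c].mean(axis=0) for c in classes])
    centered = f - means[np.searchsorted(classes, labels)]
    cov = centered.T @ centered / len(f) + eps * np.eye(f.shape[1])
    return means, np.linalg.inv(cov)

def ood_score(x, means, prec):
    """Negative minimal class-wise Mahalanobis distance (higher = more in-distribution)."""
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    diff = x[:, None, :] - means[None]
    dists = np.einsum('ncd,de,nce->nc', diff, prec, diff)
    return -dists.min(axis=1)

# Synthetic sanity check: two tight clusters near the first two coordinate axes.
rng = np.random.default_rng(1)
d, n = 5, 200
f0 = np.eye(d)[0] + 0.05 * rng.normal(size=(n, d))
f1 = np.eye(d)[1] + 0.05 * rng.normal(size=(n, d))
feats = np.concatenate([f0, f1])
labels = np.array([0] * n + [1] * n)
means, prec = fit_normalized_mahalanobis(feats, labels)

id_point = (np.eye(d)[0] + 0.05 * rng.normal(size=d))[None]
ood_point = np.eye(d)[2][None]  # a direction far from both class means
print(ood_score(id_point, means, prec), ood_score(ood_point, means, prec))
```

On this toy data the in-distribution point scores strictly higher than the off-manifold direction, mirroring the intended use of the score for thresholding at a fixed true-positive rate.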
Summary: The paper revisits the Mahalanobis distance for out-of-distribution detection. It first examines how the assumptions underlying the Mahalanobis distance for OOD detection are violated by a variety of models. It then proposes a maximally simple but effective remedy by applying l2-normalization to the pre-logit features. The evaluation shows that this outperforms previous works by a significant margin. Claims And Evidence: The claims made by the paper are supported by clear and convincing evidence. The paper demonstrates that, empirically, feature distributions of some models do not fit the assumptions made by prior Mahalanobis distance-based OOD detection. Figure 5 further shows that, for SwinV2-B models, the feature norm is strongly correlated with the Mahalanobis distance, while being a bad OOD predictor, which in turn leads to suboptimal OOD detection performance. In contrast, applying l2-normalization as proposed reduces the correlation between feature norms and the Mahalanobis distance, which allows drawing a better decision boundary. The findings are furthermore validated by the quantitative evaluation of the proposed method on a wide variety of pre-trained models. Methods And Evaluation Criteria: The method is well motivated by pointing out how the assumptions in prior Mahalanobis-based OOD detection methods can be violated by some models. The evaluation metrics (false-positive rate at a true-positive rate of 95% in particular) and benchmark datasets make sense and are in line with prior work on OOD detection. I appreciate that the evaluation is performed on a wide variety of model types, architectures and sizes. Theoretical Claims: The main theoretical claim can be found in equation 5 and is elaborated upon in the appendix, which I only checked superficially.
Experimental Designs Or Analyses: The main experimental design is focused on evaluating the OOD false positive rate at a fixed true positive rate of 95% across different models and datasets, which is in line with prior work. The experimental analysis demonstrates that models suffer from violations of Mahalanobis based OOD detection with varying degree, but that most models benefit somewhat from l2-normalization as proposed. Supplementary Material: The supplementary contains a lot of additional experimental results, proofs, and discussion. I did not check the entirety of the supplemental but found the discussion of Augreg ViTs particularly interesting. Relation To Broader Scientific Literature: While the purely methodological innovation of this paper is minimal, its value lies in identifying and empirically demonstrating violations of key assumptions of Mahalanobis based OOD detection in practice, proposing a maximally simple remedy, and providing thorough evaluation of this remedy on a wide variety of models. Essential References Not Discussed: None. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: 1. ImageNet reference renders as "(University, 2015)" Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for carefully reading and evaluating our paper, and we are glad that the reviewer finds that our claims are __“supported by clear and convincing evidence”__, that our method is __“well motivated”__, and that they appreciate the __“wide variety of model types, architectures and sizes”__ in our __“thorough evaluation"__. We agree with the reviewer that the results about augreg ViTs stand out, and think that investigating the underlying reasons for the behaviour of those models (i.e., why the augreg training scheme results in the favourable structure of the feature space) is an interesting direction for future research. We thank the reviewer for pointing out the incorrect ImageNet reference, which we will fix. For the rebuttal, we have included a more thorough discussion and comparison to SSD (see response to reviewer jfEM), an evaluation of a DinoV2 model (also in response to reviewer jfEM) and more variance deviation measures (see response to reviewer qhSY). If there is anything else the reviewer would like to see addressed, we would be happy to discuss this.
Summary: This paper presents a holistic empirical analysis illustrating that the representations of most vision backbones currently violate the Gaussian distribution assumption. From this observation, the paper introduces a variation of the Mahalanobis distance for OOD detection called Mahalanobis++. Extensive experiments on multiple recent OOD benchmarks and various backbones are proposed to assess the good behavior of the proposed approach.
## Update after rebuttal
I am satisfied with the rebuttal and will thus keep my positive rating. Claims And Evidence: The principal claim concerns the violation of the class-wise unimodal Gaussian hypothesis of the representations. This is a reasonable claim, as the Mahalanobis method does rely on strong relaxations for computational reasons. Moreover, this claim is supported by strong empirical evidence in this paper; see Fig. 3, 4, 5, and Table 1. Methods And Evaluation Criteria: Evaluation criteria and benchmarks are standard for OOD detection. Reporting only FPR95 is not standard practice, as this metric is not robust to small changes in the decision function and is particularly sensitive to class imbalance. FPR@95 highlights performance at a specific critical threshold but is typically complemented by AUC, ensuring a more holistic evaluation. I see that the AUC scores in the supplementary are still in favor of the proposed approach. Theoretical Claims: Lemma 3.1 does not support the indicated conclusion. First, features should be concentrated around $\sqrt{\text{tr}(\Sigma) - ||\mu||^2_2}$. Moreover, the higher the dimension, the looser the upper bound. Experimental Designs Or Analyses: The evaluation protocol is well designed. However, as many backbones pretrained with a contrastive loss also normalize the representations, a comparison of Mahalanobis with SSD [1] or on other DINO-like backbones would give important insight into the method and the importance of normalization for OOD detection.
[1] Sehwag, Vikash, Mung Chiang, and Prateek Mittal. “SSD: A Unified Framework for Self-Supervised Outlier Detection,” ICLR 2021 Supplementary Material: I checked the proof of Lemma 3.1 and the AUC results in Section E. Relation To Broader Scientific Literature: This paper shares the violation of the Gaussian distribution with multiple other distance-based papers. Other approaches proposed a pre-training strategy to mitigate this limitation. The proposed extension to Mahalanobis is particularly incremental. However, it is well illustrated both by empirical statistical evaluation of the feature dispersion and extensive evaluations. Essential References Not Discussed: * Good performance of the Mahalanobis distance for OOD detection on normalized features has already been explored in SSD [1]. In the related work section, the authors state that "Adapting them to ImageNet-scale setups as post-hoc OOD detectors has so far not been successful". Similarly, at the end of the method section: "While $\ell_2$-normalization has been used with non-parametric methods like KNN (Sun et al., 2022; Park et al., 2023a) or cosine similarity (Techapanurak et al., 2020), it is - to the best of our knowledge - not used with the Mahalanobis score". This is a bit of an overstatement as Mahalanobis is a strong and cheap baseline even on large-scale datasets and SSD has been successfully evaluated on ImageNet-1k. Thus, a broader discussion and comparison with SSD is missing in the current paper. [1] Sehwag, Vikash, Mung Chiang, and Prateek Mittal. “SSD: A Unified Framework for Self-Supervised Outlier Detection,” ICLR 2021 Other Strengths And Weaknesses: The paper is very well written and supported with extensive evaluations. Other Comments Or Suggestions: NA Questions For Authors: Despite bringing appreciated insights into the distance-based OOD literature, the related work section misses clear positioning *e.g.* which challenges are unaddressed by normalized approaches such as [2,3] or CIDER [4]? 
What makes the proposed method better suited for OOD detection? [2] Regmi, S., Panthi, B., Dotel, S., Gyawali, P. K., Stoyanov, D., and Bhattarai, B. T2fnorm: Train-time feature normalization for ood detection in image classification, CVPR Workshop 2024 [3] Haas, J., Yolland, W., and Rabus, B. T. Exploring simple, high quality out-of-distribution detection with l2 normalization, TMLR, 2024. [4] Ming, Y., Sun, Y., Dia, O., and Li, Y. How to exploit hyperspherical embeddings for out-of-distribution detection? Neurips 2023 Code Of Conduct: Affirmed. Overall Recommendation: 3
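As an aside on the method under review: the "Maha vs. Maha++" comparison discussed throughout this review (the Mahalanobis score with and without ℓ2-normalized features) can be sketched in a few lines. This is a hypothetical toy implementation on synthetic data, not the authors' code:

```python
import numpy as np

def maha_scores(train_feats, train_labels, test_feats, normalize=False):
    """Minimum squared Mahalanobis distance to the class means, with an
    optional l2-normalization step (the 'Maha' vs. 'Maha++' idea).
    A sketch, not the authors' implementation; assumes labels are 0..K-1."""
    if normalize:
        train_feats = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
        test_feats = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)
    K = train_labels.max() + 1
    means = np.stack([train_feats[train_labels == c].mean(0) for c in range(K)])
    centered = train_feats - means[train_labels]
    prec = np.linalg.pinv(centered.T @ centered / len(train_feats))  # shared covariance
    diffs = test_feats[:, None, :] - means[None, :, :]
    # lower score = closer to some class mean = more in-distribution
    return np.einsum('ncd,de,nce->nc', diffs, prec, diffs).min(axis=1)

# Toy demo: two well-separated Gaussian classes plus a far-away OOD direction
rng = np.random.default_rng(0)
d = 8
train = np.vstack([rng.normal(0, 1, (200, d)) + 5, rng.normal(0, 1, (200, d)) - 5])
labels = np.repeat([0, 1], 200)
id_test = rng.normal(0, 1, (50, d)) + 5
ood_test = np.tile([5.0, -5.0] * (d // 2), (50, 1))
for nrm in (False, True):
    print(nrm, maha_scores(train, labels, id_test, nrm).mean(),
          maha_scores(train, labels, ood_test, nrm).mean())
```

On this toy data the in-distribution scores stay well below the OOD scores in both variants; the point debated in the review is that on real backbone features, ℓ2-normalization makes the Gaussian fit underlying the score much better behaved.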
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable comments and address the remarks below: - __"the features should be concentrated around $\sqrt{\mathrm{tr}(\Sigma)-\|{\mu}\|^2_2}$" (in Lemma 3.1)__ We thank the reviewer for checking our proof, but we strongly believe that the term $\sqrt{\mathrm{tr}(\Sigma)+\|{\mu}\|^2_2}$ is correct. The term stated by the reviewer can even become complex if the norm of the mean is large enough. As the variance goes to zero, the norm of the random variable should be concentrated around $\|\mu\|_2$, which is exactly what our term states. We are happy to answer any questions on a particular step of the proof to resolve any potential confusion. - __"the higher the dimension, the looser is the upper-bound" (in Lemma 3.1)__ This is expected, as the squared $\ell_2$-norm grows with dimension $d$. However, the deviation per dimension decreases: $$\Pr\left(\frac{1}{d}\left|\|\Phi(X)\|^2_2 - \left(\mathrm{tr}(\Sigma) + \|\mu\|^2_2\right)\right| \geq \epsilon\right) \leq \frac{\mathrm{Var}\left(\|\Phi(X)\|_2^2\right)}{d^2 \epsilon^2}$$ The right side decreases with $d$ as the variance grows linearly in $d$. Lemma 3.1 shows that under the Gaussian assumption, we should see some concentration of the feature norms. To illustrate this, we simulate it in Figure 3 (left) by sampling from class-specific Gaussian distributions with the estimated means and shared covariance matrix, noting that the actual feature norms (Fig. 3 right) deviate strongly from the Gaussian model (e.g. via heavy tails). This suggests severe violations of the Gaussian assumption, which we substantiate by QQ plots and the variance alignment analysis. - __"the sentence _'Adapting them to IN-scale setups ... has so far not been successful'_ ... is a bit of an overstatement ... as Mahalanobis ... has been successful on IN-1k"__ We agree, this statement only refers to Gaussian mixture models (GMMs), and not to the Mahalanobis distance. 
We will clarify this in the paper and explain the difference between GMMs and the Mahalanobis distance. - __"the sentence _'l2-normalization...is ... not used with the Mahalanobis score'_ ... is a bit of an overstatement ... as SSD ... has been successful on IN-1k"__ We agree that Mahalanobis has been applied to normalized features in other works like SSD[1] and CIDER[2], and we should have chosen our statement more carefully. However, these are _train-time_ methods where normalization is implicitly part of their contrastive loss. Those methods thus normalize their features for OOD detection _because_ they also normalize during training. This is orthogonal to our work: The standard Mahalanobis method for OOD detection is a _post-hoc_ method, where adjusting the pretraining scheme is not feasible. We show that in this setting, the Gaussian assumption underlying this method is often severely violated, and that normalizing the features better aligns with this assumption, consistently improving OOD detection across architectures and pretraining techniques. We will clarify this distinction and expand the discussion of [1-4] in the paper (see below for SSD). - __"a broader discussion and comparison with SSD"__ and __"which challenges are unaddressed by normalized approaches"__ SSD involves three steps: 1) Training with a supervised (SSD+) or unsupervised (SSD) contrastive loss (implicitly normalizing features), 2) Cluster estimation via k-means in the normalized feature space, 3) Mahalanobis-based OOD detection using cluster labels instead of class labels. This setting differs fundamentally from ours, as SSD, like [1,3,4], is a _train-time_ method. Methods like [1-4] cannot be directly applied to the pretrained checkpoints we evaluate. To demonstrate the advantages of post-hoc approaches, we evaluate SSD+ on NINCO using the ResNet50 from [5] (trained for 700 epochs). SSD+ is clearly outperformed by our top models, with FPR >3× higher. 
Those are obtained from various pretraining schemes, and retraining models with SSD or [1,3,4] on this scale is typically not feasible. | model | FPR| |-------------|-------| |SSD+ w. 100 clusters|66.0% | |SSD+ w. 500 clusters|65.7% | |SSD+ w. 1000 clusters|67.8% | |CnvNxtV2-L + Maha++ | 18.4%| |EVA02-L14 + Maha++| 18.6%| - __comparison with DINO__ We report the FPR of a DinoV2-S model (ft on IN1k) on NINCO. Maha++ outperforms Maha clearly. | Maha|Maha++| |------|------| | 77.3%| 53.4%| [1]Ming et al. How to exploit hyperspherical embeddings for out-of-distribution detection? ICLR2023 [2]Sehwag et al. SSD: A unified framework for self-supervised outlier detection, ICLR 2021 [3]Regmi et al. T2fnorm: Train-time feature normalization for ood detection in image classification, CVPR 2024 workshop [4]Haas et al. Linking neural collapse and l2 normalization with improved out-of-distribution detection in deep neural network, TMLR 2023 [5]Sun et al. Out-of-distribution detection with deep nearest neighbors. ICML 2022
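The per-dimension concentration bound quoted in the rebuttal above can be checked numerically. A minimal sketch under an assumed Gaussian feature model with diagonal covariance (all names and parameters here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def norm_concentration(d, n=2000):
    # Hypothetical Gaussian feature model with mean mu and diagonal covariance
    mu = rng.normal(size=d)
    sigma2 = rng.uniform(0.5, 1.5, size=d)            # diag(Sigma)
    X = mu + np.sqrt(sigma2) * rng.normal(size=(n, d))
    target = sigma2.sum() + (mu ** 2).sum()           # tr(Sigma) + ||mu||^2
    # average per-dimension deviation of ||X||^2 from its expectation,
    # matching the Chebyshev bound quoted in the rebuttal
    return np.abs((X ** 2).sum(axis=1) - target).mean() / d

devs = [norm_concentration(d) for d in (16, 256, 4096)]
print(devs)  # shrinks as d grows: norms concentrate per dimension
```

Under the Gaussian model the per-dimension deviation of the squared norm shrinks as $d$ grows, which is exactly the concentration the rebuttal's Chebyshev argument predicts; the rebuttal's point is that real feature norms (Fig. 3 right) deviate strongly from this behavior.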
ABKD: Pursuing a Proper Allocation of the Probability Mass in Knowledge Distillation via $\alpha$-$\beta$-Divergence
Accept (oral)
Summary: This paper investigates a fundamental challenge in Knowledge Distillation (KD): the improper allocation of probability mass when using traditional divergences like Forward KL Divergence (FKLD) and Reverse KL Divergence (RKLD). FKLD tends to spread probability mass too broadly, failing to pay sufficient attention to the target class, while RKLD overly concentrates on the target class, neglecting the broader distributional information from the teacher model. The authors analyze this issue from the view of two effects: hardness-concentration, which focuses on modes where the student model has high error, and confidence-concentration, which emphasizes modes where the student is already confident. To better balance the two effects, the paper introduces $\alpha$-$\beta$-Divergence (ABKD), which generalizes FKLD, RKLD, and other divergences, thus providing a better trade-off between the two effects. Theoretical results demonstrate that ABKD provides a more balanced allocation of probability mass, leading to improved student learning. Finally, extensive experiments across 17 language and vision datasets with 12 teacher-student model pairs validate its effectiveness. Claims And Evidence: The claims in the submission are well-supported by both theoretical analysis and empirical results: - Theoretical justifications explain the limitations of FKLD and RKLD and demonstrate how ABKD provides a more flexible probability mass allocation. - Empirical evaluations across 17 language and vision datasets with 12 teacher-student configurations further validate these claims, showing consistent performance improvements over existing methods. Methods And Evaluation Criteria: Yes. The evaluation protocol is comprehensive, spanning 17 language/vision datasets with 12 teacher-student configurations, ensuring that the findings generalize across different settings. 
The selection of benchmarks, including instruction-following datasets for NLP and classification datasets for vision tasks, is appropriate for assessing the effectiveness of distillation methods. Moreover, the proposed method is compared against state-of-the-art KD techniques, providing meaningful results. Theoretical Claims: Yes. I have carefully checked the theoretical claims presented in the paper, particularly ABKD's role in balancing the two effects. Experimental Designs Or Analyses: Yes, please refer to `Methods And Evaluation Criteria`. Supplementary Material: I have reviewed the supplementary material, including the related work, proofs of the important theoretical claims, and additional empirical results. Relation To Broader Scientific Literature: Nowadays, knowledge distillation is an important topic due to the development of large-scale foundation models. This paper investigates a foundational issue in the field of knowledge distillation. Hence, I believe it could relate to broader literature in the future. Essential References Not Discussed: To the best of my knowledge, this paper has covered the essential references in this field. Other Strengths And Weaknesses: The strengths of this paper are summarized as follows: - **Rigorous theoretical insights**: This paper provides a rigorous theoretical analysis of the limitations of traditional divergences widely used in KD. The key idea is to reveal how these divergences allocate probability mass during training. By introducing the concepts of hardness-concentration and confidence-concentration, the authors show that FKLD and RKLD represent two extreme cases: FKLD fails to concentrate on the target class and RKLD overly concentrates on the target class. The proposed $\alpha$-$\beta$-Divergence (ABKD) balances the two effects, offering a novel perspective on why model-based guidance (i.e., distilling from a teacher model) can outperform hard label guidance (i.e., learning from one-hot labels). 
- **Flexible framework based on $\alpha$-$\beta$-divergence**: The proposed framework, based on $\alpha$-$\beta$-divergence, unifies existing divergence-based KD approaches. This flexibility allows for fine-grained control over probability mass allocation, leading to improved performance. - **Comprehensive empirical validation**: Extensive experiments are conducted across 17 language/vision datasets with 12 teacher-student model configurations. The results demonstrate consistent improvements over state-of-the-art KD methods. My minor concerns are as follows: - In Sec 5.3, the authors provide empirical justifications for the choice of $\alpha$ and $\beta$. Although the results are valuable for hyperparameter tuning, I wonder whether the observation is consistent across different datasets. I hope the authors can provide more clues on some other datasets, which could further strengthen the practical applicability of ABKD. Other Comments Or Suggestions: Please refer to `weakness`. Questions For Authors: Please refer to `weakness`. Code Of Conduct: Affirmed. Overall Recommendation: 5
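The FKLD/RKLD contrast described in this review (FKLD spreading probability mass too broadly, RKLD over-concentrating) is the familiar mass-covering vs. mode-seeking behavior of the two KL directions. A toy sketch with discretized Gaussians (illustrative only; not the paper's experimental setup):

```python
import numpy as np

x = np.linspace(-6, 6, 1201)

def gauss(x, mu, s):
    g = np.exp(-0.5 * ((x - mu) / s) ** 2)
    return g / g.sum()                      # discrete distribution on the grid

def kl(a, b):
    m = a > 1e-12
    return (a[m] * np.log(a[m] / np.maximum(b[m], 1e-300))).sum()

# Bimodal 'teacher' and a single-Gaussian 'student' family
p = 0.5 * gauss(x, -2, 0.5) + 0.5 * gauss(x, 2, 0.5)
mus = np.linspace(-4, 4, 161)
fkld = [kl(p, gauss(x, mu, 1.0)) for mu in mus]   # forward KL: mass-covering
rkld = [kl(gauss(x, mu, 1.0), p) for mu in mus]   # reverse KL: mode-seeking
print(mus[np.argmin(fkld)], mus[np.argmin(rkld)])
# forward KL picks the mean between the modes; reverse KL locks onto one mode
```

Fitting a single Gaussian to a bimodal target, forward KL places its mean between the two modes (covering mass), while reverse KL locks onto one mode, mirroring the hardness/confidence-concentration extremes the paper analyzes.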
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for recognizing the theoretical foundations, clarity of contributions and experiments, and the improved performance demonstrated by our method. Our response follows: > Q1: I wonder whether the observation is consistent across different datasets. I hope the author can provide more clues on some other datasets, which could further strengthen the practical applicability of ABKD **A1**: Thank you again for your insightful question. The table below outlines the hyperparam settings across datasets, which generally align with the theoretical results in Sec.3 (e.g., small α and large β for language modeling tasks and large α and small β for image classification tasks). |GPT-2 XL (1.5B) -> GPT-2 (0.1B, 0.3B, 0.8B) | Dolly Eval | Self-Instruct|Vicuna Eval|Super-Natural|Unnatural| |-|-|-|-|-|-| | **α** | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | | **β** | 0.7 | 0.7 | 0.7 | 0.7 | 0.7 | | **Dataset** | ImageNet | Caltech101 | OxfordPets | StanfordCars | Flowers102 | Food101 | FGVCAircraft | SUN397 | DTD | EuroSAT | UCF101 | |-|-|-|-|-|-|-|-|-|-|-|-| | **α** | 0.5 | 0.8 | 0.8 | 0.6 | 0.9 | 0.5 | 0.6 | 0.8 | 1.0 | 0.6 | 0.8 | | **β** | 0.5 | 0.2 | 0.4 | 0.4 | 0.1 | 0.5 | 0.5 | 0.2 | 0.2 | 0.5 | 0.2 | To further support our claim and validate the effects of our method, we made our best effort in the past few days to study **OpenLLaMA-8B→3B** distillation. For a thorough comparison, we also considered several baselines mentioned by other reviewers (i.e., AlphaNet, BDKD, Jensen's KL, and AKL) during the rebuttal period. Based on the hyperparam tuning guidelines (App.D) derived from theoretical guidance, we simply set the hyperparams to α = 0.3 and β = 0.6 for all datasets (without further tuning). The results below show that our method outperforms others by 0.65-3.26, especially excelling in Dolly and Unnatural. 
| Method | Dolly Eval | Self-Instruct | Vicuna Eval | Super-Natural | Unnatural | |-----------|-----------|---------------|-------------|---------------|-----------| | SFT | 24.54 (0.51) | 16.80 (0.64) | 16.15 (0.15) | 29.29 (0.13) | 27.43 (0.21) | | FKLD | 25.23 (0.44) | 18.90 (1.20) | 16.67 (0.35) | 31.68 (0.22) | 29.36 (0.13) | | RKLD | 27.74 (0.45) | 20.61 (0.80) | 18.83 (0.40) | 35.31 (0.24) | 33.86 (0.16) | | Jensen's KL | 26.28 (0.43) | 18.84 (0.66) | 17.81 (0.38) | 30.92 (0.12) | 29.79 (0.17) | | BDKD | 26.78 (0.53) | 18.94 (0.68) | 17.81 (0.52) | 32.15 (0.34) | 30.89 (0.24) | | AKL | 26.38 (0.41) | 17.69 (0.46) | 16.72 (0.48) | 33.02 (0.16) | 31.29 (0.08) | | DISTILLM | 28.24 (0.48) | 21.00 (0.72) | 19.12 (0.53) | 37.06 (0.35) | 35.05 (0.13) | | AlphaNet | 28.11 (0.29) | 21.30 (0.63) | 18.70 (0.23) | 37.86 (0.44) | 35.40 (0.17) | | Ours (ABKD) | **30.25** (0.37) | **22.39** (0.62) | **20.83** (0.42) | **38.51** (0.32) | **38.66** (0.10) | --- Rebuttal Comment 1.1: Comment: The authors have addressed my concerns on hyperparameters, and the new experiments on the  OpenLLaMA-8B→3B distillation is also impressive. I'll raise my score. --- Reply to Comment 1.1.1: Comment: Thank you very much for your recognition of our work and for the improved score. We will do our best to further enrich and improve the final version of the article.
Summary: The paper introduces ABKD, a knowledge distillation (KD) framework using alpha-beta-divergence to balance the "hardness-concentration" (focus on high-error classes) and "confidence-concentration" (focus on high-confidence classes) effects. Theoretical analysis shows that FKLD and RKLD represent extreme cases of these effects, leading to suboptimal performance. ABKD generalizes these divergences, enabling smooth interpolation via alpha and beta. Claims And Evidence: Supported Claims: Balancing mode-concentration effects (Sec. 3–4): Theoretical analysis (Prop. 3.1, 4.2) and visualizations (Fig. 1d–g) justify the limitations of FKLD/RKLD and ABKD’s trade-off mechanism. Potential Issues: The paper claims that ABKD encompasses FKLD (when alpha=1, beta=0) and RKLD (when alpha=0, beta=1), and thus the experiments should include a comparison of the performance of ABKD in these degenerate cases with the original FKLD and RKLD. Methods And Evaluation Criteria: 17 datasets and 12 teacher-student pairs (e.g., GPT-2 XL→GPT-2) are reasonable. Metrics (ROUGE-L, accuracy) align with standard practice. However, the validation on the CIFAR100 dataset only involves cases where the teacher and student have the same architecture; the performance when they have different architectures should also be examined. Theoretical Claims: Theorem 3.2 (mass allocation differences): Informal statement lacks rigor; formal proof needs verification. Experimental Designs Or Analyses: Hyperparameter Sensitivity: Fig. 6 shows alpha and beta impact entropy and Self-BLEU, but optimal values vary across tasks (e.g., alpha=0.2, beta=0.7 for NLP vs. alpha=0.6, beta=0.5 for CIFAR-100 in Tabs. 4,6). This suggests task-specific tuning, weakening the "universal" claim. Moreover, Table 4 in Appendix I.2.3 also shows that ABKD requires specific hyperparameter selection, which limits its scalability to more datasets and architectures. 
Supplementary Material: Theoretical Proofs: Detailed but dense; some steps require deeper scrutiny. Hyperparameters (Tabs. 4,6): Clear documentation for reproducibility. Additional Results (Tabs. 8–10, Fig. 5): Strengthens claims but lacks ablation on model architectures. Relation To Broader Scientific Literature: Builds on classical KD (Hinton, 2015) and recent divergence variants (MiniLLM, DISTILLM). The alpha-beta-divergence generalizes prior work (Table 1), addressing gaps in FKLD/RKLD trade-offs. Essential References Not Discussed: Wu, T., Tao, C., Wang, J., Zhao, Z., & Wong, N. (2024). Rethinking Kullback-Leibler Divergence in Knowledge Distillation for Large Language Models. arXiv. https://doi.org/10.48550/arxiv.2404.02657 Cui, X., Qin, Y., Gao, Y., Zhang, E., Xu, Z., Wu, T., Li, K., Sun, X., Zhou, W., & Li, H. (2024). SinKD: Sinkhorn Distance Minimization for Knowledge Distillation. IEEE Transactions on Neural Networks and Learning Systems, 1–15. https://doi.org/10.1109/tnnls.2024.3501335 These papers are representative works that improve distillation effects based on different loss distance calculations, but they are not cited in this paper. The adaptive KL divergence (AKL) proposed in the former is more suitable than the WSD proposed in this paper as a baseline. Other Strengths And Weaknesses: Strengths: Theoretical-empirical synergy: Clear connection between gradient analysis (Sec. 3) and empirical results. Weaknesses: Baseline Variance: Some baselines (e.g., LSD, TTM) underperform in Fig. 5; unclear if hyperparameters were optimized. Other Comments Or Suggestions: Clarity: The paper is well-structured, but theoretical sections (Sec. 3–4) are dense. Questions For Authors: Q1: How does ABKD perform with alpha/beta outside [0,1] (e.g., alpha=1.5, beta=-0.5)? Does the framework still hold? (Clarifies generality claims.) 
Q2: Were computational costs (e.g., GPU hours) comparable between ABKD and baselines? (Addresses scalability concerns.) Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful comments and suggestions. Our response follows: > Q1: The experiments should include a comparison of the performance of ABKD in these degenerate cases with the original FKLD and RKLD **A1**: When ABKD degenerates to FKLD and RKLD, its performance matches the original FKLD and RKLD (Tab.3 in the main text). Thus, we don't separately consider these cases. > Q2: Theorem 3.2: Informal statement lacks rigor; formal proof needs verification **A2**: Thank you for your question! The complete version of Thm.3.2 is in App.G.2. We will include it in the main text and improve readability upon acceptance. > Q3: This suggests task-specific tuning, challenging the "universal" claim. Tab.4 shows that ABKD requires specific hyperparam selection, limiting its scalability to additional datasets and architectures **A3**: We are afraid the term 'universal' has been misunderstood: we aim to unify a series of existing divergences in KD, not to provide a set of task-agnostic parameters. In fact, most related works require parameter tuning for different datasets and architectures. As per the 'no free lunch' theorem, universal parameters are impractical. Please see A4 of Reviewer 71pK for more details on 'universal'. To further support our claim, we completed the distillation experiment from **OpenLLaMA-8B to 3B** within the limited time, including AKL mentioned by the reviewer, as well as BDKD, AlphaNet, and Jensen's KL mentioned by other reviewers. The results below show that our method outperforms others by 0.65-3.26, especially excelling in Dolly and Unnatural. 
||Dolly|Self-Instruct|Vicuna|Super-Natural|Unnatural| |-|-|-|-|-|-| |SFT|24.54|16.80|16.15|29.29|27.43| |FKLD|25.23|18.90|16.67|31.68|29.36| |RKLD|27.74|20.61|18.83|35.31|33.86| |Jensen's KL|26.28|18.84|17.81|30.92|29.79| |BDKD|26.78|18.94|17.81|32.15|30.89| |AKL|26.38|17.69|16.72|33.02| 31.29| |DISTILLM|28.24|21.00|19.12|37.06|35.05| |AlphaNet|28.11|21.30|18.70|37.86|35.40| |Ours|**30.25**|**22.39**|**20.83**|**38.51**|**38.66**| > Q4: The Need for Ablation on Model Architectures. **A4**: Our method applies to various architectures (e.g., 17 teacher-student pairs used in our experiments). It only needs adjusting the distillation objective. We also provide an ablation study of α-β divergence (Tab.3 in the main text), which shows using only α or β imposes unnecessary constraints on hardness- or confidence-concentration, leading to suboptimal solutions. > Q5: Essential References Not Discussed **A5**: Thank you for your valuable insight! We will cite these works in the final version and discuss the differences between AKL and our work. We also conducted experiments comparing AKL with our method **when distilling GPT-2 XL into GPT-2**, as shown below. Our method outperforms AKL across datasets by 0.43-7.35. ||Dolly|Self-Instruct|Vicuna|Super-Natural|Unnatural| |-|-|-|-|-|-| |AKL|23.83|10.87|15.63|20.07|21.97| |Ours|**25.65**|**13.47**|**16.06**|**26.47**|**29.32**| > Q6: Baseline Variance: Some baselines (e.g., LSD, TTM) underperform in Fig.5; unclear if hyperparams were optimized. **A6**: All baseline results in Fig.5 are from the original papers. Missing values are obtained by rerunning their code with necessary hyperparam tuning and reporting the average of 3 random runs. > Q7: Cross-Architecture Experiment **A7**: We conducted distillation from ResNet50 to VGG8 within the limited time. The results below show that our method can improve the performance of previous methods by 0.15-0.89. 
||KD|ABKD (Ours)|DKD|ABDKD (Ours)|LSD|ABLSD (Ours)|TTM|ABTTM (Ours)| |-|-|-|-|-|-|-|-|-| |Accuracy|73.81|74.62|74.37|**75.26**|74.52|74.77|74.87|75.02| > **Q8**: How does ABKD perform with α/β outside [0,1] (e.g., α=1.5, β=-0.5)? **A8**: This is an interesting question. We focus on balancing FKLD and RKLD, which correspond to extreme cases for α = 1, α = 0, β = 0, and β = 1. Thus, an intuitive method is to search for params between [0, 1]. Our experiments validate this. To address the reviewers' concern, we also tested α > 1 and β < 0 when distilling ResNet56 to ResNet20 within the limited time, as shown below: |α\β|-0.1|-0.3|-0.5| |-|-|-|-| |1.2 |70.81|71.10|70.35 | |1.4|71.29| 71.24| 70.92| |1.6|70.55|70.53|70.34| An overly large α weakens the hardness-concentration and an overly small β weakens the confidence-concentration effect, both may degrade distillation performance. > Q9: Computational Cost Analysis **A9**: We compared the training costs of different methods on language modeling, as shown below. ||Training Cost (second/sample)| |-|-| |SFT|0.344| |KD|0.649| |MiniLLM|4.452| |GKD|2.078| |DISTILLM|1.331| |AlphaNet|0.882| |Ours|0.768| Our method takes a similar amount of time as Vanilla KD but is **1.15x to 5.80x faster** than others due to its simplicity. It only modifies the optimization objective without adding extra cost, while others like GKD and DISTILLM require sampling student outputs during training.
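The degenerate cases discussed in A1 (ABKD recovering FKLD at α=1, β=0 and RKLD at α=0, β=1) can be verified numerically. The sketch below assumes the standard α-β-divergence form of Cichocki et al.; the paper's exact loss may differ, so this is illustrative only:

```python
import numpy as np

def ab_divergence(p, q, alpha, beta):
    """Alpha-beta divergence in the Cichocki et al. form; requires
    alpha != 0, beta != 0, alpha + beta != 0. Illustrative sketch only."""
    s = alpha + beta
    term = p**alpha * q**beta - (alpha / s) * p**s - (beta / s) * q**s
    return -term.sum() / (alpha * beta)

def kl(p, q):
    return (p * np.log(p / q)).sum()

rng = np.random.default_rng(0)
p = rng.random(10); p /= p.sum()
q = rng.random(10); q /= q.sum()

eps = 1e-6  # approach the excluded axes numerically
fkld_limit = ab_divergence(p, q, 1.0, eps)   # (alpha, beta) -> (1, 0): forward KL
rkld_limit = ab_divergence(p, q, eps, 1.0)   # (alpha, beta) -> (0, 1): reverse KL
print(abs(fkld_limit - kl(p, q)), abs(rkld_limit - kl(q, p)))  # both tiny
```

Since the closed form excludes the axes α=0 and β=0, the limits are approached with a small ε; both approximations agree with the corresponding KL divergences up to a Taylor-remainder error of order ε.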
Summary: The paper discusses a central challenge in knowledge distillation, which lies in the proper balance between two modes: (1) hardness concentration and (2) confidence concentration. The authors provide a smoother transition between the reverse and forward KL divergences via the integration of the alpha-beta divergence. They perform analyses on both language and vision tasks. Claims And Evidence: - The claim of using alpha-beta divergence for better distillation seems ambiguous and is not empirically well supported in the paper. Indeed, introducing both alpha and beta as hyper-parameters adds more complexity to the already challenging problem of distillation. Different alpha and beta values can lead to very different distillation behavior. This could eventually impact the stability and convergence of the student network during distillation. Methods And Evaluation Criteria: The paper evaluates the technique on both language and vision tasks, which I think is good and makes the work more comprehensive. Theoretical Claims: Yes. I checked the theoretical proofs. I have not noticed any issues. Experimental Designs Or Analyses: The experimental design and the analyses are somewhat comprehensive. The authors empirically compare their proposed alpha-beta KD to other KD techniques. However, I would like to raise certain comments with respect to the experiments: - There are some missing baselines. Before balancing the divergences, previous literature has proposed Jensen's KL divergence, which symmetrically uses both RKLD and FKLD. Adding this as a baseline in the experiments would help contrast the distillation performance. - With respect to analyses, as a heavy user of distillation, I would like to see the computational and complexity trade-off that alpha-beta KD offers. 
From the results of the paper, the reported improvements over baselines like vanilla KD and other distillation methods appear marginal, or even lower than those of alternative KD methods. This raises concerns about whether the added complexity is justified by the actual performance gains. It would be helpful if the authors added further analyses and justification for why this method is preferable despite the limited improvements. Supplementary Material: I have read most parts of the supplemental material. Relation To Broader Scientific Literature: The key contribution of the paper closely resembles existing techniques that have already been explored and discussed in earlier literature on knowledge distillation. The proposed approach does not introduce a fundamentally novel concept but rather builds upon previously established methods. Essential References Not Discussed: Some key references to consider are earlier works that explored the use of Jensen-KL (symmetric KL) for knowledge distillation. [1] Additionally, there are prior studies on adaptive KL divergence, which, similar to the idea presented in this paper, aim to balance forward KL (FKLD) and reverse KL (RKLD). A proper discussion and citation of these works would provide better context and highlight the connections between this study and existing research. [2] [1] Binici, Kuluhan, et al. "Preventing catastrophic forgetting and distribution mismatch in knowledge distillation via synthetic data." Proceedings of the IEEE/CVF winter conference on applications of computer vision. 2022. [2] Amara, Ibtihel, et al. "Bd-kd: balancing the divergences for online knowledge distillation." arXiv preprint arXiv:2212.12965 (2022). Other Strengths And Weaknesses: I would like to summarize what I have mentioned above: Strengths: - The paper is well written and easy to follow. - The paper evaluates on various language and vision tasks. 
Weaknesses: - The authors could better justify the choice of the alpha and beta hyper-parameters. While the authors specify the values they used in their experiments, it remains unclear how extensively they tuned these parameters to achieve optimal performance. - Another key aspect missing is the discussion of training time. Based on my expertise, training with alpha divergence tends to converge more slowly than traditional KD losses. Aside from the added complexity of alpha and beta, it would be interesting to see insights into the computational cost and convergence behavior, especially for language model training. - Questionable novelty. The key contributions of the paper closely resemble existing techniques that have explored using different divergences for distillation (similar to the AlphaNet paper, which the authors have cited). The proposed approach does not introduce a fundamentally novel concept. I suggest providing a more thorough comparison with prior work in the paper and clearly stating the unique contribution. - Baselines in the experiments. I encourage the authors to add more KD baselines like the ones referenced above. Other Comments Or Suggestions: Please refer to the previous cell. Questions For Authors: Please refer to the strengths and weaknesses section. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to provide their helpful and valuable feedback. Our response follows (**please see https://anonymous.4open.science/r/ICML-rebuttal-experiments/results.md for all rebuttal experiments**): >Q1: Essential References Not Discussed **A1:** We will include the discussion of these baselines (the experiments are in Q5) in the new version: - Jensen's KL is a popular method that combines FKLD and RKLD: $D_{JSD}(p\|q)=\frac{1}{2}D_{KL}(p\|m)+\frac{1}{2}D_{KL}(q\|m)$, where $p$ is the teacher distribution, $q$ is the student distribution and $m=(p+q)/2$. - BDKD adjusts the weights of FKLD and RKLD during training based on the entropy difference between $p$ and $q$. - We also note AKL mentioned by Reviewer Ty7Q, which adjusts the weights of FKLD and RKLD based on class differences between $p$ and $q$. The problem with Jensen's KL is that when $p$ and $q$ are far apart, the loss becomes constant, causing vanishing gradients and hindering convergence. BDKD and AKL can be seen as variants of our baseline, WSD. However, they still tend to overemphasize small probabilities in $p$ and $q$, as shown in Sec.3.3. Overall, a simple linear addition of FKLD and RKLD can't fully resolve their issues. This work aims to first theoretically analyze their limitations and then explore a more effective divergence to address them. >Q2: More Details on hyperparams α and β (Weakness #1) **A2:** Thank you for your insightful question! We agree on the importance of clarifying hyperparam selection. First, our method needs comparable or fewer hyperparams than prior works: AlphaNet has 3 (ICML 21), DISTILLM has 2 (ICML 24), and GKD has 2 (ICLR 24). Second, while additional hyperparams increase search cost, **our theoretical insights (Secs.3, 4) provide tuning guidelines for different tasks (App.D)**. These insights help eliminate less likely hyperparams, improving search efficiency. 
Empirical hyperparam selections (e.g., α=0.2, β=0.7 for NLP and α=0.9, β=0.2 for vision) validate these. Overall, our search cost is not higher than prior works, and our hyperparams have clear theoretical and practical significance, helping one better understand the effectiveness of their distillation objective. >Q3: Training Cost and Convergence Behavior (Weakness #2) **A3**: Thank you for your insightful question! **Training Cost**: Tab.1 in the link shows our method requires a similar time to Vanilla KD, but is **1.15x to 5.80x faster** than other SOTA methods, as it only modifies the distillation loss, while others need sampling student outputs during training, adding extra computational cost. **Convergence**: Fig.1 in the link shows that our method outperforms others at all training stages (**especially in the early stages**), showing **faster convergence**. >Q4: Novelty issue (Weakness #3) **A4**: Our main contribution lies in unifying FKLD, RKLD, and other potential variants from **a novel theoretical perspective: the hardness- and confidence-concentration effects**. This new theory helps explain why traditional divergences fail and our method succeeds: - FKLD has overly weak hardness- and confidence-concentration effects. It fails to focus on the target class, leading to wrong predictions. - RKLD has overly strong hardness- and confidence-concentration effects. It overemphasizes the target class while ignoring the broader knowledge from non-target classes. - The weighted sum of FKLD and RKLD tends to overemphasize the minimal values in both teacher and student distributions. - α-divergence imposes an unnecessary sum-to-one constraint on hardness- and confidence-concentration effects, potentially limiting model performance. - α-β-divergence offers a flexible way to adjust hardness- and confidence-concentration separately, enabling smooth interpolation between FKLD and RKLD. These insights are rarely explored in existing literature. 
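To make the divergence behaviors discussed above concrete, here is a small numerical sketch (our own toy example, not from the paper) comparing forward KL, reverse KL, and Jensen's KL (JSD) on two nearly disjoint categorical distributions. It illustrates the point from A1: JSD is capped at $\log 2$ and saturates near that cap when teacher and student are far apart, so its gradient with respect to the student nearly vanishes.

```python
import numpy as np

def kl(p, q):
    # Forward KL divergence D_KL(p || q) for discrete distributions.
    return float(np.sum(p * np.log(p / q)))

def jsd(p, q):
    # Jensen's KL: 1/2 D_KL(p || m) + 1/2 D_KL(q || m), with m = (p + q) / 2.
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Nearly disjoint teacher (p) and student (q) distributions ("far apart").
p = np.array([0.98, 0.01, 0.005, 0.005])
q = np.array([0.005, 0.005, 0.01, 0.98])

# FKLD and RKLD are large here, while JSD sits close to its log 2 bound,
# which is the near-constant-loss / vanishing-gradient regime described in A1.
print(kl(p, q), kl(q, p), jsd(p, q), np.log(2))
```

Swapping the roles of $p$ and $q$ in `kl` gives the reverse direction (RKLD); the asymmetry between the two is exactly what the weighting schemes of WSD, BDKD, and AKL try to balance.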
> Q5: Performance Improvement & More Baselines (Weakness #4) **A5:** Thank you for your valuable suggestion! We added more baselines (the suggested AlphaNet, BDKD, Jensen's KL, and AKL mentioned by other reviewers) within the limited time. Results are averaged over 5 seeds.

||Dolly|Self-Instruct|Vicuna|Super-Natural|Unnatural|
|-|-|-|-|-|-|
|Prior SOTA|25.13 (0.27)|12.46 (0.46)|15.64 (0.40)|25.27 (0.20)|27.56 (0.15)|
|Ours|**25.65** (0.24)|**13.47** (0.42)|**16.06** (0.25)|**26.47** (0.31)|**29.32** (0.08)|

Tab.2 (see the link for the full version) shows our method outperforms others by 0.42-1.76. To further validate our method, we made our best effort in the past few days to study **OpenLLaMA-8B→3B** distillation.

||Dolly|Self-Instruct|Vicuna|Super-Natural|Unnatural|
|-|-|-|-|-|-|
|Prior SOTA|28.24 (0.48)|21.30 (0.63)|19.12 (0.53)|37.86 (0.44)|35.40 (0.17)|
|Ours|**30.25** (0.37)|**22.39** (0.62)|**20.83** (0.42)|**38.51** (0.32)|**38.66** (0.10)|

Tab.3 (see the link for the full version) shows it outperforms others by 0.65-3.26, especially excelling in Dolly and Unnatural. --- Rebuttal Comment 1.1: Comment: Thank you for your thorough response and the valuable clarifications provided. I have carefully reviewed your replies, along with the feedback from the other reviewers, and I am particularly appreciative of the expanded comparisons with baseline models. This additional context has significantly strengthened the manuscript. Consequently, I am pleased to revise my recommendation **from a weak reject to an accept**. However, to ensure the manuscript's completeness and maximize its impact, **it is crucial that all benchmark comparisons presented in your responses to the reviewers are incorporated into the final version of the paper**. This will provide a comprehensive and transparent evaluation of your method's performance and address the concerns raised by all reviewers. Again, thank you for your efforts in addressing the feedback.
I believe these revisions will greatly enhance the quality and clarity of your work. --- Reply to Comment 1.1.1: Comment: Thank you so much for your valuable feedback and support. Following your suggestions, we will strive to further enrich the content of our paper in the final version to contribute to the continued development of the KD community.
Convergence of Consistency Model with Multistep Sampling under General Data Assumptions
Accept (poster)
Summary: This paper analyzes the convergence of consistency models under approximate self-consistency. With mild data assumptions, it proves sample closeness to the target distribution in Wasserstein or total variation distance. The study applies to various forward processes and highlights the benefits of multistep sampling through case studies. ## Update after rebuttal I initially had no significant issues with the paper, but after reading Reviewer eZRV's comments, I believe the novelty of the work is somewhat limited, and my original score of 3 may have been overly generous. To ensure fairness in the review process, I have adjusted my score to 2. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. I’ve checked the correctness of proofs for theorems 2 and 3. Experimental Designs Or Analyses: Yes. I checked the simulation in appendix G. Supplementary Material: Yes. I’ve reviewed all the supplementary material. Relation To Broader Scientific Literature: The article presents the theoretical performance of consistency models with multi-step sampling under more general assumptions. The findings of the article can contribute to a deeper theoretical understanding of consistency models and can assist in the design of time steps for multi-step sampling in consistency models. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The paper provides theoretical guarantees for multi-step generation in consistency models, focusing on Wasserstein distance and total variation distance. The results seem solid. 2. Compared to previous work, the paper conducts its analysis under more general assumptions, including the removal of the Lipschitz assumption for the consistency function. 3. The results are not limited to the form of SDEs and provide a more detailed analysis under two common types of SDEs. Weaknesses: 1. The assumptions in the paper still differ from practical scenarios. 
For instance, the paper assumes that time discretization is uniform, which does not align with practical applications. In fact, carefully designed time discretization strategies are crucial for successful consistency training. 2. The analysis and experiments in the paper lack connection with real-world data, which limits the further extension of the theoretical results. Other Comments Or Suggestions: No. Questions For Authors: Can the experiments in Appendix G be replicated on real-world datasets, such as CIFAR-10? This would significantly enhance the persuasiveness of the theoretical results. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your feedback. We address your points in detail below: 1. **non-uniform discretization:** in this paper, we adopt a uniform discretization for clarity and ease of presentation. However, our results can be extended to the non-uniform discretization setting as well. Suppose $\tau_{0:M}$ is an arbitrary discretization of the interval $[0,T]$. In this scenario, it is reasonable to assume that the consistency loss scales with the length of the discretization interval: $$ E_{x_{\tau_i}\sim P_{\tau_i}}\left[||\hat f(x_{\tau_i},\tau_i)-\hat f(\varphi(\tau_{i+1};x_{\tau_i},\tau_i),\tau_{i+1})||\_2^2\right] \le (\tau_{i+1}-\tau_{i})^2\epsilon^2. $$ Using the same argument as in the proof of Lemma 2, we can derive: $$ \sqrt{E_{x_{\tau_i}\sim P_{\tau_i}}\left[||\hat f(x_{\tau_i},\tau_i)- f^{\star}(x_{\tau_i},\tau_{i})||\_2^2\right]} \le \sum_{s=0}^{i-1}\sqrt{ E_{x_{\tau_s}\sim P_{\tau_s}}\left[||\hat f(x_{\tau_s},\tau_s)-\hat f(\varphi(\tau_{s+1};x_{\tau_s},\tau_s),\tau_{s+1})||\_2^2\right] }, $$ which is upper bounded by: $ \sum_{s=0}^{i-1} (\tau_{s+1}-\tau_s)\epsilon = \tau_i \epsilon $. The rest of the proof remains unchanged. 2. **regarding experiments on real-world data:** in designing the experiments, we aimed to achieve two goals: - **evaluate the tightness of our upper bound:** this requires access to the true data distribution, which is not available for real-world datasets, making such evaluation infeasible in those settings; - **demonstrate diminishing performance with multiple sampling steps:** we already observe this phenomenon in real-world datasets, as shown in Table 1 and 2 of Luo et al. (2023), where the performance of the consistency model deteriorates with more sampling steps. - Given these considerations, we prioritized simulation-based experiments over real-world datasets. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for addressing my questions and concerns. I will maintain my evaluation.
Summary: The paper studies the convergence of consistency models with assumptions on the consistency property. It further assumes that the target data distribution has bounded support. In this case, it shows the convergence result in Wasserstein distance and total variation distance. The theoretical results indicate the benefit of multistep sampling with consistency models. Claims And Evidence: I think the presentation of this paper should be further improved. The main results are divided into two parts. In the first part (Section 3), the paper proves an error bound on the Wasserstein (TV) distance for a general form of time schedules. The meaning of the theorem, which contains very complex formulas, is not very clear to me. As explanation, the paper claims there exists a trade-off regarding the sampling steps. However, the result is still very hard to understand, because the parameters $\alpha_t$, $\sigma_t$ and $t_j$ have some intrinsic constraints. I find it too easy to draw the conclusion that there is a tradeoff. I'm also unclear about what this tradeoff definitely means in practice. With these uncertainties in mind, I turned to the second part, which addresses the case studies. However, reading this section only deepened my confusion. Take the first case (VP) as an example: the paper presents a result concerning the Wasserstein distance between $\hat P_0^1$ ($\hat P_0^2$) and the data distribution. It argues that the leading term from the second sampling step is strictly reduced. While correct, this reduction merely adds at most another constant term; why, then, is this significant? Moreover, the paper considers the case $\epsilon \approx \Delta \tau$. In this case, the total right-hand side is $O(1)$. Yet, considering $R$ as a constant, any Wasserstein distance trivially has an upper bound of $O(1)$. Thus, I find myself questioning the significance and relevance of the stated result in this specific scenario.
Finally, in Lines 226-231, the paper discusses the influence of $\Delta \tau$. I don't quite get what "more intermediate steps" means, even after reading Lemma 2. I also doubt the claims that "smaller $\Delta \tau$ allows a smaller $t_N$" and that it "may decrease $\epsilon$". What supports this argument? The $\tau_i$ for training are independent of the $t_i$ for sampling, is that right? Methods And Evaluation Criteria: N/A Theoretical Claims: I don't find flaws in the theoretical proof. My concerns mainly lie in the meaning of the results, as written in the "Claims and evidence" part. Experimental Designs Or Analyses: N/A Supplementary Material: I have checked the supplementary material. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: See above Other Comments Or Suggestions: See above Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your feedback. We address your points in detail below: 1. **Interpretation of Theorem 2:** the trade-off means that increasing the number of sampling steps does not necessarily lead to improved performance, due to the influence of term (ii). This is in contrast to standard diffusion models, where, according to Theorem 2 of Chen et al. 2023b, the discretization error diminishes as the number of sampling steps increases, guaranteeing performance improvement with more steps.\ For Case Study 2, we now show that a **uniform sampling** schedule $t_j = (N-j+1) \Delta\tau$ with $N = \frac{t_1}{\Delta\tau}$ (**more** sampling steps) can yield a **larger** upper bound than the **halving schedule** defined in equation (13). Ignoring absolute constants, the upper bound in equation (12) becomes: $$ R\left(\frac{R^2}{t_1^2} + \frac{t_1}{\Delta\tau}\frac{\epsilon_{\text{cm}}^2}{\Delta\tau^2}\right)^{1/4} + \epsilon_{\text{cm}}, $$ which is minimized when $t_1 = \frac{2^{1/3}R^{2/3}\Delta\tau}{\epsilon_{\text{cm}}^{2/3}}$, yielding a minimum value of: $$ R^{7/6}\frac{\epsilon_{\text{cm}}^{1/3}}{\sqrt{\Delta\tau}} + \epsilon_{\text{cm}}. $$ In contrast, the halving schedule achieves an upper bound of $\tilde O\left(R\sqrt{\frac{\epsilon_{\text{cm}}}{\Delta\tau}}\right)$, which is **strictly smaller** than that of the uniform schedule.\ Practical evidence further supports this trade-off: - Our simulation results (Appendix G) show that both baseline methods experience degraded performance in the final sampling steps. - In Tables 1 and 2 of Luo et al. (2023), LCM with 4-step sampling achieves better FID scores than with 8 steps. - Both theoretical analysis and empirical observations highlight the importance of **strategically designing** the sampling schedule. Thus, we believe it is fair to emphasize the trade-off in choosing the number of sampling steps. 2.
- **clarification on the result in Case Study 1:** Corollary 1 provides a **universal upper bound** on the $W_2$ distance for the VP process, without imposing constraints on $R$, $\Delta\tau$, and $\epsilon_{\text{cm}}$. Naturally, when the consistency model $\hat f$ is poorly estimated, the generated sample quality deteriorates. We believe the case you mentioned belongs to this category. If $\frac{\epsilon_{\text{cm}}}{\Delta\tau}\approx R$, it indicates that $\hat f$ is a poor approximation. Even when using the true marginal $P_T$ as input, Lemma 2 shows that $\hat f$ incurs an error of $O(R)$. In this scenario, the resulting upper bound is inevitably loose and uninformative. - **the significance of constant reduction in Case Study 1:** while many theoretical results focus on asymptotic rates and ignore constants, **constant reductions** can have significant practical implications. For example, latent diffusion models (Rombach et al.) demonstrate improved performance (e.g., lower FID scores) even with fractional gains, enabling high-resolution image synthesis in practice. - **the rate improvement in Case Study 2:** our analysis in Case Study 2 shows a **clear rate improvement** for the VE process when sampling with multiple steps. If we use only a single step ($N=1$), equation (12) simplifies to: $$ \frac{R\sqrt{R}}{\sqrt{t_1}} + t_1 \frac{\epsilon_{\text{cm}}}{\Delta\tau}, $$ with the minimum value being $R\left( \frac{\epsilon_{\text{cm}}}{\Delta\tau} \right)^{1/3}$. In contrast, using the specialized schedule from equation (13), we obtain a faster rate of $\tilde O(R\sqrt{\frac{\epsilon_{\text{cm}}}{\Delta\tau}})$.
- **Training steps vs sampling steps:** for simplicity, our current formulation assumes sampling steps are chosen from the training steps. Under this assumption, a smaller $\Delta\tau$ allows a smaller $t_N$. However, our theoretical framework remains valid even when sampling steps are chosen arbitrarily. In that case, we need to refine Lemma 2 to upper bound $||\hat f(\cdot,t)-f(\cdot,t)||_2^2$ for all $t$. Because self-consistency is only enforced at the $\tau_i$'s during training, $\hat f(x,t)$ and $f(x,t)$ need to be Lipschitz in $t$. - **Effect of $\Delta\tau$ on $\epsilon$:** using a smaller $\Delta\tau$ may reduce the consistency error $\epsilon$ for two reasons: (1). by continuity of $\hat f$ and $\varphi$, equation (4) decreases as $|\tau_{i+1}-\tau_{i}|$ decreases; (2). As shown in Theorem 1 and 2 of Song et al. 2023, smaller $\Delta\tau$ improves the approximation for the consistency loss in both consistency distillation and consistency training. Hence, $\epsilon$ can be smaller. --- Rebuttal Comment 1.1: Comment: Thanks for your response. I still have concerns after reading the contents. Here are my questions. 1. I feel that Point 1 does not directly address my question. While I agree that the strategic design of the sampling schedule is important, the authors specifically claim there is a **trade-off** in selecting the **number** of sampling steps. This claim appears unrelated to the examples provided. "trade-off means increasing the number of sampling steps does not necessarily lead to improved performance due to the influence of term (ii)". My main concern lies in this part. Take a trivial example. Let $f(x) = 2x + (-x)$. While the first part increases with $x$ and the other decreases, the total function is still strictly increasing. Therefore, I find it too easy to claim a tradeoff in choosing the number of sampling steps directly from the theoretical result of Theorem 2. 2. 
I mention the case $\epsilon_{cm} \approx \tau$ because it's written in Line 339. If that's a failure case, why do you mention it there? 3. What do you mean by "fractional gains" in Rombach et al.? 4. Lastly, it's strange to compare two upper bounds with big $O$ notation and then conclude that one method is faster than the other. After all, these are only upper bounds, not tight estimates of actual performance. While this may not be a critical issue, the authors should be more careful in making such claims. Arguing that multi-step sampling is better than single-step sampling based solely on a comparison of upper bounds is problematic. --- My concerns have been addressed after reading the newest rebuttal. Thus, I have adjusted my score accordingly. I do suggest the authors include a detailed discussion of the problems we have discussed in the revision, such as the accurate definition of the tradeoff, concrete examples, and comparison of rates. This will help the paper become more readable. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for the opportunity to further discuss these points. Please find our responses to your comments below: 1. We appreciate the reviewer’s feedback and would like to clarify that the examples provided in our rebuttal are indeed related to our claim. Specifically, we included: - two instantiations of our upper bound as theoretical examples; - a summary of our simulation results in Appendix G; - a summary of real-world experiments from Luo et al. (2023). All these examples support the observation that increasing the number of sampling steps in consistency models, without careful strategy, can degrade performance. We understand the reviewer’s concern and will ensure to use more precise language in a future revision. 2. 
To clarify, we respectfully note that **the case described in the review and the case described in our paper are not equivalent.** In our paper, the scenario where $\epsilon_{cm} \approx \tau$ does not necessarily indicate a failure case. In practice, the diameter $R$ of the data distribution may scale with the dimension. For example, natural images are supported on a hypercube $[0,1]^d$ where $d$ corresponds to the number of pixels and channels, and $R = \sqrt{d}$ in the worst case. In this context, an error bound of $O(1)$ is actually meaningful, as the trivial upper bound is $\sqrt{d}$, a potentially large quantity. On the other hand, the situation described in the review, where $\epsilon_{cm} \approx \Delta\tau$ and $R = O(1)$, does correspond to a failure case. 3. Regarding the comment that a constant reduction may not be significant, we respectfully disagree. In practice, even constant improvements in evaluation metrics such as the FID score can be quite impactful. For instance, Rombach et al. (LDM) demonstrated impressive results in image synthesis with FID improvements over prior works. - On the CelebA-HQ benchmark (Table 1), LDM achieved an FID of 5.11, compared to baselines ranging from 7.16 to 15.8; - in Table 3, LDM achieved 3.6 vs. 4.59–10.94; - in Table 5, 2.4/4.3 vs. 5.2–15.2; - in Table 7, 9.39 vs. 10.4–30.5. These examples show that constant-level improvements are indeed considered meaningful in the community. 4. - **Regarding the big O notation:** we adopt big O notation primarily to simplify the presentation by hiding constants. Nonetheless, we believe the improvements to the upper bounds remain clear, even with the big O notation. In Case Study 1, we explicitly retain the constants in the leading terms and apply big O notation only to the lower-order terms. In Case Study 2, we highlight a reduction in rate, which clearly demonstrates an improvement despite the use of big O notation. 
Moreover, the exact bounds, including constants, can be readily derived by specifying the $t_j$’s in Theorem 2. - We would also like to emphasize that our main contribution lies in establishing theoretical guarantees for consistency models while relaxing several strong assumptions made in previous works. We appreciate your feedback and will take greater care to ensure precise and clear writing in future revisions.
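The single-step vs. halving-schedule comparison from Case Study 2 discussed in this thread can be sanity-checked numerically. The sketch below (our own check; the values of $R$, $\epsilon_{\text{cm}}$, $\Delta\tau$ are arbitrary illustrative choices) grid-minimizes the one-step bound $\frac{R\sqrt{R}}{\sqrt{t_1}} + t_1 \frac{\epsilon_{\text{cm}}}{\Delta\tau}$ and confirms its minimum scales as $R(\epsilon_{\text{cm}}/\Delta\tau)^{1/3}$, strictly larger than the halving-schedule rate $R\sqrt{\epsilon_{\text{cm}}/\Delta\tau}$ when $\epsilon_{\text{cm}}/\Delta\tau \ll 1$.

```python
import numpy as np

# Illustrative values (our own choice), with eps_cm / dtau << 1.
R, eps_cm, dtau = 1.0, 1e-3, 1.0

def one_step_bound(t1):
    # Equation (12) of the paper specialized to N = 1 (single-step sampling).
    return R * np.sqrt(R) / np.sqrt(t1) + t1 * eps_cm / dtau

t_grid = np.logspace(-2, 4, 400001)
grid_min = one_step_bound(t_grid).min()

# Calculus gives the minimizer t1 = R * (dtau / (2 * eps_cm))**(2/3) and the
# minimum (2**(1/3) + 2**(-2/3)) * R * (eps_cm / dtau)**(1/3), i.e. the
# O(R (eps_cm / dtau)^{1/3}) rate quoted in the rebuttal.
one_step_min = (2 ** (1 / 3) + 2 ** (-2 / 3)) * R * (eps_cm / dtau) ** (1 / 3)
halving_rate = R * np.sqrt(eps_cm / dtau)  # multi-step halving-schedule rate
```

For these values the halving-schedule rate is roughly six times smaller than the best achievable one-step bound, consistent with the claimed rate improvement (with the usual caveat, raised by the reviewer, that both quantities are upper bounds).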
Summary: This paper analyzes consistency models—a recently introduced approach for accelerating sampling in diffusion-based generative models. Unlike classical diffusion models that rely on multiple iterative score-based updates, consistency models learn a direct mapping (“consistency function”) from noise to data while preserving the so-called self-consistency property. This enables both one-step sampling and optional multi-step refinement. In this paper, the authors showed that if the self-consistency property holds approximately (quantified by a small “consistency loss”), then one-step or multi-step sampling from a CM can approximate the target data distribution in the Wasserstein-2 distance. Different from other relevant works, the analysis requires only mild assumptions on the data distribution (e.g., bounded support or sufficiently fast-decaying tails for the Wasserstein results, and log-smoothness for the TV results). The authors proved that two-step sampling can yield appreciable improvement over single-step sampling, but adding more than two steps leads to diminishing gains—mirroring empirical observations in prior work. Claims And Evidence: The theoretical claims follow from some widely used lemmas (e.g., chain rule of KL, Minkowski’s inequality for L2 errors) and do not appear to be overstated or overclaimed. There are no large leaps of logic, and each main theorem is accompanied by a clear proof outline. Claim 1: Wasserstein Guarantees: The derivations rely on standard techniques—Pinsker’s inequality, data-processing inequalities, and mild assumptions (such as bounded or suitably light-tailed distributions). The authors' arguments are mathematically sound and are aligned with recognized results in diffusion modeling theory. Claim 2: Total Variation Bounds: The authors show that smoothing with a small Gaussian convolution can produce a valid bound in TV distance. 
This theoretical technique is consistent with prior approaches in generative modeling when bridging the gap between pointwise errors and distributional overlap. Methods And Evaluation Criteria: This paper is a completely theoretical work, and there are no benchmarks or datasets involved or required. Theoretical Claims: Yes, I checked the correctness of the proofs. Experimental Designs Or Analyses: This paper is a theory work, and no experimental designs or analyses are needed. Supplementary Material: Yes, I did quickly review all the extra theorems and lemmas. All of them are theoretically correct. Relation To Broader Scientific Literature: Prior theoretical works (e.g., Lyu et al. 2023; Dou et al. 2024) examine consistency models but often under stronger Lipschitz assumptions or only variance-preserving SDEs. This paper's novelty is in requiring fewer assumptions on the data distribution ("light tails" or "bounded support" rather than strict Lipschitzness) and analyzing multi-step sampling for general forward processes. They also draw connections to analyses of probability flow ODE solutions in diffusion modeling (Chen et al. 2023, Li et al. 2024, etc.), but adapt these arguments to the direct one-step or few-step sampling approach. Essential References Not Discussed: NA Other Strengths And Weaknesses: 1. Some proofs rely on dimension-free arguments (via Wasserstein and pinned distributions), but real-world data can be extremely high-dimensional. Whether the derived rates remain meaningful in large-scale real tasks is still an open question. I would like to ask the authors about the high-dimension issue: whether the curse of dimensionality will make the derived rates unreasonable in real-world high-dimensional scenarios. 2. It would be very helpful if the authors could do some toy experiments to add some simple empirical ideas. Even a small synthetic or 2D test could give a sense of how the theoretical results manifest empirically.
For example, the paper’s theory suggests that after two steps, gains diminish significantly. Could the authors share real-data experiments or references to confirm that this theoretical phenomenon fully matches practice? 3. You propose different strategies for picking sample times (e.g., geometric halving). Have you considered adaptive schedules dependent on learned \hat{f} or data statistics during sampling? Will it make any difference to the proof? To sum up, this paper offers a valuable theoretical understanding of consistency models under mild data assumptions, particularly clarifying the trade-offs with multi-step sampling. The main theorems are carefully argued, the bounding steps are standard but precise, and the conclusion that “two-step sampling yields a notable boost while further steps add smaller improvements” is both practically and theoretically important. Other Comments Or Suggestions: Please refer to the "strength and weaknesses" section. Questions For Authors: Please refer to the "strength and weaknesses" section. Ethical Review Concerns: There is no ethical review concerns since it is a theoretical work. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your feedback. Please see our detailed responses below: 1. **Regarding high-dimension issue:** even when accounting for the implicit dependency on the dimension, our upper bound remains at most polynomial in dimension and thus does not suffer from the curse of dimensionality. For example, in Corollary 1, our result shows that given an $\epsilon$-accurate consistency function estimate, the Wasserstein distance $W_2(\hat P, P_{\text{data}})$ depends only on consistency error $\epsilon$ (assuming $\Delta \tau = 1$ for simplicity) and the diameter of the distribution $R$. Considering the implicit dependency on $d$: - **the diameter $R$ grows at most polynomially with $d$:** if the ground-truth distribution $P_{\text{data}}$ has support on the $d$-dimensional hyper-cube $[0,1]^d$, then $R=\sqrt{d}$; if the ground truth distribution $P_{\text{data}}$ has support only on a low-dimensional manifold, e.g. $[0,1]\times \{0\} \times \cdots \times \{0\}$, then $R$ can be some constant. - **the consistency error $\epsilon$ scales at most polynomially with $d$:** by definition, $\epsilon$ arises from a $d$-dimensional prediction problem and is expected to scale at most polynomially in $d$. Given these factors, we believe our bounds remain both reasonable and meaningful in high-dimensional real-world scenarios. 2. **Regarding experiments:** Please refer to our Appendix G for simulations related to Case study 1. We show that for two heuristic sampling strategies, increasing the number of sampling steps yields diminishing improvements or even degraded performance. In contrast, our sampling strategy nearly matches the best performance of the heuristic strategies with only two sampling steps. Furthermore, our theoretical upper bound closely approximates the Wasserstein distance observed in practice during sampling. Additionally, Tables 1 and 2 in Luo et al. 
(2023) show that in LCM, 2-step sampling significantly outperforms 1-step sampling in terms of FID score, while 2-, 4-, and 8-step sampling exhibit comparable performance. Notably, 8-step sampling even yields worse FID scores than 4-step sampling. These empirical findings further support our theoretical results. 3. **Regarding adaptive schedules:** - In **Case Study 1**, we optimize the general upper bound in Theorem 2 by choosing $t_j$'s strategically and propose a two-step sampling strategy that adapts to both the problem parameters (the distribution diameter $R$) and the learned $\hat f$ (the consistency error $\epsilon_{\text{cm}}$); - For **Case Study 2**, here is a strategical derivation for a sampling schedule. In order to adjust the $t_j$'s to minimize the error upper bound, we first estimate the lower bound of equation (12). For an arbitrary sampling strategy $t_1 \ge t_2 \ge \ldots \ge t_N$, the summation inside term (i) can be lower bounded by: $$ \sum_{j=2}^N \frac{t_{j-1}^2}{t_j^2} \ge \sum_{j=2}^N \frac{t_j(t_{j-1}-t_j)}{t_j^2} = \sum_{j=2}^N \frac{t_{j-1}-t_j}{t_j} \ge \int_{t_N}^{t_1} \frac{d t}{t} = \log \frac{t_1}{t_N}. $$ A schedule defined by a geometric series approximates this lower bound well. Letting $t_j = t_N \rho^{N-j}$ with $\rho > 1$, we have $\sum_{j=2}^N \frac{t_{j-1}^2}{t_j^2} = \rho^2 \log_\rho\frac{t_1}{t_N} = \frac{\rho^2}{\log \rho}\log\frac{t_1}{t_N} \ge 2e\log\frac{t_1}{t_N}$, where equality holds when $\rho = \sqrt{e}$. Substituting this into equation (12), we obtain: $$ 2R \left(\frac{R^2}{4t_1^2} + \frac{2e\epsilon_{\text{cm}}^2}{4\Delta\tau^2}\left(\log t_1 + \log\frac{1}{t_N}\right)\right)^{1/4} + t_N\frac{\epsilon_{\text{cm}}}{\Delta\tau}. $$ To minimize this expression, we choose $t_1 = \sqrt{\frac{1}{e}}\frac{R\Delta\tau}{\epsilon_{\text{cm}}}$. As $t_N \to 0$, the second term decreases linearly, while the first term increases slowly. Therefore, a reasonable choice is $t_N = \Delta\tau$, a small constant. 
- In addition, for any new sampling strategy, one only needs to specify the noise schedule $\{\alpha_t,\sigma_t^2\}$ and the sampling steps $(t_1, t_2, \ldots, t_N)$ in Theorem 2. The majority of the proof remains unchanged.
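The claim in the geometric-schedule derivation above, that $\rho = \sqrt{e}$ minimizes $\frac{\rho^2}{\log \rho}$ with minimum value $2e$, is easy to verify numerically; here is a quick check (our own sketch, not from the paper):

```python
import math
import numpy as np

# The adaptive-schedule derivation picks rho > 1 to minimize rho^2 / log(rho),
# the factor multiplying log(t_1 / t_N) in the upper bound.
rhos = np.linspace(1.001, 10.0, 1_000_000)
vals = rhos ** 2 / np.log(rhos)

best_rho = rhos[np.argmin(vals)]
best_val = vals.min()

# Calculus: d/drho (rho^2 / log rho) = 0  =>  log rho = 1/2  =>  rho = sqrt(e),
# with minimum value e / (1/2) = 2e.
print(best_rho, math.sqrt(math.e), best_val, 2 * math.e)
```

The grid minimizer agrees with $\rho = \sqrt{e} \approx 1.6487$ and the minimum with $2e \approx 5.4366$, matching the bound $\sum_{j=2}^N \frac{t_{j-1}^2}{t_j^2} \ge 2e \log\frac{t_1}{t_N}$ used in the derivation.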
Universal Approximation of Mean-Field Models via Transformers
Accept (poster)
Summary: The paper considers a mild variant of the transformer model, as part of a larger literature connecting transformers and maps to/from probability measures. Their main result, from my perspective, is Theorem 4.14, which provides a sort of "small time" approximation guarantee that their version of the transformer model can efficiently approximate certain MF ODEs. This is largely a consequence of their main technical result (Theorem 4.7). I find that the title perhaps mismatches the claims, as when reading I was expecting to see approximation guarantees for MFGs or interacting particle systems. Perhaps ODEs should be **clearly** placed in the paper's title and abstract, and not "models", which is far too overpromising. Claims And Evidence: Rigorous and correct proofs. Methods And Evaluation Criteria: Good Theoretical Claims: Not always careful; e.g. - In Assumption 4.3, what range are you allowing for p? $p>0$, or I assume $p\ge 1$. Can $p$ be infinite (e.g. compactly supported measures)? The same issue is in the first line of the proof of Theorem 4.14. Otherwise the proofs seem correct Experimental Designs Or Analyses: Effectively NA Supplementary Material: Rigorously reviewed, and the proofs are correct Relation To Broader Scientific Literature: NA Essential References Not Discussed: The authors compare extensively to Furuya et al. (2024); however, the same author has since provided improved, i.e. fully quantitative, results without passing to a "non-continuous limit" of attention in: - "Is In-Context Universality Enough? MLPs are Also Universal In-Context." (2025). Can you please compare to the more contemporary result of the same author, and not old results? Additionally, the authors should comment on the relationship to the probabilistic transformer of - Kratsios A, Zamanlooy B, Liu T, Dokmanić I. "Universal Approximation Under Constraints is Possible with Transformers." International Conference on Learning Representations (Spotlight).
which is measure-valued and which also yields a universal vector-valued map when the expectation/barycenter map is applied at the output layer. Other Strengths And Weaknesses: I don't understand the point of Assumption 4.3 in proving Theorem 4.14. If $\Omega$ is compact then for any Borel probability measure $\mu \in \mathcal{P}(\Omega)$ and any $p>0$ we have $$ \mathbb{E}_{X\sim \mu}[\|X\|^p] \le \operatorname{diam}(\Omega)^p, $$ whence $\mu\in \cap_{p>0}\mathcal{P}_p(\Omega)$; so why the subscript? Also, if we're in the metric setting, which we are, then why not consider the weakest condition, where $p=1$? Other Comments Or Suggestions: NA Questions For Authors: - Can you shed some light on how to interpret (2), given that the second argument of $\mathcal{F}$ is a measure and there is a derivative of $\mu$ in time (for non-experts in MFGs)? - A key step in the proof of Theorem 4.7 is the quantization results of Fournier et al. Can the authors provide LBs by combining the LBs in [1] with those in [2] in their analysis? [1] Kloeckner, Benoit. "Approximation by finitely supported measures." ESAIM: Control, Optimisation and Calculus of Variations 18.2 (2012): 343-359. [2] DeVore, Ronald A., Ralph Howard, and Charles Micchelli. "Optimal nonlinear approximation." Manuscripta Mathematica 63.4 (1989): 469-478. - Can the authors comment on the aforementioned transformer results and theirs? Especially the more recent Furuya et al. (2025) result? Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and feedback, and we thank the reviewer for finding that our results show that the "transformer model can efficiently approximate certain MF ODEs." We hope that this response answers the reviewer's concerns. > Assumption 4.3 Assumption 4.3 a) is an assumption on the map $\mathcal{F}$ and not on the measures. If we restrict to compactly supported measures, arbitrary maps $\mathcal{F}$ may not satisfy the Lipschitz condition for any $p$. Hence, we assume that the map satisfies the Lipschitz condition for some *finite* $p$. Thus, for any $p$ for which we have the Lipschitz condition, Theorem 4.14 holds for that $p$. > Comparison with Kratsios and Furuya (KF) 2025 and Kratsios et al. (K) 2022 We thank the reviewer for pointing us to these interesting papers. In the interest of fairness, *we would like to point out that the KF 25 follow-up paper was posted to arXiv on 5 February, after the ICML deadline of 30 January.* **Role of the Measure** 1. Our Work: The measure $\mu$ is an input argument representing the state distribution in a mean-field system, directly influencing the vector field $\mathcal{F}(z, \mu)$. This directly models physical or biological system interactions. A key strength is its applicability to general Borel probability measures $\mathcal{P}(\Omega)$ on a compact set $\Omega$. 2. K 22: The measure $\mathbb{P}$ is the output of the model $\hat{F}(x)$, explicitly designed to handle constraint satisfaction by ensuring the output distribution lies within the constraint set $K$. 3. KF 25: The measure $\mu$ is an input argument to the target function $f(\mu, x)$. However, it is restricted to the specific class of Permutation-Invariant Contexts $\mathcal{P}\_{C,N}(\mathcal{X})$ within a geometrically constrained domain $\mathcal{K}$, aimed at analyzing general in-context function approximation. **Map Definition and Transformer** 1.
Our Work: Defines the map $\mu \mapsto \mathcal{T}_n(\cdot, \mu)$ (Measure to Vector Field) using the Expected Transformer $\mathcal{T}_n$, derived by taking the expectation of a standard, finite-sequence transformer $T$. This provides a practical link between standard architectures and measure-theoretic inputs. 2. K 22: Defines the map $x \mapsto \hat{F}(x)$ (Vector to Output Measure) using a modified transformer incorporating Probabilistic Attention, explicitly outputting a measure. 3. KF 25: Defines a map from finite vector spaces to finite vector spaces. However, the inputs and outputs are interpreted as measures on discrete sets. 4. Connection via Thm 4.7: If the map approximated in K \& F satisfies our assumptions, then our Theorem 4.7 could be applied to the transformer $\hat{\mathcal{T}}$ constructed in K \& F's Corollary 5. **Guarantees** 1. Our Work: Provides quantitative $L_\infty$ bounds on the vector field approximation error ($\|\mathcal{T}_n - \mathcal{F}\|$) that explicitly show convergence as the number of particles $n$ increases, linking the error to the quality ($\mathcal{E}$) of the underlying finite transformer. Furthermore, it connects this to the approximation of the system's dynamics ($\mathcal{W}_p$ bounds for $\mu(t)$) via Gronwall's lemma. 2. K 22: Guarantees exact support constraint satisfaction ($supp(\hat{F}(x)) \subseteq K$) combined with quantitative $W_1$ bounds on how well the output measure approximates the target. 3. KF 25: Provides quantitative probabilistic $W_1$ bounds on the output approximation error, dependent on the target function's modulus of continuity $\omega$ and the domain's geometry $q$, focusing on the network size needed for a given precision $\epsilon$. > Interpret (2) Mean-field equations of the kind in (2) are used to model the dynamics of a large system of interacting particles.
Here, the derivative in time models the change in the distribution (or measure) of the particles, and there is not necessarily any underlying variational aspect to their evolution. In this way, mean-field games (MFGs) are distinct: there, the evolution of the agents depends on the measure through a loss function of a coupled game. > Lower Bounds The reviewer is correct that using this, we can lower bound the quantity in Line 535. However, this is only one of the three terms that we bound. Even if we manage to bound each term individually, our decomposition uses a variety of inequalities, such as Jensen's and the triangle inequality, so these individual lower bounds do not directly combine into a lower bound on the overall error. > Title Please note that the particle models we consider are formulated as ODEs (e.g. Eq. 6), while the mean-field models (e.g. Eqs. 2 and 7) are expressed as PDEs. In Theorem 4.14, we demonstrate that solutions of the continuity equation can be approximated by the *approximate continuity equation*, where the transformer replaces $\mathcal{F}$. In essence, our work shows that mean-field models derived from both ODEs and PDEs can be approximated; therefore, the word *models* aligns nicely with the overall theme of our study. --- Rebuttal Comment 1.1: Comment: Dear authors, I have finished reading the rebuttal and am satisfied with it. Thanks for the clear response :)
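The contrast drawn in the "Interpret (2)" exchange above can be summarized in two displays (a sketch in the paper's notation, for non-experts; equation numbers refer to the paper under review):

```latex
% Finite-particle ODE system (cf. Eq. 6): each particle is driven by the
% vector field evaluated at its own state and at the empirical measure.
\frac{d}{dt} z_i(t) = \mathcal{F}\big(z_i(t), \nu^n_{\mathbf{z}(t)}\big),
\qquad
\nu^n_{\mathbf{z}(t)} = \frac{1}{n}\sum_{j=1}^{n} \delta_{z_j(t)} .

% Mean-field limit (cf. Eq. 2): the time derivative of the measure is
% pure transport along \mathcal{F}, with no variational or game
% structure assumed, which is what distinguishes this setting from MFGs.
\partial_t \mu_t = -\nabla_z \cdot \big( \mu_t \, \mathcal{F}(z, \mu_t) \big)
```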
Summary: The authors study how transformers can be used to approximate mean-field models. The analysis is both theoretical and empirical. Empirically, they test the transformers on two different mean-field models. Theoretically, they provide bounds in terms of the $L_\infty$ distance between the expected transformer and the mean-field vector field. **Update after rebuttal** I am satisfied with the authors' answer and confirm my positive evaluation of the paper, thus recommending its acceptance. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: I checked the proof of the main theorem (Theorem 4.7) in Appendix A and it looks correct to me Experimental Designs Or Analyses: yes. The experimental analysis looks sound and valid to me Supplementary Material: Appendix A, Appendix E and Appendix D. Relation To Broader Scientific Literature: Yes, there is a dedicated paragraph in the introduction where the authors mention several works where mean-field games are learned from the particle-level dynamics. Moreover, in section 4 the authors make a more specific comparison with a work in the literature in terms of the theoretical results obtained. Essential References Not Discussed: not that I am aware of Other Strengths And Weaknesses: I think that studying how transformers can be used to approximate mean-field games has very important implications in machine learning. The paper looks original to me and the question the authors try to answer is fundamental. Other Comments Or Suggestions: The authors are invited to provide a more extensive comparison with the work of Furuya 2024 already in the introduction, as it seems to be very related to what they are doing. The sentence "Additionally, Furuya et al. use a continuous version of transformers and attention and provide universal approximation of measure-theoretic maps" is too minimalistic and does not explain the substantial difference between their work and yours.
- In the introduction, the symbols $P(\Omega)$ and $D(\Omega)$ are undefined. Questions For Authors: The authors are invited to provide a more extensive comparison with the work of Furuya 2024 already in the introduction, as it seems to be very related to what they are doing. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and feedback. We thank the reviewer for finding that the problem we study has "very important implications in machine learning" and that "the paper looks original to me and the question the authors try to answer is fundamental". We hope that this response answers the reviewer's concerns. > Comparison with Furuya et al. 2024 We acknowledge the reviewer's comment regarding the need for a more detailed comparison with the work of Furuya et al. (2024) in the introduction. Furuya et al. (2024) establish that deep transformers are universal approximators for general continuous 'in-context' mappings defined on probability measures. Their measure-theoretic approach defines transformers directly on the space of probability distributions, leveraging a continuous version of attention. A key result is that a single deep transformer architecture, with fixed embedding dimensions and a fixed number of heads, can approximate any such continuous mapping uniformly, even when dealing with an arbitrarily large or infinite number of input tokens represented by the measure. Our work, while also connecting transformers to measure theory, takes a different approach tailored to approximating the specific structure of mean-field dynamics in interacting particle systems. Instead of defining a continuous transformer directly, we utilize standard transformers designed for finite sequences and propose a novel lifting mechanism: the 'Expected Transformer' ($\mathcal{T}_n$). This construct maps the standard finite-particle transformer to the space of measures via an expectation over the particle distribution. Our primary goal is not general function approximation on measures, but rather to specifically approximate the vector field $\mathcal{F}$ governing mean-field dynamics and, consequently, the evolution of the system's distribution described by the associated continuity equation.
**Hence, Theorem 4.14 is a major contribution of our work.** The substantial difference lies in both the model definition and the nature of the approximation guarantee. While Furuya et al. provide an existence result for a single, deep, fixed-dimension transformer achieving a target precision $\epsilon$ for arbitrarily many tokens, our work provides quantitative approximation bounds for the Expected Transformer $\mathcal{T}_n$. These bounds explicitly characterize how the approximation error for the infinite-dimensional vector field $\mathcal{F}$ depends on two factors: (i) the approximation quality ($\mathcal{E}$) of the underlying *finite-dimensional* transformer on $n+1$ particles, and (ii) the number of particles $n$ used in the expectation, leveraging known convergence rates of empirical measures in Wasserstein distance. We further utilize these bounds to establish guarantees on approximating the *solution trajectories* of the mean-field continuity equation, linking the vector field approximation error to the error in the dynamics via stability results like Gronwall's inequality. Below are the specific substantial differences highlighted: **Approximation Target** 1. Furuya et al.: Aim to approximate *general continuous in-context mappings* $\Lambda^*(\mu, x)$ defined on probability measures. Require the target mapping $\Lambda^*$ to be *continuous* w.r.t. the weak* topology (plus Lipschitz conditions on contexts for the masked case). Their focus is broad representational power. 2. Our Work: Specifically targets the approximation of the *vector field* $\mathcal{F}(z, \mu)$ governing *mean-field dynamics* and the subsequent approximation of the *dynamical system's evolution* (solution to the continuity equation). Requires the target vector field $\mathcal{F}$ to be *Lipschitz continuous* w.r.t. the spatial and measure arguments (using the Wasserstein distance). **Transformer Definition:** 1.
Furuya et al.: Define transformers directly on the space of probability measures using a *measure-theoretic formulation* with continuous attention layers ($\Gamma_{\theta}(\mu, x)$) 2. Our Work: Uses standard transformers $T$ designed for *finite sequences* of length $n+1$. Introduces the "Expected Transformer" $\mathcal{T}_n(x, \mu)$, which *lifts* the finite transformer's output to the measure space via an expectation operation. **Handling Input Size (Number of Tokens/Particles):** 1. Furuya et al.: Show a *single* transformer architecture (with fixed dimensions/heads) works uniformly for an *arbitrary* number of input tokens (even infinite) for a given precision $\epsilon$. 2. Our Work: The approximation quality of $\mathcal{T}_n$ explicitly *improves as $n$ increases*, reflecting empirical measure convergence. The focus is on convergence behavior. > In the introduction the symbols are undefined We thank the reviewer for pointing this out; we have now added the definitions.
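As a complement to the comparison above, the "Expected Transformer" lifting can be sketched in a few lines. Note that `toy_model` below is a hypothetical permutation-equivariant stand-in for a trained finite-sequence transformer, not the paper's architecture, and the Gaussian sampler is an arbitrary choice of $\mu$ for illustration.

```python
import numpy as np

def toy_model(seq):
    # hypothetical stand-in for a finite-sequence transformer
    # T: (n+1, d) -> (n+1, d); here each token is shifted by the
    # sequence mean, a simple permutation-equivariant map
    return seq - seq.mean(axis=0, keepdims=True)

def expected_transformer(x, sample_mu, n, B, rng):
    """Monte-Carlo estimate of T_n(x, mu): draw B batches of n
    particles from mu, prepend the query point x, run the
    finite-sequence model, and average the output token at x."""
    outs = np.empty((B, x.shape[0]))
    for b in range(B):
        z = sample_mu(n, rng)             # (n, d) particles ~ mu
        seq = np.vstack([x[None, :], z])  # sequence of length n+1
        outs[b] = toy_model(seq)[0]       # output token at position of x
    return outs.mean(axis=0)

rng = np.random.default_rng(0)
sample_gauss = lambda size, r: r.standard_normal((size, 2))  # mu = N(0, I) in d = 2
est = expected_transformer(np.zeros(2), sample_gauss, n=64, B=500, rng=rng)
```

In practice the $B$ sequences can be stacked into a single batched forward pass rather than looped over.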
Summary: This paper shows, both empirically and with theoretical guarantees, that mean-field dynamics ("transport-type" dynamical systems over the space of probability measures, i.e., which take the form of a continuity equation $\partial_t \mu_t = -\nabla_z \cdot (\mu_t \mathcal{F}(z,\mu_t))$) can be approximated up to any finite time horizon using transformers, provided the vector field $\mathcal{F}$ is Lipschitz-continuous in the Wasserstein sense. This is achieved by: - fixing a number $n$ of particles and considering the $n$-particle dynamics corresponding to the desired mean-field dynamics, $\frac{d}{dt} \mathbf{z}\_{ti} = \mathcal{F}(\mathbf{z}\_{ti}, \nu^n\_{\mathbf{z}\_t})$ where $\mathbf{z}\_t \in (R^d)^n$ and $\nu^n\_{\mathbf{z}\_t} = \frac1n \sum\_{i=1}^n \delta\_{\mathbf{z}\_{ti}}$ - choosing a transformer model $T_\theta: \Omega^{n+1} \to R^{(n+1) \times d}$, where $\Omega \subset R^d$ is the domain over which we consider probability measures ($\Omega = R^d$ in the experiments and $\Omega =$ a compact set in the theory sections) - learning a transformer $T = T_{\hat{\theta}}$ that approximates the mapping $\Omega^{n+1} \ni \mathbf{z} \mapsto \left[ \mathcal{F}(\mathbf{z}\_1, \nu^{n+1}\_{\mathbf{z}}), ..., \mathcal{F}(\mathbf{z}\_{n+1}, \nu^{n+1}\_{\mathbf{z}}) \right]$ - considering the vector field, called the expected transformer, $\mathcal{T}\_n: (x, \mu) \mapsto \mathbb E\_{\mathbf{z} \sim \mu^{\otimes n}} T([x, \mathbf{z}])$, and using the associated dynamics $\partial_t \mu\_t = -\nabla\_z \cdot (\mu\_t \mathcal{T}\_n(z,\mu_t))$ as an approximation of the desired mean-field dynamics. On the experimental side, this methodology is validated on two toy examples (the Cucker-Smale and the Fish Milling models) in dimension 4, and on approximating the training dynamics of two-layer neural networks (in the mean-field parametrization). On the theoretical side, the paper provides quantitative estimates of the approximation error for the proposed scheme.
In particular, the approximation error upper bounds vanish when $\Omega$ is compact, the time horizon is fixed, and $n$ goes to infinity. These theoretical guarantees rest upon previous works' results on the approximation power of transformers as sequence-to-sequence mappings. ## update after rebuttal See discussion in the comments. I decided to keep my score of 3, though it's a "strong" 3, because there are still some natural questions raised by this paper that are not really addressed, particularly the significance of the in-expectation result; this being said, this is arguably okay for a conference publication. Claims And Evidence: Yes, the claims are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense. Theoretical Claims: I have checked Appendix A (proof of Thm 4.7) and Appendix C (proof of Thm 4.14). Both contain minor mistakes (or missing steps I wasn't able to fill) which may slightly impact the constants in the bounds: - The last step of the proof of Thm 4.7 concludes with an upper bound which is different from the one in the theorem statement: $\mathcal{L} \frac{2}{n+1}$ instead of $C \mathcal{L} \mathrm{diam}(\Omega)^p n^{p/q-1}$. - In the proof of Thm 4.14, I don't see how line 704 is obtained: if it is again by Young's inequality then there should be an extra $2^{p-1}$ factor a priori. On line 717, I don't see how the first term can be upper-bounded by $\int_0^t \varepsilon ds$; it seems it should be $(\int_0^t \varepsilon ds)^p$, and similarly for the two other terms. Why not just consider $\\|Y(t,x)-X(t,x)\\|_p$ and use triangle inequalities? Even then, the norms used in the inequalities stated in Assumption 4.3 don't seem to be sufficient to have dimension-independent bounds at this step of the proof.
By the way, this proof assumes that in the definition of the distance $W_p$, the distance $d$ is the $p$-norm, which is not necessarily standard, so it may be useful to specify it. For simplicity it might be preferable to stick to using the $2$-norm on $R^d$ and considering $W_p$ distances defined as in Definition 4.2 with $d$ being the $2$-norm, and in the proof of Thm 4.14, bound $\\|Y(t,x)-X(t,x)\\|_2$. The proof of Corollary 4.10 appears to be missing. Experimental Designs Or Analyses: I have not checked the soundness of the experimental designs. Supplementary Material: There is no supplementary material, but code was provided at an anonymized URL. I have not reviewed it. Relation To Broader Scientific Literature: The paper shows end-to-end guarantees for the problem of approximating mean-field dynamics using transformers. The possibility of using transformers for this purpose is not surprising given the universal approximation capabilities of transformers as sequence-to-sequence maps, but this paper gives a complete rigorous analysis. The error bounds obtained in this paper are relatively straightforward consequences of classical techniques in the context of mean-field dynamics, and are likely way too pessimistic, but it is still worthwhile to write those bounds down properly, which this paper does well. Compared to related works, in particular Furuya et al. (2024), this paper takes an alternative and arguably simpler approach. Indeed, by considering the expected transformer (and using an easy triangle-inequality argument, line 516), this paper's approach allows one to apply previous finite-sequence-length approximation results directly "off the shelf". (I am not aware of other works taking this approach, but I am not familiar with the literature on transformers.) Essential References Not Discussed: I am not aware of any essential references that were not discussed.
Other Strengths And Weaknesses: See my comment on the use of the expected transformer in "Relation to Broader Scientific Literature". Other Comments Or Suggestions: It seems to me that the approach taken in this work could be generalized to the case where the map $\mathcal{F}(z,\mu)$ is of the form $\mathcal{G}(z,\mu) + \nabla \log \mu(z)$ where $\mathcal{G}$ is Wasserstein-Lipschitz (in the sense of Assumption 4.3a). That is, it could be generalized to maps containing an isotropic diffusion term. This would connect to the use of transformers for score learning in the context of score-based diffusion models. If the target dynamical system $\partial_t \mu_t = -\nabla_z \cdot (\mu_t \mathcal{F}(z, \mu_t))$ converges (or is stable), then we can hope for a uniform-in-time (resp. polynomial-in-time) approximation error. I wonder if the proposed methodology achieves this, and if this could be shown using this paper's tools. This remark is inspired by the favorable observed behavior for the simulation on the Cucker-Smale model in dimension 4. Questions For Authors: - Please address the minor technical concerns listed in "Theoretical Claims". - Do you know of any practical ML settings in which learning a mean-field dynamics from a finite number of observed trajectories is directly relevant? - The approximation bounds given in the paper are for the dynamics using the expected transformer. In practice, how can the expected transformer mapping be computed? Can the permutation invariance of $T$ be exploited for fast computation? - To complement the empirical comparison of the proposed method's performance compared to baselines in section 3.1: how do these methods compare in terms of computational cost? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and feedback. We thank the reviewer for finding that "the paper shows end-to-end guarantees" and that it is "worthwhile to write those bounds down properly, which this paper does well," and for finding that the paper takes a novel, "arguably simpler approach" compared to prior work. We hope that this response answers the reviewer's concerns. > Last step of Thm 4.7 proof, upper bound different from theorem statement The reviewer is right; we fixed the upper bound. It now reads: $$\left\|\mathcal{T}_n - \mathcal{F} \right\|\_{*} \le \mathcal{E} + \mathscr{L} \text{diam}(\Omega)^{p} \left(\frac{1}{n+1} + CG(n,p,q) \right) $$ > Technical details for theorem 4.14 The reviewer is correct on all counts. We were missing a factor of $2^{p-1}$ in line 704. The term should be $\int_0^t \varepsilon^p ds$. Moreover, we did miss a dimension-related term: $d^{\frac{p}{2}}$, which is due to the conversion between the $\|\cdot\|_\infty$ and $\|\cdot\|_2$ norms. The final result that we obtain is: $$ \mathcal{W}^p\_p(\mu^{\mathcal{F}}(t),\mu^{\mathcal{T}\_n}(t)) \leq 2^{2p-1} d^{\frac{p}{2}} \varepsilon^p t\exp(\mathscr{L}^p 2^{2p-1} d^{\frac{p}{2}} t). $$ We don't need Young's inequality: we use $(a+b)^p \leq 2^{p-1}(a^p+b^p)$ with the triangle inequality to give us the needed result. We now use the classical definition using $\|\cdot\|_2$. > Corollary 4.10 proof missing We thank the reviewer for pointing this out. Here is the sketch: the proof follows from Theorem 4.3 of Alberti et al., which states that for each permutation-equivariant function $f$ and $\epsilon > 0$, there exists a Transformer $T$ such that $$ \sup\_{X \in \mathcal{X}^n}\|f(X) - T(X)\|\_\infty < \epsilon $$ Although the statement does not explicitly provide bounds on the sizes, these can be inferred from the construction.
Specifically, the architecture comprises a two-layer network described by Hornik et al. (1989), followed by a single attention layer, and then another two-layer feedforward network from Hornik et al. (1989), resulting in constant depth. The result of Hornik et al. (1989) does not impose bounds on the widths of these networks. The single attention layer has a width of $1+2d+d'$, where $d'$ remains constant with respect to $d$. > Generalize to maps containing isotropic diffusion term We thank the reviewer for this insightful comment. This is something we are currently looking at. While the approximation of score functions using transformers is a potential application, the score function is not Lipschitz in the measure, so one requires additional work, since the results of our paper don't immediately extend to this case. > Uniform-in-time (resp. polynomial-in-time) approximation error This is a good point; however, this would require a notion of stability on Wasserstein spaces and robustness of stable mean-field systems to perturbations, so it would involve more work to extend ideas from the approximation theory of classical ODEs to the approximation of mean-field ODEs. > Learning mean-field dynamics from a finite number of observed trajectories This work is relevant to generative modeling or sampling problems in which a noise distribution is transported to a target distribution. Another interesting application is training neural networks. We can train a smaller network, approximate the training dynamics using the Transformer, and then increase the width (i.e., increase the number of particles). Then, we could train the large-width model using the Transformer. Note that this doesn't require knowing the training data. > Approximation bounds for the dynamics using the expected transformer In practice, the expected transformer can be approximated quickly. For example, let $x$ be a data point and $\mu$ the measure.
Suppose we have $B$ collections $z^{(1)}, \ldots, z^{(B)}$ of particles, where each $z^{(i)}$ is the state of $n$ particles. Then, we can append $x$ to each of the $B$ collections. For the transformer, we represent the input with a batch size of $B$ and a sequence length of $n+1$. This allows the forward pass to be efficiently parallelized in any modern ML library. After obtaining the outputs, we compute their mean to approximate the expected transformer output. Instead of calculating the theoretical mean, we use the sample average over $B$ samples, which concentrates around the true mean at a rate proportional to the transformer's variance divided by $B$. We expect that a reasonably sized $B$ will yield an accurate estimate. We did this for the Cucker-Smale model. The results can be seen in Figure 4. Here, we used $B = 10000$. On a single GPU, this takes a few seconds. > Time Complexity This is a great question. For the fish milling dataset, on a single L4 GPU: Transformer: 6-10 minutes per model, TransformerConv: 5-10 minutes per model; Cylindrical, FNN, and Kernel were all under 2 minutes for the largest model. The smaller models were on the order of seconds. --- Rebuttal Comment 1.1: Comment: Re "Technical details for Theorem 4.14": - It seems to me that dimension-independent bounds can be obtained if the Euclidean norm is used instead of the $\|\cdot\|\_1$ and $\|\cdot\|\_\infty$ norms in the definition of the regularity assumption, Assumption 4.3 (please correct me otherwise). It might simplify the computations (and make comparison with related works easier) to stick with the Euclidean norm throughout (this is a minor technical point that is entirely up to you, as perhaps there are applications which I am not aware of for which the current form of Assumption 4.3 is more convenient). Re "Uniform-in-time (resp. polynomial-in-time) approximation error": Fair point.
I would personally be curious to see in the future what guarantees can be obtained, especially since most dynamics one is typically interested in (including those in your numerical experiments, actually) do exhibit some kind of stability or convergence. Re "Approximation bounds for the dynamics using the expected transformer": - Do I understand correctly that the $B$ collections of particles, $z^{(1)}, ..., z^{(B)} \in (R^d)^n$, are also updated throughout the run? Using what vector field? My current understanding is that they would need to be updated using the $z^{(1)}, ..., z^{(B)}$ themselves, so the concentration of the sample average becomes unclear, as we lose independence after one iteration. - Is there any way to bound the transformer's variance theoretically? Empirically, does the variance appear to be uniformly bounded (across iterations, datapoints, etc.)? --- Reply to Comment 1.1.1: Comment: > Dimension-independent bounds can be obtained if the Euclidean norm is used The reviewer is right, and we thank them for their comments; these changes make our calculations more consistent and improve the readability. We proved the equivalent versions of Theorems 4.7 and 4.14 when $\|\cdot\|\_\infty, \|\cdot\|\_p$ are replaced by the $\|\cdot\|\_2$ norm. For the new versions we use the $\mathcal{W}\_1$ metric with the $\|\cdot\|\_2$ norm. The new assumptions would read: $$ \| \mathcal{F}(x,\mu) - \mathcal{F}(y,\nu) \|\_{2} \leq \mathscr{L} \left( \| x - y \|\_{2} + \mathcal{W}\_1(\mu, \nu) \right).$$ and $$\| \mathcal{F}(x,\mu) \|\_{2} \leq \mathscr{M} \left( 1 + \| x \|\_{2} + M\_1(\mu) \right), $$ The new norm now reads: $$\| \mathcal{H} \|\_{*} := \sup_{x \in \Omega} \sup\_{\mu \in \mathcal{P}(\Omega)} \| \mathcal{H}(x, \mu) \|\_{2}.$$ The reviewer is right that, by changing to $\|\cdot\|\_2$, the dimension-dependent term in the proof of Theorem 4.14 can be removed.
The new bound in Thm 4.14 now reads: $$ \mathcal{W}\_1(\mu^{\mathcal{F}}(t),\mu^{\mathcal{T}_n}(t)) \leq \varepsilon t\exp(2\mathscr{L} t) $$ > Re "Approximation bounds for the dynamics using the expected transformer": The reviewer's understanding is correct; we would have to update using the Transformer. Hence, the reviewer is correct that the independence only holds for the first iteration. However, empirically, at least for small time horizons with $B=1$ (see Figures 1, 2, 3), the lack of independence doesn't seem to be an issue. Making a proper comparison for the expected transformer beyond the first iteration is tricky, however, as we would need the ground-truth continuous measure, which is non-trivial to compute. For the first iteration, empirically, the variance seems to be 2 orders of magnitude smaller than the mean. For our Cucker-Smale model, we observe a mean on the order of 0.1 and a max variance of 0.003.
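The perturbation bound discussed in this thread can be sanity-checked numerically on a toy one-dimensional linear system. This sketch only illustrates the Gronwall-type estimate $\mathcal{W}_1 \leq \varepsilon t \exp(2\mathscr{L} t)$ with an assumed $\varepsilon$-close surrogate field; it is not the paper's experiment.

```python
import numpy as np

def flow(field, x0, t_final, steps=2000):
    # forward-Euler integration of dx/dt = field(x) for each particle
    x, dt = x0.copy(), t_final / steps
    for _ in range(steps):
        x = x + dt * field(x)
    return x

L, eps, t = 1.0, 0.01, 2.0            # Lipschitz constant, perturbation, horizon
rng = np.random.default_rng(1)
x0 = rng.standard_normal(500)         # initial particles

true_field = lambda x: -x             # L-Lipschitz vector field
approx_field = lambda x: -x + eps     # uniformly eps-close surrogate

xt, yt = flow(true_field, x0, t), flow(approx_field, x0, t)

# in 1-D, W_1 between empirical measures is the mean gap of sorted samples
w1_gap = np.abs(np.sort(xt) - np.sort(yt)).mean()
bound = eps * t * np.exp(2 * L * t)   # the Gronwall-type bound
```

For this contracting field the observed gap is close to $\varepsilon(1-e^{-t})$, comfortably inside the (loose) exponential bound.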
Summary: This paper explores the application of transformers in modeling the mean-field dynamics of interacting particle systems. The study empirically shows that transformers can effectively approximate diverse mean-field models, such as the Cucker-Smale model and systems for training two-layer neural networks. It supports these empirical findings with mathematical theory, proving that the approximation error between the transformer-based and the true mean-field dynamics can be quantified and is dependent on the number of particles used in training. Finally, it establishes theoretical bounds on these errors, enhancing the understanding of transformer capabilities in complex system dynamics. Claims And Evidence: Yes. Methods And Evaluation Criteria: I am somewhat confused by Figure 1 and Table 1. Firstly, for the mean-field model, is it appropriate to use solely the mean squared error to assess performance? I was under the impression that it should also be evaluated by the distance between distributions. Theoretical Claims: No, I did not. I am still confused about the problem setup and the correctness of the evaluation. Experimental Designs Or Analyses: I am not an expert in this area, but the benchmark experiments seem to be limited to models similar to Cucker-Smale, which I believe is not comprehensive enough. Supplementary Material: No, I did not. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: I am not an expert in this area, so I am not sure. Other Strengths And Weaknesses: First of all, I feel really confused about the experimental results. 1. For Table 1, I do not see any benefit of the proposed method, and moreover I feel the distribution distance should also be reported. The proposed method seems to have a small variance, but the absolute mean is not competitive? 2. Figure 2 makes me even more confused. Why is the comparison between SGD (an optimizer) and a Transformer (a model)? Am I missing anything here?
Other Comments Or Suggestions: See weakness. Questions For Authors: See weakness. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their questions and comments, which help improve the paper. We thank the reviewer for finding that our paper "supports these empirical findings with mathematical theory" and "establishes theoretical bounds on these errors, enhancing the understanding of transformer capabilities in complex system dynamics." We hope that our response can clear the reviewer's doubts. > I am somewhat confused by Figure 1 and Table 1. Firstly, for the mean-field model, is it appropriate to solely use mean square error to assess performance? ... it should also be evaluated by the distance between distributions. This is a great question; we have two points. 1. The MSE shown in Table 1 is about approximating the vector field $\mathcal{F}$. For a given distribution $\mu$, $\mathcal{F}(\cdot, \mu)$ is a map from $\Omega \subset \mathbb{R}^d$ to $\mathbb{R}^d$. Since the range of $\mathcal{F}$ is $\mathbb{R}^d$, using the MSE is appropriate in this case. Alternatively, if we were matching the particle positions between the predicted model and the data, it would have made sense to use the 2-Wasserstein distance. 2. Additionally, if we were matching the particle positions between the predicted model and the data, bounds on the MSE would imply bounds on the 2-Wasserstein distance, but not vice versa. According to the definition, $ \mathcal{W}\_2(\mu,\nu)^2 = \inf\_{\gamma \in \Pi(\mu,\nu)} \int \|x-y\|^2 d\gamma(x,y).$ The MSE corresponds to one particular (particle-matching) coupling, thereby providing an upper bound on $ \mathcal{W}\_2(\mu, \nu)^2$. However, the reverse implication does not hold. For instance, consider two particles, $a\_1$ and $a\_2$, that both start at zero. With equal probability, either $a\_1$ moves to 1 and $a\_2$ to $-1$, or vice versa. At the level of distributions, these scenarios yield zero distance. However, the MSE between the two scenarios is 4.
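The two-particle example above can be verified directly; in one dimension the optimal $\mathcal{W}_2$ coupling matches sorted samples, so the computation is exact (an illustrative sketch, not code from the paper):

```python
import numpy as np

# both scenarios end with particles at {+1, -1}; only the labels differ
scenario_a = np.array([ 1.0, -1.0])   # a1 -> 1,  a2 -> -1
scenario_b = np.array([-1.0,  1.0])   # a1 -> -1, a2 -> 1

# MSE compares the particles label by label
mse = np.mean((scenario_a - scenario_b) ** 2)

# squared 2-Wasserstein distance between the empirical measures:
# in 1-D, the optimal coupling matches sorted samples
w2_sq = np.mean((np.sort(scenario_a) - np.sort(scenario_b)) ** 2)
# mse == 4.0 while w2_sq == 0.0, so a small W_2 does not bound the MSE
```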
> For table 1, I do not see any benefit of proposed method, regardless that I feel the distribution distance should also be reported. The proposed method seems to have small variance but the absolute mean is not competitive?

We apologize for the confusion; the $\times 10^{-k}$ factor multiplies both the mean and the variance. For example, on the Cucker-Smale model our method has a mean of $1.9 \times 10^{-6}$, while the next best model, TransformerConv with $m = 20$, has a mean of $3.3 \times 10^{-6}$, nearly double ours. On the fish milling dataset we have a mean of $2.2 \times 10^{-2}$, while the next best model, TransformerConv with $m=3$, has a mean of $6.5 \times 10^{-2}$, nearly three times that of the Transformer. Thus, Table 1 shows that transformers achieve the best MSE.

> For figure 2, it makes me even more confused. Why the comparison is between SGD (optimizer) and Transformer (Model)? Am I missing anything here?

We use the Transformer to approximate the ODE dynamics that govern the evolution of parameters in a two-layer neural network during training. That is, we use the Transformer to train a distinct two-layer neural network. This approach is analogous to the Cucker-Smale model, with the true dynamics defined by Equations (5) and (6). In this context, the true dynamics we aim to approximate are those induced by SGD. As Mei et al. (2019) demonstrated, the SGD dynamics can be expressed through a mean-field equation (Equation (7)), ensuring that our theory is directly applicable here.

> I am not an expert in this area, but the benchmark experiments seem to be limited to models similar to Cucker-Smale, which I believe is not comprehensive enough.

We consider three benchmark datasets. The first consists of simulated dynamics generated using the true Cucker-Smale equations.
The second dataset features real-world observations of fish milling in a pond—actual fish behavior—which, although often conjectured to follow the Cucker-Smale model, is not guaranteed. The third dataset captures the dynamics of SGD for a two-layer neural network, representing a model that is notably distinct from the other two.
Gated Integration of Low-Rank Adaptation for Continual Learning of Language Models
Reject
Summary: This manuscript focuses on the continual learning of language models. Unlike existing LoRA-based continual learning studies, which treat the new and old LoRA branches as contributing equally to old tasks, the authors propose a new method, gated integration of low-rank adaptation (GainLoRA). Specifically, GainLoRA expands a new LoRA branch for each new task and introduces gating modules to integrate the new and old branches; the new gating module minimizes the contribution of the new LoRA branch to old tasks to mitigate the forgetting issue. Experiments were conducted on several language benchmarks with two language models to support the effectiveness of the proposed method. ### update after rebuttal and internal discussion Thanks to the authors for responding to my questions. After the rebuttal, most of my concerns have been addressed. However, during the internal discussion, there were some discussions regarding the completeness of the comparisons with many existing studies in the field of continual learning with pre-trained models (including but not limited to S-prompt/HiDe-prompt/RanPAC/Dual-prompt/NoGRA/HiDe-PET). Based on this consideration, I will keep my original rating "weak accept". Claims And Evidence: Yes, the claims are supported by either theoretical analysis or empirical results. Methods And Evaluation Criteria: The benchmarks used in this manuscript are reasonable for evaluation. Theoretical Claims: The derivation of the conditions on the gating module appears correct. Experimental Designs Or Analyses: The experimental designs and analyses have been checked and appear sound. Supplementary Material: The appendix and supplementary materials have been reviewed. Demo code is provided in the supplementary materials. Relation To Broader Scientific Literature: The core contribution of this manuscript is the design of the gating modules.
This design may have been inspired by related topics such as Mixture-of-Experts. However, it is still novel to see the use of gating modules in the LoRA-based continual learning problem. Essential References Not Discussed: N/A Other Strengths And Weaknesses: ## Strengths 1. The design of the gating modules is reasonable and theoretically sound. 2. The proposed GainLoRA can be plugged into other existing LoRA-based methods to provide further improvements. 3. The experiments were conducted on real-world datasets and models, such as T5 and Llama2, which are at the scale used in production environments. ## Weaknesses 1. For the computational and memory overhead, the authors only emphasize the overhead introduced by trainable parameters. However, the memory and computational overhead of the subspace construction should also be discussed. Other Comments Or Suggestions: The authors should also consider the memory and computation introduced by the subspace construction. Questions For Authors: 1. In the current version, it seems that the authors mainly discussed the computational and memory overhead of trainable parameters. However, in my experience using Gradient Projection Memory (GPM) and its variants, the construction of subspaces $\mathcal{M}$ and the projection operations can also require significant computation and memory. I wonder if the authors could provide quantitative discussions regarding this aspect. 2. In Eq. (12), it seems that $l \in \{1,2,...,T-1\}$ should be $l \in \{i,i+1,...,T-1\}$. 3. Does the gating module introduce significant training instability, particularly in early task learning phases? 4. Is your proposed method compatible with SAPT [1], a recent CL method for LLMs with parameter-efficient tuning? If so, could you please provide performance comparisons in the experimental part? I will accordingly adjust the rating after the author rebuttal.
References: [1] SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language Models. ACL 2024. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1: memory and computational overhead regarding the subspace construction** **A1:** The memory and computational overhead of subspace construction in GainLoRA is minimal due to the small size of the gating module (only 3 layers, see Appendix B.3). We provide detailed analyses below. **Memory:** The number of orthogonal bases stored for each subspace does not exceed its dimension. For T5-Large, the dimensions of the three subspaces are 1024, 100, and 1024, respectively. This results in a worst-case memory of less than 0.3% of the total model parameters ($(2*1024^2+100^2)$/(T5-Large's params)<0.3%). Similar estimates yield 0.07%, 0.5%, and 0.4% for T5-XL, Llama-2-7B, and Llama-2-13B, respectively. Since this calculation represents a rough upper bound, the actual memory is even lower. **Computational overhead:** Subspace construction requires a single forward pass over the task dataset and an SVD on the feature matrices of the gating module. Assume a single forward pass over the task dataset requires $A$ FLOPs. For T5-Large, training a task for 100 epochs needs 100 forward and 100 backward passes. Since a single backward pass has roughly $2A$ FLOPs, the total FLOPs are $300A$. Thus, a single forward pass for subspace construction accounts for only 1/300≈0.33% of total computation. Similar estimates yield 0.33%, 0.67%, and 0.67% for T5-XL, Llama-2-7B, and Llama-2-13B, respectively. SVD is performed on $H_lH_l^T\in\mathbb{R}^{d_l\times d_l}$, where $H_l$ is the feature matrix in the $l$-th layer of the gating module. According to the conclusion from Lecture 31 of the textbook [1], the FLOPs required for the SVD of $H_lH_l^T$ are less than $4d_l^3$. For T5-Large ($d_1=d_3=1024$ and $d_2=100$), this results in $4\times(2\times 1024^3+100^3)\approx 8.6$ GFLOPs, which is negligible compared to a single forward pass with sequence length 128 (see Table 8 in the Appendix).
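The back-of-the-envelope numbers in this answer can be reproduced in a few lines. The T5-Large parameter count (~770M) used below is our assumption for this sketch, not a figure stated in the rebuttal:

```python
# Worst-case memory of the stored orthogonal bases, relative to T5-Large.
# The ~770M parameter count for T5-Large is an assumption of this sketch.
t5_large_params = 770e6
basis_entries = 2 * 1024**2 + 100**2  # one basis vector per subspace dimension
print(f"memory ratio: {basis_entries / t5_large_params:.3%}")  # under 0.3%

# FLOPs bound for the SVDs of the gating module's d_l x d_l feature Grams,
# using the < 4 * d^3 bound the rebuttal cites from [1].
svd_flops = 4 * (2 * 1024**3 + 100**3)
print(f"SVD bound: {svd_flops / 1e9:.1f} GFLOPs")  # ~8.6 GFLOPs
```

Note the expression evaluates to roughly 8.6 GFLOPs; the conclusion that this is negligible next to an LLM forward pass is unchanged.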
Similar calculations give the same conclusion for T5-XL, Llama-2-7B, and Llama-2-13B. We will include these calculations in the final version. Thanks for the suggestions. **Q2: projection operations can also require significant computation** **A2:** The projection operations in Eq.9 and Eq.10 incur minimal computation. Before learning a new task, Eq.9 is applied to the last layer of the gating module, involving at most three matrix multiplications. During training, after a single forward-backward pass, Eq.10 is applied to all layers of the gating module, involving at most nine matrix multiplications. In contrast, a single forward-backward pass of T5 or LLaMA involves hundreds or even thousands of matrix multiplications. Therefore, the computation of the projection operations is negligible compared to the overall training process. We will include these analyses in the final version. Thanks for the suggestions. **Q3: $l\in \{1,2,...,T-1\}$ should be $l\in \{i,i+1,...,T-1\}$.** **A3:** We follow existing works [2,3] to define FT and have verified that the correct formulation is indeed $l \in \{1,2,...,T-1\}$, as stated in their papers. **Q4: Does the gating module introduce significant training instability, particularly in early task learning phases?** **A4:** No, the gating module does not introduce significant training instability. This [figure](https://anonymous.4open.science/r/Re-A3CF/track.png) shows the variation in the gating module's output during training on the 15th new task in Order 1. As observed, the output for new tasks quickly approaches 1, ensuring sufficient adaptation to new tasks without unstable training. Meanwhile, the output for old tasks remains near 0, maintaining stability for old tasks. **Q5: Is your proposed method compatible with SAPT ...?** **A5:** Our method is partially compatible with SAPT but cannot be directly integrated with it. SAPT relies on generated samples for rehearsal, making it incompatible with the rehearsal-free setting considered in this work.
Furthermore, SAPT requires an extra phase to train a generative model for producing old samples, involving multiple forward and backward passes over the task dataset. This leads to significantly higher computational overhead compared to our GPM-based subspace construction, a key concern raised by the reviewer. When rehearsal is allowed, GainLoRA doesn't need to constrain the new gating module but uses generated old samples to minimize its output on old tasks. The following table shows the results for Order 1, where GainLoRA uses the same rehearsal datasets as SAPT and achieves comparable performance. However, SAPT can't be extended to a rehearsal-free setting, while GainLoRA is designed for the rehearsal-free setting. We've cited SAPT and will incorporate this discussion in the final version. Thanks for the suggestions.

| | T5-Large | Llama-2-7B |
|:-|:-:|:-:|
| SAPT-LoRA | 51.38 | 55.88 |
| GainLoRA+rehearsal | 51.62 | 55.93 |

[1] Numerical linear algebra, SIAM 2022
[2] Continual Learning in Low-rank Orthogonal Subspaces, NeurIPS 2020
[3] On tiny episodic memories in continual learning, arXiv 2019

--- Rebuttal Comment 1.1: Comment: Thanks for providing further explanations to my questions. Most of my concerns have been addressed. I decided to increase my rating to Accept. --- Reply to Comment 1.1.1: Comment: We are pleased to see that your key concerns have been effectively addressed. We sincerely appreciate your time and effort in reviewing our response and providing positive feedback.
Summary: This paper introduces GainLoRA, which integrates LoRA with gating mechanisms. GainLoRA expands a new LoRA branch for each task while incorporating task-specific gating modules, for mitigating catastrophic forgetting. Experimental results demonstrate strong performance and provide comprehensive ablations. Claims And Evidence: Yes Methods And Evaluation Criteria: yes Theoretical Claims: yes Experimental Designs Or Analyses: yes Supplementary Material: yes Relation To Broader Scientific Literature: The method aligns with recent trends in parameter-efficient fine-tuning in continual learning. Essential References Not Discussed: Wu, et al. Mixture of lora experts. ICLR2024. Other Strengths And Weaknesses: Strengths: 1. The paper is straightforward and easy to follow. 2. The experiments and ablations are comprehensive, addressing many key concerns. 3. The improvement over prior works is significant, demonstrating better performance in mitigating forgetting. Weaknesses: 1. The idea of using a mixture of LoRA branches is not novel, as it closely resembles the MoE LoRA framework. The primary contribution appears to be its application to the continual learning domain, with added constraints on gate learning. 2. The paper uses $W$ to represent both the pre-trained weight matrices added with LoRA and the weights of the gating module, which could be misleading. 3. The proposed soft-gating mechanism (sigmoid-based gating) diverges from the commonly used top-k gating, raising concerns about scalability. For instance, in the SuperNI benchmark with 1616 tasks, even using a low-rank of 4 would lead to an explosion in the number of parameters during both training and inference. This issue becomes more critical when considering computational throughput and latency constraints. 4. The gradient projection memory method may struggle with extremely long task sequences, as orthogonal subspaces are inherently limited. 
Over time, maintaining orthogonality across a growing number of tasks could degrade performance. 5. The initialization strategy in Eq. 8, which copies the gating weights from the previous task, may introduce a conflict with the desired orthogonal property. Other Comments Or Suggestions: This work is complete and sound, though the novelty feels limited. Questions For Authors: See weakness. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1: The idea of using a mixture of LoRA branches is not novel, as it closely resembles the MoE LoRA framework.** **A1:** Our method is fundamentally different from existing MoE LoRA frameworks, as it specifically addresses continual learning (CL) in a rehearsal-free setting where task identities are unavailable during inference. While MoE-based LoRA frameworks dynamically route inputs across experts, they do not tackle catastrophic forgetting or adapt to sequentially arriving tasks without rehearsal. In contrast, our method leverages gating mechanisms to minimize the interference of the new LoRA branch on old tasks. Furthermore, the paper mentioned by the reviewer, Wu et al., Mixture of LoRA Experts (ICLR 2024), does not focus on CL or address forgetting. Instead, it targets static learning with MoE-style routing, which is fundamentally different from our setting. We will cite this paper and include a discussion in the final version, clarifying the distinctions between our approach and existing MoE-based LoRA methods. Thanks for the suggestions. **Q2: The paper uses $W$ to represent both the pre-trained weight matrices added with LoRA and the weights of the gating module, which could be misleading.** **A2:** In the final version, we will use $G_{l}$ to represent the weight of the $l$-th layer in the gating module, ensuring a clear distinction from the pre-trained weight matrices. Thanks for the suggestions. **Q3: The proposed soft-gating mechanism diverges from the commonly used top-k gating, raising concerns about scalability. For instance, in the SuperNI benchmark with 1616 tasks, ... lead to an explosion in the number of parameters during both training and inference...** **A3:** To the best of our knowledge, the number of tasks in the 15-task sequence setting in our experiments matches or exceeds the scale of nearly all existing CL methods for language models (LMs).
Our results demonstrate that under this setting, our method introduces minimal additional parameters and computational overhead. Notably, since no existing CL method for LMs has explored a sequence with more than 15 tasks, a direct jump from 15 to 1616 tasks seems too challenging at the current stage of the CL community's development. We acknowledge that as the number of tasks increases, the parameter count and computational cost of our method will also grow. However, in an extremely long task sequence, constraining CL methods to avoid parameter growth can lead to insufficient capacity for learning new tasks, which could ultimately degrade performance. Therefore, scaling CL methods to extremely long task sequences remains an open problem. As part of future work, we plan to investigate top-k gating or other adaptive mechanisms to enhance efficiency while preserving performance. We will highlight this issue in the final version. Thanks for the suggestions. **Q4: The gradient projection memory method may struggle with extremely long task sequences, as orthogonal subspaces are inherently limited. Over time, maintaining orthogonality across a growing number of tasks could degrade performance.** **A4:** Performance degradation over long task sequences is a well-known challenge in continual learning (CL) and is not unique to our method. To overcome forgetting, many methods introduce constraints like regularization or orthogonal constraints. As old tasks accumulate, these constraints must be strengthened, which can limit model plasticity and degrade new task performance. Reducing constraints can mitigate performance degradation on new tasks but may risk forgetting old tasks. This trade-off, known as the plasticity-stability dilemma, is inherent to CL and affects all methods, including those using orthogonal subspaces.
In fact, although our method introduces orthogonal subspaces, it mitigates performance degradation because the orthogonal subspaces are applied to the gating module rather than directly to the LoRA parameters. This allows the model to avoid excessive constraints on the LoRA parameters, thereby preserving its ability to learn new tasks. We will provide a more detailed discussion in the final version. Thanks for suggestions. **Q5: The initialization strategy in Eq. 8 may introduce a conflict with the desired orthogonal property.** **A5:** The initialization strategy in Eq. 8 does not conflict with the desired orthogonal property. This is because it only copies the gating weights of the first L layers from the previous task and does not copy their updates. Since Eq. 6 applies to weight updates rather than the weights themselves, there is no conflict between the initialization strategy in Eq. 8 and the desired orthogonal property in Eq. 6. Furthermore, while Eq. 5 is applied to the gating weights in the last layer, Eq. 8 does not initialize this layer’s weights from the previous task. Therefore, there is no conflict between the initialization strategy in Eq. 8 and the desired orthogonal property in Eq. 5.
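As a generic illustration of the kind of orthogonal update discussed in this thread, the following is a GPM-style projection sketch under our own assumptions (the paper's exact Eqs. 5-10 are not reproduced here; dimensions and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 64, 8

# M: orthonormal basis of the subspace spanned by old-task features.
M, _ = np.linalg.qr(rng.standard_normal((d, k)))

def project_out(grad, M):
    """Remove the component of `grad` lying in span(M), so the update
    does not change the module's response to old-task inputs."""
    return grad - M @ (M.T @ grad)

grad = rng.standard_normal(d)
update = project_out(grad, M)

# The projected update is orthogonal to every stored basis vector.
print(np.abs(M.T @ update).max() < 1e-10)  # True
```

The per-step cost is two small matrix-vector products, consistent with the rebuttal's point that projection is cheap relative to a full forward-backward pass.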
Summary: The paper introduces GainLoRA, an approach to mitigate catastrophic forgetting in task-incremental continual learning scenarios by leveraging gated integration of low-rank adapters. This approach expands LoRA branches for each task and introduces gating modules to dynamically control the impact of each branch. Unlike previous approaches that integrate LoRA branches with naive averaging, GainLoRA computes integration coefficients reflecting each LoRA's contribution to the input data, to better adapt to settings where task identities are unavailable. Claims And Evidence: **Well-supported** \ *Claim 1: GainLoRA improves CL (continual learning) performance by mitigating catastrophic forgetting and outperforms SOTA methods* - The authors experimentally show that the proposed GainLoRA approach achieves lower forgetting rates (FT) and higher averaged performance (AP) compared to existing SOTA methods, such as O-LoRA and InfLoRA *Claim 2: The gating module ensures dynamic task adaptation* - The outputs of the gating module illustrated in Figure 5 demonstrate that the gating module assigns higher coefficients to the newly added task - The Table 4 ablation study shows that the task-orthogonality property of the gating modules mitigates the forgetting issue **Partially supported** \ *Claim 3: GainLoRA has minimal computational overhead* - GainLoRA stores a separate LoRA branch and gating module per task, requiring dynamic computation of integration coefficients for each input sample. - While the LoRA branches are lightweight, the cumulative parameter count increases with the number of tasks, potentially limiting scalability in real-world applications where models must handle a large number of tasks across diverse domains. Methods And Evaluation Criteria: **Strengths** - GainLoRA is evaluated on standard CL datasets, including SuperNI and Long Sequence. - The AP and FT metrics are well-established for measuring continual learning effectiveness.
- GainLoRA's robustness is tested across multiple task orders. **Weaknesses** - Additional CL evaluation metrics (FWT, BWT) should be included to provide a more comprehensive assessment of GainLoRA’s ability to enhance knowledge transfer and reduce forgetting. - Task order diversity is not fully explored beyond random sequences. Specifically, evaluating highly similar consecutive tasks (e.g., sentiment analysis → sentiment analysis) is crucial, as the orthogonality constraint may inadvertently hinder knowledge transfer in such cases. - Comparisons with recent SOTA rehearsal-free baselines for language models, such as [MoCL (2024)](https://aclanthology.org/2024.naacl-short.39/), [TaSL (2024)](https://aclanthology.org/2024.acl-long.69/), and widely adopted continual learning baselines like [EWC (2017)](https://www.pnas.org/doi/10.1073/pnas.1611835114) would provide a more comprehensive evaluation of GainLoRA’s effectiveness. Theoretical Claims: The mathematical formulation of gating function and orthogonality constraints are well-defined and theoretically grounded. Experimental Designs Or Analyses: **Strengths** - The selection of benchmarks (SuperNI, Long Sequence) and comparison methods (O-LoRA, C-LoRA, InfLoRA, etc.) is well-grounded. The experiments are appropriately designed for task-incremental CL scenarios, and the ablation studies are thorough and well-executed. **Weaknesses** - Analysis of task order impacts should be included, particularly to assess how the orthogonality constraint in the gating module affects knowledge transfer. Evaluating potential negative transfer risks when tasks are highly similar would strengthen the paper’s insights. - Including multi-task learning (MTL) results would provide an upper bound on performance, offering a benchmark to assess how well GainLoRA retains task performance and mitigates forgetting compared to joint training on all tasks. 
Supplementary Material: The supplementary material is well-structured and provides valuable insights, including: - Proofs of mathematical claims (orthogonality). - Extended results (including experiments on IncLoRA and C-LoRA) and model training details to better demonstrate the GainLoRA's effectiveness. - Computational cost comparisons showing GainLoRA’s overhead is manageable. - Analysis of different gating module designs, providing additional insights into their impact on performance. Relation To Broader Scientific Literature: - Catastrophic forgetting is a critical issue in continual learning, particularly for domain adaptation and test-time adaptation in real-world applications. - GainLoRA contributes to LoRA-based parameter-efficient continual learning, extending prior methods such as O-LoRA and InfLoRA. - Its effectiveness in mitigating forgetting without relying on task identities enhances its practical applicability, making it more adaptable for real-world scenarios where task boundaries are ambiguous or unknown. Essential References Not Discussed: The paper provides a strong foundation by referencing LoRA-based CL methods such as O-LoRA and InfLoRA, but it omits several recent rehearsal-free CL methods such as MoCL, TaSL, KIF that are highly relevant for comparison. See "Methods And Evaluation Criteria". Other Strengths And Weaknesses: **Strengths** - The paper is clearly written with a well-structured presentation, making the proposed GainLoRA method easy to understand and follow. - GainLoRA is straightforward to implement and can be seamlessly integrated with existing LoRA-based continual learning approaches. - Evaluation across multiple model scales (T5, Llama-2-7B, Llama-2-13B) and diverse task orders ensures robust empirical validation. **Weaknesses** - While its application to LoRA integration is innovative, the gating module itself is not inherently novel, as similar gating mechanisms have been explored in PEFT and CL methods. 
- Potential negative transfer effects from the gating module and its orthogonality constraint are not analyzed—specifically, how these constraints impact task similarity, transfer learning, and overall adaptation remains unexplored. Other Comments Or Suggestions: I find the paper well-written, methodologically sound, and a valuable extension of LoRA-based continual learning, with the gating mechanism for task-adaptive LoRA integration being a particularly noteworthy contribution. However, the paper would benefit from a broader analysis of task order effects on gating modules, additional CL evaluation metrics (FWT, BWT), and comparisons with recent SOTA CL methods beyond LoRA-based approaches. Additionally, including MTL results would provide an upper bound on performance, offering a stronger reference for evaluating GainLoRA’s effectiveness. If these concerns are addressed, I would be willing to raise my score. Questions For Authors: 1. How does GainLoRA mitigate negative transfer when continually learning highly similar tasks? Would removing or relaxing the orthogonality constraint improve performance for similar tasks? 2. How does the output distribution of the gating module vary based on task order? Does the size of each task dataset influence the numerical values of the output coefficients (e.g., lower gating weights for low-resource tasks)? Code Of Conduct: Affirmed. Overall Recommendation: 3
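As a hedged sketch of the gated integration this review describes (the shapes, the single-layer sigmoid gate, and the additive combination rule are all our illustrative assumptions, not the paper's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_out, r, n_tasks = 32, 32, 4, 3

W0 = rng.standard_normal((d_out, d_in)) * 0.02          # frozen pre-trained weight
branches = [(rng.standard_normal((d_out, r)) * 0.02,    # B_i (LoRA up-projection)
             rng.standard_normal((r, d_in)) * 0.02)      # A_i (LoRA down-projection)
            for _ in range(n_tasks)]
gates = [rng.standard_normal(d_in) * 0.02 for _ in range(n_tasks)]  # 1-layer gates

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    # Each branch's contribution is scaled by an input-dependent gate a_i(x),
    # rather than the naive equal averaging the review contrasts against.
    out = W0 @ x
    for (B, A), g in zip(branches, gates):
        a = sigmoid(g @ x)
        out += a * (B @ (A @ x))
    return out

y = forward(rng.standard_normal(d_in))
print(y.shape)  # (32,)
```

Driving a new branch's gate toward 0 on old-task inputs is what suppresses interference with old tasks in this setup.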
Rebuttal 1: Rebuttal: **Q1: the cumulative parameter count increases ... potentially limiting scalability in ... a large number of tasks** **A1:** We admit that cumulative parameters increase with more tasks, but scaling to a large number of tasks remains a challenge in CL. To the best of our knowledge, our 15-task sequence setting matches or exceeds those in nearly all existing CL methods for LMs, including TaSL, EWC, and MoCL. Experiments show our method adds minimal additional parameters and computational overhead in this setting. From a capacity perspective, our method scales better than fixed-capacity methods like O-LoRA and InfLoRA since they may lead to insufficient capacity over a large number of tasks. In contrast, our method slightly expands capacity per task. We'll incorporate these discussions into the final version. Thanks for suggestions. **Q2: FWT, BWT should be included** **A2:** This [table](https://anonymous.4open.science/r/Re-A3CF/T1.png) reports FWT and BWT for different methods. Our method achieves the best BWT by preventing new LoRA branches from interfering with old tasks. For FWT, our method is competitive. This is because old gating modules generate coefficients for new task samples on old branches. Since we do not enforce 0 outputs from old gating modules for new task samples, new samples can leverage old LoRA branches' knowledge. Note that we do not claim FWT improvement, and our focus is on enhancing overall CL performance by reducing forgetting, as shown by our best FT and BWT. We'll include these results and discussions in the final version. Thanks for suggestions. **Q3: Comparisons with MoCL, TaSL, EWC** **A3:** For TaSL and EWC, this [table](https://anonymous.4open.science/r/Re-A3CF/T3.png) shows that our methods outperform them in AP and FT. This [table](https://anonymous.4open.science/r/Re-A3CF/T1.png) also shows their FWT and BWT. 
For MoCL, we maintain the same settings as those in MoCL, including 16-shot, 4 tasks, and 3 different orders. After adjusting the update magnitude (see **A7**), this [table](https://anonymous.4open.science/r/Re-A3CF/T4.png) shows that our methods outperform MoCL in the setting where task identities are unavailable during testing. We will cite these references in the final version and make discussions. Thanks for suggestions. **Q4: Including MTL results** **A4:** In the response to Reviewer kvg3 (**A2**), we provide MTL results. These will be added to the final version. Thanks for suggestions. **Q5: the gating module itself is not inherently novel** **A5:** We admit that the gating module has been used before, but in rehearsal-free setting where task identities are unavailable during testing, we are the first to explore how to design gating modules to overcome forgetting. Note that this setting is important and has been considered by many CL methods such as OLoRA and TaSL. **Q6: Evaluating potential negative transfer risks when tasks are highly similar... (e.g., sentiment analysis→sentiment analysis)** **A6:** We evaluated GainLoRA on similar consecutive tasks (3 sentiment analysis tasks: Task363→Task1687→Task875) as suggested. The results in the [table](https://anonymous.4open.science/r/Re-A3CF/T2.png) show that while GainLoRA remains effective, its improvement is smaller than that in the 15-task setting with dissimilar tasks. Furthermore, GainLoRA underperforms InfLoRA and O-LoRA on the new task (Task875) but outperforms them on old tasks (Task363 and Task1687). This indicates orthogonality constraints might hinder forward transfer. This is a common trade-off in CL: constraints help mitigate forgetting but may restrict transfer, particularly for similar tasks. Conversely, weak or no constraints risk forgetting in dissimilar tasks. Future work will explore adaptive strategies: stronger constraints for dissimilar tasks and weaker ones for similar tasks. 
Anyway, our GainLoRA is effective in terms of overall (average) accuracy. These discussions and results will be added in the final version. Thanks for this insightful comment. **Q7: How does the output distribution of the gating module vary based on task order? lower gating weights for low-resource tasks?** **A7:** The output distribution of the new gating module is not significantly affected by task order, as shown in Figure 5 of the text and in this [figure](https://anonymous.4open.science/r/Re-A3CF/Sim.png), which involves the task order with similar tasks mentioned in **A6**. Low-resource tasks may result in lower gating weights, but adjusting the gating module's update magnitude can mitigate this. Specifically, for the 16-shot experiments (see **A3**), as shown in this [figure](https://anonymous.4open.science/r/Re-A3CF/Few.png), using a small learning rate as in many-shot settings leads to insufficient learning, resulting in small coefficients for new tasks and poor performance. Increasing the learning rate boosts both the new-task coefficients and model performance. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed and thoughtful responses. The additional experiments and analyses, including broader baselines and FWT/BWT metrics, have addressed my main concerns. While some limitations remain (e.g., novelty, scalability), the paper is acceptable as it offers a promising approach for rehearsal-free continual learning scenarios. I will adjust my rating to Weak Accept. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful review and constructive feedback. We respectfully clarify that we offer a novel method specifically for the rehearsal-free continual learning setting—an important setting in continual learning.
Regarding scalability, our method is more flexible in terms of model capacity: unlike fixed-capacity methods (e.g., O-LoRA, InfLoRA) that may suffer from insufficient capacity as tasks accumulate, our design incrementally expands capacity with minimal per-task parameter growth. This enables better adaptation to a large number of tasks. We are pleased that our responses have addressed your main concerns, and we sincerely appreciate your time and effort in reviewing our work and providing positive feedback.
Summary: The paper proposes a method for computing the weighting factor of different LoRA components in a continual learning setting. The approach is based on training a new set of LoRA parameters for each new task alongside a gating network. This network is constructed such that it outputs a value of 0 at 0. The method further enforces orthogonality constraints to prior data at initialization and when updating to avoid interference with old tasks. The experiments show improved performance over baselines from the literature on the SuperNI and LongSequence benchmarks. Claims And Evidence: The primary contribution of the paper is a new method and it performs best in the experiments. Methods And Evaluation Criteria: Yes, the benchmarks, metrics and baselines are appropriate. Theoretical Claims: No. Experimental Designs Or Analyses: The design of the experiments is suitable. There are further relevant ablation studies on the individual components of the method. Supplementary Material: Checked B.3 Relation To Broader Scientific Literature: The paper clearly credits the works on orthogonal initialization and updating that it builds on. Wider themes in the related literature (parameter-efficient fine-tuning, continual learning) are similarly discussed and referenced. Essential References Not Discussed: None that I am aware of. Other Strengths And Weaknesses: The paper leverages prior work in a thoughtful and logical way. The method is evaluated thoroughly both in comparison to prior work and in terms of ablations. I could see future work extend on this paper. Other Comments Or Suggestions: * the constraint to not carry any data forward seems a bit artificial to me when a new LoRA module + gate function is added for each task (hence memory use scales linearly with the number of tasks anyway). * please include results for simultaneously training on all tasks as a reference for optimal performance where appropriate, e.g.
in Tab 1 * I would be curious if there are any variants of the method that didn't work? If so it would be helpful to include these in the appendix. Questions For Authors: n/a Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Q1: the constraint to not carry any data forward seems a bit artificial to me when a new LoRA module + gate function is added for each task (hence memory use scales linearly with the number of tasks anyway).** **A1:** The constraint to not carry any data forward is not merely about saving memory but also about preserving privacy and reducing computational overhead [1]. In many real-world scenarios, storing past data is not allowed due to privacy concerns, making rehearsal-free methods essential. Furthermore, some methods that generate pseudo-data for replay require training a generative model, which incurs significant computational overhead. In contrast, rehearsal-free methods like our GainLoRA inherently avoid these issues. In the final version, we will clarify this in more detail. Thanks for the suggestion. **Q2: please include results for simultaneously training on all tasks as a reference for optimal performance where appropriate, e.g. in Tab 1** **A2:** We conducted simultaneous training on all tasks in SuperNI and Long Sequence, and we refer to this method as multi-task learning (MTL). The average performance is reported in the tables below, and we will include these results in Table 1, Table 2 and Table 3 in the final version. Thanks for the suggestion.

| |SuperNI|Long Sequence|
|:-|:-:|:-:|
|T5-Large|52.10|81.63|
|T5-XL|54.12|84.07|

| |SuperNI|
|:-|:-:|
|Llama-2-7B|56.88|
|Llama-2-13B|57.66|

**Q3: I would be curious if there are any variants of the method that didn't work? If so it would be helpful to include these in the appendix.** **A3:** Yes, we explored several variants of our method that did not perform well, and we reported them in the ablation study (Table 4). Specifically, the variant "No Initialization Constraints" replaces $f$ with a sigmoid function, a common choice for gating mechanisms. However, the sigmoid function does not satisfy $f(0)=0$, leading to performance degradation compared to our method.
Similarly, the variant "No Update Constraints" omits orthogonal projection during training, which also results in a significant performance drop. We appreciate the reviewer’s suggestion and will clarify these points further in the final version. [1] A comprehensive survey of continual learning: Theory, method and application, TPAMI 2024. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications and additional results. These further assure me in my recommendation to accept the paper. --- Reply to Comment 1.1.1: Comment: We are pleased that our responses have addressed your concerns, and we sincerely appreciate your time and effort in reviewing our work and providing positive feedback.
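As an aside on the initialization constraint discussed in **A3** above: the point that a plain sigmoid gate violates $f(0)=0$ while a shifted variant satisfies it can be checked in a few lines. The shifted/scaled sigmoid below is an illustrative construction of our own, not necessarily GainLoRA's exact gating function.

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def gate(z: float) -> float:
    # Shifted/scaled sigmoid: one illustrative way to enforce f(0) = 0
    # (not necessarily the exact construction used by GainLoRA).
    return 2.0 * sigmoid(z) - 1.0   # algebraically equal to tanh(z / 2)

# A plain sigmoid gives a brand-new module a nonzero weight at initialization,
# whereas the constrained gate starts fully "off".
assert sigmoid(0.0) == 0.5
assert gate(0.0) == 0.0
```

Any gate with this property keeps a freshly added LoRA branch inactive at initialization, so the model's outputs on old tasks are untouched before the new task is trained.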
How Classifiers Extract General Features for Downstream Tasks: An Asymptotic Analysis in Two-Layer Models
Reject
Summary: The paper investigates how classifiers learn general features that can be directly applied to new tasks without further training. It considers a two-layer neural network trained with a single gradient descent step on a mean‐squared error loss. In an asymptotic regime—where the number of samples, input dimension, and network width all grow proportionally—the authors decompose the learned feature representation into the initial random features and a “spike” signal term introduced by training. They show that when the distribution of unseen classes is similar to that of the training data, the extracted features exhibit strong intra-class cohesion and inter-class separability. A key finding is that if two unseen classes both align with the same training class, separability decreases even when overall similarity is high. Claims And Evidence: The claims are supported by simplified theoretical analysis and several empirical validations. Methods And Evaluation Criteria: The analysis is based on an idealized model—a two-layer network undergoing a single, large gradient descent update—selected for analytical convenience. Very similar analyses should already appear in previous studies, and I will elaborate on this point below. The evaluation includes synthetic experiments designed to control similarity measures, along with empirical tests on standard image datasets. A nearest-neighbor retrieval metric is employed to assess clustering quality, serving as a reasonable proxy for evaluating feature transferability. Theoretical Claims: Yes. The techniques are standard. Experimental Designs Or Analyses: Yes, see above. Supplementary Material: Yes. I loosely go through the mathematical parts. Relation To Broader Scientific Literature: This paper should be related to the feature learning literature in understanding deep learning. Essential References Not Discussed: This is my primary concern regarding this paper. 
There already exists extensive theoretical literature on feature learning, encompassing both linear and nonlinear features, and covering various regimes such as the concentration and proportional regimes. Although I am most familiar with regression settings, classification scenarios should inherently share similar insights, making results transferable. Additionally, earlier theoretical work on classification likely exists, and many papers have also addressed transfer learning extensively. I would appreciate a thorough comparison with the following references: nonlinear feature learning and transfer learning in regression cases (https://arxiv.org/pdf/2311.13774, https://arxiv.org/pdf/2411.17201), linear feature learning and transfer learning in regression (https://arxiv.org/abs/2206.15144), and linear feature learning in the proportional regime using random matrix theory (https://arxiv.org/abs/2410.18938). Moreover, there should be additional relevant references along these lines. Given this context, I am not entirely sure about the novelty of your paper. Other Strengths And Weaknesses: No. Other Comments Or Suggestions: Some notation choices are unusual. For example, in Theorem 3.3, the notation "spike_L" is non-standard and should be avoided in a formal theorem statement. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Attachment link: anonymous.4open.science/r/icmlrebuttal-3B6F Thank you for recognizing the theory as standard and for positively evaluating the experiments. We understand that you wanted us to **clarify the relationship between the relevant studies and ours** in order to highlight the **novelty**. Our work focuses on the **feature transfer of networks trained via classifiers**, particularly in settings akin to metric learning, which has previously been used for performance *without understanding the underlying principles*, in contrast to many previous studies that analyze the feature-learning phenomenon on training data or test errors. For theoretical contributions, we agree, as you mentioned, that "classification scenarios should inherently share similar insights, making results transferable". However, this adoption is challenging due to difficulties in dealing with multiple non-identical and arbitrarily labeled class-conditional distributions. Such modifications to the problem setup require a more generalized assumption on the data distribution, i.e., **a non-centered sub-Gaussian**. Thus we provide mathematical tools to analyze non-centered sub-Gaussian distributions with a Hermite-expandable activation, which are novel relative to existing works (Sections L, M). These technical contributions generalize the previous data assumptions, and we believe future research can build on ours. **First, we reviewed the requested studies** (Wang23, Fu24, Damian22, Dandi24), denoted by first author and year. *Note on Damian22, Wang23, Fu24*: These studies analyze the sample complexity for neural networks to learn the internal structure while regressing a teacher of the form $g(x^\top Ax)$, where this form becomes progressively more complex (here, $x$ comes from a centered distribution). These studies focus on regression problems using a teacher, whereas we assume only **arbitrarily assigned classification labels without using a teacher**.
At the same time, the transfer-learning part analyzes the sample complexity for learning when the function head $g$ is changed while maintaining the internal structure. In contrast, we study feature transfer when inputs from a new distribution are introduced without additional learning. *Note on Dandi24*: Dandi24 analyzed the learned feature extractor using an equivalent model to characterize the test error for *regression* and analyzed the spectral tail behavior of the covariance matrix of the feature extractor. That study dealt with the phenomenon where spikes appear in the spectrum, making the tail heavy-tailed. Similarly, we derive an equivalent model suited to our *classification* setup and analyze the characteristics of the clustering error for unseen distributions (Section 4.1). Then, we analyze how the spike term of the feature extractor operates (Section 4.2). Additionally, we have provided a comparison table in Table 2 of the above anonymous link, which we plan to include in the Appendix to offer further explanation to the readers. We sincerely appreciate the references you shared. **Secondly, regarding the existing research on classification and transfer learning**, we respond as follows. We have already covered existing studies on feature transferability in the Additional Related Works section, which focuses on intuitive explanations of feature transfer (L662) and feature-learning phenomena such as Neural Collapse, mainly for training data (L672). Since you reviewed only the mathematical part of the Supplementary Material, we suggest reviewing Section A. Some additional classification studies were not cited as they either **differ from our framework or focus mainly on training data**. However, we plan to discuss them in our paper for the readers' understanding.
For example, there are papers addressing the alignment phenomenon between the network and the training data in neural network classification tasks (arxiv.org/abs/2307.12851), studies on the increased separability and cohesion of training data features in classification settings (arxiv.org/abs/2012.10424, arxiv.org/abs/1909.06930), and research showing that classifier networks outperform linear classifiers through feature learning (arxiv.org/abs/2206.01717, arxiv.org/abs/2202.07626, arxiv.org/abs/2102.11742). Additionally, there are empirical studies on classifier *transfer learning* (not *feature transfer* like ours, which doesn’t require learning) (arxiv.org/abs/2212.12206) and theoretical studies (arxiv.org/abs/1809.10374, arxiv.org/abs/2006.11650). Furthermore, there are theoretical papers studying optimization properties like implicit margin maximization (arxiv.org/abs/2110.13905, arxiv.org/abs/2305.11788) and interpolation (arxiv.org/abs/2012.02409, arxiv.org/abs/2306.09955). **Finally**, we will standardize spike_L to s_L. Once again, we truly appreciate your thoughtful review and constructive suggestions, and if you have any additional concerns, let us know. We will do our best to address your concerns.
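As context for the Hermite-expandable activations mentioned in this rebuttal, here is a small sketch of how the Hermite coefficients $\mu_k = \mathbb{E}[\sigma(Z)\,\mathrm{He}_k(Z)]/k!$ of an activation can be estimated numerically. The quadrature degree and the choice of tanh are illustrative assumptions of ours, not taken from the paper.

```python
import math
import numpy as np

# Probabilists' Gauss-Hermite quadrature: nodes/weights for the weight
# function exp(-x^2/2); normalizing the weights gives expectations under N(0,1).
nodes, weights = np.polynomial.hermite_e.hermegauss(80)
w = weights / weights.sum()

def hermite_coeff(sigma, k):
    """k-th Hermite coefficient mu_k = E[sigma(Z) He_k(Z)] / k!, Z ~ N(0,1)."""
    He_k = np.polynomial.hermite_e.HermiteE.basis(k)(nodes)
    return float(np.sum(w * sigma(nodes) * He_k)) / math.factorial(k)

# tanh is odd, so its even coefficients vanish; by Stein's identity,
# mu_1 = E[Z tanh(Z)] = E[1 - tanh^2(Z)] > 0.
mu0 = hermite_coeff(np.tanh, 0)
mu1 = hermite_coeff(np.tanh, 1)
```

In one-gradient-step analyses of this kind, the first coefficient $\mu_1$ typically controls the strength of the rank-one "spike" that training injects into the feature matrix.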
Summary: This paper studies how a two-layer classifier, trained with mean-squared error for multi-class problems, learns features that can cluster unseen data. The main theoretical result is an exact characterization of a single-step gradient update of the network features, derived under proportional asymptotics (sample size, width, and data dimension diverging at constant ratios) and a mixture of sub-Gaussian covariate distributions. More precisely, they show that after a single gradient step, the feature matrix can be asymptotically approximated by a low-rank spiked matrix. From this result, the authors draw the following conclusions: - **Multi-class spikes**: the feature matrix depends mostly on a linear combination of classifier weights aligned with the data, implying that "off-angle" or unrelated training directions have negligible effect. - **Cohesion and separability**: By looking at the population risk for binary-class unseen data, this characterization implies that train-unseen similarity drives intra-class cohesion and inter-class separability – but separability degrades if new classes map to the same training label. Numerical experiments on real and synthetic data are provided to illustrate the theoretical results. Claims And Evidence: Most of the claims in this work are mathematical statements, which are supported by rigorous proofs in the appendix. Methods And Evaluation Criteria: N/A. Theoretical Claims: I skimmed through the proofs in the appendix. The key steps mostly adapt previous results by (Ba et al., 2022; Dandi et al., 2024; Moniri et al., 2024) to the multi-class classification setting studied here. Honestly, the lack of text highlighting the main ideas makes it challenging to parse the proof, even for a reader familiar with the works cited above.
Therefore, even after skimming through it I am not in a position to say the proofs are correct, and I would strongly encourage the authors to rewrite the appendix with readability in mind. Experimental Designs Or Analyses: Several numerical experiments are presented, which seem mostly to agree with the theory. I did not check them in detail, but I think this is not the main point of the paper. Supplementary Material: Yes. I mostly went over the theoretical part (Appendix G to M). As highlighted in **Theoretical Claims**, I found the appendix challenging to parse. Relation To Broader Scientific Literature: This paper belongs to a recent wave in the machine learning theory literature looking at the benefits of feature learning after a few (or here, a single) large gradient step from initialization (Damian et al., 2022; Ba et al., 2022; Dandi et al., 2024). The key result, which is the asymptotic characterization of the network feature matrix after a single step, heavily builds on the analysis developed in these works, adapting it to the case of an input data mixture distribution. Essential References Not Discussed: While some works in this literature are acknowledged, I have found some relevant omissions - most importantly [Dandi et al., 2024], which proves results that are complementary to (Damian et al., 2022; Ba et al., 2022) and precedes (Moniri et al., 2024). Also relevant to the regime studied here are (Cui et al., 2024; Dandi et al., 2024), who provided an exact characterization of the gradient step in the critical learning-rate regime, which is complementary to the results of (Moniri et al., 2024) that hold in the sub-critical regime. Finally, a related recent work is (Demir & Dogan 2024), who generalized this discussion to mixture distributions, in a similar spirit to this work. The authors should provide a comparison of their technical contributions with this work, since it is possible there is some technical overlap.
- [Dandi et al., 2024] Dandi, Yatin, Florent Krzakala, Bruno Loureiro, Luca Pesce, and Ludovic Stephan. "How two-layer neural networks learn, one (giant) step at a time." JMLR 2024. - [Cui et al., 2024] Hugo Cui, Luca Pesce, Yatin Dandi, Florent Krzakala, Yue Lu, Lenka Zdeborova, Bruno Loureiro. "Asymptotics of feature learning in two-layer networks after one gradient-step." ICML 2024. - [Dandi et al., 2024] Dandi, Yatin, Luca Pesce, Hugo Cui, Florent Krzakala, Yue M. Lu, and Bruno Loureiro. "A random matrix theory perspective on the spectrum of learned features and asymptotic generalization capabilities." arXiv preprint arXiv:2410.18938 (2024). - [Demir & Dogan 2024] Demir, S., & Dogan, Z. "Asymptotic Analysis of Two-Layer Neural Networks after One Gradient Step under Gaussian Mixtures Data with Structure." ICLR 2024. Other Strengths And Weaknesses: The motivation of this work and the phenomenology are very interesting, and this could have been a nice paper if it were more clearly written. However, the heavy (and sometimes redundant) notation, combined with many typos and confusing writing, makes the reading quite challenging, even for a reader familiar with this line of work. The manuscript would definitely benefit from an extensive rewriting. Below I give a few concrete suggestions. Other Comments Or Suggestions: - The scales in Fig. 2 and 4 are very big. It would be better to plot these quantities with the correct normalization with respect to the dimensions $d,N,n$ to have more meaningful numbers. - Although you do it implicitly, it would be nice to give the explicit definition of the matrix $\mathbb{A}$ in eq. (3), which is relevant to the result that follows. - It is not easy to keep track of which quantities here are matrices/vectors or components of matrices/vectors. I suggest the authors stress this difference, for instance by putting the former in bold.
- There is an inconsistency in the usage of bold and non-bold letters throughout the manuscript, for instance in the dimensions $N,p,d$. - There are many sentences which seem incomplete in the text. For example, at the end of page 2: "The number of problem \#_{P} \overset{\Delta}{=} ...$. To improve readability, I would encourage the authors to go through the text and complete these. - Page 2, L090, left-column: "sementic" - I don't understand the limit $\sum_{i<j}^{c}$ in Eq. (1) - is this correct? Is $c=N/n$? Questions For Authors: My main concerns were listed in the previous points. However, I have one clarification question for the authors: - If I understood correctly, the learning rate in the gradient step is exactly $1$? Accounting for the different choices of normalization, how does this compare with the scale of the learning rate of (Ba et al., 2022; Dandi et al., 2024; Moniri et al., 2024)? More precisely: is this choice sub-critical (as in Moniri et al., 2024) or critical (as in Cui et al., 2024)? Code Of Conduct: Affirmed. Overall Recommendation: 1
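To make the "single large gradient step produces a low-rank spike" picture discussed in this review concrete, a minimal numerical sketch follows. All dimensions, scalings, and the tanh activation are illustrative choices of mine rather than the paper's exact setup; the rank-one structure of the first-layer gradient is in the spirit of Ba et al. (2022).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, N = 4000, 400, 400                       # samples, input dim, width (proportional regime)
X = rng.standard_normal((n, d))                # rows x_i with norm ~ sqrt(d)
y = np.sign(rng.standard_normal(n))            # arbitrary +/-1 class labels
W = rng.standard_normal((N, d)) / np.sqrt(d)   # first-layer init, roughly unit-norm rows
a = rng.standard_normal(N) / np.sqrt(N)        # fixed second-layer weights

# One full-batch gradient of the MSE loss w.r.t. W for f(x) = a^T tanh(W x):
# dL/dW = (1/n) sum_i err_i * (a ⊙ tanh'(W x_i)) x_i^T
Z = np.tanh(X @ W.T)                           # (n, N) hidden activations
err = Z @ a - y                                # residuals
G = ((err[:, None] * (1.0 - Z**2)) * a).T @ X / n   # (N, d) gradient matrix

# In this regime the gradient concentrates around a rank-one "spike"
# with left singular vector ~ a; inspect the spectral gap.
s = np.linalg.svd(G, compute_uv=False)
print(s[:4] / s[0])
```

Running this, the leading singular value stands well clear of the rest of the spectrum, which is the low-rank spike the review summary refers to; the paper's multi-class result replaces the single spike with one per training class.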
Rebuttal 1: Rebuttal: Attachment link: anonymous.4open.science/r/icmlrebuttal-3B6F Thank you for your comments. We also appreciate your feedback that the motivation of our work and the phenomenology are very interesting. After reviewing your comments, we found that you suggested improvements for readability, understood the paper's core focus as theoretical, requested a discussion of its relation to certain references, and faced difficulties in comprehending the proof section. Through this rebuttal, we aim to 1. inform you of the revisions made based on your suggestions, 2. clarify our contributions beyond theory, 3. discuss the gap between our work and the suggested references, 4. assist your understanding of our theory, and 5, 6. respond to some misunderstandings. 1. We made the following revisions based on your feedback. First, for readability, we fixed the typos you pointed out. Second, for better clarity of the theory, we included the main idea before each proof along with the proof structure, as in the above link's Tables 3-8 and Figs. 4-8. Also, we placed the proofs of the main Theorems and Propositions upfront, while moving the lemmas back. Third, we included references to the studies you suggested. 2. Contrary to the comment on experiments that "I think this is not the main point of the paper.", **the results in Section 4 and the experiments are major contributions of our study**. This work offers theory-based intuition for data collection, with experiments on settings not explored in the metric-learning literature. It contributes by leveraging existing theories with empirical adaptation, **enhancing the understanding of feature transfer and establishing widely accepted intuitions (as in *L662*) on a rigorous foundation.** We hope this is reflected in your evaluation. 3. As stated in the right column on L84, **the research we present is not a direct extension of the feature-learning literature, including the works you suggested**.
The studies you mentioned focus on *teacher-student regression*. On the other hand, our research analyzes feature transfer and clustering tasks in a *classification setup*. Therefore, we primarily cited the baselines that are required for the proof. However, the distribution we deal with is non-centered sub-Gaussian, which generalizes all the data setups in the works you suggested. Additionally, you can find the comparisons between the papers you mentioned and our approach in Table 2 of the attachment link. 4. It seems you missed the logical flow of the proof, so we summarize and clarify its implications as follows. Lemmas (I.xx, J.xx) resemble those in Ba et al. (2022) and Moniri et al. (2024), which deal with regression, but our classifier setting **needs to deal with multiple non-identical distributions**. This requires novel proof techniques in Sections I and J, and a more general class of distributions to be handled in Sections L and M. There, **we develop mathematical tools for analyzing sub-Gaussian distributions and Hermite-expandable activations**, extending previous data assumptions. We trust this enhances your comprehension of our contributions. 5. We address sub-Gaussian distributions rather than Gaussian mixture distributions, which is a more general distribution class. **We did not use the word "mixture" in the text. Thus, could you clarify why you mentioned this in the Summary and in referencing Scientific Literature?** After reviewing Demir & Dogan 2024, we found that it is contemporaneous work. They assume a Gaussian mixture for the data, propose a Conditional Gaussian Equivalence, and stochastically approximate a 2-layer network. This might seem similar to our Theorem 3.3, but **our proof requires neither conditional approximations nor constructing a stochastically equivalent model**, since we only utilize the sub-Gaussian property. This provides an advantage in analyzing the features of new data in a simple form. 6.
Regarding the rejection, we understand that it was due to a lack of clarity in the proof of the Theorem. We would appreciate it if you could clarify which part was difficult to understand, leading to your statement, "I am not in a position to say the proofs are correct". **Your concrete suggestions mainly focused on typographical errors**, the distinction between vectors and matrices, etc. **We do not believe these significantly hinder the understanding.** Also, we appreciate your understanding that novel analysis unavoidably induces unfamiliar notation. **Answer for Question**: The learning rate is exactly 1. Normalization follows Moniri et al., 2024. The learning rate corresponds to the case where $\alpha = 0$ in Moniri et al., 2024, or $\Theta(\sqrt{n})$ in Cui et al., 2024. In this setup, we make $||G||, ||W_0||, ||W|| = O_p(1)$. We will add the assumption that $\eta = \Theta(1)$ for clarity. Finally, the constant $c$ in Eq. 1 is #_cls. We will revise this. Looking forward to your response and discussion. Thank you. --- Rebuttal Comment 1.1: Comment: I thank the authors for clarifying some points in their rebuttal and for the updates. To clarify, my score is not based on a single factor (e.g. clarity) but on a combination of factors, which in my judgement suggests the paper would benefit from substantial rewriting, justifying a resubmission. Coming to the specific points of your rebuttal. > the results in Section 4 and experiments are the major contributions of our study. I find it rather striking that you highlight the experiments as the major contribution of your work. First, this does not translate into the writing: both in the abstract and introduction, the experiments are referred to as a "*validation*" of the theoretical findings, and not the opposite. Second, I honestly find the plots you present in Section 5 illegible. There are too many plots, with small fonts and curves that do not convey a clear message.
I don't think it is normal that I have to zoom in to understand them. Moreover, in most of the plots the y-axis is a quantity that scales badly with the problem dimensions (and in some plots also the x-axis). As a constructive suggestion, I would suggest you focus on fewer plots that convey stronger and clearer conclusions. > We did not use the word "mixture" in the text. Maybe I misunderstood something. A mixture distribution $p(x)$ is a distribution of the form: $$ p(x) = \sum\limits_{c\in\mathcal{C}} \alpha_{c} p_{c}(x) $$ for a countable set $\mathcal{C}$, a sequence of $\alpha_{c}\in[0,1]$ with $\sum_{c\in\mathcal{C}}\alpha_{c}=1$ and $p_{c}(x)$ a family of distributions indexed by $c\in\mathcal{C}$. Can you clarify how Assumption 2.2 is different from a mixture of sub-Gaussian distributions? > We would appreciate it if you could clarify which part was difficult to understand, leading to your statement, "I am not in a position to say the proofs are correct"? The way the theoretical part of the appendix is written is hard to parse. For concreteness, consider for instance "*Appendix I. Proof of Theorem 3.1*". The first Lemma is a bunch of symbolic mathematical statements with no text. Assumptions are not stated. You talk about "aforementioned" matrices $\mathbb{A},\mathbb{B},\mathbb{C}$, but where have you defined them? The rest of the Appendix follows pretty much the same structure: a bunch of mathematical formulas, almost no text or context, and with the assumptions almost always implicit in the statements of the mathematical results. --- Reply to Comment 1.1.1: Comment: Thank you for staying engaged in the discussion. Following the initial rebuttal, we consider the issues raised by R1, R3, R4, R5 (rebuttal index), and Question 1 to be resolved, as no further objections were made. In this round, although described as a combined basis for rejection, we believe some points have been clarified in revision without rewriting, while others are addressed through rebuttals: 1.
Sub-Gaussian vs. mixture: The question seems to conflate basic distributional concepts, whereas **this assumption serves as a central enabler** of our contribution. 2. Revisit R2: We clarified a few sentences to more explicitly emphasize that Sections 4 and 5 are key contributions, as **already stated throughout the paper**. 3. Section 5 figure clarity: **Newly raised; we respectfully contest**: the figures are sufficiently **clear and minimal** for supporting our claims. 4. Revisit R6: We note that **this focuses on style rather than the correctness** of the theory. We have revised the text accordingly for clarity. **First, sub-Gaussian vs. mixture**: The question on Assumption 2.2 seems to reflect a misunderstanding of its role. We assume class-conditional distributions are non-centered sub-Gaussian (defined via tail behavior; Vershynin, 2018), enabling generalization over prior works and forming the theoretical basis of our deterministic feature analysis for classifiers in Sections 3 and 4. Since mixtures of sub-Gaussians remain sub-Gaussian due to the tail behavior, mentioning "mixture" is redundant. We suspect the confusion stems from "class-conditional," but clarify that unconditioned distributions (possibly represented as a mixture) are not the object of analysis. **Second**, while you comment that the role of Sections 4 and 5 as key contributions is not clearly conveyed, we note that this point is emphasized consistently throughout the paper, where we repeatedly highlight the practical perspectives: - L30 (Abstract): "demonstrate practical applicability" - L46 (Intro): Motivated by the underexplored nature of transferable conditions - L408 (Conclusion): Offers empirical insights and implications for transfer tasks Nonetheless, we revised a few sentences to clarify that "validate" (L87, L107) refers to theory-driven practical explanation, not just validation of the approximation.
**Third**, the opinion that the figures in Section 5 are illegible may not be appropriate: - Number of plots: The number of plots is justified by the need to support each claim with a minimal set—one summary figure per experiment, mostly. - Need to zoom in: We disagree with your concern about figure size. The current scale is already sufficient—for instance, in Figure 7, the text is roughly 3/4 the size of the caption, which is not atypical. Moreover, the key message (trend direction) remains clearly readable without zooming. - Scale: This issue appears only in Figures 7 and 8 due to unnormalized high-dimensional features. Normalizing would complicate the setup explanation, so we respectfully disagree. **Also, we note that UifN carefully reviewed the experimental results and provided constructive feedback, and neither RnQR nor UifN raised concerns about legibility.** **Fourth**, the renewed concern about the readability of the theoretical section appears to be a matter of presentation style—including (a) formula-centric presentation, (b) implicit definitions, and (c) implicit assumptions/contexts—rather than a substantive issue. We note that no specific errors were raised or suspected. With RnQR's positive evaluation, we claim that our theory is structurally and formally sound. **Minor improvements such as restating important assumptions or adding brief context have been made in revision, but we do not believe these issues warrant a full rewrite.** Responses to specific points: - The remark on a "bunch of symbolic math with no text" reflects a stylistic preference, e.g., regarding Lemma I.1. This is acceptable in theoretical papers; see, e.g., Lemmas 14/15 of Ba et al. (2022) for a comparable format. - The notations A, B, C were already made explicit in response to earlier feedback. As you reviewed the theory closely, we hope Eqs. 13, 20, and 30 were not overlooked. - We explicitly state that our main results rely on Assumptions 2.1/2.2 and Condition 4.4, so we did not repeat them in every lemma.
Additionally, whenever we use external results, we have ensured citations for context (e.g., L2243, L2255). Moreover, we believe this work is timely for publication with few revisions. The references you and RnQR provided (Table 2 in the Attachment) suggest that the field is entering a mature phase, with recent studies extending to more theoretical generalizations. *Our work goes beyond commonly proposed complex assumptions and shows that this line of research can extend to practical tasks like classification and feature transfer.* It demonstrates how theory can inform real-world problems, and we believe sharing this idea with the community now can **notably stimulate future progress.**
Summary: This paper explores how classifiers extract general features for transfer to new distributions. It analyzes a two-layer network in the proportional regime, decomposing features into components like random initialization and spikes related to training classes. In binary classification, train–unseen similarity affects cohesion and separability. Higher similarity increases cohesion, and for separability, it depends on class assignment. In multi-class classification, spikes non-orthogonal to the input contribute to feature extraction. Experiments on synthetic and real datasets demonstrate the theoretical findings, showing that semantic similarity between training and test data improves clustering performance. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Although the paper starts its analysis from a two-layer network, it considers the network width and dataset size at a similar scale, which aligns with common practices in model scaling. Experimental results on ResNet50 also show results consistent with the analysis. Theoretical Claims: No. Experimental Designs Or Analyses: Yes. The experimental design in Expr. V seems a little unfair. According to Supp. N, the number of related images between sub In1k and subsampled whole In1k is different, so it is hard to reach the conclusion in L411. Supplementary Material: Yes. Supp. B, C, D, E and N. Relation To Broader Scientific Literature: The paper provides a theoretical analysis method for understanding the transfer learning problem in deep learning, which is beneficial for downstream applications. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. Offers a rigorous theoretical analysis of feature learning in two-layer neural networks using Hermite decomposition. 2. Uses both synthetic and real-world datasets to demonstrate the effectiveness. 3. Provides insights into feature transferability. Weaknesses: 1. Unfair comparison.
The experimental design in Expr. V seems a little unfair. According to Supp. N, the number of related images between sub In1k and subsampled whole In1k is different, so it is hard to reach the conclusion in L411. 2. The experimental setup on ImageNet cannot well support their claims. This paper uses four semantic categories in ImageNet to show that "adding semantically relevant classes to the training set leads to performance gains"; however, these semantic categories have different classes and different internal similarity. In other words, these semantic categories have different diversity. For example, the semantic similarity between cock and hen (birds) is surely closer than that between guitar and analog clock. I guess a better way is to establish four semantic categories from four fine-grained datasets. Or, consider the internal similarity when constructing the training set. 3. Organization problem of the paper. Some important experimental setup (which showcases the fair comparison) should be included in the main text to avoid readers having to turn pages repeatedly. Other Comments Or Suggestions: None. Questions For Authors: Please refer to weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Attachment link: anonymous.4open.science/r/icmlrebuttal-3B6F Your thoughtful feedback is a great encouragement and reaffirms our commitment to furthering this research. We understand that your primary concern lies in fairness during experimental validation. Thus, the primary purpose of this rebuttal is **to dispel any misunderstanding that this setup is unfair**, and to clarify that fairness has indeed been ensured. Moreover, *in response to your point about the difficulty of verifying fairness* as **W3**, we included **the experimental setup mentioned in Appendix N in the main text.** After carefully reviewing your comments, we respond to your concerns and propose the following revisions: **First, we address the concern in W1 about the unfairness of the claim in L411**, i.e., Expr. V. It seems to stem from the following sentences. _L355_: we performed experiments on the whole classes ImageNet by sampling $100$ instances per class (say subsampled whole In1k). _L411_: We find that adding classes from the entire ImageNet dataset during training, rather than including only related classes, does not significantly improve clustering _Appendix N_: To balance the number of samples per class with those in the base fine-grained datasets, we extracted $82$, $58$, $5$, and $6$ samples per class for I(V), I(B), I(P), and I(C), respectively. We sincerely apologize for the omission of some information in _L355_. To be more precise, _L355_ only specified the setup for Expr. VII, which used the subsampled whole In1k. In this case, as in _L355_, we extracted $100$ instances per class from the entire ImageNet dataset. Since this experiment is related to the case of duplicate assignment and we use subsampled whole In1k alone, the situation you were concerned about did not occur. **On the other hand**, for Expr. V, we did not perform sampling in the same way. We agreed with these concerns from the outset and took them into account when designing the experiment.
In Expr. V, to ensure fairness, we extracted $82$, $58$, $5$, and $6$ instances per class from the I+D case datasets and performed the experiment accordingly. We sincerely apologize for the omission of this detail when explaining the setup. To ensure the accuracy of this experiment, we _re-inspected_ the code for extracting $82$, $58$, $5$, and $6$ instances per class for the D+I case using subsampled whole In1k in Expr. V (Listing 1 in the above anonymous link) and then _conducted the experiment again_. As a result, we obtained nearly identical performance to the original results (with only minor performance variations due to seed differences, which had no impact on the claims). This can be confirmed in Figure 1 and Table 2 in the above anonymous link. **Secondly, you raised a concern in W2 that the ImageNet setup may not sufficiently support the claim that "adding semantically relevant classes to the training set leads to performance gains."** However, there is a misunderstanding regarding this point. We presented this claim in _L90_ and the left column of _L414_, and the experiment that supports this claim is Expr. VI. We designed Expr. VI based on the same reasoning as yours, which is why **we do not make the above claim by adding similar classes using a subset of ImageNet.** Consequently, as stated in _L430_ and as you suggested, in Expr. VI, we conducted experiments using $25$%, $50$%, $75$%, and $100$% of the domain dataset classes exclusively, i.e., the four fine-grained datasets. As a result, we observed a trend where adding classes within a dataset led to performance improvements, which formed the basis of our claim. However, we have realized that this claim was not explicitly stated in the explanation of Expr. VI. To reduce any potential misunderstandings, we will explicitly add the statement "adding semantically relevant classes to the training set leads to performance gains" in _L429_, based on Expr. VI.
Additionally, we will clarify that the domain datasets used are fine-grained datasets such as CUB, CAR, SOP, and ISC. We hope this answer helps you understand this study better, and we would appreciate your continued positive assessment. Thank you.
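The per-class balancing described above (drawing a fixed quota of instances per class, e.g. 82/58/5/6 as stated in Appendix N) can be sketched as follows. This is a minimal illustration, not the authors' actual code; the function name, the `(image_id, class_label)` data layout, and the seeding scheme are our assumptions:

```python
import random
from collections import defaultdict

def subsample_per_class(samples, per_class, seed=0):
    """Draw a fixed number of samples per class to balance a dataset.

    samples: list of (image_id, class_label) pairs.
    per_class: number of instances to keep for every class.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    by_class = defaultdict(list)
    for item, label in samples:
        by_class[label].append(item)
    subset = []
    for label, items in sorted(by_class.items()):
        rng.shuffle(items)
        subset.extend((item, label) for item in items[:per_class])
    return subset

# toy pool: 12 images spread over 3 classes, balanced down to 2 per class
pool = [(f"img{i}", i % 3) for i in range(12)]
balanced = subsample_per_class(pool, per_class=2)
assert len(balanced) == 6  # 3 classes x 2 instances
```

Re-running with a different `seed` yields a different subset, which matches the rebuttal's note that re-running the extraction produced only seed-level performance variation.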
Sliding Puzzles Gym: A Scalable Benchmark for State Representation in Visual Reinforcement Learning
Accept (poster)
Summary: The paper introduces a sliding puzzle based environment for evaluating visual RL. It provides a number of baselines on the environment. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: N/A Experimental Designs Or Analyses: Yes. It is sound. Supplementary Material: Yes, I checked it. Relation To Broader Scientific Literature: The authors do a good job discussing previous works such as distracting DMC, vanilla DMC, ProcGen, etc. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Weakness related to writing: it is still not clear to me why this method would test an RL agent's visual representation capability. Clearly, an RL agent needs to put together _what_ the goal image is. It only understands so when the goal is an image that _makes sense_. From that point of view, it is not convincing that we are _only_ evaluating the visual aspect of RL. I would encourage the authors to provide a better motivation for this question in the paper. Based on the response, I am open to changing my evaluation (both positive and negative). Other Comments Or Suggestions: N/A Questions For Authors: Please see above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their positive assessment of our experimental design and literature review, and for the opportunity to clarify the core motivation behind SPGym regarding the evaluation of visual representation capabilities. ## Why SPGym Tests Visual Representation Capabilities The central idea behind SPGym is to create a controlled environment where the primary challenge being scaled is the agent's ability to process and understand diverse visual inputs, even though the agent is trained end-to-end on a standard RL task. We achieve this through specific design choices: 1. **POMDP Formulation:** As detailed in Section 3, SPGym is formulated as a Partially Observable Markov Decision Process (POMDP) defined by $(\mathcal{S}, \mathcal{X}, \mathcal{A}, \mathcal{P}, \mathcal{R}, \mathcal{S}_0)$. Critically, the agent never observes the true underlying puzzle state $s \in \mathcal{S}$. Instead, it only receives visual observations $x \in \mathcal{X}$. The agent must infer the relevant state information solely from these high-dimensional visual inputs. 2. **Isolating the Visual Challenge:** The key design element is that across all experiments and all compared agents, the core components of the MDP remain fixed: the state space $\mathcal{S}$ (tile permutations), the action space $\mathcal{A}$ (up, down, left, right), the deterministic transition dynamics $\mathcal{P}$ (how tiles move), the reward function $\mathcal{R}$ (based on Manhattan distance), and the initial state distribution $\mathcal{S}_0$. The only thing that changes between experimental conditions (e.g., different pool sizes) or agent comparisons (e.g., different representation learning modules) is the emission function that maps the underlying state $s$ to the visual observation $x$. We vary this by overlaying the puzzle state with different pools of images. 3. **Performance Reflects Representation Quality:** Because all core task elements (dynamics, rewards, etc.) 
are constant, any difference in performance (e.g., sample efficiency) between agents or across different visual diversity levels must be attributed to how effectively the agent's visual encoder maps the observation $x$ to a useful internal representation. An agent cannot solve the puzzle without implicitly or explicitly understanding the tile configuration from the image. Better visual representations enable the policy to make better decisions, leading to faster learning and higher success rates. While policy learning is intrinsically linked, the bottleneck being systematically stressed and evaluated is the visual representation learning component. ### Empirical Evidence To provide further empirical support for this link between representation quality and task performance, we conducted linear probe evaluations (inspired by feedback from reviewer o6ZL) on the frozen encoders learned by PPO and SAC agents. We trained linear classifiers to predict tile positions from the learned features. Our results show a statistically significant negative correlation (Pearson r=-0.81, p=1.1e-13) between the probe's test accuracy and the number of environment steps the RL agent needed to reach 80% success. This demonstrates that encoders capable of producing more informative features (higher probe accuracy) enable faster learning on the downstream RL task. 
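The correlation analysis described above (probe accuracy vs. steps to 80% success) boils down to a plain Pearson coefficient. A minimal sketch, with placeholder numbers rather than the rebuttal's actual measurements:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D arrays."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# placeholder data: higher probe accuracy <-> fewer env steps to 80% success
probe_acc = [99.8, 96.4, 87.8, 67.0]        # % tile-position probe accuracy
steps_to_80 = [0.3e6, 0.9e6, 2.5e6, 7.8e6]  # env steps (illustrative values)
r = pearson_r(probe_acc, steps_to_80)
assert r < 0  # negative correlation, matching the reported r = -0.81
```

The reported p-value additionally requires a significance test (e.g. `scipy.stats.pearsonr`), which returns both quantities at once.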
The probe accuracies for the tested agents follow: |Agent/Pool|1|5|10|20|30|50| |-|-|-|-|-|-|-| |PPO|99.81±0.14|96.42±1.06|87.84±1.40|66.97±5.28|-|-| |PPO+PT(ID)|99.81±0.12|96.83±0.61|89.54±0.95|-|-|-| |PPO+PT(OOD)|99.59±0.33|95.68±0.77|88.90±1.11|-|-|-| |SAC|100.00±0.00|97.63±0.68|93.34±0.48|80.74±6.31|66.69±8.41|55.52±0.09| |SAC+RAD|99.99±0.01|98.66±0.20|89.74±0.73|-|-|-| |SAC+CURL|99.98±0.03|97.14±0.17|89.47±1.23|-|-|-| |SAC+SPR|99.99±0.01|94.31±0.24|75.48±1.82|-|-|-| |SAC+DBC|100.00±0.00|94.26±1.19|76.59±5.82|-|-|-| |SAC+AE|100.00±0.00|95.52±4.56|88.66±1.88|-|-|-| |SAC+VAE|99.66±0.06|78.21±2.35|64.76±0.11|-|-|-| |SAC+SB|99.90±0.03|96.69±1.08|81.93±6.06|-|-|-| ## Conclusion In essence, while the agent learns end-to-end, SPGym isolates the difficulty scaling to the visual domain. An agent succeeding in SPGym, especially across varying image pools, demonstrates robustness in its visual processing specific to the task structure. We believe this setup effectively probes an agent's ability to form useful representations from pixels under varying visual conditions, which is a critical aspect of visual RL. We will revise the Introduction and Methodology sections to incorporate this detailed explanation and motivation, ensuring it is clear how our design choices allow for the evaluation of visual representation learning capabilities in a controlled manner. We hope this addresses the reviewer's concern and clarifies the value proposition of SPGym. We appreciate the reviewer's willingness to reconsider their evaluation based on this clarification. --- Rebuttal Comment 1.1: Comment: While I appreciate more experiments in such a short time, I am unsure the results shown here answer my question and remain unconvinced. I _do_ think the claim is okay (hence leaning accept), except that the current experiments do a _subpar_ job of showing it. I suggest a simple experimental setup: can we have a ground truth latent for the environment observations?
Can an RL agent learn significantly better for a large number of different environments given the ground truth latent? I will stay at my borderline accept rating and happy to go up if the authors can present a convincing experiment. --- Reply to Comment 1.1.1: Comment: Thank you for the further discussion and the constructive suggestion for a direct comparison experiment. We agree this is a valuable way to assess the impact of learning from visual observations versus ground-truth states. Following your suggestion, we trained PPO, SAC, and DreamerV3 agents using SPGym's one-hot encoding variation (representing the ground-truth puzzle state, identical to the targets for our linear probes) and compared their sample efficiency (steps to 80% success, avg. 5 seeds) against the image-based versions. For PPO and SAC, we replaced the CNN encoders with 2-layer MLPs to process the one-hot vectors. DreamerV3 used its default non-image encoder (a 3-layer MLP). We maintained hyperparameters close to the image-based experiments without specific tuning for the one-hot setting. |Algorithm|Grid Size|One-hot|Image (Pool 1)|Image (Pool 5)| |-|-|-|-|-| |PPO|3x3|661.69k±81.44k|1.75M±444.81k|7.80M±1.08M| ||4x4|12.29M±467.84k|24.46M±7.58M|-| |SAC|3x3|672.51k±63.10k|334.26k±67.47k|907.21k±116.20k| ||4x4|5.09M±463.14k|8.14M±3.64M|-| |DreamerV3|3x3|834.86k±61.10k|417.09k±55.03k|1.23M±199.49k| ||4x4|3.68M±436.97k|2.26M±287.23k|5.81M ± 2.17M| These results provide several insights. For PPO on both grid sizes and SAC on the 4x4 grid, learning directly from the ground-truth one-hot state is more sample efficient than learning from images. The results for SAC and DreamerV3 on the 3x3 grid, where pool 1 images led to faster convergence than one-hot, may be influenced by the differences in network architectures and the lack of architecture/hyperparameter tuning specifically for the one-hot setting. 
Crucially, however, across all agents and grid sizes, increasing the visual diversity from **image pool size 1 to pool size 5 and beyond consistently increases the sample complexity**. This shows the impact of the visual representation challenge that SPGym is designed to probe, isolating the effect of visual diversity on learning efficiency. While the one-hot version provides a useful ground-truth baseline, **its difficulty is fixed**. SPGym's core value lies in its image-based variations, which allow us to systematically scale the visual diversity challenge (Pool 1 vs. Pool 5 vs. Pool 10, etc., see Table 5 in the main paper) while keeping the underlying task dynamics constant. This enables the controlled evaluation of how effectively different RL agents learn representations under this specific, scalable stress, revealing limitations that wouldn't be apparent from the one-hot setting alone. While acknowledging that perfect disentanglement is challenging in end-to-end learning, SPGym provides a framework for this structured, comparative evaluation of visual representation learning capabilities in RL. We thank you again for pushing us on this and for providing the suggestion that led to these insightful results. We will incorporate this experiment and discussion into the revised manuscript.
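The one-hot ground-truth variation discussed above admits a compact encoding: for an n-tile board, an n×n indicator matrix marking which tile occupies each position, flattened into a vector. The sketch below is our reading of such an encoding, not necessarily SPGym's exact observation layout:

```python
import numpy as np

def one_hot_state(perm):
    """Encode a sliding-puzzle permutation as a flat one-hot vector.

    perm: length-n list where perm[pos] is the tile id at board position
          pos (one id can denote the blank). Returns an n*n vector with a
          single 1 per position, marking which tile occupies it.
    """
    n = len(perm)
    enc = np.zeros((n, n), dtype=np.float32)
    enc[np.arange(n), perm] = 1.0  # row = position, column = tile id
    return enc.ravel()

# solved 3x3 board: tiles 0..8 in order -> an 81-dim observation
obs = one_hot_state(list(range(9)))
assert obs.shape == (81,) and float(obs.sum()) == 9.0
```

An MLP consuming this vector (as in the rebuttal's 2-layer/3-layer setups) sees the true state directly, which is what makes the one-hot baseline's difficulty fixed regardless of image pool size.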
Summary: This paper presents SPGym, a new benchmark for visual reinforcement learning (RL) based on the classic 8-tile puzzle. SPGym uses a visual observation space derived from large datasets and allows researchers to manipulate representation complexity by adjusting visual diversity. Experiments using model-free and model-based RL algorithms on SPGym reveal that current methods struggle to generalize across varying visual inputs, with performance degrading as visual diversity increases. Claims And Evidence: The reviewer agrees that the ability to "scale the representation learning complexity while maintaining consistent environment dynamics" is a crucial aspect of research in visual reinforcement learning. The proposed SPGym environment, despite its simplicity, demonstrates potential in fulfilling this objective. By merely utilizing different images within the puzzle, one can effectively control the visual complexity of the task. Methods And Evaluation Criteria: The reviewer has concerns regarding the evaluation settings. 1. In the 'in distribution' setting, there appears to be no separate hold-out validation set for evaluating the learned policy. Given the deterministic nature of the game, the reported numbers in Table 2 seem to only reflect how well each method overfits to the five training game instances. The reviewer found that the results did not provide clear conclusions or insights about the tested algorithms. 2. Table 3 presents results that suggest none of the existing algorithms were able to solve the puzzle when the testing image was not included in the training pool. This indicates a significant limitation of the benchmark. A benchmark that results in a zero success rate for all RL algorithms does not provide a useful basis for comparison or meaningful insights into the relative strengths and weaknesses of different algorithms. The reviewer believes that the SPGym environment still holds potential despite the aforementioned issues. 
For instance, one could use the similarity between training and testing images as a parameter to control the difficulty of the visual task. This could involve a tiered approach: - Easy Level: The testing images could be augmented versions of the training images. - Intermediate Level: The testing images could be drawn from the same classes as the training images. - Hard Level: The testing images could be sourced from entirely different datasets than the training images. By evaluating different RL algorithms across these varying levels of visual difficulty, researchers could gain deeper insights into the visual capabilities and generalization abilities of these algorithms. Theoretical Claims: No formal theoretical claim is presented. Experimental Designs Or Analyses: In addition to the issues discussed in the 'Methods And Evaluation Criteria' section, the reviewer has concerns regarding the size of the training sets. Utilizing only five images in the training set appears insufficient, and the reviewer finds it unsurprising that existing algorithms fail to generalize to unseen images under these conditions. While the authors mention experimenting with training sets containing up to 100 images, this quantity is still considered limited by the reviewer. The reviewer suggests exploring the use of significantly larger training sets, on the order of 100,000 or even 1 million images, to assess the impact on generalization performance. Supplementary Material: Supplementary material is not reviewed. Relation To Broader Scientific Literature: This paper has close connections to visual reinforcement learning, such as [a, b]. The goal of the paper is to create an environment that can separate visual complexity from dynamics, which is indeed an important topic in the field. [a] Revisiting Plasticity in Visual Reinforcement Learning: Data, Modules and Training Stages, Ma, 2024. [b] DrM: Mastering Visual Reinforcement Learning through Dormant Ratio Minimization, Xu, 2024.
Essential References Not Discussed: None Other Strengths And Weaknesses: The paper is generally easy to follow and well organized. Other Comments Or Suggestions: None Questions For Authors: Please see the above discussions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback on SPGym's potential and the detailed, constructive comments regarding the evaluation settings and experimental design. We address the concerns below: ## 1. In-Distribution Evaluation We understand the concern that Table 2 results might seem like overfitting. However, our primary goal with the in-distribution (ID) setting is not to evaluate generalization across images but to measure the sample efficiency with which different RL agents learn useful visual representations under controlled visual diversity. We chose pool size 5 because it offers a balance: it introduces enough visual diversity to discriminate between different representation learning approaches (as seen by the varying steps-to-convergence) while remaining learnable for most agents within our computational budget. As detailed in our response to Reviewer R6Sn, SPGym is formulated as a POMDP where only the visual observation function changes between settings; the underlying task dynamics remain fixed. Therefore, differences in sample efficiency (Table 2) directly reflect how effectively each agent's representation learning component handles the visual aspect of the task. ## 2. Out-of-Distribution Evaluation We acknowledge the reviewer's point that zero OOD success might seem like a limitation. However, we view this consistent failure not as a flaw of the benchmark, but as a crucial diagnostic finding about the limitations of current end-to-end visual RL methods when faced with generalizing visual features learned purely through a task-specific RL objective. Standard RL objectives, like the one used here, do not explicitly optimize for generalization to unseen visual appearances of the same underlying state. SPGym effectively highlights this gap. Our choice to present the main OOD results (Table 3) using agents trained on pool size 5 stems from this being the setting where most methods achieved high ID success. 
Evaluating OOD generalization is most meaningful when agents have demonstrated competence on their training distribution. While some base agents achieved ID success on larger pools, they still failed completely OOD, reinforcing the finding that simply increasing training diversity within this range wasn't sufficient for generalization with current methods. Showing zero OOD success across methods aims to motivate research into algorithms with better visual generalization properties. ## 3. Training Set Size We appreciate the suggestion to explore significantly larger training pools. In preliminary experiments, we attempted training with pools of thousands of images. However, we found that with such high visual diversity, the RL training became unstable. Because each observation was essentially unique (rarely seen more than once), the gradients from the RL objective were insufficient to train a useful visual encoder from scratch. Without a stable encoder, the policy failed to learn. Therefore, our approach was to identify the approximate limit of visual diversity that current standard algorithms (PPO, SAC, DreamerV3) could handle within a reasonable budget (10M steps), which led to testing pools up to 20, 50, and 100, respectively. This reveals the scaling limitations of these methods rather than aiming for OOD generalization via massive datasets, which might require different training paradigms (e.g., pretraining on external data, different objectives). ## 4. Tiered Generalization Evaluation We thank the reviewer for this excellent suggestion. We agree that evaluating generalization across different levels of visual similarity would provide deeper insights. As a preliminary step in this direction, we evaluated the trained PPO and SAC agents on augmented versions of their training images. 
We observed a strong correlation (Pearson r=-0.81, p=2.5e-12) between success rates on these augmented images and the agents' sample efficiency, suggesting a link between robustness to simple transformations and learning speed. We plan to include these results and discuss the tiered approach as a key direction for future work in the revised manuscript: |Agent/Pool|1|5|10|20|30|50| |-|-|-|-|-|-|-| |PPO|0.49±0.13|0.53±0.14|0.34±0.08|0.12±0.03|-|-| |PPO+PT(ID)|0.33±0.09|0.53±0.16|0.27±0.07|-|-|-| |PPO+PT(OOD)|0.49±0.12|0.52±0.14|0.34±0.08|-|-|-| |SAC|0.45±0.12|0.58±0.12|0.46±0.12|0.35±0.11|0.19±0.04|0.06±0.02| |SAC+AE|0.78±0.11|0.64±0.16|0.55±0.12|-|-|-| |SAC+VAE|0.64±0.15|0.30±0.08|0.12±0.03|-|-|-| |SAC+SPR|0.65±0.13|0.21±0.09|0.07±0.04|-|-|-| |SAC+DBC|0.44±0.13|0.34±0.13|0.13±0.04|-|-|-| |SAC+CURL|0.76±0.09|0.44±0.10|0.37±0.11|-|-|-| |SAC+RAD|0.62±0.15|0.42±0.13|0.30±0.11|-|-|-| |SAC+SB|0.89±0.08|0.65±0.12|0.06±0.02|-|-|-| We will revise the paper to clarify the evaluation rationale, incorporate the results on augmented images, and explicitly frame the tiered generalization as important future work, addressing the points raised. Thank you again for the constructive feedback.
Summary: The paper introduces SPGym, a novel benchmark for visual RL that extends the classic sliding puzzle by replacing numbered tiles with image patches. This enables scaling visual diversity while keeping the puzzle dynamics fixed, with the aim of isolating representation learning from policy learning. The authors evaluate a range of RL baselines: on-policy PPO (with ID and OOD pretrained encoders), off-policy SAC with data augmentation and representation learning techniques, and the model-based DreamerV3, to assess sample efficiency and generalization. Their experiments show that pretraining and data augmentation enhance sample efficiency, that baselines react differently to increasing the number of images in the training pool, that nearly all approaches achieve high in-distribution performance yet completely fail to generalize to unseen images, and that PPO continues to learn more efficient solutions after reaching 100% average success. Claims And Evidence: 1. The authors repeatedly claim that SPGym disentangles representation learning from policy learning, using this as a core motivation for their work and distinguishing it from prior benchmarks like The Distracting Control Suite and ProcGen. However, I don’t see how their experimental setup fully supports this claim. While PPO is tested with pretrained encoders (both in-distribution and out-of-distribution), SAC and DreamerV3 still learn representations end-to-end, meaning representation learning is not truly isolated across all agents. SPGym does not incorporate explicit representation learning evaluations, such as frozen encoder tests or linear probes, which would help validate the claim of disentanglement. Since policy updates inherently shape learned representations, the benchmark does not provide a clear separation between the two processes. 2. The authors claim that the best-performing agents fall short of the theoretical 22-step optimal solution, basing this assertion on the 5-pool training results in Table 2.
However, agents like SAC and DreamerV3 benefit from larger training pools, with DreamerV3 achieving the lowest episode length of 27.8 steps using a 20-image pool (Table 5). Moreover, since early stopping is applied, preventing further improvements, there is evidence from PPO that continued training can significantly reduce episode lengths (from 214.30 to 31.35 steps). Given that the best version of DreamerV3 is trained using only about 40% of the allowed data budget, it has the potential to approach the theoretical optimal solution with extended training. Therefore, the claim that the best-performing agents fall short of the optimal solution is not fully substantiated. Methods And Evaluation Criteria: 1. SPGym has two clearly defined training evaluation metrics: 1) **task completion**, measuring whether an agent successfully solves the puzzle within the maximum episode length of 1,000 steps, and 2) **completion efficiency**, measuring how quickly an agent solves the puzzle in terms of the number of steps taken. These well-defined metrics make evaluation and comparison straightforward. 2. SPGym measures sample efficiency by tracking the number of environment steps an agent requires to achieve an 80% success rate. 3. Increasing the training image pool size introduces greater visual diversity while keeping the underlying dynamics unchanged, allowing for an analysis of how different algorithms perform across varying levels of visual complexity (Figure 4 & Table 5). 4. SPGym is effective for evaluating generalization to unseen data, as all baseline methods fail to transfer their learned representations to unseen images, regardless of the training pool size. 5. One of my main concerns is the **reward function**. Using the Manhattan distance between each tile’s current and target positions as a reward signal discourages moves that might be necessary for reaching the optimal solution. 
There are likely scenarios where a tile that is already in place, or close to it, must be temporarily moved further away to solve the puzzle efficiently. This reward function risks biasing the agent toward locally optimal but globally suboptimal strategies, which could explain the poor baseline performance. While I acknowledge that a purely sparse +1 reward upon completion would make learning infeasible due to the extremely low probability of solving even a 3×3 puzzle through random actions, the current approach may still be counterproductive. The authors should design a dense reward signal that does not penalize necessary intermediate steps or provide empirical evidence that the current formulation does not hinder learning. Theoretical Claims: This paper does not contain theoretical claims or formal proofs; its primary contributions are experimental and methodological. Experimental Designs Or Analyses: 1. The authors evaluate a diverse set of SOTA baselines, including on-policy PPO with pretrained encoders, off-policy SAC with various data augmentation strategies, and multiple recent variants from the literature, along with the model-based DreamerV3. This strengthens the paper’s contributions. 2. The baseline comparison of increasing the training pool size is insightful, as it reveals how well the algorithms adapt to greater visual diversity. 3. In Table 2, the authors assess sample efficiency by measuring the number of samples required for baseline methods to reach an 80% success rate with a small image pool. Establishing a clear evaluation criterion for sample efficiency is beneficial for the benchmark. However, while they mention that early stopping occurs when the agent achieves a 100% success rate for 100 consecutive episodes, they do not specify the window size used for averaging the 80% success rate threshold. This omission makes it unclear how success rates are computed and whether short-term fluctuations could affect the reported results. 4. 
The authors determine the best data augmentation strategy by evaluating only **RAD**, then apply the selected approach (grayscale + channel shuffle) to all augmentation-based methods, including **CURL** and **SPR**. This is problematic because different methods have distinct augmentation requirements. **CURL** [1] has been shown to benefit more from cropping rather than random color shuffling, while **SPR** [2] relies on temporal consistency and may be particularly sensitive to spatial distortions. Optimizing augmentations solely for RAD does not guarantee optimal performance for other methods, potentially skewing the results and disadvantaging CURL and SPR. This could explain why RAD outperforms CURL and SPR in sample efficiency. 5. The Hyperparameter Selection and Data Augmentation Analysis focus solely on optimizing training performance. However, since the benchmark also assesses evaluation, the selection process should consider generalization performance, not just faster learning. The chosen hyperparameters may improve training efficiency but contribute to overfitting, making it crucial to evaluate augmentation methods based on their impact on generalization. 6. Table 2 shows that pretrained encoders improve PPO’s sample efficiency. However, this comparison is not entirely fair, as the pretrained encoders have already seen a large number of samples during pretraining, even if from OOD data. This gives them a significant advantage over agents learning representations from scratch, making direct comparisons of sample efficiency misleading. 7. Table 3 presents the ID and OOD evaluation results, highlighting SPGym’s effectiveness as a generalization evaluation tool. While a training pool of only 5 images is understandably insufficient for meaningful generalization, the authors note that even models trained on pools of up to 100 images fail to transfer to unseen images. 
In my view, this shows the benchmark’s core challenge, demonstrating that overfitting remains a problem regardless of pool size. However, since SAC and DreamerV3 achieve strong in-distribution performance with larger training pools over 10M steps, it would be more informative to display their main results in the table on a larger pool size to better assess their generalization potential. Additionally, the authors only evaluate OOD performance for the base PPO, SAC, and DreamerV3 models. Extending this analysis to other baselines included in the study, such as data-augmented or contrastive learning variants, would provide a more comprehensive understanding of how different representation learning methods handle distribution shifts. 8. The authors apply early stopping once agents achieve a 100% success rate for 100 consecutive episodes, yet their own results show that agents continue to improve solution efficiency well beyond this point. The first 100 successful episodes average 214.30 steps, whereas the final 100 episodes average just 31.35 steps, indicating that further training yields significantly more optimal solutions. This raises concerns about the motivation behind early stopping in this setting. Typically, early stopping is used when a model has converged and is no longer learning anything meaningful. Here, it appears to halt training prematurely, limiting the evaluation of efficiency improvements. 9. The use of procedurally generated images, such as those from DiffusionDB, offers advantages like reduced storage overhead and access to near-infinite diversity, potentially aiding controlled generalization. However, Figure 5 shows that performance trends closely mirror those on ImageNet, suggesting that this diversity does not significantly impact learning outcomes under the tested conditions. While procedural generation allows fine-grained control over visual complexity, the results do not demonstrate a clear advantage over large pre-existing datasets. 
To strengthen this contribution, the authors could analyze its benefits in terms of computational efficiency, storage, or controlled generalization. [1] Laskin, Michael, Aravind Srinivas, and Pieter Abbeel. "Curl: Contrastive unsupervised representations for reinforcement learning." *International conference on machine learning*. PMLR, 2020. [2] Schwarzer, Max, et al. "Data-efficient reinforcement learning with self-predictive representations." *arXiv preprint arXiv:2007.05929* (2020). Supplementary Material: I reviewed the full appendix. Relation To Broader Scientific Literature: The authors position their work among benchmarks that introduce visual diversity, such as ProcGen and DM Control Suite, and those that add visual noise, like the Distracting Control Suite. They claim that SPGym uniquely disentangles representation learning from policy optimization by extending the 8-Puzzle task, previously used in RL research, to support visual observations. Essential References Not Discussed: There are other benchmarks tailored for visual RL that have a setting which keeps the environment dynamics the same while increasing visual diversity [1, 2], similar to SPGym. [1] Dosovitskiy, Alexey, et al. "CARLA: An open urban driving simulator." *Conference on robot learning*. PMLR, 2017. [2] Tomilin, Tristan, et al. "Coom: A game benchmark for continual reinforcement learning." *Advances in Neural Information Processing Systems* 36 (2023): 67794-67832. Other Strengths And Weaknesses: 1. Since the authors introduce a new benchmark, access to the code would be beneficial for properly assessing their work. However, they have not uploaded their code or provided a link to an anonymous repository. 2. This benchmark has strong potential to be valuable to the community. The 8-Puzzle problem with visual observations is a commendable proposal, particularly since it aligns with how humans approach the task. Benchmarks introducing novel problems are always beneficial. 
However, I am not fully convinced by the strong emphasis on disentangling feature and policy learning, as the current evidence does not fully support this claim. Additionally, some aspects of the experimental evaluation need improvement, as outlined above. I am very open to raising my score if the authors adequately address my concerns. Other Comments Or Suggestions: 1. The authors provide wrappers that allow for different observation modalities beyond image-based tiles, including text and one-hot encodings. While this adds flexibility to SPGym, it is not a fully realized contribution since there is no empirical analysis or baseline evaluation using these alternative modalities. Without an evaluation of how different modalities impact learning performance, this aspect remains more of a hypothetical feature rather than a demonstrated advantage. 2. The authors state that they explore three algorithmic paradigms: off-policy, on-policy, and model-based. However, model-based methods can themselves be either off-policy or on-policy, depending on how data is used. The distinction could be better clarified for conceptual accuracy. 3. The evaluation of data augmentation methods is solely based on their impact on training performance, without assessing their effect on generalization during evaluation. 4. It would be interesting to analyze whether some images make it more difficult to solve the puzzle than others. 5. The authors repeatedly state that increasing the training pool size increases visual complexity. However, I believe it only increases visual diversity rather than complexity, unless certain images make it more difficult for the agent to solve the puzzle. Questions For Authors: 1. Is the missing tile always in the bottom-right corner, or is it random across episodes? 2. Why do the authors use early stopping if the model still has the potential to learn more efficient solutions? 3. 
Does PPO converge when trained for 10M steps on the 4x4 grid, or could longer training improve performance further? Code Of Conduct: Affirmed. Overall Recommendation: 3
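As an editorial illustration of the Manhattan-distance reward questioned under point 5 of Methods And Evaluation Criteria above: a minimal sketch assuming a row-major state encoding with the blank coded as 0 and tile `t`'s goal at index `t` — my own reconstruction of the idea, not SPGym's implementation (the exact scaling is also an assumption).

```python
def manhattan_cost(state, size=3):
    """Sum of Manhattan distances of all tiles from their goal positions.

    `state` is a tuple of tile ids in row-major order; 0 marks the blank
    (not scored), and the assumed goal places tile t at index t.
    """
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        r, c = divmod(idx, size)
        goal_r, goal_c = divmod(tile, size)
        total += abs(r - goal_r) + abs(c - goal_c)
    return total


def dense_reward(state, solved, scale=0.01):
    """One plausible dense form: penalize the remaining total distance each
    step and add +1 on completion."""
    return -scale * manhattan_cost(state) + (1.0 if solved else 0.0)
```

Note how a move that temporarily pulls a well-placed tile away from its goal strictly increases `manhattan_cost` and is therefore penalized, which is exactly the local-optimum bias the review raises.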
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thorough and constructive feedback. We appreciate the recognition of SPGym's potential and the detailed suggestions, which will significantly improve the paper. ## 1. Answers to Direct Questions 1. **Missing Tile:** The missing tile's starting position is randomized in each episode as part of the initial shuffling. 2. **Training Termination ("Early Stopping"):** Our primary motivation for terminating runs based on sustained success rate was computational efficiency, given the number of experiments. A secondary reason was to evaluate OOD generalization before potential extreme overfitting. We agree "early stopping" is imprecise terminology, as agents can still improve solution efficiency, and will rephrase this in the revision. 3. **PPO 4x4 Convergence:** PPO does not converge within 10M steps on the 4x4 grid. Extended runs (100M steps) show it requires approximately 24.5M ± 7.6M steps (avg. 5 seeds) to reach 80% success, indicating feasibility but requiring much more data. We will add this finding. ## 2. Addressing Key Concerns 1. **Disentanglement of Representation and Policy Learning:** We acknowledge the reviewer's point on end-to-end learning. However, SPGym's design isolates the visual representation challenge as the key variable by keeping all underlying MDP components fixed and only varying the visual observation function. Performance differences thus reflect agents' visual processing capabilities. For a detailed explanation of the POMDP formulation and supporting empirical evidence from linear probing, please see our response to **Reviewer R6Sn**. We will add these clarifications and results to the revision. 2. **Optimal Solution Claim:** You are correct. Given our termination criterion and that only PPO was tested for continued training, the claim that best-performing agents fall short might be too strong. 
We will revise this to reflect that agents under our protocol didn't reach the optimum, acknowledging the potential for improvement with longer training. 3. **Reward Function:** The Manhattan distance reward is standard in puzzle literature (Korf et al., 1985; Burns et al., 2012; Lee et al., 2022; Moon et al., 2024) and encourages minimizing steps by accumulating negative rewards until the goal (+1) is reached. While local optima are possible, this aligns the RL objective with finding efficient solutions. A perfect dense reward is non-trivial, and sparse rewards are infeasible here. 4. **Evaluation Criteria Clarity (80% Threshold):** We apologize for the omission. We calculate this by finding the first environment step where the average success rate across all parallel environments reaches 80% for each run, then averaging these step counts across seeds. We will clarify this. 5. **Data Augmentation & Hyperparameter Selection:** We acknowledge the limitation of optimizing augmentations only for RAD and applying that strategy universally. This was a trade-off for controlled comparison vs. method-specific tuning. Similarly, hyperparameters were tuned for training efficiency. Evaluating based on generalization is important future work. We will clarify these limitations. 6. **Fairness of Pretraining Comparison (PPO):** We agree it's not a direct 'from scratch' comparison. The intent was to assess the impact of leveraging pretrained features (ID/OOD) vs. learning purely from the RL signal in SPGym. We will clarify this motivation. 7. **Generalization Evaluation (Table 3 & Pool Size Choice):** The consistent OOD failure across methods and pool sizes is a key diagnostic finding about current end-to-end visual RL limitations. Our rationale for focusing Table 3 on pool size 5, preliminary findings on larger pools, and the interpretation of the OOD results are discussed in detail in our response to **Reviewer ukCk** (due to space limitations). 8. 
**Procedural Generation:** We agree the current results don't show a demonstrated advantage over ImageNet. We will moderate the claim, framing it as a feature with potential benefits (diversity, control, storage) for future investigation. ## 3. Other Points * **Missing References:** Thank you. We will add CARLA and COOM to the related work discussion. * **Code:** Code is in the supplementary material, omitted from the paper for anonymity. We will add links in the camera-ready version. * **Minor Points:** We will refine terminology (model-based definition, visual diversity) and acknowledge unevaluated features (other modalities) and future analysis directions (image difficulty). We hope these responses and planned revisions address the reviewer's concerns. We value the feedback and are committed to improving the paper. We appreciate the reviewer's openness to reconsidering their evaluation. --- Rebuttal Comment 1.1: Comment: 1. Since the **Manhattan distance** reward function has been widely used in prior literature, and I don’t see a better alternative, I consider this concern resolved. 2. Since SOTA methods like DreamerV3 already perform close to the optimal solution, there is limited room for further improvement in terms of training performance, particularly in settings with moderate image pools. As a result, future progress in SPGym will likely center on sample efficiency and generalization performance during evaluation, rather than achieving more optimal training outcomes. 3. I agree with reviewer **ukCk**’s suggestion to incorporate levels of evaluation difficulty. The jump from 100% accuracy on ID to 0% on OOD is too abrupt, disabling a meaningful comparison of existing methods. There’s no telling when a method will be created that surpasses 0% OOD accuracy. Until then, comparisons remain uninformative since ID is too easy and OOD is too hard. Intermediate levels of complexity would enhance SPGym’s utility as an evaluation framework. 4. 
In your response to **ukCk**, there is no mention of the specific image augmentations applied on the training images. More importantly, across all methods, **evaluation performance decreases as the image pool increases**. This is counterintuitive to the proposition: a larger training pool should, in theory, promote more general representations and improve generalization. However, the results suggest otherwise, as agents appear to perform better on the augmented evaluation setting when overfitting to a single image puzzle. This undermines the claim that the agents are learning truly general representations. A similar phenomenon occurs when increasing the training set size. As you noted in your response to reviewer **ukCk**, > we found that with such high visual diversity, the RL training became unstable. > This instability likely stems from the agent's inability to learn general representations. Instead, it appears to be exhausting its network capacity, effectively memorizing solutions rather than generalizing across different images. This is likely also why DreamerV3 performs so much better. It is much more sample-efficient and has a larger network, enabling it to learn solutions for a larger number of images individually. 5. In the linear probe experiment, PPO and SAC are trained end-to-end. This means that the reward signal from policy learning has shaped the encoder weights to provide useful encodings for solving the RL puzzle task. Similar to other results, the performance again drops when the training pool increases, indicating a lack of potential for learning generality. Therefore, I don’t see the isolation of visual representation learning evaluation. However, I do think these results are useful to include in the paper to show that the encoder can be repurposed for classification in such manner. 6. 
I still believe that only using **grayscale + channel shuffle** is a strong injustice to **CURL** and **SPR**, as they are being employed in ways that deviate from their intended design. This goes beyond a simple lack of hyperparameter tuning. These baselines should either be re-evaluated or omitted. I have raised the score because some of my concerns have been addressed, and I believe SPGym is an interesting problem for visual RL. However, the core issues remain. Most importantly, I remain unconvinced of the disentanglement of representation and policy learning. --- Reply to Comment 1.1.1: Comment: Thank you for the continued engagement and detailed feedback. We appreciate the opportunity to clarify our perspective on the remaining points. 1. Our primary claim is that SPGym is a valuable tool for *evaluating* the visual representation learning capabilities of RL agents by *isolating visual diversity as the controlled variable*. While training is end-to-end, comparing the performance of an agent across different pool sizes allows for a controlled comparison of how well their representation learning handles visual stress, because all other task aspects are fixed. It's important to distinguish this claim about *evaluation methodology* from claiming that representation and policy learning are perfectly decoupled during training, or that the learned representations are universally general-purpose – claims we do not make. We will ensure this distinction is clear in the revised manuscript. 2. We agree with your assessment that future progress will likely focus on sample efficiency and generalization. We also believe SPGym remains valuable for highlighting the *scaling limits* of current methods, including SOTA ones like DreamerV3 (as seen with pool size 100), in terms of efficiently learning representations solely from the RL objective under increasing visual diversity. 3. 
We agree on the value of intermediate evaluation difficulties and thank you and reviewer ukCk for the suggestion. We plan to incorporate the 'Easy Level' results and discuss the tiered approach as important future work. 4. To evaluate performance on the proposed 'Easy Level' OOD setting, we took the trained agents and tested them on augmented versions of their training images. Specifically, we applied the same augmentations we considered in the paper (crop, shift, grayscale, inversion, channel shuffle) to the training images, ran evaluations for each augmentation type individually across all 5 seeds for 100 episodes, averaged the success rate for each augmentation, and then reported the average success rate across all augmentation types. We understand the observation that performance on these augmented images tends to decrease as the training pool size increases seems counter-intuitive. One interpretation is that agents achieving better performance (typically on smaller pools) learned SPGym's specific structural invariances better, resulting in representations that are more robust to these simple geometric or color perturbations of the images they were trained on. 5. Regarding the linear probe results, we highlight two key findings: First, there is a statistically significant correlation between the quality of the learned representations (measured by the linear probe's accuracy in decoding the underlying state from observations) and the agent's task performance (measured by steps to reach 80% success). Agents with representations that better encode the state learn the task faster. Second, both the probe accuracy and the task performance systematically degrade as the image pool size increases. 
Since the only variable changing is the visual diversity introduced by the larger pool, this suggests that increased visual diversity directly hinders the agent's ability to learn high-quality, task-relevant representations from the RL objective alone, consequently impairing policy learning performance. We believe including these results empirically supports our claim that SPGym effectively evaluates representation learning under varying visual diversity. 6. Thank you for pushing on the augmentation strategy for CURL/SPR. Prompted by your valid concern about fairness, we conducted a dedicated augmentation search specifically for CURL and SPR, following the protocol we describe in Appendix C.2. This search included the same augmentations considered for RAD, and we also included shift + color jitter, which was originally used by SPR. Empirically, we found that for this specific task in SPGym, the combination of grayscale + channel shuffle still yielded the best sample efficiency for both methods. Our interpretation remains that preserving tile structure is particularly critical here. We will add this clarification and the supporting evidence to the Appendix. Anonymized links to learning curves: [CURL Augmentation Search](https://i.postimg.cc/Gmj3q5JF/curl-aug-search.png) and [SPR Augmentation Search](https://i.postimg.cc/VNYzCLGy/spr-aug-search.png). We hope these clarifications fully address your remaining concerns. We have incorporated much of your feedback into our revision plans and believe the paper, with these changes, makes a valuable contribution. Thank you again for your time and insightful comments.
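As an editorial aside, the linear-probe protocol discussed in point 5 can be sketched as follows — a minimal illustration in which a ridge regression to one-hot state labels stands in for the usual logistic-regression probe; the function names and shapes are my own assumptions, not the authors' code.

```python
import numpy as np

def linear_probe_accuracy(feats, labels, l2=1e-3):
    """Fit a linear map from frozen encoder features to ground-truth state
    labels and report decoding accuracy (evaluated in-sample for brevity;
    a real probe would use a train/test split)."""
    n, d = feats.shape
    classes = np.unique(labels)
    onehot = (labels[:, None] == classes[None, :]).astype(float)
    X = np.hstack([feats, np.ones((n, 1))])  # append a bias column
    # Ridge-regularized least squares to one-hot targets.
    W = np.linalg.solve(X.T @ X + l2 * np.eye(d + 1), X.T @ onehot)
    preds = classes[np.argmax(X @ W, axis=1)]
    return float((preds == labels).mean())
```

Run over encoders trained on different pool sizes, such a probe yields the representation-quality measure that the rebuttal correlates with steps-to-80%-success.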
VTGaussian-SLAM: RGBD SLAM for Large Scale Scenes with Splatting View-Tied 3D Gaussians
Accept (poster)
Summary: To address a high memory consumption issue of 3DGS, this paper proposes view-tied 3DGS, which determines Gaussians based on the views. The Gaussians from the last frame are tracked and processed in sections. Since the method is view-dependent, it efficiently reduces the storage required for location, rotation, and scale values. The location of Gaussians is expressed through depth, and adjacent frames are used to handle missing depth information. When a new section is optimized, pose optimization is performed using rendering loss. To minimize total error, the head frame of the previous section is used. The Gaussians in the scene are trained using L1, SSIM and depth loss. Finally, BA is applied by optimizing the Gaussians of the head frame and camera pose to reduce absolute pose error. Claims And Evidence: This paper's main claim is that instead of storing all 3DGS parameters, it proposes a section-based storage approach, making it more efficient on limited GPU resources. The paper argues that the model can operate on large-scale scenes, demonstrating better rendering performance and lower pose error compared to existing models. However, the experiments are conducted only on indoor datasets, and no actual large-scale scenes are presented. Methods And Evaluation Criteria: Instead of representing the entire scene as Gaussians, this paper proposes an approach based on views, and the scene is represented using depth values from each view. To reduce the number of parameters, location is replaced with depth, and rotation is eliminated by representing Gaussians with a sphere shape. Within each section, Gaussians are merged to create a scene. To further reduce redundancy, visibility across view is measured, and only Gaussians in invisible regions are additionally stored. For each new section, the head frame from the previous section, where views have the highest overlap, is selected. The Gaussians are then stored based on these views. 
The comparison models include Neural Implicit Fields models and 3DGS-based models. On the Replica dataset, D-L1 and F1 score were measured. On other datasets, ATE RMSE was used for tracking comparisons, and PSNR, SSIM, and LPIPS were used for rendering comparisons. Theoretical Claims: There are no specifically proposed theoretical claims. Experimental Designs Or Analyses: The experiments follow the other SLAM methodologies and are conducted on the Replica, TUM-RGBD, ScanNet and ScanNet++ datasets. The evaluations are performed on an NVIDIA RTX 4090, where comparisons of runtime and the number of Gaussians are presented. Additionally, the experiments include comparisons between anisotropic and isotropic Gaussians, analyses of different section lengths, evaluations of overlap selection methods, and ablation studies on the impact of the visibility mask. Supplementary Material: The paper includes video results and the main function of the code. Additionally, it provides further details on the implementation, experimental results, visualizations of ablation studies, and supporting figures. Relation To Broader Scientific Literature: The paper employs a method that represents RGBD SLAM using Gaussians and proposes a section-based representation. Instead of using point clouds, it adopts a 3DGS primitive-based approach. However, since the Gaussians are tied to depth-based representation, extending this method to RGB SLAM appears challenging. Essential References Not Discussed: There do not appear to be any essential references that are not discussed. Other Strengths And Weaknesses: **Strengths** It is interesting that the model achieves good rendering results with a more simplified representation compared to other models. **Weaknesses** The explanation of the overlap section in the method is not smooth, and the description of the threshold $\gamma$ is unclear. 
Despite the significant impact of the overlap method, it is not well-explained in the overview, making it difficult to understand. There are no experimental results on actual large-scale scenes. In Table 11, the runtime is slower compared to G-SL and NICE, and both the total and maximum number of Gaussians are higher than in G-SL. This suggests that additional experiments on large-scale scenes and a direct comparison of GPU memory usage are necessary. As the number of frames increases, the number of overlap candidates is likely to grow, potentially increasing the time required to search for sections. However, there are no experiments addressing this issue. Additionally, as the number of sections increases, the redundancy of Gaussians stored in the head frame is expected to rise. Since the model is designed based on depth modality, it is likely to be highly dependent on dataset-specific factors such as the number of captured frames and resolution. Furthermore, in large-scale outdoor environments where dense depth acquisition is not feasible, the model may fail to operate effectively. This suggests that the model has generalization limitations. Other Comments Or Suggestions: There is a spacing typo on line 417: "Visibility ." Questions For Authors: How much is the actual difference in memory usage in Table 11? In Supplementary Section D.1, I am curious about the model's robustness to input depth noise. Are experimental results available to assess this? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your review and positive comments on our idea, contributions, evaluations, and supplementary materials.

### **1. Overview**

We will revise the overview accordingly to make it easier to follow.

### **2. Runtime comparison in Tab. 11**

As stated in Lines 434-438 left, we manage to optimize many more Gaussians at each frame for much better rendering quality and more accurate pose estimation than other NeRF-based and GS-based methods, at the cost of a little more total runtime. However, we show a clear advantage if we compare the runtime averaged over each Gaussian. All GS-based methods in Tab. 11 use the most Gaussians they can until no further improvement can be made. We report peak GPU memory usage as a supplement to Tab. 11 below.

|Methods|NICE-SLAM|Point-SLAM|SplaTAM|Gaussian-SLAM|Ours|
|-|:-:|:-:|:-:|:-:|:-:|
|Peak GPU Use (GiB)|12.0|7.7|18.5|4.2|5.4|

Although Gaussian-SLAM uses less memory, we employ more Gaussians and produce much better rendering, as shown in the following. We also report additional experiments on city-level scenes. Due to the time limit, we only report our results on several scenes in KITTI to evaluate tracking and mapping performance below. We also provide a visual comparison of rendering performance at the link: https://imgur.com/a/FOIgcrz.

|Methods|Gaussian-SLAM|SplaTAM|LoopSplat|Ours|
|-|:-:|:-:|:-:|:-:|
|00 (ATE RMSE$\downarrow$[m])|3.02|58.83|2.22|**2.06**|
|01 (ATE RMSE$\downarrow$[m])|77.51|84.45|74.47|**29.01**|
|05 (ATE RMSE$\downarrow$[m])|128.88|80.39|117.43|**7.74**|
|10 (ATE RMSE$\downarrow$[m])|10.60|43.82|11.39|**4.54**|

|Methods|Gaussian-SLAM|SplaTAM|LoopSplat|Ours|
|-|:-:|:-:|:-:|:-:|
|00 (PSNR$\uparrow$)|15.51|9.82|15.82|**28.54**|
|01 (PSNR$\uparrow$)|15.95|12.89|14.69|**30.33**|
|05 (PSNR$\uparrow$)|16.22|26.48|15.98|**28.19**|
|10 (PSNR$\uparrow$)|15.58|25.58|14.58|**27.59**|
|Peak GPU Use (GiB)|2.74|22.37|3.56|4.79|

### **3.
Time on overlapping section selection**

To determine the overlapping section selection, we project the downsampled depth points (200,000) in the current view to the previous candidates. This simple procedure is quite fast and does not add a burden, as shown in the time comparison below.

|Number of Overlapping Candidate Frames|200|400|800|1600|
|-|:-:|:-:|:-:|:-:|
|Time (s)|0.038|0.079|0.144|0.282|

### **4. What if depth is not available**

Actually, SLAM methods using RGBD images are dedicated to indoor scenes. For outdoor scenes, we may not have sensor-captured depth maps, but we can use monocular depth priors, such as DepthAnything or MiDaS, to predict depth maps from RGB images as initialization. Then, in the first several frames, we can register the depth to the sparse depth from SfM (Structure from Motion). In the following frames, we can register the predicted depth to the depth rendered with Gaussians as a coarse depth to initialize Gaussians.

### **5. Memory comparisons**

In Tab. 11, we report the memory cost with the most Gaussians in each method until no improvement can be made. Please see our memory consumption on KITTI in the second table in our response to question 2 above.

### **6. Impact of depth noise on the performance**

Our results on real datasets like ScanNet were reported using depth maps with noise. Although our Gaussians are fixed at noisy depths, Gaussian splatting is flexible enough to overfit the current frame and neighboring frames by tuning other attributes like color, opacity, and shape. Our results show that depth noise does not impact the rendering. We report additional results below.

||10% pixels w/ noises|20% pixels w/ noises|30% pixels w/ noises|Ours (w/o additional noises & fix)|
|-|:-:|:-:|:-:|:-:|
|PSNR$\uparrow$|43.41|43.40|43.29|43.06|
|SSIM$\uparrow$|0.996|0.996|0.996|0.996|
|LPIPS$\downarrow$|0.015|0.015|0.015|0.013|

### **7. Minor issues**

We will revise the manuscript accordingly to resolve other minor issues.
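As an editorial sketch of the overlap check described in point 3 above — back-projecting downsampled depth pixels of the current view and counting how many land inside a candidate view — under assumed pinhole conventions (3x3 intrinsics, 4x4 camera-to-world poses); this is a reconstruction of the idea, not the authors' code.

```python
import numpy as np

def overlap_ratio(depth, K, T_cur, T_cand, stride=4):
    """Fraction of the current view's valid depth pixels that project inside
    the candidate view. depth: HxW depth map of the current view; K: 3x3
    intrinsics; T_cur, T_cand: 4x4 camera-to-world poses."""
    H, W = depth.shape
    vv, uu = np.mgrid[0:H:stride, 0:W:stride]       # downsampled pixel grid
    z = depth[vv, uu].ravel().astype(float)
    u, v = uu.ravel().astype(float), vv.ravel().astype(float)
    keep = z > 0                                     # drop missing depth
    u, v, z = u[keep], v[keep], z[keep]
    n = z.size
    if n == 0:
        return 0.0
    pix = np.stack([u, v, np.ones(n)])
    cam = (np.linalg.inv(K) @ pix) * z               # current camera frame
    world = T_cur @ np.vstack([cam, np.ones((1, n))])
    cand = np.linalg.inv(T_cand) @ world             # candidate camera frame
    front = cand[2] > 1e-6                           # keep points in front
    proj = K @ cand[:3, front]
    pu, pv = proj[0] / proj[2], proj[1] / proj[2]
    # Image-bounds test with half-pixel slack around pixel centers.
    inside = (pu > -0.5) & (pu < W - 0.5) & (pv > -0.5) & (pv < H - 0.5)
    return float(inside.sum()) / n
```

The candidate with the highest ratio can then be selected as the overlapping section; the cost scales linearly with the number of candidates and sampled points, in line with the timings above.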
--- Rebuttal Comment 1.1: Comment: I appreciate the authors' response, which addresses most of my concerns. I will keep my original score. --- Reply to Comment 1.1.1: Comment: Thanks for your review and comments. Really appreciated it. Best, The authors
Summary: - This paper addresses the limitation of traditional 3DGS-SLAM methods, which struggle to scale up to extremely large scenes due to inefficient tracking and mapping strategies. - The authors propose tracking and mapping strategies based on a new 3D representation called view-tied 3D Gaussians, which simplifies traditional 3D Gaussians by tying them to depth pixels, eliminating the need to learn their locations, rotations, and multi-dimensional variances. - The proposed method demonstrates advantages in both reducing storage requirements and improving rendering quality. Extensive experiments support the effectiveness of their design. Claims And Evidence: Yes. The claim that "existing methods struggle to scale up to extremely large scenes due to inefficient tracking and mapping strategies" is generally valid. However, the paper doesn't provide sufficient experimental evidence to demonstrate the superiority of VTGaussian-SLAM in **large-scale scenarios**. The experiments are conducted only on small-scale scenes from datasets such as Replica, TUM-RGBD, and a small subset of ScanNet++, which do not adequately test the scalability of the proposed method. To convincingly support the claim, the authors should evaluate their approach on larger-scale scenes. A relevant benchmark for comparison could be GO-SLAM: Global Optimization for Consistent 3D Instant Reconstruction (ICCV 2023), which addresses similar challenges in large-scale environments. Methods And Evaluation Criteria: For method: The proposed View-Tied 3D Gaussians method introduces a novel approach to 3D representation in SLAM systems. By tying a 3D Gaussian to each pixel in the depth map, the positions of these Gaussians are determined solely by depth and camera poses, eliminating the need to learn and store their locations or perform density control. This design significantly reduces memory and computational overhead.
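The depth-tied parameterization just described can be made concrete with a small sketch — my own illustration under standard pinhole conventions (3x3 intrinsics `K`, 4x4 camera-to-world pose), not the paper's code — showing that a Gaussian's world-space center is fully determined by its pixel, depth, and camera pose:

```python
import numpy as np

def gaussian_center(u, v, depth, K, T_cam_to_world):
    """Back-project pixel (u, v) with depth `depth` to a world-space center.
    K is the 3x3 intrinsics matrix; T_cam_to_world is a 4x4 pose."""
    p_cam = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    p_h = np.append(p_cam, 1.0)          # homogeneous coordinates
    return (T_cam_to_world @ p_h)[:3]
```

Because the center can be recomputed on the fly, only the scalar depth (plus the per-view pose shared by all of a view's Gaussians) needs to be stored, which is the storage saving the review describes.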
Additionally, the proposed tracking and mapping strategies focus on rendering and optimizing only a subset of Gaussians associated with the most recent views, rather than all Gaussians in the scene. This approach removes the need to maintain and optimize all Gaussians in memory throughout the training process, thereby improving the scalability of 3D Gaussian Splatting (3DGS) in SLAM applications. For evaluation: As I suggested in the previous section, the experiments are insufficient. Experiments are conducted only on small-scale datasets such as Replica, TUM-RGBD, and small subsets of ScanNet++, which do not adequately test the scalability of the proposed method. To convincingly support the claim, the authors should evaluate their approach on larger-scale scenes. Theoretical Claims: Yes. The theoretical proofs in this paper are quite concise, as the work is primarily based on modifications to an existing code framework. Consequently, the tracking and mapping sections lack substantial theoretical development, and the focus of the paper is predominantly on implementation aspects rather than theoretical contributions. Experimental Designs Or Analyses: Yes. I reviewed the experiments in 4.1. Comparisons and 4.2. Ablation Studies and Analysis. The main experimental results are compelling. The authors demonstrate that their view-tied Gaussians significantly reduce storage requirements, enabling the maintenance of a large number of Gaussians within limited GPU memory. Furthermore, the ablation studies provide additional validation for the effectiveness of the proposed VT (view-tied) approach. Specifically, the ablation study compares different attributes of 3D Gaussians, including anisotropic Gaussians (aniso), isotropic Gaussians (iso), and view-tied Gaussians (VT), further confirming the advantages of the VT design.
Supplementary Material: I have reviewed the supplementary material provided by the authors to examine the implementation details, experimental results, and demo demonstrations in greater depth. I have gone through essentially all of the supplementary material. Relation To Broader Scientific Literature: The key contributions of this paper build upon and address limitations in recent works (Keetha et al., 2024; Matsuki et al., 2024; Huang et al., 2024b; Yan et al., 2024; Yugay et al., 2023; Sandström et al., 2024), which employ various tracking and mapping strategies but require maintaining and optimizing all Gaussians covering the scene within limited GPU memory to ensure color and geometry consistency across all previous views. There are also some more complex systems (Liso et al., 2024; Zhu et al., 2024; Bruns et al., 2024; Sandström et al., 2024) that incorporate loop closure mechanisms into their optimization processes. However, detecting loop closures across views often relies on pre-trained priors and is highly sensitive to image quality. Essential References Not Discussed: No Other Strengths And Weaknesses: Weakness: - I believe the novelty and effectiveness of this paper are somewhat weak, although I agree with the motivation of addressing the scalability limitations of existing methods that struggle to optimize all 3D Gaussians within limited GPU memory. The proposed Learnable View-Tied Gaussians and Frozen View-Tied Gaussians essentially follow a standard incremental SLAM framework. The View-Tied Gaussians approach appears to focus primarily on optimizing the 3D scene for the current view, which seems conceptually similar to the tracking and mapping of the current frame in prior works. As such, the contribution leans more toward engineering implementation than groundbreaking innovation. - The experiments fail to demonstrate the superiority of the proposed method in large-scale scenarios.
The paper and video do not provide sufficient evidence regarding key performance metrics such as tracking and mapping accuracy, rendering quality, memory efficiency, and computational time in large-scale environments. While the results on smaller datasets (e.g., Replica, TUM-RGBD, and ScanNet++) are promising, they do not adequately validate the scalability claims. To strengthen the paper, the authors should include evaluations on larger-scale benchmarks, which would more convincingly showcase the advantages of their approach in real-world applications. - The proposed method cannot achieve real-time performance, with approximately 0.5 tracking FPS on the easiest Replica dataset, which is unacceptable for a SLAM framework with real-time demands. Other Comments Or Suggestions: No Questions For Authors: 1. The paper proposes simplifying an ellipsoid Gaussian into a sphere, which retains only a color c ∈ R^{1×3}, a radius (variance) r ∈ R^1, and an opacity o ∈ R^1. How does this simplification differ from the approach in Point-SLAM: Dense Neural Point Cloud-based SLAM (ICCV 2023)? It seems that such a simplified Gaussian representation effectively reduces to a neural point cloud. Could the authors clarify the distinctions and advantages of their method compared to this prior work? 2. The strategy of selecting overlapping regions and Frozen View-Tied Gaussians raises concerns about the ability to perform global bundle adjustment (BA). Could this approach lead to a degradation in the accuracy of the map and camera poses, potentially resulting in suboptimal solutions? Does the authors' framework include additional mechanisms to ensure global consistency and maintain optimality in the reconstruction? 3. I am unclear about the concept of view-tied Gaussians. In traditional Gaussian splatting, only the Gaussians projected onto the current camera view are involved in optimization. How do View-Tied Gaussians fundamentally differ from this approach?
Could the authors elaborate on the unique aspects and advantages of their method compared to the standard Gaussian splatting framework? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your review and positive comments on motivation and performance. ### **1. Results on city-level scenes** Following SplaTAM, we evaluate on widely used benchmarks such as ScanNet++ and demonstrate superior storage efficiency, learning 20 times more Gaussians for more detailed rendering while overcoming out-of-memory issues during optimization. These advantages significantly improve our capability in large-scale scenes. Due to the time limit, we additionally report our results on several city-level scenes in KITTI below, and a visual comparison is shown at https://imgur.com/a/FOIgcrz.

|ATE RMSE$\downarrow$[m]|Gaussian-SLAM|SplaTAM|LoopSplat|Ours|
|-|:-:|:-:|:-:|:-:|
|00|3.02|58.83|2.22|**2.06**|
|01|77.51|84.45|74.47|**29.01**|
|05|128.88|80.39|117.43|**7.74**|
|10|10.60|43.82|11.39|**4.54**|

|PSNR$\uparrow$|Gaussian-SLAM|SplaTAM|LoopSplat|Ours|
|-|:-:|:-:|:-:|:-:|
|00|15.51|9.82|15.82|**28.54**|
|01|15.95|12.89|14.69|**30.33**|
|05|16.22|26.48|15.98|**28.19**|
|10|15.58|25.58|14.58|**27.59**|
|Peak GPU Use (GiB)|2.74|22.37|3.56|4.79|

Memory consumption on KITTI is reported in the table above. Each method uses as many Gaussians as possible until no further improvement can be made. We use more memory, but manage to produce much better rendering with more Gaussians. We will also cite GO-SLAM. With the same RGBD setting as ours, GO-SLAM did not report results on city-level scenes either, and we cannot report plausible results for it on KITTI at this moment. Note that it requires an RGB or stereo setting for larger scenes. ### **2. Novelty and effectiveness** Our novelty lies not only in the view-tied Gaussians, which save storage so that more Gaussians can recover more details, but also in the novel tracking and mapping strategies designed to work with view-tied Gaussians. Our tracking strategy resolves camera pose error accumulation when not accessing all Gaussians as a global reference. Our novel mapping strategy finds a balance between storage complexity and rendering quality.
We believe these novel strategies are more than an engineering exercise. Their effectiveness has been justified in our extensive experiments and ablation studies. ### **3. The 1st misunderstanding** “The View-Tied Gaussians approach appears to … current frame in prior works.” is a misunderstanding. As in our response to your question 2 and the explanations in Lines 214-239, we keep all Gaussians in the current section, which covers the current frame and its neighboring frames, learnable, rather than merely focusing on the current frame. This differentiates our method from the incremental SLAM framework. ### **4. Runtime** Compared to classic SLAM methods, there is still considerable room for rendering-based SLAM methods to improve runtime efficiency. However, rendering-based SLAM methods like ours provide the capability of novel view synthesis, which can be directly used in VR. This is a vital function that classic SLAM methods cannot provide. Tab. 11 reports that our runtime efficiency is comparable to the latest rendering-based methods. ### **5. Difference to Point-SLAM** Although we are using a simplified isotropic Gaussian representation, each sphere is still a Gaussian rather than a point, including other attributes. So, ours is a GS-based SLAM, which is different from NeRF-based SLAM like Point-SLAAM. Runtime comparisons in Tab. 11 show splatting is faster than ray tracing, and numerical comparisons in Tab. 3-8 show our superior performance over Point-SLAM. ### **6. Global bundle adjustment** As stated in Sec. 3.5, we do have BA, but only at the head frame. Since we employ a large number of Gaussians over the scene and cannot access all Gaussians at the same time, we cannot perform a global BA over all frames. But our superior performance shows no degradation.
But we do have special designs to ensure global consistency and optimality with multi-view constraints in both tracking and mapping, such as finding the frontmost overlapping section as a common reference in tracking, and fixing Gaussians in other sections when mapping the current section. ### **7. Difference to traditional GS-based SLAM** Firstly, our Gaussians are fixed at depth points and have fewer attributes to learn. Secondly, we only need to keep the Gaussians in the current section learnable and in memory, without using keyframes, whereas traditional GS-based SLAM methods require keeping all Gaussians learnable in memory to maintain consistency with the current frame and all keyframes. These designs enable us to employ more Gaussians to recover more details in larger scenes. ### **8. The 2nd misunderstanding** “In traditional Gaussian splatting, … involved in optimization” is a misunderstanding. With a background of SLAM, traditional GS-based methods require keeping all Gaussians (rather than merely the ones in the current frame) learnable and involved in the optimization, since they need to constrain Gaussians using both the current view and the keyframes, aiming to maintain global consistency. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response, which addresses most of my concerns. I will update my rating accordingly. --- Reply to Comment 1.1.1: Comment: Thanks for your review and comments. Really appreciated it. Best, The authors
Summary: The paper presents VTGaussian-SLAM, a novel RGBD SLAM system that utilizes view-tied 3D Gaussians for efficient mapping and tracking in large-scale scenes. It introduces a representation of Gaussians tied to depth pixels, thus improving optimization efficiency and reconstruction quality while enabling better scalability in SLAM applications. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense. Theoretical Claims: Yes, they are theoretically correct. Experimental Designs Or Analyses: Overall, the experimental designs are rigorous, and the discussions are comprehensive. Supplementary Material: The supplementary materials include implementation details and detailed results on each evaluated scene, more analysis and visualizations, code, and a demo video. The submitted materials are comprehensive and facilitate paper understanding. Relation To Broader Scientific Literature: The proposed methodology contributes to the improvement of GS-based RGBD SLAM systems. Essential References Not Discussed: The most related works have been discussed in the paper. Other Strengths And Weaknesses: Strengths: 1) The paper develops an out-of-core algorithm for 3D GS based SLAM. 2) The paper proposes a simple yet effective supervision strategy to improve RGBD SLAM performance, achieving SOTA results. 3) The paper is well written and easy to follow. Weaknesses: 1) The influence of section length. The length of the Gaussian section, i.e., N, is an essential hyperparameter of the proposed system. The Gaussians are maintained within a fixed-length section, while the selection of this hyperparameter is sensitive to datasets. In Lines 257-261, they mention that they choose different N on different datasets, according to image resolution and mapping iterations.
However, although the paper conducted an ablation study on ScanNet to experimentally prove that their selection of the section length S is optimal, there is no substantive analysis or evidence to support that this hyperparameter is related to image resolution and mapping iterations. Or should it be related to the camera motion pattern? A deeper analysis would be appreciated. 2) Lack of implementation details. The number of optimization iterations in tracking and mapping appears not to be reported. Are the optimization iterations consistent across different datasets, and if not, how do they vary? Furthermore, Table 11 could include average operation time. 3) Lack of evaluation on "large-scale" scenes. The paper emphasizes the scalability challenges of SLAM in large-scale scenes. However, the method is only evaluated on datasets with room-level scenes. In general, large-scale scenes in SLAM tasks typically involve building- or even city-level environments. A more comprehensive evaluation on such large-scale scenarios, e.g., KITTI, would strengthen the paper's claims regarding scalability. Other Comments Or Suggestions: 1) On the top of each subfigure in Figure 1, is it Section 1 or Section o, {g}$^1$ or {g}$^o$? 2) Line 369, “Without using sections (“1”), we cannot ...”: is it section (“1”) or section (“o”)? 3) Figure 4 could include the corresponding rendering image and error for each section to enhance visualization. Questions For Authors: 1) Did you prune or densify the GS during the mapping iterations? 2) Selection of hyperparameters {α, β} in tracking. Depth maps from TUM-RGBD and ScanNet++ exhibit significantly higher levels of noise compared to those from Replica. When depth information is imperfect, it is intuitive that the system should rely less on the depth loss. This suggests that the value of β (the weight assigned to the depth loss) should be smaller in such scenarios. However, the system uses a much larger β when running on TUM-RGBD and ScanNet.
Could you provide an explanation for this choice? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your review and positive comments on our idea, contributions, evaluations, and supplementary materials. ### **1. Impact of section length** For fair comparisons with previous methods in rendering quality, we adopt the same number of iterations for mapping. But our Gaussians are view-tied, and there are usually more of them than in previous methods when covering the same number of frames. So, we set a proper section length to ensure our Gaussians in the same section can be well optimized in the same number of iterations. Comparisons in Tab. 10 indicate that the rendering may degenerate if more and more Gaussians are included in a section but still under the same number of iterations. We agree camera motion may also be a factor to consider, but we place more value on fair comparisons. ### **2. Tracking and mapping iterations** For fair comparisons, we follow the previous methods and conduct optimization with the same iterations on different benchmarks, such as 100 iterations for mapping and 60 iterations for tracking on Replica. Tab. 11 reports the operation time on each frame. We will report these iterations clearly in our revision. ### **3. Results on city-level scenes** We follow previous methods like SplaTAM to report our evaluations on widely used benchmarks such as ScanNet++. We also show our advantages in storage complexity: we can learn 20 times more Gaussians to recover more details on all frames than the latest methods, like SplaTAM, and also overcome the obstacle of running out of memory at any frame during optimization. These advantages significantly improve our capability and performance in large-scale scenes. Due to the time limit, we only report our results on several city-level scenes in KITTI to evaluate tracking and mapping performance below. We also report a visual comparison of rendering performance at the link: https://imgur.com/a/FOIgcrz.
|Methods|Gaussian-SLAM|SplaTAM|LoopSplat|Ours|
|-|:-:|:-:|:-:|:-:|
|00 (ATE RMSE$\downarrow$[m])|3.02|58.83|2.22|**2.06**|
|01 (ATE RMSE$\downarrow$[m])|77.51|84.45|74.47|**29.01**|
|05 (ATE RMSE$\downarrow$[m])|128.88|80.39|117.43|**7.74**|
|10 (ATE RMSE$\downarrow$[m])|10.60|43.82|11.39|**4.54**|

|Methods|Gaussian-SLAM|SplaTAM|LoopSplat|Ours|
|-|:-:|:-:|:-:|:-:|
|00 (PSNR$\uparrow$)|15.51|9.82|15.82|**28.54**|
|01 (PSNR$\uparrow$)|15.95|12.89|14.69|**30.33**|
|05 (PSNR$\uparrow$)|16.22|26.48|15.98|**28.19**|
|10 (PSNR$\uparrow$)|15.58|25.58|14.58|**27.59**|
|Peak GPU Use (GiB)|2.74|22.37|3.56|4.79|

We also report memory consumption on KITTI in the table above. Each method uses as many Gaussians as possible until no further improvement can be made. We use a little more memory, but we manage to use more Gaussians to produce much better rendering. ### **4. Densify and prune Gaussians during mapping** No, we do not need these operations, which speeds up mapping. This is because our view-tied Gaussians provide adequate Gaussians to achieve better rendering quality, and we have an efficient strategy to resolve out-of-memory issues. ### **5. Larger weight on the depth loss** Although depth maps from TUM-RGBD and ScanNet are not perfect, low quality such as motion blur, low resolution, and different exposure times may make RGB supervision even more unreliable than depth for us. We found that weighting the depth maps more heavily can take full advantage of our view-tied Gaussians on real images. ### **6. Minor issues** We will revise the manuscript accordingly to resolve the other minor issues. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I do not have further concerns and keep the original score. --- Reply to Comment 1.1.1: Comment: Thanks for your review and comments. Really appreciated it. Best, The authors
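For reference, the ATE RMSE numbers in the tables above are root-mean-square position errors between aligned estimated and ground-truth trajectories. The sketch below is our own simplification (mean-centering instead of the full Umeyama rigid alignment that standard evaluation tools use), so it only illustrates the metric's shape.

```python
import numpy as np

def ate_rmse(est, gt):
    """Absolute trajectory error (RMSE) between estimated and ground-truth
    camera positions, using a simplified mean-offset alignment."""
    est = est - est.mean(axis=0)
    gt = gt - gt.mean(axis=0)
    return float(np.sqrt(((est - gt) ** 2).sum(axis=1).mean()))

# Toy trajectory: three positions along the x-axis, with small y-errors
# injected into the estimate.
gt = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
est = gt + np.array([[0.0, 0.3, 0.0], [0.0, -0.3, 0.0], [0.0, 0.0, 0.0]])
err = ate_rmse(est, gt)  # sqrt((0.09 + 0.09 + 0) / 3) ≈ 0.245
```

Lower is better, and the metric is reported in meters in the KITTI tables, which is why the gaps between methods (e.g., 128.88 m vs. 7.74 m on sequence 05) are so stark.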
Summary: This work presents VTGaussian-SLAM, a novel method for RGB-D SLAM based on a novel view-tied 3D Gaussian representation, with corresponding tracking and mapping methods. The method reduces per-Gaussian parameter optimization (e.g., exact location, rotation, and covariance parameters), so the system can store many more Gaussians in GPU memory for more detailed and/or larger-scale mapping. The method is organized in terms of sections of frames. In a section, added frames spawn new Gaussians (for novel geometry coverage) or update existing ones for uniform appearance and geometry. Crucially, only the current section's Gaussians (and those of a limited number of overlapping sections, for pose consistency) are learnable at any point, removing the need for global consistency with all keyframes. Results on Replica, TUM-RGBD, ScanNet, and ScanNet++ show improvements in camera tracking accuracy (ATE), rendering quality (PSNR, SSIM, LPIPS), and reconstruction metrics (depth L1, F1) over prior state-of-the-art SLAM methods based on both implicit fields (NeRF variants) and 3D Gaussian splatting. Claims And Evidence: The authors claim that by using the view-tied Gaussian representation, the system reduces parameter storage (saving location, rotation, and variance) by directly binding each Gaussian to a depth pixel, and allows more Gaussians to represent local details. The authors use ablation experiments on benchmarks to demonstrate the effectiveness of the method using ATE RMSE, PSNR, and other quantitative metrics. Methods And Evaluation Criteria: Evaluation criteria include ATE RMSE, rendering quality, reconstruction results, and runtime and memory usage. This is a good evaluation of various aspects of neural SLAM. Theoretical Claims: The contribution of this work lies more in algorithm design and is demonstrated experimentally rather than through formal proofs.
The main hypothesis is that "view-tied" Gaussians can represent geometry/color for local views (with supervision from depth) sufficiently well without the expense of unconstrained 3D Gaussian parameters. Experimental Designs Or Analyses: The experimental metrics are well designed, but the benchmarks are all indoor datasets and do not involve the extremely large scenes mentioned in the abstract and introduction. Supplementary Material: The video in the supplementary material explains in detail the main improvements of the article's algorithm and demonstrates the effectiveness of the paper's approach through visual comparison of results. The code provided in the supplementary only has a main.py file, without other files, which makes it impossible to run the code directly. However, the main file does show the main process of the algorithm. Relation To Broader Scientific Literature: The paper's approach builds on 3D Gaussian Splatting and neural implicit SLAM, and correctly cites the body of related research. Essential References Not Discussed: No essential references not discussed. Other Strengths And Weaknesses: 1. Although the authors say their method works better for extremely large scenes, most evaluations are still at single-room or short multi-room scales. So it would be more helpful if benchmark results on city-scale or other large scenes could be shown. 2. The method uses the depth from the RGB-D sensor for each frame. But depth is usually noisy, so why don't the authors optimize the depth value along the camera's viewing-ray direction? Would this lead to better rendering results? 3. For the bundle adjustment part, the authors did not explain how many iterations were used. And after the online mapping stage, does the method use a final refinement bundle adjustment for several iterations (similar to Mono-GS)? Other Comments Or Suggestions: 1.
In the RMSE error section of Table 9, it seems that the results in the first three columns are filled in incorrectly, as they differ too much from the results in the fourth column. Questions For Authors: See weaknesses and other comments. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your review and positive comments on our idea and evaluations. ### **1. Benchmark selection and large-scale scenes** We follow previous methods like SplaTAM to report our evaluations on widely used benchmarks such as ScanNet++. We also show our advantages in storage complexity: we can learn 20 times more Gaussians to recover more details on all frames than the latest methods, like SplaTAM, and also overcome the obstacle of running out of memory at any frame during optimization. These advantages significantly improve our capability and performance in large-scale scenes. As requested, we additionally report our performance on extremely large scenes, such as city-level scenes in KITTI. Due to the time limit, we only report our results on several scenes to evaluate our tracking and mapping performance below. We also report a visual comparison of rendering performance at the link: https://imgur.com/a/FOIgcrz.

|Methods|Gaussian-SLAM|SplaTAM|LoopSplat|Ours|
|-|:-:|:-:|:-:|:-:|
|00 (ATE RMSE$\downarrow$[m])|3.02|58.83|2.22|**2.06**|
|01 (ATE RMSE$\downarrow$[m])|77.51|84.45|74.47|**29.01**|
|05 (ATE RMSE$\downarrow$[m])|128.88|80.39|117.43|**7.74**|
|10 (ATE RMSE$\downarrow$[m])|10.60|43.82|11.39|**4.54**|

|Methods|Gaussian-SLAM|SplaTAM|LoopSplat|Ours|
|-|:-:|:-:|:-:|:-:|
|00 (PSNR$\uparrow$)|15.51|9.82|15.82|**28.54**|
|01 (PSNR$\uparrow$)|15.95|12.89|14.69|**30.33**|
|05 (PSNR$\uparrow$)|16.22|26.48|15.98|**28.19**|
|10 (PSNR$\uparrow$)|15.58|25.58|14.58|**27.59**|
|Peak GPU Use (GiB)|2.74|22.37|3.56|4.79|

We also report memory consumption on KITTI in the table above. Each method uses as many Gaussians as possible until no further improvement can be made. We use a little more memory, but we manage to use more Gaussians to produce much better rendering. ### **2.
Impact of depth noise on the performance** Although Gaussians are fixed at noisy depth values, Gaussian splatting is flexible enough to overfit the current frame and neighboring frames by tuning other attributes like color, opacity, and shape. Our results show that depth noise does not significantly impact rendering performance. Meanwhile, we tried optimizing the position of Gaussians along the ray direction, but we did not find an obvious improvement in rendering performance. We report additional results below. We also report a visual comparison using either fixed Gaussians or movable Gaussians (along the ray) at the link: https://imgur.com/a/oEfbgro.

||10% pixels w/ noises|20% pixels w/ noises|30% pixels w/ noises|Gaussians movable along ray|Ours(w/o additional noises & fix)|
|-|:-:|:-:|:-:|:-:|:-:|
|PSNR$\uparrow$|43.41|43.40|43.29|42.89|43.06|
|SSIM$\uparrow$|0.996|0.996|0.996|0.995|0.996|
|LPIPS$\downarrow$|0.015|0.015|0.015|0.020|0.013|

### **3. Bundle adjustment** As stated in Lines 243-245, our bundle adjustment is only used at the head frame of each section. Since the head frame is so important for starting a section and is used as a reference by the following frames in the same section, this design stabilizes the optimization considerably and achieves high accuracy. Due to the large number of Gaussians over a large-scale scene, we cannot conduct a final or global bundle adjustment using all Gaussians. The 80-iteration optimization is illustrated in Fig. 5. ### **4. Numerical comparison in Tab. 9** We confirmed the results are correct. We keep the experimental setting the same but just use different kinds of Gaussians. So, one analysis here is that these alternatives do not work well with some parameters, producing rendering errors that can accumulate quickly across frames during tracking. ### **5. Code** The code in the supplementary materials is merely for demonstration. We will release the code upon acceptance.
Hyperflows: Pruning Reveals the Importance of Weights
Reject
Summary: The paper proposes a 'prune and regrow' approach during training. The concepts of hyperflows and pressure are introduced. Hyperflows behave as a sort of saliency measure for each neural network weight. Pressure is used to control the sparsity of the network. Pruning behavior during training is analyzed in order to derive scaling laws. ## update after rebuttal I have raised my score slightly. Claims And Evidence: The claims seem sufficient. Methods And Evaluation Criteria: The methods and evaluation seem sufficient. Theoretical Claims: There are no significant theoretical claims in the work. Experimental Designs Or Analyses: The experimental design and analyses seem sufficient. Supplementary Material: No Relation To Broader Scientific Literature: There have been previous works on prune and regrow, though not many. Essential References Not Discussed: N/A Other Strengths And Weaknesses: There is no intuition behind hyperflows other than: Inspired by the well-known insight that the value of something is not truly known until it is lost, we introduce Hyperflows, a dynamic pruning method which determines weight importance by first removing it. Is the idea behind this approach coming from max-flows? Can the authors better explain the novelty and correctness of their approach? Why should Hyperflows and pressure be the way to prune something? Why or how does it tie into scaling laws? Why should a practitioner use Hyperflows as opposed to some competing approach? Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the insightful questions. We would like to clarify the inspiration, novelty, and practical advantages of Hyperflows: **C:** “Is the idea behind this approach coming from max-flows?” **R:** While the notion of “flow” might remind us of max-flow formulations in network theory, our approach is not directly derived from max-flow algorithms. Instead, Hyperflows is motivated by a fundamentally different idea: rather than solving a max-flow problem, we quantify a weight’s importance by measuring the aggregated gradient “flow” when that weight is temporarily removed. This flow essentially captures how much the loss changes if that weight is removed, thereby serving as a proxy for the weight’s criticality. An analogy could nevertheless be drawn to the max-flow algorithm: the weights with larger flow are kept, which can be interpreted as capacity. **C:** “Can the authors better explain the novelty and correctness of their approach?” **R:** The novelty lies in the concepts of pressure and flow, which can be used to drive pruning decisions without direct interference. Also, we show relationships between these notions in the neural pruning laws section, hinting at ties with the fundamental structure of the neural network. As to correctness, we address it through a combination of theoretical proofs and empirical validation. Notably, in our supplementary material A.1, we prove that if pruning a weight $\theta_i$ leads to a larger loss increase than pruning another weight $\theta_j$, then the aggregated gradient (flow) associated with $\theta_i$ is larger in magnitude, which reflects that important weights are identified by our method upon pruning. We will revise the related work section in the paper to clarify all this. **C:** “Why should a practitioner use Hyperflows as opposed to some competing approach? Why should Hyperflows and pressure be the way to prune something?
Why or how does it tie into scaling laws?” **R:** We believe that Hyperflows is a simple framework. It uses the same pruning hyperparameters for all networks, working well out of the box as shown in our experiments, while allowing a desired sparsity to be set. On the other hand, much of the value of Hyperflows resides in the theoretical concepts it derives and how they relate to each other. As opposed to a purely empirical paper focused on results, Hyperflows builds an intuition around the idea of pruning and uses pressure not only as a means to prune but also as a way to analyze the connections between the various effects of pruning, which fall under neural scaling laws. For example, we observe predictable final sparsity when pressure is constant and resilience of critical connections to extreme pressure levels. In one experiment, on LeNet300 MNIST, we observed that pruning the weights between the last hidden layer and the output leads to an identity mapping, which led to an extremely large flow. This happens because each remaining weight is tied to a class. --- Rebuttal Comment 1.1: Comment: I think the rebuttal is appropriate; I have slightly raised my score. I think the novelty of this work is high. It would be nice to have some citations giving a more intuitive tie-in for flows (if accepted) and other "Physics of AI" approaches, if this has not been done already in the draft. --- Reply to Comment 1.1.1: Comment: Thank you for your positive feedback and for raising your score. We appreciate your recognition of the work's novelty, as well as your suggestion regarding additional citations to tie flows into broader "Physics of AI" approaches, which we will address in the final version.
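As we read the rebuttal, a weight's "flow" aggregates the gradient it receives while temporarily zeroed, so weights whose removal hurts the loss most accumulate the largest flow and are candidates for regrowth. The toy numpy sketch below is our own simplification (one gradient evaluation per pruned weight, on a least-squares toy problem); `flow_scores` and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def flow_scores(weights, grad_fn):
    """Score each weight by the gradient magnitude it receives while
    temporarily zeroed ('pruned'). A larger flow suggests a larger loss
    increase if the weight stays removed, so high-flow weights regrow."""
    scores = np.zeros_like(weights)
    for i in range(weights.size):
        pruned = weights.copy()
        pruned.flat[i] = 0.0            # temporarily remove weight i
        g = grad_fn(pruned)             # gradient at the pruned point
        scores.flat[i] = abs(g.flat[i]) # that weight's accumulated "flow"
    return scores

# Toy least-squares fit: loss(w) = ||X @ w - y||^2 / 2, grad = X.T @ (X @ w - y).
X = np.array([[1.0, 0.0], [0.0, 0.1]])
y = np.array([2.0, 0.1])
w = np.array([2.0, 1.0])
grad = lambda w: X.T @ (X @ w - y)
s = flow_scores(w, grad)
# Pruning w[0] hurts the fit far more than pruning w[1], so its flow is larger.
```

This also matches the supplementary claim the rebuttal cites: the weight whose removal causes the larger loss increase receives the larger gradient magnitude at the pruned point.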
Summary: This work proposes a novel algorithm that measures the importance of weights by observing their gradients during a dynamic pruning process. Weights that are believed to be important regrow at a later stage. Overall, the proposed algorithm shows better performance than the methods it is compared with in this work. ## update after rebuttal I believe the current state of the paper is not ready for publication, as the baseline used in the performance comparison is not appropriate. More broadly, I think we should avoid continually proposing new variations of pruning algorithms that yield similar results. That said, this is just my personal opinion and does not necessarily reflect the views of the broader community. Claims And Evidence: Overall the design of the algorithm is reasonable. However, the authors might want to clearly compare the actual computing cost associated with this new algorithm. Is the computing cost similar to that of other algorithms? In addition, there are many other algorithms (please see the link below) that can achieve better performance than the results shown in this paper; it should be justified why those are not included in this work. https://paperswithcode.com/sota/network-pruning-on-imagenet-resnet-50-90 Methods And Evaluation Criteria: The method is overall reasonable. Theoretical Claims: N.A. Experimental Designs Or Analyses: The experimental designs are quite standard, though the authors might want to clarify the computational complexity of the algorithm. Supplementary Material: No. Relation To Broader Scientific Literature: N.A. Essential References Not Discussed: See link below. https://paperswithcode.com/sota/network-pruning-on-imagenet-resnet-50-90 Also see: @article{li2024pushing, title={Pushing the Limits of Sparsity: A Bag of Tricks for Extreme Pruning}, author={Li, Andy and Durrant, Aiden and Markovic, Milan and Yin, Lu and Leontidis, Georgios}, journal={arXiv preprint arXiv:2411.13545}, year={2024} } Other Strengths And Weaknesses: N.A.
Other Comments Or Suggestions: Neural network pruning has a long history, with likely hundreds of algorithms proposed. However, given the distributed nature of neural network representations, the importance of pruning any specific weight may be questionable. Does it still make sense to invest time in developing new variations of pruning algorithms? Questions For Authors: I don't believe making a connection to the scaling laws adds value to this work. L131: "by capturing the features lost from the permanently pruned weights, leading to larger flows" — not sure what you meant by "capturing the features". Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the analysis and for highlighting potential issues with the manuscript. We address them below. **C:** “The author might want to clearly compare the actual computing cost associated with this new algorithm.” **R:** We compared Hyperflows with methods that do not require additional parameters and acknowledged the downside that Hyperflows requires increased FLOPs at training due to learning more parameters. We appreciate the reviewer’s suggestion regarding the analysis of computational efficiency. In response, we conducted a detailed computational analysis, set to be included in the final paper, with the results summarized in Table 1. As shown, Hyperflows consistently utilizes a higher percentage of FLOPs during training, whereas it requires significantly fewer FLOPs at inference. This contrast is due to Hyperflows’ layer-wise sparsity distribution, which prunes more aggressively in computationally expensive layers, such as the 3×3 convolutions in bottleneck blocks. This trend is also illustrated in Figure 12 of C.2. To estimate the number of FLOPs, we approximate the backward pass as 2·fs + fd, where fs refers to the number of FLOPs associated with sparse weight tensors and fd refers to the FLOPs associated with dense t-value tensors. The term 2·fs approximates the backward FLOPs for sparse weights based on their forward FLOPs, following a common convention in the literature, while fd accounts for the backpropagation of the t values, which are maintained in dense form to enable potential weight regrowth. This yields a total training cost of 3·fs + fd FLOPs, indicating that Hyperflows requires at least one-third of the FLOPs of the dense baseline. This can be seen from the ratio (3·fs + fd) / (3·fd) = fs/fd + 1/3, where 3·fd is the total compute cost of a dense network: 1·fd for the forward pass plus 2·fd for the backward pass.
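The FLOPs accounting above can be sketched in a few lines; the fs/fd values used here are purely illustrative, not measured numbers from the paper:

```python
def train_flops_ratio(fs: float, fd: float) -> float:
    """Ratio of Hyperflows training FLOPs to a dense baseline.

    fs: forward FLOPs through the sparse weight tensors
    fd: forward FLOPs through the dense t-value tensors
    Training cost: fs (forward) + 2*fs (backward, weights) + fd (backward, t values)
    Dense baseline: fd (forward) + 2*fd (backward) = 3*fd
    """
    return (3 * fs + fd) / (3 * fd)

# Hypothetical 90%-sparse network where sparse forward FLOPs are 0.1 of dense:
ratio = train_flops_ratio(fs=0.1, fd=1.0)
# ratio == 0.1 + 1/3, matching the fs/fd + 1/3 expression in the rebuttal
```

The 1/3 term is the floor: even at extreme sparsity (fs → 0), the dense backward pass over the t values keeps training cost at one-third of the dense baseline.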
## Table1 |Method|Top-1Acc(%)|Params|Sparsity(%)|FLOPs(Test)|FLOPs(Train)| |---|---|---|---|---|---| |ResNet-50|77.01|25.6M|0.00|1.00x|1.00x| |GMP|73.91|2.56M|90.00|0.10x|0.51x| |DNW|74.00|2.56M|90.00|0.10x|-| |RigL|73.00|2.56M|90.00|0.24x|0.25x| |GraNet|74.50|2.56M|90.00|0.16x|0.23x| |STR|74.31|2.49M|90.23|0.08x|-| |**Hyperflows**|**74.90**|**2.54M**|**90.11**|**0.15x**|**0.60x**| |GMP|70.59|1.28M|95.00|0.05x|-| |DNW|68.30|1.28M|95.00|0.05x|-| |GraNet|72.30|1.28M|95.00|0.12x|0.17x| |RigL|70.00|1.28M|95.00|0.12x|0.12x| |STR|70.40|1.27M|95.03|0.04x|-| |**Hyperflows**|**72.20**|**1.13M**|**95.58**|**0.08x**|**0.52x**| |RigL|67.20|0.90M|96.50|0.11x|0.11x| |STR|67.22|0.88M|96.53|0.03x|-| |GraNet|70.50|0.90M|96.50|0.09x|0.15x| |**Hyperflows**|**70.40**|**0.92M**|**96.42**|**0.06x**|**0.49x**| x = fraction of baseline value **C:** Uncited reference "A bag of tricks..." **R:** We decided not to reference the paper because their weight-sharing scheme affects their definition of sparsity and therefore no longer matches the scope of our manuscript. **C:** “there are many other algorithms which can achieve better performance” **R:** We wanted to compare with state-of-the-art methods that use the same benchmarks as ours, since the process of adding new network architectures to the benchmarks is time-consuming. Thus, we selected methods that were within the computational bounds of what we could afford. Nevertheless, we agree with the reviewer and will compare Hyperflows with the methods mentioned above on the ImageNet benchmark in the final version. **C:** “Does it still make sense to invest time in developing new variations of pruning algorithms?” **R:** Pruning drives superposition and distributed representations, which can help especially in explainable AI.
Thus, we believe that sufficiently advanced network pruning and compression techniques can offer insights into how a neural network works, potentially leading to more efficient models and explainability. For example, Anthropic used sparse autoencoders, which perform compression, to explain facts in LLMs ( https://www.anthropic.com/research#interpretability ). **C:** “... connection to the scaling laws add value to this work”, “by capturing the features lost from the permanently pruned weights, leading to larger flows” **R:** We believe that scaling laws reveal indirectly how the network compresses information as weights are pruned, leading to our analysis in the neural pruning laws section. For example, compression is, perhaps surprisingly, affected by the learning rate on the weights, even though we use a separate learning rate for the t values. When weights are pruned, the network, in order to minimize the loss function, aims to compress the information into the remaining weights, so as to extract as many features as before more efficiently. Since more features are encoded in fewer weights, removing any weight will lead to larger drops in accuracy, and thus to a larger gradient aiming to regrow the weight (in other words, the weight has larger flow). This phenomenon is analyzed in A.1.
Summary: The authors propose Hyperflows, a pruning-during-training method. It assigns each parameter a learnable parameter that determines whether the parameter should be pruned. The effectiveness of Hyperflows is tested across multiple datasets, including CIFAR10, CIFAR100, and ImageNet. It outperforms baseline methods in most scenarios. Claims And Evidence: Yes, the claims made in the submission are supported by clear evidence. Methods And Evaluation Criteria: In fact, I am not entirely sure if the comparison is fair. None of the baseline methods require training additional learnable parameters, so the proposed method requires much higher computational cost than the baselines. To provide a more comprehensive evaluation, the authors should include more dense-to-sparse methods like L0 regularization [1], or other dense-to-sparse methods that learn masks through straight-through estimators. [1] Louizos, Christos, Max Welling, and Diederik P. Kingma. "Learning sparse neural networks through $ L_0 $ regularization." arXiv preprint arXiv:1712.01312 (2017). Theoretical Claims: I have checked Section A.2. However, some of the notations are not explained; for example, $\mathcal{I}$ is not defined anywhere. Experimental Designs Or Analyses: I have checked the soundness of the experimental design of comparing with baselines. Supplementary Material: I have read SM Section A. Relation To Broader Scientific Literature: Researchers may be interested in a novel pruning-during-training method. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The overall performance of the proposed method is promising, surpassing baseline methods. Weaknesses: 1. The organization of the paper makes it very difficult for readers to understand key details. For example: 1. The algorithm is not shown in the main manuscript, making it hard for readers to understand how the algorithm is implemented and how pruning happens.
Moreover, the role of the values defined in Section 3.2 is unclear. If the mask depends solely on $H(t_i)$, then how does the weight flow defined in Section 3.2 contribute to the pruning process? Also, I do not see how $\mathcal{F}$ and $\mathcal{M}$ are used during pruning or analysis. 2. In Section 3.2, the authors mention multiple topologies; how are they implemented? 3. $\mathcal{I}$ is not defined in Section A.2. 4. It is very confusing in Section 3.2, Equation (2), that the gradient of $t_i$ is used while $H(\cdot)$ is not differentiable. The authors should move their explanations of using the STE from lines 176–182 to here. 2. As I mentioned in "Methods And Evaluation Criteria", to provide a more comprehensive evaluation, the authors should include more dense-to-sparse methods like L0 regularization, or other dense-to-sparse methods that learn masks through straight-through estimators, or clarify why they are not needed. Other Comments Or Suggestions: I suggest the authors reorganize the manuscript to make it more coherent and easier to understand. Questions For Authors: Can the authors explain the core difference between HyperFlow and those dense-to-sparse pruning methods that use learnable masks with STE? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback. We address the concerns below: **C:** "differences between Hyperflows and Learnable masks with STE dense-to-sparse methods." **R:** **Common aspects:** - Learnable Masks, L0 global pressure, STE for mask parameters. **Technical differences:** - Hyperflows uses the pressure scaler $\gamma$, not as a fixed regularization but as a network parameter that can be adjusted to control the network behavior. This allows for fine pruning control, making the network able to follow any desired pruning curve (including multiple stages of regrowth, aggressive pruning, lenient pruning, etc.). Additionally, relationships between pruning and other network metrics can be observed through the pressure, which results in Neural Pruning Laws (Section 3.3). These relationships can significantly improve the interpretability of the pruning process, which we consider valuable. - The Regrowth stage distinguishes Hyperflows from other STE methods that use learnable masks. This stage significantly boosts accuracy and is enabled by our scheduler, which precisely controls the pressure scaler. - One aspect that was of great importance to us was the explainability and theoretical buildup of Hyperflows pruning. Most existing methods lack explanations and granular analysis of the underlying mechanisms of pruning beyond simple intuitions and ad-hoc heuristics with the sole purpose of increasing final accuracy. We aimed to tie together both the intuitive grounding (“you don’t know the value of something until you lose it”) with the concrete mechanism used in the method, i.e. aggregating gradients in the absence of the weight to reflect the performance impact of that weight. This gradient mechanism can be developed further to analyze other weight properties such as sign flipping, which we think is valuable for further research.
- Some aspects in Hyperflows, such as gradient aggregation when the weight is pruned, might occur “under the hood” in certain L0 methods, but they are neither a design choice nor exploited to their full potential. **C:** “the authors should include more dense-to-sparse methods like L0 regularization…” **R:** We agree with the reviewer that the comparison could use a separation between learnable-mask or L0 methods and other methods not using additional parameters, with each one compared independently with Hyperflows in terms of accuracy as well as computation. Our intention was for Hyperflows to be both theoretically grounded and to yield results comparable to popular state-of-the-art methods evaluated on the same benchmarks. Many of the L0 regularization methods we found had poor results or were not evaluated on the benchmarks we ran. Despite this, we do think that the paper would benefit from a broader comparison with L0 and learnable-mask methods. Thus, we will compare Hyperflows with additional methods that use learnable masks and make a distinction in the tables between methods that use (or do not use) additional parameters. Our experiments confirm Hyperflows' additional training costs. However, the significantly lower inference cost may offset these expenses. See Table 1 posted for Reviewer 3 with ID: RnFA. **C:** “The algorithm is not shown.” **R:** We will add pseudocode detailing the full pruning algorithm in the main body. **C:** “If mask is solely dependent on $H(t_i)$, …how does flow contribute to the pruning process?”, “how these $F$ & $M$ are used during pruning or analysis.” **R:** $H(t_i)$ defines the binary mask in the forward pass. Our goal is to compute the gradient of $t_i$. Since $H(t_i)$ is not differentiable, to compute the gradient on $t_i$, which is denoted by $G$ in Section 3.2 (2), we need to use an STE for $H(t_i)$.
Furthermore, $G$ has a different meaning when $t \le 0$ than when $t > 0$, so for clarity we denoted $G$ respectively by $F$ and $M$ in these two situations (even though they are the same gradient). The flow $F$ described in Section 3.2 is the gradient $G$ that propagates on $t_i$ when the weight is pruned and acts as an indicator of weight importance. In contrast, $M$ is the gradient $G$ of $t_i$ when the weight is not pruned, which makes $t_i$ follow the magnitude of $w_i$ (proven in A.2). We will revise section 3.2 to make this clearer. **C:** “the authors mention multiple topologies, how are they implemented?” **R:** These topologies are not explicitly implemented in the method; they are implicitly generated by the noise produced by pruning the weights, and in Section 3.2 they are offered as an explanation for how we are able to handle the interdependencies among weights. **C:** “$I$ is not defined, Section A.2.” **R:** $I$ was meant to be the output of the previous neuron; we will clarify this. **C:** “It is very confusing in Section 3.2 Equation (2) that we use gradient…” **R:** We will move the definitions and explanations of the STE in Section 3.2 to immediately follow Equation (2), where $G$ is defined. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed rebuttal. However, considering the current organization of the paper, the comparison analysis, and the marginal improvements demonstrated over existing baselines, I remain hesitant to increase my rating. Therefore, I will maintain my current rating. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback. We performed additional experiments on ImageNet under the same conditions (pretrained weights, 100 epochs), comparing against GMP and GraNet, the strongest baselines in our training setup, and found that Hyperflows remains competitive. The updated results are shown in Table 2, provided in the rebuttal comment for Reviewer 1 (ID: vscf). 
Despite the numerically relatively small improvements, we believe Hyperflows is a strong competitor, with the extra advantage of being supported by solid theoretical grounding. We appreciate your review and will enhance the organization and clarity of our analysis.
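The Heaviside-mask-with-STE gradient mechanism discussed in the rebuttal above can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the single-neuron model, squared-error loss, and all values are illustrative assumptions, chosen only to show how a gradient still reaches $t_i$ (the "flow" $F$) when its weight is masked out:

```python
import numpy as np

def heaviside(t):
    # Hard binary mask: 1 where t > 0 (weight active), 0 otherwise (pruned)
    return (t > 0).astype(float)

def ste_grads(w, t, x, target):
    """Forward pass with a hard mask H(t); backward treats dH/dt = 1 (STE).

    Returns the gradient on each t_i. For pruned weights (t_i <= 0) this is
    the "flow" F described in the rebuttal; for active weights (t_i > 0) it
    is the gradient denoted M. Both are the same gradient G, interpreted
    differently depending on the sign of t_i.
    """
    mask = heaviside(t)
    y = np.sum(mask * w * x)   # masked linear forward pass
    dL_dy = y - target          # gradient of 0.5 * (y - target)^2
    grad_t = dL_dy * w * x      # STE replaces the (zero a.e.) dH/dt with 1
    return grad_t, mask

w = np.array([0.5, -1.0, 2.0])
t = np.array([1.0, -0.2, 0.3])  # second weight is currently pruned
x = np.array([1.0, 1.0, 1.0])
grad_t, mask = ste_grads(w, t, x, target=0.0)
# grad_t[1] is the flow on the pruned weight: a nonzero signal that can
# push t_1 back above zero and regrow the weight, as in the rebuttal.
```

The key point matches the authors' description: the true derivative of the Heaviside step is zero almost everywhere, so without the straight-through estimator no gradient would ever reach a pruned weight's $t_i$.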
Summary: This work proposes a novel method for the pruning of parameters from deep neural network models. It focuses on defining a network topology via a per-parameter measure that captures a tradeoff between a pruning 'pressure' applied to every node and the 'flow' of each parameter. This 'flow' captures a gradient signal proportional to how much the loss would be affected if a parameter were removed. The behaviour of this novel method, 'Hyperflows', is examined across a range of hyperparameters and in multiple neural network models. It is also compared against existing state-of-the-art methods and shown to outperform many current competing alternatives. Claims And Evidence: Should the comparisons all be made fairly, all claims would be well supported by clear and convincing evidence. Having said that, one potentially problematic concern is that the experimental methods for this work are not very clear. Therefore it is difficult to ascertain whether the comparisons are fair. There is inconsistency; for example, the methods mention that 160 epochs of training are used, whereas according to Appendix D the ImageNet-trained models have a different length of training. Furthermore, the paper from which comparison results are taken for Tables 1 and 2 (Liu et al. 2022) appears to describe a custom training setup (taking trained 90% and 95% sparse networks and further tuning for 30 epochs to achieve extreme sparsities) which is not described here for this method. This could have significant implications for whether this method truly reaches a state of the art. Methods And Evaluation Criteria: Methods and evaluation criteria are in line with existing work and appropriate. Theoretical Claims: The theoretical and methodological descriptions are sufficiently correct, but are somewhat out of order.
Most importantly, a derivative with respect to the Heaviside function (H) is given as 1.0 before it is ever stated that a straight-through estimator is assumed. The claim beforehand appears to suggest that the flow is measured based upon an exact derivative when it is in fact approximate. This could be clearer. Experimental Designs Or Analyses: As mentioned in the claims section above, I have concerns regarding whether the experimental design is comparable between those models run for this paper (GMP, GraNet, Hyperflows) and the rest of the comparisons (e.g., RigL, STR, etc.). It appears that a number of results were taken from other papers, but the experimental design is unclear and does not appear to conform to the same setup. Pruning, as pointed out by one of the references (Gale et al. 2019), can be highly dependent upon the parameterization used for training. Supplementary Material: None. Relation To Broader Scientific Literature: It appears that this work is well related and embedded in the broader literature. It also appears that this work contributes to the overall field very well. Essential References Not Discussed: None which I am aware of. Other Strengths And Weaknesses: This paper is, on the whole, written well and contributes a neat setup to the problem of finding a suitable topology for a network. Its results are also impressive and clearly required a great deal of effort to produce. Some of the methods section is a little loose and could be more rigorous and clear. Other Comments Or Suggestions: - Figure 1 is never referenced in the text (as far as I can tell). Please do refer to it somewhere. - The reference to Gale et al. 2019 is incorrect and shows up incorrectly in the main text. - GMP is referenced to Gale et al. 2019 but originated in Zhu et al. 2019. It may be good to reference the original work, and also to define some of these acronyms, which are never made clear.
Questions For Authors: No specific questions, I see this as a neat paper but would like a great deal more clarity on how precisely the training was done for these models and therefore whether it is truly comparable to the other models in Tables 1 and 2. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed feedback and for recognizing the potential and novelty of our proposed Hyperflows method. Below we address the main concerns raised: **C:** “There is inconsistency, for example the methods mention that 160 epochs of training are used, however according to Appendix D the ImageNet trained models have a different length of training” **R:** In the main body of our experimental section, we omitted to specify that we train for 160 epochs on everything but ImageNet (we will clarify this in the final version if eventually accepted). For ImageNet, we pruned the network for 90 epochs, with a further 30 epochs for regrowth, adding up to a total of 120 epochs. Note that GraNet and RigL train ImageNet for 100 epochs, which gives a slight advantage to Hyperflows. Table 1, posted for Reviewer 3 (ID: RnFA), contains the newly computed results for ImageNet, where pruning takes place in the first 70 epochs, with an additional 30 epochs for regrowth. We also added the FLOPs comparison between methods in the same table. **C:** “Furthermore, the paper from which comparison results are taken for Tables 1 and 2 (Liu et al. 2022) appear to describe that they used a custom training setup (by taking trained 90 and 95% sparse networks and further tuning for 30 epochs to achieve extreme sparsenesses) which is not described here for this method”. **R:** Our comparison focused on two kinds of methods: during-training and one-shot methods. For during-training methods, we aimed to evaluate them under the same conditions as Hyperflows, by running them in a post-training setup. We did this for GraNet and GMP, but (i) the computational costs were high, since we needed to do a learning rate search as well as test both dense-to-sparse and sparse-to-sparse setups and report the best results.
Furthermore, (ii) in almost all comparisons between GraNet, GMP, and the other methods, the latter underperformed, so we believe they would do so in the post-training setup as well, without changing the overall hierarchy of results presented in Table 1. For reasons (i) and (ii), we took the results from GraNet [2106.10404], which trained the networks for the same number of epochs, but not in a post-training setup. Nevertheless, we agree with the reviewer about the need for a rigorous design of experiments and will rectify the results by running all the methods for the final version of the paper, for both Table 1 and Table 2. Furthermore, for one-shot pruning, we considered that our post-training setup is not suitable, but we decided to report their original results since many during-training pruning methods also compare to one-shot methods. If the reviewer considers that one-shot pruning methods' results are of no interest, we could remove them from the final paper. **C:** “The claim beforehand appears to suggest that the flow is measured based upon an exact derivative when this is in fact approximate. This could be clearer.” **R:** To bring more clarity, in the final version we will move the definitions and explanations of the STE from Section 3.2 to just after (2), where G is defined. **C:** “The reference to Gale et al 2019 is incorrect”, “GMP is referenced to Gale et al 2019”, “Figure 1 is never referenced in the text” **R:** Thank you for noticing; we will rectify this and make sure to reference the original papers. Moreover, the reference is made just under the figure, on lines 203-204; to make it easier to find, we will move the figure after the reference. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed responses. Given that the comparisons in the submitted version were indeed apples to oranges (i.e.
quite different in retraining setup) and the new Table of performances demonstrates that GraNet can often outperform Hyperflows, I maintain my current recommendation and do not upgrade it. This is an interesting approach but I am hesitant to suggest an outright acceptance. I would be in favour of an overhaul of the results so that a clear apples to apples comparison could be done. This would ideally show the different methods with precisely the same training recipe applied. I am aware that this can be extremely computationally intensive but in the field of pruning comparisons this clarity is absolutely necessary. --- Reply to Comment 1.1.1: Comment: Thank you for your insightful feedback. First, we would like to clarify that the 30-epoch fine-tuning setup for 90% and 95% sparse networks mentioned by the reviewer was employed solely to assess plasticity and did not form part of GraNet’s main pruning pipeline, which starts from either a 50% randomly initialized sparse network or a fully dense network. In other words, there are two main distinctions between our initial training setup and GraNet’s: first, Hyperflows used pretrained weights, and second, our method was trained for 120 rather than 100 epochs on ImageNet. To address these concerns, we conducted additional ImageNet experiments following the Hyperflows setup (pretrained weights, 100 epochs), as summarized in Table 2. In these experiments, GraNet and GMP show only slight gains from pretraining, with Hyperflows outperforming GMP and remaining close to GraNet. These methods were chosen because they were the strongest competitors, and we do not expect the remaining experiments to alter the current performance hierarchy. In the final version of the manuscript, we will run the remaining experiments under identical conditions to ensure a fair, apples‑to‑apples comparison. 
**Table 2** | Method | Top-1Acc(%) | Params | Sparsity(%) | FLOPs(Test) | FLOPs(Train) | |-------------|------------:|--------:|------------:|------------:|-------------:| | ResNet-50 | 77.01 | 25.6M | 0.00 | 1.00x | 1.00x | | **GMP** | **74.09** | **2.56M** | **90.00** | **0.10x** | **0.51x** | | DNW | 74.00 | 2.56M | 90.00 | 0.10x | - | | RigL | 73.00 | 2.56M | 90.00 | 0.24x | 0.25x | | **GraNet** | **74.48** | **2.56M** | **90.00** | **0.16x** | **0.23x** | | STR | 74.31 | 2.49M | 90.23 | 0.08x | - | | **Hyperflows** | **74.90** | **2.54M** | **90.11** | **0.15x** | **0.60x** | ||||||| | **GMP** | **70.87** | **1.28M** | **95.00** | **0.05x** | **-** | | DNW | 68.30 | 1.28M | 95.00 | 0.05x | - | | **GraNet** | **72.54** | **1.28M** | **95.00** | **0.12x** | **0.17x** | | RigL | 70.00 | 1.28M | 95.00 | 0.12x | 0.12x | | STR | 70.40 | 1.27M | 95.03 | 0.04x | - | | **Hyperflows** | **72.20** | **1.13M** | **95.58** | **0.08x** | **0.52x** | ||||||| | RigL | 67.20 | 0.90M | 96.50 | 0.11x | 0.11x | | STR | 67.22 | 0.88M | 96.53 | 0.03x | - | | **GMP** | **70.39** | **0.90M** | **96.50** | **-** | **-** | | **GraNet** | **70.79** | **0.90M** | **96.50** | **0.09x** | **0.15x** | | **Hyperflows** | **70.40** | **0.92M** | **96.42** | **0.06x** | **0.49x** |
Certification for Differentially Private Prediction in Gradient-Based Training
Accept (poster)
Summary: This paper presents a certification algorithm for assessing the stability of model predictions, which helps reduce the smooth sensitivity of the predictions. By providing a tighter bound on smooth predictions, the algorithm enhances the accuracy of private predictions. Empirical experiments demonstrate that this certification improves private binary classification and enhances the accuracy of noisy aggregation. ## update after rebuttal I will keep my score unchanged. Claims And Evidence: The claims are valid. I appreciate how the authors introduce prediction stability: ensuring a stable prediction naturally leads to a reduction in smooth sensitivity. This insight effectively motivates the development of an algorithm to verify prediction stability. Methods And Evaluation Criteria: The experiments are well designed. The reduced smooth sensitivity helps improve performance on two applications, private binary classification and PATE. Theoretical Claims: I didn't fully check the proofs, but the results make sense to me. Experimental Designs Or Analyses: Yes. The smooth sensitivity is reduced using the certification. Model prediction accuracy is improved in certain cases. Supplementary Material: I didn't check all the supplementary material. Relation To Broader Scientific Literature: The paper gives an alternative way to do private prediction with improved accuracy. This also motivates more investigation into private prediction, which might be more accessible compared to DP-SGD. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strength: The use of certification for prediction stability to reduce smooth sensitivity is an innovative idea. The paper presents two important applications: private binary classification and PATE. Additionally, the writing is clear, and the motivations are well articulated. Weakness: My primary concern is the efficiency of the algorithm.
The certification process involves clipping (similar to DP-SGD) and incurs additional overhead for bounding stability. As the authors acknowledge, the algorithm requires 20 times more computation time. Given this substantial overhead, the improvement over DP-SGD is relatively modest, which may lead users to question whether adopting the new algorithm is worthwhile. Another limitation, as stated in the paper, is that the current method is restricted to binary classification. Additionally, its performance in PATE is effective only when the number of queries is small. Other Comments Or Suggestions: No. Questions For Authors: Could you also give the memory overhead for the Algorithm 1? For example, when running gpt2, what's the memory consumption compared to DP-SGD? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their time and consideration of our work. * Could you also give the memory overhead for the Algorithm 1? For example, when running gpt2, what's the memory consumption compared to DP-SGD? Both DP-SGD and AGT require the computation of per-sample gradients, incurring a large memory overhead compared to standard pytorch training. However, much like DP-SGD, AGT can employ “virtual batches” to separate physical steps (gradient computation) and logical steps (parameter updates) (see https://opacus.ai/docs/faq, for example). This means that memory overhead can be managed by only storing per-sample gradients for a subset of a batch at a time. Computing SEMin/SEMax independently over each virtual batch amounts to storing the top-k per-sample gradients, incurring a memory overhead of O(k * no. parameters) during training. We thank the reviewer for the questions and will reference this in a revised version of our main text or if there is not space to do so will have a discussion of this in our Appendix. * Given this substantial overhead, the improvement over DP-SGD is relatively modest, which may lead users to question whether adopting the new algorithm is worthwhile. We first point out that the computational cost of our approach actually compares favorably with other private prediction methods that use ensembles. When compared with DP-SGD we do acknowledge that it requires additional overheads, but the privacy analyses are not exactly comparable. Additionally, as we point out in our global response, there are many avenues for future works to sharpen the bounds that we provide using the current instantiation of our framework. With many future directions for improvement we hope that this line of research yields privacy guarantees that are competitive with DP-SGD. * Another limitation, as stated in the paper, is that the current method is restricted to binary classification. 
We agree with the reviewer that our current practical instantiation of the framework is only practically validated on binary classification, however, the framework itself is general and future works will be able to extend this to regression and multi-class classification by leveraging tighter or use-case specific bounds. * Its performance in PATE is effective only when the number of queries is small. We would like to highlight that in our initial submission we do not use our bounds directly in the student-teacher set up of PATE, but only compare the mechanisms in the context of private prediction. However, as indicated in our response to reviewer dcek, using our private prediction mechanism to train a student allows the privacy budget to be fixed for an unlimited number of future queries. With regards to ensemble approaches to private prediction, however, it is necessarily true that as the number of queries grows so does the expended privacy budget. However, we emphasize that the tighter privacy bounds of our approach are an orthogonal development to other private prediction methods and in theory can tighten other approaches enabling us to answer more queries within the same budget. For example, if the ensemble is using the global sensitivity in order to privatize its predictions, then there are cases (as we show in our paper) where our bound makes things strictly tighter thus allowing for more queries to be answered at the same privacy budget. We thank the reviewer for the comment and will try to clarify this in future works.
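The virtual-batch bookkeeping described in this rebuttal (per-sample gradients computed one micro-batch at a time, keeping only the top-k, for O(k · n_params) memory) can be sketched as follows. This is a minimal NumPy illustration; the function name and the norm-based selection are our own stand-ins, not AGT's actual API.

```python
import numpy as np

def topk_per_sample_grads(micro_batches, k):
    """Process per-sample gradients micro-batch by micro-batch, retaining only
    the k gradients with the largest L2 norms seen so far (illustrative sketch
    of the memory-managed SEMin/SEMax bookkeeping; names are hypothetical)."""
    kept = np.empty((0, micro_batches[0].shape[1]))
    for grads in micro_batches:              # grads: (batch_size, n_params)
        pool = np.vstack([kept, grads])      # at most k + batch_size rows live
        norms = np.linalg.norm(pool, axis=1)
        kept = pool[np.argsort(norms)[-k:]]  # keep top-k by norm
    return kept
```

Memory stays bounded by the retained top-k rows plus one micro-batch, rather than the full logical batch of per-sample gradients.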
Summary: This paper introduces a new approach for improving differential privacy in machine learning predictions. The authors propose a method to compute tighter dataset-specific upper bounds on prediction sensitivity by using convex relaxation and bound propagation techniques. Their approach called abstract gradient training analyses how model parameters change when data points are added or removed from the training set. By combining these bounds with smooth sensitivity mechanisms, they achieve significantly better privacy-utility trade-offs compared to methods based on global sensitivity. The authors evaluate their approach on medical images and sentiment analysis. The method allows users to dynamically adjust privacy budgets and works with complex training configurations like federated learning. ## Update after rebuttal I originally recommended accept and had some non-urgent comments. The authors responded well to those and I kept my score (4). Claims And Evidence: - The authors claim that fewer than 10 runs of the AGT algorithm are typically sufficient to capture most privacy benefits. Although preliminary experimental results support this claim for the chosen tasks, the paper lacks a more systematic sensitivity analysis. The robustness of this claim in more diverse settings is not explored. - The computational overhead (reported as 20–40× standard training) may limit the practicality claims of the method despite the improved sensitivity bounds. Methods And Evaluation Criteria: - The authors test on both imaging and natural language data, which demonstrates the versatility of the approach. - However, the experimental validation is limited to a few binary tasks. It remains unclear whether these tighter bounds extend practically to more complex models (e.g. deep multi-class networks) or larger scale real world applications. That remains more of a theoretical promise. Theoretical Claims: - The method development is thorough and mathematically grounded. 
However, many proofs are deferred to appendices with some key assumptions. - The use of interval bound propagation (IBP) to compute gradient bounds, while innovative, may be sensitive to the choice of activation functions and loss functions. - Algorithm 1 is grounded; some proofs are deferred to the appendix, which I checked and did not see any issues with. Experimental Designs Or Analyses: While the experiments compare the proposed method against baselines across various privacy budgets and even examine ensemble size effects, the discussion of hyperparameter tuning is relatively brief. It is not entirely clear how sensitive the performance is to choices like batch size, number of epochs, or the precise method used for bound propagation. Supplementary Material: The appendix includes many of the required proofs and some additional experiments. I reviewed the appendix; I could not verify all appendix proofs completely given the number of them, but I found no issues. Relation To Broader Scientific Literature: Overall, the paper provides a reasonably good discussion of how its contributions build on and extend prior work in both differential privacy and neural network verification. But a few areas could benefit from deeper contextualization, for example, more explicit comparisons with the smooth sensitivity framework introduced by Nissim et al. (2007) and with recent advances in multi-neuron convex relaxations for neural network verification, e.g. Wong and Kolter (2018) and Tjandraatmadja et al. (2020). Essential References Not Discussed: There are a few papers that I feel the authors might have missed in this area, [1] for the background of smoothness in privacy and [2] for inclusion in the discussion about inaccurate estimation in privacy. [1] Nissim, Kobbi, Sofya Raskhodnikova, and Adam Smith. "Smooth sensitivity and sampling in private data analysis." Proceedings of the thirty-ninth annual ACM symposium on Theory of computing. 2007. [2] Casacuberta, Sílvia, et al. 
"Widespread underestimation of sensitivity in differentially private libraries and how to fix it." Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security. 2022. Other Strengths And Weaknesses: Strengths: - The authors present a novel approach that bridges verification-based techniques with differential privacy theory - The work provides rigorous mathematical proofs and analysis throughout - The authors test their approach across various datasets and model architectures Weaknesses: - The method’s performance is sensitive to hyperparameter choices such as batch size and the number of training epochs. These dependencies, along with the assumption of a fixed data ordering, might restrict the practical applicability of the method in more dynamic training scenarios. - The method's effectiveness diminishes as datasets become less separable or when limited data is available, indicating potential fragility under challenging data conditions. - Some of the technical details of bound propagation and sensitivity certification might benefit from additional explanation but this is not a requirement. Other Comments Or Suggestions: - The operators SEMin and SEMax are only briefly described, a clearer definition would be better - PATE is also mentioned without description or definition Questions For Authors: 1. Might the element-wise clipping have a different bias-variance tradeoff compared to the more standard l2 clipping? What effect could this have? 2. 40x overhead would be impractical for many real world applications, do the authors see any avenues or potential for reducing this? 3. Cauchy noise has been used in some previous work, but do the authors see any specific advantages over Laplace noise? 4. Could it be possible to quantify or characterise what level of data separability is needed for the method to provide meaningful advantages over global sensitivity approaches? 5. 
Have any automatic approaches been explored for selecting optimal k values that might reduce the need for multiple training runs while still achieving tight bounds? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their time and careful review of our work. * The authors claim that fewer than 10 runs of the AGT algorithm are ... sufficient ... [this] lacks a more systematic sensitivity analysis. We appreciate the reviewer's point that, though empirically we find a small number of AGT runs is sufficient and we believe this is a general result, we cannot rule out the need for many runs of AGT. We would like to stress that in Appendix C we explicitly study how different numbers of AGT runs result in tighter bounds, which is in fact an attempt to study the robustness of this claim. As this message may have been unclear, in our revision we will more explicitly reference this section in the main text. * The computational overhead (reported as 20–40× standard training) may limit the practicality ... any avenues or potential for reducing this? The computational overhead of our method as presented is indeed a practical hurdle to its adoption; however, we note that this paper is the first instantiation of a new approach to private predictions and we hope that -- as with other approaches to privacy -- future research will improve our bounds and reduce this overhead. To reduce it, we highlight that the tightness of our smooth sensitivity bounds depends on the tightness of the local sensitivity bound and on the number of values of k for which AGT is run. Given sufficiently tight bounds on the local sensitivity (e.g., with future advancements), running AGT for even a single value of k may be sufficient to realise significant privacy benefits. Tighter bounds and careful choice of k values will be a direction for future work. Similarly, particular models (i.e., particular architecture or learning choices) that admit tighter local sensitivity bounds may also be valuable directions of study. * Interval bound propagation (IBP) ... may be sensitive to the choice of activation functions and loss functions. 
One benefit of our proposed approach is that any bound propagation technique (e.g., one chosen to match the model) can be used to tighten our bounds. This will also be a future direction of work to tighten bounds in particular cases. * It is not entirely clear how sensitive the performance is to choices like batch size, number of epochs. In the current version of our submission we highlight that batch size is typically taken to be the maximum possible size, as this results in the tightest bounds. In a revised version we will include an ablation of the batch size in Appendix E. * More explicit comparisons with the smooth sensitivity framework introduced by Nissim et al. (2007), ... Wong and Kolter (2018) ...Tjandraatmadja et al. (2020) Our approach to tightened privacy explicitly uses the framework of Nissim et al. (2007); however, adapting our approach to further developments in local DP is an important future work. With regards to further convex relaxations, our framework is general and can use any propagation method. We thank the reviewer, will clarify this, and will add references to the works the reviewer mentions. * There are a few papers that I feel the authors might have missed in this area, [1] ... and [2]. We do cite [1]; however, we are aware that there are multiple versions online, though each has the same theoretical framework that we reference and use extensively. We did not cite [2] and will do so in a revision of our submission. * Might the element wise clipping have a different bias-variance tradeoff ...? Yes, the element-wise clipping may introduce different biases in the final model. The preliminary results in this paper, particularly Figure 5, highlight that we actually observe better utility than DP-SGD, indicating that this is not substantial. * Cauchy noise has been used ... advantages over Laplace noise? 
Smooth sensitivity requires the use of an “admissible” noise distribution, each of which comes with its own theoretical privacy loss. Of the choices discussed by Nissim et al., only Cauchy noise satisfies pure (delta=0) differential privacy. * ... quantify or characterise what level of data separability ... to provide meaningful advantages over global sensitivity approaches? We thank the reviewer for the question, as it raises a very interesting point. We present Figure 3, which visualizes this phenomenon, and in our GPT experiment we hypothesize that separability is the reason for the strong bounds. However, characterizing separability itself in a general way is non-trivial, and therefore characterizing the relationship between our bounds and separability is also non-trivial and an interesting future work. * Have any automatic approaches been explored for selecting optimal k values ...? Selecting optimal k values is a challenging problem with many potential heuristic approaches that may be effective in reducing the computational overhead of our method and may be explored in future works. --- Rebuttal Comment 1.1: Comment: I thank the authors for their rebuttal which adequately addressed my concerns. The clarifications regarding AGT sensitivity analysis, use of Cauchy noise and separability considerations are particularly helpful. With the promised additions the paper will be strengthened. I maintain my position that this is a solid contribution and recommend acceptance.
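The pure-DP Cauchy mechanism discussed in the rebuttal can be sketched as follows. This is a hedged illustration of the smooth-sensitivity result of Nissim et al. (2007): for standard Cauchy noise, releasing $f(x) + (6\,S(x)/\varepsilon)\,\eta$ is $\varepsilon$-DP when $S$ is an $(\varepsilon/6)$-smooth upper bound on local sensitivity. The constants here follow the standard-Cauchy case as we understand it and should be checked against the original paper before use.

```python
import numpy as np

rng = np.random.default_rng(0)

def cauchy_release(value, smooth_sens, eps):
    # Pure eps-DP (delta = 0) release: value + (6*S/eps) * eta, eta ~ Cauchy,
    # assuming smooth_sens is an (eps/6)-smooth bound on local sensitivity
    # (constants are for the standard-Cauchy case; verify against Nissim et al.).
    return value + (6.0 * smooth_sens / eps) * rng.standard_cauchy()
```

The heavy tails of the Cauchy distribution are what allow delta = 0, at the cost of occasional large noise draws, which is the trade-off against Laplace noise raised in the review.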
Summary: This paper studies upper bounds on the sensitivity of prediction in machine learning models. By doing so, the paper presents a tighter privacy analysis, and then reports experimental results showing a wide improvement in the tightness of the privacy bounds. ## update after rebuttal I raised my score to a 4. Claims And Evidence: The claims are all clear and convincing. Methods And Evaluation Criteria: The paper evaluates on multiple datasets, and the results show a large improvement in tightening the privacy predictions. Theoretical Claims: I checked the proofs and discussions in both the main paper and the appendices, and to the best of my knowledge, there are no issues. Experimental Designs Or Analyses: The experiments done are extensive and support the result of the paper. The experiments are sound. Supplementary Material: I reviewed the written appendices of the paper. Relation To Broader Scientific Literature: Privacy is a very important topic in real world scenarios, and offering tighter bounds and privacy prediction stability moves us a step closer to being able to deploy DP in more applications. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper is well written and easy to read. The methods used are not the most novel, but the work has significance. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their kind words and for the careful work of checking the proofs and technical steps of our work. In their review they did not necessarily provide a strong signal of the weaknesses they would like to see addressed in relation to their score. We hope that both our responses to other reviewers and the updates we have promised address any concerns they may have, such that they are confident in recommending acceptance.
Summary: The paper proposes to bound local sensitivity of predictions of models learned with gradient-based methods using interval bound propagation. Further, the paper uses the result to construct a sample-and-aggregate procedure for prediction ensembles. The paper then demonstrates that using the proposed bounds enables significantly improved utility for the same query budget over global sensitivity. ## Update after rebuttal I strongly suggest quantitatively comparing the data-dependent bounds obtained with IBP and the standard data-dependent bounds for report-noisy-max from Papernot et al., 2016, 2017, which are key baseline data-dependent analyses not compared to in the current version. Claims And Evidence: The claims are well supported by theory and experimental evaluation. Methods And Evaluation Criteria: The experimental settings in the paper make sense to show that the proposed local sensitivity framework outperforms global sensitivity in single-model private prediction and ensemble prediction. Considering that PATE is mentioned several times as a motivation for the ensemble setting, it is quite strange to not see a comparison with PATE. Indeed, this looks like one additional step of training student models on top of the experiment in Fig. 5. Moreover, PATE also relies on smooth sensitivity (specifically, the [2018 version](https://arxiv.org/abs/1802.08908)) and data-dependent privacy bounds. The present paper does not seem to compare the bounds obtained with IBP to these standard simpler bounds, only comparing with global sensitivity. This seems to be a significant drawback in the evaluation setting. I would be happy to increase my score if such results were available. Theoretical Claims: To the best of my understanding, the claimed results seem correct, given the IBP propagation bounds are correctly applied. Experimental Designs Or Analyses: See the "Methods and Evaluation Criteria" section. 
Supplementary Material: I have reviewed the supplemental material, specifically proofs of Lemma 4.2, 4.4, and Theorem 5.4. Relation To Broader Scientific Literature: The paper introduces a new way of computing local sensitivity applicable to machine learning by using interval bound propagation. This establishes a new intersection between adversarial robustness certification and differential privacy. Essential References Not Discussed: The [2018 version of PATE](https://arxiv.org/pdf/1802.08908) also uses smooth sensitivity. To be completely well-positioned with respect to the prior work on private prediction in machine learning, the paper should (1) compare the IBP bounds to PATE data-dependent bounds, and (2) have a new experiment comparing student-teacher pipeline when using the proposed approach and the PATE approach. Other Strengths And Weaknesses: Connecting the literature on adversarial robustness certification and DP is valuable, and might be an avenue for a fruitful future direction of research. Other Comments Or Suggestions: - L693: GTP -> GPT - Fig 2 and 3 have a different order than referenced in the text. Questions For Authors: --- Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their time and consideration of our work. * It is quite strange to not see a comparison with PATE. Indeed, this looks like one additional step of training student models on top of the experiment in Fig. 5. Fig. 4 illustrates the privacy-utility tradeoff of our method compared to the subsample and aggregate mechanism employed by PATE. As a result, any gains in tightness of privacy analysis here directly translate into the privacy-utility costs of training subsequent student models using the PATE framework. While the comparison with DP-SGD in Fig. 5 would be more favourable following a student-teacher framework, this trade-off has been well studied in previous works. Further, this would require consideration of modified training settings, e.g. having access to an unlabelled public dataset. Nonetheless, we agree with the reviewer that understanding how the gains in our approach translate to improved trade-offs in student models is an interesting and potentially valuable future contribution. In this direction, we provide some preliminary results below, that will accompany a more thorough discussion in our appendix. * I would be happy to increase my score if such results were available. In a revised version of the paper, we will incorporate this discussion into the main text and reference an appendix section that presents preliminary results on applying our mechanism in a teacher-student training setting. Specifically, we re-run the experimental setup from Figure 5, using a privacy budget of (\epsilon, \delta) = (10, 10^{-5}) to label Q = 100 data points held out from the training dataset (which we assume to be our "public" unlabeled dataset). We emphasize that training a student model does fix the privacy budget and which privacy budget is selected will have a significant effect on the results. Once this is done, resulting teacher-generated labels are then used to train a student model. 
Our findings indicate that the student model's performance under each mechanism aligns with the accuracy levels observed at the corresponding inference budget in Figure 5. We hope this increases the reviewer's confidence in our contribution. | Teacher Mechanism | Blobs | OctMNIST | IMDB | | ------------------------------------------- | ----- | -------- | ---- | | Single model, global sensitivity | 82.8 | 12.7 | 54.4 | | Single model, smooth sensitivity | 99.8 | 18.7 | 73.5 | | Subsample and aggregate, global sensitivity | 99.5 | 14.1 | 73.0 | | Subsample and aggregate, smooth sensitivity | 98.1 | 19.8 | 71.7 | | DP-SGD | 1.0 | 81.2 | 70.5 | * Moreover, PATE also relies on smooth sensitivity (specifically, the 2018 version) and data-dependent privacy bounds. We thank the reviewer for highlighting this variant of PATE that leverages tighter, data-dependent privacy analysis. We will be sure to cite it in the revised version of our submission. While we currently do not include comparisons with such methods, we highlight to the reviewer that the tightening approach introduced in our work is orthogonal to those used in subsequently developed PATE mechanisms. We emphasize that this suggests that our bounds could potentially be combined with these tighter privacy analyses to enable even stronger privacy guarantees. We are grateful for this insightful suggestion and will make it clear in our revision that more advanced mechanisms exist beyond those we evaluate, and that integrating our approach with them represents a promising direction for future research. --- Rebuttal Comment 1.1: Comment: Thank you for the response. For "I would be happy to increase my score if such results were available" the results I also meant the comparison to the existing data-dependent bounds. Let me make this more precise, and let us forget the 2018 PATE for the moment. Consider Theorem 3 in the original PATE paper. 
It provides a data-dependent privacy guarantee of report noisy max with Laplace noise used when aggregating predictions from a teacher model ensemble (the 2018 version of PATE introduces additional results on this). As far as I understand, this is directly an appropriate baseline for your new method based on IBP, at least in some of the settings. Could you please explain in detail why is it orthogonal? If this is in fact not orthogonal, then could you provide a comparison of, e.g. obtained epsilon values with the classical PATE data-dependent analysis and the proposed IBP-based analysis? --- Reply to Comment 1.1.1: Comment: We thank the reviewer for clarifying their statement. We agree that our current comparison for ensemble models focuses on the basic privacy analysis presented in the original PATE paper (Papernot et al., 2016), whereas that work also includes a tighter, data-dependent privacy accounting. The data dependence in their analysis arises from usage of the vote histogram corresponding to specific predictions. In contrast, our method incorporates the sensitivities of the models themselves to the training data, as well as the vote histogram. The tighter privacy analysis from the original PATE paper indeed represents an interesting and relevant baseline, and we will aim to include corresponding results in the final version of the paper.
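The aggregation step under discussion — report-noisy-max over a teacher vote histogram, whose data-dependent analysis Theorem 3 of the original PATE paper tightens — can be sketched as follows. This is a minimal illustration of the mechanism as defined by Papernot et al. (2016), which adds Lap(1/γ) noise to each vote count; the privacy cost per query then depends on the chosen (data-independent or data-dependent) analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_max(votes, gamma):
    # Add independent Lap(1/gamma) noise to each teacher vote count and
    # release the argmax label; larger gamma means less noise and a higher
    # per-query privacy cost under the accompanying analysis.
    noisy = np.asarray(votes, dtype=float) + rng.laplace(scale=1.0 / gamma, size=len(votes))
    return int(np.argmax(noisy))
```

The data-dependent bound exploits the shape of the vote histogram (a large margin between the top two counts yields a smaller privacy loss), which is the baseline the reviewer asks to compare against the IBP-based bounds.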
SCENT: Robust Spatiotemporal Learning for Continuous Scientific Data via Scalable Conditioned Neural Fields
Accept (poster)
Summary: This paper presents SCENT, which is a scalable and continuity-informed spatiotemporal learning framework designed to model complex scientific data. Using a transformer-based architecture with learnable queries and sparse attention, it unifies interpolation, reconstruction, and forecasting. Extensive experiments demonstrate its strong performance across various datasets, offering superior scalability and robustness against sparse and noisy data. ## Update after rebuttal: The authors addressed the main concerns I raised, including the addition of RainNet experiments and comparisons with STFNN. While full statistical significance testing is still limited, I appreciate the substantial effort made to strengthen empirical validation. I have updated my score to 3 accordingly. Claims And Evidence: The paper claims that SCENT is scalable and computationally efficient. However, while the model appears manageable for small values of M, its efficiency for large-scale datasets remains uncertain. How is M determined in practice? Different values of M may significantly impact the cross-attention mechanism, yet there are no ablation studies provided to support this claim. Adding such an analysis would strengthen the evidence for scalability and efficiency. Methods And Evaluation Criteria: The study does not include additional real-world datasets across different domains to better demonstrate the model’s generalizability. For example, RainNet [1]. [1] RainNet v1.0: a convolutional neural network for radar-based precipitation nowcasting. Geoscientific Model Development. 2020. Theoretical Claims: The paper does not introduce new theoretical results. Experimental Designs Or Analyses: 1. The paper does not include several closely related methods. Specifically, it lacks comparisons with ST grid forecasting models, such as AutoST [2] and ST-ResNet [3], as well as ST field-based methods, such as STFNN [4]. 2. 
The reported experimental results (Tables 1 and 2) lack standard deviations and statistical significance markers (e.g., confidence intervals). References: [2] AutoST: Efficient neural architecture search for spatio-temporal prediction. KDD. 2020. [3] Deep Spatio-Temporal Residual Networks for Citywide Crowd Flows Prediction. AAAI 2017. [4] Spatio-Temporal Field Neural Networks for Air Quality Inference. IJCAI 2024. Supplementary Material: I noticed that the related work is included in the appendix. I recommend incorporating it into the main text to ensure a more comprehensive and cohesive presentation. Relation To Broader Scientific Literature: The key contributions of this paper align with and extend multiple areas of ST learning, INRs, and scalable deep learning models. Compared to FNOs, it generalizes better to real-world data but needs validation on diverse domains like RainNet and a comparison with STFNN. Essential References Not Discussed: STFNN [4] also utilizes INRs and focuses on unified spatiotemporal modeling. A direct comparison between SCENT and STFNN would help clarify their differences and contributions. Other Strengths And Weaknesses: Strengths: The paper includes clear visualization. Weaknesses: See previous section. Other Comments Or Suggestions: I noticed that the related work is included in the appendix. I recommend incorporating it into the main text to ensure a more comprehensive and cohesive presentation. Questions For Authors: See previous section. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the insightful discussions, suggested papers, and constructive comments. It was a pleasant surprise to find substantial similarities as well as subtle yet important distinctions between SCENT, STFNN, and the referenced works. We found STFNN's inference mechanism and gradient-based formulation particularly innovative, and have therefore reached out to the authors for further exploration. As suggested, we will move the related work section to the main manuscript and provide more in-depth discussions on spatiotemporal forecasting methods, such as AutoST and ST-ResNet. $\ $ **1. Comparing SCENT and STFNN** **Motivation and Similarities**. SCENT aims to develop a flexible model capable of generating continuous spatiotemporal fields from sparse observations, addressing the common scenario in scientific domains where sensor coverage is limited. Similarly, STFNN effectively infers unobserved regions using its sophisticated Pyramidal Inference. Both methods assume continuous fields and handle irregular, sparse, and noisy observations robustly. **Fundamental Differences in Generalization**. However, SCENT explicitly models a family of functions, acting as a generalizable implicit neural representation (GINR). It can represent various spatiotemporal scenarios conditioned on input data from different contexts. In contrast, STFNN models one specific spatiotemporal field (e.g., PM2.5 over China), allowing interpolation and extrapolation strictly within that field, rather than generalizing across distinct fields. **Architectural Design**. Both models utilize INRs but differ in structure and intent. SCENT employs an encoder–decoder architecture with attention-based querying to generalize across tasks. STFNN is more akin to a per-task INR, enhanced with gradient-based modeling and local graph-based correction to capture detailed local variations within a single domain. $\ $ **2. 
RainNet: Rainfall Nowcasting** We appreciate the suggestion. We have strengthened our experimental evaluation by incorporating RainNet and the new nowcasting dataset. **Dataset & Task**. We use the RY product from the German Weather Service (DWD), a quality-controlled rainfall composite at 1km~$\times$~1km spatial and 5-minute temporal resolution. Data from 2012-2016 are used for training, and 2017 for testing. The task is to predict rainfall fields for future timestamps $t \in [5, 10, \dots, 60]$ minutes, given four historical fields. Following the official preprocessing, we use 173,345 / 43,456 training / test instances, and downsample the original 900~$\times$~900 resolution to 64~$\times$~64 for faster training during the rebuttal period. SCENT is trained with a forecast horizon ($t_h$ = 60). **Results.** We report root mean-squared error (RMSE [mm$~h^{-1}$]) by lead time (minutes) in the table below. SCENT consistently outperforms RainNet across all lead times, reducing RMSE by approximately 30%. We attribute this improvement in part to SCENT's ability to train with variable target times $t_o$, which serves as a form of data augmentation. 

| Method | #Params | **5** | **10** | **15** | **20** | **25** | **30** | **35** | **40** | **45** | **50** | **55** | **60** |
|--|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| RainNet | 1.93M | 0.445 | 0.431 | 0.439 | 0.460 | 0.491 | 0.526 | 0.559 | 0.589 | 0.612 | 0.628 | 0.639 | 0.646 |
| SCENT | 3.87M | 0.319 | 0.341 | 0.354 | 0.366 | 0.377 | 0.389 | 0.399 | 0.409 | 0.417 | 0.425 | 0.434 | 0.440 |
| Improvement | - | 28.3% | 20.9% | 19.4% | 20.4% | 26.0% | 28.6% | 30.6% | 31.9% | 31.9% | 32.3% | 32.1% | 31.9% |

$\ $ **3. Choosing M** We agree with the reviewer that selecting the optimal number of latent tokens $M$ is important. While performance generally improves with larger $M$, we observe diminishing returns beyond $M=192$ on the S5 dataset—a saturation pattern also seen in Perceiver IO. 
This suggests that excessively large $M$ offers limited benefit while incurring higher computational cost, and a moderate $M$ can strike a better balance between performance and efficiency. 

| **M** | 32 | 64 | 96 | 128 | 192 | 256 |
|--|:--:|:--:|:--:|:--:|:--:|:--:|
| **Rel-MSE** | 0.460 | 0.433 | $\underline{0.422}$ | 0.425 | **0.400** | 0.427 |

$\ $ **4. On statistical testing** We appreciate the reviewer’s suggestion. While full significance testing is infeasible due to time constraints, we provide evidence of robustness through multiple runs: for example, our small model (Rel-MSE = 0.467) shows a standard deviation of only 0.0042 across five seeds. Our evaluation also spans multiple benchmarks and a range of lead times, demonstrating consistent performance across diverse tasks and forecasting horizons. We acknowledge the value of statistical testing and will consider incorporating it in a future revision. $\ $ **We hope our responses address your concerns, and we’d be grateful if you’d consider updating your score accordingly.**
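The M-token bottleneck debated above can be sketched as a single cross-attention step whose cost scales as O(N·M) rather than O(N²) in the number of observations N. This is a toy NumPy rendering of the Perceiver-style pattern, not SCENT's actual implementation.

```python
import numpy as np

def latent_cross_attention(inputs, latents):
    # inputs: (N, d) observations; latents: (M, d) learnable queries, M << N.
    # The score matrix is (M, N), so compute and memory grow as N*M, not N*N.
    d = latents.shape[-1]
    scores = latents @ inputs.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the N inputs
    return weights @ inputs  # (M, d) compressed latent representation
```

This is why the choice of M trades accuracy against cost: each extra latent token adds a row of attention over all N inputs, consistent with the saturation the rebuttal reports beyond M = 192.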
Summary: The authors introduce a new model called SCENT for spatiotemporal modelling such as for differential equations like Navier-stokes. This model can take irregular input data and generate outputs at arbitrary locations and times, and so is capable of forecasting and spatial interpolation. This model has an encoder - processor - decoder architecture using latent tokens in the processing layers. The authors add several new components to this architecture such as sparse self attention layers during encoding and decoding, and providing the required output time during input and output, limiting the number of recurrent steps required for forecasting. This model is compared to many other current methods on simulated and real world data and shows good performance. The authors show that this model scales well with model and dataset size. Claims And Evidence: Yes the claims are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed datasets and evaluation metrics make sense, following those used in previous work. (e.g. "Fourier Neural Operator for Parametric Partial Differential Equations" (ICLR 2021) and "AirDelhi: Fine-Grained Spatio-Temporal Particulate Matter Dataset From Delhi For ML based Modeling" (NeurIPS 2023)) Theoretical Claims: Only real theoretical claims are the big O analyses in the appendices, which look right to me. Experimental Designs Or Analyses: I investigated all of the experimental designs and analyses. I see from appendix C and D that some hyperparameter tuning was done for SCENT for the various different datasets / tasks. I would be interested if the same level of tuning / searching was performed for the various other approaches that are compared against. If not then some of the comparisons could be a bit unfair. Supplementary Material: I reviewed the appendices. Relation To Broader Scientific Literature: The SCENT architecture is quite similar to other approaches in the literature with a few key additions. 
The encoder - processor - decoder architecture with latent tokens is seen in CORAL ("Operator learning with neural fields: Tackling pdes on general geometries" (NeurIPS 2023)), AROMA ("AROMA: Preserving Spatial Structure for Latent PDE Modeling with Local Neural Fields" (NeurIPS 2024)), IPOT ("Inducing Point Operator Transformer: A Flexible and Scalable Architecture for Solving PDEs" (AAAI 2024)) for example. The sparse attention layers in the encoder and decoder do not seem to be present in earlier works. “WUF” approach to limit recurrent time marching steps seems novel as well and could be effective in limiting the accumulation of errors. The paper compares their results to a wide variety of works in the area, which is helpful and is sometimes not done in previous work, and must have required significant effort. They also make use of several previous datasets and introduce new datasets to provide a good comparison between these approaches. Essential References Not Discussed: The FNO ("Fourier Neural Operator for Parametric Partial Differential Equations" (ICLR 2021)) seems to be one of the most effective methods compared to SCENT even though it requires regular gridded data. The authors adapt this method to work with the less regular data that is used in this work by padding out the grid, and find that it is often quite competitive with their approach. However there have been works that build on FNO to remove this dependency to gridded data (Geo-FNO from "Fourier Neural Operator with Learned Deformations for PDEs on General Geometries") and additionally provide further improvements (F-FNO from "Factorized Fourier Neural Operators" (ICLR 2023)). It would have been enlightening to cite and compare to one or both of these works or discuss why they are not suitable for the datasets used in this work. 
Other Strengths And Weaknesses: I appreciate the effort the authors have gone through to compare to a wide variety of previous work in this area, and test them on both real world and synthetic datasets. While the idea of essentially providing the required timestep at input and during decoding seems simple, it does not seem to be done in previous work and this could be quite helpful in preventing the accumulation of errors during recurrent processing steps usually required for forecasting. Vaughan et al. (2024) “Aardvark Weather: End-to-end data-driven weather prediction”, used for weather forecasting attempts to avoid this by training many different models, each for a different timestep, which feels less efficient than the approach used in this work. The SCENT approach does show generally very good performance but in some cases it is only marginally better than alternative methods, and I wonder if these other approaches received the same level of hyperparameter optimization that is seen in the appendices for SCENT for each of the different datasets / tasks. If not then perhaps they would outperform SCENT with equivalent tuning. I would have liked to see a comparison to newer FNO approaches considering that the basic version of this with a probably suboptimal approach to non regular data showed good performance. Other Comments Or Suggestions: Some very minor issues I noticed: Line 182 - right - latex formatting error Line 326 - left - surely reconstruction is when delta t = 0? Line 661 - treatSeach -> treats each Figure 2 + table 3 - inconsistency in name of context encoding / embedding network Table 4 - I am not sure why CORAL is more space and time continuous than the other approaches - it is still based on recursively using the processor module to step forward in time like the other approaches. OFormer and AROMA seem to be able to do basically the same things as CORAL here. Questions For Authors: 1. 
How did you select hyperparameters for the approaches you compared to? And what was your process for selecting these for your approach? Answering this will give me more confidence in how your approach performs compared to alternatives. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **1. Hyperparameters** We appreciate the reviewer’s thoughtful questions regarding the extent of hyperparameter tuning conducted for SCENT in comparison to the baseline methods. This is indeed a crucial aspect when evaluating model performance fairly across methods. We included detailed hyperparameters in the appendix for transparency and reproducibility, and will release the code upon acceptance. **Inherited from Literature.** We clarify that we did not extensively tune hyperparameters for baseline methods, but instead adopted configurations from prior work. Many baselines, including IPOT and Perceiver-IO, share architectural similarities with our encoder-processor-decoder design, allowing us to inherit most of their recommended settings. For instance, Appendix Table C largely follows IPOT, with only minor changes such as warmup and total training steps. **Dataset Dependence.** Dataset-specific parameters (e.g., optimizer profiles, embedding configurations) reflect widely accepted strategies from prior literature, especially for common datasets like NS-3–5. These datasets exhibit varied dynamics but benefit from established training practices—such as long schedules for PDE dynamics and lower sensitivity to learning rate. As shown in Appendix Table D, most hyperparameters are consistent across datasets. We deliberately avoided aggressive tuning to promote generalizability. For example, embedding dimensions result from simple design choices—e.g., combining linear projections and Fourier features—rather than extensive tuning. **Baseline Hyperparameters.** When implementing the baselines, we made every effort to adhere to published or well-established hyperparameters. We first ensured each baseline was reproducible on its original dataset, then applied either the reported settings or those that worked well for SCENT—whichever yielded better performance. 
Given the complexity of our proposed data, which reflects realistic and large-scale scientific scenarios, we devoted substantial effort to adapting the baselines accordingly.

**2. Newer FNO approaches**

We thank the reviewer for highlighting recent FNO variants, including Geo-FNO and F-FNO. These works offer notable improvements, particularly in handling irregular geometries and sampling patterns. While we were not able to implement both approaches in full due to time constraints, we successfully implemented F-FNO on the S5 dataset and included those results in our evaluation. We evaluated two configurations of F-FNO with different parameter sizes (m1, m2). Notably, even the smaller variant (1M parameters) performs similarly to the standard FNO, demonstrating the efficiency of the architecture. When scaled to match the parameter count of SCENT and FNO (7.4M), F-FNO achieves significantly improved performance, approaching that of SCENT. We believe this provides meaningful insight into how newer FNO variants perform in challenging, non-grid-based scenarios.

| Metric | SCENT | FNO | F-FNO_m1 | F-FNO_m2 |
|---|---|---|---|---|
| # Params | 7.4M | 7.4M | 0.96M | 7.4M |
| Rel-MSE | **0.326** | 0.377 | 0.396 | $\underline{0.347}$ |

**3. On Vaughan et al. (2024) [1]**

We appreciate the reviewer’s reference to the Aardvark Weather system. The idea of sequentially trained processors extending the recursive rollout strategy is particularly interesting. By allowing each processor to specialize in longer lead times, the approach may indeed improve inference stability and long-range forecasting performance. While the sequential training of multiple models could introduce computational overhead, the modular nature of the design offers flexibility that might benefit certain applications. Additionally, although the current formulation appears to target fixed-step forecasting, integrating mechanisms for continuous-time inference could be an exciting direction for future work.
We see this as a promising and complementary line of research and welcome further discussion on how such ideas might be integrated with or compared to SCENT’s joint modeling capabilities. **4. On CORAL** As the reviewer correctly notes, OFormer, CORAL, and AROMA are all trained using recursive forecasting with a fixed time step. However, CORAL distinguishes itself by introducing a Neural ODE solver, $g_{\phi}$, as the autoregressive processor operating in the latent z-code space. As a result, CORAL adopts a two-stage training process: the first trains the autoencoder, and the second trains the Neural ODE for forecasting. The key advantage of this approach is that the learned ODE can be evaluated at any arbitrary time point, enabling CORAL to capture both spatial and temporal continuity. **We appreciate your thoughtful review and hope our clarifications address your questions.** [1] Allen, Anna, et al. "End-to-end data-driven weather prediction." *Nature* (2025): 1–3. --- Rebuttal Comment 1.1: Comment: Thank you for your informative response. You have clarified most of the issues I had, and I appreciate the comparison to newer FNO approaches. I will increase my score to 4 in response.
Summary: This paper introduces SCENT, a framework for spatiotemporal learning using Scalable Conditioned Neural Fields (CNFs). The model is built on a Transformer-based encoder-processor-decoder architecture, incorporating learnable queries and a query-wise cross-attention mechanism to capture multi-scale dependencies. A sparse attention mechanism is used to improve scalability. Claims And Evidence: Some claims lack sufficient supporting evidence: 1. Sparse Attention Justification – The paper claims sparse attention improves scalability, but does not compare against other sparse attention models (e.g., Longformer, Linformer). 2. Fourier Features Impact – SCENT uses Fourier features, but the paper does not analyze how different frequency bands affect prediction quality. Methods And Evaluation Criteria: 1. Simulated datasets (Navier-Stokes, synthetic sensor data) are relevant, but real-world evaluation is limited (only AirDelhi). More diverse real-world datasets are needed. Theoretical Claims: The paper does not present formal theoretical proofs, but it makes implicit theoretical claims about the benefits of its architecture. Experimental Designs Or Analyses: The experimental design is generally sound, but there are some limitations: 1. The paper does not compare SCENT’s sparse attention to other sparse attention models (e.g., Longformer, Linformer), making it unclear how much it contributes to performance gains. 2. The paper does not analyze the impact of different frequency bands on spatial encoding, leaving a gap in understanding its effectiveness. Supplementary Material: Yes, I reviewed the supplementary material Relation To Broader Scientific Literature: The paper builds on spatiotemporal learning, implicit neural representations (INRs), and conditioned neural fields (CNFs) but lacks engagement with key works. It does not compare SCENT’s sparse attention to models like Longformer, Linformer, nor analyze Fourier features’ impact. 
Essential References Not Discussed: The paper does not cite or compare SCENT’s sparse attention to established models like Longformer, Linformer, and others, which are essential for evaluating its efficiency. Other Strengths And Weaknesses: Pros: 1. The encoder-processor-decoder architecture is well-motivated, and the use of learnable queries and cross-attention improves scalability. 2. SCENT outperforms baselines like FNO, AROMA, CORAL, and OFormer in most scenarios. Cons: 1. SCENT uses Fourier features for spatial encoding, but the impact of different frequency bands on prediction quality is not analyzed. 2. Most evaluations focus on simulated datasets (Navier-Stokes, synthetic sensor data). The AirDelhi dataset is the only real-world test, and results are not significantly better than AROMA. 3. The motivation and implementation details of the sparse attention mechanism in SCENT are unclear. The paper does not thoroughly explain why sparse attention is necessary beyond scalability and how it specifically enhances spatiotemporal learning. Additionally, it lacks a direct comparison with other sparse attention models, such as Longformer or Linformer, which could provide insight into the efficiency and effectiveness of SCENT’s attention mechanism. Other Comments Or Suggestions: Please refer to weaknesses. Questions For Authors: 1. How do different frequency bands in Fourier feature encoding affect SCENT’s performance? An ablation study could strengthen the justification for their use. 2. Why is SCENT’s sparse attention mechanism chosen over existing methods like Longformer, or Linformer? 3. SCENT performs well on simulated datasets, but its AirDelhi results are not significantly better than AROMA. Can the model generalize effectively to other real-world datasets? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We truly appreciate your valuable comments. We agree with your concerns and suggestions, and hence provide below our thoughts and additional experimental results for each of the questions.

**1. On Fourier features**

Although Fourier features are well established, their formulation can vary. Here, we describe our approach and present additional ablation studies. Let $R$ denote the maximum frequency resolution and $L$ the number of frequency bands. For the $i$th band, we define the frequency as $f_i = \frac{iR}{L}, \quad i=1,2,\ldots, L.$ Then, the positional encoding for a scalar $x$ is given by the concatenation of sine and cosine functions: $\gamma(x)=\operatorname{concat}_{i=1}^L \Big[\sin\big(2\pi f_i x\big), \cos\big(2\pi f_i x\big)\Big].$ We tune $R$ and $L$ to match the data’s inherent frequency characteristics: $R$ is set high enough to capture rapid variations, while $L$ is chosen to balance detail with computational efficiency. The following ablation experiments were conducted using a baseline SCENT model on dataset S5 with $M=32$ learnable queries, latent dimension $l=128$, and a batch size of 128.

| # bands ($L$) | 4 | 6 | 8 | 12 | 16 |
|:---:|:---:|:---:|:---:|:---:|:---:|
| **Rel-MSE** | $\underline{0.448}$ | **0.446** | 0.466 | 0.471 | 0.453 |

| Max resolution ($R$) | 5 | 10 | 20 | 32 |
|:---:|:---:|:---:|:---:|:---:|
| **Rel-MSE** | 0.514 | 0.573 | $\underline{0.471}$ | **0.443** |

The results show that while the number of frequency bands $L$ has little impact on performance, a higher maximum resolution $R$ clearly improves it. Values were not explored beyond the Nyquist criterion (i.e., $R=32$).

**2. SCENT’s sparse attention mechanism**

We appreciate the reviewer's insightful question. Since scientific data typically appear as smooth and continuous signals, attending to all tokens may become redundant and inefficient for high sampling rates.
In our approach, we employ random sparse attention for both the Context Embedding Network and Calibration Network (see Fig. 2), where each token attends to a random subset of $p$ tokens, reducing the complexity from $O(n^2)$ to $O(pn)$ with $p \ll n$. This simple random sparse attention mechanism has demonstrated strong empirical performance. We also acknowledge the extensive literature on efficient attention mechanisms – such as Longformer and Linformer – which may work equally well; indeed, additional experiments with these methods (with roughly matched parameter sizes for fair comparison) reveal that the Longformer (replacing sparse attentions within SCENT) outperforms the others, suggesting promising avenues for future research.

| Method | Big-O Complexity | Variables/Notes | Rel-MSE |
|--|--|--|--|
| Random Sparse Attention | $O(n \cdot p)$ | $p$: # tokens each query attends to | 0.471 |
| SCENT+Linformer [1] | $O(n \cdot k)$ | $k$: projected dimension ($k \ll n$, constant) | 0.496 |
| SCENT+Longformer [2] | $O(n \cdot (w+g))$ | $w$: sliding window size; $g$: # global tokens | **0.440** |

**3. Additional real-world datasets**

Thank you for raising concerns over the limited real-world data employed. Here we explore additional spatiotemporal datasets.

(i) **Rainfall Nowcasting**: Using DWD radar rainfall data, SCENT nowcasts lead times from 5 to 60 minutes based on four consecutive 5-minute intervals. Compared to the CNN-based RainNet baseline, SCENT reduces the root-mean-squared error (RMSE) by approximately 30\%. The table summarizes the performance metrics (RMSE in mm$~h^{-1}$) and the number of parameters for each method.
| Method | # Params | 5 | 10 | 30 | 60 |
|--|:--:|:--:|:--:|:--:|:--:|
| RainNet [3] | 1.93M | 0.445 | 0.431 | 0.526 | 0.646 |
| SCENT | 3.87M | **0.319** | **0.341** | **0.389** | **0.440** |

Columns: lead time (mins); metric: RMSE (mm$~h^{-1}$).

(ii) **Kuroshio Path Prediction**: Using 50 years of CORA [4] reanalysis data, SCENT predicts the Kuroshio current's latitude over a 120-day horizon (with data from 1958–1997 for training and 1998–2007 for testing). SCENT demonstrates stable performance beyond a 50-day lead time. The table below shows that SCENT achieves lower RMSE (in degrees) compared to an LSTM baseline, particularly for longer forecast horizons.

| Model | 10 | 30 | 60 | 120 |
|--|:--:|:--:|:--:|:--:|
| LSTM | 0.403 | 0.511 | 0.591 | 0.647 |
| SCENT | **0.354** | **0.360** | **0.391** | **0.430** |

Columns: lead time (days); metric: RMSE (°).

**We hope our responses have addressed your key concerns and would be grateful if you would consider a higher score in light of these clarifications.**

[1] Wang, S, et al. "Linformer: Self-attention with linear complexity", *arXiv* 2020.
[2] Beltagy, I, et al. "Longformer: The long-document transformer", *arXiv* 2020.
[3] Ayzel, G, et al. "RainNet v1.0: a convolutional neural network for radar-based precipitation nowcasting", *GMD* (2020).
[4] Han, G. et al. "A new version of regional ocean reanalysis for coastal waters of China and adjacent seas", *Adv. Atmos. Sci.* (2013).
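As an editorial illustration of point 1 above: the encoding $\gamma(x)$ with $f_i = iR/L$ could be sketched in NumPy as follows (illustrative only, not the authors' implementation; the function name `fourier_features` is ours):

```python
import numpy as np

def fourier_features(x, R=32, L=8):
    """Positional encoding gamma(x): for each band i = 1..L with frequency
    f_i = i*R/L, concatenate sin(2*pi*f_i*x) and cos(2*pi*f_i*x)."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    freqs = np.arange(1, L + 1) * R / L                  # f_i = iR/L
    angles = 2.0 * np.pi * x[:, None] * freqs[None, :]   # shape (n, L)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)  # (n, 2L)
```

For $n$ scalar coordinates this yields an $(n, 2L)$ feature matrix; per the ablation above, the maximum resolution $R$ matters more than the band count $L$, and $R$ is capped at the Nyquist limit of the sampling grid.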
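And for point 2: the random sparse attention, where each query attends to a random subset of $p$ of the $n$ tokens for $O(pn)$ cost, might look like the following loop-based NumPy sketch. This is a hedged reconstruction, not the authors' code; a practical implementation would batch the gathers rather than loop per query.

```python
import numpy as np

def random_sparse_attention(Q, K, V, p, seed=0):
    """Each of the n queries attends to a random subset of p key/value
    tokens, reducing attention cost from O(n^2) to O(p*n)."""
    rng = np.random.default_rng(seed)
    n, d = Q.shape
    out = np.zeros((n, V.shape[1]))
    for i in range(n):
        idx = rng.choice(n, size=p, replace=False)   # random subset per query
        scores = Q[i] @ K[idx].T / np.sqrt(d)        # (p,) scaled dot products
        w = np.exp(scores - scores.max())
        w /= w.sum()                                 # softmax over the subset
        out[i] = w @ V[idx]
    return out
```

Note that with $p = n$ the random subset becomes a permutation of all tokens and the sketch reduces exactly to dense softmax attention.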
Summary: This paper addresses common issues in scientific data, such as sparsity, noise, and multi-scale problems, by proposing a method called SCENT (Scalable Conditioned Neural Field) that can handle various spatio-temporal learning tasks like interpolation, reconstruction, and prediction. The paper is well-structured with clear motivation, and extensive experiments on both synthetic and real-world data demonstrate the method's effectiveness and scalability. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: yes Experimental Designs Or Analyses: yes Supplementary Material: yes Relation To Broader Scientific Literature: yes Essential References Not Discussed: yes Other Strengths And Weaknesses: Weaknesses: I think the research scenario is too idealized. It could be worth considering real-world scenarios, like observational data from the Kuroshio, for example. Other Comments Or Suggestions: No Questions For Authors: No Ethical Review Flag: Flag this paper for an ethics review. Ethics Expertise Needed: ['Other expertise'] Ethical Review Concerns: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's suggestion. We agree that the Kuroshio current provides an excellent yet challenging testbed for evaluating SCENT. We use 50‐year records from the China Ocean Reanalysis (CORA) [1] as our benchmark and follow the data processing guidelines established by Wu et al. (2023) [2]. **Background.** The Kuroshio current, originating from the North Equatorial Current (NEC) and flowing northward along the eastern side of the Philippine Islands, is the world's second-largest warm current. Accurately predicting its path is crucial because its variations significantly affect the exchange of water masses and heat between the North Pacific subtropical and subarctic circulations. CORA provides daily oceanographic reanalysis data for the Kuroshio current spanning 50 years (January 1958-December 2007). **Implementation.** We adopt the four baseline methods from Wu et al. (2023) for comparison. We perform a 120-day prediction experiment, using data from the first 40 years (1958-1997) for training and the final 10 years (1998-2007) for testing. During training, SCENT is used to predict the Kuroshio path in terms of latitude (ranging from 29°N to 36°N) for various forecast horizons ($t_o$). In the test phase, we measure the root mean-squared error (RMSE, in degrees) against the true latitude for every $t_o \in [1, 120]$ days, and we present comparisons at 10-day intervals. **Results.** Comparison analyses are shown below. We categorize baseline methods into those employing feature engineering (FE) and those that are purely neural network–based. LSTM is used as the baseline, and we also compare FE-enhanced LSTM variants, where FE includes empirical orthogonal functions (EOF) and complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN). The CEEMDAN-based variants perform competitively across all lead times, while SCENT exhibits second-best performance at and beyond a 50-day lead time. 
The relative strength of the FE methods may be due to the limited data scale—with 14,610 instances in the training split and 3,652 in the test split. However, our scalability study (Fig. 4) suggests that SCENT could outperform the FE methods when larger datasets become available. Additionally, SCENT maintains stable performance even as the lead time increases. This is in stark contrast to the other baselines, which experience significantly sharper performance degradation with longer lead times.

| Type | Model | 10 | 20 | 30 | 40 | 50 | 60 | 70 | 80 | 90 | 100 | 110 | 120 |
|--|--|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| FE+NN | EOF_LSTM | 0.391 | 0.448 | 0.490 | 0.518 | 0.546 | 0.570 | 0.592 | 0.601 | 0.612 | 0.619 | 0.627 | 0.628 |
| FE+NN | CEEMDAN_LSTM | $\underline{0.243}$ | $\underline{0.294}$ | $\underline{0.328}$ | $\underline{0.357}$ | 0.380 | 0.399 | 0.417 | 0.432 | 0.446 | 0.464 | 0.481 | 0.493 |
| FE+NN | EOF_SEEMDAN_LSTM | **0.176** | **0.229** | **0.262** | **0.279** | **0.303** | **0.325** | **0.337** | **0.346** | **0.357** | **0.371** | **0.386** | **0.399** |
| Pure NN | LSTM | 0.403 | 0.468 | 0.511 | 0.546 | 0.576 | 0.591 | 0.605 | 0.617 | 0.628 | 0.643 | 0.646 | 0.647 |
| Pure NN | SCENT | 0.354 | 0.348 | 0.360 | 0.365 | $\underline{0.374}$ | $\underline{0.391}$ | $\underline{0.390}$ | $\underline{0.408}$ | $\underline{0.410}$ | $\underline{0.419}$ | $\underline{0.415}$ | $\underline{0.430}$ |

Columns: lead time (days); metric: RMSE (°).

**Thank you again for your time and feedback. We respectfully ask you to consider a revised score if you feel our responses address your concerns.**

[1] Han, G. et al. "A new version of regional ocean reanalysis for coastal waters of China and adjacent seas", *Adv. Atmos. Sci.* 30, 974–982 (2013).
[2] Wu, X. et al. "Deep Learning–Based Prediction of Kuroshio Path South of Japan", *J. Atmos. Oceanic Tech.* 40.2 (2023).
How to Train Your Multi-Exit Model? Analyzing the Impact of Training Strategies
Accept (poster)
Summary: Multi-exit neural network model is a model that can exit at different layers. It remains a challenge to find an optimal way to increase accuracy of exiting at an earlier layer while not dropping accuracy of the last layer. This paper focuses on one angle: what is the best way to train a multi-exit model. The paper found that the best method to get the best of both worlds is the Mixed approach: train the model till the end without early exit, then add early exit classification heads, referred to as internal classifier (IC), and train the full model with ICs. The paper starts with analyzing the gradient contribution of each early exit, as well as the rank of activations and mode connectivity and mutual information to reach insights to support the decision to use Mixed training. The paper compared the Mixed approach with the Disjoint approach (i.e., train model first, then add ICs, then train ICs while freezing model) and the Joint approach (i.e., train from scratch both model with ICs), and found Mixed being the best trade off for accuracies of earlier layers and last layer. It also compared with different variants of loss and gradient scales and found that Mixed could be better or an alternative to such methods. Claims And Evidence: - The main claim that Mixed training is better for earlier and last layer accuracies than Joint training has been backed by results on training various architectures, modalities and datasets Methods And Evaluation Criteria: - Authors trained on different architectures (transformers, ResNets, ...), modalities (vision and text) and datasets (CIFAR, ImageNet, STSB, etc.) 
Theoretical Claims: The authors didn't really make theoretical claims but I have some comments about theoretical explanations of some analysis: - Line 303: I suggest to re-phrase "representation of easy samples is not complex" to "representation of a subset of the training samples is not complex", and re-word accordingly subsequent references to easy samples..unless we provide definition of "easy" samples (e.g., those that have low loss) and show that samples that meet such definition do require processing with fewer layers as the paragraph claims - Line 315: Similarly, I suggest to re-phrase "easy datasets where more samples exit at earlier layers." with "subsets of datasets where more samples exit at earlier layers." And similar for Line 319 - Lines 307 to 308: "To describe it in terms of mutual information, the network does not need to reduce the complexity of X to fit the internal representation Z," I am a bit confused. If X is the input, how can the network reduce or change its complexity? Should it only be able to change the internal representation Z, not the input? Experimental Designs Or Analyses: Experiments seemed to be fine, but I have a comment on one of the analyses: Figure 5: According to Figure 2 of this paper ( https://arxiv.org/abs/1909.01380 ), the mutual information between a model's input and intermediate activation should decrease monotonically across layers, but this is not the case in Figure 5 of this paper. Do authors have a reason why? Moreover, according to Data Processing Inequality concept ( https://en.wikipedia.org/wiki/Data_processing_inequality ) after the processing of each layer, information about input X in a layer's output should either reduce or stay the same, and it can't increase. Hence, my understanding is that mutual information entropy should monotonically decrease across layers. Supplementary Material: Yes. I have read all of the Appendix. 
Relation To Broader Scientific Literature: While most papers in early exit focus on a specific technique (e.g., loss scaling) to increase accuracies of earlier and last layers, they take it for granted whether they train from scratch or finetune, and overlook the implication of this decision. This paper focused on that overlooked aspect, on whether it is better to train from scratch, or continually train, or perform mixed training. Essential References Not Discussed: Not really. I would say that since LLMs (and VLMs) are becoming popular, it would have been useful to discuss some of the results from papers that explored early exit loss for LLMs and (as I suggest in another part of this review) to test the findings on a small LLM: - EMNLP 2023, "Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding", Sangmin Bae, Jongwoo Ko, Hwanjun Song, Se-Young Yun - ACL 2024, "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", Mostafa Elhoushi, Akshat Shrivastava, Diana Liskovich, Basil Hosmer, Bram Wasti, Liangzhen Lai, Anas Mahmoud, Bilge Acun, Saurabh Agarwal, Ahmed Roman, Ahmed A Aly, Beidi Chen, Carole-Jean Wu Other Strengths And Weaknesses: Strengths: - "To the best of [the authors'] knowledge, this work is the first to directly compare models trained under different regimes and to provide a detailed analysis of the training dynamics of multi-exit models." If that's true, then publishing this paper will be useful for the community to have a holistic understanding of early exit. - Tested on different architectures (transformers, ResNets, ...), modalities (vision and text) and datasets (CIFAR, ImageNet, STSB, etc.) - Useful insights such as "the earlier layers are characterized by higher frequency features while later layers learn low frequency elements. 
This regularity is disrupted in the case of early exit architectures as the backbone network is given additional classifiers that are placed in earlier parts of the network." and "As placements become less frequent, the difference between joint and mixed regimes becomes less pronounced." Weaknesses: - While the paper claims that Mixed training is an alternative to Joint training with loss or gradient scaling, it could be argued that the latter could be better as it requires less training time compared to Mixed training, as we have to train only once. - The claims or benefits of the findings are quite incremental Other Comments Or Suggestions: - For experiments, I recommend adding a more commonly used architecture like GPT2 for LLMs - Different Early Exit Adapters: What about using an early-exit adapter sub-network like SCAN [1]? In the case of ViT, how about adding 1 or 2 transformer layers before the classification head in the early exit adapter sub-network? Formatting / Typos: - Authors have forgotten to change the mini-title of pages 2 and onwards. It is still at "Submission and Formatting Instructions for ICML 2025". - Figure 2: Please use same axes limits and steps for Figures 2a and 2b to make it easier to compare. - in several parts of the paper, Latex quotes need to be fixed (e.g., line 245) - in more than one place in the paper (e.g., line 276), the word "Testset" is used while I think it should be "Test set" [1] NeurIPS 2019, "SCAN: A Scalable Neural Networks Framework Towards Compact and Efficient Models", Linfeng Zhang, Zhanhong Tan, Jiebo Song, Jingwei Chen, Chenglong Bao, Kaisheng Ma Questions For Authors: - Figure 3: I didn't understand how we can infer from the plots that "Disjoint and mixed regimes produce similar models, while the model trained in joint regime lies in a different basin" - Table 3: which model are the results for?
- Table 6: Why is the difference in accuracy between 25% and 100% compute only 1.02% for Mixed 1L here, while in Table 3 the difference is almost 30%? - Similarly in Table 7: Again, the differences in accuracy between 25% and 100% compute are around 1% or 2% for mixed and joint here. - Line 622: What is the difference between "Mixed-gradual training" and "Alternating training" - Line 666: If theta is the model's weights, what are the numbers x and y? - Table 15: What is PBEE? - Table 16: What is GPF? - Table 17: What is ViT-Entropy? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s thoughtful evaluation of our work and their recognition of its significance. We apologize for the brief answers necessitated by the response length limit. If we have resolved the concerns raised, we would be grateful if the reviewer would consider raising their score accordingly.

> train time passes

We emphasize that the number of phases of training does not have to affect training time. As noted in the manuscript, in all of our experiments we used early stopping. We report the resulting number of training epochs for the Tinyimagenet/ResNet50 experiment: Disjoint: 523 +/- 156; Joint: 1610 +/- 395; Mixed: 1166 +/- 136. In practice the opposite is true: joint training is slower to converge.

> benefits of the findings are quite incremental

In most cases the difference over the joint regime is definitely significant, e.g. 2 percentage points for the ImageNet experiment. For low budgets the difference over disjoint is immense - and we emphasize that the disjoint regime is also in popular use.

> LLMs

Please see the answer to reviewer cDMi.

> mutual information should decrease monotonically across layers

The monotonic decrease shown in Shwartz-Ziv & Tishby (2017) and inferred through the Data Processing Inequality assumes exact calculations. In our work, we estimate mutual information using a Monte Carlo method (Kawaguchi et al., 2023) in high-dimensional spaces. This approximation can introduce estimation noise, which may result in non-monotonic fluctuations in the absolute MI values. However, to further verify the MI impact, we perform an additional experiment for another vision (CIFAR-100) and NLP (BERT-B/Newsgroups) dataset. In this case, the MI has a more decreasing tendency. What we want to emphasize, however, is the relative relationship between regimes present in all the figures, which hints that higher mutual information may be an indicator of better performance of the mixed regime.
[results](https://mega.nz/file/29BwxLCa#cntotXc3eDo7mnqA7KsCHBzq_yiqy905_IJOd0gFBtE)

> Different Early Exit Adapters: ...

As suggested, we ran additional experiments where each head has an additional transformer block:

| Regime | 50% | 75% | 100% |
|-|-|-|-|
| disjoint | 51.49 | 55.39 | 56.99 |
| joint | 57.42 | 59.07 | 59.09 |
| mixed | 58.69 | 60.54 | 60.68 |

Due to the size of this head architecture, the model is not able to meet the 25% budget of the original model, so we omit the first column. We emphasize that the mixed regime performs better, as before.

> Figure 3: ...

When linearly interpolating the weights between the models trained with the disjoint and mixed regimes, we do not encounter a region of high loss. This means they lie in the same optimization basin. In contrast, when interpolating between the joint-regime model and any other model, we do encounter a region of high loss (yellow color).

> Table 6: Why is the difference in accuracy between 25% and 100% compute only 1.02% here, while in Table 3 the difference is almost 30%?

Each dataset and model combination has a different budget-difficulty characteristic (the shape of the FLOPs vs. accuracy curves; see Figure 1). Tables 6 and 7 were computed for the ViT-T architecture on the ImageNette dataset (a 10-class subset of ImageNet, with the same input image size as ImageNet), a combination which turns out to be a relatively easy problem for a model of this size. Table 3 was computed for CIFAR-100/ResNet-34, which turns out to be a relatively harder problem for this architecture.
For comparison, we repeat the experiment from Table 6 for Tinyimagenet below:

|Regime|Head|25%|50%|75%|100%|
|-|-|-|-|-|-|
||1L|26.35|41.69|53.06|56.72|
|disjoint|2L-1024|33.85|47.21|54.97|56.72|
||2L-2048|33.96|45.98|54.74|56.72|
||1L|42.47|53.24|56.02|56.03|
|joint|2L-1024|45.59|55.11|57.73|57.66|
||2L-2048|43.60|53.89|56.94|57.00|
||1L|44.03|57.91|60.42|60.28|
|mixed|2L-1024|44.92|57.11|59.32|59.22|
||2L-2048|44.61|56.94|60.18|60.15|

> …"Mixed-gradual training" and "Alternating training"

"Mixed-gradual" trains the model with n ICs in n **phases**, each phase including a larger number of ICs, starting from the deepest ones. "Alternating" switches **every training step** between training with only the last IC and training with all the ICs.

> Line 666: …x and y?

x and y are scalar coefficients used to perturb the model parameters θ∗ along randomly chosen directions (δ, η) in order to visualize the loss landscape around the trained model (Li et al., 2018).

> What is…?

- PBEE (Patience-Based Early Exit): an EE method that exits a sample once n consecutive ICs return the same class (Zhou et al., 2020).
- GPF (Global Past-Future): an EE method that incorporates hidden states from the preceding and deeper ICs (Liao et al., 2021).
- Entropy: the exit decision is based on the entropy of the prediction probabilities (Teerapittayanon et al., 2016) rather than on the maximum softmax probability.

We sincerely appreciate the reviewer's detailed feedback. We incorporate all suggestions in the current version of our manuscript.

---

Rebuttal Comment 1.1: Comment: I appreciate the authors' comprehensive response. I have read all the reviewers' feedback and the corresponding rebuttals. I am leaning towards keeping my rating as Weak Accept. The analysis is detailed but the findings are incremental, so a solid accept would be difficult in my humble opinion.
A few comments:
- I recommend adding a comment or footnote to the paper explaining why mutual information is not monotonically decreasing, as described in Tishby's paper and implied by the Data Processing Inequality.
- Regarding the authors' response to one of the other reviewers that the proposed mixed approach is "extremely uncommon", I would like to mention that it is used in LLMs, as in [LayerSkip](https://arxiv.org/abs/2404.16710) and [Relaxed Recursive Transformers](https://openreview.net/forum?id=WwpYSOkkCt).

---

Reply to Comment 1.1.1: Comment: We thank the reviewer for their engagement in the discussion process and the valuable suggestions.

> I recommend adding a comment or footnote...

As suggested, we modify our current revision of the paper to explain the non-monotonicity of the mutual information results.

> Regarding the authors' response to one of the other reviewers that the proposed mixed approach is "extremely uncommon", I would like to mention it is used in LLMs

We agree that in LLM contexts, further pre-training on the same dataset is more likely. However, our work specifically targets setups such as image classification. In that response we stated that fine-tuning on the same dataset as used for pre-training is unusual; therefore, it is unlikely that a machine learning practitioner or researcher would unwittingly apply the mixed strategy in such tasks. While the papers cited by the reviewer might suggest that LLMs are the main use case of early-exiting nowadays, this is not really the case. The early-exit setups we investigate are particularly well-suited for low-power edge-device deployments [1], highlighting the real-world applicability of our findings [2, 3, 4, 5, 6].

> as in LayerSkip and Relaxed Recursive Transformers.
LayerSkip [7] proposes a "loss curriculum" (training strategy) that is almost equivalent to the Mixed-gradual strategy from our appendix, with the difference that they enable the next IC at regular intervals, while we use an early-stopping criterion to decide when to proceed to the next phase. However, this approach is used in only a single experiment of their work, with a different strategy, a rotational curriculum, used for the other experiments. No explanation is given for why the authors prefer one over the other, and no ablation study of this aspect is presented. The arbitrary adoption of a specific training strategy in multiple prior studies (for instance, in [2, 3, 9, 10, 11] from last year) without sufficient justification was the primary motivation of our work.

While Relaxed Recursive Transformers [8] includes an ablation study on training strategies, this constitutes a relatively minor component of their overall contribution, and the transferability of their findings to the settings explored in our work remains uncertain. In contrast, our paper offers a substantially broader and more systematic analysis of early-exit training strategies. Additionally, we emphasize that according to the [ICML 2025 Reviewer Instructions](https://icml.cc/Conferences/2025/ReviewerInstructions), this work should be regarded as concurrent with ours.

We thank the reviewer for making us aware of these two works. We modify our manuscript to briefly discuss these papers in the related work section.

[1] Matsubara, Yoshitomo, Marco Levorato, and Francesco Restuccia. "Split computing and early exiting for deep learning applications: Survey and research challenges." ACM Computing Surveys 55.5 (2022): 1-30.
[2] Colocrese, Marco, Erdem Koyuncu, and Hulya Seferoglu. "Early-Exit meets Model-Distributed Inference at Edge Networks." 2024 IEEE 30th International Symposium on Local and Metropolitan Area Networks (LANMAN). IEEE, 2024.
[3] Wang, Jingcun, Bing Li, and Grace Li Zhang.
"Early-exit with class exclusion for efficient inference of neural networks." 2024 IEEE 6th International Conference on AI Circuits and Systems (AICAS). IEEE, 2024.
[4] Ayyat, Mohammed, Tamer Nadeem, and Bartosz Krawczyk. "ClassyNet: Class-Aware Early-Exit Neural Networks for Edge Devices." IEEE Internet of Things Journal 11.9 (2023): 15113-15127.
[5] Dong, Rongkang, Yuyi Mao, and Jun Zhang. "Resource-constrained edge AI with early exit prediction." Journal of Communications and Information Networks 7.2 (2022): 122-134.
[6] Bajpai, Divya J., Aastha Jaiswal, and Manjesh K. Hanawal. "I-SplitEE: Image classification in split computing DNNs with early exits." ICC 2024 - IEEE International Conference on Communications. IEEE, 2024.
[7] Elhoushi, Mostafa, et al. "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding." Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2024.
[8] Bae, Sangmin, et al. "Relaxed Recursive Transformers: Effective Parameter Sharing with Layer-wise LoRA." arXiv preprint arXiv:2410.20672 (2024).
[9] KhademSohi, Hossein, et al. "SelfXit: An Unsupervised Early Exit Mechanism for Deep Neural Networks." Transactions on Machine Learning Research.
[10] Jazbec, Metod, et al. "Fast yet safe: Early-exiting with risk control." Advances in Neural Information Processing Systems 37 (2024): 129825-129854.
[11] Meronen, Lassi, et al. "Fixing overconfidence in dynamic neural networks." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2024.
Summary: The authors study different training regimes for early-exit networks (EENNs). To this end, they propose a framework consisting of four different metrics (gradient dominance, mode connectivity, numerical rank, mutual information) for studying the training dynamics of EENNs. They use the framework to explore the differences between commonly used training strategies (joint, disjoint) as well as their newly proposed mixed strategy. In the experiments, they show that their mixed strategy performs favourably (in most cases) compared to the joint and disjoint baselines.

Claims And Evidence: /
Methods And Evaluation Criteria: /
Theoretical Claims: /
Experimental Designs Or Analyses: /
Supplementary Material: /
Relation To Broader Scientific Literature: /
Essential References Not Discussed: /

Other Strengths And Weaknesses:

Strengths:
- I agree with the authors that studying and understanding the training approaches for EENNs is under-explored and that most papers just do what some of the seminal papers from the past did (MSDNet, SDN, etc.). Hence, I believe this work fills an important gap in the early-exiting literature.
- I like the proposed framework for studying the training dynamics. I believe that going beyond just experimentally comparing different training regimes (e.g., via accuracy-FLOPs curves on ImageNet) is valuable and provides more insight into the differences between the considered regimes.
- The experiments presented support the claims made in the paper and show that the mixed strategy might be the optimal one going forward in the early-exit community.

Weaknesses:
- While I appreciate the framework presented, I wonder if all four metrics are indeed necessary. For example, I feel that the numerical rank one is not that informative, as all four curves displayed in Figure 4 show quite a similar trend to me.
- While I understand it is not the focus of this work, it would still be valuable to say something about whether the findings on training dynamics presented in this paper translate to early-exit LLMs [1, 2, 3].

[1] Schuster, T., Fisch, A., Gupta, J., Dehghani, M., Bahri, D., Tran, V., Tay, Y. and Metzler, D., 2022. Confident adaptive language modeling. Advances in Neural Information Processing Systems, 35, pp. 17456-17472.
[2] Bae, S., Ko, J., Song, H. and Yun, S.Y., 2023. Fast and robust early-exiting framework for autoregressive language models with synchronized parallel decoding. arXiv preprint arXiv:2310.05424.
[3] Chen, Y., Pan, X., Li, Y., Ding, B. and Zhou, J., 2023. EE-LLM: Large-scale training and inference of early-exit large language models with 3D parallelism. arXiv preprint arXiv:2312.04916.

Other Comments Or Suggestions: /

Questions For Authors:

Questions:
- The fact that the joint strategy leads to suboptimal performance at larger computational budgets reminds me a bit of the "interference" of the earlier classifiers discussed in the MSDNet paper (see the right plot in Figure 3 there). To tackle this, MSDNet introduces architectural changes to its CNN via "dense connectivity". So in some way your mixed strategy aims to do the same, but is more general since it doesn't require architectural modifications. This is somewhat confirmed by your experimental results since, if I read Table 1 correctly, the mixed strategy improves over the joint strategy the least at the maximal budget on MSDNet.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate and agree with the reviewer's assessment that our work addresses a meaningful gap in the early-exiting literature.

> numerical rank informativeness

Firstly, the key insight of the numerical rank metric is that placing multiple early exits increases the rank, and with it the expressiveness of the network's representation. The difference in ranks between the plain model (disjoint regime) and the mixed regime in Figure 4a is significant, as it is substantially larger than the standard deviation. In Fig. 4a we only show the mixed result for clarity, but Fig. 4b shows that both the mixed and joint regimes obtain similar ranks (given that both regimes foster enriched representations), and both are higher than the backbone's.

Secondly, the backbone remains unchanged in the disjoint regime, so the numerical rank experiment in Fig. 4a is also valuable because it may explain the inferior performance of early ICs in the disjoint regime. Joint training results in layers that enable more expressive intermediate representations (higher numerical rank), thus allowing the coexistence of features relevant for the nearest IC and features relevant for the deepest ICs. In contrast, a consistently lower rank is obtained for the disjoint regime. As mentioned above, when even an already trained backbone is allowed to be affected by the ICs' gradients, the numerical rank of its representation rises (see Figure 4a), and so does its performance at lower budgets (see the accuracy results for disjoint vs. mixed in any setting). In the current version of the manuscript we rewrite the explanation of the numerical rank results to make these points clearer to the reader.

> While I understand it is not the focus of this work, it would still be valuable to say something about whether the findings on training dynamics presented in this paper translate to early-exit LLMs [1, 2, 3]

We appreciate this insightful suggestion.
We agree that exploring early-exit regimes in LLMs is a promising direction. However, we intentionally refrained from discussing generative LLMs, as translating our observations to early-exit LLMs is challenging due to several fundamental differences (we also note that recent early-exiting studies [1, 2] refrain from studies on LLMs). Methods like CALM and FREE incorporate intricate confidence mechanisms, state-copying techniques, and synchronization strategies tailored specifically to token-level decisions, which are not directly analogous to the layer-wise early-exiting approach we analyzed. Moreover, EE-LLM emphasizes 3D parallelism and large-scale model training, aspects that differ substantially from our focus on training regimes and layer-wise representation dynamics.

However, as an early effort, we add an experiment on an NLP classification problem using BERT-B trained on the Newsgroups dataset: [results](https://mega.nz/file/29BwxLCa#cntotXc3eDo7mnqA7KsCHBzq_yiqy905_IJOd0gFBtE)

For the backbone network (BERT-B), lower MI in the earlier layers and higher MI in the rear layers used by the disjoint regime may explain its positive performance only at the deeper layers. A slightly increased MI for the mixed regime is further indicative of better performance.

[1] Jazbec, Metod, et al. "Towards anytime classification in early-exit architectures by enforcing conditional monotonicity."
[2] Meronen, Lassi, et al. "Fixing overconfidence in dynamic neural networks."
[3] Matsubara, Levorato, and Restuccia. "Split computing and early exiting for deep learning applications: Survey and research challenges."

> The fact that joint strategy leads to suboptimal performance for larger computational budgets reminds me a bit of the "interference" …

We agree that interference between internal classifiers is a central issue; indeed, it is what inspired our gradient dominance metric.
Our experiments reveal that even MSDNet's architectural adjustments, including its dense connectivity, do not fully resolve this issue. The same is true for gradient equilibrium from [1], as we show in Section 4.4. In fact, our mixed training regime outperforms these approaches, demonstrating more effective handling of interference and yielding superior performance, particularly under larger computational budgets.

[1] Li, Hao, et al. "Improved techniques for training adaptive deep networks."
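To make the notion of gradient interference between exits concrete, here is a minimal, illustrative sketch of comparing how strongly each exit's loss pulls on a shared backbone. It uses a toy linear model with analytic MSE gradients; the setup and all names are our assumptions for illustration, not the paper's actual gradient dominance metric.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))               # batch of inputs
W = rng.normal(size=(8, 16)) * 0.1         # shared "backbone" (one linear map)
H_early = rng.normal(size=(16, 3)) * 0.1   # early exit head
H_final = rng.normal(size=(16, 3)) * 0.5   # final exit head
Y = rng.normal(size=(64, 3))               # regression targets, for simplicity

def backbone_grad_norm(head):
    """L2 norm of the gradient of this exit's MSE loss w.r.t. the shared W."""
    Z = X @ W                              # shared representation
    err = Z @ head - Y                     # this exit's residual
    grad_W = (2 / len(X)) * X.T @ (err @ head.T)
    return float(np.linalg.norm(grad_W))

g_early = backbone_grad_norm(H_early)
g_final = backbone_grad_norm(H_final)
# Whichever exit's gradient norm dominates is the one chiefly steering the
# shared backbone during joint training -- one symptom of interference.
print(g_early, g_final)
```

In a real multi-exit network the same comparison would be made with autograd over the actual backbone parameters, per training step.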
Summary: The paper presents an enhanced early-exit training approach that combines two phases: initial backbone training followed by full multi-exit network training. This mixed strategy addresses the shortcomings found in both joint and disjoint training methods. While the paper presents its methodology clearly and provides thorough empirical validation, several limitations exist. The authors do not experiment on SOTA settings, and the proposed method, though well-explained, lacks technical innovation, essentially combining two existing approaches.

## update after rebuttal

In the rebuttal period, the authors have added the FLOPs-accuracy curve, which is a valuable evaluation approach for early-exit models. The authors also clarify the novelty and the relationship between the method and the analysis, which addresses my concerns well. As a result, I raise my rating from Weak Reject to Weak Accept. I hope the authors will also add the FLOPs-accuracy curve (budgeted batch classification, proposed in MSDNet [1] and widely used in following works [2-5]) in the final revision, because it is a very clean way to show the performance of early-exiting networks.

[1] Huang, Gao, et al. "Multi-Scale Dense Networks for Resource Efficient Image Classification." International Conference on Learning Representations. 2018.
[2] Li, Hao, et al. "Improved techniques for training adaptive deep networks." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019.
[3] Yang, Le, et al. "Resolution adaptive networks for efficient inference." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.
[4] Han, Yizeng, et al. "Learning to weight samples for dynamic early-exiting networks." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.
[5] Han, Yizeng, et al. "Dynamic perceiver for efficient visual recognition." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
Claims And Evidence: Yes, they provide much analysis and many experiments for their claims.

Methods And Evaluation Criteria: I am afraid not. The results presented in Tables 1-6 are problematic: they present the x-axis as a ratio, making it hard for the reviewer to know at which FLOPs this performance is achieved. I suggest the authors present results following the literature below:

[1] Han, Yizeng, et al. "Dynamic perceiver for efficient visual recognition." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
[2] Wang, Yulin, et al. "Not all images are worth 16x16 words: Dynamic transformers for efficient image recognition." Advances in Neural Information Processing Systems 34 (2021): 11960-11973.

Theoretical Claims: The authors provide some experimental analysis in Section 3, but I can hardly find the relationship between it and the proposed method.

Experimental Designs Or Analyses: I have checked. I think the results presented in Tables 1-6 are problematic: they present the x-axis as a ratio, making it hard for the reviewer to know at which FLOPs this performance is achieved. I suggest the authors present results following the literature cited above.

Supplementary Material: Yes, the authors provide additional results, visualizations, and training settings in the supplementary material.

Relation To Broader Scientific Literature: The work relates to the early-exit and dynamic neural network literature.

Essential References Not Discussed: The related work section is sufficient.

Other Strengths And Weaknesses: Early exiting is a very valuable research topic, but the experiments should report the FLOPs alongside each accuracy.
25%, 50%, 75% is very ambiguous.

Other Comments Or Suggestions: No.

Questions For Authors: No.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the effort spent on reviewing our paper and for the valuable insights. If we have addressed the remaining concerns, we would kindly ask the reviewer to consider raising their score.

> The authors do not do experiment on SOTA settings.

> The results presented in Tables 1-6 are problematic. They present the x-axis as a ratio, making it hard to know at which FLOPs this performance is achieved.

In this work, we followed the performance reporting scheme initiated by [1] due to its simplicity and readability (though without the absolute FLOPs values). However, as per the reviewer's suggestion, we provide the same results as FLOPs vs. accuracy plots (similar to the one in Figure 1) at [this link](https://mega.nz/file/nsgSjAiR#73JNpisrf_NrvM0lI2ouyYNgKADWAZWYMlX7QG1EL7s). We also include them in the current version of our manuscript.

[1] Kaya et al. "Shallow-deep networks: Understanding and mitigating network overthinking."

We are also not sure whether this is what the reviewer meant, but we want to emphasize that our paper does not aim to achieve state-of-the-art performance in any setup. Its purpose is to analyze the training dynamics and compare multi-exit model training strategies. We do this by evaluating all the considered regimes on multiple commonly used architectures, multiple modalities and datasets, and multiple early-exiting approaches. Moreover, most of the results we present were computed for networks trained from scratch, with three different PRNG seeds. Our limited computational resources prevent us from running large SOTA models.
Nevertheless, to demonstrate that our findings scale to larger models, we present the results of the ImageNet-1k experiment for the ViT-S variant of the vision transformer architecture in the link provided (Figure 11):

| Regime | 25% | 50% | 75% | 100% |
|-|-|-|-|-|
| Disjoint | 10.23 | 33.91 | 68.02 | 78.38 |
| Mixed | 49.50 | 75.17 | 78.28 | 78.33 |
| Joint | 50.10 | 73.99 | 76.38 | 76.44 |

We can see that, as before, the disjoint regime is still significantly inferior at lower budgets, while the joint regime exhibits a noticeable performance gap at higher budgets.

> the proposed method, though well-explained, lacks technical innovation - essentially combining two existing approaches.

We thank the reviewer for their feedback. Our mixed training strategy is intentionally simple from a technical standpoint, and we consider this an advantage rather than a weakness of our work. We believe that clarity and effectiveness should take priority over unnecessary complexity. Rather than adding complexity for its own sake, we focus on a practical, well-reasoned method that consistently outperforms existing approaches. Moreover, technically simple methods are easy to implement and, as a consequence, are more likely to be widely adopted by the community.

We emphasize that the primary goal of our work is to systematically analyze early-exit training strategies. Since existing strategies are also simple, it is important to first analyze their limitations before considering more complex alternatives. Our paper demonstrates the weaknesses of the two commonly used training strategies and highlights the large impact of this previously overlooked aspect. The proposed "mixed" approach is the most straightforward way to alleviate the issues of the joint and disjoint regimes. While *technically* simple, it is not necessarily obvious, as it was not used in any prior work.
> The authors provide some experimental analysis in section 3, but I can hardly find the relationship between them and the proposed method.

We appreciate the reviewer's feedback and would like to clarify how the analyses in Section 3 are directly connected to the proposed mixed training strategy. The experimental metrics (gradient dominance, mode connectivity, numerical rank, and mutual information) were chosen to reveal distinct aspects of the training dynamics under the different regimes.

For example, the gradient dominance analysis shows that the mixed training regime shifts the optimization focus toward later exits. This shift directly correlates with improved performance under higher computational budgets, as later exits become more robust. Please note the relation to the gradient equilibrium experiments in Sec. 4.4: the gradient dominance results may explain the need to rescale the rear exits when training in the joint regime, while the mixed regime obviates the need to apply gradient equilibrium. Mutual information and numerical rank reveal further reasons why the performance of the presented regimes may differ (e.g., for early vs. rear exits). Finally, mode connectivity reveals that the mixed and disjoint regimes produce very similar solutions, despite their significantly different performance at low computational budgets.
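For context on the mode connectivity check discussed in this thread: it amounts to evaluating the loss along the straight line between two trained parameter vectors and looking for a barrier. Below is a minimal sketch with a toy one-parameter double-well loss; this is illustrative only (in the paper, the interpolation is done over full network weights).

```python
import numpy as np

def interpolation_losses(theta_a, theta_b, loss_fn, n_points=11):
    """Loss along the segment between two solutions; a high-loss barrier
    on the path suggests the solutions lie in different basins."""
    alphas = np.linspace(0.0, 1.0, n_points)
    return [loss_fn((1 - a) * theta_a + a * theta_b) for a in alphas]

# Toy double-well loss with minima at -1 and +1: the straight path between
# them must cross a barrier at 0, i.e. no linear mode connectivity.
loss = lambda t: float(np.minimum((t - 1) ** 2, (t + 1) ** 2).sum())
barrier = interpolation_losses(np.array([-1.0]), np.array([1.0]), loss)
print(max(barrier))  # 1.0, the barrier height at the midpoint
```

A flat (near-zero) loss profile along the path, as reported for disjoint vs. mixed solutions, is the "same basin" case; the yellow high-loss region reported for the joint-regime model corresponds to a barrier like the one above.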
Summary: This submission analyses different training strategies for early-exit models, namely disjoint (frozen backbone), joint (end-to-end), and the proposed mixed (backbone pre-training + joint) approach. Several metrics are proposed, including gradient dominance, mode connectivity, numerical rank, and mutual information, each capturing a different angle of the training dynamics.

Claims And Evidence: In my opinion, there is limited evidence for some of the claims made in the submission. This is because the joint training scheme can be severely affected by factors such as the number and placement of early exits, which have not been adequately considered in the submission.

Methods And Evaluation Criteria: In my opinion, the proposed methods and evaluation criteria are meaningful for the examined problem.

Theoretical Claims: Not applicable.

Experimental Designs Or Analyses: In my opinion, the experimental analysis considers numerous backbone models and datasets, but not adequate variation in the configuration of the early-exit models.

Supplementary Material: I have read and fully considered the provided appendix in my review. I have not reviewed the provided codebase.

Relation To Broader Scientific Literature: In my opinion, the analysis conducted in this manuscript offers numerous insights for the training of early-exit models for CV and NLP. The proposed "mixed" approach lacks novelty, not because of its simplicity, but mostly because starting with an ImageNet-pretrained backbone can be considered common practice for ML practitioners deploying multi-exit models.

Essential References Not Discussed: In my opinion, relevant literature has been adequately cited.

Other Strengths And Weaknesses:

Strengths:
- Overall the paper is well written, easy to follow, and studies an interesting problem.
- The conducted empirical analysis and proposed metrics are insightful and offer numerous findings that can be adopted to guide future research in the field.

Comments:
1.
The findings of this analysis, however, may not generalise across different configurations of early-exit models, as the number and placement of ICs affect the training dynamics of end-to-end (joint) approaches, as does the architecture of the ICs, which has been discussed in the manuscript.
2. The proposed mixed training approach essentially comprises backbone pre-training followed by joint training, and cannot be considered novel, as it is common practice in ML deployment. Nonetheless, the methods discussed in the appendix and their comparison to traditional techniques comprise a more interesting discussion.

Other Comments Or Suggestions: I would suggest an alternative presentation, where "mixed" training is not posed as a novel contribution of this work. Instead, the comparative analysis of different early-exit model training regimes could be the main contribution of this work.

Post-rebuttal edit: Provisionally increasing my score from WR to WA, having considered the authors' rebuttal and other reviewers' comments.

Questions For Authors: Please consider conducting an ablation on the number and placement of ICs, and how they affect the findings of the discussed analysis.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for assessing our work and recognizing the importance of this previously overlooked aspect. We hope that the answers below adequately address the reviewer's questions and concerns. If that is the case, we kindly ask for a reconsideration of the score.

> number and placement of early exits, which have not been adequately considered in the submission
> …
> but not adequate variation in the configuration of the early-exit models
> …
> consider conducting an ablation on the number and placement of ICs

In Appendix B **we provide this kind of analysis** for our ViT-T model trained on the ImageNette dataset. These results show that our findings generalize to different placement densities. However, to make our case even stronger, **we rerun those experiments on Tinyimagenet/ResNet-50** and **include additional placement schemes** (Dense-Sparse places ICs at blocks [1, 2, 3, 4, 5, 6, 7, 11]; Sparse-Dense places ICs at blocks [1, 4, 8, 9, 10, 11, 12, 13, 14]). We present the results below:

|Placement|Regime|25%|50%|75%|100%|
|-|-|-|-|-|-|
||disjoint|38.92|49.25|60.10|65.76|
|Every-1|joint|52.20|62.49|65.52|65.59|
||mixed|52.22|63.03|67.21|67.35|
||disjoint|37.34|48.03|60.34|65.65|
|Every-2|joint|51.81|62.60|65.55|65.38|
||mixed|52.41|63.19|67.14|67.33|
||disjoint|-|47.91|60.95|65.77|
|Every-3|joint|-|63.33|67.22|67.21|
||mixed|-|62.52|66.72|66.71|
||disjoint|-|41.32|57.78|65.78|
|Every-4|joint|-|62.54|66.30|66.27|
||mixed|-|62.07|67.08|67.14|
||disjoint|-|39.85|56.20|65.72|
|Every-5|joint|-|61.10|65.61|65.79|
||mixed|-|61.95|67.33|67.40|
||disjoint|38.47|50.64|62.04|65.74|
|Dense-Sparse|joint|53.14|62.23|64.76|64.93|
||mixed|53.48|63.17|66.24|66.27|
||disjoint|37.12|47.03|59.79|65.68|
|Sparse-Dense|joint|50.47|61.19|65.36|65.42|
||mixed|51.19|62.27|67.26|67.47|

The results are consistent with the main findings of the paper, that is:
1. The mixed regime still presents generally better performance than the joint regime.
2.
The disjoint regime is still inadequate for low budgets.

> The proposed "mixed" approach lacks novelty, … starting with an ImageNet-pretrained backbone can be considered common practice…
> …essentially comprises a backbone pre-training followed by joint training, and cannot be considered novel as it is common practice in ML deployment…

We emphasize that the proposed "mixed" approach is **not** in common use. The common practice of using pre-trained models for transfer learning is independent of the choice of training regime. Starting with a model pre-trained on dataset A does not preclude us from fine-tuning on dataset B with different training regimes.

In particular, suppose an ML practitioner starts with a backbone model (e.g., ViT-B) pre-trained on dataset A (e.g., ImageNet-1k), and their aim is a multi-exit model that performs well on dataset B (e.g., CIFAR-100). If they take the backbone, attach classification heads, and train (fine-tune) everything jointly on dataset B, then **that is still joint training** (fine-tuning) according to our terminology. To perform mixed-regime training in such a setting, we first fine-tune the backbone only (on dataset B), and only then proceed to fine-tune everything together.

In the paper we compare all regimes in the transfer learning setting and present the results in Table 4. In this setting the behaviour of each regime and the findings are the same as in the other experiments: both joint and disjoint training display a significant performance gap on some budgets.

If A == B, then taking a pre-trained model, attaching ICs, and training jointly would indeed be equivalent to our "mixed" approach. However, we argue that such cases are extremely uncommon. Again, we apologize for the misunderstanding, which resulted from our insufficiently thorough description of the transfer learning experiment. In the current version of our manuscript we modify this section to present the pre-trained setup more clearly.
> …alternative presentation…

We thank the reviewer for this valuable suggestion regarding the positioning of our contribution. Although it may not have been emphasized enough, our goal in this work is to provide a comprehensive comparative analysis of different early-exit training regimes, and investigate their training dynamics and implications. In the paper, we point out scenarios where the joint or even disjoint training regimes can be more suitable or advantageous. For instance, we observe that the joint regime may be preferable at very low computational budgets (where early classifiers dominate), and the disjoint regime can show good performance when the backbone model is already well-trained or fixed and further training resources are limited. Nevertheless, our empirical results consistently suggest that, in most practical scenarios and computational budgets, the mixed regime tends to outperform the others, hence our emphasis on highlighting its benefits.

---

Rebuttal Comment 1.1: Comment: Thank you for the detailed replies on the raised concerns. Although I remain skeptical about the novelty of the proposed "mixed" training approach, I believe that the comparative analysis between different training methods for early-exit models is indeed broad enough to drive robust conclusions, offering useful insights to practitioners and researchers. As such, I am provisionally increasing my score to WA, pending the reviewer discussion.
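The computational budgets in the tables above (25%/50%/75%/100%) arise from early-exit inference: a sample leaves the network at the first internal classifier (IC) that is sufficiently confident. The sketch below illustrates one common exit rule (max-softmax confidence thresholding); it is a generic illustration of the mechanism, not the authors' code, and the logits and threshold are invented.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

def early_exit_predict(ic_logits, threshold=0.9):
    """ic_logits: list of logit vectors, one per IC, ordered by depth.
    Returns (predicted_class, exit_index); compute cost grows with exit_index."""
    for i, logits in enumerate(ic_logits):
        probs = softmax(logits)
        if probs.max() >= threshold:     # confident enough: exit here
            return int(np.argmax(probs)), i
    # no IC was confident: fall through to the final classifier
    return int(np.argmax(softmax(ic_logits[-1]))), len(ic_logits) - 1

# An "easy" sample: the first IC is already confident.
easy = [np.array([4.0, 0.0, 0.0]), np.array([6.0, 0.0, 0.0])]
# A "hard" sample: only the final classifier is confident.
hard = [np.array([0.2, 0.1, 0.0]), np.array([0.0, 5.0, 0.0])]

assert early_exit_predict(easy) == (0, 0)   # exits at the first IC
assert early_exit_predict(hard) == (1, 1)   # runs to the final exit
```

Sweeping the threshold trades average compute for accuracy, which is how per-budget accuracies like those in the tables are typically obtained.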
Latent Diffusion Planning for Imitation Learning
Accept (spotlight poster)
Summary: The paper proposes latent diffusion planning (LDP), a method for imitation learning featuring 3 components: 1) A variational autoencoder, mapping images to a latent space 2) A latent diffusion planner, which generates a sequence of latent states that the policy should visit 3) An inverse dynamics model, also leveraging diffusion, which associates an action to a latent transition. The interesting property of this framework is that it can use suboptimal demonstrations (with actions) to refine the inverse dynamics model, and unlabeled expert videos to improve the latent planner. Experiments show that this setup better makes use of unlabeled or suboptimal demonstrations than previous methods (on average).

Claims And Evidence: The main claims of the paper are 1) LDP outperforms previous methods thanks to its better usage of unlabeled and suboptimal data 2) LDP can be applied to real robots, where collecting data with actions can be costly. The results listed in Tables 1 and 2 are convincing. LDP is competitive with the best baselines in all the considered tasks, often outperforming them. The authors make the effort of fairly comparing LDP with other methods by providing them with the same data LDP has access to. For example, relabeling action-free data with an inverse dynamics model is a strong alternative to LDP (which in fact performs comparably to LDP in Lift and Square), but overall LDP is stronger. I would suggest removing LDP + Subopt from the first table, or merging the two, because I found it confusing at a first read (to my understanding, the baselines in Table 1 do not use suboptimal data). Experiments using Franka also show an improvement over DP, especially using action-free data. One possible issue in using LDP is that the performance improvement is often unpredictable - in some cases using action-free or suboptimal trajectories leads to a large improvement, sometimes even to a small decrease in performance.
Understanding in which situations suboptimal or action-free data are beneficial would improve the applicability of the method.

Methods And Evaluation Criteria: Yes, the benchmarks correctly support the claims. Additional experiments could evaluate the role of action-free and suboptimal data more precisely. For example, the authors could show a graph of the performance (maybe in one or in a subset of tasks) as a function of the amount of suboptimal / action-free data, to better understand whether their impact saturates at a certain point, and how many labeled optimal examples are necessary for the method to be effective.

Theoretical Claims: The paper makes no theoretical claims.

Experimental Designs Or Analyses: I do not see particular issues with the experimental design. My only concern is the lack of justification for the choice of labeled / unlabeled / suboptimal subsets, which might have a relevant impact on the results.

Supplementary Material: The supplementary material just lists hyperparameter details.

Relation To Broader Scientific Literature: The method is compared to strong baselines and therefore its relevance in the context of the broader literature is clear.

Essential References Not Discussed: The background discussion is broad.

Other Strengths And Weaknesses:
Strengths:
* Clarity of exposition
* Good comparison with baselines
* Elegant architecture
* Convincing results
Weaknesses:
* Lack of evaluation of how many labeled / unlabeled / suboptimal trajectories are needed for optimal performance

Other Comments Or Suggestions:
- 108: which enabling
- 252 right column - the paragraph on the number of labeled trajectories and suboptimal / action free is difficult to follow, a small table would be clearer
- 294: remove use or collect

Questions For Authors: - Does using action-free and suboptimal trajectories at the same time further improve the results?
- Does training the inverse dynamics on suboptimal trajectories that will likely never be encountered during deployment (the planner will try to follow the optimal trajectories) actually improve action regression?
- Are the action-free data always from expert policies? To my understanding, suboptimal action-free trajectories cannot be used because they would decrease the performance of the planner.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
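The three-component pipeline summarized at the top of this review (VAE encoder → latent diffusion planner → inverse dynamics model) can be sketched at the level of data flow. The "models" below are random stand-ins; only the wiring and the tensor shapes reflect the description above, and all dimensions, names, and the planner's behavior are invented for illustration.

```python
import numpy as np

# Shape-level sketch of the LDP-style inference loop described in the review.
# encode() / plan_latents() / inverse_dynamics() are random stand-ins, not
# the authors' architecture.
rng = np.random.default_rng(0)
LATENT_DIM, ACTION_DIM, HORIZON = 16, 7, 8

def encode(image):
    # VAE-encoder stand-in: image -> compact latent state
    return rng.normal(size=LATENT_DIM)

def plan_latents(z0, horizon):
    # Diffusion-planner stand-in: would iteratively denoise a sequence of
    # future latent states conditioned on z0; here we just emit noise.
    return np.stack([z0 + 0.1 * t * rng.normal(size=LATENT_DIM)
                     for t in range(1, horizon + 1)])

def inverse_dynamics(z_t, z_next):
    # IDM stand-in: (z_t, z_{t+1}) -> action
    return rng.normal(size=ACTION_DIM)

image = rng.normal(size=(96, 96, 3))     # current camera observation
z0 = encode(image)
plan = plan_latents(z0, HORIZON)         # (HORIZON, LATENT_DIM) latent states
prev = np.vstack([z0[None], plan[:-1]])  # latent at the start of each transition
actions = np.stack([inverse_dynamics(z, z_next)
                    for z, z_next in zip(prev, plan)])

assert plan.shape == (HORIZON, LATENT_DIM)
assert actions.shape == (HORIZON, ACTION_DIM)  # one action per latent transition
```

The split makes the data requirements explicit: the planner only ever sees latent states (so action-free videos suffice), while the IDM only needs state-action transitions (so suboptimal but labeled data suffices).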
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you for your detailed feedback on our project. We are happy that you found the paper to be well-written and clear. We address your questions and comments below. Please let us know whether there are any other concerns you have that prevent you from increasing your score.

**Q1: Choice of labeled / unlabeled / suboptimal trajectories needed for optimal performance**

For different robotics tasks, we tried to keep most of the parameters the same: 100 action-free trajectories for Robomimic tasks and 500 suboptimal trajectories for all tasks. However, we chose the following:

\# of demonstrations: Robomimic, by default, includes 200 proficient demonstrations per task, and ALOHA includes 50 demonstrations per task. To showcase the effectiveness of our method when the number of demonstrations is low, we chose to take **half** the number of demonstrations -- 100 for Robomimic tasks, and 25 for ALOHA tasks. However, Lift is a very simple task, and even with around 10 demonstrations, behavior cloning policies can achieve very high accuracy. Thus, specifically due to the simplicity of the Lift task, we chose 3 demonstrations to adhere to the low demonstration setting.

\# of action-free trajectories: we chose to use the remaining **half** of the demonstrations as action-free trajectories. This meant 100 demonstrations for Robomimic tasks, and 25 demonstrations for ALOHA. Since Lift only used 3 demonstrations, we considered using only 3 action-free trajectories. However, because we wanted to try and remain consistent with the number of trajectories, we chose to use 100 action-free demonstrations.

**Q2: Does using action-free and suboptimal trajectories at the same time further improve the results?**

Yes, using action-free and suboptimal trajectories further improves results and strongly exceeds our baselines. We have included updated results.
Here is how LDP + Action-Free + Suboptimal compares against just LDP without the additional trajectories. In addition, we have rerun UniPi with identical hyperparameters to LDP, per Reviewer PyaC’s suggestion.

| Method | Lift | Can | Square | ALOHA Cube | Average |
|----------------------------|--------------|--------------|--------------|--------------|---------|
| DP | 0.60 +- 0.00 | 0.63 +- 0.01 | 0.48 +- 0.00 | 0.32 +- 0.00 | 0.51 |
| DP-VPT | 0.69 +- 0.01 | 0.75 +- 0.01 | 0.48 +- 0.04 | 0.45 +- 0.03 | 0.59 |
| UniPi-OL + Action-Free | 0.09 +- 0.05 | 0.23 +- 0.03 | 0.07 +- 0.03 | 0.02 +- 0.00 | 0.11 |
| UniPi-CL + Action-Free | 0.14 +- 0.02 | 0.32 +- 0.04 | 0.09 +- 0.01 | 0.17 +- 0.03 | 0.18 |
| LDP | 0.69 +- 0.03 | 0.70 +- 0.02 | 0.46 +- 0.00 | 0.64 +- 0.04 | 0.65 |
| LDP + Action-Free | 0.67 +- 0.01 | 0.78 +- 0.04 | 0.47 +- 0.03 | 0.70 +- 0.02 | 0.66 |
| LDP + Action-Free + Subopt | 1.00 +- 0.00 | 0.98 +- 0.00 | 0.83 +- 0.01 | 0.97 +- 0.01 | 0.95 |

| Method | Lift | Can | Square | ALOHA Cube | Average |
|----------------------------|--------------|--------------|--------------|--------------|---------|
| DP | 0.60 +- 0.00 | 0.63 +- 0.01 | 0.48 +- 0.00 | 0.32 +- 0.00 | 0.51 |
| RC-DP | 0.40 +- 0.04 | 0.73 +- 0.03 | 0.66 +- 0.02 | 0.60 +- 0.04 | 0.60 |
| DP+Repr | 0.66 +- 0.04 | 0.61 +- 0.01 | 0.44 +- 0.02 | 0.25 +- 0.03 | 0.49 |
| DP PT + FT | 0.52 +- 0.02 | 0.67 +- 0.01 | 0.57 +- 0.03 | 0.78 +- 0.00 | 0.64 |
| UniPi-OL | 0.12 +- 0.06 | 0.28 +- 0.02 | 0.07 +- 0.01 | 0.00 +- 0.00 | 0.12 |
| UniPi-CL | 0.12 +- 0.02 | 0.30 +- 0.02 | 0.10 +- 0.04 | 0.15 +- 0.07 | 0.17 |
| LDP | 0.69 +- 0.03 | 0.70 +- 0.02 | 0.46 +- 0.00 | 0.64 +- 0.04 | 0.65 |
| LDP + Subopt | 0.84 +- 0.06 | 0.68 +- 0.02 | 0.55 +- 0.03 | 0.71 +- 0.03 | 0.70 |
| LDP + Action-Free + Subopt | 1.00 +- 0.00 | 0.98 +- 0.00 | 0.83 +- 0.01 | 0.97 +- 0.01 | 0.95 |

**Q3: Training the inverse dynamics on suboptimal trajectories...**

We believe this does help, because LDP + Subopt > LDP, and LDP +
Action-Free + Subopt > LDP + Action-Free. In both of these cases, the difference is the suboptimal data for the IDM, which improves policy rollouts. One metric we can use to quantitatively measure this is the action MSE on a validation dataset, but we found that this is noisy and not strongly correlated with policy performance, which is true for many behavior cloning methods. Thus, we don’t find action MSE metrics for the IDM to be particularly insightful, and instead refer to policy performance.

**Q4: Are the action-free data always from expert policies? To my understanding, suboptimal action-free trajectories cannot be used because they would decrease the performance of the planner.**

Yes, the action-free data is assumed to come from expert (optimal) policies.
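The validation action-MSE metric mentioned in the Q3 answer above is simply the mean squared error between IDM-predicted actions and held-out ground-truth actions. A trivial sketch (the function name and array shapes are invented for illustration):

```python
import numpy as np

def action_mse(pred_actions, true_actions):
    """Mean squared error between predicted and ground-truth actions.
    Both arguments: (num_steps, action_dim) arrays."""
    pred_actions = np.asarray(pred_actions, dtype=float)
    true_actions = np.asarray(true_actions, dtype=float)
    return float(np.mean((pred_actions - true_actions) ** 2))

true = np.zeros((4, 2))
pred = np.full((4, 2), 0.5)
assert action_mse(pred, true) == 0.25   # (0.5)^2 averaged over all entries
```

As the rebuttal notes, a lower action MSE on held-out data does not necessarily translate into higher rollout success, which is why policy performance is the metric the authors rely on.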
Summary:
- This paper ultimately aims to do some form of imitation learning in robotic settings
- It does this with a modular approach, using: 1) a 'planner' to predict sequences of observations from those provided by an expert demonstrator. 2) an IDM predicting actions from past and future observations.
- At inference, the planner is used to generate a sequence, which the IDM converts to actions, which can be executed.
- Both models operate on observations that are compressed by a beta-VAE. Both are trained with diffusion as the loss.
- The paper emphasizes the types of data that can be used to train each model. The planner can also be trained on non-labelled (but still expert) data; the IDM can be trained on sub-optimal (but still labelled) data.
- Experiments in several simulated robotics tasks and one real robotic task show slight improvements over baselines.

Claims And Evidence: See strengths/weaknesses.

Methods And Evaluation Criteria: Fine.

Theoretical Claims: NA

Experimental Designs Or Analyses:
- Number of trajectories was quite small in all cases (100's of trajectories), which limits the impact of the work.
- Error bars overlapping for real results in Table 3

Supplementary Material: Looked at videos.

Relation To Broader Scientific Literature: See strengths/weaknesses.

Essential References Not Discussed: Diffusion policy is one example of diffusion for imitation learning, but there are other relevant papers missed.
- Diffusion policies as an expressive policy class for offline reinforcement learning
- Imitating Human Behaviour with Diffusion Models
- ...
Also,
- DP-VPT is presented as a baseline, but the Tobari paper might be cited here (it is elsewhere) and used in the name, since it came well before the VPT paper.

Other Strengths And Weaknesses:
Strengths
- Real robotic experiments.
- Main idea is sensible.
- Minor improvements in experiments.
- Nice justification about different types of data for different models.
Weaknesses
- My main criticism of the paper is wrt the novelty. The method feels like a repeat of UniPi (and probably other works) with the modification that things are done in a VAE's latent space. This just doesn't feel innovative enough in itself to justify acceptance. Given this is a key difference, I'd expect to see a deep investigation of how to shape this space, and analysis of speedups etc. But this is lacking.
- Improvement in experiments is minor compared to baselines (around 10%). Given the variability in results that can be caused by implementation details of baselines, I'm hesitant to believe that latent diffusion planning is really delivering some substantial gain here.
- The data size is quite small for all experiments. It'd be better to see if these techniques hold at larger scale (1000's or 10,000's of trajectories).
- While doing things in a custom latent space might be better from a speed perspective, it means pre-trained video generation models cannot be used, which I would expect might offer a good initialization for learning from the small datasets used in this work.
- In general the implementations for various components are a little outdated -- beta-VAE and DDPM.

Other Comments Or Suggestions: NA

Questions For Authors: See weaknesses.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you for your detailed feedback on our project. We provide the requested experiments and address the comments and questions in detail below. Please let us know whether there are any other concerns you have that prevent you from increasing your score.

**Q1: Improvement in Experiments**

We agree the experimental results in our submitted draft show that there is around a 10% improvement for LDP. However, we have now run updated experiments where LDP leverages both action-free and suboptimal data. While BC-like methods work well in settings with a lot of optimal demonstrations, LDP effectively leverages additional data sources, which we believe is a fundamental advantage. The results strongly outperform baselines now, by around 30% or more. **See Reviewer LWMg Q2**

**Q2: Novelty**

The novelty of this paper is to propose a simple and scalable algorithm to address learning from heterogeneous data sources like suboptimal and action-free data. Our main contribution is in showing good performance with a model-based method, which will enable future work to build on it. Further, we provide a comprehensive evaluation of alternative methods for learning from heterogeneous data, and establish a novel finding that our planning-based approach is particularly suited for this setting.

In terms of comparison to UniPi, we agree that the method is fairly similar. However, we find that LDP strongly outperforms UniPi, with the main difference being the much lower-dimensional latent space. Due to this change in method, there is a large improvement in performance. We attach updated results below, highlighted in red in our PDF.

In terms of how to shape the space, we add additional experiments using pretrained DINOv2 embeddings, which investigate how pretrained embeddings compare to LDP’s latent space. In these experiments, we directly swap out VAE embeddings with frozen DINOv2 embeddings.
However, we found directly planning over the DINOv2 embeddings does not lead to good learned behaviors (0% success rate even for easy tasks), which we hypothesize is due to the challenges of planning over a large latent space (384 dimensional). Thus, as an alternative, we fix a random projection matrix that reduces the 384 dimensional feature space to 16 dimensions, matching LDP’s latent space. We find that LDP strongly outperforms using pretrained latent embeddings. **See Reviewer PyaC Q4**

**Q3: Data Size in Experiments**

We agree that using 1,000 or 10,000 trajectories would be interesting. However, most robotics benchmarks and tasks typically use 50-300 demonstrations (Diffusion Policy, Robomimic, Action Chunking with Transformers, etc.). For example, Robomimic, a popular simulated imitation learning benchmark, includes datasets with tasks that have 200 demonstrations. Simulated ALOHA tasks use only 50 demonstrations. It is typically difficult to find single-task imitation learning datasets with more than a few hundred trajectories, due to the difficulty of collecting large-scale expert demonstrations. Larger datasets are often multi-task or language-conditioned, and they are often used to train large multi-task or language-conditioned policies (Open X-Embodiment, DROID, Octo, OpenVLA). These datasets are expensive to train on, especially for video models. We do not explore these datasets, due to the lack of good pretrained models, and because video models are much more expensive to train. This is why we are starting with small datasets, but this is an important direction for future work.

**Q4: Pretrained Video Models**

We agree that LDP cannot use pretrained video models. However, for our UniPi baseline, we compare finetuning a pretrained model vs. training from scratch. To provide a comparison, we use a model with weights pretrained on THOR, and from scratch.
In these results, we find that initializing from a pretrained model does not actually improve performance. **See Reviewer 9mX6 Q1** The power of pretrained video models may lie in their ability to generalize and extrapolate to new tasks and scenes, often through language conditioning, which we leave to future work. Generalization is important for robot learning, but we find that for single-task imitation learning, which typically uses smaller datasets of 50-300 demonstrations, using pretrained video models is not essential.

**Q5: Outdated Beta-VAE and DDPM**

We agree that there are advances over the beta-VAE (VQ-VAE, VQ-GAN, etc.) and DDPM (DDIM, Consistency Models, etc.). However, we chose both of these implementations due to their simplicity. For our planner, fast inference is not crucial to our contribution, and hence, we don’t use faster samplers like DDIM. However, we agree that improvements to both the beta-VAE and DDPM can lead to additional improvements and scalability, which we leave to future work.

**Q6: Citations**

Thank you for those references. We will include them in the draft.

---

Rebuttal Comment 1.1: Comment: Thank you for your response -- I appreciate a lot of hard work went into the rebuttal. I am inclined to maintain my score for now, but will engage with other reviewers in an open-minded manner.

---

Reply to Comment 1.1.1: Comment: Thank you for taking our further results and ablations into consideration. We appreciate your feedback, and we plan on incorporating writing suggestions from reviewers and our updated results in our paper. To address one of your comments again:

**Q3: Data size**

An additional way we can test our method is on the LIBERO dataset [1]. This is a multi-task simulated dataset with 130 tasks with 50 human-collected demonstrations each. One way to evaluate LDP's performance on larger robotic datasets is pretraining on many tasks and finetuning on a single downstream task.
We can also use data from other tasks as suboptimal data for a given task, in order to learn representations or dynamics. Given that we have reached the end of the rebuttal period, we unfortunately cannot include results, but we look forward to including this in a camera-ready version. Please let us know if you would find this compelling.

[1] Liu, Bo, et al. "Libero: Benchmarking knowledge transfer for lifelong robot learning." Advances in Neural Information Processing Systems 36 (2023): 44776-44791.

Thank you again for your time in providing feedback for the paper!
Summary: The work proposes a novel approach for imitation learning that combines an inverse dynamics model (IDM) with a planner that proposes future goal states in latent space. The approach first trains a variational autoencoder (VAE) that encodes visual representations of states into a lower dimensional latent space. Using the embeddings from this encoder, an IDM is trained that, given a state representation and a future state representation, predicts the action that will lead to that future state. At the same time, a planner is trained that predicts a sequence of embeddings of future states given an embedding of a state. The main insight of this work is that the IDM can be trained with suboptimal/general data that contains states and actions within the given environment, and the planner can be trained with (ideally optimal) demonstrations of solving the desired task, but no actions are required for training the planner. Both the IDM and the planner are implemented using diffusion models. The approach is evaluated in a series of simulated robotics tasks and a real world robotics experiment. Compared to a series of imitation learning and planning baselines and ablations, the proposed approach is found to be more effective at leveraging data that is partly suboptimal and/or action-free.

Claims And Evidence: The claims made in this work are largely clearly presented and well supported through empirical evidence. However, there are several inconsistencies in the experimental evaluation that I would expect to be corrected or well justified, since these might invalidate some of the findings of the provided experiments (see Experimental Design and Analyses, esp. 1. and 3.)

Methods And Evaluation Criteria: The work strongly relies on the latent space learned by the VAE to be expressive and representative of features that would be important for learning the dynamics and decision making within the task.
Given the efficacy of pre-trained visual encoders for imitation learning (e.g. [1, 2, 3, 4]), I wonder whether this step of training a task-specific VAE is even necessary or if you could replace this with an off-the-shelf pre-trained encoder (and if necessary fine-tune on available data). Establishing that this work could work well with pre-trained visual encoders would further generalise the applicability of this approach and reduce training cost. [1] Nair, Suraj, Aravind Rajeswaran, Vikash Kumar, Chelsea Finn, and Abhinav Gupta. "R3m: A universal visual representation for robot manipulation." arXiv preprint arXiv:2203.12601 (2022). [2] Schäfer, Lukas, Logan Jones, Anssi Kanervisto, Yuhan Cao, Tabish Rashid, Raluca Georgescu, Dave Bignell, Siddhartha Sen, Andrea Treviño Gavito, and Sam Devlin. "Visual encoders for data-efficient imitation learning in modern video games." arXiv preprint arXiv:2312.02312 (2023). [3] Shang, Jinghuan, Karl Schmeckpeper, Brandon B. May, Maria Vittoria Minniti, Tarik Kelestemur, David Watkins, and Laura Herlant. "Theia: Distilling diverse vision foundation models for robot learning." arXiv preprint arXiv:2407.20179 (2024). [4] Yuan, Zhecheng, Zhengrong Xue, Bo Yuan, Xueqian Wang, Yi Wu, Yang Gao, and Huazhe Xu. "Pre-trained image encoder for generalizable visual reinforcement learning." Advances in Neural Information Processing Systems 35 (2022): 13022-13037. Theoretical Claims: There are no theoretical contributions or proofs to check. 
Experimental Designs Or Analyses: The work uses vastly different amounts of demonstrations across all tasks in the evaluation:
- Can, Square: 100 demonstrations, 500 suboptimal trajectories, 100 action-free trajectories
- Lift: 3 demonstrations, 500 suboptimal trajectories, 100 action-free trajectories
- Transfer Cube: 25 demonstrations, 500 suboptimal trajectories, 25 action-free trajectories

and the amount of suboptimal trajectories in particular is significantly larger than the main demonstrations provided in the Lift and Transfer Cube tasks. In the ALOHA Transfer Cube task, it is clear that leveraging the 500 suboptimal trajectories is necessary for good performance from the results in Table 2. In particular typical DP is unable to perform well in this task since it is only trained on 25 demonstrations but using suboptimal trajectories via reward conditioning (RC-DP) or pre-training (DP PT + FT) significantly improves upon DP and in particular in the latter case performs comparable to LDP.

1. Would the authors be able to explain why these vastly different amounts of trajectories were chosen?
2. The work provides several ablations on varying data being used across its experiments and compares to a large set of sensible baselines, but it does not ablate the VAE component in its experiments. I would expect that directly planning in high-dimensional image-space performs worse than the proposed approach in latent space, but the work provides no direct evidence for this claim.
3. From the supplementary material, I see several differences between LDP and baseline algorithms or ablations. Would the authors be able to explain the following differences and provide fair comparisons in these tasks?
1. According to Table 5, LDP trains a larger network for the ALOHA cube task compared to other baselines (esp. DP and DP-based algorithms).
Why do you use larger networks for LDP, and could you provide comparisons to DP at the same size of policy network to ensure that LDP does not outperform the baselines in this task due to its larger networks?
2. According to Table 6, LDP trains a larger IDM model than the UniPi baselines in the Can task (5 vs 3 blocks) and trains its IDM model for longer in all tasks compared to UniPi (500k vs 200k gradient steps). Again, what is the reason for these discrepancies? For fair comparisons, I would expect models to be trained for similar amounts of steps and models to be of identical size where possible.
3. According to Appendix section A.2, LDP + Subopt is trained with 50% optimal and 50% suboptimal data. Would the authors be able to clarify what they mean? Is each update batch constructed of optimal and suboptimal data in equal proportions, or do you subsample the dataset of optimal and suboptimal demonstrations to have an equal share of both?
4. According to Appendix section A.2, LDP + Action-Free trains the main IDM model only on a single expert demonstration. Why do you not train the IDM model on all available demonstrations, as done for other algorithms?
5. According to Appendix section A.2, LDP Hierarchical uses a smaller IDM model compared to LDP. This renders the comparison in Table 4 unfair and no longer convincing for stating that dense forecasting is an important contribution and part of LDP.

Supplementary Material: I reviewed all of the supplementary material.

Relation To Broader Scientific Literature: The work does a good job at relating to alternative diffusion-based and IDM-based imitation learning algorithms. In particular, it refers to important literature that leverages action-free or suboptimal data but clarifies the distinctions to LDP.

Essential References Not Discussed: I am not aware of any essential references that require further discussion.
Other Strengths And Weaknesses: I would like to commend the authors on an overall well-constructed and clearly motivated evaluation. In particular, all the baselines serve specific purposes and help identify the importance of different components of LDP.

Other Comments Or Suggestions: No other comments or suggestions.

Questions For Authors:
1. The supplementary material (A.2, Tables 5 and 6) reveals several inconsistencies across LDP and its baselines or ablations in several experiments. I would expect the authors to correct these inconsistencies and/or provide convincing justifications for them. Otherwise, it is unclear whether the empirical findings are due to the proposed algorithms or differences in hyperparameters. (see 3. in Experimental Design and Analyses for more details) I otherwise consider this a strong paper and will increase my score if the authors are able to provide convincing justification or corrections for these inconsistencies.
2. Would the authors be able to explain the varying amounts of trajectories used to train LDP and baseline algorithms across tasks? (see Experimental Design and Analyses for more details)
3. Would the authors be able to provide an ablation with LDP's planner and IDM directly operating on images rather than the VAE latent space? I would expect such an ablation to perform worse, but the work currently provides no evidence for such claims.

**The score has been updated in response to the author rebuttal**

Code Of Conduct: Affirmed.

Overall Recommendation: 4
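The equal-proportion batching asked about in question 3.3 of this review (each update batch built from optimal and suboptimal data in equal shares) could be implemented as in the sketch below. The dataset sizes, batch size, and function name are invented for illustration; this is one plausible implementation, not necessarily the authors'.

```python
import numpy as np

# Sketch: build each update batch from 50% optimal and 50% suboptimal samples.
rng = np.random.default_rng(0)
optimal = np.arange(100)      # indices into a hypothetical optimal-demo dataset
suboptimal = np.arange(500)   # indices into a hypothetical suboptimal dataset
BATCH_SIZE = 32

def sample_mixed_batch():
    half = BATCH_SIZE // 2
    opt_idx = rng.choice(optimal, size=half, replace=False)
    sub_idx = rng.choice(suboptimal, size=half, replace=False)
    # each update batch is half optimal, half suboptimal
    return opt_idx, sub_idx

opt_idx, sub_idx = sample_mixed_batch()
assert len(opt_idx) == len(sub_idx) == BATCH_SIZE // 2
```

Per-batch mixing (as opposed to subsampling the datasets once up front) keeps the 50/50 ratio exact in every gradient step while still touching all of the larger suboptimal dataset over training.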
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you for your detailed feedback on our project. We are happy that you found the paper well-presented with clear experimental evaluations. Please let us know whether there are any other concerns you have that prevent you from increasing your score.

**Q1: Would the authors be able to explain why these vastly different amounts of trajectories were chosen?**

For different robotics tasks, we tried to keep most of the parameters the same: 100 action-free trajectories for Robomimic tasks and 500 suboptimal trajectories for all tasks.

\# of demonstrations: Robomimic includes 200 proficient demonstrations per task, and ALOHA includes 50 demonstrations per task. To showcase the effectiveness of our method, we always take **half** the number of demonstrations -- 100 for Robomimic tasks, and 25 for ALOHA tasks. This is a consistent protocol we use to select the number of demonstrations.

\# of action-free trajectories: we chose to use the remaining **half** of the demonstrations as action-free trajectories.

Lift is a very simple task, so we reduce the number of demonstrations to 3 to increase the complexity.

**Q2: Ablating the VAE Component**

To address “directly planning in high-dimensional image-space,” we include the UniPi baseline, which learns a video prediction planner. UniPi-CL, in particular, mirrors LDP’s planner, in that UniPi-CL forecasts over dense states instead of subgoals. Let us know if that addresses your concern!

**Q3.1 DP vs. LDP on ALOHA Cube**

Specifically for ALOHA Cube, we found that a larger planner was imperative for reasonable performance. We did not find this necessary for DP, and thus, we kept the smaller DP architecture. We have updated results where DP is trained with the same architecture size as all LDP variants (down_dims = [512, 1024, 2048]). We use batch_size=16 for DP, since the end-to-end training of the encoder requires much more GPU memory than LDP, which uses frozen embeddings.
We couldn't run DP with a larger batch size or down_dims.

| DP | LDP | LDP + Action-Free | LDP + Subopt | LDP + Action-Free + Subopt |
|--------------|--------------|-------------------|--------------|----------------------------|
| 0.32 +- 0.00 | 0.64 +- 0.04 | 0.70 +- 0.02 | 0.71 +- 0.03 | 0.97 +- 0.01 |

**Q3.2 LDP vs. UniPi: IDM size for Can, and IDM train time**

We observed a bigger IDM is helpful for the Can task, likely because the scene is slightly more visually complex. As requested, we have now rerun Can experiments with UniPi to ensure the same IDM architecture. We train the IDM for longer in LDP because we use a smaller batch size. However, for consistency, we have rerun the UniPi experiments with the same batch size and number of gradient steps for the LDP and UniPi IDMs. In addition, we have retrained UniPi GCBC models with the same hyperparameters as LDP. **See Reviewer LWMg Q2.**

**Q3.3 LDP 50% optimal and 50% suboptimal batches**

Each batch consists of 50% optimal and 50% suboptimal trajectories.

**Q3.4 IDM one expert demonstration**

We meant to write, “The IDM is trained only **ON** expert demonstrations.” The IDM is only trained on action-labelled data, which, in this case, come from the expert demonstrations.

**Q3.5 LDP Hierarchical IDM**

(Appendix A.2) Hierarchical LDP’s IDM is a Conditional U-Net with down-dims [256, 512], which has 1.67e7 parameters. (Appendix A.1) Non-Hierarchical LDP uses an MLP ResNet based off of IDQL, which has 1.79e6 parameters. The Hierarchical LDP IDM has more parameters, while still underperforming our method.

**Q4: Pretrained Encoders**

LDP uses a task-specific VAE, because existing image encoders or VAEs (e.g. DINOv2, R3M, Stable Diffusion VAE) produce high-dimensional embeddings. Our VAE produces a much more compact latent space (16-dimensional), enabling much faster training and inference, as well as interpretability through decoding image latents.
To address this suggestion, we swap out the VAE embeddings for frozen DINOv2 embeddings. However, we found that directly planning over the DINOv2 embeddings does not lead to good learned behaviors (0% success), which we hypothesize is due to the challenge of planning over a large latent space (384-dim). Thus, as an alternative, we fix a random projection matrix that reduces the 384-dim feature space to 16 dims, matching LDP. We chose this because it is a straightforward way of using frozen embeddings. It may be possible to plan over the large embedding space with a much larger and more complex model, or to learn alternative ways of projecting DINOv2 embeddings to a lower-dimensional space, but this may lead to fundamental changes in our method, so we don’t explore more complicated approaches to using pretrained embeddings.

| Method | Lift | Can | Square |
|--------|--------------|--------------|--------------|
| DINOv2 | 0.44 +- 0.24 | 0.03 +- 0.01 | 0.01 +- 0.01 |

---

Rebuttal Comment 1.1: Comment: I thank the authors for their response and clarifications. They help greatly in increasing my confidence in the submission. In particular, my primary concerns about inconsistencies in the evaluation have been addressed, and therefore I have decided to increase my score to **accept**. The evaluation protocol and clarifications about baselines / ablations are very helpful, and I hope the authors will be able to incorporate these into their work (or at least the appendix). I also believe that their investigation of DINOv2 embeddings is very interesting and adds further insights, so I would suggest including it in the appendix with a small note in the main paper.

One outstanding comment re Q2: I believe there are sufficient differences between LDP and the UniPi baseline to justify adding such a separate ablation of the image encoder VAE.
I agree with your intuition that the results of UniPi (and prior work) suggest that directly learning in image space is expected to fare much worse, and thus I don't see this as essential. Nevertheless, it would be a nice comparison point to bring this point home in a convincing manner.

---

Reply to Comment 1.1.1: Comment: Thank you for taking our further results and ablations into consideration and increasing your score! We plan on incorporating writing suggestions from reviewers and our updated results into the paper. We appreciate your feedback and are open to any other comments or suggestions that can help improve our draft. We agree the ablation could still provide further insight into the method. One of our main challenges is computational constraints, and hence we focused on UniPi as an image-planning baseline. We may try to include this ablation, in addition to our rebuttal experiments, in an updated draft.
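As an aside, the fixed random projection discussed in this thread can be sketched as follows. This is a minimal illustration, not the authors' implementation: only the 384-to-16 dimensionality reduction is taken from the rebuttal, while the seed, scaling, and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed, so the projection stays frozen across runs
# Dimensions from the rebuttal: DINOv2 features are 384-dim, projected to a
# 16-dim latent matching LDP's VAE latent size. Everything else is assumed.
proj = rng.standard_normal((384, 16)) / np.sqrt(16.0)

def project(features: np.ndarray) -> np.ndarray:
    """Map a batch of (N, 384) frozen DINOv2 embeddings to (N, 16) planning latents."""
    return features @ proj

batch = rng.standard_normal((8, 384))  # e.g., embeddings of 8 frames
latents = project(batch)
print(latents.shape)  # (8, 16)
```

Because the matrix is fixed rather than learned, the projection is deterministic and requires no training, which matches the rebuttal's stated goal of a straightforward way to reuse frozen embeddings.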
Summary: This paper presents Latent Diffusion Planning (LDP), an algorithm aimed at performing imitation learning in the presence of additional suboptimal and action-free demonstrations.

----

Problem Setting and Key Assumptions:
- Vision-based imitation learning for tabletop manipulation
- Aside from expert demos, assume access to (expert) action-free demos and suboptimal/failed data

----

The main algorithm consists of three components:
- A VAE that learns a low-dimensional embedding space for the visual observations
- A diffusion-based planner that performs forward prediction in the frozen VAE embedding space
- A diffusion-based inverse dynamics model that takes adjacent VAE latents and predicts robot actions.

This design allows the method to learn from the three types of data mentioned earlier. Specifically, the VAE is trained to encode/decode individual frames and thus can use all the data. The planner only needs the optimal sequence of latent states, so both expert and action-free demos can be used. Finally, the inverse dynamics model can be trained on all $(o, a, o')$ tuples, regardless of whether they lead to task success.

----

The authors evaluate LDP on 4 simulated tasks and a real-robot tabletop manipulation task. Because the assumption on the available data is new, they compare with Diffusion Policy and a few variants. They also compare with UniPi. Results support that in the low-data setting, LDP outperforms these relevant baselines by also utilizing the action-free demos and suboptimal data.

----

## Update after Rebuttal

I reviewed other reviewers' comments and the authors' responses. I have raised my score from 2 to 3.

Claims And Evidence: The main claim is that LDP should be data efficient (for expert data) by leveraging suboptimal data and action-free data. This is indeed verified with experiments. The main weakness is that the algorithm still seems to require many demonstrations (and more suboptimal data) for rather simple tasks.
Methods And Evaluation Criteria: Overall, the method and evaluation procedures are sound. However, the experiment results are not convincing enough because:
- The comparison with a foundation model like UniPi feels somewhat out of place, because I assume the authors train the model from scratch using only in-domain data. Fine-tuning a pre-trained model and allowing joint training on multiple datasets would be more appropriate.
- The tasks are rather simple with clean visuals.

Theoretical Claims: No theoretical claims.

Experimental Designs Or Analyses: The experiment procedure and selected baseline methods on imitation learning are fair. Regarding the real-robot experiment, the authors provide action-free demonstrations by removing actions from actual demonstrations. This is a rather awkward design choice. Instead, they should consider collecting demonstrations using a setup similar to the Universal Manipulation Interface paper. In that case, I wonder if the planner model could still predict reliably, because the data distribution of the action-free demos would differ from the expert demos.

Supplementary Material: I reviewed the project website consisting of planned (predicted) and executed visual observations.

Relation To Broader Scientific Literature: This work aims to improve the data efficiency of IL methods by utilizing data otherwise not useful. It is highly relevant to the field of LfD (Learning from Demonstrations).

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: The paper is easy to follow and includes clear visualizations of the model architecture and a comprehensive discussion of related work. The experiments section presents the research questions clearly, followed by the corresponding studies.

Other Comments Or Suggestions: N/A

Questions For Authors: For the real-robot experiment, how much time does it take to collect the demonstrations?
Recent works have shown that similar tasks can be learned from 1 hour of data, including a few demonstrations and sparse-reward RL. (Accelerating Visual Sparse-Reward Learning with Latent Nearest-Demonstration-Guided Explorations)

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you for your detailed feedback on our project. We are happy that you found the paper easy to follow and relevant to learning from demonstrations. Please let us know whether there are any other concerns that prevent you from increasing your score.

**Q1: Pretrained UniPi + Finetuning In-Domain**

The comparison with UniPi is limited, as the original UniPi model, code, and pretraining data are not released, per Appendix A.1. To provide a comparison, we use AVDC-THOR, which is the best available reproduction of UniPi. We include results on Robomimic tasks. The experiments compare how finetuning a pretrained UniPi model, instead of training a UniPi model from scratch, affects overall performance. We finetune for an additional 50k steps at learning rate 1e-4.

| Method | Lift | Can | Square |
|--------------------------|--------------|--------------|--------------|
| UniPi-OL (from scratch) | 0.09 +- 0.03 | 0.27 +- 0.01 | 0.07 +- 0.01 |
| UniPi-OL (from pretrain) | 0.13 +- 0.05 | 0.31 +- 0.01 | 0.07 +- 0.01 |
| UniPi-CL (from scratch) | 0.12 +- 0.02 | 0.30 +- 0.02 | 0.10 +- 0.04 |
| UniPi-CL (from pretrain) | 0.13 +- 0.05 | 0.23 +- 0.01 | 0.08 +- 0.02 |

| Method | Lift | Can | Square |
|----------------------------------------|--------------|--------------|--------------|
| UniPi-OL + Action-Free (from scratch) | 0.11 +- 0.03 | 0.25 +- 0.03 | 0.05 +- 0.03 |
| UniPi-OL + Action-Free (from pretrain) | 0.10 +- 0.08 | 0.26 +- 0.00 | 0.08 +- 0.02 |
| UniPi-CL + Action-Free (from scratch) | 0.14 +- 0.02 | 0.32 +- 0.04 | 0.09 +- 0.01 |
| UniPi-CL + Action-Free (from pretrain) | 0.11 +- 0.03 | 0.25 +- 0.01 | 0.10 +- 0.00 |

We observe that the pretrained AVDC/UniPi model does not substantially improve performance, likely because the pretrained model quality is limited.
For UniPi-OL, there appears to be a slight improvement from pretraining over training from scratch, possibly because, without closed-loop planning, the pretrained initialization may be helpful.

**Q2: Action-Free Demonstrations**

Thank you for this suggestion. We were unable to set up the UMI hardware within the short rebuttal period, but we agree that a more scalable and realistic way of obtaining action-free expert data is through data collection tools such as UMI (Universal Manipulation Interface).

**Q3: Collecting Demonstrations in the Real World**

The demonstrations were collected from one teleoperator. The average time for one demonstration was around 25 seconds, including resetting the cube. In total, for 82 trajectories, it would take around 34 minutes to collect all demos. For suboptimal trajectories, the teleoperator merely supervises the policy to ensure safety and does not need to actually teleoperate the robot at all. Each suboptimal trajectory takes around 75 seconds, including resets. For 85 trajectories, this would take around 1 hour 45 minutes. The total time is longer than LANE from Zhao et al., which collects limited demonstrations (requires a teleoperator) and performs RL (does not require active teleoperation). It’s possible that our method may work with fewer demonstrations (less teleoperation time) or with less suboptimal data, but we did not sweep over these numbers or optimize for these metrics.

**Q4: Additional Comments**

“The tasks are rather simple with clean visuals.” -- We agree that the tasks are not extremely complex, nor do they have complex backgrounds or visuals. However, we chose commonly used robotics benchmarks for this project and focused on showing the effect of action-free and suboptimal data.

“The main weakness is that the algorithm still seems to require many demonstrations (and more suboptimal data) for rather simple tasks.” -- We agree that this paper doesn’t operate in a few-shot demonstration setting.
However, the standard number of demonstrations provided by Robomimic is 200 per task, and by ALOHA is 50 per task. The Robomimic benchmark is saturated with 200 demos, and we chose to restrict the number of demonstrations to evaluate our method in a lower-demonstration regime. In order to learn from even fewer demonstrations on these tasks, it would likely be necessary to learn from prior data (e.g., via retrieval) or reinforcement learning, which we do not focus on.

**Q5: Improved Experiments**

Please **see Reviewer LWMg Q2**. We include updated experiments where LDP leverages both action-free and suboptimal data. While BC-like methods work well in settings with many optimal demonstrations, LDP effectively leverages additional data sources, which we believe is a fundamental advantage. The results now strongly outperform baselines, by around 30% or more. In addition, we have rerun UniPi with identical hyperparameters to LDP, per Reviewer PyaC’s suggestion.

---

Rebuttal Comment 1.1: Comment: I appreciate the authors for performing additional studies around UniPi. The new experiment results in response to reviewer LWMg are also convincing. I'm willing to increase my score to 3.

---

Reply to Comment 1.1.1: Comment: Thank you for taking our further results and ablations into consideration and increasing your score! We appreciate the feedback and are open to any other comments or suggestions that can help improve our draft.
Occult: Optimizing Collaborative Communications across Experts for Accelerated Parallel MoE Training and Inference
Accept (poster)
Summary: All-to-all communication is a major bottleneck in training and inference for mixture-of-experts (MoE) large language models. While existing MoE kernels have improved computational efficiency, all-to-all communication remains a bottleneck. The authors propose Occult, which aims to (1) reduce redundant communication and (2) encourage in-device collaboration via router updates to further optimize latency.

Claims And Evidence: Most claims are well-supported by evidence or align with well-established knowledge in the field. However, some claims lack sufficient clarity or appropriate experimental validation:
- The claim that the proposed algorithm benefits training is not adequately evaluated. (See below)
- The paper does not provide the corresponding code, making it difficult to verify reproducibility.

Methods And Evaluation Criteria: The proposed method and evaluation criteria are well-aligned with the problem at hand.

Theoretical Claims: Equation (5) states that communication complexity is lower-bounded by the number of devices needed to fill k and upper-bounded by the minimum of k and the number of devices. While this holds in terms of send/receive operations, this definition differs from the one used in Figure 2 and Section 3.1, creating inconsistency.

Experimental Designs Or Analyses: The experimental design appears insufficient to fully support the paper’s claims:
- The stated goal is to reduce all-to-all latency, which is especially problematic in multi-node multi-GPU setups. However, the evaluation only uses four GPUs connected via PCIe, which has limited bandwidth. This setup leads to a high number of experts per GPU and does not reflect realistic multi-node scenarios. While the resource constraints are understandable, without larger-scale results, the claim about latency reduction remains unconvincing. The authors should provide more results, either from experiments or projections, to support this claim.
Without new evidence, the proposed method might be more applicable to **inference**, and the paper should consider revising its scope to emphasize this contribution.

For Section 5.2:
- The comparison is misleading: fine-tuning naturally improves model performance, so a fine-tuned model outperforming a baseline is expected. If the goal is to show that "expanding to two devices achieves comparable or superior quality to standard training," the baseline should also be fine-tuned for a fair comparison.

**I hope the authors can address these concerns, and I'm more than willing to update my evaluation if my concerns are addressed.**

Supplementary Material: I reviewed the appendix.

Relation To Broader Scientific Literature: The proposed optimization reduces all-to-all latency, a well-known bottleneck in MoE-based LLMs. The work contributes to this active research area.

Essential References Not Discussed: I'm not aware of any essential references that were not discussed.

Other Strengths And Weaknesses: While I think the paper does provide some interesting insights on MoE, the writing of this paper has clarity issues and should be significantly improved. Sections 3 and 4 are difficult to follow and should be rewritten for clarity. Here are some examples:
- Figure 2 - This figure is very confusing. Readers unfamiliar with the MoE literature will struggle to interpret it. The term device D-0/1 appears to refer to physical devices, but it actually denotes expert assignments of tokens. The figure needs revision.
- Communication Complexity in Section 3.2 - The term "communication complexity" is misleading. A more accurate term would be "communication volume" to reflect what is actually being measured, but it seems to be different from the definition used in Figure 2.
- Figure 4 - There is no in-text reference to Figure 4. The figure itself is unclear and needs better explanation.
Other Comments Or Suggestions:
Line 218 right column: continuously -> contiguously
Line 383: device amount -> device count

Questions For Authors: Why is Hugging Face’s decoding latency lower than the proposed method when batch size is small?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank reviewer NL62 for the dedicated and professional comments. To address your concerns, we provide detailed pointwise responses below:

**[Claims and Evidence]**

We provide code at https://anonymous.4open.science/r/Occult-D802.

**[Theoretical claim 1: Communication complexity in Fig. 2]**

Thank you for the careful review. We admit there's a typo in Fig. 2(c): the blue token routed to Expert 1 and the yellow one routed to Expert 2 should be exchanged, due to our intentional modification of routing choices to ensure tokens only activate experts on the same device, thereby reducing all-to-all communication volume.

To clarify the communication complexity in Fig. 2:
- Fig. 2(a): Each token is repeated twice, yielding $C_{\mathcal{T}}$ = 2
- Fig. 2(b): Red and green tokens are repeated once while yellow and blue tokens are repeated twice, yielding $C_{\mathcal{T}}$ = 1.5
- Fig. 2(c): Each token is repeated once, yielding $C_{\mathcal{T}}$ = 1

The confusion may stem from insufficient emphasis on the top-k value. This example uses 2 devices, 4 tokens, 4 experts, and top-2 routing. According to our derivation in Section 3.2, the communication complexity is bounded by $1\leq C_{\mathcal{T}}\leq 2$, with all 3 cases in Fig. 2 falling in this interval. We will revise Fig. 2 and clearly explain this modification in Fig. 2(c) to make it more comprehensible.

**[Experiments & Analysis 1: Multi-node Training]**

Our research targets communication efficiency, so it inherently performs better than conventional expert parallelism in multi-node training, where inter-node communication is the main bottleneck. We further examine 8-way (1 node) and 16-way (2 nodes) expert-parallelized training for DeepSeek-MoE, where latency and evaluation comparisons are also visualized at https://anonymous.4open.science/r/Occult-D802.

**[Experiments & Analysis 2: Fine-tuning baseline for fair comparison]**

In Sec. 5.2 and Fig.
7, we provide the evaluation results for:
- The original model (no tuning), shown as the yellow dashed line
- Tuned models with pruning, shown as brown dots (similarity-based pruning) and green stars (router-based pruning)
- Standard tuning, shown as pink diamonds

In the caption of Fig. 7, we indicated that pruning within 4 devices is equivalent to standard tuning, since we use 4 devices for distributed training. Therefore, the effect of pruning can be derived by comparing the brown dots & green stars with the pink diamonds. The yellow line serves as a reference.

**[Strengths and weaknesses 1: Fig. 2]**

Thank you for the feedback. We will improve the caption to clarify the meaning of $D_i$, $D_i^j$, and $E_i$, and add explicit in-text references to better guide readers through this illustration of different communication strategies.

**[Strengths and weaknesses 2: Communication complexity]**

We use communication complexity to indicate the ratio of all-to-all communication volume to the number of tokens, approximated by the average token replication count. This approach addresses two key challenges:
- During all-to-all communication, a token replica may either be retained locally or transmitted to other devices based on its routing choice, posing uncertainty in precisely measuring inter-device communication volume across different tasks.
- To establish a more stable metric, we use the average token replication count, which is also the least upper bound on the ratio of inter-device communication volume to token count, minimizing the impact of dynamic routing uncertainties.

We appreciate you highlighting this concern and will expand these explanations in Section 3.2 to provide better clarity.

**[Strengths and weaknesses 3: Figure 4]**

We apologize for the missing in-text reference to Fig. 4.
We will add detailed descriptions, including:
- $\texttt{SFD}$ tokens serve as the all-to-all content, constructed from $\texttt{ORI}$ tokens based on $BRIM_0$
- $\texttt{FFN}$ utilizes $BRIM_1$ as an auxiliary input to guide the token layout of the output tensor
- Intermediate tokens are organized densely in the $\texttt{EPD}$ state

We hope these help readers better understand the token state transitions and the role of the $BRIM$s in our framework.

**[Other comments]**

Thanks for pointing these out. We will fix these typos carefully in our revised draft.

**[Question: Hugging Face Latency]**

Prefilling latency is the primary bottleneck when generating a small number of tokens. Hugging Face’s native API employs pipeline parallelism (PP), which outperforms expert parallelism (EP) under the following conditions:
- A limited number of GPUs (our experiments use only 4 GPUs)
- Deep model architectures (the three evaluated models contain 16, 24, and 28 layers, respectively)

Under these conditions, the communication volume of EP in the prefilling stage is much larger than that of PP. However, the decoding speed of EP is much faster, benefiting from the efficient MoE computation.

Thank you again for your comments. We will incorporate these refinements in our revised draft.

---

Rebuttal Comment 1.1: Comment: I appreciate the authors' effort to open-source their code as well as providing additional evaluation results. I suppose the number of GPUs mentioned in the README file "Occult (Prune, 1 GPU)" actually refers to the pruning count $N_d$? If that's the case, perhaps the authors may want to provide additional explanation in its caption, as they did in Figure 8. I have increased my rating to reflect my latest evaluation of this paper. I hope the authors can further improve the readability of this manuscript should it be accepted.

---

Reply to Comment 1.1.1: Comment: We sincerely appreciate reviewer NL62 for increasing the rating.
You're right: the number of GPUs in the caption "Occult (Prune, 1 GPU)" refers exactly to the pruning count $N_d$. In this case, we prune the expert collaborations for each token so that an individual token only activates experts within 1 GPU. We have updated the code repository and replaced the table with 2 figures, similar to Figure 11 in our manuscript. We also provide detailed explanations in the captions of the figures and tables in the README file. In our revised version, we will expand Figure 11 into 3 sub-figures and enrich its caption with these additional explanations, just like Figure 8. Thank you for your detailed suggestions.
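For readers following this thread, the communication-complexity metric discussed above (the average token replication count, i.e., the average number of devices spanned by each token's top-k experts) can be sketched as follows. This is a hedged illustration: the placement and routing choices below are hypothetical examples matching the described Fig. 2 setup (2 devices, 4 tokens, 4 experts, top-2 routing), not code from the paper.

```python
def comm_complexity(routes, expert_to_device):
    """Average number of distinct devices spanned by each token's top-k experts,
    i.e., the average token replication count used as communication complexity."""
    return sum(len({expert_to_device[e] for e in topk}) for topk in routes) / len(routes)

# 4 experts on 2 devices: experts 0 and 1 on device 0; experts 2 and 3 on device 1.
placement = {0: 0, 1: 0, 2: 1, 3: 1}

# Fig. 2(a)-style routing: every token's top-2 experts sit on different devices.
worst = [(0, 2), (1, 3), (0, 3), (1, 2)]
# Fig. 2(b)-style routing: two tokens stay local, two tokens span both devices.
mixed = [(0, 1), (2, 3), (0, 2), (1, 3)]
# Fig. 2(c)-style routing: every token's top-2 experts share one device.
best = [(0, 1), (2, 3), (0, 1), (2, 3)]

print(comm_complexity(worst, placement))  # 2.0 (upper bound: min(k, #devices))
print(comm_complexity(mixed, placement))  # 1.5
print(comm_complexity(best, placement))   # 1.0 (lower bound: k / experts-per-device)
```

The three cases reproduce the $C_{\mathcal{T}}$ values of 2, 1.5, and 1 given in the rebuttal, and any routing under this placement stays within the stated bounds $1\leq C_{\mathcal{T}}\leq 2$.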
Summary: The paper introduces Occult, an algorithm-system co-design approach to optimize collaborative communication in MoE models for large-scale training and inference. The key idea is to reduce inter-device communication costs by maximizing intra-device expert collaboration, using expert placement rescheduling and collaboration pruning strategies. The paper shows that Occult achieves over 50% speedup across various MoE-based LLMs while maintaining or even improving model quality. The authors provide theoretical justifications, empirical validation, and extensive benchmarking against state-of-the-art frameworks like DeepSpeed, Tutel, and MegaBlocks.

Claims And Evidence: The paper claims that optimizing collaborative communication via expert placement rescheduling and collaboration pruning can significantly reduce all-to-all communication costs in MoE models, leading to faster training and inference with minimal quality degradation.

**Support:**
(a) Theoretical derivations: The authors introduce a collaboration graph-based formulation to quantify inter- and intra-collaboration.
(b) Algorithm side: The expert placement rescheduling algorithm is validated through profiling experiments showing up to 20% reduction in communication budget.
(c) Empirical evaluation: Occult achieves up to 8.66× speedup for inference and up to 10× faster training over baseline MoE frameworks.

**Potential concerns:**
(a) The impact of aggressive collaboration pruning on model quality could be better explored for different task types.
(b) No direct discussion on scalability beyond 4 GPUs—it would be useful to see how Occult performs on larger clusters and models.

Methods And Evaluation Criteria: The proposed methods make sense for optimizing communication bottlenecks in MoE training and inference. Evaluation criteria are appropriate, leveraging:
(a) Latency benchmarks for training and inference (Figures 9–11).
(b) Accuracy and task performance metrics across multiple NLP benchmarks.
The concern is that the paper does not explicitly evaluate multi-node scaling performance, and the evaluated models are relatively small-scale.

Theoretical Claims: The paper does not introduce new mathematical proofs, but it provides well-founded theoretical insights, including quantification of communication cost and its relationship with intra- and inter-collaboration, and the optimization bounds for communication overhead in MoE expert parallelism.

Experimental Designs Or Analyses: The experimental setup is generally robust, with:
(a) Multiple MoE architectures (OLMoE, DeepSeek-MoE, Qwen-MoE)
(b) Comparison against strong baselines (DeepSpeed, Tutel, MegaBlocks)
(c) Latency benchmarks covering different workloads (training, inference, decoding)

Limitations or improvements:
(a) Limited scalability analysis beyond 4 GPUs.
(b) The selection of expert placement rescheduling is heuristic—would an end-to-end learned approach perform better?

Supplementary Material: I have reviewed the supplementary material, which contains algorithm pseudocode, additional benchmarks, and implementation details.

Relation To Broader Scientific Literature: The paper is well-grounded in prior work, citing relevant MoE frameworks (DeepSpeed-MoE, Tutel, MegaBlocks). The connection to general parallel computing strategies could be stronger—Occult shares similarities with load balancing and scheduling techniques in distributed systems.

Essential References Not Discussed: There are no specific references currently on my mind that have not been discussed.

Other Strengths And Weaknesses: Pros:
1. The paper addresses a key limitation in MoE scalability. Communication is a well-known challenge in distributed MoE training, and Occult provides a practical, well-validated optimization.
2. The paper presents strong empirical results; a 50%+ speedup in multiple MoE workloads is a compelling result.
3.
A lightweight, heuristic expert placement rescheduling method was proposed that provides significant efficiency gains.

Cons:
1. Limited discussion on multi-node scalability. The paper mainly explores single-node, multi-GPU setups, leaving open questions about large-cluster scaling.
2. Trade-offs in collaboration pruning are not well studied. More ablation studies would help clarify when pruning impacts accuracy.

Other Comments Or Suggestions:
1. What would be required to scale up Occult when moving beyond single-node multi-GPU setups?
2. Instead of heuristics, could an RL-based expert assignment policy be more effective?

Questions For Authors: Please check "Other Comments Or Suggestions"

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank Reviewer 1GwW for recognizing that "the paper addresses a key limitation in MoE scalability", that the experiments are "generally robust", and that "Occult provides a practical, well-validated optimization." To address your questions, we provide pointwise responses below.

**[Potential concerns 1: Different task types]**

We've evaluated performance on 23 benchmarks with different tuning strategies in Tab. 3, 4, and 5 on pages 16, 17, and 18, covering natural language understanding, commonsense reasoning, and math. In addition, we conduct experiments on the HumanEval (coding) and GSM8K (math) datasets for DeepSeek-MoE. The results below consistently demonstrate the effectiveness of our efficient pruning methods:

HumanEval:

|Method|No Tune|Prune, 1 GPU|Prune, 2 GPUs|No Prune|
|-|-|-|-|-|
|Router-based|26.83|17.68|**27.44**|22.56|
|Sim-based|26.83|17.07|26.22|22.56|

GSM8K (flexible-extract):

|Method|No Tune|Prune, 1 GPU|Prune, 2 GPUs|No Prune|
|-|-|-|-|-|
|Router-based|16.91|7.28|15.31|**17.97**|
|Sim-based|16.91|11.22|16.00|**17.97**|

**[Potential concerns 2: Scalability]**

Occult is especially well-suited to modern fine-grained MoE LLMs such as DeepSeek-MoE. Unfortunately, models larger than 16B (such as DeepSeek-V2) are beyond our capacity to train. We conduct additional experiments on 8 GPUs and 2 x 8 GPUs (two nodes) to demonstrate the scalability of our method, using DeepSeek-MoE with 8- and 16-way expert parallelism and batch size 32 per GPU:

Avg training latency per step (s):

|Setting|Occult (1 GPU)|Occult (2 GPUs)|Occult (3 GPUs)|Occult (4 GPUs)|MegaBlocks|
|-|-|-|-|-|-|
|8 GPUs|8.50|9.31|10.95|11.92|16.56|
|16 GPUs||9.55|10.25|10.93|14.97|

Occult's acceleration is more apparent with a better-grouped expert placement; therefore, Occult improves training efficiency more with 8-way expert parallelism.
We also evaluate 8- and 16-way EP on MMLU and MathQA:

8-way EP:

|Task|Strategy|No Tune|Prune within 1 GPU|Prune within 2 GPUs|Prune within 3 GPUs|Prune within 4 GPUs|Prune within 5 GPUs|No Prune|
|-|-|-|-|-|-|-|-|-|
|MMLU|Router-based|37.95|35.04|40.41|41.34|41.43|41.19|38.66|
|MMLU|Sim-based|37.95|33.68|39.80|**41.74**|41.40|41.48|38.66|
|MathQA|Router-based|31.19|32.93|35.08|34.97|35.95|**36.08**|33.77|
|MathQA|Sim-based|31.19|33.17|34.94|35.51|35.24|35.61|33.77|

16-way EP:

|Task|Strategy|No Tune|Prune within 2 GPUs|Prune within 3 GPUs|Prune within 4 GPUs|Prune within 5 GPUs|No Prune|
|-|-|-|-|-|-|-|-|
|MMLU|Router-based|37.95|39.69|40.37|41.23|**41.62**|38.66|
|MMLU|Sim-based|37.95|39.23|40.25|41.31|41.61|38.66|
|MathQA|Router-based|31.19|35.61|35.14|35.21|**35.78**|33.77|
|MathQA|Sim-based|31.19|34.84|35.21|35.68|35.71|33.77|

**[Strengths and Weaknesses 2: Collaboration pruning trade-offs]**

We've analyzed the impact of different pruning settings on both performance and efficiency in Fig. 7, 8, 10, and 11. A more comprehensive accuracy analysis is provided in Tab. 3, 4, and 5 on pages 16, 17, and 18. Additionally, we conduct an extra ablation study to analyze the impact of pruning on efficiency with DeepSeek, using 8-way EP:

||MegaBlocks|Occult (w/o Prune)|Occult (Prune, 4 GPUs)|Occult (Prune, 3 GPUs)|Occult (Prune, 2 GPUs)|Occult (Prune, 1 GPU)|
|-|-|-|-|-|-|-|
|Memory (GB)|43.12|40.68|36.93|34.46|32.18|30.73|
|Avg Latency per Step (s)|8.36|6.43|6.02|5.54|4.44|3.93|

**[Comments 1: To multi-node setting]**

Thanks for the practical question. We have conducted extra experiments in multi-node settings, as shown in [Potential concerns 2: Scalability], which demonstrate the scalability of our method.
To scale to a very large cluster, a combined parallel strategy for the MoE layer is required to adapt to the hardware resources, including:
- Data parallelism to replicate the basic unit of expert parallelism (e.g., 16 GPUs in 16-way EP), aiming to avoid heavy communication overhead across nodes (e.g., replicating the parameters of each MoE layer 4 times in a 64-GPU cluster)
- Expert parallelism organized across different nodes, placing the experts grouped by Occult on the same node
- Tensor parallelism on intra-node GPUs for very large experts, since intra-node bandwidth is usually abundant

**[Comments 2: RL-based expert assignment]**

End-to-end rescheduling may be impractical, as dynamically tuning expert placement requires substantial additional bandwidth, although it can be asynchronous. While our current heuristic approach yields strong results, we believe our expert assignment algorithms can be further improved for enhanced performance. RL-based methods can learn collaborative communication patterns from the profiling dataset, which could benefit the expert assignment task and improve generalization. We have not tried such methods yet, but they are promising for our future research.

Thank you again for your questions. We will incorporate the refinements in our revised draft.
Summary: In this paper, the author proposes Occult, an MoE training and inference framework designed to reduce communication costs by effectively managing intra- and inter-collaboration among experts. The evaluation results demonstrate that the proposed method achieves significant speedup compared to the state-of-the-art MoE framework. Claims And Evidence: Overall, the evidence is sufficient to support the claims. Methods And Evaluation Criteria: Overall, the baselines and datasets are appropriate. Theoretical Claims: Overall, the theoretical claims are correct. Experimental Designs Or Analyses: 1. Model configurations, such as the number of experts, are critical but not provided. 2. It appears that different pruning strategies significantly affect performance (e.g., similarity-based and router-based). However, there is no empirical or theoretical analysis to determine which strategy should be utilized under specific scenarios. 3. Regarding Figures 9 and 10, presenting throughput (tokens/s), TTFT, and TPOT under varying batch sizes and sequence lengths may be more common, useful, and informative for evaluating performance. Supplementary Material: I have reviewed all sections in the Appendix. Relation To Broader Scientific Literature: This work enhances the training and inference efficiency of MoE models, which is crucial for future LLM deployment. Essential References Not Discussed: The references provided are appropriate and sufficient. Other Strengths And Weaknesses: 1. There is no discussion on the pruning cost or the impact of bandwidth. 2. The performance may heavily depend on the number of GPUs. Since the evaluations are conducted with 4 GPUs, there is a greater opportunity to increase intra-collaboration. However, for larger models trained with more GPUs, such as 1,000 GPUs, it will be more challenging to place experts on the same device, or model performance may be compromised to achieve this. 
Other Comments Or Suggestions: The notation "0.66x faster" and "0.55x speedup" seems to indicate a slowdown rather than a speedup. Maybe it should be 1.66x faster and 1.55x speedup. Questions For Authors: 1. What is the cost of pruning? Does it require more time to converge? 2. What is the impact of bandwidth? It seems that the improvement is more significant in a low-bandwidth system. What would the improvement be in a high-bandwidth system? 3. DeepSeek recently released DeepEP, a communication library tailored for MoE and EP. Is your optimization orthogonal to DeepEP? Would your method benefit from DeepEP or achieve even better performance if integrated with it? I believe this is a parallel work, so the discussion is not required but welcome. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank reviewer geV9 for recognizing that our approach "enhances the training and inference efficiency of MoE models, which is crucial for future LLM deployment." To address your questions, we provide pointwise responses below.

**[Experiments & Analysis 1: Model configuration]** Thanks for the constructive feedback. We've added a supplementary table detailing model configurations:

|Model|# Params|# Experts|Top-K|# MoE layers|Hidden size|Expert intermediate size|
|-|-|-|-|-|-|-|
|OLMoE|6.92B|64|8|16|2048|1024|
|Qwen1.5-MoE|14.3B|60|4|24|2048|1408|
|DeepSeek-MoE|16.4B|64|6|27|2048|1408|

**[Experiments & Analysis 2: Pruning strategies]** The performance differences between router- and similarity-based pruning arise from how they handle expert replacement:
- Router-based pruning uses only routing scores, which may not reflect expert attributes
- Similarity-based pruning considers inter-expert similarity, obtained from a profiling dataset

As shown in Figure 7, similarity-based pruning achieves comparable or better performance than router-based approaches across most models and tasks. For example, in the OLMoE results on RTE, similarity-based pruning (shown as "Pruning (Similarity-based)") achieves approximately 3% higher accuracy than router-based pruning when restricting collaboration to 1 GPU. Similarity-based pruning can exploit prior knowledge from profiling datasets, which can benefit downstream tasks with a similar domain distribution. However, it may falter on out-of-distribution tasks, where router-based pruning may perform better.

**[Experiments & Analysis 3: throughput, TTFT, and TPOT]** Following your valuable suggestion, we will adopt “throughput (tokens/s), Time To First Token (TTFT), and Time Per Output Token (TPOT)” in Fig.
9 & 10 under varying batch sizes and sequence lengths in the formats below:

Prefilling:

|Model|Method|# Tokens|Throughput|TTFT (s)|
|-|-|-|-|-|
|OLMoE|Occult (Prune, 2 GPUs)|16384|1777.01|9.22|

Decoding:

|Model|Method|# Generated Tokens|Throughput|TPOT (ms)|
|-|-|-|-|-|
|OLMoE|Occult (Prune, 2 GPUs)|512|682.67|1.46|

**[Strengths And Weaknesses 1: Cost of pruning]** Pruning does not require more time to converge because:
- All results in Fig. 7 are reported after 1 epoch of training on the Alpaca dataset; on some tasks pruning can even outperform standard SFT.
- The average latency per training step can be greatly reduced with Occult. We report the latency and memory cost of pruning for DeepSeek-MoE here, with batch size 16 and 8 GPUs:

||Megablocks|Occult (w/o Prune)|Occult (Prune, 4 GPUs)|Occult (Prune, 3 GPUs)|Occult (Prune, 2 GPUs)|Occult (Prune, 1 GPU)|
|-|-|-|-|-|-|-|
|Memory (GB)|43.12|40.68|36.93|34.46|32.18|30.73|
|Avg Latency per Step (s)|8.36|6.43|6.02|5.54|4.44|3.93|

Our pruning algorithms reduce wall-clock latency and memory costs for training.

**[Strengths And Weaknesses 2: Bandwidth impact]** We conducted additional experiments with varying interconnect bandwidths on different machines. Occult's acceleration is more significant with lower bandwidth. We use DeepSeek-MoE with 8-way EP and batch size 32:

|Bandwidth|Speedup (Occult vs. Megablocks, w/o Prune)|Speedup (Occult vs. Megablocks, Prune, 2 GPUs)|Speedup (Occult vs. Megablocks, Prune, 1 GPU)|
|-|-|-|-|
|18GB/s|1.37|1.78|1.95|
|46GB/s|1.12|1.43|1.76|

**[Strengths And Weaknesses 3: Scalability]** Our approach can remain effective at a very large scale with a hierarchical strategy:
- Intra-node optimization: Within each node (typically 8 GPUs connected via NVLink), we can place multiple experts on the same node to maintain high intra-collaboration within the node boundary, where communication is fast.
- Inter-node optimization: Across nodes, expert placement rescheduling & collaboration pruning can be applied to minimize cross-node communication, which is typically the latency bottleneck.

This scaling profile can be beneficial since inter-node communication is often 2-10× slower than intra-node communication, making the reduction of cross-node traffic especially valuable. In large-scale deployments, Occult's benefits should increase rather than diminish as communication overhead becomes more dominant.

**[Comments: Notation issue]** Thanks for the constructive suggestion on “faster” and “speedup”; we will fix it in the revised paper.

**[Question 3: Integration with DeepEP]** DeepEP mainly contains 2 stages for hardware-tailored MoE communication:
- Inter-node all-to-all: sending tokens to the corresponding GPU on the target node
- Intra-node all-to-all: sending tokens to the target GPUs containing the target expert

In our understanding, Occult is orthogonal to DeepEP since it optimizes all-to-all communication volume along the dimension of top-k aggregation, which can be combined with DeepEP for enhanced efficiency. Thank you again for your questions. We will include these additional experiments and analyses in our revised draft.

---

Rebuttal Comment 1.1: Comment: Thank you for your responses. I have no further questions and will keep my original score.
Summary: This paper introduces Occult, an algorithm-system co-design approach aimed at reducing the communication overhead of Mixture-of-Experts (MoE) large language models (LLMs). Specifically, the authors first propose BRIM, a data structure designed to support fundamental MoE operations efficiently. Next, they optimize expert placement based on calibration data to minimize inter-device communication. Finally, by replacing experts on remote devices with similar experts on the N_d devices, they further reduce collaborative communication overhead. Experimental results demonstrate that the proposed method significantly improves efficiency across various scenarios, including prefilling, decoding, and training, compared to existing frameworks. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: No Experimental Designs Or Analyses: Issues: - The experiments can be further improved. It is better for the authors to conduct experiments on larger MoE and MoE with fewer number of experts (e.g., Mixtral), which will demonstrate the generalization and scaling ability of the method. - According to Table 3 and the introduction section, the proposed method can achieve higher performance than the original model. However, the authors only provide explanation on why two-device pruning is better than single-device pruning in Sec 5.2. More explanation is needed regarding the former one. Supplementary Material: No Relation To Broader Scientific Literature: No Essential References Not Discussed: No Other Strengths And Weaknesses: **Strengths** 1. The motivation of the paper is clear. Collaborative communication is a major overhead in MoE computation, and optimizing it through co-design of hardware and algorithms is a reasonable approach. 2. Experiments show that the proposed method achieves faster speed and higher performance compared to baseline methods. 
**Weakness** Please refer to "Experimental Designs Or Analyses" Other Comments Or Suggestions: No Questions For Authors: The authors provides a new framework for all-to-all communication in Sec 4.1. What's the difference between this framework and existing method for MoE? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank tR7y for recognizing that "the motivation of the paper is clear" and that "experiments show that the proposed method achieves faster speed and higher performance compared to baseline methods." To address your questions, we provide pointwise responses below.

**[Experiments & Analysis 1: Larger MoE and fewer-expert models]** Occult is best suited to fine-grained MoE-LLMs such as DeepSeek-MoE. 16B is a common size, and larger models usually contain hundreds of billions of parameters (such as DeepSeek-V2), which is beyond our compute budget. We've conducted additional experiments on Mixtral (8 experts) to demonstrate generalizability to models with fewer experts. Limited by GPU memory, we only tune the last 2 layers with 80 (bs) $\times$ 128 (seq length) tokens on 4 GPUs (2 experts per GPU). To illustrate scalability, we also performed training with 8 GPUs on DeepSeek-MoE, showing that the communication savings scale with device count for expert parallelism (EP). We tune all the MoE layers with 32 (bs) $\times$ 128 (seq length) tokens.
|Model|# Experts|# Devices for EP|Method|Avg Latency Per Step (s)|Speedup|
|-|-|-|-|-|-|
|Mixtral-8x7B|8|4|Megablocks|9.64|1.0|
|Mixtral-8x7B|8|4|Occult, w/o prune|9.08|1.06|
|Mixtral-8x7B|8|4|Occult, prune within 1 GPU|8.34|1.13|
|DeepSeek-MoE|64|8|Megablocks|16.56|1.0|
|DeepSeek-MoE|64|8|Occult, w/o prune|12.10|1.37|
|DeepSeek-MoE|64|8|Occult, prune within 3 GPUs|10.95|1.51|
|DeepSeek-MoE|64|8|Occult, prune within 2 GPUs|9.31|1.78|
|DeepSeek-MoE|64|8|Occult, prune within 1 GPU|8.50|1.95|

We also provide the evaluation results for 8-way expert parallelized DeepSeek-MoE with Occult to demonstrate its effectiveness:

|Task|Strategy|No Tune|Prune within 1 GPU|Prune within 2 GPUs|Prune within 3 GPUs|Prune within 4 GPUs|Prune within 5 GPUs|No Prune|
|-|-|-|-|-|-|-|-|-|
|MMLU|Router-based|37.95|35.04|40.41|41.34|41.43|41.19|38.66|
|MMLU|Sim-based|37.95|33.68|39.80|**41.74**|41.40|41.48|38.66|
||
|OpenBookQA|Router-based|32.20|33.8|36.2|37.2|**37.8**|37.2|34.20|
|OpenBookQA|Sim-based|32.20|33.4|36.4|36.8|**37.8**|37.2|34.20|
||
|MathQA|Router-based|31.19|32.93|35.08|34.97|35.95|**36.08**|33.77|
|MathQA|Sim-based|31.19|33.17|34.94|35.51|35.24|35.61|33.77|

**[Experiments & Analysis 2: Performance improvement explanation]** The motivation for our proposed pruning methods is to replace some of the original routed experts with carefully assigned alternatives based on certain rules, so that the modified routing choices are concentrated on fewer GPUs rather than dispersed, thereby reducing the all-to-all communication overhead. Visualizations in Fig. 8 demonstrate that two-device pruning preserves more essential collaboration patterns found in the original model, while single-device pruning might lose important expert correlations. Our router- and similarity-based pruning approaches are capable of maintaining the most critical expert collaborations, helping the model retain or even enhance its capabilities as a kind of regularization [1, 2].
This targeted preservation of collaboration patterns explains the performance improvements shown in Tab. 3 and Fig. 7.

**[Question: Difference with existing method for MoE]** Our framework differs from existing MoE libraries in several fundamental ways, making it more communication-efficient:
- **Novel data structure for token management**: As described in Sec. 4.1, we introduce the "Bidirectional Re-Index Matrix ($BRIM$), a novel data structure for unified data management," which efficiently tracks token states across different processing stages. Unlike existing methods that use general-purpose data structures, $BRIM$ is specially designed for the MoE workflow with optimized memory access patterns.
- **State-based token representation**: As stated in lines 206-216, "we outline them as 3 states: Original ($\texttt{ORI}$)... Simplified ($\texttt{SFD}$)... Expanded ($\texttt{EPD}$)," ensuring that "the token counts across states follow $\texttt{ORI}<\texttt{SFD}<\texttt{EPD}$." This contrasts with existing methods that simply replicate each token $k$ times, regardless of collaboration patterns.
- **Two-stage aggregation**: As described in lines 199-203, we implement "summing the intra-collaboration results before all-to-all, and summing the inter-collaboration results after all-to-all, for each token $x$." This two-stage token aggregation enables symmetric, efficient all-to-all communication for both *dispatch* and *combine* operations in the MoE pipeline.

Thank you again for your questions. We will emphasize these distinctions in our revised draft.

[1] Giles, C.L. and Omlin, C.W., 1994. Pruning recurrent neural networks for improved generalization performance. IEEE Transactions on Neural Networks, 5(5), pp. 848-851.

[2] Han, S., Mao, H. and Dally, W.J., 2015. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149.
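The two-stage aggregation quoted in the rebuttal above can be illustrated with a small sketch. This is a toy model with made-up values, not the actual Occult kernels: expert outputs that live on the same device are pre-summed locally, so the combine all-to-all carries one message per (token, device) pair instead of one per routed expert.

```python
# Toy illustration of two-stage aggregation for a single token (hypothetical
# helper; scalars stand in for expert output vectors, weights already applied).

def combine(expert_outputs, expert_to_device):
    """expert_outputs: {expert_id: weighted output}; returns (sum, msg counts)."""
    # Naive combine: one message per routed expert.
    naive_msgs = len(expert_outputs)
    # Two-stage combine: pre-sum per device, then one message per device.
    per_device = {}
    for expert, out in expert_outputs.items():
        dev = expert_to_device[expert]
        per_device[dev] = per_device.get(dev, 0.0) + out
    staged_msgs = len(per_device)
    return sum(per_device.values()), naive_msgs, staged_msgs

expert_to_device = {0: 0, 1: 0, 2: 1, 3: 1}    # 2 experts per GPU
outputs = {0: 0.5, 1: 0.25, 2: 0.75, 3: 1.0}   # top-4 routed experts
total, naive, staged = combine(outputs, expert_to_device)
# result unchanged (2.5) while messages drop from 4 to 2
```

The aggregated result is identical to the naive per-expert sum; only the communication volume changes, which is the point of the technique.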
Hierarchical Planning for Complex Tasks with Knowledge Graph-RAG and Symbolic Verification
Accept (poster)
Summary: This paper introduces HVR, a neuro-symbolic approach that enhances LLM-based planning by integrating hierarchical planning, retrieval-augmented generation (RAG) over knowledge graphs, and symbolic verification. The proposed method tackles long-horizon and complex task planning by decomposing tasks into macro actions and atomic actions, refining them through symbolic verification to ensure feasibility before execution. The Knowledge Graph-RAG (KG-RAG) component provides structured knowledge retrieval, improving accuracy while reducing hallucinations. The Symbolic Validator plays a dual role: first, it simulates plans in an "ideal world" for verification before execution; second, it functions as a failure detector, aligning expected world states with real-time observations. The system also builds a macro-action library, enabling knowledge transfer across agents by storing reusable action sequences. HVR is evaluated in AI2Thor, a kitchen-based robotic simulator, across 12 tasks of varying complexity, demonstrating significant performance improvements over baselines. Results show that RAG is crucial for smaller LLMs, while hierarchical planning is more impactful for larger models. Symbolic verification consistently enhances plan correctness, but LLMs still tend to generate unnecessarily long plans. The study highlights that LLM-based planners perform well on goal-oriented tasks but struggle with open-ended objectives, emphasizing the need for better failure detection and plan optimization techniques. Claims And Evidence: I think most of the claims are well supported, for example: 1. HVR improves task success rates through hierarchical planning, KG-RAG retrieval, and symbolic verification - The paper evaluates HVR on AI2Thor, showing higher task success rates than baselines. - The ablation study confirms that hierarchical planning, RAG, and symbolic verification each contribute to performance gains. 
- Symbolic verification improves plan feasibility, and RAG reduces hallucinations, validating these components. 2. Symbolic verification consistently enhances plan correctness - The Symbolic Validator verifies plans before execution, preventing physically impossible or illogical actions. - Experiments show that removing the validator leads to lower success rates, confirming its importance. - **Suggestion**: It would be better if the method were compared with alternative approaches to the consistency concern, e.g., LLM self-consistency or self-checking before plan execution. Methods And Evaluation Criteria: Mostly yes; for example, the ideas of hierarchical planning and symbolic verification are well-suited for long-horizon tasks and are applicable to real-world planning problems. The selected benchmark and metrics like success rate and execution efficiency are also crucial in real-world applications. Some suggestions: 1. The work mainly focuses on comparing with variations of itself, e.g., HR, HV. But comparing with classical planning methods such as PDDL planners in the experiments is also necessary. 2. The renumbering task is good to show how the proposed method deals with a specific operation order. It'd be better if you could also show and evaluate how the method can recover from errors. The current evaluation focuses on success rates, but some evaluation of how often the symbolic validator can detect errors and adjust the plan would be helpful.
It supports long-horizon goal-oriented tasks, which are crucial for evaluating structured planning. Prior work on embodied AI (e.g., ALFRED, BabyAI) has used similar environments, ensuring comparability. 2. The paper evaluates - Task success rate – Measures whether the agent achieves the goal state. - Plan efficiency – Measures action length and unnecessary steps. These metrics are crucial to long-horizon planning, as it requires both correctness (success rate) and efficiency (fewer redundant steps). Prior AI planning work (e.g., Hierarchical Task Networks (HTNs), PDDL-based planners) has used similar evaluation criteria. 3. The paper conducts ablation studies to test how much each component (hierarchical planning, KG-RAG, symbolic verification) contributes to performance. This isolates which modules are most crucial for performance improvements and helps clarify whether RAG and symbolic verification contribute independently or synergistically. Suggestions: 1. Compare HVR to traditional symbolic planning systems (e.g., PDDL). 2. Evaluate plan robustness and **error recovery** to measure adaptability. Supplementary Material: I reviewed Sections C and D to check whether they support the claims in Section 4.2. Relation To Broader Scientific Literature: HVR builds upon prior work in hierarchical task planning, retrieval-augmented generation (RAG), symbolic reasoning, and LLM-based embodied AI. It extends classical AI planning methods by incorporating LLM-based reasoning with **symbolic verification** to ensure plan feasibility before execution. Compared to existing RAG-based AI agents like AutoGPT and ReAct, HVR leverages **structured knowledge retrieval (KG-RAG)** instead of unstructured document retrieval, reducing hallucinations in task planning. It also dynamically adjusts symbolic constraints using retrieved knowledge, enhancing planning accuracy in real-world tasks.
In the context of LLM-based task planning, HVR introduces a hierarchical macro-action library, similar to meta-learning frameworks and the options framework in reinforcement learning, to enable knowledge transfer across tasks. Compared to SayCan (Google DeepMind) and ALFRED (AI2Thor-based task planning models), HVR structures planning hierarchically and incorporates symbolic validation to improve long-horizon reasoning. However, the paper does not explicitly compare HVR to these models, making it unclear whether hierarchical planning, KG-RAG, and symbolic verification offer unique advantages over these prior methods. Including these comparisons would provide a clearer understanding of HVR’s contributions relative to existing AI planning systems. Essential References Not Discussed: I think the discussion on related works is comprehensive. Other Strengths And Weaknesses: For weaknesses just the ones mentioned in previous sections. For strengths, HVR presents a novel integration of hierarchical planning, structured knowledge retrieval (KG-RAG), and symbolic verification to improve long-horizon task execution using LLMs. This neuro-symbolic approach is an important step toward more structured and interpretable AI planning, moving beyond purely end-to-end LLM reasoning. The evaluation in AI2Thor demonstrates real-world applicability for embodied AI and assistive task planning, while the macro-action library enables knowledge transfer across tasks, making HVR significant for autonomous systems. Other Comments Or Suggestions: N/A Questions For Authors: No questions other than the ones in previous sections. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback, which has greatly improved our paper. Below, we summarize the main concerns and detail the revisions and clarifications made to address them. **Q1: Missing comparison with the state-of-the-art** We did not include direct comparisons with existing methods, as none address complex, long-horizon kitchen tasks compatible with our setup. However, following reviewers’ suggestions, we considered additional works and included additional results/discussion in the Appendix of the revised paper, summarized below: * *SMART-LLM: Smart Multi-Agent Robot Task Planning using Large Language Models* This work is closely related to ours, as it uses the same simulator and addresses complex tasks (but in a multi-agent setting). We considered only 15 tasks, excluding those not related to the kitchen domain, of which only 12 were feasible in our setup due to a key design difference: unlike SMART-LLM, which assumes full knowledge of all object locations (including hidden ones), our system—aligned with RECOVER —only considers visible objects (hidden objects are treated as failures). HVR with Gemini-2.0-flash successfully planned and executed all 12 tasks. * *ProgPrompt: Generating Situated Robot Task Plans using Large Language Models* In this work there are 10 available tasks, 7 of which are kitchen related. We adapted the original implementation with the VirtualHome simulator to the AI2Thor simulator. Using Gemini-2.0-flash, HVR was able to plan and execute correctly all of the tasks. However, it's worth noting that these tasks are relatively simple compared to those in our benchmark. Additionally, ProgPrompt was evaluated using GPT-3, and their performance would likely improve with a more capable LLM. 
* *LLM+P: Empowering Large Language Models with Optimal Planning Proficiency* These methods are built on different simulators and/or involve tasks that are not compatible with our setup (e.g., organizing blocks on a table). As a result, a direct comparison would require a substantial and non-trivial reimplementation, which falls outside the scope of this work. * *PDDL-based planning systems* We could not compare with PDDL-based planning systems on our tasks as we found no existing works using planners expressive enough. As described in Section 3.3, our system uses conjunctive, disjunctive, and conditional PDDL statements. We therefore implemented a custom validator in Python, adapted specifically to the AI2Thor environment. * *ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning* Here, given specifications of simple abstract tasks in a simulated kitchen environment, the authors generate both a PDDL domain and a goal state. While this can work for a simple environment with few available actions, we argue it is generally not possible to generate PDDL domain specifications from high-level natural language task specifications such as those we use in our work. Thus, we did not run experimental comparisons with this method. * *Translating Natural Language to Planning Goals with Large-Language Models* This work uses the ALFRED simulator, similar to AI2Thor. However, the implementation is too limited for our use case and does not support key functionalities like creating individual slices after slicing an object, or modeling interactions such as opening appliances—essential for our tasks— making it unsuitable for a meaningful comparison. **Q2: Evaluation of Error Detection/Recovery and System Robustness** We thank the reviewer for the suggestion. 
The metrics EPV, MPV, and AABV (highlighted as blue metrics in the ‘Verification’ part of Table 2) are specifically designed to evaluate the role of symbolic verification and correction. These metrics are defined as follows: (4) Expanded Plan Verification (EPV) indicates the extent to which the expanded plan (full sequence of atomic actions) has been successfully verified. It is calculated as the ratio of verified steps to the total number of steps in the generated plan. (5) Macro Plan Verification (MPV) measures the extent to which the macro plan has been verified. It is calculated as the ratio of verified macro plan steps to the total number of macro steps. (6) Atomic Action Block Verification (AABV) evaluates the extent to which the macro plan has been verified at the level of atomic action blocks. It is determined by dividing the number of verified atomic action blocks by the total number of macro actions. To summarize, these metrics reflect how effectively the symbolic validator detects and helps recover from errors at different levels of the plan (expanded, macro, and atomic action blocks). Their strong correlation with Plan Correctness (PC) demonstrates that symbolic validation not only identifies errors but also leads to measurable improvements in plan quality. This provides insight into the system’s robustness and adaptability in recovering from planning errors. --- Rebuttal Comment 1.1: Comment: Thanks for your response! The rebuttal about Q1 does make the work more complete in my view. As I read through the other reviewers' comments, I'd keep my current rating.
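For concreteness, the three verification ratios defined in the rebuttal above (EPV, MPV, AABV) amount to simple fractions; a minimal sketch with assumed example counts (the function name and inputs are hypothetical, not from the paper's code):

```python
# Minimal sketch of the three verification metrics: each is the fraction of
# verified steps at its level of the plan hierarchy.

def verification_metrics(expanded_verified, expanded_total,
                         macro_verified, macro_total,
                         blocks_verified):
    epv = expanded_verified / expanded_total  # Expanded Plan Verification
    mpv = macro_verified / macro_total        # Macro Plan Verification
    # AABV divides verified atomic action blocks by the number of macro actions
    aabv = blocks_verified / macro_total
    return epv, mpv, aabv

# e.g. 18 of 20 atomic steps verified, 4 of 5 macro steps, 3 verified blocks
epv, mpv, aabv = verification_metrics(18, 20, 4, 5, 3)
```

Note that AABV shares the macro-step denominator with MPV, per definition (6) above.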
Summary: The authors propose a neuro-symbolic approach that combines LLMs-based planners with Knowledge Graph-based RAG for hierarchical plan generation. It breaks down complex tasks into subtasks and then into executable atomic action sequences. A symbolic validator is integrated to ensure formal correctness, task decomposition, and to detect failures by comparing expected and observed world states. ## update after rebuttal I will maintain my positive opinion. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Relation To Broader Scientific Literature: HVR enhances LLM planning capabilities through a novel neuro-symbolic integration of Hierarchical planning, symbolic Verification and reasoning, and RAG methods over symbolic Knowledge Graphs. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The proposed framework is reasonable. It provides a new idea for LLM planning through a novel neuro-symbolic integration. 2. HVR shows obvious improvements in performance. 3. The result analysis is in-depth, strongly supporting the viewpoints of the paper. Weaknesses: The issue of efficiency has not been discussed, nor has it been compared in the experimental section. Conducting planning and retrieval augmentation at two granularities, as well as failure detection, constitutes a relatively complex process. It is unacceptable that there is no comparison of efficiency. Other Comments Or Suggestions: The comparison of efficiency and ablation experiments should be further improved. Questions For Authors: No questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback, which has greatly improved our paper. Below, we summarize the main concerns and detail the revisions and clarifications made to address them. **Q1: Missing comparison and discussion of efficiency** We have included a study of the efficiency of our method, including two new plots in a new appendix section, "Efficiency considerations," in the revised paper: Figure A (*Average execution times (in seconds) per model using Gemini 1.5 flash*) shows the average computational time for the different models across the 13 tasks using Gemini-1.5-flash as the LLM. As expected, approaches involving hierarchical decomposition require approximately three times more time than those without. However, as LLMs become faster, the overhead introduced by the HVR framework becomes increasingly negligible, while substantially improving plan correctness. For example, running the same tasks with Gemini-2.0-flash reduced the average execution time for HVR from 3285.32 seconds to 681.51 seconds—a 5x improvement. In contrast to this improvement, relying solely on an LLM as a planner demands a substantially larger context window, which quickly becomes impractical as task or environment complexity increases. HVR addresses this limitation through retrieval-augmented generation (RAG), enabling it to dynamically access only the relevant information from the knowledge graph. This design ensures stable processing times and scalability, even as the environment grows, whereas LLM-only approaches face escalating computational demands and eventual context window exhaustion. Figure B (*Plan Correctness vs. Execution Time in seconds with Gemini 1.5 flash*) shows the trade-off between plan correctness and execution time across the different methods. While HVR takes longer to compute plans, it consistently achieves the highest correctness.
In contrast, simpler methods like LLM or R achieve faster execution but produce significantly less reliable plans, demonstrating the benefit of HVR’s more structured approach.
Summary: This paper introduces HVR, a task planning method that integrates hierarchical planning, retrieval-augmented generation (RAG) over symbolic knowledge graphs, and formal verification to enhance the performance of large language models (LLMs) in complex task planning. The proposed method decomposes the language-described tasks into manageable macro actions and further into atomic actions, ensuring formal correctness through symbolic validation. Experimental results in the AI2Thor kitchen environment demonstrate that HVR outperforms baseline methods. ## update after rebuttal In settings where a precise world model exists, the world state is fully known, and the problem is guaranteed to be solvable, symbolic planners are already highly effective at solving planning problems. Therefore, a more meaningful and impactful research direction is to explore the use of LLMs for planning in scenarios where the world model is incomplete or unavailable. Extending HVR to operate under such conditions would significantly enhance its practical value and increase its potential for broader acceptance. Claims And Evidence: In this paper, the authors assume that a precise domain model is known (as stated in lines 194-198 on page 4) and that the environment state can be obtained through OntoThor. Given these assumptions, a classical planner (e.g., Fast Downward) should be able to generate a valid plan with the PDDL-style goal specification. Why is it necessary to use an LLM for planning in this setting? It seems that the LLM’s role could be limited to converting natural language task descriptions into PDDL goal specifications, as done in [1] and [2], after which classical planning would likely achieve a high success rate. The key advantage of using an LLM for planning is its ability to operate without a precisely defined domain model, which can be complex to construct. 
However, this paper still relies on an exact domain model, which raises questions about the necessity of employing an LLM for planning in this context. References: [1] Translating Natural Language to Planning Goals with Large-Language Models, arXiv 2023. [2] LLM+P: Empowering Large Language Models with Optimal Planning Proficiency, arXiv 2023. Methods And Evaluation Criteria: Yes. Theoretical Claims: No, there is no theoretical claim. Experimental Designs Or Analyses: Yes. The design of the baselines serves as an ablation study of the complete method, allowing the effectiveness of the proposed components to be validated through experiments. However, it lacks a comparison with code-style task planning approaches, such as ProgPrompt [3]. Reference: [3] ProgPrompt: Generating Situated Robot Task Plans using Large Language Models, ICRA 2023. Supplementary Material: Yes. I reviewed sections A & B in the supplementary material. Relation To Broader Scientific Literature: Hierarchical planning helps prevent oversimplification or the omission of essential steps in long-horizon tasks. Essential References Not Discussed: See [1] and [2] mentioned in Claims And Evidence. Other Strengths And Weaknesses: Strengths: 1. The proposed method achieves good performance on various tasks in the AI2THOR environment. 2. Several new metrics are valuable for evaluating the methods. Weaknesses: 1. The paper lacks a discussion about the limitations of the proposed method. Other Comments Or Suggestions: No. Questions For Authors: See the comments under Claims and Evidence, Experimental Designs or Analyses, and Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback, which has greatly improved our paper. Below, we summarize the main concerns and detail the revisions made to address them. **Q1: Why is it necessary to use an LLM for planning in this setting?** While prior works such as [1] and [2] show how natural language (NL) instructions can be translated into PDDL goals, they are limited to simple tasks and lack the flexibility needed for more complex scenarios. Moreover, symbolic planners tend to be brittle—minor changes in the environment or execution failures can break the entire plan. In contrast, HVR leverages the flexibility of LLMs throughout the whole planning process, addressing several key limitations of symbolic-only methods: * Partial plan execution: for particularly complex tasks, LLM-based models are able to produce at least some correct initial steps or an initial sub-task, while symbolic planners can only generate full plans and would thus likely fail entirely on these types of tasks. * LLMs enable dynamic adaptation during task execution, incorporating feedback from the environment to replan on the fly. Symbolic planners would need new mid-task NL instructions. * Thanks to LLMs, and even more to the employment of retrieval-augmented generation (RAG), our approach achieves significantly faster planning, avoiding the exponential search-space growth typical of classical planners. Lastly, although HVR currently relies on a fixed world model, it can be extended to operate without one—just as we generate macro-actions and their pre/post-conditions dynamically. **Q2: Missing comparison with the state-of-the-art** We did not include direct comparisons with existing methods, as none address complex, long-horizon kitchen tasks compatible with our setup. However, following reviewers’ suggestions, we considered additional works and included new results/discussions in the appendix of the revised paper.
See the full response to *Review-Ac9Q*, summarized here: * *SMART-LLM: Smart Multi-Agent Robot Task Planning using Large Language Models* This work is closely related to ours, as it uses the same simulator and addresses complex tasks (but in a multi-agent setting). HVR with Gemini-2.0-flash successfully planned and executed all 12 tasks. * *ProgPrompt: Generating Situated Robot Task Plans using Large Language Models* We adapted the original implementation with the VirtualHome simulator to the AI2Thor simulator. HVR with Gemini-2.0-flash was able to plan and execute correctly all kitchen-related tasks. * *LLM+P: Empowering Large Language Models with Optimal Planning Proficiency* These methods are built on different simulators and/or involve tasks that are not compatible with our setup. * *PDDL-based planning systems* We could not compare with PDDL-based planning systems on our tasks as we found no existing works using planners expressive enough. * *Translating Natural Language to Planning Goals with Large-Language Models* This work uses the ALFRED simulator, similar to AI2Thor. However, the implementation is too limited for our use case and does not support key functionalities making it unsuitable for a meaningful comparison. **Q3: Missing discussion about HVR limitations** We have included a discussion of our work’s limitations and future work in a new section in the appendix in the revised paper: The HVR framework integrates hierarchical planning, knowledge graph retrieval, and symbolic validation to enhance LLM-based task planning, but it also presents several limitations that point to promising directions for future work. One key limitation is its reliance on a fixed ontology and action space, which constrains generalization to new environments or tasks. 
While updating the ontology currently requires expert domain knowledge, the predefined action space could be extended to include novel actions automatically using LLM-based techniques—similar to how HVR currently generates macro actions along with their corresponding pre- and post-conditions. Prompt sensitivity is another concern common to LLM-based systems, although recent models show improved robustness to variations in prompt phrasing. Additionally, HVR's current use of natural language to mediate interactions between the LLM and the knowledge graph may not be the most efficient; leveraging sub-symbolic or embedded representations could improve this integration. Another limitation is the current restriction to a linear structure for the generated plans. Supporting partial-order plans would not only allow for more flexible planning but also make it possible to execute different branches in parallel, which is especially valuable in multi-agent settings. Finally, in our method, error correction is done independently for each part of the plan and does not address interdependencies between errors at different planning levels. A more connected correction strategy that reasons across macro and atomic actions could help improve overall correctness. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I fully understand that traditional symbolic planning methods are inherently brittle, as they rely heavily on an accurate and complete world model to produce valid plans. However, under the setting assumed in this paper—where such a precise world model exists, the world state is known, and the problem is guaranteed to be solvable—symbolic planners are already capable of solving the planning problem effectively. Therefore, in this particular setting, I believe the use of LLMs for planning lacks clear motivation or necessity.
Instead, the more meaningful and valuable direction is to explore the use of LLMs for planning when such a precise world model is unavailable or incomplete. As the authors have mentioned in the rebuttal, HVR can be extended to operate without a precise world model. I believe that once such an extension is realized, the approach would be much more valuable and well-suited for acceptance. In addition, regarding the missing comparison with methods [1] and [2], it is worth noting that they mainly utilize large language models for translation purposes. Given the availability of the world model and world state in this work, along with the open-source Fast Downward planner, it should not be difficult to adapt these two approaches to the tasks presented in this paper, based on my experience. Therefore, I believe future extensions of this work should include these approaches in the experimental comparison.
Summary: This paper proposes an LLM-based approach (HVR) to tackle long-horizon and complex robotic planning, which integrates hierarchical planning and Retrieval-Augmented Generation (RAG). Specifically, HVR leverages the LLM to decompose complex tasks into subtasks at different abstraction levels while integrating the RAG method to retrieve relevant context from the agent’s knowledge graph for plan generation. Then, HVR employs a Symbolic Validator to verify and correct the generated plans. Experiments on multiple datasets of varying difficulty levels demonstrate the effectiveness of the HVR method across multiple LLMs. Claims And Evidence: Most claims are supported. However, in Section 2.5, the authors mention that one advantage of this method is its ability to build a reusable library of macro actions, but they do not provide experimental evidence to support its effectiveness. Methods And Evaluation Criteria: Yes Theoretical Claims: No theoretical proof. Experimental Designs Or Analyses: Yes. This paper designs comprehensive and well-reasoned evaluation metrics to thoroughly assess the effectiveness of the proposed method. Additionally, through extensive ablation experiments, it validates the effectiveness of each component. Supplementary Material: I reviewed the video in the supplementary material. Relation To Broader Scientific Literature: This paper leverages the advantages of existing methods to propose a novel methodological framework. Essential References Not Discussed: S. S. Kannan, V. L. N. Venkatesh and B.-C. Min, "SMART-LLM: Smart Multi-Agent Robot Task Planning using Large Language Models," 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) Z. Zhou, J. Song, K. Yao, Z. Shu and L. Ma, "ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning," 2024 IEEE International Conference on Robotics and Automation (ICRA) Other Strengths And Weaknesses: Strengths: 1.
A novel planning framework that integrates hierarchical planning, RAG, and symbolic validation. 2. Comprehensive and clear ablation experiments validate the effectiveness of each component. 3. Designed effective and sufficient experimental evaluation metrics. Weaknesses: 1. The sufficiency of the experiments has certain limitations. The paper conducts experiments on only one dataset and does not include comparative experiments with existing methods (SMART-LLM). Other Comments Or Suggestions: Please refer to Other Strengths And Weaknesses Questions For Authors: Please refer to Other Strengths And Weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback, which has greatly improved our paper. Below, we summarize the main concerns and detail the revisions and clarifications made to address them. **Q1: Missing details and experiments regarding the reusable library of macro actions** The macro actions are stored in the ontology following the same approach used in Cornelio & Diab 2024. While we have not yet conducted experiments leveraging the macro action library to accelerate the planning process, we plan to explore this in the future. We added additional details in a new section in the appendix, summarized below: OntoThor contains the class Action representing agent-environment interactions. We refine this by introducing two subclasses: Atomic Action (AA), as the original Action class, and Macro Action (MA) representing higher-level operations like boil-water. During execution, AA and MA instances are stored in the agent’s Knowledge Graph. Each MA instance is linked to its NL description including pre- and post-conditions, via the hasDescription predicate. These conditions can also be modeled as triples, extending the scene-graph representation in OntoThor. Each MA instance is also linked to its sequence of AAs via the hasAtomicAction predicate. After task execution, the set of MAs, along with their associated conditions, are stored in the Knowledge Graph and, if validated by the environment (see section 2.4), also added to the ontology. **Q2: Missing References** Thank you for the suggested references. We’ve added the missing citations and included SMART-LLM—previously unknown to us—in the revised manuscript, along with an experimental comparison (see Q3). **Q3: Missing comparison with State-of-the-art systems** We did not include direct comparisons with existing methods, as none address complex, long-horizon kitchen tasks compatible with our setup. 
However, following reviewers’ suggestions, we considered additional works and included additional results/discussion in the Appendix of the revised paper, summarized below: * *SMART-LLM: Smart Multi-Agent Robot Task Planning using Large Language Models* This work is closely related to ours, as it uses the same simulator and addresses complex tasks (but in a multi-agent setting). We considered only 15 tasks, excluding those not related to the kitchen domain, of which only 12 were feasible in our setup due to a key design difference: unlike SMART-LLM, which assumes full knowledge of all object locations (including hidden ones), our system—aligned with RECOVER —only considers visible objects (hidden objects are treated as failures). HVR with Gemini-2.0-flash successfully planned and executed all 12 tasks. * *ProgPrompt: Generating Situated Robot Task Plans using Large Language Models* In this work there are 10 available tasks, 7 of which are kitchen related. We adapted the original implementation with the VirtualHome simulator to the AI2Thor simulator. Using Gemini-2.0-flash, HVR was able to plan and execute correctly all of the tasks. However, it's worth noting that these tasks are relatively simple compared to those in our benchmark. Additionally, ProgPrompt was evaluated using GPT-3, and their performance would likely improve with a more capable LLM. * *LLM+P: Empowering Large Language Models with Optimal Planning Proficiency* These methods are built on different simulators and/or involve tasks that are not compatible with our setup (e.g., organizing blocks on a table). As a result, a direct comparison would require a substantial and non-trivial reimplementation, which falls outside the scope of this work. * *PDDL-based planning systems* We could not compare with PDDL-based planning systems on our tasks as we found no existing works using planners expressive enough. 
As described in Section 3.3, our system uses conjunctive, disjunctive, and conditional PDDL statements. We therefore implemented a custom validator in Python, adapted specifically to the AI2Thor environment. * *ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning* Here, given specifications of simple abstract tasks in a simulated kitchen environment, the authors generate both a PDDL domain and a goal state. While this can work for a simple environment with few available actions, we argue it is generally not possible to generate PDDL domain specifications from high-level natural language task specifications such as those we use in our work. Thus, we did not run experimental comparisons with this method. * *Translating Natural Language to Planning Goals with Large-Language Models* This work uses the ALFRED simulator, similar to AI2Thor. However, the implementation is too limited for our use case and does not support key functionalities like creating individual slices after slicing an object, or modeling interactions such as opening appliances—essential for our tasks— making it unsuitable for a meaningful comparison.
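The pre/post-condition checking described in this thread can be made concrete with a minimal STRIPS-style sketch (illustrative only, not the authors' validator; the predicates, action names, and `validate` helper are all hypothetical): each action declares preconditions and add/delete effects, and a linear plan is validated by simulating the symbolic state forward.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    pre: frozenset      # predicates that must hold before execution
    add: frozenset      # predicates made true by the action
    delete: frozenset   # predicates made false by the action

def validate(plan, initial_state):
    """Simulate a linear plan against a symbolic state.

    Returns (True, None) if every action's preconditions hold when it is
    reached, otherwise (False, index_of_first_failing_step)."""
    state = set(initial_state)
    for i, a in enumerate(plan):
        if not a.pre <= state:      # precondition check
            return False, i
        state -= a.delete           # apply delete effects
        state |= a.add              # apply add effects
    return True, None

# A hypothetical two-step kitchen plan.
plan = [
    Action("open(fridge)",
           pre=frozenset({"closed(fridge)"}),
           add=frozenset({"open(fridge)"}),
           delete=frozenset({"closed(fridge)"})),
    Action("take(milk)",
           pre=frozenset({"open(fridge)", "in(milk,fridge)"}),
           add=frozenset({"holding(milk)"}),
           delete=frozenset({"in(milk,fridge)"})),
]
ok, step = validate(plan, {"closed(fridge)", "in(milk,fridge)"})
```

A failing plan reports the first step whose preconditions do not hold, which is the kind of localized feedback a replanner can consume; the disjunctive and conditional PDDL statements mentioned above would require a richer condition representation than flat predicate sets.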
AKORN: Adaptive Knots generated Online for RegressioN splines
Accept (poster)
Summary: This paper introduces AKORN (Adaptive Knots generated Online for RegressioN splines), a parameter-free algorithm for offline non-parametric regression over total variation (TV1)-bounded functions. AKORN leverages online learning techniques to automatically adapt knot selection for spline regression, eliminating the need for oracle knowledge of function smoothness, which is a major drawback of traditional non-parametric regression methods. The contributions of this paper include the following points: 1. It achieves optimal rates without hyperparameter tuning (it adaptively selects knots based on change points in function smoothness). 2. It proposes a new online learning-based method for knot selection: AKORN uses ADDLE (Adaptive Denoising with Linear Experts), an online regression technique, to identify smooth and rough regions of the function. 3. Unlike existing online methods that output noisy pointwise predictions, AKORN reconstructs a continuous function. 4. Computational efficiency and scalability: it runs in O(n^2) time (comparable to other spline methods). Claims And Evidence: Yes Methods And Evaluation Criteria: This paper introduces AKORN (Adaptive Knots generated Online for RegressioN splines). It makes sense for non-parametric regression problems. Theoretical Claims: Yes, I checked the proof of their main results Theorems 6.1-6.2 and they look correct to me. Experimental Designs Or Analyses: For the experiment, AKORN selects knots adaptively, but the paper does not test how sensitive its performance is to different knot placements, i.e., what happens if AKORN misidentifies some change points? This issue seems important to their proposed method.
Supplementary Material: Yes, the proofs of Theorems 6.1-6.2 Relation To Broader Scientific Literature: The key contributions are related to non-parametric regression, eliminating the need for oracle knowledge of function smoothness, which is a major drawback of traditional non-parametric regression methods. This is a commonly studied statistical and machine learning problem. Essential References Not Discussed: The paper has discussed a sufficient amount of relevant papers for showing and understanding their contributions. Other Strengths And Weaknesses: Strength is clear: Their proposed method AKORN (Adaptive Knots generated Online for RegressioN splines) can automatically adapt knot selection for spline regression, eliminating the need for oracle knowledge of function smoothness, which is a major drawback of traditional non-parametric regression methods. Moreover, it achieves optimal rates without hyperparameter tuning (it adaptively selects knots based on change points in function smoothness). Weakness: 1. The method assumes that covariates $x_i$ are equally spaced, which is a strong and unrealistic assumption in non-parametric regression. Although the paper mentions that this is a standard starting point for many non-parametric regression algorithms, it still weakens the contribution of this paper significantly. For example, other methods such as trend filtering and wavelets have been extended to handle uneven spacing (Wang et al., 2014; Sadhanala & Tibshirani, 2019), but AKORN has not. 2. The paper only considers TV1-bounded functions, but many real-world applications involve smoother functions (e.g., TV2, TV3). 3. The paper claims AKORN achieves an instance-dependent rate in Theorems 6.1-6.2, but there is no worst-case analysis. Trend filtering and spline methods have worst-case minimax bounds, ensuring robustness across all function classes. 4. The method is designed for one-dimensional regression problems.
There is no discussion on whether AKORN extends to multivariate or high-dimensional settings. Other Comments Or Suggestions: Please see "Other Strengths And Weaknesses". Questions For Authors: Please see the four questions of weakness in "Other Strengths And Weaknesses". Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your consideration. ### Weaknesses 2 and 4 The weaknesses you point out in 2 and 4 are fully accurate, although we feel that the multivariate problem you mention in 4 is outside of the scope of this paper. High-dimensional nonparametric regression is often considered separately from univariate regression. If you are curious about the challenges that remain in extending AKORN to $TV_{k > 1}$, we expand on them in our response to Reviewer YjPY. ### Worst-case analysis (Weakness 3) To be clear, AKORN is minimax optimal over $TV_1(C) = \{f : TV_1(f) \leq C\}$, which is the only minimax result we are aware of for linear Trend Filtering. This is a corollary of the instance-dependent results. ### Uneven Covariates (Weakness 1) With respect to the spacing of the covariates: since submitting this paper, we have discovered that we can generalize AKORN to handle uneven covariates by tweaking our proof in a few places. Specifically, ADDLE/AKORN can achieve the same rate for covariates $x_1, \ldots, x_n \sim p$, where $p$ is some density that is bounded below (that is, $p(x) \geq p_0 > 0$ for $0 \leq x \leq 1$, exactly as in [4]). We will list the technical steps at the end of this rebuttal. If reviewers and AC approve, we would like to include this result in the paper. In order to maintain accessibility and consistency, we believe it makes sense to include the result in an appendix, as is done in Tibshirani 2014. ### “What happens if AKORN misidentifies some change points?” While it would be an interesting experiment to see how many knots we could perturb while retaining good performance, do note that our theoretical guarantee says that AKORN will *not* catastrophically misidentify knots. Another lens on this point is as follows. Each time we run AKORN with different realizations of the noise vector, it can potentially identify a different knot-set.
But our theoretical and experimental results show that whatever knot-set AKORN selects is sufficient for (near-)optimal performance (whp). As a practical observation, the sensitivity of the knot-set to noise increases with $n$, but the sensitivity of the MSE to the knot-set decreases with $n$. ## Technical steps for uneven covariates: In this response, we explain the tweaks necessary to prove that ADDLE/AKORN achieve the rate $\tilde O(n^{-4/5}C^{2/5})$ with probability $1 - 2p_0n^{-10} - \delta$ when covariates are sampled iid from a density $p$ such that $p(x) \geq p_0 > 0$ for $0 \leq x \leq 1$. **Key lemma, "Lemma K"** (The bias of the linear regression error is controlled even on uneven points) Let $x_1, \ldots, x_n$ be a sorted list of design points whose max gap is bounded: $\max_{i > 1} x_i - x_{i - 1} = O(\log{n}/n)$. Let $l_{r:s}$ be the linear least squares fit on $(x_r, \theta_r), \ldots, (x_s, \theta_s)$. Then $\sum_{j = r}^s (l_{r:s}(x_j) - \theta_j)^2 = \tilde O(\frac{|s - r|^3 TV_1(\theta[r:s])^2}{n^2})$. **Lemma 5 from [4]** Suppose $\{x_i\}$ is sampled iid from a pdf $p$ that is bounded below by $p_0 > 0$. Lemma 5 from [4] implies that the max gap between any two covariates in the set $\{x_i\}$ is bounded by $O(\log(n)/n)$ with probability at least $1 - 2p_0n^{-10}$. **ADDLE generalization:** 1) Change online linear regression experts to (clipped) VAW forecasters. This very minor algorithmic change does not affect computational efficiency. 2) **VAW linear forecasters enjoy the same upper bound as offline linear regression up to an additive constant** This follows from Theorem 11.8 in [1] paired with Corollary 40 in [2] to control the norm of the least-squares comparator. 3) We can reduce the error of ADDLE on an interval $[r, s]$ to the error of the (clipped) expert that starts at $r$ using the same argument as before (i.e., Appendix D's proof up to Equation (15) doesn’t need to change).
4) The error of the (unclipped) expert can then be bounded using part (2). Instead of line 1157 of the submission, we use part (2) above to bound the online expert's error by $\sum_{j = r}^s (\hat l_{r:s}(x_j) - \theta_j)^2 + O(1)$, where $\hat l_{r:s}$ is batch regression of the noisy responses $y_r, \ldots, y_s$ onto $x_r, \ldots, x_s$. We then use concentration of $\hat l_{r:s}$ to $E[\hat{l}_{r:s}]$ and Lemma "K" to finish the bound. 5) We can use the same oracle partitioning scheme as before to produce a set of intervals of size $O(n^{1/5}C^{2/5})$ together with experts who achieve constant error on each interval. **AKORN generalization** The proof for AKORN now needs only trivial changes. More detailed treatment can be found at this anonymized link: https://anonymous.4open.science/r/AKORN-uneven-D6FB/akorn_non_evenly_spaced_points.pdf [1] Prediction, Learning, and Games. Cesa-Bianchi and Lugosi. [2] Adaptive Online Estimation of Piecewise Polynomial Trends. Baby and Wang, 2020. [3] Multivariate trend filtering for lattice data. Sadhanala et al., 2024. [4] The falling factorial basis and its statistical applications. Wang et al., 2014. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed responses to my questions. While I acknowledge this work's novelty and find it interesting, as the authors' response suggests, it might still not be a quite complete work, or at least there are many things that could have been added to the paper before publishing in a top conference such as ICML. --- Reply to Comment 1.1.1: Comment: Thanks for the follow-up discussion. We hope your main concern on the "worst-case optimality" was addressed. As we responded, the uneven covariate case is straightforward and adds no new technical challenge. We plan to add it to the appendix of the paper so as to keep the notations clean in the main paper.
For higher dimensional and higher order TV, they are non-trivial generalizations and we believe it is better for the current paper to focus on 1D. Notice that this is the first result of its kind that converts the selected knots of an online algorithm to a valid nonparametric regression fit. We believe it is better to focus on explaining the idea and results clearly than stacking more theorems and results in generalized settings. Thank you again!
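As context for the (clipped) VAW forecasters discussed in this thread, here is a minimal sketch of the plain Vovk-Azoury-Warmuth forecaster for online linear regression (illustrative only; the clipping and the ADDLE restart schedule from the rebuttal are omitted, and the data-generating example is made up):

```python
import numpy as np

def vaw_forecaster(X, y, lam=1.0):
    """Vovk-Azoury-Warmuth online linear regression.

    At step t the forecaster sees x_t, predicts
      yhat_t = x_t^T (lam*I + sum_{s<=t} x_s x_s^T)^{-1} sum_{s<t} y_s x_s,
    and only then observes y_t.
    """
    n, d = X.shape
    A = lam * np.eye(d)   # regularized Gram matrix
    b = np.zeros(d)       # running sum of y_s * x_s
    preds = np.empty(n)
    for t in range(n):
        x = X[t]
        A += np.outer(x, x)                 # VAW includes the current x_t in A
        preds[t] = x @ np.linalg.solve(A, b)
        b += y[t] * x
    return preds

# Sanity check on noiseless linear data: predictions approach the true line.
n = 1000
u = np.arange(1, n + 1) / n
X = np.column_stack([np.ones(n), u])
y = 2.0 + 3.0 * u
preds = vaw_forecaster(X, y)
```

The distinctive feature, relative to simply replaying batch least squares, is that the current covariate $x_t$ enters the Gram matrix before $y_t$ is revealed; this is the forecaster covered by Theorem 11.8 of Cesa-Bianchi and Lugosi, cited in the rebuttal above.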
Summary: This paper proposes AKORN, a novel approach for offline non-parametric regression that adaptively selects spline knots without requiring manual hyperparameter tuning. The proposed method yields estimators competitive with oracle-enhanced Trend Filtering, attaining near-optimal theoretical performance for TV-bounded regression functions. The theoretical guarantees provided are thorough and rigorously developed, showcasing that AKORN's performance is competitive with the state-of-the-art offline methods. Claims And Evidence: I think the claims are clearly supported. Methods And Evaluation Criteria: The methods and evaluation criteria, such as Doppler, jump functions, and evaluation metric (mean squared error, MSE), are appropriate. Theoretical Claims: I checked Theorem 6.1 (Bound on ADDLE error) and Theorem 6.2 (Bound on AKORN MSE). During the proof, the "Change-point Detection Lemma" (Lemma 7.2) is pivotal, rigorously connecting the adaptively chosen knots to a sparse and statistically efficient linear spline. However, Lemma 7.2 is informal, and the authors do not provide the formal statement or its source. Additionally, in the Proof Sketches section, the statements of the theorems are all informal. Why don't the authors provide formal statements directly? Experimental Designs Or Analyses: Experimental design is generally sound. The synthetic functions (piecewise linear, Doppler, Jump) ensure theoretical and empirical coherence. However, there is no discussion of performance stability. Supplementary Material: Yes, I did. I checked the code provided in the file "akorn_code_submission" of the supplementary material, but I can't fully understand it. Relation To Broader Scientific Literature: AKORN builds upon previous work on trend filtering, wavelet smoothing, and online regression techniques. Unlike prior methods, AKORN adaptively chooses knots without needing explicit smoothness information.
It extends influential approaches such as Trend Filtering (Kim et al., 2009; Tibshirani, 2014) and wavelet-based smoothing (Donoho & Johnstone, 1998), providing an integrated, parameter-free alternative that automatically adjusts to the underlying structure of data. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: 1. The proposed AKORN algorithm is highly original in combining offline spline regression with adaptive online learning. 2. The theoretical framework and results are well-established. Weaknesses: 1. In the Proof Sketches section, the statements of theorems are all informal. 2. Practical applications on real-world datasets would greatly strengthen the paper’s appeal to applied researchers and practitioners. 3. The assumption of equally spaced covariates is restrictive. The authors acknowledge this limitation but should ideally outline a more explicit path forward for generalizing to unevenly spaced data. 4. What are the primary theoretical or computational barriers to extending AKORN to higher-order smoothness classes? Other Comments Or Suggestions: I think the structure of the article needs to be readjusted. Articles usually start with theoretical analysis and then conduct experimental verification. Your appendix is very rich and can fill eight pages. Questions For Authors: See Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your attention to this work! ### Non-evenly spaced design points Since submitting this paper, we have discovered that we can generalize AKORN to handle uneven covariates by tweaking our proof in a few places. Specifically, ADDLE/AKORN can achieve the same rates for covariates $x_1, \ldots, x_n \sim p$ where $p$ is a pdf that is bounded below (as in Wang et al., 2014). We enumerate the necessary changes in our response to Reviewer V1hW and would like to include the generalized result as an appendix of the final version of this paper (as is done in Tibshirani, 2014). Your input on this extension would be greatly appreciated. ### Proof sketches Thank you for mentioning the informality in the proof sketches section. The Change-point Detection lemma that you mention is an informal statement of Lemma C.1 in the Appendix, which we forgot to mention during the proof sketch. This will be fixed in future versions of the paper. More generally, the reason that the proof-sketch lemmas are informally stated is space. We hope that the informal statements are understandable, and welcome input on this front (we are aware of a typo in Lemma 7.3 – the projection operator should be applied to $Y$ rather than $\theta$). Rigorous statements are all available in the appendices, and the main theorems of the paper are formally stated in Section 6. ### Higher-order Smoothness Classes There are two main steps to generalizing to higher-order smoothness classes. The first (relatively easy) step is generalizing the computations in Appendix E.1 to polynomial regression. For unevenly spaced design points, this corresponds to generalizing the (much cleaner) "Lemma K" given in the response to V1hW. The second task is generalizing the Spline Existence Lemma (Lemma C.7) to higher-order splines.
Specifically, given two kth-order piecewise polynomial functions with disjoint knot-sets, we need to show that there is a kth-order spline on some augmented knot-set that lies between the two curves (or some strategic weakening of this). We do not yet know how to do this. Apart from these steps, the bulk of the proof can be adapted to the higher-order cases fairly trivially.
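To ground the $TV_1$ class discussed throughout this thread: on an even grid, $TV_1$ corresponds (up to the grid scaling) to the $\ell_1$ norm of second-order differences, which for a piecewise-linear signal is supported only at the knots, exactly the change points AKORN aims to locate. A self-contained illustration (not the authors' code; the test function is made up):

```python
import numpy as np

# Evenly spaced samples of a continuous piecewise-linear function with
# knots at x = 0.3 and x = 0.7 (slopes 1 -> -2 -> 0.5).
n = 1001
x = np.arange(n) / (n - 1)
theta = np.piecewise(
    x,
    [x < 0.3, (x >= 0.3) & (x < 0.7), x >= 0.7],
    [lambda t: t,
     lambda t: 0.3 - 2.0 * (t - 0.3),
     lambda t: -0.5 + 0.5 * (t - 0.7)],
)

# Second-order differences vanish away from the knots...
d2 = np.diff(theta, n=2)
active = np.nonzero(np.abs(d2) > 1e-8)[0]

# ...and their scaled l1 norm recovers the total slope change:
# |(-2) - 1| + |0.5 - (-2)| = 5.5.
tv1 = np.sum(np.abs(d2)) * (n - 1)
```

This is also the discrete quantity that linear trend filtering penalizes, which is why a small knot-set suffices for signals with sparse second differences.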
Summary: This paper studies non-parametric regression over TV_1-bounded functions. The paper proposes a parameter-free algorithm (AKORN) which leverages online learning techniques to select knots for regression splines. The algorithm proposed achieves near-optimal rates without hyperparameter tuning. Both theoretical and empirical results are presented. Claims And Evidence: As a theoretical work, most of the claims are well-supported by providing rigorous proof. Given that most related work on this problem achieves O(n) or O(nlogn) computational complexity, it would be better to explore in detail how to reduce the O(n^2) complexity in this paper with the geometric cover trick. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem studied under the setting of this work. Theoretical Claims: Reviewed the proof sketches in the main body and verified the theorem and lemma statements, along with select proof details in the appendix. The overall proof structure appears sound and well-reasoned. Experimental Designs Or Analyses: The experimental design is sound. Supplementary Material: Appendix A, Appendix C, and Appendix F. Skipped most proof details presented in Appendix C, D, and E, but checked lemma statements. Relation To Broader Scientific Literature: n/a Essential References Not Discussed: I am not aware of essential references that are missing. Other Strengths And Weaknesses: Strengths: -The idea of leveraging online learning techniques and learning forward and backwardly to design the adaptive method is interesting. -Processes data sequentially, adjusting knots based on residuals, similar to no-regret learning techniques. -AKORN does not require tuning smoothing penalties or pre-specifying the number of knots. -Proposed method matches minimax-optimal convergence rates for TV₁ functions. -The experimental results demonstrate that AKORN performs competitively with oracle-tuned Trend Filtering.
Weaknesses: -Uniform spacing avoids requiring knowledge of the smoothness, but uniform spacing is itself a strong assumption, as it reveals comparable structural information to knowing the smoothness. -As a consequence, the method cannot directly handle scattered data or missing time-series values without preprocessing. -Given that several existing methods achieve a minimax-optimal convergence rate with comparable runtime, it's unclear whether AKORN provides significant advantages over other adaptive regression methods. Other Comments Or Suggestions: A few typos: 1. \hat{y_i} should be replaced with \hat{y}_i on line 046 2. "where \tilde{O} and \lesssim hide..." should be replaced with "where \lesssim hides..." on line 698 3. Using T to denote transposing is a bit misleading (for example on line 170) Questions For Authors: I have no further questions for the authors. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for this detailed review. Firstly, thank you for mentioning the typos. We will make appropriate adjustments (e.g. remove the use of $T:=n$ to avoid confusion with the transpose operation). ### Uniform spacing: Since submitting this paper, we have discovered that we can generalize AKORN to handle uneven covariates by tweaking our proof in a few places. Specifically, we can achieve the same result for covariates $x_1, \ldots, x_n \sim p$ with $p$ bounded below (exactly as in Wang et al., 2014). In light of reviewers’ comments, we would like to include this result as an appendix. In our response to Reviewer V1hW, we outline the tweaks that are required. These turn out not to be too extensive; from a technical perspective, fixed uniform design is not too different from iid samples from a (nice) distribution. We would greatly appreciate your feedback on this point. ### “Given that several existing methods achieve a minimax-optimal convergence rate with comparable runtime, it's unclear whether AKORN provides significant advantages over other adaptive regression methods.” As you mention, AKORN is adaptive and empirically competitive with Trend Filtering and Locally Adaptive Regression Splines. We aren’t aware of other adaptive methods with comparable empirical performance (see the comparison with Wavelets in Section 5 of the paper). ### Computational complexity: It’s true that the computational complexity of AKORN is somewhat prohibitive, but we elaborate on the topic now. Firstly, note that the $O(n^2)$ upper bound is, in practice, somewhat loose. This is because AKORN periodically restarts ADDLE during the online pass, meaning that we do not perform $t$ regression steps at timestep $t$. Furthermore, when we perform regression onto the knot-set $K$, whose size is $d = O(n^{1/5}C^{2/5})$, we use $O(d^2n) = O(n^{7/5}C^{2/5})$ compute. So even without any tweaks, real performance is often better than the $O(n^2)$ bound suggests. 
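[Editorial aside] The expert-scheduling point above can be made concrete with a rough counting sketch. This is our illustration, not the authors' ADDLE/AKORN implementation: it simply charges one unit of work per live expert per timestep and compares a dense schedule (one expert per past start time, $\Theta(n^2)$ total) against geometrically spaced restarts (roughly $n\log_2 n$ total). Both schedules are simplifying assumptions made for illustration.

```python
import math

def total_work(n, geometric=False):
    """Total units of work over n online steps, one unit per live expert."""
    work = 0
    for t in range(1, n + 1):
        if geometric:
            # experts restarted only at times 1, 2, 4, 8, ... up to t
            live = int(math.log2(t)) + 1
        else:
            live = t  # one expert per past start time
        work += live
    return work

n = 4096
print(total_work(n))                  # dense schedule: n(n+1)/2, i.e. Theta(n^2)
print(total_work(n, geometric=True))  # sparse schedule: roughly n*log2(n)
```

The geometric schedule keeps only $O(\log n)$ experts alive at any time, which is the kind of geometric-cover trick the review refers to.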
On top of this, it is possible (as you mention) to run ADDLE with a smaller set of experts to get an $O(n\log{n})$ online algorithm. In practice, this may lead to significant speed-up when ADDLE is called as a subroutine by AKORN. However, due to the definition of AKORN, this doesn’t shave any compute off of the asymptotic runtime on the offline problem (at least not trivially). Lastly, allow us to note that, technically speaking, the worst-case compute of SOTA Trend Filtering (TF) is superlinear: $O(n^{3/2}\log{1/\epsilon})$ to output an $\epsilon$-suboptimal solution for the associated optimization problem (Tibshirani, 2014). On the other hand, AKORN computes an exact solution. Furthermore, the $O(n^{3/2}\log{1/\epsilon})$ bound for Trend Filtering doesn’t take into account the cost of parameter tuning. If we do this with SURE, for an upper bound $C$ on $TV_1[f]$ and at discretization level $\Delta$, then we need to solve TF $C/\Delta$ times while also computing DoF for each solution. Furthermore, in general, the only a-priori upper bound on $C$ that is possible is $C = O(n^2)$. So while, practically speaking, people know how to do TF extremely efficiently, the theoretical picture is slightly complicated. --- Rebuttal Comment 1.1: Comment: Thank you for your responses, particularly the part regarding computational complexity. I have carefully considered your rebuttal, along with the comments from the other reviewers, and I have decided to maintain my evaluation as a borderline acceptance.
Summary: The authors consider the problem of nonparametric regression over the class of $TV_1$-bounded functions. Crucially, the authors aim to overcome the issue of needing oracle knowledge regarding certain features of the data-generating process, while still achieving optimal error rates. Despite being in an offline setting, the authors leverage an existing online denoising algorithm with both forward and backward passes over the offline data. The authors show that their approach performs empirically well relative to other baselines in the literature. Claims And Evidence: Yes, the claims seem reasonable given the empirical performance of this approach. Methods And Evaluation Criteria: All experiments are reasonable evaluations of their method against baselines that (1) take oracle knowledge, which serves as an upper bound on performance (i.e., it is impressive to perform as well as this) and (2) have the same theoretical guarantees without oracle knowledge, which serve as the baseline to beat. One minor change that would be helpful in assessing numerical results would be to have error bars on Tables 1 and 2. Theoretical Claims: N/A Experimental Designs Or Analyses: N/A Supplementary Material: N/A Relation To Broader Scientific Literature: To the best of my knowledge, this would be the second work that provides optimal error rates beyond another baseline that these authors test. However, this work greatly improves upon the empirical performance of the first paper. Essential References Not Discussed: I am unaware of other essential references not mentioned here. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your time and consideration.
Canonical Rank Adaptation: An Efficient Fine-Tuning Strategy for Vision Transformers
Accept (poster)
Summary: This paper introduces Canonical Rank Adaptation (CaRA), a PEFT method specifically designed for ViTs. The core idea of CaRA is to tensorise transformer weights across layers and to directly optimize the stack using a Canonical-Polyadic Decomposition. The authors report reduced trainable parameter counts and results that match or outperform existing PEFT methods with lower parameter counts when training a ViT-B/16 on the VTAB-1k and FGVC visual classification benchmarks. Experimental results on these benchmarks and ablation studies are presented to support these claims. ## update after rebuttal The authors have addressed most of my concerns. The increased wall time and memory of CaRA is a limitation of the method in its current form, so I ask the authors to clearly include these findings in the camera-ready version. The approach remains novel and interesting for future research, so I maintain my original score. Claims And Evidence: The claims of parameter efficiency and good performance are clearly supported by the experiments. Methods And Evaluation Criteria: The proposed evaluation makes sense, although I would have expected more experimental results with larger vision architectures (e.g. ViT-L/14 and above). Results for language tasks such as commonsense reasoning [1] could also have better supported the evaluation claims. [1] Hu, Zhiqiang, et al. "Llm-adapters: An adapter family for parameter-efficient fine-tuning of large language models." EMNLP 2023. Theoretical Claims: Gradient derivations are provided, but I did not check their correctness in depth. Experimental Designs Or Analyses: The proposed evaluation makes sense, although I would have expected more experimental results with larger vision architectures (e.g. ViT-L/14 and above). Results for language tasks such as commonsense reasoning could also have better supported the evaluation claims. 
Supplementary Material: I glanced at the code in the supplementary material. Relation To Broader Scientific Literature: This paper is relevant to the literature on reducing the number of parameters beyond LoRA's own reduction. CaRA is relevant in this setting as it provides yet another alternative that appears to be competitive in terms of performance and explores new factorization ideas for PEFT. Setting ranks globally for the whole architecture rather than for specific layers is also relevant, as it allows for an extension to other rank-adaptive methods such as AdaLoRA [2]. [2] Zhang, Qingru, et al. "Adalora: Adaptive budget allocation for parameter-efficient fine-tuning." ICLR 2023 Essential References Not Discussed: A few other algorithms tackle reducing the number of trainable parameters in LoRA. The contributions are quite orthogonal to the Canonical-Polyadic Decomposition. Examples that could be added to the related work include VeRA [3] or NoLA [4]. [3] Kopiczko, Dawid J., Tijmen Blankevoort, and Yuki M. Asano. "Vera: Vector-based random matrix adaptation." ICLR 2024 [4] Koohpayegani, Soroush Abbasi, et al. "Nola: Compressing lora using linear combination of random basis." ICLR 2024 Other Strengths And Weaknesses: The idea of applying the Canonical-Polyadic Decomposition as a parameter-efficient algorithm is interesting, especially as it considers the weights of the network as a whole. Figure 1 is not very helpful for understanding the algorithm and should be improved; Figure 3 is better, but there are too many $\mathbf{W}$ definitions, which makes it very loaded. The experimental setting is limited to vision and one (small) network architecture. I would have liked to see experiments with CLIP or on language benchmarks to better substantiate the results and maybe get more insight into what CaRA does differently from LoRA. 
Most importantly, there is no study of the training time of CaRA compared to alternatives (especially LoRA, SPT-LoRA and FacT-TT/TK) in terms of GPU hours or wall-clock time. This would be helpful for understanding the practical efficiency of CaRA beyond just the parameter count. There is no section addressing the limitations of CaRA or recommendations for future work. Other Comments Or Suggestions: No other comments. Questions For Authors: Did the authors perform a wall-time study of training time for CaRA compared to other PEFT methods? I would also be interested in whether CaRA requires more VRAM to train than the alternatives. What are the limitations of CaRA, and what are future directions for research into PEFT with the Canonical-Polyadic Decomposition? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the thoughtful review and for recognising CaRA's relevance among PEFT methods. We appreciate your insights on broader evaluation and efficiency comparisons. We address the questions below in detail. ***The proposed evaluation makes sense although I would have expected more experimental results with larger vision architectures (e.g. ViT-L/14 and above).*** We performed experiments on ViT-L pretrained on ImageNet-21k. Please refer to the Bbm7 reviewer's rebuttal response for the results. ***Results for language tasks such as commonsense [1] could also have better supported the evaluation claims.*** Thank you for the suggestion. We would like to emphasise that our study focuses on image classification tasks and we do not make claims regarding language tasks. We already provide additional results for large vision transformers (ViT-L), and we expect that our method works for language models as well. Due to the short time for the response, we cannot provide an evaluation for language models, but we will include one in the paper or supplementary material for the camera-ready version. ***Related work could include VeRA [3] or NoLA [4].*** Thank you for pointing out these works. We will ensure that VeRA and NoLA are included in the related work section in the camera-ready version. Additionally, we provide comparisons to the VeRA benchmark on ViT-Large in our experiments. ***Figure 1 is not very helpful for understanding the algorithm and should be improved; Figure 3 is better but there are too many W definitions, which makes it very loaded.*** Thank you for the comment. Figure 1 only presents the performance of CaRA. We assume the comment refers to Figure 2; we will rework Figure 2 to make it easier to understand. We appreciate your feedback on Figure 3. We recognise that the "W" definitions could be integrated directly into the figure. We will reorganise the figure to make it more intuitive and improve the layout for the camera-ready version. 
***measure wall clock time and memory*** As suggested, we present the wall time and VRAM allocated for various fine-tuning methods on ViT-L trained on CIFAR100 for 10 epochs. We observe that LoRA is the most efficient in terms of training time and memory. We attribute this speed to CUDA-optimised matrix multiplications in PyTorch. In contrast, CaRA shows a higher wall time because, just like the Tensorly [3] package we rely on, it is largely written in Python. While this implementation makes it easy to use, it is not the most efficient implementation. We expect significant improvements in speed if CaRA's operations are optimized. Also, CaRA's multi-dimensional tensor nature results in slightly higher VRAM allocation. Given the added representational capability and the implementation, this behaviour is to be expected for CaRA. Despite the slightly higher wall time and memory, CaRA shows notable performance improvements in both ViT-B and ViT-L architectures. Given that fine-tuning with CaRA is often a one-time cost, we believe this is a reasonable tradeoff. In the case of the FacT methods, we notice that they require lower ranks to match CaRA's parameter count. FacT achieves lower accuracy, with 88.4 (TK) and 87.96 (TT), while CaRA achieves 89.36. Interestingly, the DoRA (matrix-based) method exhibits higher memory usage and wall time, which we attribute to the extra weight normalisation applied in each forward pass. We are still working on the experiments regarding SPT-LoRA.

| Method | Walltime (seconds) ($\downarrow$) | VRAM (GB) ($\downarrow$) |
|-|-|-|
| LoRA | **165.7560** | **20.1079** |
| DoRA | 204.0761 | 28.0645 |
| FacT-TT | 178.2826 | 20.2464 |
| FacT-TK | 180.5781 | 20.2443 |
| CaRA (ours) | 206.5548 | 21.3740 |

***limitations of CaRA and future directions*** This study currently focuses on vision transformers. We do not present results for other tasks, like language processing, at the moment. 
We expect that it works for language models as well, and we will include results for language models in the paper or supplementary material of the camera-ready version. Currently, matrix multiplications are more optimized than tensor decompositions. Using a hardware-optimized tensor decomposition will be an interesting future direction. DoRA [2] introduces a weight normalisation. A normalised form of the CP-Decomposition could further boost performance and reduce training time. We will add a discussion of limitations and future directions. References: [1] Kolda, Tamara G., and Brett W. Bader. "Tensor decompositions and applications." SIAM review 51.3 (2009): 455-500. [2] Liu, Shih-Yang, et al. "Dora: Weight-decomposed low-rank adaptation." Forty-first International Conference on Machine Learning. 2024. [3] Jean Kossaifi, Yannis Panagakis, Anima Anandkumar and Maja Pantic, TensorLy: Tensor Learning in Python, Journal of Machine Learning Research (JMLR), 2019, volume 20, number 26.
Summary: This paper proposes CaRA, which uses the canonical polyadic decomposition (CPD) to replace the matrix multiplication in LoRA. There are two advantages of using CPD. Firstly, the multi-dimensional formulation can capture the structure of the head-dimension in the projection matrices in multi-head attention (MHA). Secondly, it uses fewer parameters than LoRA for the same rank. The paper also separates the matrices in MHA and those in FFN so that different numbers of decomposition dimensions can be used. CaRA is applied to ViT and tested on VTAB and FGVC benchmarks. The results are slightly better than the best baselines or on par with them. Claims And Evidence: The claim on the CaRA formulation and the parameter efficiency of CaRA is supported. Overall, the performance gain over the baselines is partially supported because the gap between CaRA and the best baselines is marginal. Methods And Evaluation Criteria: Both the method and the evaluation make sense as the paper focuses on parameter-efficient tuning in the vision domain. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: The experiments on VTAB and FGVC are sound. The ablation study on changing ranks and dims is also sound. However, the visualization of heatmaps provides limited interpretability and is not convincing. Supplementary Material: No. Relation To Broader Scientific Literature: The paper relates to the parameter-efficient fine-tuning of large models. Previous research shows that LoRA is effective, and the idea in this paper provides a similar but alternative formulation using the CP decomposition of tensors. Essential References Not Discussed: The paper did not discuss the improved versions of LoRA, which are highly related. For example, it is helpful to compare with DoRA (ICML'24) and PiSSA (NeurIPS'24). Other Strengths And Weaknesses: Clarity: It is unclear how the baseline results in Tab.2 are obtained. 
Are they trained by the authors or referenced from other papers? Specifically, how much effort was taken to tune the hyper-parameters of the baselines? For example, were the learning rate and the alpha value fairly tuned for LoRA? Besides, it would be helpful to readers if the ranks of LoRA/CaRA were listed in the table. Significance: From the results of Tab. 4, it seems that making the heads in MHA an extra dimension does not offer much performance improvement. So, the benefit of exploiting the head dimension is overstated in the introduction. Other Comments Or Suggestions: Most LoRA-related papers benchmark on LLM tasks, such as natural language understanding and natural language generation. It would be more comprehensive if the proposed CaRA were tested in the NLP domain. Questions For Authors: 1. L196-199. "Contrary to the existing works, this formulation ... capture any relations across heads ...". Why cannot LoRA capture relations across heads? 2. There are several notations to clarify. Firstly, what does the $[...]$ mean in Equ. (3-7)? concatenation/stacking? Secondly, what does the {$... ; ...$} mean in Equ. (8)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and for finding our experimentation sound. Below are our responses to the points raised in the review. ***performance gain over baselines is marginal*** Considering SPT-LoRA as the best baseline in both the VTAB-1k and FGVC benchmarks, we want to highlight that CaRA achieves this performance with only $\approx 11\%$ of SPT-LoRA's parameters. Furthermore, compared to the other tensor-based methods, FacT-TK and FacT-TT, as shown in Figure 4, CaRA consistently achieves $\approx 1\%$ or higher accuracy on all three types of VTAB datasets, which is a substantial improvement on these benchmarks. Additionally, the ViT-L experiments presented below further establish CaRA's strong performance and scalability. The gains compared to the other approaches are even larger for ViT-L. Overall, these results demonstrate CaRA's effectiveness in fine-tuning vision transformers across multiple benchmarks.

| Method | ViT-L #Params ($\downarrow$) | CIFAR100 ($\uparrow$) | Food101 ($\uparrow$) | Flowers102 ($\uparrow$) | Resisc45 ($\uparrow$) | Mean ($\uparrow$) |
|-|-|-|-|-|-|-|
| Head | - | 79.4 | 76.5 | 98.9 | 67.8 | 80.65 |
| Full | 303.3M | 86.8 | 78.7 | 98.8 | 79.0 | 85.83 |
| LoRA | 786.4K | 87.0 | 79.5 | 99.1 | 78.3 | 85.98 |
| VeRA | **61.4K** | 87.5 | 79.2 | 99.2 | 78.6 | 86.13 |
| PiSSA | 786.4K | 87.11 | 79.55 | **99.72** | 78.55 | 86.24 |
| DoRA | 860.2K | 87.93 | 81.15 | 99.57 | 80.33 | 87.25 |
| CaRA (ours) | 75.6K | **89.36** | **83.65** | 99.63 | **82.43** | **88.77** |

***heatmap interpretability*** We use the integrated gradient maps as a tool to interpret the model's behaviour during fine-tuning, particularly by highlighting the influential image regions the model relies on. While we acknowledge that these visualisations alone may not provide a complete explanation, they serve as an initial step towards understanding the model's decision-making process. ***comparison to DoRA and PiSSA*** Thanks for pointing this out. 
We have included the comparisons with DoRA and PiSSA for ViT-Large training, as detailed in our response to Reviewer Bbm7. We will also incorporate these comparisons in the camera-ready version and additionally extend the discussion in the related work section. ***unclear baseline results in Tab. 2*** The baseline results for LoRA are from the FacT paper, where the LoRA rank is set to 8. The results of Adapter and AdaptFormer are from RepAdapter [3], and the other results are from their respective papers. For CaRA, we provide the details of the hyperparameters in the supplementary material. To enhance clarity, we will update Table 2 to explicitly state the ranks for each method and include the sources of the baseline results. ***Tab. 4, MHA extra dimension*** Table 4 presents an ablation study on the CP-Decomposition across multiple dimensions. When comparing rows 2 and 3 in Table 4, we observe that the number of parameters decreases and the accuracy increases slightly with d_h as an extra dimension. The main gain is the reduction in parameters. We will clarify this in the camera-ready version. ***NLP domain*** Thank you for the suggestion. We would like to emphasise that our study focuses on image classification tasks and we do not make claims regarding language tasks. We already provide additional results for large vision transformers (ViT-L), and we expect that our method works for language models as well. Due to the short response time, we cannot provide an evaluation for language models, but we will include one in the paper or supplementary material for the camera-ready version. ***Why cannot LoRA capture relations across heads?*** LoRA works with stacks of two-dimensional matrices for the fine-tuning process. By definition, two-dimensional matrix structures and their decompositions model two-dimensional relationships. Transformer networks are defined to have an embedding dimension (d_model), a number of heads (n_h) and a dimension for each head (d_h) [1]. 
If we stack multiple layers, we end up with a four dimensional tensor. Matrix-based structures do not allow us to model tensors directly. Tensor decompositions solve this issue by providing a potentially n-dimensional structure [2]. This paper leverages the Canonical Decomposition, which was designed to handle the tensor data structure that Transformers create naturally. ***Notations to clarify*** Thank you for pointing it out, "[...]" in equations (3-6) corresponds to stacking, while equation 7 represents concatenations. We followed the exact representation as in [2]. {} represents the set of CP-Decomposition factors in matrix form. We will update the camera-ready version accordingly. References: 1. Vaswani, Ashish, et al. "Attention is all you need." Advances in neural information processing systems 30 (2017). 2. Kolda, Tamara G., and Brett W. Bader. "Tensor decompositions and applications." SIAM review 51.3 (2009): 455-500. 3. Luo, Gen, et al. "Towards efficient visual adaption via structural re-parameterization." arXiv preprint arXiv:2302.08106 (2023).
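[Editorial aside] To make the four-mode structure described above concrete, here is a minimal numpy sketch. It is our illustration, not the paper's implementation: the shapes are assumed ViT-B-like numbers, a rank-$R$ CP update is reconstructed from one factor matrix per mode (layer, head, head dimension, embedding), and its parameter count is compared with applying a rank-$R$ LoRA update to one $d_{model} \times d_{model}$ matrix per layer.

```python
import numpy as np

L, n_h, d_h, d_model, R = 12, 12, 64, 768, 16  # assumed ViT-B-like shapes

# One CP factor matrix per tensor mode.
A = np.random.randn(L, R)        # layer mode
B = np.random.randn(n_h, R)      # head mode
C = np.random.randn(d_h, R)      # head-dimension mode
D = np.random.randn(d_model, R)  # embedding mode

# Reconstruct the 4D weight update: a sum over the rank of outer products.
delta_W = np.einsum('lr,hr,cr,dr->lhcd', A, B, C, D)
assert delta_W.shape == (L, n_h, d_h, d_model)

cp_params = R * (L + n_h + d_h + d_model)  # factors shared across all layers/heads
lora_params = L * 2 * R * d_model          # rank-R LoRA, one matrix per layer
print(cp_params, lora_params)              # CP uses far fewer parameters here
```

Because the CP factors are shared across all layers and heads, the parameter count grows additively in the mode sizes, $R(L + n_h + d_h + d_{model})$, rather than multiplicatively, which is where the savings over per-layer matrix adapters come from.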
Summary: This paper introduces Canonical Rank Adaptation (CaRA), an efficient fine-tuning strategy for Vision Transformers (ViT). The key finding is that leveraging tensor mathematics can effectively address the high dimensionality of Multi-Head Attention (MHA), enhancing fine-tuning performance. The main results demonstrate that CaRA outperforms existing methods on visual classification benchmarks such as VTAB-1k and FGVC, while using fewer trainable parameters. The core algorithmic idea is to tensorise the Transformer into two tensors, used for the MHA projection layers and the feedforward layers respectively, and then fine-tune with low-rank updates in the form of the Canonical Polyadic Decomposition (CPD). ## update after rebuttal: The authors' rebuttal addressed most of the issues. Although the method is novel, its computational cost is not superior, so I maintain my original score of 3. Claims And Evidence: The claims are supported by derivations and citations. Methods And Evaluation Criteria: The proposed method and evaluation criteria are meaningful for visual classification problems. Tensorising the Transformer and using CPD (Canonical Polyadic Decomposition) for low-rank updates fully considers the high-dimensional characteristics of MHA (Multi-Head Attention), enabling more efficient feature capture. Theoretical Claims: The paper provides a detailed explanation of the gradient derivation for CaRA. Experimental Designs Or Analyses: 1. The method has not been fine-tuned and tested on larger models such as ViT-L or ViT-H, so it cannot be proven whether it maintains high accuracy on these larger models. 2. When calculating the experimental mean, it is not reasonable to first compute the average accuracy for the three dataset groups of Natural, Specialized, and Structured, and then calculate the average of these averages. The correct approach should be to average the results across all 19 datasets. 
Therefore, the accuracy of the CaRA method should be 74.14%. Supplementary Material: Yes. I have read all the content except for the code section. Relation To Broader Scientific Literature: In terms of tensor representation, existing research on tensor decomposition for fine-tuning was referenced, and based on this, a new tensorization and low-rank update method was proposed, which improves upon the deficiencies of existing methods in dealing with the high-dimensional nature of MHA. Essential References Not Discussed: This method bears some similarity to the FacT method, and FacT should be introduced in the related work section rather than solely when comparing methods. Other Strengths And Weaknesses: Strengths: The language is fluent and the article is easy to read. The method is innovative, and the paper provides a mathematical derivation for it. Weaknesses: There are deficiencies in the experimental design, which prevent it from effectively demonstrating the performance of the method. Other Comments Or Suggestions: None. Questions For Authors: 1. What is the effect of this method on language models? 2. During the fine-tuning process, multiplication turns into a three-dimensional tensor multiplication. Will the computational load become excessively large? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the insightful review and for recognising the innovation in our method and finding it meaningful for the vision classification problem. We appreciate your positive feedback. Below are the responses to your review. ***The method has not been fine-tuned and tested on larger models such as ViT-L or ViT-H, so it cannot be proven whether it can maintain high accuracy on these larger models*** Following the ViT-L benchmark from VeRA [1], we evaluate our proposed CaRA method against other low-rank fine-tuning methods on four datasets. We use one A100 GPU for fine-tuning. For this experiment, we performed hyperparameter sweeps over the learning rate, the scale ($\alpha$), and various schedulers. For reproducibility, the camera-ready version will include more details of the hyperparameters for CaRA, PiSSA, and DoRA, and the code will be available upon acceptance. The rank for LoRA, DoRA, and PiSSA is 8, and the ranks for VeRA and CaRA are 256 and 64, respectively. Note: the benchmark in [1] only evaluates LoRA and VeRA. We also trained and evaluated PiSSA and DoRA at the request of the other reviewers.

| Method | ViT-L #Params ($\downarrow$) | CIFAR100 ($\uparrow$) | Food101 ($\uparrow$) | Flowers102 ($\uparrow$) | Resisc45 ($\uparrow$) | Mean ($\uparrow$) |
|-|-|-|-|-|-|-|
| Head | - | 79.4 | 76.5 | 98.9 | 67.8 | 80.65 |
| Full | 303.3M | 86.8 | 78.7 | 98.8 | 79.0 | 85.83 |
| LoRA | 786.4K | 87.0 | 79.5 | 99.1 | 78.3 | 85.98 |
| VeRA | **61.4K** | 87.5 | 79.2 | 99.2 | 78.6 | 86.13 |
| PiSSA | 786.4K | 87.11 | 79.55 | **99.72** | 78.55 | 86.24 |
| DoRA | 860.2K | 87.93 | 81.15 | 99.57 | 80.33 | 87.25 |
| CaRA (ours) | 75.6K | **89.36** | **83.65** | 99.63 | **82.43** | **88.77** |

The table demonstrates that, overall, CaRA significantly outperforms existing baseline methods with only a small fraction of trainable parameters ($\approx 10\%$ of LoRA's parameters). On the Flowers102 dataset, PiSSA performs better than CaRA by only a slight margin. 
Additionally, CaRA achieves state-of-the-art accuracy on the CIFAR100, Food101 and Resisc45 datasets. The accuracy gains combined with CaRA's efficiency demonstrate that CaRA also maintains high accuracy for fine-tuning larger vision transformers. ***When calculating the experimental mean, it is not reasonable to first compute the average accuracy for the three datasets of Natural, Specialized, and Structured, and then calculate the average of these averages.*** We follow the approach used in prior works [2,3] to maintain consistency with the literature. However, we understand your point and are happy to add one more column to the table with the overall mean. ***This method bears some similarity to the FacT method, and FacT should be introduced in the related work section rather than solely when comparing methods.*** Thank you for the suggestion. We introduce and cite FacT in line 123 of the related work section, but we do not explicitly mention the name. We will update the section and introduce FacT by its name in the camera-ready version. ***What is the effect of this method on language models?*** Thank you for the suggestion. We would like to emphasise that our study focuses on image classification tasks and we do not make claims regarding language tasks. We already provide additional results for large vision transformers (ViT-L), and we expect that it works for language models as well. Due to the short response time, we cannot provide an evaluation for language models, but we will include them in the paper or supplementary material for the camera-ready version. ***During the fine-tuning process, multiplication turns into a 3-dimensional matrix multiplication. Will the computational load become excessively large?*** As shown in the table below, CaRA's training time is similar to DoRA and higher than LoRA. However, this increase in computational effort can be mainly attributed to the Python implementation of the Tensorly package. 
While the implementation makes it easy to use, it is not the most efficient implementation. A more efficient implementation could follow [4]. Nevertheless, the computational load remains small. In terms of memory usage, CaRA performs similarly to FacT.

| Method | Walltime (seconds) ($\downarrow$) | VRAM (GB) ($\downarrow$) |
|-|-|-|
| LoRA | **165.7560** | **20.1079** |
| DoRA | 204.0761 | 28.0645 |
| FacT-TT | 178.2826 | 20.2464 |
| FacT-TK | 180.5781 | 20.2443 |
| CaRA (ours) | 206.5548 | 21.3740 |

References: [1] Kopiczko, Dawid J., Tijmen Blankevoort, and Yuki M. Asano. "Vera: Vector-based random matrix adaptation." ICLR 2024. [2] Dosovitskiy, Alexey, et al. "An image is worth 16x16 words: Transformers for image recognition at scale." ICLR 2021. [3] Jia, Menglin, et al. "Visual prompt tuning." ECCV 2022. [4] Yang, Zi, Junnan Shan, and Zheng Zhang. "Hardware-efficient mixed-precision CP tensor decomposition." arXiv preprint arXiv:2209.04003 (2022).
Recommendations with Sparse Comparison Data: Provably Fast Convergence for Nonconvex Matrix Factorization
Accept (poster)
Summary: This paper addresses comparison-based recommendation using non-convex matrix factorization. While this approach is more efficient than convex optimization, it remains challenging. The authors observe that although finding the global minimum may be difficult, the non-convex function behaves convexly near the true solution. Claims And Evidence: This work begins optimization with a noiseless warm start, which seems unlikely to be practical in industrial settings or on real-world datasets. Can the authors also compare results using naive gradient descent versus the projected method? Methods And Evaluation Criteria: The authors test their approach by simulating a random ground-truth matrix drawn from a normal distribution, justified by the presence of popularity bias in user preferences. Figure 1 shows that convergence becomes more linear for large datasets. Theoretical Claims: I reviewed the theoretical proofs at a high level. The authors present a clear and logical flow to justify their approach. Experimental Designs Or Analyses: Figure 1 summarizes the simulation results, which align well with the theoretical hypothesis and proofs. Supplementary Material: no Relation To Broader Scientific Literature: This work proposes a probabilistic model to address comparison-based recommendation using a non-convex loss. One key assumption is that the rating matrix is noise-free or that optimization begins with a warm start. While this assumption may limit real-world applicability, the work can serve as a solid baseline for future studies to explore probabilistic approaches with cold starts and noisy data, which are more common in both industry and academia. Additionally, given the ground-truth matrix's structure—representing user preferences between two items—future research could extend the model to support k-item comparisons. 
Essential References Not Discussed: Although the paper references prior work that informs its approach, it lacks a comparison with other baseline methods to evaluate its results. Other Strengths And Weaknesses: 1. The paper is well-written and easy to follow. 2. It provides solid theoretical proofs. 3. The paper's first assumption is that the rating matrix is noiseless. Given that most real-world recommendation datasets are noisy, at least to some degree, this assumption may not be realistic, particularly in industry settings. 4. The results in Figure 1 do not account for other hyperparameters, such as the projection step. Other Comments Or Suggestions: no Questions For Authors: Most of the theoretical proofs assume a non-noisy setting. I wonder how introducing a hyperparameter to adjust for noise in the ground truth matrix would affect the results in Figure 1. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your careful review and appreciation of our writing and theoretical analysis. We understand your concerns regarding the warm start and noiseless assumptions and address them below. We also address your question about the projection step. Before addressing your concerns, we restate our main result: in a neighborhood of the ground-truth solution, the loss function is strongly convex. Thus, when initialized within this neighborhood, projected gradient descent converges linearly to the ground truth. This is the first such result for the learning from comparisons problem. In fact, the main takeaway of our paper is a very practical message: it is possible to learn personalized user preferences from comparison data in a computationally efficient and statistically efficient manner. As you noted, our work serves as a baseline for future theoretical studies to establish guarantees with cold starts and noisy data. We hope this will be the case and are actively working toward such generalizations, though these extensions are nontrivial. The challenges can be seen in the matrix completion literature, where initial assumptions, similar to ours, were later relaxed by follow-up work (see Section 1.1 of our paper). However, broadly speaking, these assumptions are largely for analytical convenience; they are not needed in practice. Our simulations aim to highlight this fact. Regarding the projection step: we need it to ensure that iterates remain in the set of incoherent matrices; this, in turn, is necessary for our theoretical analysis. If iterates remain incoherent naturally via gradient descent, projection is unnecessary. Our simulations (Figures 1a, 1c) confirm that vanilla gradient descent also converges to the true solution. The strong overlap of error curves for projected and vanilla gradient descent indicates that the projection step hardly perturbs the iterates. 
In matrix completion, early theoretical works assumed projection steps or regularization to maintain incoherence, but later research showed these were not necessary. Ma et al. (2020) demonstrated that gradient descent has implicit regularization, leading to convergence without explicit projection. Extending this implicit regularization result to learning from comparisons is a promising research direction. The warm start assumption ensures initialization in a strongly convex region, allowing us to prove linear convergence of gradient descent. However, our experiments (Figures 1a, 1c) indicate that this assumption is not needed in practice, as even random initialization leads to linear convergence. This phenomenon has also been observed in matrix completion. Early work (2010–2015) on nonconvex matrix factorization assumed a warm start to establish theoretical guarantees, which was later relaxed by Ge et al. (2016–2017), who showed that all local minima are global. Since gradient descent finds local minima, this explains why warm starts are not necessary in practice. However, their analysis relies on quadratic loss functions, making extensions to our setting nontrivial. Furthermore, global analyses of matrix completion do not provide convergence rates, whereas our work guarantees linear convergence within a specific region. The noiseless assumption is primarily for analytical convenience; our algorithm can be applied directly to noisy data without modification. This assumption enables us to show exact convergence to the ground truth. Without it, convergence can only be guaranteed up to some statistical estimation error. Analyzing the noisy setting is an important but separate research problem that will likely build on the key ideas introduced in our work. While our original submission lacked simulations on noisy data, we have since conducted extensive experiments showing that performance degrades gracefully with noise (noise in the scores and noisy comparisons). 
Specifically, when adding a small noise matrix to the ground-truth low-rank matrix, gradient descent converges to a point where the residual error is nearly equal to the norm of the noise matrix. This suggests our method effectively learns the best low-rank approximation of the noisy ground truth. We will include these results in the revised paper. (Unfortunately, we are not able to convey meaningful results through markdown tables here.) In summary, we assume a warm start, noiseless observations, and the projection step to simplify the analysis. However, our experiments suggest these assumptions are not necessary in practice. Even without them, simple gradient descent converges efficiently, making learning from comparisons a practical and robust approach. We hope this rebuttal (along with the other rebuttals) addresses the major concerns you have of our paper. We shall be happy to answer any other questions you may have.
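A toy sketch of the optimization scheme discussed in this rebuttal: gradient descent on factorized matrices under a BTL-style comparison model, with a row-norm-clipping projection standing in for the incoherence projection. This is an illustrative reconstruction, not the authors' code; the choice model, all sizes, the step size, and the projection cap are assumptions, and errors are measured after row-centering since only within-row utility differences are identifiable from comparisons (the shift invariance the paper's second projection removes).

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy instance (all sizes illustrative) ---
n_users, n_items, rank, m = 50, 60, 2, 20000
U_true = rng.normal(size=(n_users, rank))
V_true = rng.normal(size=(n_items, rank))
X_true = U_true @ V_true.T                       # ground-truth utility matrix

# Comparison samples: user i prefers item j over item k
# with probability sigmoid(X[i, j] - X[i, k])  (BTL-style choice model).
users = rng.integers(0, n_users, m)
j_idx = rng.integers(0, n_items, m)
k_idx = rng.integers(0, n_items, m)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
wins = rng.random(m) < sigmoid(X_true[users, j_idx] - X_true[users, k_idx])

def grad_step(U, V, lr):
    """One gradient step on the average negative log-likelihood."""
    diff = np.einsum('ir,ir->i', U[users], V[j_idx] - V[k_idx])
    resid = sigmoid(diff) - wins                 # dNLL/d(diff) per sample
    gU = np.zeros_like(U); gV = np.zeros_like(V)
    np.add.at(gU, users, resid[:, None] * (V[j_idx] - V[k_idx]))
    np.add.at(gV, j_idx, resid[:, None] * U[users])
    np.add.at(gV, k_idx, -resid[:, None] * U[users])
    return U - lr * gU / m, V - lr * gV / m

def clip_rows(M, cap):
    """Stand-in for the incoherence projection: clip row norms to `cap`."""
    norms = np.linalg.norm(M, axis=1, keepdims=True)
    return M * np.minimum(1.0, cap / np.maximum(norms, 1e-12))

def centered_error(X):
    """Relative error after removing per-user shifts."""
    C = lambda A: A - A.mean(axis=1, keepdims=True)
    return np.linalg.norm(C(X) - C(X_true)) / np.linalg.norm(C(X_true))

# Warm start near the truth, as in the theory (perturbation scale assumed).
U = U_true + 0.3 * rng.normal(size=U_true.shape)
V = V_true + 0.3 * rng.normal(size=V_true.shape)
cap = 3.0 * np.sqrt(rank)
err0 = centered_error(U @ V.T)
for _ in range(200):
    U, V = grad_step(U, V, lr=25.0)
    U, V = clip_rows(U, cap), clip_rows(V, cap)
err = centered_error(U @ V.T)
```

Starting from the warm start, the centered error should fall well below its initial value; consistent with the rebuttal, replacing the projected update with plain gradient descent (dropping `clip_rows`) should barely change the trajectory, since the cap is rarely active near the truth.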
Summary: This paper focuses on the nonconvex learning problem in recommendation systems based on pairwise user comparison feedback, which has often been formulated as a convex optimization over utility matrices in prior literature. The authors propose a nonconvex matrix factorization approach to model pairwise comparisons as noisy evaluations based on the difference in latent utilities and to solve a maximum likelihood formulation over the latent factors using a nonconvex optimization approach. The main theoretical contribution is a proof that, under a warm start and in a sparse data regime, the negative log-likelihood objective exhibits a locally strong convexity–like behavior. This property guarantees that a projected gradient descent method converges exponentially fast to a solution equivalent to the ground truth. The paper provides detailed proofs employing matrix concentration inequalities (in particular, a matrix Bernstein inequality) and supports the theoretical findings with simulation results on synthetic data. ## update after rebuttal I will keep the scores unchanged. Claims And Evidence: * **Claims:** The key claim of this paper is that the nonconvex formulation for learning from sparse comparison data exhibits (restricted) strong convexity in a neighborhood of the true solution, which guarantees exponential convergence of projected gradient descent given a warm start. Additional claims include sample complexity bounds and explicit dependence on parameters such as the incoherence and condition number. * **Evidence:** The authors back these claims with detailed theoretical derivations (Theorem 3.1, supporting Lemmas 4.1–4.7 and their proofs in the appendix) accompanied by simulation experiments that validate the exponential convergence rate. All the claims are well motivated and supported by rigorous proofs. 
Methods And Evaluation Criteria: * **Methods:** The authors propose a projected gradient descent method with two projection steps: one to enforce incoherence of the iterates and one to remove the shift invariance (due to comparing only differences). * **Evaluation Criteria:** The theoretical results are evaluated using concentration inequalities and error bounds derived via the matrix Bernstein inequality. The experimental evaluation measures convergence speed via normalized Frobenius norm errors over iterations. Theoretical Claims: I did my best to review the derivation of key intermediate results (particularly Lemma 4.3, 4.4, 4.5) under the stated assumptions, and found the arguments to be mathematically sound given the stated assumptions. Experimental Designs Or Analyses: The experiments use synthetic data in both low- and high-dimensional settings, simulating pairwise comparisons using a known ground-truth matrix. The simulation results confirm that, when using the recommended step-size and sample complexity, the algorithm exhibits linear (exponential) convergence as predicted by the theory. While the synthetic experiments provide compelling evidence for the theoretical findings, given that recommender systems are highly practical applications and considering that most theoretical works mentioned in this paper have been tested on real-world datasets, the absence of experiments on real-world comparison data is a potential concern. Supplementary Material: I have reviewed the appendix of this paper which mainly contains the proofs of intermediate results for the paper's main theoretical contributions. Relation To Broader Scientific Literature: The paper is situated at the intersection of nonconvex matrix factorization and learning from comparison data. It represents an important extension of [1], addressing the scalability issues of previous works [2,3] on large-scale datasets. [1] Negahban, S., Oh, S., Thekumparampil, K. K., and Xu,J. 
Learning from comparisons and choices. Journal of Machine Learning Research, 19(40):1–95, 2018. [2] Park, D., Neeman, J., Zhang, J., Sanghavi, S., and Dhillon, I. Preference completion: Large-scale collaborative ranking from pairwise comparisons. In Proceedings of the 32nd International Conference on Machine Learning, 2015. [3] Rendle, S., Freudenthaler, C., Gantner, Z., and Schmidt Thieme, L. BPR: Bayesian personalized ranking from implicit feedback. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, 2009. Essential References Not Discussed: Based on my knowledge, the related work discussed in this paper has comprehensive coverage and adequately demonstrates the paper's contributions. Other Strengths And Weaknesses: The theoretical guarantees provided in this paper for the nonconvex formulation—especially in the sparse data regime—are a clear and noteworthy contribution to studies on pairwise comparisons in recommendation systems. However, while the theoretical aspects are strong, the lack of empirical validation on real-world comparison datasets somewhat limits the practical implications of the work, particularly considering the applied nature of recommender systems. Other Comments Or Suggestions: It is recommended that the authors demonstrate empirical validation on real-world datasets. If existing datasets do not meet the assumptions of this paper, or if there are other reasons that only synthetic experiments can be conducted, please explicitly explain these limitations. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your time and effort in reviewing our paper and recognizing its theoretical contributions. We also acknowledge your suggestion that experiments on real-world datasets would further strengthen our work. While we generally agree, we have chosen not to include such experiments here for the following reasons. First, the primary focus of this paper is on analyzing the nonconvex optimization problem that arises in the context of learning from comparisons. The core challenge is to establish that the loss landscape has favorable structural properties, ensuring gradient descent quickly converges to the global minimum. To focus on this aspect, we assume a low-rank matrix model for utilities and an oracle choice model, both of which are widely accepted in the recommender systems literature. While real-data experiments could complement our results, they would also introduce confounding factors such as how well the assumed model fits reality and the extent of noise in the data. To avoid this confusion, we work exclusively with synthetic data, where these factors do not crop up. Through our experiments, we demonstrate that some of the assumptions needed for the theoretical analysis (warm start, projection step) are not necessary in practice, which improves the practical viability of this method. Our work demonstrates that learning personalized preferences from comparison data is both statistically efficient (low sample complexity) and computationally efficient (exponentially fast gradient descent). Second, we are not the first to study the problem of learning personalized rankings from comparison data; both Rendle et al. (2009) and Park et al. (2015) study this very problem. They use the same factorized optimization approach as our work and test it on real-world datasets. In terms of the method, therefore, there is no substantial difference between this prior work and ours. 
However, before our work, these methods were essentially heuristics. We provide a solid theoretical foundation for why gradient descent on factorized matrices leads to good solutions, a nontrivial result given the nonconvexity of the problem. Despite extensive prior work on nonconvex optimization—especially in matrix completion—existing results do not apply to the learning-from-comparisons setting. Our work addresses this open problem. It is also useful to draw a parallel with the research on classical matrix completion: empirical success of matrix factorization for recommendations (Mnih & Salakhutdinov, 2007; Koren et al., 2009) preceded theoretical justification, which emerged gradually through successive refinements. Finally, an important limitation in the field is the lack of explicit comparison datasets. To the best of our knowledge, there is no dataset in the regime of recommender systems where users are *explicitly* asked to compare two items according to their preference. Note that both Rendle et al. (2009) and Park et al. (2015) *infer* comparisons from other forms of data. In Rendle et al. (2009), items that a user has viewed/purchased are interpreted to be more preferred than those that the user has not viewed. In Park et al. (2015), a user is said to have preferred one item over another if they rate the first item higher than the second. While these inferred comparisons have their merits, our work suggests that explicitly collecting comparison data could also be an effective approach; it could potentially be more effective than learning from ratings. Curating such a dataset and testing this hypothesis is an important direction for future work. In the absence of such datasets, however, we would have to resort to inferring comparisons from ratings/views, as done by prior work. Doing so would merely amount to reproducing the results of Rendle et al. (2009) and Park et al. (2015), given the large similarity of the methods. 
Thus, we have not performed such experiments here. We hope this rebuttal helps you better appreciate the contribution of our paper. We shall be happy to answer any other questions you may have.
Summary: In practical settings, users are often picking between their favorite of a few items. As such, we learn about a user’s preferences via the comparisons they made. Given features about the users and the items, the objective is to recover the low-rank matrix of information given data points of the format (user, (item 1, item 2), favorite expected outcome). Specifically in this setting, the authors assume that the outcomes are noiseless, meaning that in fact the expected outcome is revealed. Another assumption made is that we are given an initial matrix which is in an epsilon-ball in the Frobenius norm around the ground-truth matrix. The authors provide theoretical analysis of the non-convex formulation of this problem and provide an algorithm to solve the problem under the assumptions made. The authors also empirically validate their algorithm. Claims And Evidence: Yes, the claims are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the proposed methods make sense. Specifically, the algorithm is supported by carefully written theorems. Theoretical Claims: I have checked the correctness of the claims made in the paper. Experimental Designs Or Analyses: The experimental design for the most part is sound and valid. However, it would be enlightening also to include different values for $r$, the rank of the underlying matrix. Supplementary Material: Only up until Lemma A.3. Relation To Broader Scientific Literature: This paper addresses an open problem and provides an algorithm to solve it under some assumptions. Essential References Not Discussed: n/a Other Strengths And Weaknesses: Strengths: * The paper addresses an open problem, which is the matrix factorization from comparison data in the non-convex setting. Although a few simplifying assumptions are made in order to achieve this result, these are still meaningful steps in solving the more general problem. 
The algorithm’s dependence on the problem size is $O(nr^2 \log n)$ where $n$ is the number of samples and $r$ is the rank of the matrix, and converges exponentially fast. The algorithm itself makes two projections in order to stay in the correct region of matrices, and shows that these projections maintain the correct invariance throughout the algorithm. * The analysis is overall clear and easy to follow. Weaknesses: * Although the paper is for the most part clear, some improvements could be made in Sections 2.1 and 2.2 to properly motivate the problem setup. In particular, why should $Z^*$ be considered the ground truth matrix, rather than $X^*$? Also, it would be nice to add some motivation for the following definitions such as $Y^*$ as well, and what role these matrices play later on. Furthermore, it would be helpful to the reader to state exactly the definition of $A_k$. Other Comments Or Suggestions: See above Questions For Authors: See above Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank you for a thorough and positive review of our paper. Here, we address the couple of concerns you have raised. First, you mentioned "it would be enlightening also to include different values for $r$, the rank of the underlying matrix." We agree. Below, we give a table highlighting the estimation error as a function of the rank and the number of samples. We observe that to get the same estimation error, the number of samples grows (approximately linearly) with the rank. In the revised version of the paper, we shall include this experimental result in the form of a heatmap, where this relationship is easy to observe. Second, you mentioned "some improvements could be made in Sections 2.1 and 2.2 to properly motivate the problem setup." Thank you for this feedback, we shall improve these sections in the final version. Indeed, in the current form, some of the definitions may seem a bit arbitrary without further justification. To answer your specific questions: "Why should $Z^*$ be considered the ground truth matrix, rather than $X^*$? What is the motivation behind defining $Y^*$?" Both these questions are related. The analysis crucially relies on analyzing symmetric matrices $Y$, which can be factored as $Y = ZZ^T$. To elaborate: we work with terms of the form $\mathcal{D}(Y)$ (defined in (21)), and prove our concentration results for such terms; for these results, $Y$ must be a symmetric matrix. Fundamentally, however, we are trying to estimate the asymmetric matrix $X^* = U^*{V^*}^T$. To overcome this gap, we transform the problem of estimating $X^*$ to one of estimating $Y^*$ (a bigger, but equivalent matrix), which in turn is reduced to the problem of estimating $Z^*$ (because $Y^* = Z^* Z^{*T}$). Thus, ultimately, our loss function is in terms of $Z$ (see (12)). We briefly allude to the relation between these matrices in the first paragraph of Section 1.3, but we will reiterate these connections again in Section 2.1 of our revision. 
This particular transformation is first proposed in the paper of Zheng and Lafferty, and we borrow this idea from there. Finally, $A_1, A_2, \ldots, A_m$ are i.i.d. random matrices, all of the same form as $A$ (described in equation (8)). We now present a table highlighting the performance of our algorithm as a function of the underlying rank $r$ and the number of samples $m$. We generated a matrix of size 2000x3000 in the same manner as described in Section 5 of the paper. The numbers shown below are the estimation errors, averaged over ten independent runs. | Rank ↓ / Samples → | 30000 | 40000 | 50000 | 60000 | 70000 | 80000 | 90000 | 100000 | |--------------------|--------------|--------------|--------------|--------------|--------------|--------------|--------------|--------------| | 2 | 0.0606 ± 0.0037 | 0.0500 ± 0.0077 | 0.0285 ± 0.0103 | 0.0208 ± 0.0071 | 0.0105 ± 0.0029 | 0.0097 ± 0.0073 | 0.0062 ± 0.0019 | 0.0044 ± 0.0005 | | 3 | 0.0800 ± 0.0015 | 0.0723 ± 0.0041 | 0.0595 ± 0.0086 | 0.0445 ± 0.0092 | 0.0353 ± 0.0080 | 0.0164 ± 0.0038 | 0.0120 ± 0.0035 | 0.0086 ± 0.0017 | | 4 | 0.0955 ± 0.0009 | 0.0919 ± 0.0020 | 0.0837 ± 0.0024 | 0.0715 ± 0.0057 | 0.0507 ± 0.0082 | 0.0384 ± 0.0117 | 0.0329 ± 0.0111 | 0.0170 ± 0.0080 | | 5 | 0.1081 ± 0.0005 | 0.1072 ± 0.0010 | 0.1025 ± 0.0025 | 0.0888 ± 0.0056 | 0.0744 ± 0.0039 | 0.0547 ± 0.0091 | 0.0414 ± 0.0108 | 0.0298 ± 0.0121 | | 6 | 0.1190 ± 0.0004 | 0.1196 ± 0.0008 | 0.1160 ± 0.0020 | 0.1076 ± 0.0030 | 0.0960 ± 0.0041 | 0.0787 ± 0.0053 | 0.0590 ± 0.0110 | 0.0423 ± 0.0111 |
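The reduction described in this rebuttal, from estimating the asymmetric $X^* = U^*{V^*}^T$ to estimating the symmetric $Y^* = Z^*Z^{*T}$, can be checked numerically. The block layout below (stacking $U$ on top of $V$ to form $Z$) is the standard convention from Zheng and Lafferty and is assumed here for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2, r = 4, 5, 2
U = rng.normal(size=(n1, r))
V = rng.normal(size=(n2, r))
X = U @ V.T                 # the asymmetric matrix we ultimately want

Z = np.vstack([U, V])       # lifted factor of shape (n1 + n2, r)
Y = Z @ Z.T                 # symmetric lift: Y = Z Z^T

# Y is symmetric, and its off-diagonal block recovers X exactly,
# so estimating Z (hence Y) is equivalent to estimating X.
assert np.allclose(Y, Y.T)
assert np.allclose(Y[:n1, n1:], X)
```

The diagonal blocks of $Y$ carry $UU^T$ and $VV^T$, which is why concentration results stated for symmetric matrices become applicable after the lift.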
Clone-Robust AI Alignment
Accept (poster)
Summary: The paper evaluates the robustness of current RLHF algorithms in the presence of approximate clones and develops RLHF algorithms to enhance robustness in this regard. Claims And Evidence: Yes, I think most of the claims made in the submission are clear and convincing. However, the empirical experiments (case study) are not sufficient for me. Methods And Evaluation Criteria: See Claims And Evidence part. Theoretical Claims: I didn't check all the proofs in detail but the theorems provided seem reasonable and sound. Experimental Designs Or Analyses: The paper is mostly about theory. See Claims And Evidence part. Supplementary Material: I didn't check all the supplementary material in detail. Relation To Broader Scientific Literature: The paper provides a good vision for the current understanding of RM/RLHF. Essential References Not Discussed: N/A Other Strengths And Weaknesses: See Questions For Authors Other Comments Or Suggestions: N/A Questions For Authors: 1. Why is the focus of the paper RLHF rather than the reward model itself? 2. I am not quite familiar with the area. Could you please explain briefly how the theoretical framework can guide the practical implementation of training real LLMs? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your comments! Below we address your specific questions: > why the focus of the paper is RLHF rather than reward model itself? Our paper focuses on the step of the RLHF pipeline that takes as input a preference dataset and outputs a reward model. In RLHF, this reward model is then used to fine-tune LLMs. However, the focus of our paper (robustness to approximate clones) is a desirable property of the process of going from preference data to a reward model. We highlight implications for RLHF because RLHF is currently a very common application of reward modeling from preference data, but our results could be used in any application of reward modeling. > I am not quiet familiar with the area. Could you please explain briefly how the theoretical framework can guide the practical implementation of training real LLMs? To use our framework to train LLMs in practice, one would execute the following two steps. First, learn a reward function by computing the weighted MLE on the given pairwise comparisons; this is the part addressed (and potentially improved) by our paper. The second step is a standard policy optimization step, where (roughly speaking) a given base LLM is fine-tuned to maximize the expected reward from step 1, for example, through proximal policy optimization (PPO). Since the second step simply takes as input a base LLM and a reward model, the output of our weighted MLE can be directly plugged into this heavily studied pipeline.
Summary: The paper considers axiomatic AI alignment. More precisely, the paper is about Reinforcement Learning with Human Feedback (RLHF). As motivated by Conitzer et al. (2024), consistency with respect to clones is an interesting property for RLHF algorithms. In this paper, each alternative is identified with its context, i.e., some $d$-dimensional real vector that lies in some infinite set $S$ of finite volume. Roughly speaking, the goal is to aggregate the utility functions of individual voters (annotators) into a collective utility function. The catch is that we are only given a finite set $M \subseteq S$, query samples of the voter preferences over $M$, and want this aggregated utility function to be clone-proof. Each annotator has a utility function $r$ over the alternatives in $M$, and whenever asked to compare two alternatives, says that they prefer alternative $a$ over $b$ with probability $e^{r(a)} / (e^{r(a)} + e^{r(b)})$, known as the Bradley-Terry (BTL) model. Queries are two-alternative subsets of the form $\{a,b\}$, and we denote by $Q$ a (multi)set of queries where each possible query is contained at least once. For each query $q \in Q$, we choose an annotator uniformly at random and obtain a sample of the annotator's preference for that query. In total, we obtain a random dataset $D$ consisting of all queries in $Q$ and the respective responses. Given the query set $Q$ and the resulting dataset $D$, the goal is to find a social utility function $r$ that best models the collective preference of the annotators. The first result [Theorem 2.3] states that no algorithm can always output a collective preference $r$ that is equal to the mean reward function, i.e., $\mathbb{E}_i[r_i]$. This impossibility already holds true for two voters. 
Next [Theorem 2.5], the authors establish a relation between the average win rate and the regularized MLE, which is defined as the utility function $r^D$ minimizing $\frac{\lambda}{2} \sum_{x\in M} r(x)^2 - \sum_{x_1,x_2\in M} p_D(x_1 > x_2) \log\left( \frac{e^{r(x_1)}}{e^{r(x_1)} + e^{r(x_2)}} \right)$. The authors then introduce their core axiom, robustness to approximate clones, which says that for each $\delta > 0$ there should be an $\varepsilon > 0$ such that adding an alternative at distance at most $\varepsilon$ from an existing alternative changes the rewards of all alternatives by at most $\delta$, and the utility of the almost-clone is also similar to the original utility of the alternative that it almost clones. Clearly, by being Borda-like, MLE violates robustness to approximate clones [Theorem 3.2], as it even fails robustness to precise clones. To counteract this phenomenon, the authors introduce a Voronoi approach to define a weight distribution $w_D$ over the alternatives: each point of $S$ is projected to its closest alternative(s), and the weight $w_D(y)$ of an alternative $y$ is calculated from the volume of the set of all $x \in S$ that are projected onto $y$ [Definition 4.1]. The main result of the authors is that the reweighted MLE satisfies robustness to approximate clones [Theorem 4.2]. Analogously to Theorem 2.5, the authors present an identity involving the weighted average win rate and the weighted MLE estimator [Theorem 4.4], implying that the ordering induced by the weighted MLE estimator equals the ordering by weighted average win rate of the alternatives [Corollary 4.5]. Then, the authors argue that the weighted MLE approach is an approximation of the MLE over $S$ [Theorem 4.6] (this essentially boils down to swapping some sums and integrals). The authors then discuss a synthetic case study that illustrates the susceptibility of MLE to clones, while the weighted MLE unsurprisingly performs better. 
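A minimal sketch of the Voronoi reweighting of Definition 4.1, using Monte Carlo volume estimates; here $S$ is assumed to be the unit square, and all coordinates and tolerances are illustrative. It shows the key effect: an approximate clone splits the weight of the alternative it clones, so the pair's combined influence stays roughly that of a single alternative.

```python
import numpy as np

rng = np.random.default_rng(2)

def voronoi_weights(points, n_samples=200_000):
    """Monte Carlo estimate of the Voronoi weights: w(y) is proportional to
    the volume of the region of S (here the unit square) closest to y."""
    samples = rng.random((n_samples, points.shape[1]))
    d2 = ((samples[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)
    counts = np.bincount(nearest, minlength=len(points))
    return counts / n_samples

base = np.array([[0.2, 0.2], [0.8, 0.8]])
w_base = voronoi_weights(base)          # roughly [0.5, 0.5] by symmetry

# Add an approximate clone of the second alternative.
cloned = np.vstack([base, [0.8001, 0.8001]])
w_clone = voronoi_weights(cloned)

# The first alternative's weight is (nearly) unchanged, and the clone pair
# splits the weight that alternative 2 had on its own.
assert abs(w_clone[0] - w_base[0]) < 0.01
assert abs(w_clone[1] + w_clone[2] - w_base[1]) < 0.01
```

Under the unweighted MLE, by contrast, each copy of the clone would count fully, which is exactly the Borda-like failure mode behind Theorem 3.2.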
Claims And Evidence: Claims are of a mathematical nature and supported by proofs. Methods And Evaluation Criteria: The proposed weighted MLE is natural and makes sense for the problem at hand. Theoretical Claims: The proofs of Theorem 2.3 and Theorem 4.6 are sound. Experimental Designs Or Analyses: I did not. Supplementary Material: I read the appendices corresponding to the verified proofs, i.e., Appendix B and H. Relation To Broader Scientific Literature: The study of consistency w.r.t. clones is indeed an important task for AI alignment, see e.g., Conitzer et al. The proposed approach using Voronoi diagrams seems sensible and suits this task. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The findings of the paper would be a good addition to the conference. I like the question asked by the authors and think it is both quite natural and interesting. Further, I am a big fan of recent works combining social choice theory and AI alignment and think that this paper provides a very interesting twist on it. For the most part, the paper is also well written, and nice to read, even for a reader who is not necessarily well versed in the literature on LLMs. One criticism is that currently, the precise properties of $S$ are not defined in the preliminaries; there is no mention of $S$ being Borel-measurable or of finite(!) volume, which confused me for quite a bit and seems to be crucial for the paper if I am not mistaken. In several parts, the paper could also be more non-expert friendly, e.g., KL divergence is mentioned but not defined on page 4. Other Comments Or Suggestions: The authors consider weighting alternatives in $\mathbb{R}^d$ within a set $S$ of bounded volume. When moving from $S$ to the unbounded $\mathbb{R}^d$, there is a paper by Berriaud and Wattenhofer that considers this version of the problem. 
Most notably, I see some similarities between their axiom 6 (alpha-locality under the addition of clones) and the here-considered Definition 3.1 (robustness to approximate clones). Further, both papers use the idea of utilizing projections for integrals, and their formula for the $g$ function in Section 4 seems to resemble the approach taken in this paper for the weighted MLE. As the paper came out after the ICML deadline, this is of course concurrent work and not relevant to the judgment of the paper; however, in case of acceptance, the paper should still compare to it (to keep the academic record complete). ## update after rebuttal Thank you for the nice rebuttal! Questions For Authors: 1. It is unclear to me why it is desirable that the average win rate is the sum of the empirical win rate and the reward function itself. Is there some motivation for this? 2. In social choice, there is a strengthening of independence of clones called composition consistency. Roughly speaking, if one replaces an alternative with a component, then the probability of the alternative gets distributed to the alternatives in the component precisely in proportion to how the probabilities would have been if the component was viewed in isolation. Have you thought about whether this notion could make sense in this context? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your helpful comments and feedback! Below we address your specific questions: > It is unclear to me why it is desirable that the average win rate is the sum of the empirical win rate and the reward function itself. Is there some motivation for this? The original motivation for this result (Theorem 2.5) was to give better intuition for why the ranking induced by the MLE estimator is the same as the Borda Count ranking (which follows directly from the relationship between the estimated win rate and the empirical win rate). A secondary benefit of Theorem 2.5 is that it gives an additional motivation for the MLE solution. A natural alternative to using MLE estimation is to find the reward function that best matches the empirical win rates; Theorem 2.5 says that the BTL MLE solution is (almost) equivalent to matching empirical win rates. >In social choice, there is a strengthening of independence of clones called composition consistency. Roughly speaking, if one replaces an alternative with a component, then the probability of the alternative gets distributed to the alternatives in the component is precisely proportional to how the probabilities would have been if the component was viewed in isolation. Have you thought about whether this notion could make sense in this context? Mapping composition consistency to RLHF is difficult for a few reasons. In social choice, composition consistency is defined by running a social choice function twice on a set of rankings: once with the clone sets grouped together and once on the winning clone set. This is possible because social choice functions map rankings (over alternatives) to alternatives, so they can be applied iteratively. In RLHF, however, we are mapping pairwise comparisons to a reward function, so we would have a type error if we tried to use an analogous definition. Composition consistency also may not be a desirable property for RLHF. 
In contrast to traditional social choice, RLHF assigns a reward to every alternative, and therefore the order of alternatives does not matter as much as the actual rewards. Specifically, for a set of approximate clones, we do not care about the order of the clones (like in composition consistency), but instead we want all the clones to have similar reward values. In fact, for any output reward function that is continuous, any set of approximate clones will also have approximately the same reward, as desired. Therefore, composition consistency does not seem desirable/necessary in RLHF as long as the output reward function is continuous. > One criticism is that currently, the precise properties of $S$ are not defined in the preliminaries, there is no mention of being Borel-measurable or of finite(!) volume, which confused me for quite a bit and seems to be crucial for the paper if I am not mistaken. In several parts, the paper could also be more non-expert friendly, e.g., KL divergence is mentioned but not defined on page 4. Thank you for pointing this out. We certainly agree that $S$ must be finite and measurable, and we will add more specific properties of $S$ in the model section for the final version of the paper. We will also make sure to define any more niche technical terms used throughout. > ... there is a paper by Berriaud and Wattenhofer that considers this version of the problem... Thank you for bringing this to our attention! We will be sure to discuss this work in the final version of our paper. --- Rebuttal Comment 1.1: Comment: Thank you for the nice response!
Summary: The paper addresses a key challenge in LLM alignment, that of making sure that the RLHF model is unbiased. Specifically, the authors show that the distribution of data used to train the model can have a significant impact on how the RLHF model behaves, and as such it is prone to intentional or unintentional biases. This usually happens when the dataset is biased (the authors talk about duplicate or near-duplicate pieces of data). The authors propose the concept of "clone-robustness", meaning that the model training is immune to data clones present in the dataset. This is done by using a weighted MLE algorithm that assigns lower weights to alternative data points that are similar to other data points. Claims And Evidence: Yes, most claims are well supported, for example "standard RLHF is not robust to clones" 1. Theoretical proof is provided via Theorem 3.2 1. The case study in Section 5 also supports the claim above "weighted MLE ensures robustness to approximate clones" 1. Theoretical proof is provided via Theorem 4.2 1. Also backed up by the case study in Section 5 Couple of unproven claims: Generalizability of Weighted MLE 1. Tested only in a narrow set of scenarios (describe Paris), which makes the real-world effectiveness unclear 2. A specific weighting scheme is discussed, but it's unclear whether alternative schemes might perform better or worse Methods And Evaluation Criteria: The methods used (theoretical proof + experimental validation) seem appropriate for the problem at hand. However, the scope of the study is quite narrow, which means that it is unclear how the weighted MLE would perform on a diverse set of scenarios. The study simulates human preferences using an LLM as an annotator. An LLM might not accurately capture how humans actually annotate. Similarly, the diversity of annotator preferences is modeled using fixed categories.
One key missing piece is the lack of validation of the weighted methods against some standard RLHF datasets from industry or academia. Theoretical Claims: No, I did not look at the theoretical proofs in detail. Experimental Designs Or Analyses: ### Strengths 1. Directly Tests Clone Robustness: The controlled introduction of cloned responses effectively isolates the impact of near-duplicates on RLHF training. 1. Uses Embedding-Based Similarity Measures: The use of OpenAI’s text-embedding-3-small model to represent response similarities is a reasonable approximation of how RLHF embeddings work in real-world AI training. 1. Quantitative Analysis with Win Rate Comparisons: The study evaluates how reward scores shift across different topic categories (food, art, romance), with error bars for variance. ### Weaknesses 1. Use of LLMs as Annotators Instead of Human Feedback: The study simulates human preference data using an LLM (GPT-4o-mini) instead of actual human annotators. 1. Narrow Scope of Dataset (Single Prompt: “Describe Paris”): The experiment only tests one question, meaning results may not generalize across different types of RLHF tasks (e.g., safety-critical alignment, long-form reasoning) 1. No Evaluation on Real-World RLHF Datasets: The datasets used are synthetic, and the paper does not benchmark performance on real RLHF datasets. 1. Limited Statistical Analysis: The paper visually presents reward differences (e.g., Figure 3 & 4) but does not conduct rigorous statistical significance tests. 1. No Robustness Testing for Weighted MLE with Alternative Weighting Schemes: The experiment only evaluates one version of Weighted MLE with a fixed weighting function. Supplementary Material: Yes, looked at I.2. 
Case Study Preference Dataset Generation Relation To Broader Scientific Literature: Tideman (1987): This paper builds upon Tideman's introduction of "independence of clones" by adapting it from voting theory to RLHF, proposing a new algorithm (weighted MLE) that ensures robustness to approximate clones. Elkind et al. (2010, 2012): The authors extend Elkind et al.'s studies on manipulation through cloning by applying similar ideas to RLHF, highlighting vulnerabilities in standard RLHF algorithms and motivating their new robust solution. Conitzer et al. (2024): This paper elaborates on Conitzer et al.'s suggestion that independence of clones is important for RLHF, providing concrete examples and a new algorithm addressing this issue. Xu et al. (2023): The authors share Xu et al.'s concern about duplicates in RLHF datasets but extend their results beyond dichotomy models and three-way comparisons to standard pairwise comparisons. Siththaranjan et al. (2023): This paper extends Siththaranjan et al.'s insights about regularized MLE and average win rates by proving a stronger theoretical relationship, and builds upon their impossibility result for diverse preferences by showing an even stronger impossibility result. Essential References Not Discussed: Christiano et al. (2017): "Deep reinforcement learning from human preferences" (NeurIPS 2017): This foundational paper introduced RLHF, demonstrating that models can learn from pairwise human preferences. The authors critique RLHF's vulnerability to dataset biases, but do not cite Christiano et al. (2017), where these concerns first emerged. Other Strengths And Weaknesses: The key contributions / strengths are 1. formal definition of robustness to approximate clones 1. proving standard MLE is not clone robust 1. proposing and proving that weighted MLE works Key gaps are 1. lack of validation / benchmarking against real-world RLHF 2. lack of discussion on weighting schemes 3.
lack of comparison with alternative solutions for clone robustness Other Comments Or Suggestions: None Questions For Authors: 1. Have you tested Weighted MLE on real-world RLHF datasets? Please share results if so. 1. How does Weighted MLE perform across different types of RLHF tasks (e.g., factual questions, multi-turn dialogue)? 1. Could you provide real-world examples where approximate clones have distorted RLHF in deployed AI systems? Ethical Review Concerns: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your comments! Below we address your specific questions: > Have you tested Weighted MLE on real-world RLHF datasets? Please share results if so. How does Weighted MLE perform across different types of RLHF tasks (e.g., factual questions, multi-turn dialogue)? Could you provide real-world examples where approximate clones have distorted RLHF in deployed AI systems? This paper is primarily a theoretical contribution to the field of RLHF, as we propose a theoretical property and prove results about that property for current and new algorithms. While we included the case study as a proof-of-concept for the proposed weighted MLE, more intensive experiments using the weighted MLE in different applications are beyond the scope of our paper. We do, however, think this is a very important topic for future work, and we hope to provide sufficient motivation and information for practical applications of the weighted MLE in the future. > Key gaps are > - lack of discussion on weighting schemes > - lack of comparison with alternative solutions for clone robustness Because robustness to clones in the context of RLHF was introduced in this paper, there is no previous work that has alternative solutions for this problem. There are voting rules from traditional social choice that are independent of clones, and these could potentially be adapted to the RLHF setting. However, such solutions would be very different from the current MLE estimation and may be less practical. We briefly mention other weighting schemes at the end of the discussion, specifically that the $w(\cdot)$ function can potentially be replaced with other functions that upweight more unique alternatives. We are happy to include more discussion on both of these points in the final paper.
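To make the weighting idea concrete, here is a minimal sketch of one possible down-weighting scheme together with a weighted BTL negative log-likelihood. This is purely illustrative: the cosine-similarity threshold, the inverse-duplicate-count weights, and the use of the product of per-alternative weights for each comparison are assumptions made for the sketch, not the paper's exact $w(\cdot)$.

```python
import numpy as np

def clone_weights(embeddings, threshold=0.9):
    # Weight each alternative by 1 / (number of alternatives whose cosine
    # similarity to it is at least `threshold`, itself included), so that a
    # group of near-duplicates shares a single "vote".
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    counts = (X @ X.T >= threshold).sum(axis=1)
    return 1.0 / counts

def weighted_btl_nll(rewards, comparisons, weights):
    # Weighted negative log-likelihood under the Bradley-Terry-Luce model;
    # each (winner, loser) pair is scaled by the product of its two weights
    # (one plausible combination rule, assumed here for illustration).
    nll = 0.0
    for winner, loser in comparisons:
        p_win = 1.0 / (1.0 + np.exp(rewards[loser] - rewards[winner]))
        nll -= weights[winner] * weights[loser] * np.log(p_win)
    return nll
```

With two near-identical alternatives and one distinct one, `clone_weights` assigns 0.5 to each member of the pair and 1.0 to the singleton, so duplicating an alternative leaves the total influence of its topic roughly unchanged.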
Summary: This paper mainly focuses on the problem of unbalanced input datasets in RLHF, which is caused by adversarial manipulation or inadvertent repetition. The key motivation is to make RLHF robust to non-uniformly distributed datasets. Inspired by social choice theory, they introduce robustness to approximate clones, a desirable property of RLHF algorithms which requires that adding near-duplicate alternatives does not significantly change the learned reward function. Claims And Evidence: They show that the standard RLHF algorithm based on regularized maximum likelihood estimation (MLE) fails to satisfy this property. In contrast, a weighted MLE can alleviate this problem. Methods And Evaluation Criteria: A voting rule that is robust to the addition of duplicate alternatives is important; the authors describe this as satisfying "independence of clones". Informally, a voting rule is independent of clones if after adding an alternative a', which is equivalent to another alternative a, the output of the voting rule does not change. In RLHF, there do exist "approximate clones", namely two alternatives which are very close by a given distance metric and for which all annotators have very similar values, where the distance metric depends on the nature of the alternatives. The proposed new training objective is simple: down-weighting alternatives that are similar to other alternatives (and therefore provide less new information) and up-weighting alternatives that are different from other alternatives (and therefore provide more new information). Theoretical Claims: As an extension, if n>2 in Theorem 2.3, will the proof still stand? Experimental Designs Or Analyses: The experiment is a bit of a toy setting. Supplementary Material: Yes. Additional related work Relation To Broader Scientific Literature: This work can give some inspiration to practical reward model training pipeline optimization, especially in handling data imbalance.
Essential References Not Discussed: NA Other Strengths And Weaknesses: The presentation is great. The motivation for studying unbalanced datasets is clearly explained, with examples provided. One question concerns the necessity of removing near duplicates. Other Comments Or Suggestions: NA Questions For Authors: "Fundamentally, the mandate of RLHF algorithms is to solve a preference aggregation problem". Could you please share your insight on why it is related to "aggregation"? In my opinion, it is just to generalize preferences even when conflicting preferences exist. One question about "near duplicates": Sometimes these near duplicates are necessary in practice, because some topics do have a heavy weight in the overall topic distribution. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your helpful comments and feedback! Below we address your specific questions: > As an extension, if n>2 in Theorem 2.3, will the proof still stand? Yes, the results do extend for $n > 2$. Learning a reward function can only become information theoretically harder for $n > 2$ because there are more variables that need to be estimated. Therefore, the same impossibility result holds for larger $n$ as well. > "Fundamentally, the mandate of RLHF algorithms is to solve a preference aggregation problem". Could you please share your insight on why it is related to "aggregation"? In my opinion, it is just to generalize preference even given conflict preferences existed. In the field of social choice, preference aggregation refers to the problem of taking voter preferences that are potentially conflicting, and *aggregating* them into a single output that captures the voters' preferences. Mapping this onto RLHF, the annotators may have conflicting preferences, and the goal is to find a single reward function that captures the annotators' preferences. Therefore, "aggregation" in RLHF refers to the aggregation of all of the individual annotator preferences into one single reward function that can be used for LLM tuning. > One question towards "near duplicate": Sometimes these near duplicates are necessary in practice, because some topic do have a heavy weight among the entire topic distribution. This is a great point, and we thought about this a lot while writing the paper. In practice, we definitely expect that some topics would be more heavily weighted in the overall topic distribution. In fact, having near duplicates in the dataset can be good, because more comparisons involving two common response topics gives a better estimate of the annotators' preferences between these two topics. However, we believe it is undesirable for an RLHF algorithm to reward or punish a topic based on its weight in the topic distribution. 
Instead, we want the final reward value for an answer topic to only depend on the annotators' preferences for that topic (and not on the topic distribution in the data set). In summary, we don't think that near duplicates are inherently bad -- we just want the final reward function to be stable regardless of the topic distribution for the observed preference data set.
Objective drives the consistency of representational similarity across datasets
Accept (poster)
Summary: To compare representation spaces through representational similarity analysis (RSA) or its close relative in machine learning, centered kernel alignment (CKA), a sample of data is embedded in two different spaces, and the pairwise similarities of all representations in each space is used as a fingerprint for its information content. The CKA or RSA value critically depends on the dataset from which the sample was drawn. The current submission looks at the correlation of CKA values across datasets to draw conclusions about the similarities of vision models, or vice versa (correlation of CKA across models for conclusions about datasets), with motivation in large part to check the recent Platonic representation hypothesis (i.e. that the representation spaces of foundation models are converging). ## Update after rebuttal The authors have addressed all of my main concerns; I appreciate the clarity of the rebuttal and the additional experiments to test the hypotheses I raised. There are interesting results and the analysis is sound. I have adjusted my score from 3->4. Claims And Evidence: The claims are descriptive of the correlations the authors find and are supported by the results. Methods And Evaluation Criteria: The first major analysis aggregates across ~20 datasets to compare models, but the rationale for this aggregation warrants further scrutiny. Since the datasets are an arbitrary selection (not necessarily representative of any relevant distribution of images, with ~5% diabetic retinopathy images) and are weighted evenly in the correlation values, the resulting metrics might reflect the choice of datasets rather than similarities between the models. If, for example, the proportion of datasets whose domain differs significantly from natural images were 10% or 50% instead of the ~25% used in the work, the correlation values could be entirely different. 
To phrase it differently, while CKA values can be related for any manner of producing different samples, the particular dataset selection appears likely to be a major driver in the observed trends. In order to interpret the model similarity results (aggregating across datasets), it might be important to investigate the effect of such dataset selection. Aggregating across models to compare datasets seems less problematic -- largely because there does not seem to be the same sense of outliers in the selection of models as there are for datasets. Theoretical Claims: I did not see any theoretical claims. Experimental Designs Or Analyses: There are relatively few design choices in the work. One issue I see relates to the selection of a length scale $\sigma=0.4$ for CKA RBF. Can the authors justify why using a single value across the board is reasonable? This assumes that a length of 0.4 has the same relevance in all of the models and for all of the datasets. Why not use something adaptive, like the median distance between points? I also did not find the ablation on this parameter (Fig 9) to support CKA RBF with $\sigma=0.4$ as a local probe -- only 0.2 looks like it captures anything substantially different from CKA linear. Supplementary Material: I looked over the code, which appears thorough and well-documented. I looked into the details around CKA in the appendices and appreciated the tSNE figure (11). Relation To Broader Scientific Literature: The submission can be seen as a direct response to the Platonic representation hypothesis (Huh et al 2024), where the dependence of CKA on dataset immediately raises the question about how dataset factors into the PRH. While there have been many works on assessing representational similarity, the contributions of this work are not in this direction: it primarily adopts one (the linear variant of CKA) and runs that through various levels of analysis. Essential References Not Discussed: None to my awareness. 
Other Strengths And Weaknesses: The primary contribution relates to the large-scale analysis of 64 models and 23 datasets, as the methodological innovation is limited: essentially performing correlation of CKA values. As such, the contribution is somewhat slim, and the insights obtained are not particularly actionable. Still, the paper is well-written and the results are presented clearly. Other Comments Or Suggestions: - I think the boxes in Fig 2 actually make it harder to assess similarity structure across models. Perhaps signify at the boundary of the heatmap? - Might a bootstrap-type analysis act to ground the CKA values for each model and each dataset? To be specific, one could compute CKA for different samples (size 10k, as is currently used in the submission) from each dataset to get a spread of values, and this spread would help shed light on what constitutes a meaningful difference in CKA values when comparing across models/datasets. Questions For Authors: . Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging that _the paper is well-written_ and that our claims are _supported by the results_, which _are presented clearly_. We are grateful for the valuable feedback that helped us improve our paper. We will address each concern point by point. All new figures/tables (R4F1-F7, R4T1-T2) are available [here](https://anonymous.4open.science/r/rebuttal_similarity_consistency/README.md). First, we want to address _[...] metrics might reflect the choice of datasets rather than similarities between the models_. Our analysis included 23 datasets, mostly part of the VTAB [1] and commonly used in CV. We compute the correlation of similarity values for all pairwise combinations of datasets. Therefore, we obtain similarity consistency measures for all dataset combinations (e.g., natural vs natural, structured vs structured, natural vs structured, …). These are responsible for the variance shown in our boxplot (Fig. 5). According to the reviewer’s suggestion, we further analyzed the observed variance in Fig. 5 and isolated the effect of different dataset types. We created two new boxplots (Fig. 5), one containing only correlation coefficients where both datasets contain natural images (Fig. R4F1) and one containing only coefficients for specialized+structured datasets (Fig. R4F2). The main observation remains the same: irrespective of the dataset selection, the training objective is a major driver for representational similarity consistency. However, we observe small differences such as a smaller minimum correlation for natural images (see IN1k/XLarge & IN1k/Large) and a slightly larger variance for specialized and structure data (Fig. R4F2 vs Fig. R4F1). We will add the new plots to the camera-ready version’s appendix. Second, we will address the relevance of sigma for CKA values computed with RBF kernel. We normalize all feature representations, leading to comparable distances over datasets. 
However, we agree that the strong correlation between CKA RBF 0.4 and CKA linear suggests that $\sigma=0.4$ (partly) captures global similarity structures. Fig. 9 shows small differences in the upper right corner; however, we agree that the difference between CKA linear and CKA RBF 0.2 is more pronounced. We extend our correlation plot (Fig. 3) by CKA RBF 0.2 as shown in Fig. R4F3. As expected, CKA RBF 0.2 differs more significantly from CKA linear. Based on this evidence, we agree with your perspective that RBF 0.4 still captures some global similarity while RBF 0.2 is a better candidate for analyzing local similarity. Thus, we repeated our experiments with the local kernel (see Fig. R4F4 and R4F5). We observe the same overall pattern, indicating that the objective is relevant for similarity consistency, while network architecture is less important. This analysis reveals two interesting observations. For local similarity, the supervised models are more consistent than the Img-Txt models, which are better at recovering global structure. In addition, models trained on IN-21k are more consistent than models trained on IN-1k, of which the latter set is more consistent in their global structure. IN21k contains substantially more classes (21,843) and a higher percentage of the classes are leaf nodes in the WordNet tree (76.71% vs 65% for IN1k), representing more fine-grained entities. The representation must contain more fine-grained details to distinguish these classes, dominating local similarities. We will include these observations in Appx. F of the camera-ready version.
R4F6 and the narrow confidence intervals (CI) in Tab. R4T1 confirm the stability of CKA linear values, demonstrating minimal variation across bootstrapped subsets. For RBF 0.2, Fig. 10 showed instability with a subset of size 10k, while 30k samples are more robust. We verified this with another bootstrapping experiment, Fig. R4F7, and the CIs in Tab. R4T2 showed a larger variance in CKA values. This suggests that local similarity depends more on the specific stimuli than global similarity. This is not surprising as stimulus-specific fine-grained details drive local similarity measurements. To mitigate this effect, we increased the number of samples for experiments analyzing local similarity from 10k to 30k when the dataset size permitted. Last, we agree that the t-SNE Figure helps us understand our categorization and will move this visualization from the appendix to the main part. [1] Zhai, Xiaohua, et al. "A large-scale study of representation learning with the visual task adaptation benchmark." arXiv preprint arXiv:1910.04867 (2019).
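The bootstrap procedure described in this rebuttal can be sketched in a few lines. This is an illustrative reimplementation under simple assumptions (standard linear CKA on column-centered features, uniform subsampling without replacement), not the authors' exact pipeline; function names and parameters are hypothetical.

```python
import numpy as np

def linear_cka(X, Y):
    # Standard linear CKA between two feature matrices (n_samples x dim),
    # computed on column-centered features.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

def bootstrap_cka(X, Y, n_boot=200, subset=1000, seed=0):
    # Spread of CKA values over random subsamples of the stimuli: returns a
    # (2.5%, 97.5%) percentile interval, indicating what counts as a
    # meaningful difference in CKA when comparing model pairs.
    rng = np.random.default_rng(seed)
    vals = [linear_cka(X[idx], Y[idx])
            for idx in (rng.choice(len(X), size=subset, replace=False)
                        for _ in range(n_boot))]
    return np.percentile(vals, [2.5, 97.5])
```

A narrow interval from `bootstrap_cka` (as observed for CKA linear) indicates the similarity value is stable with respect to the particular sample of stimuli; a wide interval (as for RBF 0.2) indicates stimulus-dependent, local similarity.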
Summary: The paper proposes a way of measuring the consistency of pairwise similarities across datasets and transferability of similarities between them. The authors provide many observations regarding these aspects. ## update after rebuttal The authors provided some additional discussions and results, which further strengthened my belief that they deserve the high score I initially gave (accept). I hope that as promised the authors will incorporate the main points of their answers into their camera-ready version. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The methods are adequate for the problem (RSA, CKA). I like the simplicity and clear formulation of the framework. Also the datasets and models are adequate. Theoretical Claims: N/A - the nature of the paper is rather empirical. Experimental Designs Or Analyses: - The authors evaluated numerous models on many datasets. They also analyzed whether the results are not due to the similarity measure of choice. In my opinion, the experiments are convincing. (+) - In Section 4.3 and Fig. 4, it is interesting that some SSL models lie very close to the text-image models. Could the authors check what SSL models cluster with text-image models and try to explain why? (question Q1) - As the authors use models trained on general purpose datasets (such as ImageNet), it would be nice to present some more qualitative results for a dataset such as EuroSAT due to the fact that this dataset also presents a domain with a large domain gap to the general-purpose datasets (e.g. Fig. 2, 4). Similarly, it would be nice to include some examples for the DTD dataset, as it focuses on textures. (Q2) - The authors should better discuss the inconsistencies between relative similarities in Section 4.6 (e.g. ImageNet and CIFAR10/100) - Q3. 
Supplementary Material: Supplementary materials provided are of good quality and can be used to reproduce the experiments. Relation To Broader Scientific Literature: Key contributions of the paper build on the previous methods (like CKA, RSA) and can be used as an extension of the existing testing procedures. Essential References Not Discussed: The works [1] and [2] could be cited in the paper. They leverage CKA/RSA and task similarities for transfer learning. [1] Borup, Kenneth, Cheng Perng Phoo, and Bharath Hariharan. "Distilling from similar tasks for transfer learning on a budget." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. [2] Dwivedi, Kshitij, and Gemma Roig. "Representation similarity analysis for efficient task taxonomy & transfer learning." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019. Other Strengths And Weaknesses: Weaknesses: - Some captions of the figures could be more informative. E.g. in Fig. 2, it would be nice to add the legend of the colors used for different boxes (it is only done in the text) - In Fig. 5, some terms are difficult to understand (middle, bottom row); it would be useful to briefly describe what the authors mean by Large etc. when it comes to the dataset size, and similarly for model sizes. Strengths: - The authors compare different similarity metrics to minimize the possible impact of a given similarity measure on the results. - It is a useful combination of the existing methods. Other Comments Or Suggestions: As mentioned before, the authors could once more review their figures and add to their descriptions some better explanations of the contents, especially for Fig. 5 (a brief description of what the labels on the images mean). Questions For Authors: **Q1**: Could the authors check what SSL models are placed close to the text-image models and try to explain why? (Fig. 4) **Q2**: Would it be possible to add additional results to Fig.
2 and 4 (or somewhere in the Appendix) including the EuroSAT/DTD datasets (as an example of other specialized/structured datasets - as such comparisons are the most interesting due to a domain gap between the general purpose and specialized/structured datasets). **Q3**: The authors state: “Interestingly, ImageNet-1k exhibits a milder yet significant pattern of inconsistency. Within the multi-domain category, this is especially pronounced for CIFAR-10 and CIFAR-100” - do the authors think the reason is a significantly lower resolution of the images in the CIFAR datasets? Could the authors dig a little bit deeper to analyze why some datasets form visible clusters in Fig. 6? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: First, we thank the reviewer for their overall positive feedback and for pointing us to two papers that helped strengthen the integration of our work into the existing literature. We agree with their relevance to our work due to using Representational Similarity Analysis [RSA; 2] or other similarity measures [1] to (pre-)select (downstream) task-specific models. Therefore, we will cite them in the related work section of our manuscript’s camera-ready version: “[...] However, recent work demonstrated that representational similarities (e.g., measured via RSA) can be used to effectively select (downstream) task-specific models [1,2].” Second, we agree on the lack of clarity of some of the figure captions and will use the extra page in the camera-ready version to expand these captions to improve clarity, i.e., for Fig. 5. In addition, we thank the reviewer for posing three interesting questions, which we will elaborate on, providing additional results (R3F1-F5) in an anonymized [repository](https://anonymous.4open.science/r/rebuttal_similarity_consistency/README.md). **Q1**: _Which SSL models are close to Img-Text in Fig. 4, and why?_ Fig. R3F1 zooms into the area of Img-Txt models for the three dataset combinations of Fig. 4 and labels individual model pairs. We observe that many SSL and image-text model pairs show high CKA values across the datasets, i.e., are located in the upper right corner and therefore in proximity. These models are similar within each objective. The SSL model pairs show high similarities as they are trained with similar datasets, augmentations, and losses (e.g., BarlowTwins/VicReg and MoCov2/SimCLR). Some image-text models are quite similar as well. However, the proximity of these two model pair sets does not allow us to infer any direct relationship between them. For this, we must consider the similarities of model pairs containing both model types, as shown in Fig. R3F2. 
The points, representing cross-type similarities between SSL and Img-Txt models (pink), have lower correlations than within-type pairs. This indicates that despite some individual SSL models appearing close to Img-Txt models, the overall relationship between these two categories of models is less strong and more variable. **Q2**: _Could Fig. 2 and 4 be provided for more specialized datasets?_ We thank the reviewer for their suggestion and have decided to include additional figures extending Fig.2 with datasets of each category (Fig. R3F4) and Fig.4 containing the EuroSAT and DTD (Fig. R3F5). **Q3.1**: _Do you attribute the inconsistency patterns in CIFAR-10 and CIFAR-100 primarily to their significantly lower image resolution compared to IN-1k?_ Yes, we indeed think that CIFAR-10/100’s small image size (32×32 pixels) restricts the representation space by severely limiting fine-grained details and surrounding contextual information. Appx. I shows clear differences in correlation coefficients between performance gaps (def. Sec. 4.7) and CKA values: low absolute correlation for IN-1k versus high for CIFAR. This suggests IN-1k’s high-resolution images (224×224 pixels) provide rich contextual cues that support diverse, well-performing representations, whereas CIFAR’s constrained resolution and minimal background details restrict the range of viable representations. **Q3.2**: _Can you elaborate on the underlying factors causing the visible clustering patterns observed for certain datasets in Fig. 6?_ Looking more closely at the clustering patterns of Fig. 6, we observe: - CIFAR-10 and CIFAR-100 form a strong cluster, potentially due to their low-resolution format and similar categorical structures, showing also high consistency in the (Img-Txt, Sup). 
- The Breeds datasets, Caltech-101, Country-211, STL-10, and Pets cluster together based on their similar domain properties, resolution profiles, and centered object compositions, showing also high consistency in the (Img-Txt, Sup). - Medical imaging datasets do not cluster together, potentially due to their fundamentally different visual patterns and domain-specific features (eye vs. tissue scans). - RESISC45 shows stronger correlations with structured datasets than with other specialized datasets (i.e., (Img-Txt, Sup)). This cross-category relationship might stem from satellite imagery's inherent structural properties—regular grid layouts, transportation networks, and geometric patterns—which create feature distributions resembling those in structured datasets. However, it differs from EuroSAT potentially due to RESISC45's higher resolution imagery, greater geographic diversity, and more diverse class set. A further analysis that isolates natural images from structured and specialized datasets can be found in the first part of Reviewer 9H2r's answer. We will incorporate the main points of these answers into our camera-ready version.
Summary: This paper is more of an analytical paper that analyzes the cross-domain representation similarity among models trained with different objectives. The analytical framework is pretty simple, as it is a combination of kernelized CKA and a Spearman correlation measure. The methodology description is concise. Experiments show the results of the analysis; most of the observations are inconclusive due to confounding factors, though some insights are interesting, e.g., that SSL's representation similarity consistency is higher than that of supervised learning. Claims And Evidence: There is no clear claim in this paper given it is more about analysis. While the paper claimed the "framework" of analysis as part of its contribution, it is a combination of pretty standard and well-known analytical tools. If that is the key claim of this paper, then the paper's novelty is very low. I think the biggest contribution of this paper is the insights it provides on understanding the representation similarity among models in the context of domain transfer. However, the analysis seems to lack depth, given that multiple observations are less conclusive (due to confounding factors). Probably the experimental settings could be adjusted to minimize the effect of confounding factors. I think this paper is more suitable as a position paper rather than a standard ICML submission, given its claim is about advocating a particular research direction rather than proposing a new algorithm. Methods And Evaluation Criteria: The method is clearly written and easy to follow. It does make sense to leverage CKA and Spearman correlation to quantify representation similarities. There are no meta-evaluation criteria to quantify the proposed method (given the method itself is a set of measures). However, the experimental setup can be further tuned to reduce confounded observations.
Theoretical Claims: N/A; there are no theoretical claims I can see, unless "SSL's representation similarity consistency is higher than that of supervised learning" is the claim, which is empirical, not theoretical. Experimental Designs Or Analyses: The analysis looks well done. I have learned something from this work. The only complaint is about inconclusive observations here and there. Supplementary Material: No, I didn't read the supplementary material given the paper is already self-contained. Relation To Broader Scientific Literature: This research is a direct extension of representation similarity measures among models (or even models vs. neuron recordings). Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strength: 1. Well-written article. Descriptions are concise and clean. 2. Extending representation similarity research to the cross-dataset setting, which looks interesting. 3. Some observations from the analysis are interesting in terms of justifying the need for SSL in many applications. Weakness: 1. The depth of analysis is still shallow. It could be more interesting if the authors could design the analysis with more precise control of factors. Other Comments Or Suggestions: I think the figure reference in Section 4.3 is wrong. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
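The CKA-plus-Spearman pipeline this review describes can be sketched in a few lines. This is a minimal numpy illustration under stated assumptions, not the paper's code: linear CKA stands in for the kernelized variant, the model representations are random placeholders, and the Spearman implementation ignores ties for brevity.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between representation matrices of shape (n_samples, dim)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

def spearman(a, b):
    """Spearman rank correlation: rank-transform, then Pearson (ties ignored)."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

# similarity consistency: rank-correlate per-model-pair CKA values across datasets
rng = np.random.default_rng(0)
reps_ds1 = [rng.normal(size=(100, 32)) for _ in range(4)]  # 4 "models" on dataset 1
reps_ds2 = [rng.normal(size=(100, 32)) for _ in range(4)]  # same models on dataset 2
pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]
cka1 = [linear_cka(reps_ds1[i], reps_ds1[j]) for i, j in pairs]
cka2 = [linear_cka(reps_ds2[i], reps_ds2[j]) for i, j in pairs]
consistency = spearman(cka1, cka2)
```

High `consistency` would mean the relative similarity ordering of model pairs is preserved across the two datasets, which is the quantity the paper's framework aggregates over many datasets and model pairs.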
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the valuable feedback and appreciate the assessment of our manuscript as being _well-written_ and _interesting_ work. First, we agree that the individual components of our analysis framework are well-established rather than novel by themselves. We see this as a strength rather than a weakness because it allowed us to build a novel meta-framework relying on the methodological soundness of the individual components without introducing subcomponents that still need to be validated by the research community. To the best of our knowledge, our work is the first to propose a structured way of using these components to characterize how similarity relationships vary across datasets. Our main focus lies on introducing and formalizing _similarity consistency_ and conducting a large-scale analysis of this measure across 64 different vision models and 23 datasets. We identify that the learning objective is a crucial driver of similarity consistency, and architectural specifications before training (architecture type and size) are less relevant. Second, while confounding factors cannot be avoided when evaluating the limited set of large, well-trained general vision models, we tried to unravel some confounders in Appx. G and H. - In Appx. G, we selected two anchor architectures (ResNet-50 and ViT-L) and systematically varied the training objective (Sup., SSL, Img-Txt) while keeping the architecture type and model size constant, resulting in six models. This allowed us to isolate the effect of the objective in a more limited but controlled setting. We observed that even here, the training objective appears to be driving similarity consistency. - In Appx. H, we investigate the role of training data on similarity consistency. To that end, we fixed the objective (supervised) and network architectures (AlexNet, DenseNet161, ResNet18, and ResNet50) and varied the training dataset (general-purpose vs. domain-specific). 
Here, we identified higher consistency in models trained on a domain-specific dataset (Places365), most likely due to a more constrained solution space in comparison to general-purpose models trained on large-scale datasets. These findings complement our main analysis, which deliberately focused on general-purpose vision models trained on datasets with large semantic diversity. Last, we fixed the Fig. reference in Section 4.3. We intended to refer to Fig. 11 in the Appx. D. We will correct this in the camera-ready version by using the extra page to move this figure back to the main text.
Summary: The paper sets out to challenge the Platonic representation hypothesis by reexamining similarities between the representations of models using multiple datasets. Their key finding is that the training objective is a dominant factor driving representations, as opposed to model architecture and model size. Claims And Evidence: The claim on objective function needs more support, as currently only SSL versus supervised objectives were examined; there are many other important objective functions—such as robustness to noise or corruptions objectives, generative modeling objectives (e.g., diffusion models), masked image modeling, and reinforcement learning-based objectives—that were not included in the analysis. Without evaluating models trained on these diverse objectives, it is difficult to generalize the conclusion that the objective function is the primary driver of representational similarity consistency. Methods And Evaluation Criteria: A key methodological limitation is that the study focuses exclusively on representations extracted from the final layers of the models (ref table in supplementary: the penultimate layer for supervised models, the average pooling layer for SSL models, and the image encoder output for image-text models). While this is a common practice, it risks biasing the analysis toward the influence of the objective function, since final-layer representations are often more task-specific and reflect the model’s training objective. Theoretical Claims: No theoretical claims Experimental Designs Or Analyses: Experimental design is sound and standard for measuring similarity of representations. Supplementary Material: The models and the objective functions used in this study. Relation To Broader Scientific Literature: The question of what drives similarity (or dissimilarity) of representations is important for many fields and it directly engages with many recent works on the topic.
Essential References Not Discussed: None spotted Other Strengths And Weaknesses: The question and the approach are not novel so the work can really benefit from broadening the evaluations to more models to make a conclusive claim, or refine the claims to the actual results. Other Comments Or Suggestions: None Questions For Authors: If possible, could you also report the analysis results on a few internal layers, close to the middle of processing in each model? Do you expect to see more or less the same results? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their helpful suggestions and for acknowledging the _relevance of our work_ and the _soundness of our experiments_. We believe that following the reviewer’s suggestions allowed us to notably improve our analyses. Two points stood out in particular, which we address in detail below. We added all additional figures (R1F1-F5) to an anonymized [repository](https://anonymous.4open.science/r/rebuttal_similarity_consistency/README.md) and refer to the specific Figures detailed in the README.md. First, we would like to clarify the range of the covered training objectives. In representation learning for computer vision, self-supervision (SSL) and image-text alignment are currently the state-of-the-art approaches. Therefore, we selected these two categories alongside supervised learning. The SSL group contains a diverse set of objectives, including self-supervised contrastive losses (such as SimCLR), the mentioned _masked image modeling_ (MAE), self-distillation (DINO), extended self-distillation (DINOv2), but also pretext-task-based (Jigsaw, RotNet), redundancy-reduction (BarlowTwins) and clustering-based (SwAV) losses*. The total set of 64 models contains (most) SOTA models for representation learning in vision. For this diverse set, we identified the training objective as a crucial factor for the consistency of representational similarity, while model architecture and size appear less relevant. As we identified the training objective as a main driving factor for similarity consistency, we are convinced that due to its flexibility, our framework can, in future work, easily be applied to analyze other training objectives, e.g., as recommended, testing the effect of robustness losses on the similarity structure of out-of-distribution data. Second, we would like to address the effect of taking the _final layer_ of the model by evaluating the representational similarity consistency on intermediate layers. 
In our work, we followed the standard procedure of extracting features of the _final layer_ (or _penultimate layer_, for classification models), which is commonly referred to as the _representation_ [1]. We remark that these representations are of special interest because they are the ones used for downstream tasks. However, we agree with the reviewer on the potential role of layer choice for our findings, as mentioned in our discussion section. Our proposed analysis framework does _not depend on specific layers_; it can also be applied to intermediate layers. Therefore, we followed the reviewer's suggestion and repeated the consistency analysis for the middle layers of _a large subset of our transformer models_. We omitted CNN-based models, as middle-layer extraction is less clear when representations depend on spatial location. We remain consistent in our extraction method across layers for the transformer models: If the model's original representation was derived from the classifier token, we also extracted the classifier token from intermediate layers. Otherwise, we applied avg. pooling. Fig. R1F1 and R1F2 show similarity matrices, analogous to Fig. 2. Here, model representations tend to be more similar across models of the same type. W.r.t. the consistency of similarities, we observe a lower median and larger standard deviation of consistency of representational similarities across all model pairs (grey bar) in Fig. R1F5 compared to Fig. 5. While the variances are relatively large for the training data and model size categories, we see above-median consistency for within-objective model pairs. We speculate that higher consistencies within training objectives in intermediate layers indicate that the training objective has a stronger influence on representational structures already early in the network. 
For example, supervised models may form structurally similar lower-level representations but are less constrained in the organization of their representations close to the classification output. In conclusion, our analyses of intermediate layers support the finding that the training objective is a main influence on representational similarity consistency, though more experiments would be needed to fully characterize layer-specific effects. We thank the reviewer for their insightful comments that have strengthened our analysis. \* References to the models can be found in the main paper, in Tab. 2. [1] Kornblith, Simon, et al. "Do Better ImageNet Models Transfer Better?" CVPR, 2019.
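The intermediate-layer extraction procedure described in this rebuttal (take a middle block's activation, then either the classifier token or average pooling over tokens) can be sketched as follows. This is a hedged toy illustration: random weight matrices and ReLU blocks stand in for a pretrained transformer, and all names and shapes are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# stand-in for an 8-block transformer; in the rebuttal these activations
# would come from a pretrained ViT, not random weights
weights = [rng.normal(scale=0.1, size=(16, 16)) for _ in range(8)]

def forward_with_intermediates(x):
    """Run all blocks, keeping every intermediate activation."""
    acts = []
    for W in weights:
        x = np.maximum(x @ W, 0.0)  # toy block
        acts.append(x)
    return acts

tokens = rng.normal(size=(4, 50, 16))   # (batch, tokens, dim)
acts = forward_with_intermediates(tokens)
mid = acts[len(acts) // 2]              # middle layer, as analyzed in the rebuttal
cls_repr = mid[:, 0, :]                 # classifier-token extraction ...
pooled_repr = mid.mean(axis=1)          # ... or average pooling over tokens
```

Either `cls_repr` or `pooled_repr` (each of shape `(batch, dim)`) could then be fed into the same CKA-based consistency analysis as the final-layer representations, which is how the rebuttal keeps the extraction method consistent across layers.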
Active Learning of Deep Neural Networks via Gradient-Free Cutting Planes
Accept (poster)
Summary: This paper introduces a novel theoretical framework that extends cutting-plane optimization methods to active learning for deep neural networks. The authors bridge two previously separate domains: deep neural network training and cutting-plane optimization techniques. The primary contribution is showing that cutting-plane algorithms, traditionally limited to linear models, can effectively train neural networks despite their non-convexity and nonlinear decision boundaries. A key theoretical result is the geometric contraction rate guarantee of the feasible set, providing strong convergence properties. Through experiments on both synthetic data and standard benchmarks, the authors demonstrate that their approach achieves competitive performance against established active learning baselines while maintaining theoretical guarantees that most current methods lack. This represents an important step toward principled active learning methods for deep neural networks with provable properties. Claims And Evidence: The claims in this paper are well-supported through rigorous theoretical analysis and empirical demonstrations. The authors clearly establish their theoretical framework and provide thorough proofs for their convergence guarantees, which is particularly valuable in the typically heuristic-driven field of deep active learning. Methods And Evaluation Criteria: The proposed method represents a creative and theoretically sound approach to the active learning problem. By adapting classical cutting-plane methods to the neural network setting, the authors provide a fresh perspective that addresses limitations of previous approaches. The evaluation methodology is appropriate and well-executed, using standard benchmark datasets and metrics. The authors effectively demonstrate their method's performance through clear visualizations and comparisons against relevant baselines. 
Their approach of evaluating on both synthetic data (to verify theoretical properties) and standard benchmarks (to demonstrate practical utility) provides a comprehensive assessment of the method's capabilities. Theoretical Claims: No problems. Experimental Designs Or Analyses: The experimental design is sound and effectively demonstrates the method's practical utility. The visualizations are particularly strong, providing clear intuition about how the algorithm operates. The experiments validate the theoretical properties while showing competitive performance on standard tasks. While the baseline comparisons are somewhat limited and focused on simpler approaches, they adequately demonstrate the method's effectiveness relative to established techniques. Given the theoretical focus of the paper, the experimental validation strikes an appropriate balance between demonstrating theoretical properties and practical utility. Supplementary Material: I reviewed portions of the supplementary material, particularly focusing on the extended proofs. The proofs appear sound and provide the necessary mathematical details to support the claims in the main paper. Relation To Broader Scientific Literature: This work makes a valuable contribution by bringing together two previously separate research areas: cutting-plane optimization and deep active learning. It builds upon classical optimization techniques while addressing modern deep learning challenges. The approach offers promising avenues for extension to other sample selection problems beyond active learning, potentially impacting other areas where principled sample selection is critical. The theoretical guarantees provided by this approach distinguish it from many existing methods in the field. 
Essential References Not Discussed: None Other Strengths And Weaknesses: **Strengths:** - The paper makes a novel connection between classical optimization techniques and modern deep learning - The theoretical analysis is rigorous and provides valuable insights into the method's performance - The visualizations are clear and intuitive, effectively communicating the algorithm's operation **Weaknesses:** - The experimental evaluation, while sufficient, could include comparisons to more recent active learning methods - The practical implementation details for large-scale applications could be clarified further - The limitations regarding very large networks and datasets are not fully addressed Other Comments Or Suggestions: See Questions. Questions For Authors: 1. What factors limit the scale of data selection with your method, and what is the maximum scale achievable? Understanding these constraints would help clarify the method's practical applicability. 2. What is the computational cost when applying your method to deeper networks, and approximately how many neural network layers can your approach effectively handle while maintaining its theoretical guarantees? This would help readers understand the scalability of your approach. 3. Current active learning approaches often leverage pretrained models (e.g., ActiveFT). Could your method be extended to work with pretrained models, perhaps by training only a linear classifier layer or a shallow two-layer network with ReLU? Some simple experimental validation would be valuable. 4. If samples selected by your method were used to train deeper neural networks, would they be more effective than samples selected by existing methods? This comparison would help establish broader applicability beyond the theoretical context. I should note that while I'm not specialized in theoretical machine learning research, the paper appears well-executed and makes a valuable contribution. 
I have a positive impression overall, though I believe addressing the real-world applications and scalability questions would further strengthen the work. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > The experimental evaluation, while sufficient, could include comparisons to more recent active learning methods Thank you for the suggestion. We've compared against 8–10 standard baselines from scikit-activeml and DeepAL. Since our method builds on a cutting-plane training scheme, we adapted the baselines—sometimes with custom implementations—to ensure fair comparison (see Appendix H.3 for details). Because our method introduces a fundamentally novel setup unlike existing AL approaches, an exhaustive comparison is challenging given the vast literature. However, we’ll consider adding more recent deep AL methods to the baselines for the final version. > The practical implementation details for large-scale applications could be clarified further We've detailed our implementation of the sentiment classification task in Appendix H.1. We are happy to provide more details if there is anything unclear. > The limitations regarding very large networks and datasets are not fully addressed Our proposed algorithm, still in its early stages, does face scalability challenges with the largest NN models. However, it evolves alongside advances in cutting-plane methods and LP solvers—both active research areas. Recent progress, such as improved cutting-plane techniques [8] and GPU-accelerated LP solvers [9, 10], can be directly applied to ReLU NN training based on the equivalence established in this work. More importantly, our contribution goes **beyond** empirical gains. The theoretical insights enabled by this LP-NN connection are significant, especially given the much deeper understanding of LP systems compared to neural networks. [8] An Asynchronous Proximal Bundle Method: https://link.springer.com/article/10.1007/s10107-024-02088-x; [9] CuClarabel: GPU Acceleration for a Conic Optimization Solver: https://arxiv.org/abs/2412.19027; [10] MPAX: Mathematical Programming in JAX: https://arxiv.org/abs/2412.09734.
> What factors limit the scale of data selection with your method, and what is the maximum scale achievable? The main limitation in data selection is that more training data leads to more constraints in the LP formulation, increasing the problem size. While we explore pruning schemes to discard redundant constraints (Appendix F.4), scalability remains a challenge. We haven't tested the maximum scale—current experiments use only tens of points, as larger selections require longer solver runtimes. That said, the method remains executable, albeit slower with more data. > What is the computational cost when applying your method to deeper networks The main computational bottleneck for deeper NNs still comes from solving a large LP, since we will have more variables in the LP. So ideally, given a sufficiently efficient LP solver, there is no inherent limit to our method. As shown in Theorem 4.2, our theory extends to **arbitrarily deep** NNs. Fortunately, we've seen recent efforts in GPU-accelerated LP solvers such as [9] and [10] mentioned above, which truly empower our method for large-scale applications. > Could your method be extended to work with pretrained models? We've indeed done some experiments with pre-trained large LLMs, following exactly the reviewer's thoughts. Figure 4 shows the result of training a two-layer ReLU classifier on top of Phi-2's embeddings for the sentiment classification task. Phi-2 is a 3B-parameter pretrained LLM. Our results show that when combining Phi-2's embeddings with our active learning scheme, we achieve higher prediction accuracy and better query efficiency compared to other baselines. > If samples selected by your method were used to train deeper neural networks, would they be more effective than samples selected by existing methods? Thank you for the question. We base our theoretical results on cutting-plane training as it offers a cleaner, more analyzable framework.
In contrast, gradient-based methods rely heavily on heuristics, and understanding their dynamics (e.g., SGD with varying batch sizes or step sizes) remains limited, making rigorous analysis difficult. In this revision, we strengthen our theory by showing that the learned model converges not only volumetrically in parameter space, but also in norm to the optimal decision boundary (see proof [here](https://drive.google.com/file/d/1K57njXjyj4Ea846PEh4qBkS4_wwsyGfW/view?usp=sharing)). This supports the quality of our selected samples for downstream training. That said, empirically, we tested our selected samples on a deeper network for a quadratic regression task and found our method remained competitive with standard deep active learning baselines, even if the performance gap was smaller due to task simplicity ([link](https://drive.google.com/file/d/1tR3e8n2Lc-JzmGozrzB7ZHU9RsTB-kf4/view?usp=sharing)). We plan to add further experiments on deeper models and more complex tasks in the final version. --- Rebuttal Comment 1.1: Comment: Thank you for your reply. I don’t have any further questions, and I’ll keep my rating. Wishing you the best of luck!
Summary: This paper proposes a novel method for training ReLU deep neural networks and selecting data queries within an active learning framework. Extending previous work on cutting plane algorithms to multi-layer ReLU networks, the authors formulate network training as a linear programming problem, decomposing the task of data fitting into a series of linear programs that can be solved efficiently. Additionally, the paper introduces a new active learning strategy that prioritizes querying the most confident samples to effectively identify misclassified instances and substantially reduce the parameter space. The proposed framework is validated through experiments on multiple synthetic datasets, demonstrating its efficacy. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. The paper introduces a novel approach to neural network training from cutting-plane algorithms. Theoretical Claims: Theorem 6.3, the main theoretical result in this work, is quite similar to the convergence analysis in (Louche & Ralaivola, 2015). It would be appropriate to mention this prior work in the theory section. Experimental Designs Or Analyses: Much of today's Deep Active Learning research focuses on large-scale datasets using batch-mode querying, aspects not addressed by the experiments in this paper. Instead, the authors evaluate their method primarily on simple tasks like a simple 2D spiral binary classification and a regression problem. Consequently, this empirical evaluation does not convincingly demonstrate how the proposed method addresses the scalability challenges typically targeted by active learning. Supplementary Material: I checked the proof of the theoretical results and the experimental setups. Relation To Broader Scientific Literature: The problem of applying convex optimization methods for neural network training is relevant, and the theoretical contributions appear sound. 
The proposed method effectively addresses common challenges inherent in gradient-based methods, such as hyperparameter sensitivity and slow convergence, while achieving competitive performance. The experimental results demonstrate promising outcomes in both classification and regression tasks on smaller-scale datasets, suggesting potential suitability for specific practical scenarios. Essential References Not Discussed: N/A Other Strengths And Weaknesses: 1. I have a concern that the method may not work well with larger-scale neural networks. The size of the activation pattern $D$ can be exponential in $n$. 2. The proposed method has significant limitations in terms of applicability, as it exclusively supports neural networks composed of linear layers with ReLU activation functions. This constraint renders it incompatible with modern architectures, including transformers, convolutional neural networks, and models utilizing alternative activation functions. Consequently, this severely restricts the practical impact and broader relevance of the method within the current deep learning landscape. Other Comments Or Suggestions: 1. The paper does not address the volumetric stopping criterion in detail, particularly regarding its dependence on the dimensionality of the hypothesis space. It remains unclear how a user would practically specify or adjust this criterion. 2. One potential advantage of the proposed cutting-plane method could be reduced computational cost compared to gradient-based methods. However, this aspect hasn't been sufficiently discussed or analyzed. Providing empirical runtime comparisons between the cutting-plane approach and gradient-based algorithms on the tested datasets would help to quantitatively highlight any computational benefits. Such an analysis would significantly strengthen the evaluation by clarifying whether the proposed approach offers practical efficiency improvements. Questions For Authors: See ``Other Comments Or Suggestions''. 
Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: > Theorem 6.3, the main theoretical result in this work, is quite similar to the convergence analysis in (Louche & Ralaivola, 2015) Dear reviewer, we first cite L&R’s work in Section 2 under “Cutting-Plane-Based Active Learning with Linear Models.” Our contribution goes well beyond theirs, which applies only to linear models. Theorem 6.3 resembles classic cutting-plane convergence results—standard in the literature (e.g., see pp. 9–10 in [these notes](https://stanford.edu/class/ee364b/lectures/localization_methods_notes.pdf))—but it is not specific to L&R; it is simply a classic result. Our key novelty is extending this result to the nonlinear two-layer NN $f^{\text{two-layer}}$, which had not been obtained before. > Empirical evaluation does not convincingly address scalability Our goal is to show the feasibility of training ReLU NNs via LPs (see abstract, Sections 1 & 3), enabling theoretical insights not possible with traditional methods. While not yet faster than gradient-based training, our method offers stronger query efficiency, consistently outperforming baselines with fewer queries (Figures 2–4). This makes it well-suited to query-limited settings. Scalability challenges stem from: - Constraint growth: We mitigate this via subsampling and iterative pruning (Appendix F.4). - Solver overhead: Recent GPU-based LP solvers like CuClarabel [3] and MPAX [4] offer promising future improvements. [3] CuClarabel: *GPU Acceleration for a Conic Optimization Solver*. https://arxiv.org/abs/2412.19027 [4] MPAX: *Mathematical Programming in JAX*. https://arxiv.org/abs/2412.09734 > Concern over exponential growth in activation patterns Thanks for raising this. The number of ReLU activation patterns does **not** scale exponentially in $n$; rather, it scales with the **rank** of the data (see Section 3 in [5]). We further reduce complexity using pruning strategies (Appendix F.4).
Recent work [6] also proposes geometric-algebra-inspired sampling, although it doesn’t yet connect to LP-based training. [5] *Neural Networks are Convex Regularizers: Exact Polynomial-time Convex Optimization Formulations for Two-layer Networks* https://arxiv.org/pdf/2002.10553; [6] *Randomized Geometric Algebra Methods for Convex Neural Networks* https://arxiv.org/pdf/2406.02806. > Applicability limited to ReLU networks We've addressed a similar issue in our response to Point 2, but we’d like to re-emphasize the broader potential of connecting NN training to classical LP solving—both for advancing theory and enabling new algorithms. Our current work offers a preliminary demonstration of this: - Theoretical advancement: We provide a convergence result that was **previously unattainable** due to the nonconvexity of NN training. Under our framework, **the prediction function converges in norm to the optimal decision function** (see our added proof [here (anonymous link)](https://drive.google.com/file/d/1K57njXjyj4Ea846PEh4qBkS4_wwsyGfW/view?usp=sharing)). In contrast, LP systems are well-understood thanks to decades of study. Framing ReLU training as LP solving opens a promising new lens for analyzing deep NN properties. - Algorithmic novelty: We show that cutting-plane methods can be applied to ReLU NN training—an approach not explored before. While we use a basic variant, there’s a rich and growing body of work on advanced cutting-plane methods (e.g., [7], published in *Mathematical Programming*, Jan 2025), which could be leveraged in future extensions. - Solver development: GPU-supported LP solvers have only recently emerged, and represent an exciting and active area of ongoing research. [7] *An Asynchronous Proximal Bundle Method*: https://link.springer.com/article/10.1007/s10107-024-02088-x > The paper does not address the volumetric stopping criterion in detail Algorithm 1 gives the general cutting-plane training workflow.
On its own, it lacks convergence guarantees, as random training samples may offer negligible volume reduction. The "volumetric stopping criterion" applies specifically to our active learning scheme, where shrinkage can be precisely measured. For general use, we rely on standard stopping rules like max iterations, data budget, or validation error. > Potential advantage over gradient-based methods Our method, still in early stages, is slower than gradient-based training, so we focus on **query efficiency**—where we show stronger convergence guarantees and consistently outperform baselines with fewer queries (see Figures 2–4). This makes our approach ideal under a query budget. On runtime and scalability, our method evolves alongside advances in cutting-plane methods and LP solvers—both active research areas. As new breakthroughs emerge, they can be directly integrated to improve our training algorithm. Moreover, theoretical progress in these fields—arguably more likely than in learning theory—would naturally carry over to ReLU NN training through our established equivalence.
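[Editor's illustration of the cutting-plane mechanism discussed in this rebuttal; a minimal 1-D sketch, not the paper's Algorithm 1. Each query cuts away a constant fraction of the remaining feasible set, which is the mechanism behind both the volume-reduction guarantees and the volumetric stopping criterion mentioned above; in higher dimensions, cutting through the center of gravity removes at least a $1-\frac{1}{e}$ fraction per cut.]

```python
def cutting_plane_1d(oracle, lo=0.0, hi=1.0, tol=1e-6):
    """Localize an unknown threshold theta* in [lo, hi].

    Each query at the interval midpoint yields a halfspace (here:
    a sign from the oracle) that cuts away half of the remaining
    feasible set -- the 1-D analogue of center-of-gravity
    cutting-plane localization.  The loop stops once the feasible
    "volume" (interval width) falls below tol, a volumetric
    stopping rule.
    """
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if oracle(mid):       # theta* lies to the right of mid
            lo = mid
        else:                 # theta* lies to the left of mid
            hi = mid
    return 0.5 * (lo + hi)

theta_star = 0.314159
est = cutting_plane_1d(lambda x: x < theta_star)
```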
Summary: This paper provides very interesting results for training ReLU neural networks. The authors show that training a binary classifier using ReLU neural networks is essentially solving a linear program (LP), and therefore, in the context of active learning, adding a new data point to the training set is equivalent to adding new linear constraints to the proposed LP. The authors also show that they can add linear constraints in a way that the volume of the feasible space decreases by a factor of $1-\frac{1}{e}$ for each query, and compare test accuracy on several datasets among linear models and ReLU networks trained by SGD. Claims And Evidence: I am good with the theoretical claims. However, given the numerical tests conducted, I am not able to draw a solid conclusion that this method will surpass current SGD-based training of neural networks. Methods And Evaluation Criteria: I think the benchmark datasets are wide enough, from my perspective. However, I am not sure about the evaluation criteria, which rely solely on test accuracy versus SGD. Since the number of variables grows fast as neurons increase, it seems like the runtime for the LP would increase by a lot. Another issue is that momentum-based methods seem to be more popular, which might provide better performance. Theoretical Claims: No, I did not check the correctness of the proofs. Experimental Designs Or Analyses: Please see my comments in the section "Methods And Evaluation Criteria". Supplementary Material: Yes, but I did not run the code on my computer. Relation To Broader Scientific Literature: I believe that this paper provides a very interesting perspective that could potentially connect the field of (mixed-integer) linear optimization with the well-developed field of neural networks. For a long time people have believed that only continuous optimization techniques would work well for NN training, but it might be the case that MILP would play a role as well.
Essential References Not Discussed: I did not find any essential references not discussed. Other Strengths And Weaknesses: Strengths: 1. The authors developed the connection between NN training and LPs, which is a novel topic in the field of machine learning. 2. The authors further provide theoretical results on the convergence of their proposed LP-based method for binary classification in active learning. 3. The paper is written in a very clear way. Weaknesses: 1. Although convergence results are given, I am not sure how to interpret them, as exponentially decreasing volume might not imply that the algorithm is capturing the correct hidden function, and therefore increasing test accuracy. 2. The authors did not discuss the computational efficiency of their proposed method, which might be a big problem when it comes to large-scale NN models. Other Comments Or Suggestions: I think the paper would be greatly improved if the authors could show results regarding learning the correct function, which should not be too hard given the rich literature on MLPs, for example the well-known universal approximation theorem. Moreover, I think an interesting question is whether this method could reduce the number of queries needed for active learning - it would be very exciting to learn about this if it is true. Questions For Authors: 1. I found Algorithm 2 in the paper confusing. Could the authors clarify what those variables mean? 2. Could the authors explain how to interpret the convergence results? For example, if the true decision boundary is given by $f(x)$, can the LP-based NN training models approximate such a function? 3. This is connected to the above problem. If the results are unknown for a general $f(x)$, is it possible to show that for certain classes of functions the decision boundary can be learned? 4.
Could the authors comment on the computational aspects of their proposed method? For example, on the runtime, the maximum number of neurons allowed, and the average number of queries needed? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > I am not able to draw a solid conclusion that this method will surpass current SGD-based training of neural networks. Thank you for raising this point. We do not claim that our method currently outperforms gradient-based training for deep NNs, especially as it remains in an early stage compared to mature gradient-based approaches. Our key contribution is linking ReLU NNs to LPs, enabling convergence proofs previously thought infeasible due to non-convexity. Moreover, emerging GPU-based LP solvers like CuClarabel [1] and MPAX [2] offer promising avenues for scaling our approach. [1] CuClarabel: GPU Acceleration for a Conic Optimization Solver https://arxiv.org/abs/2412.19027; [2] MPAX: Mathematical Programming in JAX https://arxiv.org/abs/2412.09734. > However, I am not sure about the evaluation criteria, which rely solely on test accuracy versus SGD. As noted, our primary goal is to demonstrate the feasibility of framing deep ReLU NN training as LP solving, which opens the door to theoretical insights previously out of reach—e.g., our convergence result—by leveraging decades of progress in LP theory. That said, we also address empirical concerns. The main computational burden stems from (1) the growing number of LP constraints, and (2) the cost of solving each LP: - To manage constraint growth, we analyze how the LP structure evolves and propose an activation pattern subsampling and iterative filtering scheme (Appendix F.4) to prune redundant constraints. - For solving LPs, recent GPU-enabled solvers like CuClarabel and MPAX offer promising directions to improve runtime and scalability. > Another issue is that momentum-based methods seem to be more popular, which might provide better performance. Thank you for bringing up this point. We do not include different momentum-based methods in the comparison, since our main experiments evaluate our active learning scheme, not our training scheme.
So in Figures 2 and 3, we are mainly comparing against different active learning (in other words, query selection) baselines, not optimization algorithms. The only place we compare our training scheme to a gradient-based method is in Figure 4 (the rightmost plot), where we take SGD as a representative. The key point is that we do acknowledge our method, in its current stage, cannot beat SGD in deep NN training, but our theoretically supported query acquisition strategy is already superior in terms of query efficiency. That is why most of our experiments are conducted for active learning. We are nevertheless optimistic that, with the development of more advanced LP solvers, our method will become increasingly practical in the near future. > The authors did not discuss the computational efficiency of their proposed method. We’ve discussed this in our reply to point 2. > Although convergence results are given, I am not sure how to understand the results. Thank you for raising this point. In fact, a consequence of Theorem 6.3 is that **the prediction function converges in norm to the optimal decision function**. Please see our added proof [here (anonymous link)](https://drive.google.com/file/d/1K57njXjyj4Ea846PEh4qBkS4_wwsyGfW/view?usp=sharing). > I think the paper would be greatly improved if the authors could show results regarding learning the correct function. Thanks for the insightful suggestion! Yes, we are able to show this; see above. > I think an interesting question is whether this method could reduce the number of queries needed for active learning. Yes. Beyond the connection between training deep ReLU NNs and solving LPs, another contribution of ours is the convergence result for query efficiency. Specifically, we theoretically prove that our proposed active learning strategy exponentially shrinks the parameter search space; this is (to our knowledge) the first theoretical guarantee of query efficiency among active learning methods.
Empirically, Figures 2, 3, and 4 all demonstrate that we can achieve better performance with fewer queried points. > Could the authors clarify Algorithm 2? We apologize for the confusion. Due to the word limit, we include a detailed description [here](https://drive.google.com/file/d/10Co8zLhbtlI9qqgI1AWUODN_4utD8C7m/view?usp=sharing). > Could the authors comment on the computational aspects of their proposed method? Our cutting-plane training scheme theoretically extends to arbitrarily deep neural networks with **no upper limit** on model size (Theorem 4.2). The active learning method exponentially shrinks the parameter search space, enabling analytical bounds on query complexity (Theorem 6.3). The main remaining concern is runtime, as noted in our responses to points 2 and 3, though promising progress is underway to address this. --- Rebuttal Comment 1.1: Comment: I would love to thank the authors for their detailed explanation. I have no concerns about understanding Thm 6.3 given the additional details, but I am still interested in seeing how large neural networks could be trained using the current method. Sorry for the last-minute question, but could the authors show me some numbers in terms of the scale of the NNs trained? I am happy to increase my score if the scale is moderately large for tasks like number recognition. --- Reply to Comment 1.1.1: Comment: Thank you for your follow-up and for the thoughtful engagement with our work. We’re very happy to clarify the scalability of our current method. In our current experiments in the submitted paper, we train both **two-layer and three-layer ReLU networks** using our cutting-plane framework: - For the two-layer network, we use 623 neurons (Figure 2, spiral task). - For the three-layer model, we used 57 and 34 neurons in the two hidden layers, respectively, while still achieving competitive performance on the same task.
While our method is not currently suited for large-scale pretraining due to scalability challenges we have noted from solving an LP system (though we expect continued progress as research in LP optimization advances), crucially, our framework is **flexible and scalable when used with pretrained large models**. For instance, in our IMDB sentiment analysis task (Figure 4), we use **2560-dimensional embeddings** from the **3B-parameter Phi-2 LLM**, and train a two-layer ReLU model using our method with 500 sampled activation patterns. Since we operate only on the embeddings, our approach is **agnostic to the size of the upstream model** and can scale seamlessly with larger backbones (e.g., GPT, LLaMA). As noted earlier, our current scalability is limited by two factors (though remedies for both are already present): - (1) The number of constraints (due to activation pattern enumeration): Our paper already addresses this via activation pattern subsampling and iterative filtering (Appendix F.4), which greatly reduce the number of constraints in practice while preserving model expressivity. This enables training of moderately sized NNs without exhaustively computing all patterns. - (2) Use of a CPU-based general-purpose solver (CVXPY) — which we plan to replace with faster, structure-aware solvers. In particular, the method of Mishkin et al. (2022) ([arXiv:2202.01331](https://arxiv.org/abs/2202.01331)) offers accelerated convex optimization for two-layer ReLU networks and **scales to image tasks like MNIST and CIFAR-10**. These solvers provide an **easy drop-in replacement** for CVXPY in our framework and can significantly accelerate training and enable larger models. In parallel, GPU-based LP solvers such as CuClarabel and MPAX offer complementary performance gains by leveraging hardware acceleration. 
In short, our current method is not yet compatible with large-scale training, but the framework is **modular and extensible**, and we believe the recent developments in solver efficiency and activation filtering strongly support the path toward broader applicability. Please don't hesitate to let us know if you have further questions / need clarifications!
Avoiding spurious sharpness minimization broadens applicability of SAM
Accept (poster)
Summary: The authors investigate the Sharpness-Aware Minimization (SAM) algorithm for language tasks and find deteriorated performance compared to vision tasks. They explain this by re-writing the SAM update as a gradient norm penalty, and decompose the gradient of the gradient norm into a functional part and a logit part. Through empirical analysis they demonstrate that in language tasks, SAM is biased towards minimizing the logit part - unlike in vision tasks. They suggest a modified variant of SAM, which explicitly minimizes the functional part, and additionally contains a preconditioning for the perturbation. The authors report improved performance over baselines on language tasks across model sizes from 24M to 1.2B parameters. Claims And Evidence: - according to the title, the authors claim to “broaden the applicability of SAM”, implying that SAM becomes practical for (potentially large) language model optimization. However, the experiments were conducted for a fixed number of steps, giving SAM twice the compute budget of the base optimizers. The authors acknowledge this in their discussion, and point to efficient SAM implementations that could be combined with their work, but this is not explicitly shown. Thus, whether SAM's applicability is indeed broadened is unclear from this work. - “Here, we confirm that the generalization benefits imparted by PRECOND FUNCTIONAL-SAM are brought about convergence to a solution with lower curvature, as shown in the Table 4 for the 23.9M model” This only holds when comparing SAM-variants to AdamW. In Table 6, the lowest curvature does not imply lowest eval loss (but still lower curvature compared to AdamW). - “This further highlights how SAM, by default, in language modeling tasks is set up to minimize sharpness spuriously”.
The notion of spurious is a bit unclear here, as there is no way of assessing the non-spurious sharpness via the provided numbers, and PRECOND FUNC-SAM shows the lowest value for $tr(H_G)$. It could also just imply that overall the sharpness quantities investigated do not correlate well with generalization. Methods And Evaluation Criteria: Yes, except for the difference in compute budget between SAM and baselines. Theoretical Claims: I checked Appendix B. Experimental Designs Or Analyses: - as explained above, the difference in compute is problematic - Since the authors aim at minimizing the functional path, it would be good to demonstrate that this actually happens with PRECOND FUNC-SAM, e.g. via repeating Figure 4 for _FUNC SAM (precond)_, and potentially also for the other variants - Including _Func SAM_ and _SAM (precond)_ in Figure 3 would allow disentangling the effects of preconditioning and the functional formulation - Since rho is tuned, I suggest reporting the full results in the Appendix (ideally like in Figure 3) for all experiments - I recommend also reporting $\lambda_{max}(H_G)$ in Table 4 Supplementary Material: I reviewed the complete supplementary material. Relation To Broader Scientific Literature: SAM has mostly been applied to vision tasks, and to a lesser extent for fine-tuning in the language domain. Why it is not used more for language modelling in practice has not been investigated thoroughly. The authors show that training from scratch leads to deteriorated results compared to standard optimizers. This has not been demonstrated in published research. They connect their findings to previous work that highlighted the relevance of the Nonlinear Modeling Error (NME), a component of the Hessian of the loss, for sharpness minimization. They derive a SAM-variant that explicitly minimizes the functional part, and additionally employs a preconditioning on the SAM perturbation.
The preconditioning of the SAM perturbation is conceptually similar to the plethora of SAM variants that exist already. Those variants are discussed, but not compared against in the experiments. The paper is also the first one I am aware of that scales SAM to models bigger than 1B params. Essential References Not Discussed: The authors provide a comprehensive discussion of the relevant literature. Other Strengths And Weaknesses: - the presentation of the paper is nice. I appreciate the effort in communicating the central ideas and results clearly (both through good writing and appropriate use of colors and markers) - Given that there is a plethora of SAM papers and SAM variants, where each paper claims improved performance over baselines (standard SAM and standard optimizers), it is natural to ask why SAM is - to the best of my knowledge - not used for training LLMs. Investigating and improving the practicality of SAM for language tasks is therefore an important task, and the authors have made a good effort in this direction by finding a difference in sharpness minimization between vision and language tasks, and proposing a modified SAM variant to mitigate the problems. - The scale of the experiments (>1B training from scratch) is novel for SAM - As discussed by the authors, other studies have also proposed preconditioning of SAM or adjusting its perturbation model, and the authors “believe our decomposition approach is orthogonal to this line of work”. While I agree that the perspective of decomposing the sharpness term is novel for SAM, it would still be interesting to see if some of the other SAM variants show the same behaviour as SAM or PRECOND FUNCTIONAL-SAM in language tasks. Perhaps some of the problems are already implicitly mitigated by those variants.
Other Comments Or Suggestions: - grammar mistake in line 186: “we see that a simple but spurious way to decrease it is by make the network…” - typo in line 376: “interestingly, we also that” Questions For Authors: I'll repeat points from above, in decreasing relevance 1. (most relevant) Do the authors have evidence that their method improves over baselines for the same compute budget? 2. Have the authors compared to other SAM variants? Could be for small-scale setups 3. Does the decomposition of the Hessian imply a non-spurious sharpness measure that might correlate better with generalization (e.g. via $\delta_{func}$)? If so, could the authors report it alongside the metrics in Table 4? 4. (least relevant) Do the authors have any intuition or evidence about where the difference between the language and vision setup and the corresponding spurious sharpness minimization comes from? I.e., why would one setup be biased towards the respective path? Can it be connected to the loss function, or the number of classes, or not training until convergence? Is there a connection to label smoothing? I'm just curious, and I don’t expect a comprehensive answer here; it will likely not affect my score, so the authors can feel free to not answer this. Code Of Conduct: Affirmed. Overall Recommendation: 3
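[Editor's illustration of the object under discussion; a minimal sketch of the standard two-gradient SAM update (Foret et al., 2021), which to first order descends on the gradient-norm penalty the review refers to. The paper's functional variant modifies which component of that penalty's gradient is minimized; this sketch shows only vanilla SAM on a toy quadratic.]

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.05, rho=0.01):
    """One standard SAM step: perturb weights along the normalized
    gradient (ascent direction), then descend using the gradient
    evaluated at the perturbed point.  To first order this is
    gradient descent on loss(w) + rho * ||grad loss(w)||, i.e. the
    gradient-norm-penalty view of SAM."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascent perturbation
    return w - lr * grad_fn(w + eps)             # descent at perturbed point

# toy quadratic loss: L(w) = 0.5 * w @ A @ w
A = np.diag([1.0, 10.0])
grad = lambda w: A @ w
loss = lambda w: 0.5 * w @ A @ w

w = np.array([1.0, 1.0])
for _ in range(300):
    w = sam_step(w, grad)
```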
Rebuttal 1: Rebuttal: We thank the reviewer for their positive assessment, insightful comments, and detailed feedback. We are glad that you contextualized our contribution so accurately. &nbsp; ---- &nbsp; > ### Same compute budget comparisons As it stands, at an equal number of FLOPs, well-tuned AdamW does slightly outperform functional SAM. However, there are nevertheless important reasons why we believe functional SAM is still promising: \ &nbsp; - **Relevant Use Cases:** There are practically relevant use cases, such as in data-limited regimes or where the model size is constrained (e.g. due to inference time constraints), where the better performance at fixed step count is desirable and where the extra training overhead of functional SAM may not matter as much. \ &nbsp; - **Solution Quality Beyond Loss:** Even in the equal-FLOP setup, functional SAM’s flatter solution, with its better geometric properties, might be preferable to a method yielding sharper solutions, since it has been well documented [Liu et al., 2023] that flatness of the solution correlates more robustly with downstream performance than similar values of loss. \ &nbsp; - **Path to Efficiency:** Lastly, functional SAM is algorithmically compatible with efficient SAM approaches. Approaches like LookSAM [Liu et al., 2022] can decrease the overhead of SAM to 5-10% while maintaining much of the benefit of the method. We hope to test this and other approaches in future work, and we believe that the overhead can be greatly reduced. &nbsp; > ### Comparison to other SAM variants: - While interesting, comparing against the plethora of (effectively vision-focused) SAM variants was beyond the scope of this work. \ &nbsp; - During our initial investigation into SAM's poor performance in LMs, we did experiment with common, simpler variations, such as using the unnormalized perturbation step or trying different weighted combinations of the SAM gradient and the standard gradient.
However, these modifications did not appear to fundamentally resolve the performance degradation observed in language modeling tasks. \ &nbsp; - In retrospect, this is perhaps not entirely surprising. Critically, *none of these existing variations are explicitly designed to prioritize functional sharpness minimization over logit sharpness*. Our diagnosis revealed that this preferential treatment of functional geometry is precisely what is needed to make SAM effective in LMs, requiring a more significant departure from the standard SAM formulation. \ &nbsp; - It would nevertheless be interesting to explore other orthogonal SAM variants, which address issues like parameter scaling sensitivity (ASAM, Kwon et al., 2021), norm choices (Tahmasebi et al., 2024) or perturbation stability (ESAM, Li & Giannakis, 2024), in relation to functional SAM as well. But since the design space of new algorithms tends to explode, we have had to stay within our scope of making SAM effective in pre-training LMs. &nbsp; > ### Non-spurious sharpness measures that might correlate better with generalization This is an excellent remark. We do think that something which directly measures the *extent of functional curvature* (like the Frobenius norm of the functional Hessian) *or its extent relative to the logit curvature* could potentially be revelatory. We will try to add some measurements of this kind in the final version, but this more likely deserves a separate study of its own. &nbsp; > ### Vision vs Language Intuition: Another great question. Our current hypothesis relates to the **nature of the typical output distributions $p(y | x; \theta)$ in these domains**. - In many **vision** tasks, the probability mass often *concentrates over a relatively small number of semantically related classes* or visual scenes. The output distribution might be less dispersed, and *manipulating logit statistics could potentially align reasonably well with improving the underlying function's robustness*.
\ &nbsp; - In **language modeling** (specifically next-token prediction), the **distribution over the next token is often highly dispersed and heavy-tailed**, with non-negligible probability assigned to many different words. In such a setting, minimizing sharpness simply by manipulating logit statistics (e.g., making the distribution slightly peakier) might be an *"easy" path for the optimizer that doesn't translate to genuine improvements in the functional geometry*. &nbsp; > ### Suggestions on Experimental Designs Or Analyses - Thanks for these great suggestions, we will definitely incorporate them in the camera-ready version. &nbsp; ---- *We hope we have been able to address your concerns. We remain at your disposal should you have more questions or comments.* --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for responding to my comments, and for the insights about vision vs language and non-spurious sharpness measures. I still believe that the overall direction of this paper is nice, but my stance regarding the main points (compute budget, other SAM variants) does not change in light of the rebuttal. I will therefore keep my score.
Summary: This paper presents an intriguing exploration of the distinction between logit-space and functional-space perturbations within the context of Sharpness-Aware Minimization (SAM). The authors' identification of this subtle difference is interesting, and while the observed effects might appear minor, the potential ramifications for model training and generalization are substantial. Understanding how these perturbation spaces impact optimization could lead to more robust and efficient training methodologies. ## update after rebuttal Thanks for addressing some of the concerns. I think this work has merit and I will keep to my original score [weak accept] Claims And Evidence: The authors propose P-SAM to address issues related to "un-preconditioned geometry." However, it appears that the concept of pre-conditioning the inner adversarial optimizer with the outer optimizer's state, specifically using ADAM, is already a common practice, notably within JAX's Optax library - and in other works (like Granziol, JMLR; Gordon-Wilson and others). If the authors' P-SAM simply replicates this existing approach, then the novelty and contribution are limited. If, on the other hand, P-SAM introduces "further preconditioning" beyond established techniques, the justifications provided in the paper are insufficient. The explanations regarding the need for additional pre-conditioning lack the necessary depth and clarity to convince of its necessity or effectiveness. More concrete theoretical or empirical evidence is needed to substantiate this claim and differentiate P-SAM from existing implementations. While the authors suggest that F-SAM should deliver superior performance, the empirical evidence provided is underwhelming. A mere 0.03 improvement in loss, based on what appears to be a single seed and a fixed training budget, raises serious questions about statistical significance.
Without a clear understanding of the variance across multiple seeds under identical training conditions, it's difficult to ascertain whether this improvement is genuinely meaningful or simply a by-product of experimental fluctuation. For typical NLP problems, where variability between runs can often be considerable, a 0.03 difference might well fall within the noise - this should be commented on. To validate the effectiveness of F-SAM, surely a more rigorous experimental setup, including multiple seeds and a thorough analysis of variance, is essential? Furthermore, it would be beneficial to benchmark these improvements against typical NLP problem improvements to provide more context. Adding to my concern is the absence of F-SAM results in Figure 2. This would have been more informative had they included F-SAM. Methods And Evaluation Criteria: The general methods suggested make sense for the application - but, as per my comments, there are significance questions about the improvements. Theoretical Claims: all proofs checked (to the best of my ability) and all seem to work out. Experimental Designs Or Analyses: Coming back to the issue of the headline result, which seems to be a small improvement with a lack of clarity about its significance. As per previous comments, some multi-run analysis would be good to see if this [slight] improvement is truly significant. Supplementary Material: reviewed all materials available Relation To Broader Scientific Literature: there is a body of work on pre-conditioning, some of which is referenced in the paper - but I'd argue that's not the headline of the submission [as pre-conditioning by itself is not novel]. There are some prior works that might be of interest. A Random Matrix Theory Approach to Damping in Deep Learning, Diego Granziol, Nicholas Baskerville [arXiv].
Granziol's JMLR paper also looks at a Hessian pre-conditioning [using RMT] and there is related work from Gordon-Wilson and Izmailov and Das's arXiv paper Towards Quantifying the Preconditioning Effect of Adam. Essential References Not Discussed: The key innovation of the paper is the exploration of the distinction between logit-space and functional-space perturbations within the context of Sharpness-Aware Minimization. I could not find prior work that replicates this Other Strengths And Weaknesses: The core novelty of the paper is nice - logit-space and functional-space SAM. This could have some neat practical implications. While the observation of the logit/functional difference is an interesting contribution, the paper's central claims regarding F-SAM and P-SAM are weakened by apparent methodological shortcomings and a lack of empirical support. To strengthen the paper, the authors should address the (seemingly weak) statistical significance of their results, provide a more comprehensive experimental evaluation and offer a clearer justification for the proposed pre-conditioning method and how it differs from existing approaches. Other Comments Or Suggestions: The paper is well-written and an enjoyable read with no obvious errors or typos. The references need Capital letters protecting [minor issue]. Questions For Authors: all in previous comments Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank you for your feedback and for sharing the interesting works. In addition, we are pleased to hear that you find the exploration intriguing and recognize its potentially substantial ramifications. &nbsp; ---- > ### 1. Significance of Empirical Gains (0.03 loss): We understand the concern about seemingly small improvements in loss. However, - **Context is Key**: In large-scale LM pre-training (100M-1B+ params), **improvements of 0.03-0.06 validation loss** (Tables 1, 2, 3) are **practically meaningful**. They are *comparable to or exceed gains in well-respected works on LLM optimizers* (e.g., SOAP [Vyas et al., 2024] and CASPR [Duvvuri et al., 2024], which are both improvements over Shampoo [Gupta et al., 2018] and report similar if not lower gains) and newer variants of Attention [Leviathan et al., 2025], LLM Fusion [Mavromatis et al., 2024], and corpus deduplication [Lee et al., 2022], to list a few. All of these mentioned papers consider the C4 dataset, and so the differences are comparable. \ &nbsp; - **Statistical Significance:** - As discussed in Lines 263-274 of Section 5.2, for prototyping we conducted our experiments using **3 random seeds**. We observed that validation loss results were typically stable to the **3rd or 4th decimal place**, indicating very low variance between runs. The reported results in Table 3, averaged over 3 seeds (where the methods rank as 3.86 vs 3.88 vs 3.90 for precond. Functional SAM vs Functional SAM vs AdamW), are therefore highly significant and not due to noise. We will clarify this. \ &nbsp; - For the later experiments at the much larger parameter scale, we did have to report single seeds due to the constraints of time and the corresponding costs of these experiments. However, to address this valid concern of yours, we have carried out an experiment on the **1.2 B parameter model over 3 seeds**, and the averaged results in a fixed-length (50K step) setting are **precond.
Functional SAM 2.70** versus **AdamW 2.73**. This highlights that our results continue to be statistically significant even at these scales. We will include these results in the revision. \ &nbsp; Hence, we can be confident that the gains delivered through our method are genuinely meaningful. In fact, it would serve us to remember that standard SAM *consistently performed worse* than AdamW (Fig 1). We have been able to turn things around and show, for the first time, *positive* gains from SAM-style regularization in this setting over AdamW. And, *the difficulty of achieving any improvement over tuned AdamW at this model scale cannot be overstated.* &nbsp; > ### 2. Preconditioning Novelty (vs. Optax, Granziol, etc.): We appreciate the reviewer pointing out related work and common practices. - Our contribution regarding preconditioning (Sec 4.2) should be understood specifically as addressing the potential **mismatch between SAM's default Euclidean perturbation** and the preconditioned geometry used by the *outer optimizer* (AdamW), particularly relevant for heterogeneous Transformer landscapes. Moreover, we also provide a theoretical argument (App B.1) that preconditioning can help **re-balance logit/functional paths**, which enriches our perspective about preconditioning as well. \ &nbsp; - While general preconditioning and using Adam's state (as perhaps done implicitly in some Optax implementations) are known, what constitutes novelty in this specific context is the **explicit motivation for fixing SAM's failure in LMs** by aligning geometries/rebalancing paths, and the *demonstration of its effectiveness especially in combination with functional SAM* (Table 3 shows precond. functional SAM outperforms plain functional SAM and precond. SAM). \ &nbsp; - Works like Granziol et al. and Das et al., while quite intriguing, explore preconditioning for the main step, **not specifically for SAM's perturbation**.
We will refine Sec 4.2 and Related Work to better delineate our specific contribution versus existing preconditioning concepts. &nbsp; ---- Hopefully, this addresses your pending concerns, but please let us know if you have any more questions or comments. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my concerns. Clarification of the significance of the results would help a prospective reader, as would making clear how the approach you suggest differs from other works readers may be familiar with. I will keep my score - I do think this is potentially interesting work, with several possible extensions.
Summary: The paper investigates the limitations of SAM in NLP tasks, where it often degrades performance despite its success in vision tasks. The authors find that SAM's effectiveness varies across domains due to differences in sharpness minimization pathways: the logit path and the functional path. In NLP, the logit path dominates, leading to spurious sharpness minimization. The paper proposes two alternative algorithms: Functional SAM and Preconditioned SAM. Empirical evaluations demonstrate improved performance over AdamW and SAM in NLP tasks. Claims And Evidence: See Weakness. Methods And Evaluation Criteria: See Weakness. Theoretical Claims: N/A Experimental Designs Or Analyses: See Weakness. Supplementary Material: Yes Relation To Broader Scientific Literature: See Summary. Essential References Not Discussed: Some improved variants of SAM: [1] Du et al. Efficient sharpness-aware minimization for improved training of neural networks. (ICLR 2022) [2] Mueller et al. Normalization layers are all that sharpness-aware minimization needs. (NeurIPS 2023) [3] Wang et al. Improving generalization and convergence by enhancing implicit regularization. (NeurIPS 2024) Other Strengths And Weaknesses: **Strengths** - The paper presents a novel decomposition of SAM’s sharpness minimization update into logit and functional paths, revealing that the logit path dominates in NLP tasks. - The proposed algorithms, Functional SAM and Preconditioned SAM, empirically outperform SAM in certain NLP tasks. **Weaknesses** - **Computational Overhead**: Similar to SAM, Functional SAM and Preconditioned SAM require **twice** the gradient computation per step, making them computationally expensive. Consequently, while the proposed algorithms slightly outperform Adam *given the same number of iterations*, Adam may still perform better when compared *under equal computational cost*, which is a fairer comparison.
- **Concern regarding long-term performance.** *It is unclear whether the proposed algorithms will be surpassed by AdamW or SAM given sufficient training time.* The experiments on C4 are conducted for a relatively small number of steps. Even with a 1.2B model, the final validation loss remains above 3, which is **significantly higher** than established baselines. For instance, in [4], a 1.2B model achieves a final validation loss of 2.56. - **Insufficient experimental evidence** to explain how the proposed algorithms work. Although the proposed algorithms are designed to enhance the functional path, no experiments demonstrate whether they indeed result in a larger functional path than SAM. - **Insufficient theoretical support.** While the proposed algorithms are motivated by addressing the domination of the logit path, their formulation relies on several approximations. Theoretical support is needed to substantiate that they indeed lead to a larger functional path, particularly for Preconditioned SAM (whose connection to the main motivation remains unclear). [4] Zhao et al. Deconstructing What Makes a Good Optimizer for Autoregressive Language Models. (ICLR 2025) Other Comments Or Suggestions: See Weaknesses. Typo: (Line 377) "We also that" Questions For Authors: See Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed comments and address some of their concerns here. &nbsp; ---- &nbsp; > ### 1. Computational Overhead: We agree that in its current form, Functional SAM is not as FLOPs efficient as Adam. - However, FLOPs are not the only limiting factor in training; for example, in certain scenarios industrial practitioners tend to be **data limited** or **model size limited**. In these scenarios, *extra training time is acceptable for better final quality*, and functional SAM can be a better fit. \ &nbsp; - In addition, functional SAM is compatible with efficiency techniques like LookSAM [1]; this literature suggests that we may be able to reduce the overhead to 5-10% while maintaining most of the benefit of (Functional) SAM. We hope to pursue this avenue in future work. \ &nbsp; - Our focus here was establishing the *effectiveness* of the functional path approach first. Our work provides a crucial understanding of SAM's limitations and offers the first validated approach to successfully apply sharpness-aware methods to large-scale LM pre-training (as noted by **Reviewer 8e58**). [1] https://arxiv.org/abs/2203.02714 &nbsp; > ### 2. Concern regarding long-term performance: Please have a look at *Table 3*, where we already show that the gains provided by functional SAM are sustained over longer training durations as well. **Our 1.2 B model gets 2.61 in terms of validation loss**, which is very close to the 2.56 validation loss from [4] which you have alluded to. This 0.05 difference between the two works can easily be because the empirical setups, such as hyperparameters and the exact architectural implementation, might not be identical. *Thus, we can be confident that functional SAM does yield long-term performance benefits as well.* &nbsp; > ### 3. Demonstrating Enhanced Functional Path: This is a great suggestion, and we are working on measurements to show this.
The experiments did not finish in time for the rebuttal but will be included in the revision. &nbsp; > ### 4. Theoretical Support and Preconditioned SAM Motivation: - **Functional path formulation:** The functional SAM update (Eq. 11) itself **involves no approximation, and is an exact analogue of the original SAM update (Eq. 2)**, but where the contributions along the logit-sharpness path have been suppressed by design. We used the penalty SAM formulation for discussion following previous works, which take advantage of the fact that penalty SAM is more amenable to theoretical analysis while simultaneously giving similar performance to original SAM. This gave us an easier way to present and delineate the differences between SAM and Functional SAM. \ &nbsp; - **Preconditioned SAM:** The motivation is twofold: - (1) Empirically motivated: To address the mismatch between SAM's spherical perturbation and AdamW's elliptical perturbation due to its diagonal preconditioning, which might cause issues in heterogeneous landscapes like Transformers. \ &nbsp; - (2) Theoretically motivated: Preconditioning the perturbation by approx. $H_{G}^{-1}$ (approximated by AdamW's ${M}^{-1}$) can selectively dampen the logit path ${\delta}_{logit} = H_G \epsilon^\ast$ more than the functional path $\delta_{func} = H_F \epsilon^*$, thus promoting the functional path (detailed in App B.1, where we made basic assumptions to make the argument quantitative). We will clarify this motivation in Sec 4.2. &nbsp; > ### 5. Missing References [1-3]: - [1] and [2] are related to SAM in that they propose more efficient variants [1] or modifications based on normalization layers [2]. But both of these are exclusively evaluated in vision, where they do not interface with the problem faced by SAM in language modeling tasks. \ &nbsp; - Although ref [3] shares similar motivations to SAM and is quite interesting, they in their own words say that the “specific approaches differ significantly”.
\ &nbsp; - These works are firmly orthogonal to our core contribution: diagnosing the *specific failure mode* of SAM in LMs (logit path dominance) via a novel decomposition and proposing targeted fixes (Functional SAM, precond. SAM) that make SAM *effective* in this domain for the first time. Regardless, these will be good works to discuss in our paper, and we thank you for suggesting them. ---- &nbsp; *Let us know if we can clarify any further points. If we have answered your concerns, **please consider giving your score a second thought**.*
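As an editorial illustration of the two perturbation geometries contrasted in the rebuttal above (SAM's spherical ascent step versus an ascent step rescaled by the base optimizer's diagonal preconditioner), here is a minimal numerical sketch; the function names and NumPy implementation are illustrative, not the authors' code:

```python
import numpy as np

def sam_perturbation(grad, rho):
    # Standard SAM: spherical (Euclidean) ascent step of radius rho.
    return rho * grad / (np.linalg.norm(grad) + 1e-12)

def precond_sam_perturbation(grad, second_moment, rho, eps=1e-8):
    # Sketch of the preconditioned perturbation discussed in the rebuttal:
    # rescale the ascent direction by Adam's diagonal preconditioner
    # (~ 1/sqrt(v)), so the perturbation follows the optimizer's
    # elliptical geometry rather than a sphere.
    d = grad / (np.sqrt(second_moment) + eps)
    return rho * d / (np.linalg.norm(d) + 1e-12)

g = np.array([1.0, 2.0, 2.0])    # toy gradient
v = np.array([1.0, 4.0, 16.0])   # toy Adam second-moment estimates
e_sam = sam_perturbation(g, rho=0.1)
e_pre = precond_sam_perturbation(g, v, rho=0.1)
# Both perturbations have norm rho, but the preconditioned one dampens
# coordinates with large second-moment estimates.
```

Both perturbations have radius `rho`; only the direction changes, which matches the rebuttal's claim that the modification is a small change relying on state the base optimizer already maintains.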
Summary: The paper introduces Functional-SAM (F-SAM), an alternative to Sharpness-Aware Minimization (SAM) that aims to address its poor performance in NLP tasks. The authors argue that SAM's failure in language modeling is due to its focus on regularizing logit statistics rather than modifying the functional properties of the neural network. They propose F-SAM, which modifies sharpness through the functional path, and PRECONDITIONED-SAM (Pre-SAM), which improves SAM’s perturbation by adapting it to the optimizer’s preconditioning scheme. Their empirical results show very slightly improved performance over both SAM and ADAMW across multiple model scales and training settings. Claims And Evidence: The main claims of the paper are: - SAM performs poorly in NLP tasks because it minimizes sharpness primarily by modifying logit statistics rather than the network’s functional properties. - F-SAM improves sharpness regularization by emphasizing functional modifications over logit-based adjustments. - Pre-SAM further improves sharpness minimization by adapting perturbations to the optimizer’s preconditioning scheme. The combination of F-SAM and Pre-SAM shows slight performance improvement (max improvement displayed is 0.06 loss points on values around 3.5) in large-scale language modeling. The evidence includes: - Theoretical decomposition of sharpness minimization into logit and functional paths. - Empirical validation of the proposed algorithms on multiple model scales (from 2M to 1.2B parameters) in both fixed-length and Chinchilla-style training regimes. - Hessian eigenvalue analysis showing F-SAM reduces sharpness more effectively than SAM. However, key claims remain heuristic rather than rigorously proven, and the cost-benefit trade-off is not discussed in sufficient depth. Further, the empirical results show only *very* marginal improvements over SAM and more critically ADAMW, raising questions about the practical significance of the proposed method.
Methods And Evaluation Criteria: The proposed methods are evaluated using: - Validation loss on language modeling tasks with multiple model scales. - Hessian eigenvalue analysis to assess sharpness reduction. - Performance comparisons with SAM and ADAMW under equivalent computational budgets. The evaluation is generally well-structured but has critical weaknesses: - The efficiency trade-offs of F-SAM and Pre-SAM are not analyzed and are merely discussed in Section 7. - No training time comparisons or FLOP analysis to assess whether F-SAM justifies its additional computational cost. - No comparisons to alternative sharpness minimization methods beyond SAM, leaving open the question of whether F-SAM is the best solution for this problem. Further, the claim about improved versions of SAM (Kwon et al., 2021; Tahmasebi et al., 2024; Li & Giannakis, 2024) being an orthogonal line of work is not substantiated and would benefit from either a more detailed discussion or empirical comparison. Theoretical Claims: The paper presents a decomposition of sharpness minimization into logit and functional paths but does not provide a rigorous proof that F-SAM leads to better generalization. Instead, the claims are supported by empirical observations and qualitative reasoning, which are unfortunately not sufficiently backed up by experimental results in my opinion. Unfortunately, there is no formal proof that logit-path minimization is suboptimal for NLP. Furthermore, the paper does not provide a theoretical justification for why F-SAM gives better convergence guarantees than SAM. The appendix introduces ANGLE-SAM, which generalizes SAM by parameterizing perturbations using an angle $\phi$, showing that F-SAM and SAM are special cases. However, this remains an intuitive generalization rather than a rigorous theoretical result. I believe exploring such a theoretical generalization and studying its properties with respect to $\phi$ could be a strong contribution to the paper.
Experimental Designs Or Analyses: The experimental setup is very clear and sufficient in terms of dataset and model choices, but it could be improved in several ways: - Computational cost is not analyzed, making it unclear whether F-SAM is worth the additional cost. - No training time comparisons between F-SAM, SAM, and ADAMW. - Limited discussion on efficiency: if F-SAM is significantly more expensive while offering small improvements, it is not practically useful. - The results are underwhelming in terms of performance improvements, with the best improvement being 0.06 loss points on values around 3.5. This raises questions about the practical significance of the proposed method. Supplementary Material: The Appendix is interesting and provides insights on Angle-SAM, which I believe could be a good contribution to the community but does not seem ripe yet. Relation To Broader Scientific Literature: The paper is very well-situated within the sharpness regularization literature, and even in the preconditioned optimization literature, even though one could regret the absence of preconditioning-based optimization methods like Shampoo (a precursor of SOAP by Vyas et al.) or [K-FAC](https://arxiv.org/abs/1503.05671). Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Strengths:** - Identifies a limitation of SAM in NLP and proposes an alternative. - Provides a novel decomposition of SAM into logit and functional paths. - Coherent empirical framework across a wide range of model scales. - Hessian eigenvalue analysis to support empirical claims is given. **Weaknesses:** - No formal proof of F-SAM’s theoretical advantages. - The computational efficiency is not analyzed, making practical applicability uncertain. - The empirical results are very weak and would not justify the additional cost of F-SAM. Other Comments Or Suggestions: N/A Questions For Authors: - Are the proposed methods adapted for fine-tuning settings?
Furthermore, do the given pre-trained models lead to better zero/few-shot performance on downstream tasks? - Would the proposed methods be compatible with parameter-efficient fine-tuning techniques such as LoRA or Adapters? Could they be combined in a fine-tuning pipeline? - SAM is often seen as a regularization technique; however, it is not explained in the paper how F-SAM and Pre-SAM relate to explicit regularization techniques. Could they be combined with L1/L2 regularization or other explicit regularization methods to improve performance further? - The reported Hessian metrics are not accompanied by their variance, yet such metrics are known to be empirically noisy. Could you provide more details on the Hessian analysis and how it was conducted? If the overall weaknesses and questions are addressed, I would be happy to raise my score, although I believe the paper would benefit from more substantial improvements (Angle-SAM seems to be a very promising theoretical approach) to justify a higher rating. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We thank you for your thorough review. We address the primary concerns below: &nbsp; ---- &nbsp; > ### 1. Significance of Performance Improvements: - In LLM pre-training (100M-1B+ params), achieving consistent validation loss improvements of 0.03-0.06 (as seen in Tables 1, 2, 3) is **highly significant**. This magnitude is comparable to or *exceeds* gains reported in recent, well-regarded optimizers for LLMs, such as SOAP [Vyas et al., 2024, 0.04-0.06 improvement] and CASPR [Duvvuri et al., 2024, ~0.01-0.03]. This is also similar to the size of gains from architectural and data processing changes [Leviathan et al., 2025; Mavromatis et al., 2024; Lee et al., 2022]. - Crucially, **prior to our work, SAM consistently *degraded* performance** compared to AdamW in this setting (Fig 1). Our methods are the first to successfully leverage SAM-style regularization for *improved* LM pre-training results across scales, representing a notable advance in the field. - As clarified in our response to Reviewer Eee3, these gains are statistically significant and indeed represent genuinely meaningful improvements. &nbsp; > ### 2. Computational efficiency concerns - *Costs relative to SAM and AdamW:* Functional SAM has virtually identical computational cost (FLOPs and memory) to standard SAM. Both require one forward pass, one backward pass for the initial gradient, and one backward pass (VJP) for the SAM/F-SAM gradient, resulting in ~2x the cost of AdamW per step. We will include explicitly measured time per step in the revision. - *Fixed Data Budget & Model Size scenarios:* Our primary comparison point in the paper is *equal steps*, since this is practically relevant in scenarios limited by data availability or required model size (e.g., inference constraints), where extra training time is acceptable for better final quality.
- *Future Efficiency:* As discussed in Section 6, Functional SAM is compatible with efficient SAM methods like LookSAM [1], offering a clear path to reducing the overhead to ~5-10% in future work. Our focus here was the fundamental advance of making *any* SAM variant work effectively with language models, as also identified by Reviewer 8e58. &nbsp; > ### 3. Comparison to Other Methods: - *Improved SAM Variants:* Methods like ASAM (Kwon et al., 2021), ESAM, etc., primarily target *vision* tasks and do not address the fundamental logit-path dominance issue we identified in *language modeling*. Our decomposition and Functional SAM are thus orthogonal contributions aimed at *fixing SAM's failure in a new domain*. - *Preconditioning Methods (Shampoo/K-FAC):* Our preconditioned-SAM technique is indeed compatible with any preconditioning method, and it would be interesting to try our technique with base optimizers which use non-diagonal preconditioning. We leave this to future work. &nbsp; > ### 4. Theoretical Claims: - **Proofs:** We followed a common paradigm in deep learning research: identify an empirical issue, propose a diagnostic (logit/functional path), develop a principled fix (functional SAM), and validate empirically. Rigorous proofs for SAM-like methods are challenging and sometimes unhelpful (see response to X2he, point 2). Our consistent empirical gains across scales strongly support our hypothesis. - **Angle-SAM:** We appreciate the reviewer's interest in Angle-SAM, which arises naturally from our decomposition. We de-emphasized it in the current work because it did not add additional benefits for our primary goal of making SAM effective for LLMs. &nbsp; > ### 5. 
Experimental Details (Hessian Variance): Hessian metrics (Table 4/App Table 6) were computed using standard techniques (e.g., Lanczos for $\lambda_{max}$, Hutchinson for trace) averaged over 50 batches from the validation set, where each batch is 256 sequences of length 512, and thus **these metrics are aggregated over ~6.5 million tokens**. We noticed that even as few as 5-10 batches already gave stable results, but in our tables we report values with 50 batches for additional precision. &nbsp; > ### 6. Specific Questions: - *Fine-tuning/Zero-shot:* Future work, but flatter minima (which we achieve, Table 4) often correlate with better transfer and robustness [Liu et al., 2023]. Pruning results (Fig 5) also suggest improved robustness. - *PEFT Compatibility:* Likely compatible, but the interaction needs study. - *Explicit Regularization (L1/L2):* Yes, func. SAM can be seen as a regularizer and is compatible with L1/L2 regularization (we already use a weight decay of 0.1). &nbsp; We believe our paper offers a novel diagnosis and the first effective solution for applying SAM to large-scale LM pre-training, a significant and previously unsolved problem. The empirical gains are meaningful in this context and demonstrate the success of our approach. ---- *Let us know if you have additional questions; if we have answered your concerns, we hope you will consider revisiting your review score.*
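For readers unfamiliar with the trace estimator named in point 5 above, a minimal self-contained sketch of Hutchinson's method follows; the toy Hessian and probe count are illustrative, not the authors' setup:

```python
import numpy as np

def hutchinson_trace(hvp, dim, num_probes=1000, seed=0):
    # Hutchinson's estimator: for Rademacher probes z, E[z^T H z] = tr(H),
    # so averaging z @ hvp(z) over probes estimates the trace using only
    # Hessian-vector products, never the explicit Hessian.
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=dim)
        total += z @ hvp(z)
    return total / num_probes

# Toy non-diagonal "Hessian" with known trace 5, standing in for the
# matrix-free Hessian-vector products used on real models.
H = np.array([[2.0, 1.0], [1.0, 3.0]])
estimate = hutchinson_trace(lambda v: H @ v, dim=2)
# The estimate concentrates around tr(H) = 5 as the probe count grows.
```

In practice the `hvp` callable would be an autodiff Hessian-vector product over validation batches, which is what makes the batch count mentioned in the rebuttal the main cost knob.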
Summary: This paper investigates why Sharpness Aware Minimization (SAM), effective in vision tasks, underperforms in natural language processing (NLP). The authors identify that SAM in NLP overly focuses on reducing sharpness via logit manipulation rather than improving the model's functional geometry, leading to spurious optimization. To address this, they propose Functional-SAM, which prioritizes functional sharpness reduction, and preconditioned SAM, aligning perturbations with optimizer geometry, demonstrating superior generalization across NLP tasks and model scales compared to SAM and AdamW. Claims And Evidence: While the logit vs. functional sharpness decomposition is intuitive, the theoretical justification relies heavily on empirical observations and simplified assumptions (e.g., free independence of Hessian components). A more rigorous mathematical foundation for the decomposition’s validity across architectures and loss landscapes is lacking. Methods And Evaluation Criteria: Downstream utility (e.g., fine-tuning, robustness) is only briefly explored (via pruning), leaving practical NLP benefits underdeveloped. Theoretical Claims: Lacks theoretical guarantees. 1. No convergence analysis for Functional-SAM or preconditioned SAM. 2. No theoretical bounds on how much functional sharpness reduction improves generalization. Experimental Designs Or Analyses: 1. The evaluation primarily focuses on language modeling using the C4 dataset and decoder-only Transformers. The paper does not validate the proposed methods (Functional-SAM and preconditioned SAM) on other NLP tasks (e.g., text classification, machine translation) or diverse datasets, raising questions about broader applicability. 2. Reliance on the C4 dataset alone limits insight into performance on noisy or domain-specific corpora. Supplementary Material: I have read the supplementary material. Relation To Broader Scientific Literature: 1. This paper provides empirical insights into sharpness regularization. 2.
Based on their findings, the authors propose Functional-SAM and Preconditioned SAM. Essential References Not Discussed: The paper references [1], which analyzes gradient norm penalties in the context of SAM sharpness. To clarify the novelty and distinctions of this work, could the authors explicitly discuss how the decomposition of the sharpness gradient in their approach differs from that in [1]? [1] Penalizing Gradient Norm for Efficiently Improving Generalization in Deep Learning. ICML 2022 Other Strengths And Weaknesses: The methods inherit SAM’s 2× computational overhead, and the perturbation radius requires careful tuning, especially for large models. The paper notes that tuning $\rho$ becomes coarser for billion-parameter models, potentially limiting real-world adoption where hyperparameter optimization is costly. Additionally, combining Functional-SAM with preconditioning introduces more complexity, which may hinder ease of use. Other Comments Or Suggestions: To ensure reproducibility and facilitate further research, could the authors provide access to the implementation code? Questions For Authors: See questions above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed comments and their positive view of our paper. We address their concerns below. ---- &nbsp; > ### Theoretical justification of the decomposition - The decomposition into logit vs. functional sharpness is valid in *any setup involving the composition of a loss function with the outputs of a parameterized function*. \ &nbsp; - This describes virtually all scenarios in deep learning, and there is **fundamentally nothing which limits its validity to a specific architecture**. We will clarify this aspect in the text to avoid any confusion. &nbsp; > ### Lack of theoretical guarantees - **Convergence Analysis:** We appreciate the comment. However, rigorous convergence analysis for SAM-type methods is notoriously complex: - SAM itself has lacked a general convergence proof in the non-convex case, with initial results only recently published [8]. - Older convergence analyses [1-5] required significant modification of the algorithm itself or strong assumptions to make progress. - The utility of convergence proofs for the design of SAM-like algorithms is also debatable. There is empirical and theoretical evidence that SAM is regularizing sharpness throughout training, rather than just selecting final flat minima to converge to in either the early or late time dynamics only [6]. \ &nbsp; - **Generalization Bounds:** We note that **none of the existing generalization bounds can account for the ineffectiveness of SAM in language modeling**. This observation highlights the potential *pitfalls of pursuing generalization bounds* without adequately accounting for the impact of optimization dynamics.
&nbsp; > ### Concerns about efficiency and deployment of methods **Computational Overhead:** We agree that the computational overhead needs to be improved; we believe that SAM efficiency methods like LookSAM [7] should be compatible with Functional SAM and can reduce the overhead to a more modest 5-10% — which we hope to demonstrate in future work. - **Rho $\rho$ Tuning:** For large models, $\rho$ can be added to scaling studies already used for other hyperparameters (learning rate, weight decay), using detailed experiments at small scales to predict good hyperparameter values at large scales. &nbsp; > ### Reliance on C4 - C4 is widely used for benchmarking LLMs, and using it facilitates easier comparison to those works. - Additionally, as mentioned in the paper, C4 is a clean dataset and thus a hard test ground for regularization techniques, like SAM, that aim to improve generalization. *We expect the gains to be even higher when the dataset is noisy.* - Besides, C4 is a gigantic dataset with significant coverage of most textual corpora on the internet. Thus, the evaluation here being dataset-specific is much less of a risk. &nbsp; > ### Decoder-only Transformers We focus on this setting to be closer to the industrial use-case, as **decoder-only Transformers are really the workhorse of generative LLMs**. Downstream tasks can all be modeled on top of these decoder-only models, say, via in-context learning. &nbsp; > ### Practical NLP benefits underdeveloped We understand your concern, and reiterate that before this work, there was not even a clear path to making SAM effective on language tasks. We believe our work has demonstrated a viable path, and we hope to develop a truly practical version of the method in future work. We also recommend checking Reviewer 8e58’s remarks, where they attest to this precise point.
&nbsp; > ### Relation to "Penalizing Gradient Norm for Efficiently Improving Generalization in Deep Learning" The mentioned paper only uses the gradient-norm penalty as a regularizer and does not analyze it, let alone use a decomposition of the sharpness gradient. &nbsp; > ### Complexity of combining Functional-SAM with preconditioning The implementation is trivial, just a **one-liner change** in code (with negligible overhead). We don’t need to maintain any additional preconditioning statistics; we can rely directly on those given by the base optimizer (Adam). &nbsp; > ### Code Yes, we aim to release code by the camera-ready stage. &nbsp; --- &nbsp; *Let us know if you have any further questions; if we have addressed your concerns, we hope you will consider **revisiting your review score**.* &nbsp; &nbsp; [1] M. Andriushchenko and N. Flammarion. Towards understanding sharpness-aware minimization. ICML 2022. [2] P. D. Khanh et al. Fundamental Convergence Analysis of Sharpness-Aware Minimization. NeurIPS 2024. [3] P. L. Bartlett et al. The dynamics of sharpness-aware minimization. JMLR 2023. [4] Y. Dai et al. The crucial role of normalization in sharpness-aware minimization. NeurIPS 2023. [5] K. Ahn et al. How to escape sharp minima with random perturbations. ICML 2024. [6] https://proceedings.mlr.press/v202/agarwala23a [7] https://arxiv.org/abs/2203.02714 [8] https://arxiv.org/abs/2503.02225
Curvature-aware Graph Attention for PDEs on Manifolds
Accept (poster)
Summary: This paper introduces a Curvature-aware Graph Attention method specifically designed for solving PDEs on manifolds. It addresses the limitations of previous approaches that focused on Euclidean spaces or overlooked the intrinsic geometry of manifolds. The proposed method uses fast parallel transport and tensor products on manifolds to modify the original message passing and aggregation process. The authors also introduce a sub-tree partition method to optimize parameter-sharing and reduce computational complexity. Experimental results show this novel attention mechanism improves the performance on solving PDEs on manifolds. Claims And Evidence: Yes. In the *Experiments* section, the results on solving various PDEs are better than those of other PDE solvers. In the ablation study, the proposed Curvature-Aware Attention module indeed improves the performance of three attention-based graph neural networks. Methods And Evaluation Criteria: Yes. The evaluation datasets, from three important physical PDEs, are really meaningful. Theoretical Claims: Yes. I have checked the derivations for the closed forms of parallel transport on the sphere and the Poincaré half-plane; I think they are correct. Experimental Designs Or Analyses: For dataset generation, this paper first discretizes the parameterized manifold, then selects a function $u(x,t)$ and computes the source term $f(x,t)$ to obtain a data tuple $(u^{(t)}, f^{(t)}, u^{(t+1)})$. I think this process makes sense. In the main results in Table 1, the authors use two criteria, $L^2$ and $H^1$, which makes the results more convincing and comprehensive. Although the authors have evaluated the model on five different 2-manifolds, the Lorentz model for hyperbolic geometry is neglected. I point this out since the Lorentz model is tightly connected to the theory of relativity, in which the wave equation plays a vital role.
In Appendix H, the authors provide visualizations of prediction results, which intuitively support the effectiveness of Curvature-aware Graph Attention. Supplementary Material: Yes, I have reviewed all the supplementary material, especially Appendix B, C, D, and H. Relation To Broader Scientific Literature: The main contribution of this work is that the proposed Curvature-aware Graph Attention replaces vanilla attention in GT and GAT, and it improves performance on solving PDEs on manifolds. I think this work will influence methods for solving manifold PDEs, leading them to focus more on intrinsic geometry. Even for the development of graph neural networks, this paper may provide some new ideas about geometric GNNs. Essential References Not Discussed: No, there are no essential related works ignored. Other Strengths And Weaknesses: Pros: 1. Considering the manifold's curvature via parallel transport, instead of explicitly encoding curvature with a neural network. 2. It generalizes matrix multiplication with tensor fields, combined with the proposed subtree partition to optimize parameter sharing, reducing parameter sizes and computational complexity. 3. The extensive experimental results, both quantitative and visual, support the contributions claimed by the authors. Cons: 1. The motivation for using attention-based GNNs to solve PDEs is unclear. 2. I think some points need further discussion: after running BFS with depth $d$, the scale of subtrees may be very imbalanced; will this situation affect the model performance? Will the random selection of the source nodes affect the results? 3. Eq. (17) contains the parallel transports of a tangent vector and a covector, but the paper only gives the closed form of tangent-vector parallel transport. Under general settings, the two formulas are not the same. 4. 
For the experiments, the wave equation is important for the theory of relativity, which connects tightly with the Lorentz model. Thus, I think this manifold should be considered. Other Comments Or Suggestions: 1. In the *Preliminaries* section, I think there is a typo: “(2,0)-tensor $u^∗ \otimes v^∗$”. An (s, t)-tensor $T$ is a multilinear map that takes s cotangent vectors and t tangent vectors as inputs, so this example is a (0,2)-tensor. Questions For Authors: 1. The method uses local geometry approximation and Gaussian curvature estimation, replacing complex surfaces locally with constant-curvature surfaces. For zero curvature, the surfaces are isomorphic to the Euclidean 2-plane. For positive curvature, the surface is diffeomorphic to a sphere by the Gauss–Bonnet theorem. But for negative curvature, the surfaces are only conformally equivalent to the Poincaré half-plane; if a neighborhood of the graph node $u$ involves large-scale paths or paths around the entire surface, the Poincaré half-plane model may not accurately reflect the parallel transport characteristics of the original surface. How do you avoid this situation? 2. In the second line of Eq. (17), the authors want to share the parameters from node $u$ to $w$ by parallel transport, but how do you guarantee the consistency between $\Gamma(\eta)_0^t[\tilde{w}_1\otimes \tilde{w}_2]$ and $\tilde{w}_1\otimes \tilde{w}_2$? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # Response To Reviewer DBfv We sincerely appreciate your constructive feedback and meticulous evaluation of our work. Below, we provide responses to each point raised. > **Q1**. In the _Preliminaries_ section, there is a typo: (2,0)-tensor $u^∗⊗v^∗$ should be (0,2)-tensor. **A:** Thank you for catching this typo. We will fix it in the next version of the manuscript. > **Q2.** The motivation for using attention-based GNN to solve PDEs is unclear. **A:** Reviewer y7yT raised similar concerns. While accuracy is crucial in many applications, **time-sensitive scenarios prioritize speed over minor inaccuracies**, motivating the development of neural solvers like **Neural Operators** in the ML community (see **Related Work: Neural Operator** in our paper). A key example is **gaming engines**, where faster rendering enhances real-time FPS. To achieve realistic illumination based on blackbody radiation, the heat equation is solved to determine the surface temperature distribution. In this case, **the underlying geometry remains fixed**, and high accuracy is less critical as long as the rendering appears plausible. **Smooth scene transitions (speed) matter far more than pixel-perfect details (accuracy).** Besides, our GNN solver is SE(3)-invariant and can operate without direct coordinate input. Even if retraining is required for a new mesh, **it excels in scenarios where the same mesh appears repeatedly under rotations and translations**, which are common in **material science and gaming engines** (e.g., **triply periodic minimal surfaces** in crystals and block copolymers). In these cases, our method outperforms traditional solvers like FDM and FEM. We will revise our manuscript to incorporate these examples, further reinforcing the motivation for our approach. > **Q3.** The subtree partition may be very imbalanced. Will this affect the model performance? Will the random selection of the source nodes affect the results? 
**A:** Your concern about stability makes sense. We conducted an extra experiment involving random subtree partition **100 times** on a torus with **1024 nodes**. It shows that a rather imbalanced partition is unlikely and the outcome is relatively stable:

- Subtree amount: 33.51±1.61
- Mean and standard error of subtree scale: 30.63±1.47, 1.92±0.18
- $L^2$ & $H^1$ loss (%): 1.69±2.85, 1.93±2.82

Other settings align with the original paper. > **Q4.** Eq. (17) contains the parallel transports (PT) of a tangent vector and a covector, but the paper only gives the closed form of tangent-vector parallel transport. **A:** By the musical isomorphism, we can identify a tangent vector $v$ with a 1-form $v^♭$, a.k.a. a cotangent vector, via $v^♭(u):=\braket{v,u}$. The motivation to introduce the PT $Γ(η)$ is to use its **inner-product-preserving nature**. Thus the PT of a 1-form is $Γ(η)v^♭(u):=\braket{Γ(η)v,u}$, which is therefore equivalent to the PT of a tangent vector in implementation. Thus in your example, the PT of a (0,2)-type tensor is $Γ(η)(\tilde w_1^\*⊗\tilde w_2^\*)(u,v)=\braket{Γ(η)\tilde w_1,u}\braket{Γ(η)\tilde w_2,v}$. > **Q5.** For the experiments, the wave equation is important for the theory of relativity, which connects tightly with the Lorentz model. Thus, I think this manifold should be considered. **A:** Yes, the wave equation is crucial in relativity. It would be valuable if our model could also handle it on the Lorentz model, which is, however, not a Riemannian manifold since its metric is **not positive-definite.** This raises questions about whether the tools currently adopted apply to a *pseudo-Riemannian manifold*. This is a constructive suggestion and we will leave it for future work. From your point of view, we understand that it would be more impactful **if it could handle PDEs in mechanics**. Thus, we also examine the **Navier-Stokes equation** for an incompressible and viscous fluid with unit density on a sphere: $∂_t\mathbf u+(\mathbf u·∇)\mathbf u=-∇p+νΔ\mathbf u$. 
It is insightful for studying the ocean currents on Earth. The experiment setting and dataset generation align with the presented paper.

| Model | $L^2$ loss (%) |
| --- | --- |
| Curv-GAT(ours) | **0.128±0.002** |
| GAT | 1.381±0.006 |
| Transolver | 3.314±0.651 |
| GNOT | 0.542±0.164 |

> **Q6.** If a neighborhood has large-scale paths, the Poincaré half-plane model may not accurately reflect the parallel transport characteristics on the original surface. **A:** Perhaps our illustration of the embedding is also not clear enough; you may refer to **Q3 and Q4 in the response to Reviewer y7yT if needed**. We **embed the surface at a node locally** into a constant-curvature surface, with parallel transport applied **edge-wise, not path-wise**. The subtree partition also mitigates this issue, ensuring consistent parallel transport unless the mesh quality is rather poor. Thanks again for your energy!
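The inner-product-preserving transport invoked in **Q4** can be made concrete with a minimal numpy sketch. It uses the standard closed-form parallel transport along a minimizing geodesic on the unit sphere; the function name and setup are our own illustration under that assumption, not the authors' implementation:

```python
import numpy as np

def transport_sphere(p, q, v):
    # Standard closed-form parallel transport of tangent vector v at p
    # along the minimizing geodesic to q on the unit sphere (p != -q).
    return v - np.dot(q, v) / (1.0 + np.dot(p, q)) * (p + q)

p = np.array([1.0, 0.0, 0.0])
q = np.array([0.0, 1.0, 0.0])
u = np.array([0.0, 1.0, 0.0])   # tangent at p, pointing along the geodesic
w = np.array([0.0, 0.0, 1.0])   # tangent at p, orthogonal to the geodesic

Tu = transport_sphere(p, q, u)  # -> (-1, 0, 0): still the geodesic direction
Tw = transport_sphere(p, q, w)  # -> (0, 0, 1): orthogonal direction unchanged

# Transported vectors are tangent at q, and inner products are preserved;
# this is the property that lets covectors and (0,2)-tensors be transported
# by transporting the underlying tangent vectors.
assert abs(np.dot(Tu, q)) < 1e-12 and abs(np.dot(Tw, q)) < 1e-12
assert abs(np.dot(Tu, Tw) - np.dot(u, w)) < 1e-12
assert abs(np.dot(Tu, Tu) - np.dot(u, u)) < 1e-12
```

Since the map preserves $\braket{\cdot,\cdot}$, the transported 1-form $Γ(η)v^♭$ acts on any vector exactly as $\braket{Γ(η)v,\cdot}$, which is what the rebuttal exploits.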
Summary: The authors propose a new neural-network-based PDE solver for PDEs on manifolds. They claim that taking into account the curvature of the manifold plays a significant role in accurately computing the dynamics of the process to be solved. The authors align the tangent spaces on a manifold via parallel transport and use tensors instead of matrix multiplication. This yields a curvature-aware graph attention mechanism, which is better suited for solving PDEs on manifolds. The reasoning is sensible. A good literature survey is presented. Performance evaluation is done on various (toy) manifolds and 3 types of PDEs, including nonlinear ones. The method outperforms all other methods in terms of accuracy (both L2 and H1), sometimes by an order of magnitude. The paper is supported by theoretical justifications and proofs, along with additional experiments in the appendix. It appears a good fit for the conference. Claims And Evidence: The claim is that better solvers on manifolds should take curvature into consideration. Methods And Evaluation Criteria: Evaluation is quite robust, although on toy manifolds, but with increasing complexity, such as Wrinkles. 3 different PDEs are tested; comparison is against 11 competing algorithms, including very recent ones such as Transolver from ICML 2024. The performance of the proposed algorithm consistently outperforms other methods. Theoretical Claims: The theory appears fine (although checked only briefly). Experimental Designs Or Analyses: OK Supplementary Material: OK Relation To Broader Scientific Literature: Fine Essential References Not Discussed: Refs are fine Other Strengths And Weaknesses: Writing and illustrations are good. Other Comments Or Suggestions: -- Questions For Authors: * Time considerations in the computations. * Have you tried it on more complex manifolds? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # Response To Reviewer Ao56 We are sincerely grateful for the time and effort you have dedicated to reviewing our manuscript. Below, we address each of your comments in detail. Should additional revisions be necessary, we are more than willing to make further adjustments. > **Q1**. Time considerations in the computations. **A:** Yes, solving speed is crucial, as it is one of the key advantages of neural solvers over traditional numerical solvers. We find that both the training and inference speeds of our model are acceptable. We have compared the training times of five neural PDE solvers in Figure 14 (line 973), where our model remains faster than the GNN-based solver GINO. In practice, training time is closely tied to the maximum depth of subtrees $d$ and the number of attention heads. Notably, reducing the number of attention heads to one brings the training speed close to that of Transolver (ICML 2024) and GNOT. Moreover, once training is complete, neural PDE solvers are much faster than traditional numerical methods in solving forward problems. When it comes to inference speed, there is no significant difference among neural solvers, even on large meshes with around 2,500 vertices. Specifically, as analyzed (line 365), its inference computational complexity is on par with GAT. > **Q2**. Have you tried it on more complex manifolds? **A:** While our current experiments primarily focus on simple manifolds and more complex ones (*wrinkled surfaces*), our framework is designed to generalize to broader geometric settings. In our experiments, we **primarily aim to verify the effectiveness of curvature-awareness via parallel transport** in GNNs, and the results strongly support our assumptions. Although these manifolds may be considered toy examples, in practice, **many complex surfaces can be decomposed into simpler ones**. These simple manifolds are able to encompass a wide range of common surfaces encountered in game engine design. 
We are looking forward to your reply! --- Rebuttal Comment 1.1: Comment: Thanks for your clarifications. It would have been better to also see a final experiment on more complex data, compared, for example, with standard classical numerical methods. However, overall I believe the paper is of interest to the community and maintain my rank. --- Reply to Comment 1.1.1: Comment: Thanks for your recognition and suggestion. We further conduct an experiment on the heat equation on the canonical **Stanford Bunny (a complex manifold beyond toy examples)**, in which the dataset is obtained by the finite difference method. The proposed model is compared with recent baselines listed in our manuscript. The results are presented in the table below and the figures are updated in our anonymous repository at https://anonymous.4open.science/r/icml2025-5376/bunny/bunny.pdf.

| Model | $L^2$ Loss (%) |
| ------------------ | ----------------- |
| **Curv-GAT(ours)** | **0.0058±0.0008** |
| GAT | 0.0116±0.0022 |
| GNOT | 0.0106±0.0003 |
| Transolver | 0.0102±0.0006 |
Summary: This paper focuses on solving PDEs on 2-dim manifolds. It generalizes message passing algorithms to manifolds by taking Gaussian curvature into consideration. It approximates the complex manifold by constant-curvature surfaces in Eq. 11. Such an approach, using parallel transport on constant-curvature surfaces, is a better approximation than moving vectors in Euclidean space. I really like this approach, since constant curvature is one order higher than Euclidean space. The experiments (Fig. 8) considered both positive and negative curvatures. A manifold with both positive and negative curvature is also studied. Weakness: Such an approach requires constant curvature. However, it cannot be generalized to high-dim manifolds since, as noted in line 139, sectional curvature will differ from Gaussian curvature. In summary, I think considering curvature in message-passing neural networks is an interesting task and this work solves this task in 2-dim space. I would rate it as accept and I will keep studying this work and update my rating during the discussion period. Looking forward to the reply from the authors! Claims And Evidence: - Methods And Evaluation Criteria: - Theoretical Claims: - Experimental Designs Or Analyses: - Supplementary Material: - Relation To Broader Scientific Literature: - Essential References Not Discussed: - Other Strengths And Weaknesses: - Other Comments Or Suggestions: - Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: # Response To Reviewer nE4H We sincerely thank you for your insightful feedback and recognition of our work. We remain fully open to implementing any additional revisions. Below, we address each of your comments in detail. > **Q1**. Such an approach requires constant curvature. **A:** This approach only assumes that the surface at a node has approximately constant curvature. Therefore, **the entire surface need not have constant curvature**, as we solve PDEs on a more complicated manifold, *wrinkle* (in Figure 11, line 915). Due to this locality, we cannot directly parallel transport a vector at a node to nodes too far away, since numerical errors will accumulate. Instead, we turn to a sub-tree partition strategy to balance the trade-off between computational accuracy and reflecting the curvature changes over a wider range. > **Q2**. It cannot be generalized to high-dim manifolds since, as noted in line 139, sectional curvature will differ from Gaussian curvature. **A:** Your observation here is actually a crucial limitation of the proposed framework. This approach cannot be directly extended to manifolds in higher dimensions since, in that case, the curvature is no longer described by a **single sectional curvature** value. **Nevertheless, a slight modification is sufficient to extend it to high-dim cases**. If the manifold is compact, then we can use finitely many charts (each with a coordinate frame) to describe the manifold. Note that sectional curvature is the extension of Gaussian curvature to higher-dimensional manifolds, and it is indeed a map $K:T\mathcal M\times T\mathcal M\to\mathbb R,(X,Y)\mapsto K(X,Y)$. On a 2d-manifold, it gives the Gaussian curvature if the two input tangent vectors are linearly independent. **This observation sheds light on the high-dim cases.** For instance, on a 3d-manifold with a coordinate frame $\{X_1,X_2,X_3\}$, there are 3 sectional curvatures, $K(X_1,X_2),K(X_2,X_3),K(X_3,X_1)$. 
We can treat each of them in the same way as the 2D case and combine them. But how to compute them consistently and efficiently requires further consideration. We leave it for our future work. We will update the manuscript to incorporate the above discussion. Thanks again for your time! --- Rebuttal Comment 1.1: Comment: Dear authors, I agree with you that the entire surface does not have to be of constant curvature. Sorry for the misleading words in my initial comment; I didn't mean that. I just meant locally constant curvature. Your comment on Q2 would be an interesting direction. On d-dim spaces, you will need d(d-1)/2 sectional curvatures, and a closed-form formula for parallel transport can also be challenging. But I think the current manuscript is already good enough for a paper in a top conference, and good luck with your future research! Best wishes nE4H
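The $d(d-1)/2$ count discussed above can be checked with a small sketch. For a model space of constant curvature $c$, the Riemann tensor has the standard closed form $R(X,Y)Z = c(\braket{Y,Z}X - \braket{X,Z}Y)$, so every coordinate-plane sectional curvature evaluates to $c$. This is a hedged illustration of the discussion, not the authors' code:

```python
import numpy as np
from itertools import combinations

def riemann_const(c, X, Y, Z):
    # R(X,Y)Z = c*(<Y,Z>X - <X,Z>Y): curvature tensor of a
    # constant-curvature-c model space (standard textbook formula).
    return c * (np.dot(Y, Z) * X - np.dot(X, Z) * Y)

def sectional(c, X, Y):
    # K(X,Y) = <R(X,Y)Y, X> / (|X|^2 |Y|^2 - <X,Y>^2)
    num = np.dot(riemann_const(c, X, Y, Y), X)
    den = np.dot(X, X) * np.dot(Y, Y) - np.dot(X, Y) ** 2
    return num / den

d, c = 3, 2.0
frame = np.eye(d)                        # orthonormal coordinate frame
pairs = list(combinations(range(d), 2))  # d(d-1)/2 = 3 plane sections
Ks = [sectional(c, frame[i], frame[j]) for i, j in pairs]

assert len(pairs) == d * (d - 1) // 2
assert all(abs(K - c) < 1e-12 for K in Ks)  # every section has curvature c
```

For a general (non-constant-curvature) manifold the $d(d-1)/2$ values would differ per plane, which is exactly why the 2D scalar-curvature shortcut no longer applies.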
Summary: The paper proposes a curvature-aware graph attention architecture and applies it to produce a supervised neural time-stepper for PDEs on surfaces embedded in $\mathbb{R}^3$. This architecture leverages the concept of parallel transport on surfaces, and proposes embedding an edge into a constant-curvature surface to perform this parallel transport. It builds off of existing frameworks for graph transformers. They test their method against many other graph-based network models on the same task, across 4 model geometries and 3 PDEs, and show superior performance. Ablation studies also seem to show that the method improves other architectures as well. ## Update after rebuttal: I appreciated the clarifications from the authors, but I still find the use case for such a method to be relatively niche, and some of the design decisions to be a bit strange. Hence I will keep my score as is (weak reject). If there is sufficient enthusiasm from the other reviewers, I would not stand in the way of ultimate acceptance. Claims And Evidence: The method takes a well-motivated tack overall, but I did have some more detailed technical questions on the approach, which caused me to question the specifics of the design. 1. I found the embedding into constant-curvature surfaces to be rather coarse. Moreover, it was based off of a curvature estimate at a vertex, but was philosophically aimed at embedding the edge. Can you explain your reasoning here? If given an explicit triangulation, why not leverage prior works that come up with an explicit notion of discrete connection, e.g., "Globally Optimal Direction Fields" by Knoppel et al.? 2. The embeddings themselves seemed a bit unclearly defined to me. For example, in the spherical case, you consider the two edge endpoints as vectors in $\mathbb{S}^2$, but it was not clear how you normalize these and with respect to what center. This would seem to be crucial. 
(On a related note: it's surprising that one did not just scale the sphere or pseudosphere to account for fractional curvature, as would naturally arise for $K|_u$). Ultimately, there is no right or wrong on the design choices above, so I should say that on the empirical evidence, I felt that the evaluation and experiments were sufficient for comparison to other graph-based models, and did show significant improved performance. As for the technical arguments of 4.3, I did not read them carefully, but they seemed correct. I would also like to note that these are not novel in any way, and could have merely referenced existing texts on Riemannian geometry. Methods And Evaluation Criteria: See claims and evidence above. Theoretical Claims: See claims and evidence above. Experimental Designs Or Analyses: See claims and evidence above. Supplementary Material: I skimmed the supplementary material, but did not read it in detail. Relation To Broader Scientific Literature: The paper is one of several methods aimed at graph-based methods for learning of PDEs and I did not see any glaring omissions amongst the references. It might be nice to include references to the many recent papers that use a neurally-parameterized space of functions to solve PDEs in an unsupervised fashion, e.g., "Neural Monte Carlo Fluid Simulation" by Jain et al. & "Model reduction for the material point method via an implicit neural representation of the deformation map" by Chen et al. Essential References Not Discussed: See above on relation to broader scientific literature. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: Explanation of rating: I gave the paper a borderline reject proposal, due to my uncertainty on the utility of such an approach (see below), and concerns on the technical motivations for the approach (see questions in "Claims and Evidence"). 
On the positive side, the experiments seem to show significant improvement in the application domain over other methods in its class. Questions For Authors: 1. Why would you use such a network to solve a PDE on a surface? I can understand that a trained network is faster than perhaps a more standard numerical solve (like FEM-based). But one must train the network in the first place, presumably with many computed numerical solves. Moreover, any such method is sure to be less accurate, and the network is tied to the specific geometry of the surface and would require retraining for any modified geometry. 2. How could this method accommodate node features that cannot be interpreted as elements of the tangent space above a node? It's unclear to me how this could be done, and whether this restriction is vital to the application at hand. In other words, it seems like a well-motivated network model may well take features that cannot be interpreted as elements of the tangent space, so this capability would seem to be very desirable. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: # Response To Reviewer y7yT Thank you for your time and constructive feedback. We appreciate your thorough evaluation and valuable suggestions, which have helped improve our work. Below is our point-by-point response. >**Q1.** Why use this GNN for surface PDEs? It is less accurate than FEM. It requires training with precomputed numerical solutions and retraining for any geometry changes due to its surface-specific nature. **A:** Reviewer DBfv raised similar concerns. Due to space limitations, we kindly refer you to **Q2 in our response to Reviewer DBfv**. We will revise the manuscript to incorporate these examples, further highlighting the motivation behind our proposed method. > **Q2.** Can it accommodate node features that cannot be interpreted as elements of the tangent space above a node? **A:** We would like to clarify that our method **can indeed** solve PDEs whose variables are not tangent vectors. In fact, the PDEs in our experiments involve scalar fields—for instance, in the heat equation, the node feature $u(\mathbf{x})$ represents temperature. Since parallel transport acts on vectors rather than scalars, we **bridge this gap by constructing a natural tangent vector field associated with the function and manifold, namely, the discrete gradient field of the temperature (as noted in Remark 2, line 256)**. Furthermore, since the gradient of tensor products can be defined, our approach can theoretically extend to PDEs with tensor variables. We appreciate this insightful question and will update the manuscript to make this clearer. > **Q3.** The embeddings themselves seemed a bit unclearly defined to me. For example, in the spherical case, you consider the two edge endpoints as vectors in $S^2$, but it was not clear how you normalize these and with respect to what center. This would seem to be crucial. 
(On a related note: it's surprising that one did not just scale the sphere or pseudosphere to account for fractional curvature, as would naturally arise for $K|_u$.) **A:** We are sorry for the confusion caused. It would be better to first clarify our embedding method before resolving your concerns in "Claims and evidence I". For a node $u$ on a mesh $\mathcal M$, we first compute its Gaussian curvature $K|_u$ at $u$. If $K|_u>ε>0$, then we embed the tangent space $T_u\mathcal M$ at $u$ into a sphere $S$ with curvature $K|_u$ which is tangent to $\mathcal M$ at $u$. Therefore, the edge connecting $u$ and $v$ is embedded into $S$ such that the Euclidean distance between $u,v$ equals the spherical distance (**an embedding in the isometric sense**). In this way, $\mathcal M$ is locally approximated by $S$ at $u$. Therefore, **we do not need normalizations, and we do scale the spheres based on the curvature estimate**. The same holds for pseudospheres. We will revise the manuscript to further clarify these points in our next version. > **Q4.** It was based off of a curvature estimate at a vertex, but was philosophically aimed at embedding the edge. Can you explain your reasoning here? **A:** Sure. At first sight, it may seem that embedding at a node $u$ cannot reflect the curvature of an edge $(u,v)$. However, in the context of the attention mechanism, what matters is how to distinguish the neighbors in a neighborhood. Each $(u,v)$ has a different length, and thus the vector will rotate differently when $K|_u>0$. **A feature tangent vector becomes different along different edges due to $K|_u$.** In this sense, the network can discern edge differences based on $K|_u$. Moreover, under a mild mesh assumption, the geodesic between $u$ and $v$ can be approximated via $K|_u$. This is because the Jacobi field $J(t)$ has the expansion $|J(t)|^2=t^2-{1\over 3}\braket{R(X,Y)X,Y}t^4+o(t^4)$, where $R$ is the curvature tensor at $u$ and $X,Y\in T_u\mathcal M$. 
Thus, the edge curvature can be reflected by $K|_u$ if $(u,v)$ is short enough. > **Q5.** If given an explicit triangulation, why not leverage prior works that come up with an explicit notion of discrete connection? **A:** We have read *Globally Optimal Direction Fields* as you suggested and find it less suitable for our model. Discrete connections require geodesic estimation (Eq. (2)), but **as there is no free lunch in discretization, discrete geodesics must lose some smooth-case properties**. In our implementation, the geodesic we estimate **agrees with its Euclidean distance**. Besides, **triangulation is not a must in our framework**. So it is just as you say: there is no right or wrong. > **Q6**. It might be nice to include references to the many recent papers that use a neurally-parameterized space of functions to solve PDEs in an unsupervised fashion. **A:** Thanks for pointing out the useful references. We will include them in the *Related Work* section in the next version. Thanks again for your time!
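The isometric edge embedding described in the answer to Q3 (sphere radius fixed by the curvature estimate, spherical distance matching the edge length) can be sketched as follows. This is a hedged numpy illustration with hypothetical names and placement conventions, not the authors' code:

```python
import numpy as np

def embed_edge(K, L, d2):
    # Embed an edge of length L at node u into a sphere of Gaussian
    # curvature K > 0 so that the spherical geodesic distance between
    # the images equals L (isometric in the sense described above).
    r = 1.0 / np.sqrt(K)              # radius fixed by the curvature estimate
    theta = L * np.sqrt(K)            # central angle: arc length r*theta == L
    dx, dy = d2 / np.linalg.norm(d2)  # unit edge direction in the tangent plane
    u_emb = np.array([0.0, 0.0, r])   # u placed at the pole of the sphere
    v_emb = r * np.array([np.sin(theta) * dx, np.sin(theta) * dy, np.cos(theta)])
    return u_emb, v_emb, r

K, L = 4.0, 0.3
u_emb, v_emb, r = embed_edge(K, L, np.array([1.0, 0.0]))

# The spherical distance between the embedded endpoints recovers the
# original edge length, so no extra normalization is needed.
arc = r * np.arccos(np.clip(np.dot(u_emb, v_emb) / r**2, -1.0, 1.0))
assert abs(arc - L) < 1e-9
```

Note how the sphere is scaled per node ($r = 1/\sqrt{K|_u}$), matching the rebuttal's point that no normalization with respect to a global center is involved.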
Towards Learning to Complete Anything in Lidar
Accept (poster)
Summary: The paper proposes a zero-shot learning method CAL (Complete Anything in Lidar) that uses the temporal context from multi-modal sensor sequences to mine object shapes and semantic features, which are then distilled into a Lidar-only instance-level completion and recognition model. The experiments on real-world lidar benchmarks demonstrate that the approach can do zero-shot shape completion with promising results. Claims And Evidence: Yes, the claim “CAL performs zero-shot PSC in lidar” is supported by quantitative experiments on two established datasets with comparisons of zero-shot performance against baselines. Furthermore, the paper also shows qualitative results on the recognition of unlabeled classes. Methods And Evaluation Criteria: Yes, the panoptic scene completion metrics, such as PQ, SQ, RQ, and mIoU, are standard and appropriate for existing benchmarks including SemanticKITTI and KITTI-360. Theoretical Claims: The paper does not contain complex proofs or new theoretical analysis, and is more focused on experimental results. The pipeline looks standard and well-documented. Experimental Designs Or Analyses: The ablation studies are thorough, showing the impact of different design and pseudo-labeling choices. However, the pseudo-labeling approach is complicated, which may introduce errors and make the whole pipeline difficult to train. Hence, it would be good to explain this in more detail to make the paper more solid and convincing. Supplementary Material: Yes, CAL Model Details. Relation To Broader Scientific Literature: The paper is directly related to unsupervised learning and semi-supervised learning. Moreover, the paper is also related to large vision-language models, which are helpful for multi-modal learning. Essential References Not Discussed: None Other Strengths And Weaknesses: Weakness 1. The method heavily relies on the quality of 2D foundation models and multi-frame projection. 
Even though CRF-based refinements can compensate for partial or noisy coverage, the paper acknowledges that coverage remains imperfect. 2. The whole pipeline looks computationally heavy: pseudo-labels are built using multi-frame and multi-sensor pipelines, which might be acceptable for offline training but limits applications in real-time scenarios. Other Comments Or Suggestions: None Questions For Authors: 1. How good is pseudo-label accuracy in practice? It would be good to provide some analysis of where the pseudo-label pipeline fails. 2. How sensitive is the approach to inaccuracies or biases from the 2D foundation models? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. We are happy to hear that the reviewer found that our experiments demonstrate promising results for zero-shot shape completion. We are also glad that the reviewer has found our ablation studies thorough, and our method well-documented. Below, we address the comments and questions posed by the reviewer. **Q1. How good is pseudo-label accuracy in practice and when does it fail?** This is a great question, and we acknowledge its importance. In the main paper, we carefully investigated pseudo-label accuracy through extensive experiments and discussions. As reported in *Section 4.3* and *Appendix C.2*, our experiments indicate strong overall pseudo-labeling accuracy. The accuracy of pseudo-labels primarily depends on recognizing and completing objects by tracking them across video frames, integrating multi-view observations, and leveraging zero-shot recognition via vision-foundation-model features. In the following, we report the main findings from our experiments:

- *Table 2* investigates pseudo-label accuracy when varying the number of tracked frames ($T_{fw}$ and $T_{bw}$), and the tracking stride $w$. Our experiments suggest that a sufficient number of frames is essential for completing objects based on observations across multiple views. Our model is generally robust to view changes, and CRF refinement allows us to reduce the number of tracked frames (*Table 10*). We notice that failure cases primarily stem from the foundation model incorrectly switching tracking IDs (in case of strong view changes or occlusions), or failing to recognize objects, highlighting areas for further improvement.
- *Table 3* evaluates pseudo-label accuracy by measuring coverage completeness against GT annotations. While pseudo-labels may initially provide partial scene completions due to tracking limitations, our CRF refinement significantly enhances label coverage. 
However, challenges persist when objects are completely undetected or are located in poorly visible regions. - *Table 4* evaluates pseudo-label quality on two datasets for both semantic-oracle and zero-shot settings (see last two rows). We notice that our pseudo-labels demonstrate strong performance even under zero-shot conditions, while revealing room for improvement in terms of the quality of vision-foundation model features for zero-shot recognition. **Q2. How sensitive is the approach to inaccuracies or biases from the 2D foundation models?** Our method is generally robust to minor errors from video foundation models like SAM2, thanks to both our pseudo-labeler and our training strategy. The pseudo-labeler enhances robustness by aggregating labels across frames, refining 3D masks per scan, and applying CRF refinement to improve label coverage. On the training side, we employ losses and training task formulations (see *Table 5*) that are specifically designed to help the model learn effectively from potentially-noisy pseudo-labels, ensuring it remains resilient even when pseudo-label coverage is imperfect. However, inaccuracies can still arise, particularly from 2D mask tracking failures, such as ID switches during significant view changes or heavy occlusions. To minimize these issues, our pipeline selectively generates pseudo-labels only for objects with high-confidence completions, filtering out lower-confidence outputs. Additionally, we also fine-tuned video-based tracking parameters (as shown in *Table 2*) to further reduce errors. Importantly, our pseudo-labeling engine is modular, meaning that one can easily integrate improved 2D foundation models as they become available, which may translate to direct enhancements in pseudo-labeling performance. **Q3. 
The proposed pseudo-labeling approach is computationally heavy, which may limit applications to real-time scenarios.** While we acknowledge that our pseudo-labeling engine is computationally intensive, we mitigate this by performing pseudo-labeling offline to generate a training dataset, which is then distilled into a single, efficient model during training. Once this model is trained, inference (i.e., completing a sparse LiDAR scan) only requires a single forward pass through the model, making it more suitable for real-time scenarios. Although real-time performance is not the main focus of this work, future efforts could improve the efficiency of our pseudo-labeler to further enhance scalability. In this direction, we note that the primary computational cost arises from the video foundation model used for object tracking across an RGB sequence. To improve efficiency, one may consider reducing the number of frames used during 2D mask propagation. Our preliminary results reported in *Table 10* suggest that we can (significantly) reduce the tracking horizon and minimize computational costs while achieving comparable performance.
Summary: The paper introduces CAL (Complete Anything in Lidar), a zero-shot panoptic scene completion framework that infers dense 3D object and scene geometry from sparse Lidar scans without relying on predefined class vocabularies. To achieve this, the authors propose a pseudo-labeling engine that mines 3D shape priors from unlabeled Lidar sequences by leveraging vision foundation models for object segmentation and tracking in videos. These mined pseudo-labels, which combine shape completion and semantic features, are then used to train CAL, a sparse generative encoder-decoder network with a transformer-based instance decoder that performs class-agnostic segmentation and completion. Unlike prior methods, CAL enables zero-shot semantic and panoptic scene completion, amodal 3D object detection, and recognition of novel object classes at test time via text-based prompting. The experiments on SemanticKITTI and SSCBench-KITTI360 show that CAL does not match the performance of fully supervised baselines. Claims And Evidence: Yes. Methods And Evaluation Criteria: Checked. See Other Strengths And Weaknesses Theoretical Claims: Checked. Experimental Designs Or Analyses: Checked. See Other Strengths And Weaknesses Supplementary Material: The supplementary materials are mostly comprehensive and helpful for understanding the paper. Relation To Broader Scientific Literature: The idea of distilling vision foundation models (VFMs) into LiDAR-specific models is no longer novel in the broader literature; for example, prior works have successfully applied VFMs to LiDAR panoptic segmentation [1] and semantic segmentation [2] tasks. This paper extends a similar method to a different yet related task—specifically, instance-level completion and recognition using LiDAR data alone. Thus, the key contribution here builds upon existing insights, following established approaches from other LiDAR VFM-based tasks to instance-level scene completion and recognition. 
[1] Osep, A., Meinhardt, T., Ferroni, F., Peri, N., Ramanan, D.,and Leal-Taixe, L. Better call sal: Towards learning to segment anything in lidar. In Eur. Conf. Comput. Vis., 2024. [2] Liu, Youquan, et al. "Segment any point cloud sequences by distilling vision foundation models." Advances in Neural Information Processing Systems 36 (2023): 37193-37229. Essential References Not Discussed: In LiDAR-based segmentation, numerous semi-supervised or weakly supervised methods [1, 2], which do not fully rely on (fully-) manually labeled datasets, have not been discussed. Additionally, related works [3] utilizing vision foundation models have also not been discussed. The authors should explicitly compare their method against these relevant methods. [1] Li Li, Hubert P. H. Shum, Toby P. Breckon; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, pp. 9361-9371 [2] Ozan Unal, Dengxin Dai, Luc Van Gool; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 2697-2707 [3] Liu, Youquan, et al. "Segment any point cloud sequences by distilling vision foundation models." Advances in Neural Information Processing Systems 36 (2023): 37193-37229. Other Strengths And Weaknesses: Strengths: - The paper introduces CAL (Complete Anything in Lidar), a novel zero-shot panoptic scene completion approach. It extends beyond traditional fixed taxonomies by learning object shape priors from unlabeled temporal Lidar sequences. - The paper is well-structured, with clear motivation and methodology. Weakness: - The method is compared to fully supervised baselines (Tab. 1), but an ablation against other zero-shot methods would strengthen the effectiveness. 
Although I acknowledge that the authors claim this is the first method for zero-shot panoptic scene completion in LiDAR, and therefore it might be difficult to find a second existing zero-shot method for direct comparison, perhaps a practical alternative could be modifying current supervised methods (e.g., by adding specific modules or other adjustments) to adapt them for zero-shot evaluation. - It is important to point out that there remains a substantial performance gap (Tab. 1) between existing zero-shot methods and supervised approaches. Consequently, I am not convinced that the zero-shot method proposed in this paper could be effectively applied to the application scenarios mentioned in Fig. 1. - The reliance on pre-trained 2D models (CLIP, SAM) may inherit their biases and limitations. Other Comments Or Suggestions: My concerns mainly focus on the performance (see Other Strengths And Weaknesses) and novelty (see Relation To Broader Scientific Literature) of the proposed method. If the authors can adequately address these concerns, I am willing to increase my rating. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We’re delighted that the reviewer found our paper well-structured with clear motivation and methodology. We appreciate the detailed feedback, and we are excited to address the concerns raised by the reviewer. **Q1. Novelty of distilling vision foundation models (VFMs) to Lidar** We agree with the reviewer that distilling VFMs to Lidar is not entirely novel in the broader literature; for instance, [1,2] explore this for *segmentation*. Our work, however, introduces zero-shot Lidar *panoptic completion*, which extends beyond segmentation. While we leverage prior insights on distilling CLIP features for zero-shot (ZS) prompting—as in [1]—we find this alone insufficient for *completing* 3D shapes from sparse LiDAR scans. To this end, we combine ZS prompting with temporal aggregation of objects, providing essential cues for shape completion. Generating supervision for ZS panoptic completion presents non-trivial challenges, such as associating objects across time, handling partial coverage and occlusions, and training models to reconstruct full shapes from incomplete labels. These unique complexities in *completion* distinguish our method from prior work on ZS Lidar *segmentation* such as [1]. We believe that, as *Reviewer KxNA* also noted, our work tackles a *“novel and underexplored problem”* with *“significant value for the research community”*. **Q2. Comparison to zero-shot (ZS) baselines** We thank the reviewer for the suggestion, and agree that ZS baseline comparisons would strengthen our analysis. As the reviewer noted, our method is the *“first for ZS panoptic scene completion in LiDAR”*—making it *“difficult to find a second existing ZS method for direct comparison”*. To this end, we constructed two baselines adhering to the following criteria for a fair ZS comparison: (1) input is a single Lidar scan, (2) the scene completion model is trained *without semantic labels*, and (3) instance prediction and semantic inference rely on *zero-shot* recognition.
Accordingly, we combined recent Lidar completion methods w/o semantic labels—LODE [5] and LiDiff [6]—with SAL [1], a ZS panoptic segmentation method. As SAL’s codebase is not public, we obtained its ZS predictions on SemanticKITTI directly from its authors. Our baselines are:
- LODE + SAL: LODE [5] performs implicit scene completion from sparse LiDAR, trained with GT completion but no sem. labels. We extract a surface mesh from its output, convert it to an occupancy grid, and propagate SAL’s ZS panoptic labels to voxels.
- LiDiff + SAL: LiDiff [6], a diffusion-based completion method using GT completion data (no sem. labels), densifies LiDAR point clouds. We convert its output to an occupancy grid and similarly propagate SAL’s ZS panoptic labels to occupied voxels.

| | All PQ† | All PQ | All SQ | All RQ | Thing PQ | Thing SQ | Thing RQ | Stuff PQ | Stuff SQ | Stuff RQ | mIoU |
|-|-|-|-|-|-|-|-|-|-|-|-|
| LODE + SAL | 7.74 | 1.96 | 11.12 | 3.54 | 0.00 | 6.36 | 0.00 | 3.39 | 14.59 | 6.11 | 8.12 |
| LiDiff + SAL | 7.35 | 0.36 | 23.95 | 0.65 | 0.22 | **34.81** | 0.40 | 0.46 | 16.06 | 0.83 | 7.38 |
| **Ours** | **13.12** | **5.26** | **27.45** | **8.44** | **2.42** | 22.79 | **3.89** | **7.33** | **30.84** | **11.76** | **13.09** |

Results show our method outperforms the baselines across nearly all metrics. Notably, while the baselines leverage completion models trained on fully completed GT, our approach excels despite using pseudo-labels with only partial coverage. This highlights that ZS panoptic Lidar scene completion is a challenging task, not trivially solved by existing methods. **Q3. Performance gap between fully-supervised and zero-shot methods** Due to space limits, please see our response to *Reviewer KxNA (Q2)*. **Q4. VFM biases and limitations** Due to space limits, please see our responses to *Reviewers KxNA (Q1) and ptaj (Q1-2)*. **Q5. Additional references** Thanks for the valuable references! We'll add them to the Lidar-based *segmentation* section.
[3] and [4] explore weakly supervised segmentation, while [2] uses contrastive pre-training to distill VFMs for segmentation. As the reviewer noted, [2–4] reduce manual labeling efforts. In contrast, our method addresses ZS Lidar panoptic *completion* with distinct challenges beyond the segmentation scope [2–4].

[1] Osep et al., Better Call SAL: Towards Learning to Segment Anything in Lidar, ECCV '24
[2] Liu et al., Segment Any Point Cloud Sequences by Distilling Vision Foundation Models, NeurIPS '23
[3] Li et al., Less is More: Reducing Task and Model Complexity for 3D Point Cloud Semantic Segmentation, CVPR '23
[4] Unal et al., Scribble-Supervised LiDAR Semantic Segmentation, CVPR '22
[5] Li et al., LODE: Locally Conditioned Eikonal Implicit Scene Completion from Sparse LiDAR, ICRA '23
[6] Nunes et al., Scaling Diffusion Models to Real-World 3D LiDAR Scene Completion, CVPR '24

---

Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. It has addressed most of my concerns. I hope the authors can attach some of the experiments and discussions in their revised manuscripts and supplementary material. I've updated my rating to WEAK ACCEPT.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer C64v, We are truly grateful for your support and updated score - we're very glad to hear that we were able to address your concerns! Thank you once again for your thoughtful feedback - we will incorporate these additional experimental findings as well as discussions in the revised version of our paper. Best regards, Authors
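For concreteness, the label-propagation step used in the zero-shot baselines above (assigning SAL's panoptic labels to occupied voxels) could be implemented as a nearest-neighbor assignment. The exact rule is not specified in the reply, so this is an illustrative sketch with names of our own choosing:

```python
import numpy as np
from scipy.spatial import cKDTree

def propagate_labels_to_voxels(labeled_points, point_labels, voxel_centers):
    """Give every occupied voxel the panoptic label of its nearest labeled
    Lidar point (one plausible way to transfer point labels to a grid)."""
    tree = cKDTree(labeled_points)           # index the labeled point cloud
    _, nearest = tree.query(voxel_centers)   # nearest labeled point per voxel
    return point_labels[nearest]
```

More elaborate rules (e.g., majority voting over the k nearest points) would follow the same pattern.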
Summary: This paper introduces a novel zero-shot approach for completing data from a single LiDAR scan, including both object and instance completion. The method is potentially scalable as it leverages a pre-trained foundational video segmentation model, eliminating the need for labeled video data. CLIP features are extracted and fused across multiple views to supervise the completion model. The extensive experimental results, along with the implementations and discussions provided in the supplementary materials, offer valuable insights into the system’s design. While the results do not outperform fully supervised methods, the proposed approach holds significant value for the research community due to its scalability. Claims And Evidence: The paper addresses a novel and underexplored problem: completing parts from sparse lidar inputs. The motivation for the study is well-articulated, and the research problem itself holds significant value. Addressing the completion problem in an open-vocabulary setting is highly valuable, as it presents intriguing possibilities for real-world applications. The authors claim to present the first method for Zero-Shot Panoptic Scene Completion using LiDAR data, which adds a unique contribution to the field. Methods And Evaluation Criteria: Using a video foundation model to extract masks for associating temporal information, and then extracting and aggregating CLIP features in 3D space, is a sound and reasonable approach. However, reliance on the video foundation model, which may not always be perfectly accurate, could introduce errors and potentially limit the overall performance of the system. The title suggests that the method completes objects solely from LiDAR data, while the approach described in the paper still relies on camera images. This discrepancy could be misleading. The figures do not clearly illustrate how the CLIP features are fused together. 
Theoretical Claims: The methods presented follow a typical learning-based formulation, and therefore, extensive theoretical proofs are not required. The foundational theories related to occupancy networks, LiDAR/camera geometry, and the reliance on the video foundation model are sound and reasonable. Experimental Designs Or Analyses: The experiments are quite extensive and provide comparisons with various fully supervised baselines. However, Table 1 shows a noticeable gap in performance compared to these methods. Could you provide insights into the main reasons behind this gap and suggest ways to improve the performance further? Although the authors claim that the main gap arises from rare classes, the ratio of data for these rare classes does not fully explain the significant performance gap. Supplementary Material: The video and the text content in the supplementary materials effectively explain the idea and design of the approach. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Completing data from sparse views is a highly valuable problem, as it provides fundamental information for downstream tasks such as object detection, as demonstrated by the authors in the paper. Other Comments Or Suggestions: N/A Questions For Authors: How does the reliance on the foundational video model potentially affect the final performance, and what steps can be taken to mitigate any errors introduced by it? What are the main reasons behind the performance gap observed in Table 1 when comparing your method to fully supervised baselines? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are thrilled that the reviewer finds our task of zero-shot Lidar-based panoptic scene completion challenging and novel. We are particularly happy that the reviewer recognizes our method's scalability potential (wrt. data) and appreciates the extensive experimental results and sound methodology. Below, we address questions raised by the reviewer. **Q1. The reviewer asks about the effect of the video-foundation model on the final performance and asks how errors it introduces can be mitigated.** Great point; indeed, predictions from the video foundation model may not always be perfectly accurate, as noted by the reviewer. In practice, we observed two types of errors: (i) noisy masks, especially inaccurate around borders, that cause artifacts during image-to-Lidar lifting. To address these, we perform several post-processing steps described in the paper, most importantly, DBSCAN-based segment refinement and CRF-based refinement of aggregated labels. (ii) video-segmentation models may also produce tracking errors, such as ID switches. To address this, we empirically determined the window size and stride in which state-of-the-art models are reliable (which we also ablated in *Table 2* of the paper), ensuring that such errors are infrequent. In practice, we find that as long as we have a sufficiently high signal-to-noise ratio, our model learns to ignore such artifacts. To mitigate these errors in future work, one could additionally utilize 3D/4D geometric cues from the Lidar sequence to improve temporal association. **Q2. Performance gap between fully-supervised and zero-shot methods.** Great question! We would like to elaborate further on the main reasons behind this gap (beyond the limitations due to rare classes). As shown by our findings in *Table 1* (semantic oracle vs. zero-shot results), this gap is largely due to zero-shot semantic recognition— which is a challenging and active research area. 
Several opportunities exist to improve performance, such as enhancing the underlying Vision Language Model (VLM) for zero-shot recognition, or incorporating manually labeled data for supervised fine-tuning. Another potential cause for this gap is that our pseudo-labels have lower coverage (approximately 50% on KITTI-360 and 70% on SemanticKITTI, please refer to *Table 3*) compared to ground truth labels, due to their construction requiring camera-Lidar co-visibility. In contrast, fully supervised methods benefit from training on a complete ground-truth signal with full-grid coverage (see *Figure 4*, 4$^{th}$ column), effectively using more labeled data. While this lower coverage is a limitation when training on fixed-size datasets like SemanticKITTI, as the *Reviewer KxNA* notes, it also underscores our approach’s potential: scaling with more data could help close the gap with fully supervised, closed vocabulary baselines. Our method can already be applied to the application scenarios from *Fig. 1* (as also shown by quantitative ZS-panoptic completion results and qualitative results). However, scaling with more data could enable our method to be more effective and robust for these tasks in the future. On a related note, following *Reviewer C64v*’s suggestion, we added zero-shot panoptic scene completion baselines (please refer to our response to *Reviewer C64v*, *Q2*), which further confirm that zero-shot Lidar panoptic scene completion is a problem with non-trivial challenges. We appreciate *Reviewer KxNA*'s interpretation that our method *“holds significant value for the research community even though it does not necessarily outperform the fully supervised methods”*. We will ensure to include this extended discussion in the paper to further elaborate on the gap between our zero-shot method and fully supervised baselines, as well as potential solutions to address this gap. **Q3. 
The reviewer highlights that Figure 2 does not clearly illustrate how the CLIP features are fused.** Thanks for pointing this out! We compute CLIP features per-instance in every frame and *average* these (normalized) CLIP features over time across the frames in which this instance was observed and tracked. We will improve the clarity of this figure to highlight this temporal aggregation step, and also expand our textual description in L198. **Q4. Reviewer notes that our title may be misleading as the pseudo-labeling engine in our approach still relies on camera images.** Thank you for raising this point! Our trained (distilled) model indeed takes only a single Lidar scan as input at test time, producing a completed scene representation without using any camera data during inference. However, as the reviewer points out, our pseudo-labeling engine—used solely during training—leverages camera images and Lidar sequences to generate training labels. To avoid any confusion, we’re happy to revise the title (for example, a possible alternative could be "Towards Learning to Complete Anything in Lidar Using Vision Foundation Models").
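A minimal sketch of the temporal aggregation described above (normalize each per-frame CLIP feature of a tracked instance, average over the frames in which it was tracked, then renormalize); the function name and shapes are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def aggregate_instance_features(per_frame_feats):
    """Average L2-normalized per-frame CLIP features of one tracked instance.

    per_frame_feats: list of (D,) feature vectors, one per frame in which
    the instance was observed and tracked. Returns a unit-norm (D,) vector.
    """
    feats = np.stack(per_frame_feats)                        # (T, D)
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)  # per-frame normalization
    mean = feats.mean(axis=0)                                # temporal average
    return mean / np.linalg.norm(mean)                       # renormalize for cosine scoring
```

The unit-norm output can then be compared against CLIP text embeddings of class prompts via cosine similarity for zero-shot recognition.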
Scaling Laws for Forgetting during Finetuning with Pretraining Data Injection
Accept (poster)
Summary: This paper presents a study of scaling laws for fine-tuning, in the particular case where replay data (in the form of pretraining data) is available. The paper models the forgetting loss as a function of the replay data, fine-tuning data, and number of parameters. It also extends the scaling law of existing work on modeling the fine-tuning loss, showing that the amount of injected replay data barely impacts the final validation loss on the fine-tuning data. They also show that as low as $p$=1% of the pretraining data is sufficient to not lose performance on pretraining data, and that the scaling coefficient $p$ depends on the nature of the fine-tuning domain. The experiments are comprehensive and validate the hypotheses. ## Update after rebuttal I am satisfied with the authors' response, and increased my score to Accept. As mentioned in the discussions below, I do think it is important to clarify the distinction between "fine-tuning" in the general sense versus training on domain-specific data in their updated version, to highlight the true setting where this study is useful. Claims And Evidence: I think most claims and the evidence presented are pretty solid. Models in the 40M-1B parameter range are studied, which is pretty comprehensive. There are extensive results on each factor and how they affect the scaling. Methods And Evaluation Criteria: - My main complaint about this paper is the notion of the fine-tuning task. The authors consider sub-domain splits of the Pile, which is a dataset that is generally used as a pre-training dataset. In practice, many fine-tuning datasets and instruction tuning datasets, such as Alpaca, contain a range of examples from diverse domains. Theoretical Claims: Given the empirical nature of the work, there are no explicit theorems or proofs. The scaling law for fine-tuning is drawn from prior work. For forgetting equations, the choices seem fair.
Experimental Designs Or Analyses: All experimental design choices are clear, apart from the fine-tuning dataset as discussed above. The search space over the pretraining data, fine-tuning data, and model scale covers a broad range. Supplementary Material: There is no explicit supplementary material apart from the appendix. The appendix contains additional results and plots, and details about fitting the scaling curves. Relation To Broader Scientific Literature: To the best of my knowledge, this is the first paper that studies scaling laws for fine-tuning by also considering replay data (i.e., pre-training data injection) during fine-tuning. [1] studies scaling laws for forgetting during fine-tuning, but does not consider replay data. Similarly, [2] only studies scaling laws on the fine-tuning task, while this paper also attempts to model the loss on the pre-training data after fine-tuning. [1] https://arxiv.org/pdf/2401.05605 [2] https://arxiv.org/abs/2402.17193v1 Essential References Not Discussed: [1] https://arxiv.org/pdf/2406.01375v1 - proposes scaling laws for domain-specific training after pre-training Other Strengths And Weaknesses: Strengths: Overall, a good paper that the community will find useful. Weaknesses: As mentioned, I think the paper can be strengthened further with experiments on actual fine-tuning tasks. As such, this feels more like domain-specific training, whose scaling laws some existing work has studied [1]. I will however note that [1] has only studied domain-specific pretraining, i.e., at the dataset scale of billions of tokens. But the fine-tuning setup here is still a little more artificial than the type of fine-tuning that is conventionally followed (SFT/Instruction Tuning). Given this observation, I lean towards a weak accept for now. [1] https://arxiv.org/pdf/2406.01375v1 Other Comments Or Suggestions: 1. I would recommend adding the hyperparameter values for each configuration and other related details for reproducibility. 2.
I would also recommend adding downstream task results for these models as a sanity check. This would give a better sense of how the models perform in downstream applications before and after fine-tuning (e.g., MMLU, instruction-following tasks, etc.) Questions For Authors: 1. Could the authors elaborate on why they chose the Pile as the fine-tuning task, as opposed to other SFT/Instruction-Tuning/Alignment datasets which are conventionally used? 2. From what I understand, in 4.4, the number of unique tokens available varies but they are upsampled such that $p$=1\%? If yes, it would be helpful to make this clear. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer, We thank you warmly for your detailed and thorough feedback on our work. We are glad to read that "The experiments are comprehensive and validate the hypotheses", that "most claims and the evidence presented are pretty solid", and that "experimental design choices are clear". Thank you as well for the additional reference.

> I would recommend adding the hyperparameter values for each configuration and other related details for reproducibility.

Thanks! We added a summary table in the appendix.

> I would also recommend adding downstream task results for these models as a sanity check

Please find [here](https://anonymous.4open.science/r/icml2025-figures-3ED2/barplot_arceasy.pdf) and [here](https://anonymous.4open.science/r/icml2025-figures-3ED2/barplot_MMLU.pdf) results for the `ARC_easy` and `MMLU` tasks, for the pretrained checkpoint, and for models finetuned on `dm_mathematics`. Performance degrades on the generalist questions of ARC easy, but improves on MMLU, which is more aligned with `dm_mathematics`. Furthermore, on the domain `dm_mathematics` we evaluated the quality of 75 checkpoints (all model sizes above 350M) on the `arc_easy` task in a 0-shot setting. We take the z-score at a 99% significance threshold, and report the best result below (ns = not significant, numbers are accuracy gain over p=0%). **TLDR**: the difference is significant for 15 experiments, ranging between 3.7% and 8.7%, and pretraining data injection always helps. As noticed before, the larger the finetuning dataset, the stronger the forgetting. For these larger datasets (>9M tokens), injecting pretraining data is crucial.
| | 0.1% | 0.5% | 1% | 5% |
|:---------------------|--------:|--------:|--------:|--------:|
| ('medium', 307200) | ns | ns | ns | ns |
| ('medium', 921600) | ns | ns | ns | ns |
| ('medium', 3072000) | 3.872 | 4.461 | 4.082 | 4.082 |
| ('medium', 9216000) | 4.125 | 4.082 | 4.04 | ns |
| ('medium', 30720000) | 4.798 | 6.566 | 4.798 | 5.429 |
| ('large', 307200) | ns | ns | ns | ns |
| ('large', 921600) | ns | ns | ns | ns |
| ('large', 3072000) | ns | 3.872 | 4.588 | 5.093 |
| ('large', 9216000) | 3.788 | 4.966 | 5.345 | 4.588 |
| ('large', 30720000) | 6.944 | 7.744 | 7.239 | 8.375 |
| ('xl', 307200) | ns | ns | ns | ns |
| ('xl', 921600) | ns | ns | ns | ns |
| ('xl', 3072000) | ns | ns | ns | ns |
| ('xl', 9216000) | ns | ns | 3.998 | ns |
| ('xl', 30720000) | 4.756 | 5.008 | 5.471 | 5.303 |

> From what I understand, in 4.4, the number of unique tokens available varies but they are upsampled such that $p$=1%? If yes, it would be helpful to make this clear.

Yes, you’re correct - we updated the section and [the figure](https://anonymous.4open.science/r/icml2025-figures-3ED2/ablation_pretraining_size.pdf) x-axis. Each batch is built following the process described in section 3.3: each sequence is picked from the pretraining split with probability 1%, and from the specific domain with probability 99%. The pretraining set is capped to the desired number of tokens, and then repeated as many times as necessary.

> Could the authors elaborate on why they chose the Pile as the fine-tuning task, as opposed to other SFT/Instruction-Tuning/Alignment datasets which are conventionally used?

We wanted to simulate the setup in which a company fine-tunes a model on raw data from internal documentation. The highly-specialized content of The Pile (which is partitioned semantically) allows us to simulate this scenario, unlike typical IFT datasets, which are generalist. Nonetheless, we agree that the Instruction Finetuning scenario is also of interest; thank you for your suggestion.
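The batch-construction process described above could be sketched as follows (names are illustrative; the capped pretraining pool is cycled to realize the "repeated as many times as necessary" step):

```python
import random
from itertools import cycle

def build_mixture(pretrain_seqs, domain_seqs, n_seqs, p=0.01, seed=0):
    """Draw n_seqs training sequences: each slot comes from the (capped,
    repeated) pretraining pool with probability p, else from the domain."""
    rng = random.Random(seed)
    pretrain_pool = cycle(pretrain_seqs)  # repeat the capped pool as needed
    domain_pool = cycle(domain_seqs)
    return [next(pretrain_pool) if rng.random() < p else next(domain_pool)
            for _ in range(n_seqs)]
```

With p=0.01, roughly 1% of the sequences in the resulting stream come from the pretraining split, matching the injection rate studied in the rebuttal.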
We perform instruction finetuning on the OpenHermes dataset - 3M tokens in the train split (95% of the total). We finetune the model to perform next-token prediction of the output, conditioned on the [INST] prompt input [/INST] prefix. We added [finetuning curves](https://anonymous.4open.science/r/icml2025-figures-3ED2/openhermes_finetuning.pdf) and [forgetting curves](https://anonymous.4open.science/r/icml2025-figures-3ED2/openhermes_forgetting.pdf) in the appendix. We report the fitted scaling-law parameters here:

**Finetuning**:

| Domain | alpha | beta | A | E | HeldOutMRE |
|:-----------|--------:|----------:|--------:|-------:|-------------:|
| OpenHermes | 0.17582 | 0.0286171 | 64.2788 | 0.4585 | 0.59% |

**Forgetting**:

| Domain | alpha | beta | A | B | HeldOutMRE |
|:-----------|---------:|---------:|-----:|-----:|:-------------|
| OpenHermes | 0.793364 | 0.266993 | 5513 | 8584 | 0.29% |

We thank you again for your review, and hope that our answer has alleviated your concerns.

---

Rebuttal Comment 1.1: Comment: Thank you for the clarifications and additional experiments. I have updated my score.

> We wanted to simulate the setup in which a company fine-tunes a model on raw data from internal documentation.

I would recommend the authors clarify the distinction between "fine-tuning" in the general sense versus training on domain-specific data in their updated version, since companies may also fine-tune on data ranging from specific domains to internal generalist instruction-tuning datasets.

---

Reply to Comment 1.1.1: Comment: Thank you for raising your score.

> I would recommend the authors clarify the distinction between "fine-tuning" in the general sense versus training on domain-specific data in their updated version

That's a good point, we will add a paragraph in the introduction to emphasize that we are focusing on this setup.
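As a side note, the z-score significance test used for the ARC-easy comparison earlier in this thread is not fully specified; one standard instantiation is a pooled two-proportion z-test at the 99% level (an assumption on our part, not necessarily the authors' exact procedure):

```python
import math

def two_proportion_z(acc1, n1, acc2, n2):
    """Pooled two-proportion z-statistic for an accuracy difference
    measured on n1 and n2 evaluation examples."""
    p_pool = (acc1 * n1 + acc2 * n2) / (n1 + n2)
    se = math.sqrt(p_pool * (1.0 - p_pool) * (1.0 / n1 + 1.0 / n2))
    return (acc1 - acc2) / se

def significant_at_99(acc1, n1, acc2, n2, z_crit=2.576):
    """Two-sided test at the 99% confidence level (z_crit ~ 2.576)."""
    return abs(two_proportion_z(acc1, n1, acc2, n2)) > z_crit
```

The helper returns whether an observed accuracy gain clears the 99% threshold given the evaluation set sizes.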
Summary: The paper studies the domain adaptation and forgetting effects of language model finetuning by deriving scaling laws that quantify these two phenomena. It shows that one can accurately predict the finetuning performance and the forgetting of the pretraining set of large language models, as a function of the model size, the number of available finetuning tokens, and of the fraction of pretraining data injected into the finetuning data mixture. Claims And Evidence: The claims made in the submission are supported by clear evidence based on my understanding of scaling laws - I would suggest having people familiar with scaling laws double-check the claims, due to my low confidence in this area. Methods And Evaluation Criteria: Using scaling laws to understand the dynamics between the domain adaptation and forgetting effects of language model finetuning makes sense and provides novel insights. Theoretical Claims: Yes. I have carefully read the figures related to the claim that there are scaling laws that quantify the dynamics between the domain adaptation and forgetting effects of language model finetuning for various target domains. Experimental Designs Or Analyses: How the paper makes the scaling law plots is sound. However, I am wondering whether the current experiments are enough to demonstrate the robustness of the proposed scaling law: 1. The scaling law is fit with data points coming from model sizes of 41M, 109M, 334M, 665M, and 1.27B. While the proposed curve fits these points well, the paper needs to show the scaling law can accurately predict the loss for a model that is much larger than 1.27B (due to resource constraints, the authors may consider experimenting on a 7B-parameter model). 2. While the U-curve for the generalization-memorization tradeoff (Figure 2) is intuitive to interpret, I am unsure whether there is a mathematical expression to characterize the U-curve.
This is important for understanding whether the observed tradeoff extrapolates to regions not covered by the current datapoints. Supplementary Material: The paper does not include supplementary material. Relation To Broader Scientific Literature: To the best of my knowledge, this is the first paper that studies scaling laws in continual learning and catastrophic forgetting. Essential References Not Discussed: I do not have a specific reference in mind. Other Strengths And Weaknesses: Strengths: The paper is novel and deepens the understanding of catastrophic forgetting in continual learning. Weaknesses: 1. It is unclear what the use case is for the proposed scaling law. Traditional scaling laws could help us understand how to better distribute the budget (e.g., parameter size, pre-training token size). But for the scaling laws proposed in this work, the scenario is different, as the paper also shows that a small percentage of pre-training data can already significantly mitigate forgetting. - I am willing to adjust my assessment if this point could be addressed. Other Comments Or Suggestions: 1. Figure 7 caption: "(Zhang et al., 2024)" -> "Zhang et al., 2024" Questions For Authors: See my comment above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear reviewer, We thank you warmly for your detailed and thorough feedback on our work. We are glad to read that "The claims made in the submission are supported by clear evidence" and that our method "makes sense and provides novel insights." > While the proposed curve fits these points well, the paper needs to show the scaling law can accurately predict the loss for a model that is much larger than 1.27B This is indeed an important point, and we are happy to report that our scaling laws allow us to *extrapolate*, that is, as you mention, predict large-scale behavior from small scale. We could predict the forgetting for **650M and 1.3B models**, on **9M and 30M unique tokens**, with a bootstrapped MRE of 0.83% - **using only models no bigger than 350M parameters, finetuned with no more than 3M unique tokens**. We added [this table](https://anonymous.4open.science/r/icml2025-figures-3ED2/extrapolate.png) in the paper. > I am unsure whether there is mathematical expression to characterize the U-curve We agree that’s a fruitful research question. However, finding how many steps/repetitions are required to reach the bottom of the U-curve is by no means trivial, since it depends on the domain, the model size, and the number of tokens. Nonetheless, assuming a fixed number of epochs, we benchmarked the scaling laws of “Scaling Data-Constrained Language Models”. We found that they yielded accurate predictions in our setting as well. We will clarify this in the paper. Therefore, it might be possible to predict when the bottom of the U-curve will be reached (find the **argmin** over time) based on the performance improvements after a few epochs. Our current work doesn’t need to characterize the full U-curve: it is sufficient to estimate the minimum value reached at the bottom (find the **min** over time). > It's unsure what's the use case for the proposed scaling law We will clarify two important outcomes of our scaling laws.
A) The proposed scaling law allows us to quantify forgetting as a function of scale, and notably to extrapolate, as you mentioned before. Testing at small scales means you will not be surprised by what happens at larger scales. B) We also believe our work brings some understanding to the phenomenon of forgetting. An interesting consequence of our functional form is that the leading cause behind forgetting might be related to the parameter count $BpN$. This suggests that forgetting is primarily due to limitations in network capacity. This is also confirmed by the fact that smaller models suffer the most: they lose up to 95% (!) of the pretraining progress when forgetting (i.e., the pretraining validation loss reverts to a point reached at 5% of pretraining), while bigger models only lose 20% of the progress. We added [this plot](https://anonymous.4open.science/r/icml2025-figures-3ED2/arxiv_forget_hours.pdf) in the paper, which quantifies forgetting as a fraction of pre-training cost lost. Overall, this balances some findings of “Beyond Chinchilla-Optimal: Accounting for Inference in Language Model Scaling Laws”, which suggested that smaller models should be preferred due to their lower inference cost. We will make these points in the paper. We thank you again for your review, and we hope that we have alleviated your concerns!
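The fitting procedure behind such laws can be sketched with log-linear least squares. This is an illustrative reconstruction, not the paper's fitting code: it assumes the multiplicative form $L(N, D) = E + A/(N^{\alpha} D^{\beta})$ with the irreducible term $E$ held fixed (the rebuttal notes $E$ can be measured from LR re-warming rather than regressed), and uses synthetic, noiseless data with made-up parameter values:

```python
import numpy as np

# Minimal sketch (not the paper's fitting code) of estimating scaling-law
# parameters by log-linear least squares. The form L = E + A / (N^alpha D^beta)
# and all numbers below are illustrative assumptions.
E = 0.45
true = dict(alpha=0.18, beta=0.03, A=64.0)

N, D = np.meshgrid([41e6, 109e6, 334e6, 665e6, 1.27e9],  # model sizes
                   [1e6, 3e6, 9e6])                      # finetuning tokens
N, D = N.ravel(), D.ravel()
L = E + true["A"] / (N ** true["alpha"] * D ** true["beta"])

# log(L - E) = log A - alpha * log N - beta * log D, linear in the unknowns.
X = np.column_stack([np.ones_like(N), -np.log(N), -np.log(D)])
coef, *_ = np.linalg.lstsq(X, np.log(L - E), rcond=None)
logA, alpha, beta = coef
```

Note that the grid varies $N$ and $D$ independently; on a single isocurve such as $D = 100N$ the two log columns become collinear and $\alpha$, $\beta$ are not separately identifiable, which is one reason a second isocurve is informative.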
Summary: This paper studies a setting (examined previously by Liu 2022, Kang et al 2024, Ibrahim et al 2024) where a small amount of pre-training data is injected in fine-tuning to prevent catastrophic forgetting of the pre-training domain and provide regularization in the target domain. In this setting, the paper develops scaling laws for both pre-training and fine-tuning losses during fine-tuning. The paper builds upon prior work (Kalajdzievski et al 2024), who develop scaling laws for forgetting losses during fine-tuning when pre-training data is not injected during fine-tuning. Results are reported using GPT-2 models of different scales on a number of datasets. The paper provides a scaling law for pre-training loss as a function of model size, number of target tokens, proportion of pre-training data added to the data mixture at fine-tuning, and also a law for the fine-tuning loss. Similar to prior work, the paper finds that a multiplicative form works better than an additive form. The paper shows that injecting even 1% of the pretraining data can prevent a degradation of the pretraining loss and can provide some regularization on the target domain. ## update after rebuttal The authors have addressed many of my concerns in their rebuttal especially wrt the following: * Providing an additional isocurve * Reporting the training loss as a function of 'Unique pretrain tokens per unique finetune token' Hence I am increasing the score. Claims And Evidence: The claims are mostly reasonable except for the following: * The paper does not explore the full grid of model size and data set size i.e. (N, D) but rather an isocurve D=100N. While this is a reasonable choice for computational reasons, it would have been useful to study the full grid at least for a single domain to understand how far the isocurve is from the optimal value for this domain.
* Though the paper presents a large number of results for a single model (GPT-2), it is unclear if these laws will carry over to alternative LLMs. Methods And Evaluation Criteria: The methods and evaluation criteria make sense. Theoretical Claims: The paper does not make any theoretical claims. Experimental Designs Or Analyses: Yes, I checked. They seem reasonable. Supplementary Material: Yes - I checked all the parts. Relation To Broader Scientific Literature: * Building upon prior work (Liu 2022, Kang et al. 2024, Ibrahim et al. 2024), this paper investigates a scenario where a small proportion of pre-training data is incorporated during fine-tuning to mitigate catastrophic forgetting and improve target-domain regularization. * Prior work (Kalajdzievski et al 2024) has developed scaling laws for forgetting for fine-tuning when using LoRA. Hence the novel aspect of this work is limited to a) obtaining these laws in a setting where a small proportion of the pre-training data is injected during fine-tuning, b) using full-finetuning rather than LoRA. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: * Studies a problem which is valuable to the community: how to fine-tune while preventing catastrophic forgetting. * Develops scaling laws for both fine-tuning and pretraining loss as a function of model size, finetuning data size and proportion of pretraining data injected during fine-tuning. This is a setting that has not been examined previously in the context of developing scaling laws. * Proposes a practical solution to prevent catastrophic forgetting and obtain some regularization by injecting a small proportion of pre-training data during fine-tuning. * Examines the effect of pre-training data repetitions on forgetting Weaknesses: * Prior work (Kalajdzievski et al 2024) has developed scaling laws for forgetting for fine-tuning when using LoRA.
Hence the novel aspect of this work is limited to a) obtaining these laws in a setting where a small proportion of the pre-training data is injected during fine-tuning, b) using full-finetuning rather than LoRA. * Though the paper presents a large number of results for a single model (GPT-2), it is unclear if these laws will carry over to alternative LLMs. * The paper does not explore the full grid of model size and data set size i.e. (N, D) but rather an isocurve D=100N. While this is a reasonable choice for computational reasons, it would have been useful to study the full grid at least for a single domain to understand how far the isocurve is from the optimal value for this domain. * Similar to prior work, the paper reports MRE for the additive and multiplicative scaling laws. However, it would be advantageous to discuss whether alternative functional forms of the regression model may fit the data better. * Figure 6: It would be useful to report the extent of repetition in the pre-training dataset. Other Comments Or Suggestions: None Questions For Authors: None Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer, We thank you warmly for your detailed and thorough feedback on our work. We are glad to see that you found that "claims are mostly reasonable", that "the methods and evaluation criteria make sense," and that we study "a problem which is valuable to the community". > Prior work (Kalajdzievski et al 2024) has developed scaling laws for forgetting for fine-tuning when using LoRA. Hence the novel aspect of this work is limited to a) obtaining these laws in a setting where a small proportion of the pre-training data is injected during fine-tuning, b) using full-finetuning rather than LoRA. In addition to those differences, as explained at L148, we want to point out that we use many more specialization tasks and model scales, and we measure forgetting through the pretraining loss (which is impossible to do with models like LLAMA 2, for which the training data is unavailable). > Though the paper presents a large number of results for a single model (Gpt-2), it is unclear if these laws will carry over to alternative LLMs. Indeed, we acknowledge that we only consider autoregressive decoder-only transformers. Note that the architecture we use is widely used, up to minor changes (see e.g. sec. 2.2 of [1]). Testing the impact of these minor architectural changes, like the use of RoPE and SwiGLU, on the scaling laws is indeed an interesting future research direction. > The paper does not explore the full grid of model size and data set size i.e. (N, D) but rather an isocurve D=100N. While this is a reasonable choice for computational reasons, it would have been useful to study the full grid at least for a single domain to understand how far the isocurve is from the optimal value for this domain. Thanks for raising this point! We tested our idea with another **isocurve D=10N**. We finetune on free_law, and we measure a bootstrapped MRE of **0.57% for forgetting** and **1.14% for finetuning**.
We added [this curve](https://anonymous.4open.science/r/icml2025-figures-3ED2/forgetting_isocurve10_freewlaw_0.5perc.pdf) and [this curve](https://anonymous.4open.science/r/icml2025-figures-3ED2/finetuning_isocurve10_freelaw_0.5perc.pdf) to the paper. > it would be advantageous to discuss whether alternative functional forms of the regression model may fit the data better We are looking for a forgetting law that (i) yields zero forgetting when $D^{ft}$ is zero, and (ii) does not exhibit too many arbitrary parameters. For example, we found that the $+E$ term is entirely explained by the re-warming of the model (since the finetuning LR was 3x the terminal LR of pretraining). As a matter of fact, measuring it that way instead of regressing its value __decreases__ the MRE from 0.48% to 0.40%. See [this table here](https://anonymous.4open.science/r/icml2025-figures-3ED2/estimate_E.png). We also tested additive laws, as mentioned on p.8. Other laws, like $A\frac{D^{\beta}(1-p)^{\kappa}}{N^{\alpha}}+E$, had a higher error of 0.67% despite having one more parameter $\kappa$. We added these results in the appendix. Furthermore, to strengthen the statistical significance of our results, we also compute and report the bootstrapped MRE (we sample 125 measurements with replacement from the pool, and average the result over 128 independent samples). > It would be useful to report the extent of repetition in the pre-training dataset Good call! Since the bottom of the U-curve is typically reached within a few dozen epochs, and p=1%, the number of repetitions quickly falls to 1. We updated [the figure](https://anonymous.4open.science/r/icml2025-figures-3ED2/ablation_pretraining_size.pdf) with “Unique pretrain tokens per unique finetune token” on the x-axis. We discover that 0.3 unique pretrain tokens per unique finetune token are typically sufficient. We hope that our answers have alleviated your concerns, and we thank you again for your review! [1] Touvron, Hugo, et al.
"Llama: Open and efficient foundation language models." arXiv preprint arXiv:2302.13971 (2023).
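The bootstrapped MRE described in the rebuttal above (125 draws with replacement, averaged over 128 independent resamples) can be sketched as follows; the loss values are synthetic placeholders, and the function and variable names are our own:

```python
import numpy as np

# Minimal sketch of a bootstrapped mean relative error (MRE): resample
# (prediction, observation) pairs with replacement, compute the MRE of each
# resample, and average over independent resamples, matching the procedure
# described in the rebuttal (125 draws, 128 resamples).

def bootstrapped_mre(pred, obs, n_draws=125, n_resamples=128, seed=0):
    pred, obs = np.asarray(pred), np.asarray(obs)
    rng = np.random.default_rng(seed)
    mres = []
    for _ in range(n_resamples):
        idx = rng.integers(0, len(pred), size=n_draws)  # sample with replacement
        mres.append(np.mean(np.abs(pred[idx] - obs[idx]) / obs[idx]))
    return float(np.mean(mres))

rng = np.random.default_rng(1)
obs = rng.uniform(1.5, 3.5, size=200)              # synthetic observed losses
pred = obs * (1 + rng.normal(0, 0.01, size=200))   # predictions within ~1%
mre = bootstrapped_mre(pred, obs)
```

Averaging over resamples gives a point estimate whose spread across resamples can also serve as a rough uncertainty measure for the reported MRE.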
Summary: The paper addresses two key challenges in finetuning large language models: (1) overfitting when target domain data is limited and (2) forgetting of pretraining knowledge as the model drifts from its original parameters. The paper studies pretraining data injection as a solution to these challenges, and quantifies its effects through scaling laws. Claims And Evidence: The paper's claims are supported by the evidence presented. The central claim that 1% pretraining data injection mitigates forgetting is clearly demonstrated through experiments across multiple model sizes and domains. Scaling law predictions match observed values fairly closely. Methods And Evaluation Criteria: The methods are appropriate for the research question. The choice to measure forgetting via pretraining loss is reasonable. The experimental design systematically varies model size, finetuning dataset size, and pretraining injection proportion. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental setup is sound. Supplementary Material: No Relation To Broader Scientific Literature: The paper's contributions build on work in scaling laws and compute-optimal training. It is also closely related to the literature on catastrophic forgetting and data mixtures. It has a central point that is, to my knowledge, new and relevant for continual LLM pretraining. Essential References Not Discussed: N/A Other Strengths And Weaknesses: - The paper addresses a practical problem in language model finetuning with a simple and effective solution (1% pretraining data injection). - The experimental design is systematic and thorough, covering multiple model sizes, domains, and dataset sizes. - The finding that smaller models are more prone to forgetting than larger ones is interesting. - The paper focuses on continual pre-training scenarios rather than what is commonly called "fine-tuning" in modern LLM contexts. 
Other Comments Or Suggestions: - The use of the word "fine-tuning" throughout the paper may be slightly misleading as it's more commonly associated with post-training (SFT, RLHF...). The paper would benefit from clearer positioning relative to instruction tuning. Questions For Authors: - How sensitive is the 1% rule to the nature of the pretraining data? Would you expect higher or lower optimal injection rates if the pretraining data is highly diverse but the target domain is specialized? - How sensitive is the 1% rule to data quality? For example, would it be possible to get away with 0.1% of the pretraining data if it was carefully selected? This question falls outside the scope of this paper, but it could be an interesting direction to take this line of inquiry. - Did you observe any qualitative differences in the types of knowledge that were forgotten when p=0% versus preserved when p=1%? Are certain types of knowledge (e.g., factual, linguistic, reasoning) more prone to forgetting? - Your scaling law includes parameter B, which indicates the relative efficiency of parameters allocated to pretraining versus finetuning. The values vary significantly across domains. What factors do you believe drive these significant differences? - Your results suggest that pretraining data injection is a regularizer that improves generalization. How does it compare to standard regularization methods (e.g., L2 regularization) at preventing forgetting? - Is there a way you can test some of these ideas on standard Llama/Gemma/Qwen base checkpoints? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer, We thank you warmly for your detailed and thorough feedback on our work. We are happy to read that you found that our work is a “simple and effective solution”, that "claims are supported by the evidence presented", and that we have "a central point that is [...] new and relevant for continual LLM pretraining". > The paper focuses on continual pre-training scenarios rather than what is commonly called "fine-tuning" in modern LLM contexts. We will clarify this very important point in the paper. A critical part of our setup is the specialization data scarcity: we only have a small number of tokens to train on, which is not necessarily the case in continual pre-training, while it is a defining aspect of fine-tuning. **Our findings also extend to more standard LLM fine-tuning sets like OpenHermes**; please refer to the last part of the answer to rev. R4eh. > How sensitive is the 1% rule to the nature of the pretraining data? This is an interesting question; testing it would require using a novel pretraining set and pretraining models on it, which is cumbersome. However, we believe that indeed the diversity of the pretraining data is very important to counteract forgetting. > How sensitive is the 1% rule to data quality? Pretraining data injection acts as a regularizer. The value of $p$ allows one to move along a Pareto front - see [the figure](https://anonymous.4open.science/r/icml2025-figures-3ED2/pareto_front.pdf). We expect that data quality does not have much impact on the fraction p necessary to overcome forgetting, since p only drives the "regularization strength". However, we believe that higher data quality in the pretraining set would allow using less of it to fine-tune in a scenario where we repeat the pretraining data; we expect improvements in an experiment like Fig. 6. > Did you observe any qualitative differences in the types of knowledge that were forgotten? Are certain types of knowledge more prone to forgetting?
We did not look into the details of which knowledge is forgotten, but clarifying how models forget is an extremely interesting future research direction. > Your scaling law includes parameter B [...]. The values vary significantly across domains. What factors do you believe drive these significant differences? This is an interesting point. The parameter B indicates how much pretraining data helps mitigate forgetting. A high B value therefore indicates a strong discrepancy between the pretraining and fine-tuning sets, and indeed B is low for datasets such as free_law and Wikipedia, which are close to the pretraining set, while it is high for dm_mathematics and euro_parl, which are far from the pretraining set. We will clarify this in the text. > Your results suggest that pretraining data injection is a regularizer that improves generalization. How does it compare to standard regularization methods (e.g., L2 regularization) at preventing forgetting? We tested an "anchored" baseline with a variant of AdamW, using $\lambda(\theta\_t-\theta\_0)$ as the weight decay term, instead of the conventional $\lambda\theta_t$, where $\theta\_0$ are the parameters of the pretrained checkpoint. Note that this is equivalent to using weight decay on the delta between the fine-tuned and base model. We added [this illustration](https://anonymous.4open.science/r/icml2025-figures-3ED2/demo_anchored_adamw.pdf) to the paper. For $\lambda\in[1e-2, 1e-1, 1]$ the forgetting remains significant (between 15% and 4% more than p=1%, across all model sizes). For $\lambda\geq 1$ the finetuning performance decreases compared to the baseline p=1%. Therefore _more data diversity is superior to regularization in the parameter space_. We added [the plot](https://anonymous.4open.science/r/icml2025-figures-3ED2/github_anchored_adamw.pdf) in the appendix. We also found that standard weight decay had no impact on forgetting.
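The anchored weight-decay variant can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: a plain gradient-style step stands in for the full Adam machinery, and the decoupled decay term is $\lambda(\theta_t - \theta_0)$ as described above:

```python
import numpy as np

# Minimal sketch of "anchored" decoupled weight decay: the decay term pulls
# parameters toward the pretrained checkpoint theta_0 rather than toward zero.
# The optimizer direction is a placeholder for the full Adam update.

def anchored_decay_step(theta, theta_0, update_dir, lr, lam):
    """One decoupled update: optimizer direction plus decay toward theta_0."""
    return theta - lr * update_dir - lr * lam * (theta - theta_0)

theta_0 = np.array([1.0, -2.0, 0.5])       # pretrained checkpoint
theta = theta_0 + 1.0                       # drifted finetuned parameters
for _ in range(200):
    update_dir = np.zeros_like(theta)       # no task gradient: regularization only
    theta = anchored_decay_step(theta, theta_0, update_dir, lr=0.1, lam=1.0)
```

With no task gradient, anchored decay contracts the parameters back to the checkpoint instead of toward zero; the rebuttal nonetheless finds that injecting pretraining data outperforms this parameter-space regularization.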
> Is there a way you can test some of these ideas on standard Llama/Gemma/Qwen base checkpoints Thank you for the suggestion; this is interesting future work. We chose to focus on a family of models we had full control over, where we precisely knew the pretraining pipeline and datasets. This allowed us to observe clear trends as a function of scale. In contrast, the precise training pipelines and training data of those models are not available. However, we tested our ideas with other models from the same family, with the **isocurve D=10N**. We finetune on free_law, and we measure a bootstrapped MRE of **0.57% for forgetting** and **1.14% for finetuning**. We added [this curve](https://anonymous.4open.science/r/icml2025-figures-3ED2/forgetting_isocurve10_freewlaw_0.5perc.pdf) and [this curve](https://anonymous.4open.science/r/icml2025-figures-3ED2/finetuning_isocurve10_freelaw_0.5perc.pdf) to the paper. We hope that our answers have alleviated your concerns, and we thank you again for your review!
On Linear Convergence in Smooth Convex-Concave Bilinearly-Coupled Saddle-Point Optimization: Lower Bounds and Optimal Algorithms
Accept (poster)
Summary: This paper studies first-order methods for solving smooth convex-concave saddle-point problems with bilinear coupling, i.e., $\min_x \max_y f(x) + \langle y, Bx \rangle - g(y)$. It establishes the first lower bounds on the number of gradient evaluations $\nabla f(x), \nabla g(y)$ and matrix-vector multiplications with $B$ and $B^{\top}$ needed for solving these saddle-point problems. Moreover, it develops an algorithm that matches this lower bound. Claims And Evidence: This paper claims to develop an optimal algorithm for solving saddle-point optimization problems with bilinear coupling. However, the paper doesn't provide any experiments to show its performance and compare it to other existing algorithms in practice. Methods And Evaluation Criteria: This paper provides no experiments to evaluate the performance of the proposed algorithm. The analysis is entirely theoretical, focusing on deriving lower complexity bounds and developing an optimal algorithm that matches these bounds. No empirical validation is provided to demonstrate its practical performance on real-world problems or benchmark datasets. Theoretical Claims: The paper claims they develop the first optimal algorithm that matches the lower bound. However, they do not provide explicit pseudocode of the algorithm for solving the saddle-point problems. Moreover, the iteration complexity of this paper is given in terms of the **weighted square distance** $\mathcal{R}^2$, which is completely different from Kovalev et al 2022b. **This is an unfair comparison.** Experimental Designs Or Analyses: This paper provides no experiments to evaluate the performance of the proposed algorithm. Therefore, there are no experimental designs or analyses to assess for soundness or validity. Supplementary Material: I have checked the following parts of the supplementary material: 1. Appendix B: the pseudocode 2. Appendix C: Table of comparisons 3.
Appendix E Relation To Broader Scientific Literature: The saddle-point optimization problem with bilinear coupling is a fundamental problem class that arises in various machine learning applications, including game theory, reinforcement learning, computer vision, robust optimization, and distributed optimization. Developing an optimal algorithm for solving such problems has the potential to significantly benefit the machine learning community. Essential References Not Discussed: I think the references discussed in the paper are sufficient. Other Strengths And Weaknesses: Strengths: 1. Provides lower bounds for saddle-point optimization problems with bilinear coupling. Weaknesses: 1. This paper claims they provide the first algorithm which attains the optimal complexity for the strongly convex-strongly concave setting. To my understanding, Kovalev et al 2022b already attains the optimal complexity with respect to each of the gradient evaluations (they just didn't mention that explicitly). I think the contribution of this paper is incremental. 2. The iteration complexity of this paper is given in terms of the **weighted square distance** $\mathcal{R}^2$, which is completely different from Kovalev et al 2022b. **This is an unfair comparison.** 3. The paper provides no experiments to show the performance of the proposed algorithm in practice. 4. The presentation is poor. 5. The main problem under consideration is problem (1). However, the authors do not provide any pseudocode to solve this problem class. The only pseudocode is for Algorithm 1, which solves problem (14). 6. It is hard to follow how problem (14) generalizes problem (1). The authors should add a detailed discussion on this in the Appendix. Other Comments Or Suggestions: 1. Please write down the pseudocode of the algorithm which solves problem (1). 2. Add a discussion on how problem (14) generalizes problem (1). Questions For Authors: 1.
What is the number of gradient $\nabla f(x), \nabla g(y)$ evaluations required for the algorithm in Kovalev et al 2022b? 2. Assumptions 2.6 and 2.7 are not really assumptions, right? They follow from Assumptions 2.3-2.5. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We thank the reviewer for the time and effort. Unfortunately, the development of the *optimal algorithm* for solving problem (1), which is one of the key contributions of our paper acknowledged by other reviewers, is missing from the "strengths" list. Moreover, the criticism of our paper is based on claims that are either **factually false** or *highly questionable*. We address these below. 1. ```Kovalev et al 2022b already attains...``` Unfortunately, this claim is false and unjustified: - *This claim is false*. The algorithm of Kovalev et al. (2022b) cannot reach the lower bounds because it does not implement *the separation of complexities*. Refer to lines 60-72 in Section 1.2 for an explanation. This also clearly follows from Table 1. - *This claim is unjustified*. You should support this claim with evidence, including an explanation with references to Kovalev et al. (2022b), such as precise theorems, equations, paragraphs, etc. 2. ```This is an unfair comparison.``` Unfortunately, *this statement is false.* The exact opposite is true: it is correct to compare $\mathcal{R}$ with any *norm*, including the one used by Kovalev et al. (2022b, eq. (14)), due to the *norm equivalence theorem*. The cost of the transition between the norms is negligible due to linear convergence: it only results in extra *additive factors* in the complexity. Moreover: - It is standard practice in the field to ignore additive constants in linear complexities. This includes *all* SOTA papers in Table 1, most of which are A*/Q1. - On lines 162 and 403, we explicitly mention that we ignore additive constants. This is also acknowledged by **Reviewer ydoA**. 3. ```No experiments.``` It is a common standard in the field that papers with strong theoretical results are not required to include any experiments, just as strong experimental papers are not required to include any theory. Our paper contains strong theoretical results, which are acknowledged by other reviewers. 
Hence, the absence of experiments is justified. 4. ```Presentation is poor.``` Unfortunately, you have not provided any arguments to support this claim. On the other hand, other reviewers found our paper "well-written" and "clearly written". Regrettably, our only option is to disregard this claim. 5. ```Authors do not provide pseudocode.``` In Section 4.3, we provide a clear and comprehensive explanation of how to apply Algorithm 1 with restarting to solve problem (1), including the definitions of functions $p_i$ and operators $Q_i$ in eqs. (25) and (26). This is more than enough for anyone with at least some expertise in mathematical optimization to use the algorithm. It is also important to highlight that, according to lines 351-367 in Section 4.2, it is necessary to reorder functions $p_i$ and operators $Q_i$, depending on the values of constants $L_i$ and $M_i$ from Theorem 4.7. This would lead to $3!=6$ explicit variants of the algorithm, one for each ordering. Hence, a practical implementation would still use Algorithm 1 in combination with separate first-order oracle implementations for functions $p_i$ and operators $Q_i$, which can be easily done in any modern programming language/framework. 6. ```How problem 14 generalizes problem 1.``` A brief explanation is available on lines 351-255, Section 4.3. Some explanation is also available in (Lan and Ouyang, 2021; Gidel et al., 2018; Nesterov, 2007). We are strongly convinced that a more detailed explanation is unnecessary because the reduction of convex-concave saddle-point optimization problems to monotone variational inequalities is one of the most basic facts in optimization theory. For a detailed explanation, please refer to books like "Finite-Dimensional Variational Inequalities and Complementarity Problems". ### Questions 1. ```What is the number...``` It is given in Table 1 of our paper or Table 1 of Kovalev et al. (2022b). 2. ```Assumptions 2.6...``` This is not true.
Assumption 2.6 requires $\delta_x,\delta_y>0$, which is not implied by Assumptions 2.3-2.5. Moreover, Assumption 2.6 is the "line" that separates the settings of linear convergence and sublinear convergence. This is clearly mentioned numerous times in the paper, for instance, on lines 160-188 (Section 2.2), and lines 228-262 (Section 3.2). Similarly, it is easy to verify that Assumption 2.7 is not implied by Assumptions 2.3-2.5. --- Rebuttal Comment 1.1: Comment: Thank you for your reply. 1. Kovalev et al 2022b haven't mentioned the gradient evaluations separately. But check the terms inside max of equation 25 from Kovalev et al, 2022b. Each of these terms corresponds to $\nabla f(x)$, $\nabla g(y)$ computations and exactly matches what you have. 2. The definition of R^2 contain terms like $\delta_x$ which in turn depend on $\mu_x, L_y, \mu_{xy}$. So showing $\|x_k - x_* \|^2 \leq \epsilon$ will have another $\frac{1}{\delta_x}$ term in the number of gradient evaluations. 3. No, the lack of experiments is not justified. The authors do not provide any pseudocode or discuss how to implement the algorithm in practice, which raises the question of whether the proposed algorithm benefits from the existing ones. 4. I did provide arguments on why I think the presentation is poor. a. There is no pseudocode for the main problem (equation 1), b. There is no discussion on why problem (14) generalizes (1). I will keep my score. --- Reply to Comment 1.1.1: Comment: Thank you for the reply. Unfortunately, all four of your arguments are false: - >Kovalev et al 2022b haven't mentioned the gradient evaluations separately. But check the terms inside max of equation 25 from Kovalev et al, 2022b. Each of these terms corresponds to $\nabla f(x)$, $\nabla g(y)$ computations and exactly matches what you have. Kovalev et al.
(2022b) reported the **iteration complexity** $\tilde{\mathcal{O}}\left(\max\left\{\sqrt{\frac{L_x}{\mu_x}},\sqrt{\frac{L_y}{\mu_y}}, \frac{L_{xy}}{\sqrt{\mu_x\mu_y}}\right\}\right)$. The gradients $\nabla f(x)$ and $\nabla g(y)$ are computed **exactly once at each iteration** of their algorithm. Hence, Kovalev et al. (2022b) require $\tilde{\mathcal{O}}\left(\max\left\{\sqrt{\frac{L_x}{\mu_x}},\sqrt{\frac{L_y}{\mu_y}}, \frac{L_{xy}}{\sqrt{\mu_x\mu_y}}\right\}\right)$ computations of $\nabla f(x)$ and $\nabla g(y)$. This **does not coincide** with $\tilde{\mathcal{O}}\left(\sqrt{\frac{L_x}{\mu_x}}\right)$ and $\tilde{\mathcal{O}}\left(\sqrt{\frac{L_y}{\mu_y}}\right)$ from our paper.

- >The definition of $R^2$ contains terms like $\delta_x$ which in turn depend on $\mu_x, L_y,\mu_{xy}$. So showing $\|x_k - x_*\|^2 \leq \epsilon$ will have another $\frac{1}{\delta_x}$ term in the number of gradient evaluations.

Our algorithm requires $\mathcal{O}(\sqrt{\kappa_x} \log 1/\epsilon)$ computations of the gradient $\nabla f(x)$ to reach the precision $\mathcal{R}^2_{\delta_x\delta_y}(x^k,y^k) \leq \epsilon$. Hence, to reach the precision $\|x^k - x^*\|^2 \leq \epsilon$, our algorithm requires $\mathcal{O}(\sqrt{\kappa_x} \log 1/(\epsilon\delta_x)) = \mathcal{O}(\sqrt{\kappa_x} \log 1/\epsilon + \sqrt{\kappa_x} \log 1/\delta_x) = \mathcal{O}(\sqrt{\kappa_x} \log 1/\epsilon)$ computations of the gradient $\nabla f(x)$. As you can see, these linear convergence rates **coincide**.

- >No, the lack of experiments is not justified. The authors do not provide any pseudocode or discuss how to implement the algorithm in practice, which raises the question of whether the proposed algorithm benefits from the existing ones.
We **do provide** pseudocode for solving problem (1), which is a special instance of problem (14), in Algorithm 1, along with clear and comprehensive instructions on how to implement this algorithm in Section 4.3.

- >I did provide arguments on why I think the presentation is poor. a. There is no pseudocode for the main problem (equation 1), b. There is no discussion on why problem (14) generalizes (1).

Unfortunately, neither argument supports the "poor presentation" claim:

- (a) This is false. Please refer to the information above and our original rebuttal.
- (b) There is no place for such a discussion in a scientific paper aimed at an audience with expertise in optimization, beyond the references that we provide in the paper, which are highlighted in our original rebuttal. In particular, the question of equivalence between problems (1) and (14) is so basic that it is literally asked during the undergraduate optimization course exam at most universities around the world, including ours.
Summary: This paper considers deterministic convex-concave minimax optimization problems. In particular, the main focus is on the case where we can obtain linear convergence, as characterized in Assumption 2.6.
* First, the authors establish fine-grained lower bounds by separately counting oracle calls for the gradients and the coupling matrix multiplications, and the results (1) recover [ZHZ'22] for the SCSC (strongly-convex-strongly-concave) case and (2) also cover some other cases (including strongly-convex-concave or bilinear functions).
* Second, the authors propose Algorithm 1, which is based on the idea of solving a more general finite-sum variational inequality. An application of this to minimax optimization problems yields tight convergence upper bounds for the cases described in the lower bound results.

[ZHZ'22] Zhang, J., Hong, M., and Zhang, S. On lower iteration complexity bounds for the convex concave saddle point problems. Mathematical Programming, 2022.

Claims And Evidence: (The main results are theoretical.) I have not checked the proofs line by line, but the details seem to have no fatal errors.
Methods And Evaluation Criteria: The main results are theoretical.
Theoretical Claims: I have not checked the proofs line by line, but I read the appendix, and the details seem to have no fatal errors. (See **Strengths & Weaknesses** and **Questions** for details.)
Experimental Designs Or Analyses: The main results are theoretical.
Supplementary Material: There are no separate supplementary materials other than the paper.
Relation To Broader Scientific Literature: The paper can contribute to theoretical guarantees and understandings on convex-concave minimax optimization algorithms and variational inequalities.
Essential References Not Discussed: I am unaware of any particular closely related work that has not been cited in the paper.
Other Strengths And Weaknesses: **Strengths**
* The paper is well-written (in my opinion), and I really enjoyed reading it.
* The paper suggests both novel lower bounds and matching upper bounds that, altogether, closes the case. The proposed results make solid contributions, especially those for the new tight rates for all of the non-SCSC cases. * The paper also contains (upper bound) results that consider finite-sum variational inequalities of the form $(14)$. **Weaknesses** * See **Questions** for details. Other Comments Or Suggestions: * The running title says 'Submission and Formatting Instructions for ICML 2025.' (I have 3 such papers in my review batch, and I don't know why!) * I think it would have been better if there were more discussions on the lower bound instance constructions, which I think is one of the most interesting parts of the paper. * (TYPO) Table 1, Footnote (5) symetric → symmetric Questions For Authors: * Can the authors elaborate on any iteration complexities or number of oracle calls induced in the $\arg \min$ part of Algorithm 1 in the innermost loop $k = n+1$? * In the lower bounds, using matrices with $1$'s in the diagonals and $-1$'s in the super-diagonals is quite standard (as in [ZHZ'22]), but I wonder what the high-level motivations of the instances in Lemmas G.2 and G.3 (for the $\sqrt{\kappa_{xy}}$ parts) were. Could the authors give a bit of an illustration on this, or is this just a magical lower bound? * I am also curious if previous results by [ZHZ'22] can recover the same results considering separate counts for the gradient and coupling matrix oracles for the SCSC case. (I am aware that [ZHZ'22] deals *only* with the SCSC case while this paper covers several more cases.) * This might be a bit out of scope because the proposed problem $(1)$ is already an important, classical problem, but one weakness one could point out (for the upper bound) is that everything applies only when we have the bilinear-coupled structure. 
Personally, I also have been curious about whether we can construct an intuitive minimax optimization algorithm for SCSC (or maybe more general cases where we can have linear convergence) *without* the bilinear coupling structure in the objective function. I know that the existence of the coupling matrix is essential for the proposed results, but have the authors ever thought of extending Algorithm 1 to the general convex-concave case by, for instance, replacing the $\boldsymbol{B}$'s with the Hessians $\nabla_{xy} f$, or else? Ethical Review Concerns: (None) Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed comments, valuable feedback, and high evaluation of our work. Below, we provide our detailed response to the review.

### Other Comments Or Suggestions

- Fixed.
- Please refer to the separate paragraph below.
- Fixed.

### Questions For Authors

- All functions $p_i^{n;t_1,\ldots,t_n}(z)$ are just quadratic functions that contain previously computed gradients $\nabla p_i(z)$ and operators $Q_i(z)$. This can be shown by analyzing lines 17 and 22 at recursion levels $k = 1,\ldots,n$. Hence, line 12 is a simple (possibly constrained) quadratic optimization problem, which does not require any oracle calls.
- Please refer to the separate paragraph below.
- Indeed, the result of Zhang et al. (2022b) recovers the $\sqrt{\kappa_{xy}}$ part in the SCSC case. In particular, the proof of Lemma G.3 is inspired by the result of Ibrahim et al. (2020), which is a slightly generalized version of the result by Zhang et al. (2022b). Combining this with the lower bound for smooth and strongly convex optimization by Nesterov (2013), we recover the full result in the SCSC case.
- Some results for general min-max problems beyond SCSC were developed by Kovalev et al. (2022b). However, these results are far from reaching our lower bounds. It is worth highlighting that obtaining accelerated rates is much more difficult for general min-max problems (some results were developed by Alkousa et al. (2020), [2] mentioned by **Reviewer 1jnp**, or in arXiv:2002.02417, arXiv:2205.05653). Overall, this is indeed an interesting question that we are currently starting to examine.

### Lower bounds construction

Here, we attempt to provide an intuition behind the construction of our lower bounds. We start with the basic example of Nesterov (considering an infinite-dimensional space $\ell^2$ for simplicity).
Consider the following linear system: $$\mathbf{A}x = e_1,$$ where $$\mathbf{A} = \begin{bmatrix}1\\\alpha-1&1\\&\alpha-1&1\\&&&\ddots\end{bmatrix},$$ and $\alpha \in (0,1)$. It has a unique solution $x^* = (1,1-\alpha,(1-\alpha)^2,\ldots)$. We can construct a minimization problem with the same solution: $$\min_x \|\mathbf{A}x - e_1\|^2.$$

One can show that after $k$ computations of the gradient, no more than $\mathcal{O}(k)$ coordinates of the current iterate $x^k$ are nonzero (due to the linear span assumption). Hence, the distance to the solution is lower bounded by the remainder of the geometric series $\sum_{i=k+1}^\infty(1-\alpha)^i = \frac{(1-\alpha)^{k+1}}{\alpha}$. Moreover, the condition number $\kappa_x$ is proportional to $1/\alpha^2$, which gives the lower bound $\tilde{\Omega}(\sqrt{\kappa_x})$ of Nesterov.

The minimization problem above has the following min-max reformulation: $$\min_x\max_y 2 \langle y, \mathbf{A}x - e_1\rangle - \|y\|^2.$$ Here, we have $\kappa_{xy}$ proportional to $1/\alpha^2$, and we can apply arguments similar to the ones above to obtain the lower bound $\tilde{\Omega}(\sqrt{\kappa_{xy}})$. We can also add a regularizer of the form $\|x\|^2$, which, subject to some additional details, leads to the results of Zhang et al. (2022b) and Ibrahim et al. (2020). Our Lemma G.3 can be seen as a finite-dimensional variant of the result by Ibrahim et al. (2020).

Now, we discuss the most challenging case of our lower bounds, which is Lemma G.2. The starting point is the lower bounds of Scaman et al. (2018) for smooth and strongly convex decentralized minimization, which allows for the min-max reformulation of the form (1). It is based on splitting the hard function of Nesterov (see above) into two functions and placing them on the first and the last $n/3$ nodes of the path consisting of $n$ nodes (refer to Scaman et al. (2018) for details).
To obtain our lower bounds, we need to make the following substantial changes:

- We add dual regularization of the form $-\mu_y \|y\|^2$. This allows us to obtain the desired result for $\mu_y > 0$, but it makes the problem much more difficult to work with.
- We replace the $n$-node path with an $n/3$-node path and attach two $n/3$-node star-topology networks to its ends, such that Nesterov functions are stored only on the "leaves" of these stars. This step introduces some sort of symmetry, which simplifies finding the solution to the problem.
- We also introduce an extra dual variable, modify matrix $\mathbf{B}$, and add an extra regularizer of the form $-L_y\|y\|^2$ with respect to the new variable to account for nontrivial values of $L_y \gg \mu_y$. Additionally, we analyze the spectral properties of matrix $\mathbf{B}$ in Lemma I.1.

We apply a series of reformulations to the resulting min-max problem and obtain a simple minimization problem similar to the example above but with a different value of $\alpha$. It remains to carefully analyze this value, subject to additional details which we, unfortunately, are unable to discuss here due to the 5000 character limitation.
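As an aside (not part of the rebuttal), the closed-form solution and the geometric tail in Nesterov's example above are easy to verify numerically on a finite truncation of the infinite-dimensional system; the dimension and the value of $\alpha$ below are arbitrary illustrative choices:

```python
import numpy as np

alpha, n, k = 0.3, 30, 5  # conditioning parameter, truncation dimension, gradient budget

# Finite truncation of the bidiagonal operator A:
# ones on the diagonal, (alpha - 1) on the subdiagonal.
A = np.eye(n) + (alpha - 1) * np.eye(n, k=-1)
e1 = np.zeros(n)
e1[0] = 1.0

# Unique solution of A x = e_1.
x_star = np.linalg.solve(A, e1)

# It matches the closed form x*_i = (1 - alpha)^i (0-indexed).
assert np.allclose(x_star, (1 - alpha) ** np.arange(n))

# If an iterate has only its first k coordinates nonzero, its l1-distance
# to x* is at least the geometric tail over the remaining coordinates,
# which for the finite truncation equals ((1-alpha)^k - (1-alpha)^n) / alpha.
tail = np.sum(np.abs(x_star[k:]))
assert np.isclose(tail, ((1 - alpha) ** k - (1 - alpha) ** n) / alpha)
```

In the infinite-dimensional limit ($n \to \infty$) the tail reduces to the geometric-series remainder quoted in the rebuttal.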
Summary: This paper develops tight lower complexity bounds and matching optimal algorithms for smooth saddle-point problems with bilinear coupling. The work unifies existing results in different regimes (strongly-convex-strongly-concave, bilinear saddle-point, strongly convex with affine constraints) as well as gives new results in the convex-concave setting. --- ## Update after rebuttal The authors have addressed my concerns regarding the proof of Theorem 3.3 in their rebuttal. I keep my current evaluation of the paper. Claims And Evidence: The claims are clear and supported by convincing evidence. Methods And Evaluation Criteria: The intuition behind the used methodology is lacking and could be made more clear. The evaluation criteria make sense for the problem at hand. Theoretical Claims: I checked the claims and proofs in all the Appendices except Appendix I. The main issue I have is in the proof of Theorem 3.3 (Appendix G), where the case $\mu_x=\mu_y=0$ is proved by assuming that $\mu_y>0$. The authors should provide more explanation why this is justified. Experimental Designs Or Analyses: There are no experiments in this paper. Supplementary Material: There is no supplementary material. Relation To Broader Scientific Literature: The paper studies saddle-point problems which have applications in various fields such as economics, game theory and statistics. A comparison with existing state-of-the-art linearly-converging algorithms is given in Appendix C. Moreover, optimal algorithms and theoretical lower bounds are given, which builds further upon the work of Nesterov. Essential References Not Discussed: The related publications are adequately discussed throughout the paper. Other Strengths And Weaknesses: Strengths: - The paper is clearly written and provides tight theoretical results in the linearly-converging setting. - The results of the paper are relevant to numerous machine learning applications. 
Weaknesses:
- There is no intuition provided for some constants such as the ones in Assumption 2.7 or the Lyapunov function.
- Similarly, the proofs are not very enlightening, being only a series of algebraic manipulations.

Other Comments Or Suggestions: List of typos and minor errors:
1. The page titles are still “Submission and Formatting Instructions for ICML 2025”
2. Line 140: would it be possible to write $0$ instead of the minimum eigenvalues in the “otherwise” cases, or is this not valid?
3. Line 165: Assumption 2.6 is a bit poorly worded, it might give the impression that Assumptions 2.3-2.5 implies the inequality.
4. Line 200: Equation (9): missing squares for the distances
5. Line 233: fix first set of quotation marks around “hard”
6. Line 245: Theorem 3.2: does not hold for any time $\tau>0$, in Nesterov’s book there is an upper bound for the iteration number which should translate into an upper bound for $\tau$
7. Line 327: Footnote 5: it is better to specify the proposition/page number instead of the whole book
8. Line 345: Theorem 4.5: should also include a reference to Algorithm 1 as well as the problem to be solved
9. Line 402: “numbers” should be “number”
10. Line 448: capitalisation Polyak-Łojasiewicz (also add en-dash between names instead of hyphen)
11. Capitalisation of conference names in references is not consistent (e.g. “International Conference on Machine Learning” vs “international conference on machine learning” or “Advances in Neural Information Processing Systems” vs “Advances in neural information processing systems”)
12. Line 561: “eigenvalues of a matrix” should be “eigenvalues of a symmetric matrix” since otherwise they might not be real
13. Line 566: “argmin” should be “min”
14. Line 699: “symetric” should be “symmetric”
15. Line 718: “this” should be “these”
16. Line 738: Equation (38) does not follow from putting in the values from (37)
17. Line 782: A reference to Assumption 2.6 in addition to Assumption 2.5 should be added. Moreover, the implication stated should be explained more thoroughly
18. Each time a line number from Algorithm 1 is referenced, it comes out as “algorithm 1 of Algorithm 1” instead of “line x of Algorithm 1”
19. Line 1430: “wuch” should be “such”
20. Line 1527: $(n+i)\times(n+i)$ should be in the superscript
21. Line 2290: Missing an identity matrix in the upper bound
22. Line 2430: Lemma K.2: does this hold for $k \neq i$?
23. Line 2460: Equation (174): missing superscript $0$ on $r_i(\hat{z})$; the last $z$ should be $\hat{z}$; $z^0$ should be $z_{\rm in}$
24. Line 2489: “which is implied by which is implied by”
25. Line 2495: “Introduction step” should be “Induction step”
26. Line 2495: also have to assume that it holds for $k=n$ or the induction is invalid
27. Line 2518: $\nabla \hat{p}_i^k$ should be $\nabla \hat{p}_k^k$
28. Line 2596: $\hat{p}_i^k$ should be $\hat{p}_k^k$
29. Line 2835, 2846, 2850: subscript inside the last product should be $j$ instead of $i$ (resp. $j+1$ instead of $i+1$)
30. Line 2880: step (c) should be an inequality and $\|z-z’\|_P^2$ should be $\|z-z’\|_P$
31. Line 2911: $Q(z)$ should be $Q(z^*)$
32. Line 2930: The Lipschitz constants are missing
33. Line 2983: the coefficients 12 could be 6?
34. Line 2985: step (e) equality should be an inequality
35. Line 3001-3009 could be removed, the statement is obvious since big O does not care about additive constants
36. Line 3013: “deifnition” should be “definition”

Questions For Authors: Could you elaborate on the proof of Theorem 3.3 in the cases that $\mu_x=\mu_y=0$ or when one of the two strong convexity constants is $0$?
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed comments, valuable feedback, and high evaluation of our work. Below, we provide our detailed response to the review.

### Question about the proof of Theorem 3.3 in the case $\mu_x = 0$ or $\mu_y = 0$

Thank you for the question! This indeed may need additional explanation. This question is related to cases **(i)**, **(ii.b)**, and **(iii.b)**. For instance, consider case **(i)** (other cases are similar). It is easy to observe that if the function $g(y)$ is $\mu_y$-strongly convex with $\mu_y > 0$, then it is also $0$-strongly convex or simply convex (see Definition 2.1 and Assumption 2.4). Hence, the class of problems (1) satisfying Assumptions 2.3-2.5 with parameters $\pi = (\ldots,\mu_y,\ldots) \in \Pi$ is contained in the class of problems (1) satisfying Assumptions 2.3-2.5 with parameters $\pi = (\ldots,0,\ldots) \in \Pi$. Thus, for case **(i)**, we can choose our hard instance of problem (1) with a $\mu_y$-strongly convex function $g(y)$ with an arbitrary $0 < \mu_y < L_y/4$. In particular, we choose the hard problem instance according to Lemma G.2, which gives the following lower bound: $$\tilde{\Omega}\left(\frac{L_{xy}\sqrt{L_y}}{\mu_{xy}\sqrt{\mu_y + \mu_{yx}^2/L_x}}\right).$$ We can choose $\mu_y = \mu_{yx}^2/L_x$ and obtain $$\tilde{\Omega}\left(\frac{L_{xy}\sqrt{L_xL_y}}{\mu_{xy}^2}\right),$$ which is the desired lower bound for case **(i)**.

### Intuition behind the numerical constants

- There is no particular intuition behind the actual values of the numerical constants in Assumption 2.7 (4 and 18), except that we chose these constants to simplify our calculations in the proof of Theorem 3.3 and make them less ugly.
It is likely that these numerical constants can be reduced, but it would not make much sense because Assumption 2.7 is only used to avoid covering uninteresting corner cases with small, i.e., $\mathcal{O}(1)$, condition numbers, as mentioned in the paper.
- The numerical constants in the Lyapunov function in eq. (29) hold little intuition. The values of these constants are mostly driven by the proof and can likely be improved as well, but it would not make much sense as this would only result in logarithmic or additive improvements in complexity.

### Intuition behind the proofs

For the intuition behind the construction and the proof of lower bounds, please refer to our response to **Reviewer PYLp**, who raised a similar question. Unfortunately, we were unable to provide a more detailed intuition behind the convergence proof beyond what we have in Section 4.2 (lines 303-345) due to the 5000 character limit (we really tried). We will include a more detailed explanation in the revised version of the paper.

### Typos and minor errors

Thank you for the list of typos and minor errors! We fix them all as follows:
- (1) Fixed.
- (2) Yes, indeed. For instance, in the definition of $\mu_{xy}$, if $\lambda_{\min}(\mathbf{B}^\top \mathbf{B}) > 0$, we have $\mathrm{range}\,\mathbf{B}^\top = \mathcal{X}$ and fall into the first option.
- (3) Thanks for the suggestion. We have removed the references to Assumptions 2.3-2.5 and added the words "Parameters $\pi \in \Pi$ satisfy the inequality...".
- (4-5) Fixed.
- (6) Indeed, speaking rigorously, we need to mention the transition from the lower bound on the number of iterations in Nesterov's book to the lower bound on $\tau$, even though it is straightforward. We will add an appropriate comment to the proof in the revised version of the paper.
- (7) Added reference to Lan (2020, proof of Theorem 3.3).
- (8-15) Fixed.
- (16) Indeed, lines 732-740 are worded inaccurately because we cannot choose $L_x=L_y=0$ due to Assumption 2.7. This is also discussed on lines 741-745 ("Note that strictly speaking..."). We will rewrite line 732 and eq. (37) in a more accurate way, i.e., something like "the class of bilinear saddle-point optimization problems falls under Assumptions 2.3-2.5 with parameters..."
- (17) Indeed, the case $\mu_x > 0$ is trivial. The case $\mu_x = 0$ implies $\mu_{xy} > 0$ due to Assumption 2.6, and $\nabla f(x) \in \mathrm{range}\,\mathbf{B}^\top = (\ker \mathbf{B})^\perp$ due to Assumption 2.5 and point #2 from your list.
- (18-21) Fixed.
- (22) Yes, it does. For $i = k$, it holds due to line 22 (first option). For $k > i$, we have $p_{i}^{k,t_1,\ldots,t_k} \equiv \hat{p}_{i}^{k,t_1,\ldots,t_k} \equiv p_{i}^{k-1,t_1,\ldots,t_{k-1}}$ due to line 22 (second option) and line 17 (second option). Hence, we can prove the desired statement by induction.
- (23-32) Fixed.
- (33) Yes, indeed, they could.
- (34) Fixed.
- (35) Yes, indeed.
- (36) Fixed.

--- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their reply. The authors have addressed my concerns regarding the proof of Theorem 3.3. I keep my current evaluation of the paper.
Summary: This work studied the smooth (strongly)-convex-(strongly)-concave bilinearly-coupled saddle-point problem, provided lower complexity bounds in terms of the computation time, and achieved a separation of complexities. The authors further proposed an optimal algorithm which matches the lower bound.

Claims And Evidence: The results are supported by convincing evidence; generally I am satisfied with the results. Here are some questions:
1. Missing literature. Line 83, you mentioned for the strongly-convex-concave case, "To the best of our knowledge, there are no lower complexity bounds that would cover these cases". I think the work [1] should have solved it; check Theorem 2 therein.
2. Another missing literature should be [2], whose upper bound result is different from your Table 1.
3. Echoing [1], they also extended the oracle class to proximal mappings, which is broader than your setting; this may be an extension that you can consider.

[1] Xie, Guangzeng, et al. "Lower complexity bounds for finite-sum convex-concave minimax optimization problems." ICML 2020.
[2] Wang, Yuanhao, and Jian Li. "Improved algorithms for convex-concave minimax optimization." NeurIPS 2020.

Methods And Evaluation Criteria: /
Theoretical Claims: /
Experimental Designs Or Analyses: /
Supplementary Material: /
Relation To Broader Scientific Literature: This work advanced the understanding of optimal complexities for bilinear min-max optimization problems.
Essential References Not Discussed: See above
Other Strengths And Weaknesses: /
Other Comments Or Suggestions: /
Questions For Authors: /
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the high evaluation of our work and the useful references. We provide our answers to the questions below.

1. Thank you for pointing this out. The statement "to the best of our knowledge, there are no lower complexity bounds that would cover these cases" is indeed a bit inaccurate. Our point was that there are no *linear* lower bounds, i.e., lower bounds for the linear convergence setting. On the other hand, [1, Theorem 2] provides a *sublinear* lower bound. We will fix the inaccuracy and cite [1] in the revised version of the paper.
2. As far as we understand, the main limitation of [2] is that it can achieve the linear SOTA rate $\tilde{\mathcal{O}}\left(\sqrt{\frac{L_x}{\mu_x}+\frac{L_y}{\mu_y} + \frac{L_{xy}}{\mu_x\mu_y}}\right)$ in the SCSC regime only for quadratic problems. On the other hand, [2] achieves SOTA rates for general min-max problems without bilinear coupling. We will discuss this in the revised version of the paper.
3. Thank you for the suggestion. This is an interesting question to consider for future work. In particular, we think that it is possible to remove the terms $\sqrt{\kappa_x}$ and/or $\sqrt{\kappa_y}$ from the lower and upper complexity bounds by assuming access to proximal mappings associated with functions $f(x)$ and $g(y)$, respectively.

--- Rebuttal Comment 1.1: Comment: Thank you for the clarification. A further question: can you clarify whether the dist() in your Equation 9 comes with a square? I did not find the detailed formal definition. BTW, Equation 32 should be wrong, I think; it should be min rather than argmin (it is also questionable whether to apply the square). From the proof in Appendix I, the dist() should be defined in terms of the squared norm, I think.

--- Reply to Comment 1.1.1: Comment: Thank you for your reply.
There is a typo in equation (9); the squared distance $\mathcal{R}_{\delta_x\delta_y}^2$ should be defined using squared distances $\mathrm{dist}^2$ as follows: $$\mathcal{R}_{\delta_x\delta_y}^2(x,y) = \delta_x \mathrm{dist}^2 (x;\mathcal{S}_x) + \delta_y \mathrm{dist}^2 (y;\mathcal{S}_y).$$ There is indeed also a typo in equation (32); $\arg\min$ should be replaced with $\min$, but no square this time: $$\mathrm{dist}(x;\mathcal{A}) = \min_{x' \in \mathcal{A}} \|x-x'\|.$$ The proof in Appendix I indeed uses the squared distances $\mathcal{R}_{\delta_x\delta_y}^2$, according to the corrected definitions above. Thank you for pointing out the typos; we have fixed them.
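For readers who prefer code to notation, the corrected definitions can be mirrored in a short sketch (illustrative only; finite candidate sets stand in for the solution sets $\mathcal{S}_x$, $\mathcal{S}_y$, and the function names are mine, not the paper's):

```python
import numpy as np

def dist(p, S):
    """Distance from a point p to a finite set S: min over s in S of ||p - s||."""
    return min(np.linalg.norm(p - s) for s in S)

def r_squared(x, y, S_x, S_y, delta_x, delta_y):
    """Corrected residual: delta_x * dist^2(x; S_x) + delta_y * dist^2(y; S_y)."""
    return delta_x * dist(x, S_x) ** 2 + delta_y * dist(y, S_y) ** 2

# Toy solution sets standing in for S_x and S_y.
S_x = [np.zeros(2)]
S_y = [np.array([1.0, 0.0])]
val = r_squared(np.array([3.0, 4.0]), np.array([1.0, 2.0]), S_x, S_y, 0.5, 2.0)
# dist(x; S_x) = 5 and dist(y; S_y) = 2, so val = 0.5 * 25 + 2.0 * 4 = 20.5
```

Note that `dist` itself is not squared (matching the corrected equation (32)); the square is applied only inside the residual.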
LOGO --- Long cOntext aliGnment via efficient preference Optimization
Accept (poster)
Summary: The paper introduces LOGO, a novel and efficient preference optimization strategy designed for long-context alignment in large language models (LLMs). LOGO addresses issues of misaligned responses in long-context models (LCMs) by introducing:
- A Reference-Free Preference Optimization Strategy.
- Efficient Data Synthesis for Long-Context Preference Optimization.
- Positional Indices Synthesis.

Claims And Evidence: NA
Methods And Evaluation Criteria: Yes, the methods and evaluation criteria proposed in this paper appear well-justified and appropriate for long-context alignment.
Theoretical Claims: This paper does not contain significant theoretical proofs to verify.
Experimental Designs Or Analyses: The authors have conducted extensive experiments to validate the effectiveness of LOGO. The results demonstrate that LOGO significantly improves long-context alignment while maintaining efficiency.
Supplementary Material: Yes, I have reviewed the complete supplementary material.
Relation To Broader Scientific Literature: This paper makes significant contributions to the broader scientific literature on long-context alignment in LLMs. Prior works have primarily focused on scaling context window sizes (e.g., through post-training on long-instruction data, novel architectures, or positional encoding modifications). However, research has shown that large context windows alone do not guarantee alignment, as models still exhibit hallucinations and fail to follow instructions effectively.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strengths:
- The writing is well-structured, with clear logic and readability, making the paper easy to follow.
- The proposed LOGO method is a novel and efficient approach to long-context alignment, demonstrating both effectiveness and scalability.

Other Comments Or Suggestions: NA
Questions For Authors:
- Applicability to Reasoning Tasks: LOGO focuses on long-context alignment and preference optimization.
However, recent advances, such as OpenAI's O1 and DeepSeek's R1, have shown that test-time scaling can significantly enhance reasoning ability. Have you evaluated whether LOGO can also improve the model’s reasoning capabilities, particularly in long-context reasoning tasks?
- Efficiency Gains vs. Traditional RLHF Approaches: How does LOGO compare against standard RLHF approaches like PPO in terms of GPU usage and training time? Would LOGO still be beneficial in a setting with ample computational resources?

Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer 6VnN, thanks for your insightful comments and suggestions. Below is our detailed response.

--- **[Question 1]** Reasoning Capability Evaluation

**[Re]** Thanks for raising this important point. However, our primary objective is **not to enhance reasoning per se**, but rather to **mitigate misalignment issues in long-context responses**. Nevertheless, we have extended our experiments with LongBench V2 in this rebuttal, which is specifically designed for real-world long-context reasoning (`Reviewer icM3’s Concern 2`). Our results indicate **significant performance improvements even in reasoning tasks**, particularly in settings "without chain-of-thought (CoT) prompting". Notably, Llama3-8B-80K achieves an 8-point average improvement, and Qwen2.5-7B-Instruct yields a 2.6-point average improvement, suggesting that LOGO training may **implicitly enhance reasoning capabilities**, potentially due to the presence of multi-hop data in our synthesized preference data. To achieve reasoning performance on par with OpenAI’s O1 or DeepSeek’s R1, stronger RL supervision signals, such as *Long-CoT*, would likely be required. Interestingly, we have recently been investigating this direction: by enabling the LCM to actively search for and integrate key information from long-context inputs before generating responses, we observe substantial performance gains. However, this exploration falls beyond the scope of LOGO itself and is an avenue for future work.

--- **[Question 2]** Efficiency Gains vs. Traditional RLHF Approaches

**[Re]** A key challenge in applying traditional RLHF methods (e.g., PPO, DPO) to long-context models (LCMs) is their heavy reliance on reward models, critic models, etc. For instance, PPO requires a reward model, a reference model, and a value model, while DPO depends on a reference model. However, in the long-context domain, there are no publicly available reward models or value models to facilitate such training.
To address this, LOGO introduces an alternative: importance scoring and a novel training objective that replace the need for an offline reward model. This significantly improves training efficiency and eliminates the dependency on additional models. Besides, among open-source works, we have found only one study similar to ours: LongPO, which follows a traditional DPO-based approach and requires an additional reference model for training. To provide a clearer comparison, we present the following table highlighting the key differences between SFT, LOGO, and LongPO:

| Training Strategy | Memory/GPU | Bsz/GPU | Total Throughput (8 GPUs) | Actual Training Length | Training Time (2,000 steps) | Real-world Task | Language Modeling |
|---|---|---|---|---|---|---|---|
| SFT | 80GB | 1 | 8 samples | 64K | 14h | 43.2 | 6.6 |
| SFT + Ring Attention | 45GB | 1 | 4 samples* | 128K | 24h | 44.3 | 6.6 |
| LOGO (w/o ref model) + Ring Attention | 69GB | 3 | 12 samples* | 64K | 30h | 47.7 | 9.8 |
| LOGO (w/o ref model) | 64GB | 3 | 24 samples | 12K | 16h | 47.0 | 10.4 |
| LongPO (**w/ ref model**) | OOM | 2 | 16 samples | 64K | - | - | - |
| LongPO (**w/ ref model**) + Ring Attention (ring_size=2) | 62GB | 2 | 8 samples | 64K | >24h | 44.7 | 17.6 |

We observed that without employing the **Ring Attention** strategy or a CP-parallel approach, LongPO cannot be deployed on one 80GB GPU due to the combination of long input sequences and the requirement for an additional reference model. Even with the Ring Attention strategy (where every 2 GPUs process a segment of the sequence in parallel), the training time for 2,000 steps exceeds 24 hours. Additionally, LongPO's performance is inferior to LOGO's, as it only applies a simplistic preference data processing method, where shorter responses are treated as preferred replies.
In short, even in settings with **ample computational resources**, LOGO may still remain beneficial due to its **scalability and reduced model dependency**, making it a more practical and efficient alternative to traditional RLHF methods for long-context alignment. --- We hope our responses have adequately addressed your concerns. If you have any further questions, please don’t hesitate to ask.
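For context on the reference-free objective family discussed above: a representative published form is the SimPO loss (Meng et al., 2024), which several reviewers note LOGO resembles; it drops the reference model by length-normalizing the policy log-probabilities. LOGO's actual objective is defined in the paper and may differ — this is shown only as a point of comparison:

$$
\mathcal{L}(\theta) = -\,\mathbb{E}_{(x,\, y_w,\, y_l)}\left[\log \sigma\!\left(\frac{\beta}{|y_w|}\log \pi_\theta(y_w \mid x) \;-\; \frac{\beta}{|y_l|}\log \pi_\theta(y_l \mid x) \;-\; \gamma\right)\right]
$$

where $y_w$ and $y_l$ are the preferred and dispreferred responses, $\beta$ is a scaling factor, and $\gamma$ is a target reward margin. No reference policy $\pi_{\mathrm{ref}}$ appears, which is precisely what removes the extra model (and its memory footprint) from training.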
Summary: The paper addresses the challenge that open-source long-context models (LCMs) struggle with generation quality in long-context tasks, despite having strong information retrieval capabilities. These models often produce misaligned results, such as hallucinations and instruction-following errors, leading to low recall scores. Key questions addressed: Existing approaches primarily focus on scaling context length by adding more training data at the supervised fine-tuning (SFT) stage, which mainly improves retrieval capabilities. There is a significant gap between retrieval and generation capabilities in LCMs—while they can locate important information, they struggle to use it effectively. Constructing long-context preference pairs is difficult and underexplored in the literature. Proposed approach: The authors propose LOGO, which consists of three key components: A long corpus is broken into chunks, and each chunk is evaluated based on the number of overlapping entities with the given question. Preferred and dispreferred samples are generated by combining different chunk compositions and prompting the model to produce responses. Position indices synthesis is used during training to ensure efficiency and fit within hardware constraints. Main results: LOGO uses only 0.3B tokens. Achieves comparable performance to GPT-4 and LLaMA on LongBench. Maintains strong performance on standard benchmarks like MMLU. Includes ablation studies on hyperparameters such as the number of negative samples and the impact of SFT regularization on final results. Claims And Evidence: Significant improvement on LongBench: This claim is mostly valid. However, some comparisons could be more rigorous. For example, in Table 1, YaRN is compared with LOGO, but the version of YaRN used is training-free. A fairer comparison would include a trained version of YaRN, as prior work has shown that training with context extrapolation methods (like YaRN) improves performance. 
Performance improvement on synthetic tasks (e.g., "needles" evaluations): Well-supported by experiments in Figure 3 and Section 4.2. No degradation on short-context evaluations and reduced hallucinations: Clearly supported by results in Section 4.3. Methods And Evaluation Criteria: The authors use LongBench to evaluate the model. While LongBench is a reasonable choice, other benchmarks like Ruler would also be valuable. LongBench and LongBench-v2 truncate input sequences from the middle, which may lead to different behaviors compared to evaluations with full-context retention. For example, GPT-4o excels on LongBench-v2 but lacks retrieval strength on Ruler. Including evaluations that retain full-context length would make the claims more robust. Theoretical Claims: Briefly reviewed the bound analysis in Appendix C—it appears valid, but I did not examine it in depth. Experimental Designs Or Analyses: More details on the training process of competitor models would help clarify the fairness of comparisons. For instance: LLaMA-2-7B-Chat-4K is compared against LOGO trained with Data-Engineering-80K, but the datasets have different token counts (5B tokens for Data-Engineering-80K vs. LOGO). Key hyperparameters like learning rate, batch size, and their impact on LOGO's number of negative samples are not discussed. SFT Regularization Robustness Claim (Section 5.1): The paper states that LOGO is robust to the SFT regularization term, as perplexity drops while task performance remains stable. However, perplexity may not be a reliable indicator of long-context performance, as noted in previous research. Additionally, it is unclear where the reported perplexity values come from (e.g., from the validation dataset during training or an evaluation set).
The inverse trend between LongBench scores and perplexity suggests that more evaluations are needed to fully validate the model's improvement. Supplementary Material: N/A Relation To Broader Scientific Literature: The paper has adequate coverage of prior methods for long-context training at the SFT stage. However, literature on preference training for LCMs is scarce, making direct comparisons with LOGO difficult. The authors could reference additional related work on topics like memory compression and attention sparsity. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The strengths: Novel idea on constructing long-context preference data synthetically. Prior literature on this is relatively scarce. Conducted adequate experiments with various prior methods including Data Engineering and extrapolation methods. Results on long- and short-context tasks and analysis of the effectiveness of the proposed method. Details on implementation at the modeling and framework level make reproduction easier for the audience. The weaknesses: Some comparisons may not be fair enough, e.g., comparing LOGO with training-free YaRN. Evaluations on more benchmarks like Ruler would be helpful in providing a full picture of the method. Other Comments Or Suggestions: N/A Questions For Authors: Can we see a breakdown of performance across different context lengths in LongBench? It would be insightful to observe how performance varies with context length. This would help clarify whether gains come from data quality rather than the proposed method itself. A visualization of scores over context length could provide better insights into its impact. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer 2obo, we sincerely appreciate your thorough review of our work and the detailed feedback provided! --- **[Concern 1]** A fairer comparison of YaRN **[Re]** We have conducted the experiment and found that incorporating YaRN indeed leads to further performance improvements:

| Model | S-Doc QA | M-Doc QA | Summ | Few-shot | Synthetic | Avg. |
|------------------------|---------|---------|------|----------|-----------|------|
| Llama-3-8B-Ins-8K | 38.0 | 36.6 | 27.4 | 61.7 | 40.9 | 40.9 |
| + YaRN-64K | 39.8 | 36.7 | 28.8 | 65.4 | 49.0 | 43.9 |
| + LOGO-64K | 39.8 | 36.7 | 28.8 | 65.4 | 49.0 | 43.9 |
| + LOGO + YaRN-64K | **40.7** | **37.4** | **28.9** | **67.3** | **50.4** | **44.9** |

--- **[Concern 2]** Adding evaluations with full-context retention **[Re]** We have conducted the experiments and provided the evaluation results on Ruler below.

| Model | Length | niah_1-3 | niah_multikey_1-3 | niah_multivalue | niah_multiquery | vt | cwe | fwe | qa_1-2 | Total |
|--------------------------------------|--------|----------|-------------------|----------------|----------------|-------|-------|-------|--------|-------|
| Llama-3-8B-Instruct-80K-QLoRA-Merged | 32K | 100.0 | 99.7 | 87.2 | 87.2 | 94.6 | 27.7 | 91.9 | 62.3 | 81.3 |
| + LOGO | 32K | 100.0 | 100.0 | 93.2 | 92.5 | 95.2 | 28.1 | 93.2 | 66.2 | 83.5 **(+ 2.2)** |
| Llama-3-8B-Instruct-80K-QLoRA-Merged | 64K | 100.0 | 99.0 | 84.5 | 84.5 | 92.6 | 0.2 | 78.7 | 59.5 | 74.9 |
| + LOGO | 64K | 100.0 | 100.0 | 90.6 | 88.2 | 93.5 | 1.5 | 86.2 | 62.8 | 77.8 **(+ 2.9)** |
| Llama-3-8B-Instruct-80K-QLoRA-Merged | 128K | 99.9 | 77.9 | 76.0 | 76.0 | 88.6 | 0.3 | 81.9 | 52.0 | 69.1 |
| + LOGO | 128K | 100.0 | 93.8 | 81.2 | 79.2 | 89.6 | 1.3 | 82.2 | 58.4 | 73.2 **(+ 4.1)** |

--- **[Concern 3]** Lack of training process and evaluation details **[Re]** Below, we provide more details: - **Fairness of Comparisons in Training Competitor Models** We acknowledge the token count difference between
Data-Engineering-80K (5B tokens) and LOGO. For evaluation, we used the open-source pretrained checkpoint of the Data-Engineering-80K model, but its token format differs from LOGO's, complicating direct comparison. Nonetheless, LOGO outperforms Data-Engineering-80K despite using a smaller dataset (0.3B tokens), showing its efficiency. - **Hyperparameters and Negative Samples** We used a peak learning rate of 5e-5 with a cosine decay scheduler (100-step warmup from 1e-8). The training setup included 8× A800 GPUs, a global batch size of 64, a microbatch size of 4 per GPU, and gradient accumulation of 2. The impact of the number of negative samples is discussed in Section 5.1, with additional details in Appendix F. - **Reliability of Perplexity** Perplexity may not fully reflect long-context performance. We placed the perplexity results in Appendix G, using the PG-19 dataset as a test set, not for validation during training. - **Inverse Trend of Perplexity and Performance** We respectfully disagree with the concern on perplexity and performance trends. Our analysis shows that minor perplexity differences have little impact on model results. The primary goal was to show that LOGO maintains its language modeling ability, and performance should be assessed using benchmarks like LongBench, not just perplexity. --- **[Question]** Lack of breakdown of evaluation details **[Re]** To provide clarity, we first present the partial context length distribution of LongBench. We find that LongBench has a tightly concentrated length distribution, making it difficult to observe clear performance improvements across different length ranges.
| Length Distribution | 0-8K (%) | 8K-16K (%) | 16K+ (%) |
|--------------------------------------|----------|------------|----------|
| S-Doc QA | 66.0% | 13.1% | 20.9% |
| M-Doc QA | 39.3% | 60.3% | 0.3% |
| Summ | 62.5% | 32.2% | 5.3% |

We encourage the reviewer to refer to our additional experiments on LongBench V2 `(Reviewer icM3 Concern 2)` and Ruler. These benchmarks feature a broader and more diverse context length distribution. The results demonstrate that LOGO consistently improves the backbone model’s performance across all context lengths, with particularly significant gains at longer context lengths (128K), further validating LOGO's effectiveness for long context. --- We hope our responses have adequately addressed your concerns. If you have any further questions, please don’t hesitate to ask. --- Rebuttal Comment 1.1: Comment: The authors addressed my concerns and queries. I choose to retain my score. --- Reply to Comment 1.1.1: Comment: Thanks for your review again. We will surely add more experimental details and the breakdown of the evaluation results in the final revision.
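The learning-rate schedule described in the hyperparameter details above (100-step warmup from 1e-8 to a 5e-5 peak, then cosine decay) can be sketched as follows. This is a minimal illustration, not the authors' trainer code; the 2,000-step decay horizon and decay-to-zero endpoint are assumptions for the sketch:

```python
import math

def lr_at_step(step, base_lr=5e-5, warmup_start=1e-8,
               warmup_steps=100, total_steps=2000):
    """Learning rate under linear warmup followed by cosine decay."""
    if step < warmup_steps:
        # linear ramp from warmup_start up to base_lr
        frac = step / warmup_steps
        return warmup_start + frac * (base_lr - warmup_start)
    # cosine decay from base_lr down toward 0 over the remaining steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))
```

For example, `lr_at_step(0)` returns the warmup floor of 1e-8, `lr_at_step(100)` returns the 5e-5 peak, and the rate then decays monotonically toward zero at `total_steps`.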
Summary: The paper addresses the issue of long-context models struggling with generating coherent and accurate responses in real-world tasks. It proposes LOGO, a preference optimization-based training strategy for long-context alignment, which includes efficient preference data synthesis and a reference-free training objective. Experiments demonstrate improvements in multiple long-context tasks, with LOGO outperforming existing methods and maintaining or improving performance on short-context tasks. Claims And Evidence: The main claim of this paper—that LCMs can achieve significant improvements in real-world tasks by training with LOGO—is not entirely rigorous, as an obvious confound is that increasing the relevant training data will generally improve the model's effectiveness on its own. Methods And Evaluation Criteria: They are sensible. Theoretical Claims: I have checked the proofs in this paper, which are reasonable—for example, the proof in Appendix E that theoretically guarantees the synthetic positions can cover all possible scenarios. Experimental Designs Or Analyses: I have checked the soundness of the experiments in this paper, for example the evaluation on LongBench introduced in Sec. 4.2. I think it would be valuable to evaluate the proposed model on the newly released version, LongBench-v2. Supplementary Material: A. Details of Preliminary Experiments, and E. Positional Indices Synthesis. Relation To Broader Scientific Literature: The proposed training objective can contribute to the community's work on improving models with preference optimization. Essential References Not Discussed: The references are comprehensive. Other Strengths And Weaknesses: Strengths - I think the proposed training objective, which is similar to SimPO, is useful for training LLMs with preference optimization. - The experiments are extensive, investigating the effectiveness of the model on various kinds of tasks.
Weakness - As I mentioned above, the innovative contributions of this paper are limited. The proposed training objective for LOGO seems like a variant of SimPO. Further, the data construction pipeline is a widely used workflow. Other Comments Or Suggestions: The captions in Figure 1 should incorporate the benchmark (i.e., MMLU) used in the evaluation. Questions For Authors: - In the section on Importance Scoring with Automatic Evaluator, how do you extract entities in both the question and the context? Why do these entities matter for measuring the importance of chunks? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer icM3, thanks for your insightful comments and suggestions. Below is our detailed response: ------ **[Concern 1]** Rigor of the main claim that increasing training data generally improves model effectiveness **[Re]** We acknowledge that increasing high-quality training data can improve model performance, but training efficiency is a critical factor. Our work focuses on the efficient long-context alignment algorithm in LOGO rather than pure data scaling. Indiscriminate data scaling offers limited returns on long-context tasks. As shown in Table 1, SFT with 5B tokens (Data-Engineering-80K) improves LongBench performance by only +1.3 points, whereas LOGO achieves +2.5 points using just 0.3B tokens (16× less data). Additionally, Table 3 shows that LOGO improves performance (40.7 → 47.0), while SFT obtains a smaller gain (40.7 → 43.2) with a similar amount of data. Moreover, scaling data size in SFT requires careful balancing, as improper ratios (Figure 10) can hinder alignment. `We also compare LOGO with other long-context DPO methods; please refer to the second point of Reviewer 6VnN.` ------ **[Concern 2]** Evaluation results on LongBench-V2 **[Re]** We have included the results on LongBench-v2 below.
| Model | Overall | Easy | Hard | Short | Medium | Long |
|--------------------------------------|--------------|------------|------------|------------|------------|------------|
| Llama-3-8B-Instruct-80K-QLoRA-Merged | 10.3 | 9.4 | 10.9 | 11.1 | 11.2 | 7.4 |
| + LOGO | 18.3 (**+8.0**) | 17.2 (**+7.8**) | 19.0 (**+8.1**) | 20.6 (**+9.5**) | 15.3 (**+4.1**) | 26.9 (**+19.5**) |
| Mistral-7B-Instruct-v0.3 | 25.6 | 24.5 | 26.4 | 30.0 | 25.6 | 18.5 |
| + LOGO | 29.8 (**+4.2**) | 30.2 (**+5.7**) | 29.6 (**+3.2**) | 35.0 (**+5.0**) | 28.4 (**+2.8**) | 26.9 (**+8.4**) |
| Qwen2.5-7B-Instruct | 30.2 | 32.3 | 28.9 | 37.8 | 25.1 | 27.8 |
| + LOGO | 32.8 (**+2.6**) | 35.9 (**+3.6**) | 30.9 (**+2.0**) | 40.6 (**+2.8**) | 28.8 (**+3.7**) | 33.3 (**+5.5**) |

We observed that Llama-3-8B-Instruct-80K-QLoRA-Merged performed significantly worse than expected, with results even lower than the *25% random guessing baseline*, while LOGO improves it by an average of 8 points. We have carefully verified this outcome using the official code and double-checked our implementation to ensure its correctness. After applying **LOGO training**, all models exhibit substantial performance gains, further validating the effectiveness of our approach. `Additionally, based on the suggestions from Reviewer 2obo, we have also provided the evaluation results on **Ruler** in the second point in the rebuttal box for Reviewer 2obo.` ------ **[Concern 3]** Innovative Contributions **[Re]** Long-context capabilities have become indispensable, making effective long-context alignment more critical than ever. To our knowledge, **LOGO is among the first open-source RL methods tailored for long-context alignment at the time of this paper's submission**, whereas prior works primarily achieve alignment with SFT.
As demonstrated earlier, SFT alone yields limited benefits for long-context tasks, and while RLHF is a promising alternative, existing RL methods (e.g., DPO) are not well-suited for long-context alignment due to deployment challenges and inefficiencies. While SimPO is effective for short-context generation, it lacks adaptations for long-context tasks. LOGO introduces: 1) **Positional synthesis**, enabling efficient handling of long sequences. 2) **A novel preference data construction method**, along with a novel preference strategy, to address the lack of long-context evaluation models. ------ **[Question 1]** Entity Extraction and Importance Scoring **[Re]** 1) **Entity Extraction**: As noted in Section 4.1 (Lines 261–262), we use spaCy to extract entities (e.g., person names, locations) from questions and chunks. 2) **Importance Metric**: In long-context QA tasks, questions often target specific snippets within the long context. Overlapping entities between a chunk and a question indicate higher relevance (e.g., a chunk mentioning "Berlin" is likely critical for a question about "Germany’s capital"). We empirically found that selecting top-K chunks (e.g., K=16 for 512-token chunks) suffices to cover salient information, balancing efficiency and accuracy. Therefore, we use the number of overlapping entities as a key metric to measure importance, and the importance score reflects the relevance of each chunk and the question. ------ We hope our responses have adequately addressed your concerns. If you have any further questions, please don’t hesitate to ask.
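The chunk-scoring step described in the answer above (counting overlapping entities between each chunk and the question, then keeping the top-K chunks) can be sketched as follows. This is a minimal illustration, not the paper's code: entities are assumed to be pre-extracted — e.g. from spaCy's `doc.ents` — and passed in as plain strings, and the function names are hypothetical:

```python
def importance_scores(question_entities, chunk_entities_list):
    """Score each chunk by how many entities it shares with the question."""
    q = set(e.lower() for e in question_entities)
    return [len(q & set(e.lower() for e in ents))
            for ents in chunk_entities_list]

def top_k_chunks(scores, k):
    """Indices of the k highest-scoring chunks, best first."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return order[:k]
```

For example, with question entities `["Berlin", "Germany"]`, a chunk mentioning both entities scores 2 and would be ranked ahead of a chunk mentioning only one or neither, mirroring the "Berlin"/"Germany's capital" example in the rebuttal.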
Reidentify: Context-Aware Identity Generation for Contextual Multi-Agent Reinforcement Learning
Accept (poster)
Summary: The paper proposes an algorithm (CAID) for contextual/multi-scenario (where each scenario is defined by a different MDP) multi-agent reinforcement learning (MARL) in the centralized training decentralized execution (CTDE) paradigm. Each scenario is characterized by a context vector which is unobservable (even during training). Furthermore, in each scenario, different agents may have different identities (also unobservable). Focusing on cooperative MARL, the proposed solution uses existing value decomposition based techniques in conjunction with a new transformer architecture to learn the context vector and agent identities. The architecture includes (i) an encoder that estimates the context vector from the current state and observations of all agents, (ii) a decoder that outputs identities of all agents, and (iii) an action regulator that modifies the Q-values of individual agents based on their identities. The agent identity decoder and the context encoder depend on the full state of the environment and hence, are only used during training; only the individual agents' Q-functions are used during test time. The proposed approach is evaluated empirically and compared with baselines to show reasonable improvement. Claims And Evidence: Overall, the proposed approach seems to perform better than baselines in many environments. However, the improvement appears to be small in many cases (e.g., CAID learns with fewer samples but baseline learns to perform better eventually). Therefore, the claim regarding significant improvement seems to have relatively weak evidence (but is still decent). Methods And Evaluation Criteria: The chosen environments and the experimental setup makes sense. Method-wise there are some choices that don't seem completely natural to me. - It is unclear if assuming that the context vector is hidden is necessary. 
Since the context vector is only needed during training time, it would be more efficient to generate/learn the context vector using the knowledge of the scenario. Also, the context vector is allowed to change from one time-step to another (within the same episode); this does not align well with the intuition of what the context vector is trying to capture. - The contextual agent identities are not used during test time. This seems to create a mismatch between training and testing because, during test time, agents are assumed to have fixed identities that don't change across scenarios. It is unclear why there is no agent-level identity predictor that can be used during test-time. Theoretical Claims: There are no theoretical claims in the paper. Experimental Designs Or Analyses: The experimental setup and results appear reasonable and sound from my reading of the paper. I did not check the supplementary material or any code. Supplementary Material: No. Relation To Broader Scientific Literature: - The idea of learning contextual agent identities appears broadly relevant and can enable better skill transfer in MARL. This aspect of skill transfer is often not studied and seems novel to this paper. - As the authors mentioned, the proposed architecture is compatible with any value decomposition based algorithm for cooperative MARL. Essential References Not Discussed: N/A Other Strengths And Weaknesses: - The paper is reasonably well written. - Experiments are performed on a wide range of environments. The authors also include an ablation study to understand the impact of various components of the architecture on the performance. - As mentioned earlier, the intuition behind the various components of the proposed architecture is not completely clear. It would be good to visualize the learned context vectors and/or contextual identities (to make a stronger case for the different choices made by the authors).
Other Comments Or Suggestions: - Equation (1) is not quite correct mathematically, left side is a vector and right side is a scalar. Just a notational issue. Questions For Authors: - The contextual ids are discrete. In the Action Regulator, are these ids treated as one-hot vectors (to feed to the MLP)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful and constructive feedback. We respond below to the key concerns raised: **Q1**: Why is the context vector treated as a latent variable? Why is it allowed to vary within an episode? **A1**: Thank you for raising this point. In realistic Contextual MARL (CMARL) tasks, the contextual information (e.g., layout changes, perspective shifts) is often not explicitly observable or encoded as scenario IDs. Therefore, we model the context vector as a latent representation derived from the agent-environment interaction history rather than assuming it is accessible. Our motivation stems from real-world CMARL tasks where the semantics of an environment can shift significantly even within a single episode. For instance, in a traffic control task, the agent behavior required during morning rush hours may differ substantially from that during the evening, despite both being within one episode. We allow the context vector to evolve over time, effectively treating the episode as a sequence of sub-episodes, each governed by a different latent context. This dynamic modeling improves CAID's ability to adapt to diverse temporal patterns and enhances its learning efficiency under non-stationary environments. We will clarify this motivation in the revised paper. **Q2**: Does not using an identity predictor at test time create a train-test mismatch? **A2**: This is an excellent question. To avoid requiring any identity information during execution, we designed the Action Regulator to integrate identity into the training objective without modifying the test-time inference pipeline. Specifically, during training, the Q-values are adjusted via the Action Regulator using identity embeddings. This transforms the target Q-values in a context-aware manner, effectively shifting the action value space. 
However, both during training and testing, the actual actions taken are always based on the raw Q-values directly output from the agent’s policy network. This ensures that identity modeling benefits learning, while the execution remains identity-independent, avoiding any test-time mismatch. **Q3**: Visualization and interpretability of the learned identities and context vectors. **A3**: Thank you for the suggestion. We now include t-SNE visualizations of both the learned context vectors and the raw states (See the figure in https://anonymous.4open.science/r/CAID-7A6C/Visualization.jpg). The results show meaningful clustering across semantically similar scenarios and agents with shared roles. We believe this helps support the interpretability and necessity of these components. **Q4**: Are the discrete contextual IDs used as one-hot vectors in the Action Regulator? **A4**: Yes, the identity decoder outputs a categorical distribution, from which we sample a discrete identity and convert it into a one-hot representation to feed into the Action Regulator MLP. We will clarify this implementation detail in the revised paper. **Q5**: Equation (1) seems notationally incorrect. **A5**: Thank you for catching this inconsistency. In the revised version, we will correct this by ensuring both sides of the equation have consistent dimensionality. We appreciate your attention to this detail. We thank you again for your thoughtful questions. Your comments have helped us clarify key design choices and highlight future directions for improving test-time identity modeling. We hope our revisions address your concerns and improve your evaluation of the work. --- Rebuttal Comment 1.1: Comment: Thanks for the response which answered some questions. I encourage the authors to clarify the point regarding the dependence on agent identities in the paper. I understand it better from the rebuttal and it was not clear from my initial reading of the paper. 
After reading the other reviews and the rebuttals, I am increasing my score. --- Reply to Comment 1.1.1: Comment: Thank you very much for taking the time to revisit our work during the rebuttal phase and for increasing your score! We greatly appreciate your suggestion to clarify the role of agent identities in the paper. Based on your feedback, we will revise the manuscript to clearly explain the dependence on agent identities.
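The one-hot identity handling described in A4 above can be sketched as follows. This is a minimal illustration under stated assumptions: a single linear layer stands in for the paper's Action Regulator MLP, the adjustment is shown as a simple additive shift of each action's Q-value, and all names and shapes are hypothetical:

```python
def one_hot(identity, num_identities):
    """Convert a sampled discrete identity into a one-hot vector."""
    vec = [0.0] * num_identities
    vec[identity] = 1.0
    return vec

def regulate_q_values(q_values, identity, num_identities, weight_rows):
    """Shift each action's Q-value by a per-identity learned offset.

    weight_rows[a] is a length-num_identities weight vector for action a;
    this linear map is a stand-in for the Action Regulator MLP, which is
    applied only during centralized training, never at execution time.
    """
    ident = one_hot(identity, num_identities)
    return [q + sum(w * e for w, e in zip(row, ident))
            for q, row in zip(q_values, weight_rows)]
```

At execution time, per the rebuttal, agents act on the raw `q_values` alone; the regulated values only shape the training targets.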
Summary: This paper introduces a novel approach called Context-Aware Identity Generation (CAID) to improve the generalization ability and sample efficiency of Multi-Agent Reinforcement Learning in contextual environments. CAID leverages a causal Transformer structure to generate dynamic agent identities, while incorporating an action regulation module that embeds identity information into the action-value space. Claims And Evidence: Most of the conclusions in the paper are supported by experimental results, but there is a lack of comparative experiments to illustrate how performance would degrade without identity modeling or how different identity generation methods would impact the results. Methods And Evaluation Criteria: Overall, the proposed methods and evaluation criteria are reasonable. Theoretical Claims: Yes. Experimental Designs Or Analyses: The experimental design of the paper is generally reasonable. Benchmark tasks such as SMACv2, VMAS, and PyTSC are selected to evaluate the generalization ability and sample efficiency of CAID. Supplementary Material: I focused on the experimental details and additional results, especially the performance curves on different tasks and the hyperparameter settings. Relation To Broader Scientific Literature: The core contribution of this paper is closely related to the research direction of improving the generalization ability of multi-agent reinforcement learning (MARL), especially in terms of identity modeling and context adaptability. Essential References Not Discussed: 1. The UPDeT model enhances performance by utilizing policy decoupling and Transformer architecture, making it advantageous for deployment in tasks with varying numbers of agents. 2. Multi-Agent Transformer (MAT) treats the MARL problem as a sequential modeling task, demonstrating the potential of Transformer-based architectures in MARL. Other Strengths And Weaknesses: Strengths: 1.
Utilize a causal Transformer to generate dynamic agent identities, rather than relying on fixed or predefined identity representations. 2. Embed identity information directly into the action-value space through the action regulator module. 3. Achieve competitive results across multiple multi-agent benchmark tasks. Weaknesses: 1. The paper proposes that dynamic agent identity modeling enhances generalization, but it lacks comparative experiments showing how performance would change if identity modeling were removed or if different identity generation methods were used. 2. The introduction of the causal Transformer structure may lead to higher computational costs, yet the paper does not provide a complexity analysis. 3. There is no analysis of the stability of the proposed method under different conditions. 4. The paper does not explore whether the generated agent identities are interpretable, nor does it analyze whether identity representations can be intuitively understood across different environments. 5. While the paper states that agents in similar tasks can share useful information, it does not clearly explain how task similarity is measured or how identity information is propagated between agents. Other Comments Or Suggestions: 1. Add ablation experiments to test the impact of removing identity modeling or using different identity modeling approaches (e.g., random identity, fixed identity) on the results. 2. Add a complexity comparison analysis. 3. Provide convergence curves or related analysis. 4. Visualize the dynamic changes in identity encoding as tasks vary. 5. Offer a method for measuring task similarity. Questions For Authors: If you resolve my concerns, I will raise my rating. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the constructive and thoughtful feedback. We are encouraged that you found the CAID framework innovative and recognized its potential for improving MARL generalization. Below, we address your concerns in detail: **Q1**: Lack of ablation or alternative identity modeling comparisons. **A1**: Thank you for highlighting this. We have added the following ablation experiments: * CAID w/o AI: Removes the identity modeling entirely—all agents share the same input embedding. * CAID w/ Fixed ID: Assigns static identities based on agent indices (e.g., [1, 2, ..., n]). * CAID w/ Random ID: Assigns random but fixed identities per episode. Results (See the right figure in https://anonymous.4open.science/r/CAID-7A6C/Ablation_study.jpg) show significant performance drops across all variants, especially under strong context perturbations. The original CAID consistently maintains performance, validating the importance of contextual identity learning. **Q2**: No analysis of computational cost due to causal Transformer. **A2**: Thank you for the suggestion. The causal Transformer employed in our method is lightweight, comprising only **a single Transformer block**. We conducted a comparative analysis of the training times for QMIX and CAID based on the recorded logs. Across all scenarios, the additional forward pass introduced by CAID accounts for **less than 15%**, thereby ensuring real-time performance in real-world deployments. Furthermore, the modules introduced in CAID are exclusively involved during centralized training. In the decentralized execution stage, CAID maintains an execution efficiency comparable to that of conventional algorithms. **Q3**: No evaluation of method stability under varying conditions. **A3**: We included robustness analysis across different random seeds, agent initial condition perturbations. 
Results in our paper show that CAID yields an acceptable level of performance variance compared to baselines. **Q4**: Identity interpretability and task similarity are unexplored. **A4**: We agree this is an important direction. We have added t-SNE visualizations of learned identity vectors (See the figure in https://anonymous.4open.science/r/CAID-7A6C/Visualization.jpg), which show clustering of agents with similar roles across varied environments. Additionally, we believe the KL-divergence of contextual encodings can serve as a first step toward quantifying task similarity. We plan to explore more sophisticated similarity metrics and cross-task identity propagation in future work. We deeply appreciate your constructive feedback and have incorporated your suggestions into our revised submission. Your insights significantly enhance the completeness of our work, and we hope these improvements will help raise your evaluation. --- Rebuttal Comment 1.1: Comment: Thank you for your reply, which answered some of my questions, especially the questions about identity interpretability and task similarity. I am satisfied with your answers, and I will improve the score. It would be better if Q2 and Q3 could be more intuitively expressed. --- Reply to Comment 1.1.1: Comment: Thank you very much for your kind follow-up and for improving your score! We're glad to hear that our responses to the questions on identity interpretability and task similarity were helpful. Regarding your comment that Q2 and Q3 could be more intuitively expressed, we would like to provide a clearer summary of our answers: **Q2**: No analysis of computational cost due to causal Transformer. **A2**: To evaluate the computational cost introduced by CAID, we collected the average training times from historical logs of both QMIX and CAID. However, we acknowledge that these logs are affected by server load and other concurrent training processes.
We selected six representative scenarios that vary in terms of agent race and agent count. The results show that CAID introduces only a modest increase in training time (generally within 15%), which we consider acceptable given that all identity-related modules are only used during centralized training and *do not affect test-time execution*.

| Algorithm | Protoss_5_vs_5 | Zerg_5_vs_5 | Terran_5_vs_5 | Protoss_10_vs_10 | Zerg_10_vs_10 | Terran_10_vs_10 |
|----------|:--------------:|:-----------:|:--------------:|:----------------:|:--------------:|:----------------:|
| **QMIX** | 7.43 h | 7.82 h | 7.98 h | 7.96 h | 8.35 h | 8.14 h |
| **CAID** | 8.53 h | 9.12 h | 8.89 h | 8.99 h | 9.30 h | 9.25 h |

The experiments were conducted using NVIDIA RTX 4090 GPUs. We believe these results support our claim that CAID introduces minimal training overhead while significantly enhancing generalization. **Q3**: No evaluation of method stability under varying conditions. **A3**: All benchmark environments in our experiments (SMACv2, VMAS, and PyTSC) follow the Contextual MARL setting. In each evaluation phase, *we test the trained policy across multiple episodes with varying contexts*—such as mirrored agent layouts, rotated map topologies, or agent type permutations. These variations effectively assess the model’s stability across diverse conditions. To further reduce variance and ensure robust evaluation, we conducted all experiments using *multiple random seeds*. The shaded areas in Figures 4 and 5 represent the standard deviation across different seeds, providing a visual indication of performance fluctuation. We once again thank you for your constructive feedback and encouragement! As you kindly mentioned that you would consider increasing the score, we sincerely hope the additional clarification on Q2 and Q3 further strengthens your impression of our work.
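As a quick sanity check, the relative overheads implied by the training-time table can be recomputed directly. This is a minimal sketch; the dictionary keys are hypothetical abbreviations we introduce here (P5 = Protoss_5_vs_5, etc.), not names from the paper.

```python
# Training times (hours) copied from the table above.
qmix = {"P5": 7.43, "Z5": 7.82, "T5": 7.98, "P10": 7.96, "Z10": 8.35, "T10": 8.14}
caid = {"P5": 8.53, "Z5": 9.12, "T5": 8.89, "P10": 8.99, "Z10": 9.30, "T10": 9.25}

# Relative overhead of CAID over QMIX, in percent.
overhead = {k: (caid[k] - qmix[k]) / qmix[k] * 100 for k in qmix}

print({k: round(v, 1) for k, v in overhead.items()})
# → {'P5': 14.8, 'Z5': 16.6, 'T5': 11.4, 'P10': 12.9, 'Z10': 11.4, 'T10': 13.6}
```

Most scenarios fall between roughly 11% and 15%, with only Zerg_5_vs_5 slightly above 15%, consistent with the "generally within 15%" wording.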
Summary: In multi-agent reinforcement learning (MARL), generalization poses a significant challenge. Existing MARL methods exhibit vulnerability when confronted with even slight variations in task settings, requiring the retraining of policies for each task variant. This paper introduces a Context-Aware Identity Generation (CAID) framework, which utilizes global states and local observations from all agents to construct contextual states and dynamically assign agent identities. Claims And Evidence: See the section “Other Strengths And Weaknesses”. Methods And Evaluation Criteria: See the section “Other Strengths And Weaknesses”. Theoretical Claims: I have checked the correctness of any proofs for theoretical claims. Experimental Designs Or Analyses: I have checked the soundness/validity of any experimental designs or analyses. For a more detailed description, please refer to the section "Other Strengths and Weaknesses". Supplementary Material: I have reviewed all of the supplementary material. Relation To Broader Scientific Literature: No. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths 1. The writing is clear and easy to understand. 2. The experiments selected a variety of experimental scenarios (SMACV2, VMAS) and diverse algorithms (including zero-shot generalization). The diversity of experimental scenarios helps demonstrate the effectiveness of the algorithms. Weaknesses 1. How is the generalization of multi-agent policies defined? How does the generalization in this paper differ from the definitions of generalization in related works [1][2]? I recommend that the paper clearly highlight these differences in the Related Works section. 2. The motivation of the paper is somewhat unclear and requires further elaboration. In particular, the statement in Section 2 (Related Works) that "However, these approaches are not specifically tailored for Contextual MARL tasks." needs more clarification.
From my understanding, the aim of the paper is to address the generalization problem by proposing a context-based learning approach for MARL. However, the rationale behind choosing this approach, as well as a detailed discussion on the shortcomings of existing MARL methods in handling generalization, are lacking and need to be presented in a more systematic manner. 3. What does the 'draw' operation in Equation (5) mean? Additional clarification should be provided here. 4. The experimental section needs additional relevant experiments. ① The proposed method seems to adopt a role-based assignment approach, and it would be useful to compare it with works like ROMA and RODE. ② Furthermore, the Action Regularization method discussed in the paper does not impose any constraints on the action space. I am curious about how this ablation method performs on the SMACV2 benchmark. 5. Why does Zerg perform worse than Terran and Protoss in Figure 4? I don't understand why this phenomenon occurs, as there shouldn't be a significant difference at the algorithmic level. Simply stating in the paper that 'the results derived from integrating CAID into QMIX were constrained by QMIX's suboptimal credit assignment capabilities' is not sufficient. It would be more convincing to include additional experiments here. References [1] Mahajan, Anuj, et al. "Generalization in cooperative multi-agent systems." arXiv preprint arXiv:2202.00104 (2022). [2] Qiu, Wei, et al. "Rpm: Generalizable multi-agent policies for multi-agent reinforcement learning." In ICLR. 2023. Other Comments Or Suggestions: No. Questions For Authors: See the section “Other Strengths And Weaknesses”. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful and detailed comments. Below we respond to the key points raised in the "Other Strengths and Weaknesses" section: **Q1**: How is policy generalization defined? How does it differ from definitions in [1] [2]? **A1**: Thank you for this important question. In Section 3.3, we formalize a new generalization framework called *Contextual Multi-Agent Reinforcement Learning (CMARL)*. Unlike traditional multi-task or multi-agent generalization settings (as in [1] [2]), CMARL assumes that different environmental states across episodes can be aligned through latent contextual variables (e.g., rotation, mirroring, permutation). Our goal is to unify semantically equivalent contexts into a compact representation and dynamically assign identities to corresponding agents. This enables policy reuse under *context shift* rather than *task shift*. We will revise the Related Work section to clearly distinguish CMARL from the generalization definitions in [1] [2]. **Q2**: The motivation is unclear and the limitations of prior work are not systematically discussed. **A2**: We appreciate your constructive feedback. We will clarify our motivation by systematically identifying limitations in existing MARL methods: (1) They typically fail to capture semantic transformations in contextual environments (e.g., agent position mirroring), and (2) they lack dynamic identity alignment, making it difficult to reuse policies across variants. We will make explicit that CAID addresses these issues by generating contextual representations and dynamically consistent agent identities for improved generalization. **Q3**: The meaning of the “draw” operation in Equation (5) is unclear. **A3**: Thank you for pointing this out. “Draw” denotes sampling from the agent identity distribution computed via the decoder. We apply Straight-Through Gradients to enable gradient backpropagation. 
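For concreteness, the "draw" plus Straight-Through Gradients combination described in A3 can be sketched as follows. This is a minimal PyTorch sketch under our own assumptions (three candidate identities, arbitrary logits and loss weights), not the exact form of Equation (5) in the paper.

```python
import torch

# Logits produced by a hypothetical identity decoder for one agent
# over three candidate identities.
logits = torch.tensor([[1.0, 2.0, 0.5]], requires_grad=True)
probs = torch.softmax(logits, dim=-1)

# "Draw": sample a discrete identity from the categorical distribution.
idx = torch.multinomial(probs, num_samples=1)
one_hot = torch.zeros_like(probs).scatter_(1, idx, 1.0)

# Straight-through trick: the forward pass sees the hard one-hot,
# while the backward pass treats it as the soft probabilities.
identity = one_hot + probs - probs.detach()

# Any downstream loss now backpropagates to the decoder logits.
loss = (identity * torch.tensor([[0.0, 1.0, 2.0]])).sum()
loss.backward()
assert logits.grad is not None and logits.grad.abs().sum() > 0
```

Numerically, `identity` equals the sampled one-hot, so the sampling stays discrete while gradients still reach the decoder.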
We will revise the equation and add detailed explanations in the main text. **Q4**: Missing comparison with ROMA/RODE; no ablation on Action Regulator. **A4**: Thank you for these insightful comments. We address them as follows: ① We have added comparisons with ROMA and RODE (See the figure in https://anonymous.4open.science/r/CAID-7A6C/Role_methods.jpg). CAID still outperforms these methods on SMACv2. ② We carried out the ablation variant “CAID w/o AI” (without Action Regulator) on SMACv2. Results (See the left figure in https://anonymous.4open.science/r/CAID-7A6C/Ablation_study.jpg) show that removing this module significantly degrades performance, confirming its necessity for consistent action semantics under identity shift. **Q5**: Zerg performs worse than other races in Figure 4. **A5**: Regarding the Zerg performance drop, we believe that Zerg units have higher variance in attack patterns and durability, which makes the value decomposition more sensitive to alignment. As the foundational algorithm of CAID, QMIX exhibits greater performance degradation under these conditions due to its suboptimal credit assignment capability. Thank you again for your valuable suggestions. They have greatly helped us improve the clarity and rigor of our work. And we also appreciate it if you have any further comments.
Summary: The authors introduce the Context-Aware Identity Generation (CAID) framework, which is able to generalize between tasks in one Contextual MARL domain. CAID integrates dynamically assigned identity information into action decoding for each agent, which is claimed to provide smooth adaptation to varying contexts. Combined with the Action Regulator, which uses identities to produce actions for agents, and the Contextual State Encoder, which encodes MARL interaction history into a context sequence, it was shown that CAID outperforms various baselines on classic MARL environments, such as StarCraft SMAC, the Vectorized Multi-Agent Simulator, and Traffic Signal Control environments. Moreover, the authors ablate CAID's components and show that each component is important for better performance. Claims And Evidence: The main claims are supported with experiments: it is clear that CAID outperforms various methods, and that all three components of CAID are important for its performance. But it wasn’t shown that CAID's dynamic role assignment is better than the role assignment of methods introduced in Related Work (ROMA, RODE, COPA). Moreover, the authors compare CAID only with the RIIT, COLA, and VMIX methods, which don’t rely on role assignment for effectiveness: COLA uses a consensus builder, which utilizes DINO to learn labels, which does not imply any identity learning. Methods And Evaluation Criteria: The authors compare CAID with many baselines on a large number of datasets, making the comparison fair by running experiments with 5 seeds and setting hyperparameters according to the original methods. Evaluations are fair and make sense for the problem. But I also wrote my concern about evaluation methods in Claims and Evidence: it seems better to compare CAID with ROMA, RODE, or COPA, to make the effectiveness of dynamic role assignment clearer. Theoretical Claims: The paper does not provide complex theoretical claims, and those that are provided are correct.
The authors provide a detailed description of their method, so I don’t have questions about the CAID methodology. Experimental Designs Or Analyses: As noted in the Methods and Evaluation Criteria paragraph, all experimental designs sound valid. The authors provide a detailed hyperparameter analysis in Appendix A. On the other hand, there are some points of confusion in the analysis of experiments: the ablation study only shows the importance of the identity decoder compared to CAID without the identity decoder, so it does not fully show the effectiveness of dynamic identity learning in comparison with previous methods. Supplementary Material: I reviewed all parts of the Appendix; I think it could be extended with additional information. Relation To Broader Scientific Literature: The paper is related to the MARL literature and proposes a novel and intriguing architecture for managing MARL agent roles compared to previous architectures. Essential References Not Discussed: The paper covers similar literature well; however, I did not quite understand the part about the meta-RL reference. In my opinion, the paper mainly discusses dynamic management of agent identities, which is the main feature of the CAID algorithm, so I think the meta-RL part is redundant in the context of the CAID algorithm. Other Strengths And Weaknesses: **Strengths:** - This paper is well-written and easy to follow - This paper provides a good experimental setup and diverse baselines to compare CAID with - This paper provides an intriguing CAID architecture with interesting insights on the use of identity in MARL agents - This paper provides detailed experiments on the importance of each part of CAID **Weaknesses:** - This paper does not provide experiments with baselines that use different strategies of identity management (ROMA, RODE, COPA), and does not show the effectiveness of the proposed identity management in comparison to previously proposed ones. Other Comments Or Suggestions: None.
Questions For Authors: In Related Work, the authors also have a paragraph discussing Meta Reinforcement Learning. It seems the authors claim that CAID can adapt to similar tasks in one MARL domain, but they did not highlight this clearly in the abstract and experiments. Do the authors think that CAID is able to adapt to tasks within one MARL domain, and can they confirm it with experiments? I thought it would be wrong to deduct points for this claim, as it was not highlighted clearly in the abstract, but if the authors do propose it, it would be better to provide experiments for multitask adaptation of CAID. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the detailed and constructive comments! We are grateful for your careful reading of both the main paper and the supplementary material. Below, we respond to your concerns point by point: **Q1**: The paper does not compare with dynamic role assignment methods. **A1**: Thank you for raising this important point. In the initial submission, we focused on comparisons with RIIT, COLA, and VMIX, as these methods are widely adopted and perform strongly on the new CMARL benchmarks such as SMACv2. However, we fully acknowledge the significance of role-based methods like ROMA and RODE in learning strategies. To address your concern, we have included experimental comparisons with ROMA and RODE (See the figure in https://anonymous.4open.science/r/CAID-7A6C/Role_methods.jpg). Our preliminary results indicate that CAID achieves superior performance and faster convergence, especially in environments with contextual variations such as agent permutations, type switches, and initial state shifts. **Q2**: The relevance of Meta-RL references is unclear. **A2**: We appreciate your thoughtful observation. We originally referenced Meta-RL to provide context for generalization strategies in reinforcement learning. However, we agree that CAID does not follow the standard meta-learning paradigm. Rather, it addresses the generalization challenge by generating context-aware agent identities that unify behaviors across task variants. To improve clarity and focus, we will revise the Related Work section to remove or rephrase the Meta-RL discussion, better highlighting CAID's distinct contribution in dynamic identity generation. **Q3**: Does CAID support multi-task adaptation within the same MARL domain? **A3**: Thank you for this question. As introduced in Section 3.3 of our paper, CAID is specifically designed for the *Contextual Multi-Agent Reinforcement Learning (CMARL)* setting.
While CMARL may involve variations across episodes—such as changes in agent initial positions or types—it differs fundamentally from traditional multi-task reinforcement learning. In CMARL, the core assumption is that tasks share a common structure and can be semantically aligned through a latent contextual representation. The objective is not to learn separate policies for separate tasks, but rather to enable policy reuse by reasoning over contextual variations. Thus, CAID is built to tackle *context shift* rather than *task switch*. We thank the reviewer for raising this important point and will clarify the distinction between CMARL and conventional multi-task RL more explicitly in the revised paper. We really appreciate your comments and they really help us improve our paper! And we also appreciate it if you have any further comments. --- Rebuttal Comment 1.1: Comment: Dear authors! I want to thank you for the quality of your answers! I was satisfied with your comparisons against the requested methods RODE and ROMA. After reading the other reviews and rebuttals, I can see that you provide many additional experiments showing the effectiveness of CAID role assignment. The experiment with different role assignment strategies is especially important in the context of the proposed CAID method. It's good that you clarify that CAID does not follow the standard meta-learning paradigm. I think it would be nice if you highlighted this difference clearly in further iterations of the paper. Overall, the contribution of the CAID method is great. I would like to increase my score to Accept. --- Reply to Comment 1.1.1: Comment: Thank you very much for your thoughtful and encouraging feedback! We completely agree with your suggestion to highlight this difference more clearly in the final version, and we will make sure to revise the introduction and related work sections accordingly.
We sincerely appreciate your support and the time you invested in carefully reviewing and engaging with our work. Your comments and score adjustment mean a lot to us!
Watch Out Your Album! On the Inadvertent Privacy Memorization in Multi-Modal Large Language Models
Accept (poster)
Summary: This paper investigates how MLLMs inadvertently memorize privacy that is irrelevant to the training objectives. The authors introduce a layer-wise probing framework to test whether task-irrelevant privacy is embedded in images during fine-tuning. They provide a formal mathematical proof to demonstrate that MLLMs encode such privacy due to spurious correlations within mini-batch training. Through extensive experiments, they find that even though task-irrelevant privacy does not directly affect downstream task performance, it still leads to distinct representational patterns within the parameters. Additionally, they show that this memorization effect is more pronounced when training with smaller batch sizes. ## =================update after rebuttal============== Thanks for the authors' responses. My concerns have been well addressed. Claims And Evidence: In Table 3, the gradient similarity between models trained with privacy-embedded data and those trained on original data remains relatively high compared to transformations in the image and text modalities. This suggests that while task-irrelevant privacy does influence training dynamics, its effect may not be as strong as natural modality transformations. A deeper discussion on this observation would be beneficial. Methods And Evaluation Criteria: Yes. The proposed probing-based framework effectively quantifies the extent of memorization by assessing layer-wise representational differences in response to previously seen and unseen privacy. Theoretical Claims: Yes. I have checked Section 2 and Appendix A. Experimental Designs Or Analyses: Yes. I have checked all experimental designs and analyses in Section 3 and 4. * The experiments are conducted exclusively on VQA tasks, which remains unclear whether similar behaviors occur in other multi-modal tasks. It would be helpful to evaluate additional multi-modal scenarios about the generality of the issue. 
* The authors mainly study 7B-scale MLLMs with LoRA fine-tuning. Have you considered whether larger MLLMs exhibit more severe privacy memorization due to their increased capacity? * Have you experimented with techniques like membership inference or gradient inversion to assess whether this privacy leakage is practically extractable? Supplementary Material: Yes. I have checked Appendix A-D. Relation To Broader Scientific Literature: The authors uniquely explore the inadvertent memorization of task-irrelevant privacy in MLLMs, which differs from prior attempts that mainly focus on task-relevant privacy risks, and expands the scope of privacy concerns beyond conventional attack scenarios. Essential References Not Discussed: The authors consider a novel privacy leakage scenario that differs from previous research. However, the introduction of related work on PII, which is most relevant to the authors’ study, is still not comprehensive enough. Other Strengths And Weaknesses: Other Strengths: * The authors provide a formal mathematical proof to explain that mini-batch training can induce spurious correlations between task-irrelevant privacy and downstream objectives, which demonstrates that such memorization is not just an empirical anomaly but an inherent issue in training dynamics. Other Weaknesses: See the comments above. Other Comments Or Suggestions: No. Questions For Authors: See the comments above. Ethical Review Concerns: None. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful feedback. Below we address the main points raised in this review. * * * > **Claims And Evidence (W1)**: Privacy-embedded data has less impact on training than natural modality transformations. We clarify that embedding task-irrelevant privacy significantly impacts gradient updates, as evidenced by gradient similarities substantially lower than those for random noise when compared to the Origin column, indicating that MLLMs distinctly attend to the embedded privacy content. Although gradient similarity remains higher than with transformations in the text modality, it closely aligns with image modality transformations, especially in Qwen. This similarity is reasonable because task-irrelevant privacy embeddings introduce subtle perturbations intentionally designed to be unrelated to the downstream task, whereas natural transformations in images and texts directly affect task-relevant features, capturing greater attention from MLLMs. * * * > **Experimental Designs Or Analyses (W1)**: Findings may not generalize beyond VQA and broader multi-modal evaluation is needed. We choose VQA because it is one of the most widely adopted benchmark tasks for both training and evaluating MLLMs. Moreover, it involves both visual and textual reasoning in a relatively balanced manner, making it a strong representative task for examining potential and inadvertent memorization. To address the reviewer's concern about generalizability, we conduct additional experiments on LLaVA 1.5 7B using the image captioning task COCO_2014. In this setup, the gradient similarity between two runs on the original data is $97.8 \pm 5.4$, whereas the gradient similarity between the original data and the privacy-embedded data is $91.4 \pm 7.7$.
**This aligns with our findings on VQA tasks and provides further evidence that MLLMs consistently exhibit the capability to encode task-irrelevant privacy across various downstream tasks.** * * * > **Experimental Designs Or Analyses (W2)**: Consider whether larger MLLMs exhibit more severe privacy memorization? We have investigated whether larger MLLMs exhibit more severe privacy memorization due to their increased capacity. We conduct additional experiments by (1) increasing the parameter scale from 7B to 13B, and (2) increasing the LoRA rank from 128 to 256. Due to space limitations, detailed results can be found in the response of *Experimental Designs Or Analyses (W1) to Reviewer 1hkx*. * * * > **Experimental Designs Or Analyses (W3)**: Consider testing extractability with membership inference or gradient inversion. **Yes, we have explored the possibility of privacy leakage via MIA, which shows minimal performance.** Due to space constraints, comprehensive analyses and results are provided in the response of *Relation To Broader Scientific Literature (W1) to Reviewer y5xM*. Regarding gradient inversion, it is particularly suitable for federated learning. **Our gradient similarity experiments revealed that MLLMs indeed capture weak signals during training. This suggests the potential feasibility of extracting task-irrelevant private content via gradient inversion within federated learning contexts.**
analyzed how the association capabilities of LLMs could facilitate privacy leakage. Meng et al. proposed a two-step attack to recover masked PII from training data. Recent research has also begun extending PII detection to MLLMs, such as evaluating autonomous web agents (Zharmagambetov et al.). **However, these prior works do not sufficiently consider scenarios where PII is entirely irrelevant to the training task.** We will incorporate these additional references in the related work section of the final version. [1] Carlini N, et al. Extracting training data from large language models. 2021. [2] Lukas N, et al. Analyzing leakage of personally identifiable information in language models. 2023. [3] Kim S, et al. Propile: Probing privacy leakage in large language models. 2023. [4] Shao H, et al. Quantifying Association Capabilities of Large Language Models and Its Implications on Privacy Leakage. 2024. [5] Meng W, et al. RR: Unveiling LLM Training Privacy through Recollection and Ranking. 2025. [6] Zharmagambetov, A, et al. AgentDAM: Privacy Leakage Evaluation for Autonomous Web Agents. 2025. * * * We hope that our explanations above can clarify your doubts and you can consider our work more favorably.
Summary: The paper explores the effects of incorporating synthetic task-irrelevant private content into training datasets on multimodal large language models (MLLMs). The authors analyze how such content influences gradient updates, model memorization, and the ability to differentiate between injected private information and standard task data. They conduct controlled experiments on multiple VQA datasets (COCO, GQA, OCR-VQA, etc.) and propose a probing method to verify whether task-irrelevant information is being inadvertently memorized by models. Key findings suggest that task-irrelevant private content can subtly alter model learning and performance, potentially leading to the unintended encoding of privacy-sensitive information. Claims And Evidence: The authors make several claims regarding the impact of task-irrelevant private content on model learning and memorization: 1. The presence of such content affects training gradients. 2. Models trained with private content exhibit a higher likelihood of responding to related test-time queries. 3. Intermediate embeddings in the trained model contain distinguishing information about the injected content. These claims are supported by empirical results, but certain aspects lack strong validation. For instance, the influence of subset size on probing effectiveness is not thoroughly analyzed. Additionally, while Table 2 suggests significant performance degradation in OCR-VQA, TextVQA, and Visual Genome due to private content embedding, Section 4.2 claims a more generalized minimal effect, which appears contradictory. Addressing these inconsistencies would strengthen the claims. Methods And Evaluation Criteria: The methodology is well-structured, leveraging VQA datasets with a clear partitioning strategy for fine-tuning and probing. However, the decision to use only five items of private content per subset may limit the probing method’s applicability. 
A more detailed ablation study on the effect of subset size would enhance understanding. The probing evaluation is novel, but it does not sufficiently address whether the model can explicitly regenerate private content under adversarial prompting, a key concern for privacy risks. Theoretical Claims: No. Experimental Designs Or Analyses: While the experimental setup is comprehensive, certain areas need further validation: 1. The size-dependent effect of private content injection on probing results remains unclear. 2. The observed performance decrease in Table 2 suggests varying levels of susceptibility across datasets. The authors should investigate why OCR-VQA and TextVQA experience higher accuracy drops. 3. The probing technique focuses on detecting indirect memorization but does not analyze whether private content can be specifically extracted with targeted prompts. Addressing these issues would refine the paper’s experimental soundness. Supplementary Material: N/A Relation To Broader Scientific Literature: N/A I'm not quite familiar with this area. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. Introduces an interesting privacy-oriented perspective on task-irrelevant content in training data. 2. Provides novel probing methods to analyze memorization effects. 3. Experimental results are well-organized and provide useful insights into the impact of private content. Weaknesses: 1. The probed memorization may not align with the real-world privacy risks community members are most concerned about (i.e., explicit memorization rather than influence on gradients). 2. The limited size of private content subsets in experiments may affect the generalizability of conclusions. Other Comments Or Suggestions: The authors should provide a deeper analysis of how subset size affects memorization sensitivity. 
Additionally, an experiment explicitly testing whether private content can be reconstructed via tailored adversarial prompts would improve the discussion on practical privacy risks. Questions For Authors: 1. Subset Design: Given that each subset for probing contains only five private content items, could this artificially amplify differentiation between the models? How does the number of private content items per subset influence the probing outcome? 2. Performance Drop: Table 2 suggests that OCR-VQA, TextVQA, and Visual Genome are disproportionately affected by private content injection. Could you clarify why these datasets exhibit greater sensitivity compared to COCO or GQA? 3. Explicit Memorization Risk: While the probing method demonstrates indirect memorization effects, have you tested whether models exposed to private content can regenerate it when prompted adversarially? Would such an evaluation align better with real-world privacy risks? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful feedback. We first summarize all the issues and suggestions raised by the reviewer, and address the main points raised in this review.

* * *

> **Issue 1**: While Table 2 suggests significant performance degradation in OCR-VQA, TextVQA, and Visual Genome due to private content embedding, Section 4.2 claims a more generalized minimal effect, which appears contradictory.

**We argue that this arises primarily from the randomness inherent in the fine-tuning processes, rather than from privacy injection.** In most of our evaluations, the performance fluctuations induced by privacy embedding remain within a tolerable range: some results increase slightly, while others decrease, but no catastrophic deterioration occurs. This consistency suggests that the injected privacy does not substantially alter the core training data distribution or compromise its quality. Regarding the "greater sensitivity", we believe there are two main reasons:
1. **Fine-tuning data volume**: TextVQA and Visual Genome contain considerably fewer samples than COCO and GQA, as shown in Table 5, so their performance can exhibit larger variance.
2. **Randomness in the fine-tuning process**: The fine-tuning procedure for LLaVA appears to induce substantially more variability than that of Qwen-VL. Qwen-VL shows only minimal performance fluctuations across all five evaluated datasets when privacy is injected. This implies that the fine-tuning procedure, rather than privacy injection, can strongly affect downstream performance.

* * *

> **Issue 2**: The decision to use only five items of private content per subset may limit the probing method's applicability. A more detailed ablation study on the effect of subset size would enhance understanding.

In response, we have performed an additional ablation study where we increased the number of items within each subset from 5 to 100.
Specifically, we ask GPT-4 to generate 100 distinct usernames and corresponding user_ids for each subset. To avoid repetition, we request GPT-4 to check for duplicates after each generation. We use these 100 private items on Qwen-VL for COCO. Results are shown below:

|Origin|w/Privacy (Subset = 5)|w/Privacy (Subset = 100)|ImageTransf.|TextTransf.|
|-:|-:|-:|-:|-:|
|100.0|97.0|93.2|93.8|49.4|

As the privacy subset size increases, the gradients of MLLMs exhibit more significant deviations from the original gradient updates, indicating that **MLLMs spend more effort in each gradient step learning different privacy information as the privacy subset size increases**.

* * *

> **Issue 3**: The probing evaluation is novel, but it does not sufficiently address whether the model can explicitly regenerate private content under adversarial prompting, a key concern for privacy risks.

We have conducted comprehensive experiments to assess the effectiveness of general attack methods such as adversarial prompting and other practical methods like MIAs. We directly ask for the username visible in the testing image ("Have you seen the username before?"), and ask for the corresponding user_id that requires multi-hop reasoning ("What is the user_id of the username in this image?"). We find that both LLaVA and Qwen-VL perform at nearly random accuracy (~50%) in the first scenario, and the accuracy for correctly identifying the user_id is 0%, which means that **direct prompting is ineffective in detecting the slight task-irrelevant privacy leakage**. Additionally, we have explored the possibility of privacy leakage via MIA. **Our evaluations reveal that MIAs such as LOSS, Zlib Entropy, and Min-k% Prob show only minimal improvements compared to MLLMs before fine-tuning.** Due to space constraints, comprehensive analyses and results are provided in the response to *Relation To Broader Scientific Literature (W1) to Reviewer y5xM*.
* * *

> **Issue 4**: The probed memorization may not align with the real-world privacy risks community members are most concerned about (i.e., explicit memorization rather than influence on gradients).

Although explicit memorization scenarios such as direct prompting do not pose privacy risks in our setting, our work still identifies two significant real-world privacy risks that community members are concerned about. Firstly, similar to MIAs, an attacker with prior knowledge of a general range of usernames could randomly embed these private items into images, and use our proposed probing method to detect which usernames were used in fine-tuning, thus exposing sensitive information. Secondly, our findings demonstrate that MLLMs capture privacy-related information in gradients during mini-batch training, which provides theoretical support for gradient inversion attacks in federated learning settings. A malicious client could potentially exploit gradient information to infer task-irrelevant private content.

* * *

We hope that our explanations above can clarify your doubts and that you can consider our work more favorably.
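The gradient-deviation measurements referenced throughout this rebuttal boil down to comparing mini-batch gradients before and after content injection. Below is a generic, illustrative sketch using cosine similarity on a toy linear model; everything here is an assumption for illustration and not the authors' implementation:

```python
import numpy as np

def grad(w, X, y):
    # Gradient of the squared loss 0.5 * ||Xw - y||^2 with respect to w;
    # a toy linear model stands in for an MLLM (illustrative assumption).
    return X.T @ (X @ w - y)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
n, d = 64, 16
X = rng.normal(size=(n, d))          # a clean mini-batch
y = X @ rng.normal(size=d)           # targets from a ground-truth model
w = rng.normal(size=d)               # current model parameters

g_clean = grad(w, X, y)

# "Privacy injection": lightly perturb a few inputs in the batch,
# mimicking task-irrelevant content embedded into some training samples.
X_priv = X.copy()
X_priv[:4] += 0.3 * rng.normal(size=(4, d))
g_priv = grad(w, X_priv, y)

# Similarity stays high but measurably below 1.0: the injected content
# shifts the gradient even though the task labels are unchanged.
sim = cosine(g_clean, g_priv)
```

A real experiment would flatten and concatenate the per-parameter gradients of the fine-tuned blocks rather than use a single weight vector.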
Summary: The paper examines how MLLMs inadvertently memorize task-irrelevant private content due to spurious correlations during mini-batch training. It begins with a preliminary analysis that formalizes the conditions under which such memorization occurs, followed by a rigorous mathematical proof demonstrating how task-irrelevant content can influence model parameters. To empirically validate their claims, the authors introduce a probing method that embeds random private content into images at varying rates and later tests whether the hidden states of each layer can distinguish between seen and unseen private content after fine-tuning. They find that introducing randomly generated task-irrelevant private content significantly shifts gradient directions compared to training without such content. They also show that even though downstream task performance remains largely unaffected, MLLMs start encoding these spurious signals at lower layers. Finally, smaller batch sizes exacerbate this inadvertent memorization by increasing the likelihood of spurious correlations in mini-batches, which further supports the proof.

## update after rebuttal
I will keep my ratings since most of my concerns are solved.

Claims And Evidence: The authors provide multiple lines of evidence supporting their claims about inadvertent memorization. First, the gradient similarity experiments demonstrate that introducing task-irrelevant private content shifts training updates beyond random fluctuations, indicating that the model is indeed encoding spurious signals. Second, the probing experiments show that MLLMs do distinguish between seen and unseen private content, suggesting that these signals are retained in internal representations despite having no direct impact on downstream tasks. Third, the ablation experiment on batch size demonstrates that spurious correlations in encoding arise from the mini-batch training paradigm.
However, in the gradient similarity experiment, the authors report that the gradient similarity with privacy-embedded data remains notably high compared to the baseline, which raises concerns about the validity of their claims.

Methods And Evaluation Criteria: Yes, the proposed method and evaluation criteria are well-justified; I can easily see how the probing framework effectively measures inadvertent memorization.

Theoretical Claims: Yes.

Experimental Designs Or Analyses: Yes. The experimental designs and analyses are comprehensive. However, for the gradient similarity experiments, since real-world training involves multiple batches rather than single-step updates, the authors should verify whether gradient differences persist over multiple training steps. Moreover, in the probing experiments, the authors mention directly prompting models about memorized privacy; could the authors clarify the probing accuracy obtained through direct prompting?

Supplementary Material: Yes.

Relation To Broader Scientific Literature: The authors extend prior findings on privacy risks by examining a relatively unexplored scenario of task-irrelevant privacy memorization. However, they only show the possibility of such inadvertent memorization, without verifying whether the memorized privacy can be exposed or how it might be mitigated.

Essential References Not Discussed: None

Other Strengths And Weaknesses: All strengths and weaknesses have been addressed above.
Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful feedback. Below we address the main points raised in this review.

* * *

> **Claims And Evidence (W1)**: High gradient similarity with privacy-embedded data raises concerns about the claims.

We clarify that embedding task-irrelevant privacy significantly impacts gradient updates, as evidenced by gradient similarities substantially lower than random noise compared to the Origin column, indicating that MLLMs distinctly attend to the embedded privacy content. Although gradient similarity remains higher than in text transformation, it closely aligns with image modality transformations. This similarity is reasonable because task-irrelevant privacy embeddings introduce subtle perturbations intentionally designed to be unrelated to the downstream task, whereas natural transformations in images and texts directly affect task-relevant features, capturing greater attention from MLLMs.

* * *

> **Experimental Designs Or Analyses / Supplementary Material (W1)**: Verify if gradient differences persist over multiple training steps, not just single-step updates.

We conduct additional experiments using LLaVA 1.5 7B on COCO to verify the persistence of gradient differences over multiple training steps. We measure gradient similarity after multiple updates across 1, 10, and 100 mini-batches:

|Batch Updates|Origin|w/Privacy|ImageTransf.|TextTransf.|
|-|-|-|-|-|
|1|98.3|92.9|85.3|5.3|
|10|97.5|91.6|83.3|0.6|
|100|91.9|84.6|74.6|0.2|

Gradient similarity in all transformed scenarios gradually decreases with the number of mini-batch updates. Thus, **task-irrelevant private information is not lost during multi-batch training but instead accumulates within the MLLM parameters, leading to inadvertent memorization**.

* * *

> **Experimental Designs Or Analyses / Supplementary Material (W2)**: Clarify probing accuracy from direct prompting in experiments.
**We find that direct prompting is completely ineffective for detecting memorized privacy in this task-irrelevant scenario.** Specifically, we directly ask for the username visible in the testing image ("Have you seen the username before?"), and ask for the corresponding user_id that requires multi-hop reasoning ("What is the user_id of the username in this image?"). Both LLaVA and Qwen-VL perform at nearly random accuracy (~50%) in the first scenario, and the accuracy for correctly identifying the user_id is 0%, which means that direct prompting is ineffective in detecting the slight task-irrelevant privacy leakage.

* * *

> **Relation To Broader Scientific Literature (W1)**: The authors show the possibility of such inadvertent memorization, without verifying whether the memorized privacy can be exposed or how it might be mitigated.

We have conducted experiments to verify whether the memorized privacy can be exposed through existing attack methods. For direct prompting, we have provided the results in response to **Experimental Designs Or Analyses / Supplementary Material (W2)**. Additionally, we have constructed a suitable dataset for MIA by leveraging GPT-4 to randomly generate 20 distinct samples embedding each piece of privacy information, yielding 100 member and 100 non-member instances. We subsequently perform evaluations on Qwen-VL Chat to compare its behavior before and after fine-tuning with a privacy embedding rate of 100% on GQA. We consider three popular MIA methods: LOSS [1], Zlib Entropy [2], and Min-k% Prob [3]. The results are presented below. They indicate **only marginal changes in MIA accuracy after fine-tuning, which means that MIAs generally fail when facing such weak, task-irrelevant signals as studied in this paper**.

|Model|LOSS|Zlib Entropy|Min-k% Prob|
|-|-|-|-|
|Before Tuning|0.507|0.638|0.535|
|After Tuning|0.499|0.633|0.532|

However, this does not inherently imply that task-irrelevant privacy information is secure.
**Our gradient similarity experiments indicate noticeable differences in gradients when MLLMs encode privacy, suggesting a potential risk of gradient inversion attacks in scenarios such as federated learning.**

Concerning mitigation strategies, we identify spurious correlations captured during mini-batch training as the critical factor in inadvertent memorization. Therefore, increasing batch sizes or employing gradient accumulation can significantly reduce the likelihood of encoding privacy. **Our ablation experiments clearly demonstrate that increasing batch sizes reduces gradient differences before and after encoding, supporting our hypothesis that larger batches effectively mitigate the probability of capturing such spurious correlations.**

[1] Yeom S, et al. Privacy risk in machine learning. 2018.
[2] Carlini N, et al. Extracting training data from large language models. 2021.
[3] Shi W, et al. Detecting pretraining data from large language models. 2023.

* * *

We hope that our explanations above can clarify your doubts and that you can consider our work more favorably.

--- Rebuttal Comment 1.1: Comment: Solved my concerns. I will keep my ratings.
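For readers unfamiliar with the MIA baselines evaluated in this thread: each scores a sample and compares the score against a threshold. Below is a minimal, generic sketch of the LOSS and Min-k% Prob scores, using made-up per-token log-probabilities rather than a real model's outputs:

```python
def loss_score(token_logprobs):
    # LOSS attack: average negative log-likelihood of the sample;
    # lower values (higher likelihood) suggest membership.
    return -sum(token_logprobs) / len(token_logprobs)

def min_k_score(token_logprobs, k=0.2):
    # Min-k% Prob: average negative log-likelihood over only the k%
    # least likely tokens, which are most informative for membership.
    m = max(1, int(len(token_logprobs) * k))
    worst = sorted(token_logprobs)[:m]
    return -sum(worst) / m

# Toy log-probabilities (illustrative only): a member sample is predicted
# more confidently than a non-member sample.
member = [-0.2, -0.1, -0.3, -0.2, -0.4]
nonmember = [-1.5, -0.8, -2.2, -1.1, -0.9]

assert loss_score(member) < loss_score(nonmember)
assert min_k_score(member) < min_k_score(nonmember)
```

The rebuttal's finding is that, on weak task-irrelevant signals, these scores barely separate members from non-members.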
Summary: This paper demonstrates that MLLMs can inadvertently memorize private content entirely unrelated to their training tasks. The authors provide a rigorous mathematical proof explaining how mini-batches introduce spurious correlations, leading MLLMs to store even random private data. Through a novel probing method, they reveal that MLLMs internally distinguish between private content they have encountered and content they have not. Claims And Evidence: Figure 6 shows similar accuracy trends for both the direct username query and the multi-hop user-id query. However, the dimensionality-reduced visualizations in Figures 4 and 5 appear distinctly different. Is there a more fundamental explanation for why these lower-dimensional representations exhibit such dissimilar patterns, while the probing accuracy suggests a similar level of memorization? Methods And Evaluation Criteria: While the proposed probing framework is innovative, the probing classifiers themselves can introduce certain biases. Control tasks could further confirm whether the classifier captures genuine memorization patterns rather than noise [1]. [1] Hewitt J, Liang P. Designing and Interpreting Probes with Control Tasks[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). 2019: 2733-2743. Theoretical Claims: I have checked the correctness of proofs for theoretical claims. The analysis in Section 2 is particularly enlightening. By laying out the definition of task-irrelevant content and presenting rigorous proof in Appendix A, the authors offer a clear mathematical foundation for why random content can be memorized during mini-batch training. The ablation studies further validate the proof, making the argument more convincing. Experimental Designs Or Analyses: I have carefully checked the experimental designs and analyses. 
For experimental designs, the use of the probing method provides new insights into how MLLMs encode spurious information within parameters. For experimental analyses, the gradient similarity experiments offer a straightforward way to illustrate that MLLMs focus on extra content. However, empirical experiments are only conducted on LLaVA 7B and Qwen-VL 7B, which does not fully explore how varying parameter scales might influence inadvertent memorization.

Supplementary Material: I have carefully checked all parts of the supplementary material.

Relation To Broader Scientific Literature: This paper studies whether MLLMs memorize randomly generated private content that does not help with the training tasks at all. While previous scientific literature mainly focused on private content already aligned with model objectives, this work shows that MLLMs can also encode irrelevant private content through spurious correlations in mini-batch training.

Essential References Not Discussed: In the gradient similarity experiments, which part of the MLLM's parameters was tested? Is inadvertent privacy more likely to be encoded in the LLM parameters or in the vision tower?

Other Strengths And Weaknesses: I have already included the key questions in other sections. There are no other questions.

Other Comments Or Suggestions: See the questions above.

Questions For Authors: See the questions above.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful feedback. Below we address the main points raised in this review.

* * *

> **Claims And Evidence (W1)**: Probing accuracy is similar, but why do visualizations differ so much?

We thank the reviewer for this insightful observation, which we also find an intriguing phenomenon. The fundamental distinction between the two scenarios lies in how the MLLMs handle the queries:
* In the direct username query scenario, the username is explicitly presented within the probing image, allowing the MLLM to directly recall the seen private content from memory.
* In contrast, the user-id query scenario necessitates multi-hop reasoning, where the MLLM must infer the user-id by associating it with the username previously encountered during training.

**Therefore, we argue that the second scenario inherently contains more nonlinear yet discriminative features. These complex nonlinear relationships are less distinctly captured by low-dimensional visualization, which results in less pronounced visual clusters.** However, the probing classifier's consistently high accuracy indicates that despite their subtlety in visualizations, these nonlinear relationships remain strongly distinguishable in the high-dimensional representation space.

* * *

> **Methods And Evaluation Criteria (W1)**: Probing classifiers may be biased; control tasks can validate true memorization signals.

We share the reviewer's concern regarding the potential biases introduced by probing classifiers. In this paper, we deliberately select the simplest linear classifier as the probing model to minimize bias, following the method proposed by Hewitt et al. Moreover, we construct control tasks by randomly shuffling labels in accordance with Hewitt et al. We find that the test accuracy consistently remains around 50%, aligning with random guessing.
This confirms that subtracting the control task accuracy (i.e., using selectivity) and simply measuring the probing accuracy lead to similar conclusions.

[1] Hewitt J et al., Designing and Interpreting Probes with Control Tasks, 2019.

* * *

> **Experimental Designs Or Analyses (W1)**: Experiments on 7B models only; scale effects on memorization remain unexplored.

We thank the reviewer for raising this concern regarding the impact of parameter scales. In response, we conduct additional experiments by (1) increasing the parameter scale of LLaVA from 7B to 13B, and (2) increasing the LoRA rank from 128 to 256 in the LLaVA 7B setting. Our findings indicate that **when privacy is embedded at different parameter scales, the gradients obtained with embedded privacy maintain significant divergence from those of normal training**. Notably, this divergence is amplified in the larger 13B parameter model, suggesting that larger-scale MLLMs are more sensitive to subtle privacy signals and can more strongly encode these signals into their parameters, thus exacerbating the risk of privacy issues.

* Results for LLaVA 13B

|Dataset|Origin|w/Privacy|ImageTransf.|TextTransf.|
|-|-:|-:|-:|-:|
|coco|97.4|91.4|85.8|1.9|
|gqa|91.8|81.5|74.2|1.2|
|ocrvqa|98.0|73.8|28.8|1.3|
|textvqa|96.7|90.6|67.1|2.4|
|vg|89.1|78.8|73.5|2.9|

* Results for LoRA-256 on LLaVA 7B

|Dataset|Origin|w/Privacy|ImageTransf.|TextTransf.|
|-|-:|-:|-:|-:|
|coco|99.4|93.9|87.3|2.8|
|gqa|98.2|86.8|76.9|1.8|
|ocrvqa|98.8|77.0|30.4|2.8|
|textvqa|99.4|94.6|72.4|2.0|
|vg|97.6|87.0|75.6|2.6|

* * *

> **Essential References Not Discussed (W1)**: Unclear which parameters were tested; the LLM and the vision tower may differ in privacy encoding.

In this paper, we initially follow the default settings of LLaVA 1.5 7B and Qwen-VL Chat, where for Qwen we freeze the entire vision block and apply LoRA only to the language transformer block, while for LLaVA we fine-tune both the vision and language blocks.
To further investigate whether inadvertent privacy is more likely to be encoded in the LLM parameters or in the vision tower, we freeze all LLM parameters and allow the final layer of the vision block to update its gradients during fine-tuning on COCO in Qwen-VL Chat. The results are presented below:

|Dataset|Origin|w/Privacy|ImageTransf.|TextTransf.|
|-|-:|-:|-:|-:|
|Language Block|100.0|97.0|93.8|49.4|
|Vision Block|100.0|32.6|20.5|51.8|

Surprisingly, we observe a significant reduction in gradient similarity for the vision block when privacy is embedded into the images, while the gradient similarity in the text modality transformation remains relatively unchanged. **Thus, inadvertent privacy is more likely to be encoded in the vision tower.** We will include these experimental results in the final version to emphasize this heightened privacy concern.

* * *

We hope that our explanations above can clarify your doubts and that you can consider our work more favorably.
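The linear probe and label-shuffling control task discussed in this thread can be sketched as follows. Synthetic Gaussian features stand in for MLLM hidden states, and the logistic-regression probe is trained with plain gradient descent; this is an illustrative sketch, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32                                     # hidden-state dimensionality
seen = rng.normal(0.5, 1.0, (200, d))      # states for "seen" private items
unseen = rng.normal(-0.5, 1.0, (200, d))   # states for "unseen" private items
X = np.vstack([seen, unseen])
y = np.concatenate([np.ones(200), np.zeros(200)])

def train_probe(X, y, steps=500, lr=0.1):
    # Logistic-regression probe (the simplest linear classifier),
    # returning its training accuracy.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return float(np.mean(((X @ w + b) > 0) == y))

acc = train_probe(X, y)                    # high if the signal is real

# Control task (Hewitt & Liang): shuffle the labels; a probe that still
# scored well would indicate the probe, not the representations, carries
# the signal. Here accuracy drops toward chance.
ctrl_acc = train_probe(X, rng.permutation(y))
```

Reporting `acc - ctrl_acc` (selectivity) and reporting `acc` alone lead to the same conclusion when the control accuracy sits near chance, which matches the rebuttal's observation.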
Accelerating LLM Inference with Lossless Speculative Decoding Algorithms for Heterogeneous Vocabularies
Accept (oral)
Summary: This work proposes three methods for using a draft model with a different vocabulary than the target model in a typical speculative decoding framework. The authors propose: 1) string-level exact matching (SLEM), in which the draft tokens are decoded back into string representations and re-encoded by the target model tokenizer; 2) string-level rejection sampling (SLRS), which modifies the above by including rejection sampling at the string level; and 3) token-level intersection (TLI), in which a modified draft vocabulary is generated as the intersection of the target and original draft vocabularies, enabling more typical token-level verification from a subset of the original drafter vocabulary. The authors explore numerous nuances and challenges that heterogeneous vocabularies introduce, such as non-injective tokenizers, KV-cache considerations, and lookahead controls which reduce unnecessary forward passes of the draft model. A small empirical study is conducted which compares autoregressive decoding to standard homogeneous SD and heterogeneous SD using SLEM and TLI.

Claims And Evidence: Yes, in general the claims made are well supported by the discussion, illustrative examples, and proofs.

Methods And Evaluation Criteria: Yes, the methods, datasets, and evaluation criteria used are generally standard. One area for improvement would be to include a standardized SD dataset such as SpecBench [1]. [1] https://github.com/hemingkx/Spec-Bench

Theoretical Claims: I briefly reviewed the proofs and they appear to be accurate.

Experimental Designs Or Analyses: No issues noted. The experiments appear to be valid.

Supplementary Material: Yes, I reviewed all supplementary materials.

Relation To Broader Scientific Literature: The challenge of producing bespoke draft models for each new potential target model is a big drawback when trying to use speculative decoding in practice. Some model families have natural draft candidates, such as Qwen2.5-0.5B for instance.
However, even in this case the vocabulary for Qwen2.5-72B actually differs slightly from 0.5B (token IDs aligned but different vocabulary size), highlighting the challenges of naively applying SD. Further, many organizations relying on SD may not have the necessary expertise, data, or compute to pretrain their own draft model or drafting heads. The proposed solutions and analysis in this work offer a unique, novel, and effective solution to these challenges. I am not aware of any other work that has tackled the heterogeneous vocabulary problem in SD, and as such consider this work seminal.

Essential References Not Discussed: None noted.

Other Strengths And Weaknesses:
## Strengths
* Important and timely topic.
* Original and novel approach to a practical challenge of implementing SD in practice.
* Well written.
* Illustrative examples included in text.
* Several target/draft pairs and datasets considered.
* Throughput increases are competitive with homogeneous drafters.
## Weaknesses
* Additional figures to highlight the overall methods may be beneficial to the reader.
* Some select terms appear in text before definition, e.g., lookahead value.
* No error bars / statistical analysis conducted on empirical results.
* "30 prompts" from the datasets used for Table 1 is somewhat informal and would be hard to reproduce. It would be best to use a standardized benchmark such as SpecBench.
* Additional discussion on the overhead for the SLEM method would be helpful. While we see throughput gains here, I wonder about integration with more sophisticated inference engines such as vLLM. Could repeated tokenization/decoding block the GPU in multi-tenant or high query-per-second settings?

Other Comments Or Suggestions:
* Algorithm 3 L6-8 could benefit from indentation or endif statements to clarify the conditional branch flow.
* Table 3: Consider adding "small vocabulary" to SLRS.
* L102: "draft token However"
* Suggest highlighting homogeneous drafters in results, as it's not always clear which models share a vocab.

Questions For Authors:
1. In my review of the supplementary materials I did not find the implementation for non-injective tokenizer "look behind". Please confirm in which section of the SM we can find this.
2. What are some challenges that may be encountered when implementing SLEM for vLLM or other inference engines which rely on asynchronous tokenization on CPU? Has any analysis been conducted on applying your proposed methods in multi-tenant or high query-per-second settings?

Code Of Conduct: Affirmed.

Overall Recommendation: 5
Rebuttal 1: Rebuttal: We are so grateful for your solid endorsement, rating our paper with the **highest score of 5 out of 5**! We are particularly thankful for your insightful acknowledgment of this work as a significant breakthrough: >“I am not aware of any other work that has tackled the heterogeneous vocabulary problem in SD … as such consider this work **seminal**.” We deeply appreciate your thoughtful recognition of our **“well written,” “important and timely,” “novel and effective solution” to a “big drawback when trying to use speculative decoding in practice.”** We also thank you for contributing a knowledgeable example of a real-world use case from your clearly strong hands-on experience, highlighting the importance of our solutions to the field-wide, genuine pain of practitioners applying speculative decoding: >“The challenge of producing bespoke draft models for each new potential target model is a big drawback when trying to use speculative decoding in practice. Some model families have natural draft candidates, such as Qwen2.5-0.5B for instance. However, even in this case the vocabulary for Qwen2.5-72B actually differs slightly from 0.5B (token IDs aligned but different size vocab.), highlighting the challenges of naively applying SD. Further, many organizations relying on SD may not have the necessary expertise, data, or compute to pretrain their own draft model or drafting heads.” We also thank you for reviewing our proofs and affirming their correctness. We truly appreciate your careful reading and for bringing to our attention some typos and suggested improvements in presentation, which have already helped improve the paper. ## A1. New Extended Benchmarks of SLEM and TLI Independently of our benchmarks, Hugging Face’s core maintainers have thoroughly evaluated the effectiveness of SLEM and TLI (Algorithms 2 and 4) and found our methods to be the most effective among all the speculative decoding algorithms they currently support. 
As a result, they made SLEM and TLI the default in Transformers (in Oct ’24 and Feb ’25, respectively), powering 5,000 other libraries with various use cases and hardware setups. To facilitate additional standardized benchmarks, we have open-sourced our benchmarking repository, which provides full reproducibility so anyone can compare our methods and any future alternatives on *exactly the same inputs and hardware*. We will attach a link to this repository upon publication. Furthermore, here are 2 extended benchmarks of SLEM and TLI, suggesting **up to 2.1× and 1.69× speedups** on various hardware setups: https://imgur.com/a/speculative-decoding-heterogeneous-vocabularies-extended-benchmark-of-algorithms-2-4-uV4PrTR (anon.) ## A2. Integrating SLEM and TLI into vLLM Thank you so much for your interest in integrating our algorithms into vLLM! Since SLEM and TLI have become the default of Hugging Face Transformers, we have received a lot of interest from users experiencing this pain who have asked about integrating them into vLLM. Thanks to vLLM’s support in disaggregated prefilling, the repeated process of SLEM is nonblocking and therefore should remain effective in both multi-tenancy and high query per second settings. Asynchronous tokenization is expected to increase the throughput in such setups. We do not see any theoretical constraints that would limit integration in vLLM or similar inference engines, nor do we see any major engineering gaps. In fact, we believe that vLLM will eventually support speculative decoding for heterogeneous vocabularies using these algorithms or future alternatives. ## A3. Implementation Details Thanks so much for your interest in SLEM’s implementation! The supplementary materials include the code we contributed to HF Transformers, which aligns with their naming conventions. Some naming differences exist, such as in the lookbehind logic of SLEM. 
The core logic is in the *AssistedCandidateGeneratorDifferentTokenizers* class, with the lookbehind mechanism implemented via a *diagonal matrix* referenced throughout the code. Helper functions like *_get_tokens_diag* appear only by signature and docstring. The full implementation is available on the *main* branch of HF Transformers. Regarding SpecBench, please note that while the implementations of SLEM and TLI allow homogeneous drafters, our primary focus is on heterogeneous drafters, in contrast to SpecBench, which benchmarks methods constrained to homogeneous drafters or self-speculation. Homogeneous methods are not applicable when the target lacks a model family (e.g., phi-4, Mixtral-8x22B). Also, homogeneous methods are ineffective if the smallest model in the family is still too slow, further limiting their applicability (e.g., see Figure 2-a in [1]). Some of the remaining methods in SpecBench are supported by HF Transformers and therefore were benchmarked in their independent experiments (see Section 1 above). --- Thank you for your support and feedback! [1]: arxiv.org/abs/2405.14105, ICLR ’25
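To make the string-level bridging behind SLEM concrete, here is a minimal, illustrative sketch: draft tokens are decoded to text, then re-encoded with the target tokenizer so verification can run in the target's token space. Toy whitespace tokenizers stand in for real subword tokenizers (an assumption for illustration only; real tokenizers may be non-injective, which is what the lookbehind logic discussed above handles):

```python
# Illustrative toy tokenizer: one token per whitespace-separated word.
class ToyTokenizer:
    def __init__(self, vocab):
        self.tok2id = {t: i for i, t in enumerate(vocab)}
        self.id2tok = {i: t for t, i in self.tok2id.items()}

    def encode(self, text):
        return [self.tok2id[w] for w in text.split()]

    def decode(self, ids):
        return " ".join(self.id2tok[i] for i in ids)

# Heterogeneous vocabularies: same strings, different token IDs.
draft_tok = ToyTokenizer(["the", "cat", "sat", "on", "mat"])
target_tok = ToyTokenizer(["mat", "on", "sat", "cat", "the", "dog"])

# Draft model proposes tokens in its own vocabulary...
draft_ids = draft_tok.encode("the cat sat")
# ...which are bridged to the target vocabulary via the string level.
target_ids = target_tok.encode(draft_tok.decode(draft_ids))

assert target_tok.decode(target_ids) == "the cat sat"  # lossless round trip
```

The target model can then verify `target_ids` exactly as in homogeneous speculative decoding, which is why the approach stays lossless.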
Summary: The authors provide a comprehensive view of the challenges in performing speculative decoding with different vocabularies. They come up with several solutions to address this, each with its own benefits and weaknesses. In my personal experience, this is often a real headache, as training drafters for specific models is often difficult and time-consuming.

Claims And Evidence: -

Methods And Evaluation Criteria: The evaluation criteria (speedup for different draft/target configurations) make sense.

Theoretical Claims: The exposition is rigorous, the algorithms are well motivated, and the losslessness is proved.

Experimental Designs Or Analyses: The authors also do a good job of showing specific examples and failure modes of naive solutions, and coming up with several new methods. However:
* For Algorithm 4, it would be nice to provide some example of the size of the intersection between some common tokenizers/vocabularies, to get a better sense of its usefulness.
* The experiment section is somewhat lacking. The results are only computed for 30 prompts, which is really not much. I believe larger-scale experiments are required, with more prompts/more seeds. Additionally, the speedups for most of the combinations in Table 1 are either non-existent or not really impressive.
* There is also little explanation for why one drafter or one method would be better than another. It would be nice to see at least some heuristic to understand this better.

Supplementary Material: -

Relation To Broader Scientific Literature: The contributions are, to my knowledge, novel and relevant in relation to the literature.

Essential References Not Discussed: -

Other Strengths And Weaknesses: The paper is very clear and well written.

Other Comments Or Suggestions: I am open to raising my score in light of some additional experimental results and explanations.

Questions For Authors: -

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. We appreciate your recognition that our work provides **“a comprehensive view”** of speculative decoding with different vocabularies, that **“The exposition is rigorous, the algorithms are well motivated and the lossless-ness is proved,”** and that we do a **“good job of showing specific examples and failure modes of naive solutions.”** We also value your observation that **“The contributions are, to my knowledge, novel and relevant in relation to the literature,”** and that **“The paper is very clear and well written.”** Moreover, we appreciate your personal insight—**“In my personal experience, this is often a real headache, as training drafters for specific models is often difficult and time consuming.”**—which underscores the pressing need to address heterogeneous vocabularies in speculative decoding. ### 1. Intersection Sizes are Provided You requested “some example of the size of the intersection between some common tokenizers/vocabulary”. Please note that: - Table 5 in Appendix C provides the sizes of the intersections for various target–drafter pairs. - Table 2 shows the expected acceptance rate for each pair, which governs its usefulness. We are eager to address all your concerns in hopes of justifying a higher score. Do you see any model pairs that we could add to Table 5 to significantly improve the paper? We are open to adding them all. ### 2. New Extended Benchmarks 1. Independently of our benchmarks, Hugging Face’s core maintainers have thoroughly evaluated the effectiveness of SLEM and TLI (Algorithms 2 and 4) and found our methods to be the most effective among all the speculative decoding algorithms they currently support. As a result, they made SLEM and TLI the default in Transformers (in Oct ’24 and Feb ’25, respectively), powering 5,000 other libraries with various use cases and hardware setups. 2. 
Section 1 of [our response to Reviewer a7E7](https://openreview.net/forum?id=vQubr1uBUw&noteId=6DDgLIbPNK) adds larger-scale benchmarks with additional pairs and hardware setups. These updated benchmarks suggest **significant speedups of up to 2.1× for SLEM and 1.69× for TLI**. 3. As [1] extensively studied, predicting the expected speedups of speculative decoding algorithms is possible by accurately estimating the ratio between the models’ forward latencies and acceptance rate. Table 2 provides the expected acceptance rate for all algorithms, and the ratio between the number of parameters of each model is often used as a surrogate for estimating the forward latencies ratio (see [2] for example). Our updated benchmarks above include 170 configurations of `<target, dataset, hardware, algorithm, drafter>`, each evaluated over 30 prompts, summing to a total of 5,100 runs. We designed these experiments to align with the highest standards of isolation, portability, and reproducibility, such that we completely sanitize the environment before each run—freeing all CPU and GPU memory. As a result, the initial setup of each run incurs a high overhead, especially because we must reload the models into the GPUs after clearing the memory, which can take a few minutes. This process requires reserving access to hardware for over a week when summing across all nodes and costs thousands of dollars, which is expensive given our constrained budget. Even if we had more budget to average over more prompts, since the set of all `<target, dataset, hardware, algorithm, drafter, prompt, seed>` combinations is effectively unbounded, any finite benchmark would still cover only a vanishing fraction of it. Nevertheless, we remain eager to address all your concerns to justify an even higher score. 
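For concreteness, the estimation procedure above can be sketched with the standard walltime-improvement formula of [2] (a minimal sketch only; the numeric inputs below are hypothetical values for illustration, not measurements from our benchmarks):

```python
def expected_speedup(alpha: float, gamma: int, c: float) -> float:
    """Expected walltime improvement factor of speculative decoding ([2]):
    alpha = per-token acceptance rate, gamma = draft tokens per step,
    c = drafter-to-target forward-latency ratio."""
    assert 0.0 <= alpha < 1.0
    return (1.0 - alpha ** (gamma + 1)) / ((1.0 - alpha) * (gamma * c + 1.0))

# Hypothetical inputs: acceptance rate 0.8, 4 draft tokens per step,
# and a drafter whose forward pass costs 5% of the target's.
print(round(expected_speedup(0.8, 4, 0.05), 2))  # -> 2.8
```

This is how acceptance rate (Table 2) and a latency-ratio estimate combine into a speedup prediction before running any benchmark.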
Given our new extended benchmarks, the acceptance rate analysis, the independent HF benchmarks, and the wide adoption in real-world software libraries for several months, what additional configurations could significantly improve the paper? We will do our best to accommodate specific requests within our limited budget toward a camera-ready version. ### 3. Choosing Drafters or Methods is Trivial In response to your request for “heuristics”, please note: - Table 2 reports the **exact expected acceptance rate** for each method, which—combined with the ratio of forward latency among the models—governs the overall efficiency, as analyzed in [1]. - Our ‘Limitations’ section discusses at length the interplay between acceptance rates, forward latencies, and speedups. What does “heuristic” mean in this context? We always need to select drafters with the maximum acceptance rate and minimum forward latency. --- We truly value your input; your comments have already led to meaningful improvements in our experiments and exposition. **We sincerely appreciate your openness to raising your score in light of new experimental results, practical adoption by Hugging Face, and our explanations**. --- [1]: arxiv.org/abs/2405.14105, ICLR ’25 [2]: arxiv.org/abs/2211.17192, ICML ’23
Summary: This paper explores possible solutions for speculative decoding with a drafter model that does not share the vocabulary with the target model. Such methods, if successful, can enable the use of many more models as the drafter model for a large model to reduce the inference cost of large language models. The authors approached this problem with two distinct verification methods for speculative decoding: token match verification, and string match verification. Benchmarks demonstrated the success of the proposed methods when Gemma-2-9B-IT is used as the target model. Claims And Evidence: The main claims of this paper are the correctness and the effectiveness of the proposed algorithms. The claim of correctness is reasonably substantiated, - The correctness of Algorithms 1, 3, and 4 can be easily established from the proof for the standard speculative decoding algorithm. - There is no explicit discussion on the correctness of Algorithm 2, but the correctness is also not difficult to prove. However, additional data would be necessary to support the claim of effectiveness, - The benchmark results were reported in Table 1 in the main paper, and Table 7 in the appendix. Table 1 shows that with Gemma-2-9B-IT as the target model, Algorithms 2 & 4 can be faster than autoregressive decoding with vicuna-68m as the drafter, and sometimes even exceed the decoding speed of a Gemma-2-2B-IT drafter (same vocabulary as Gemma-2-9B-IT). However, Table 7 paints a different picture, - There appears to be a trend that the proposed methods appear to not perform well on larger models: On quite a number of target/dataset combinations, the proposed algorithms underperforms or performs similarly to autoregressive decoding (e.g. Llama-3.1-8B-Instruct on CNN Daily Mail, Llama-3.1-70B on scrolls, Mixtral-8x22B-Instruct-v0.1 on scrolls and CNN Daily Mail, Llama-3.1-70B-Instruct on scrolls). 
While it is granted that the success of speculative decoding depends on the choice of the drafter model, the prevalence of underperforming combinations calls into question whether other factors are also contributing to the problem. - Potential buggy implementation: The new token counts when temperature is 0 should be close, if not identical, across different methods on the same target model. However, they are drastically different in Llama-3.1-70B, CodeLlama-13b-Instruct-hf, and Llama-3.1-70B-Instruct. - Algorithm 3 was not benchmarked, and I am not confident that this algorithm can be made practical due to the need to iterate over a large number of possible drafter tokenizations. Methods And Evaluation Criteria: Yes. Theoretical Claims: I checked the proofs in Appendix E. They are correct as far as I can tell. Experimental Designs Or Analyses: The experimental design, which mostly involves benchmarks, is sound. However, I have several doubts about the results and analysis of the effectiveness of the proposed methods. See my comments around Table 7 above. Supplementary Material: No. Relation To Broader Scientific Literature: This paper is a continuation of the works on using speculative decoding for improving LLM inference speed. It directly builds upon the seminal work on speculative decoding ([Leviathan et al, 2023], [Chen et al, 2023]), in both methods and proofs for correctness. [Leviathan et al, 2023]: https://proceedings.mlr.press/v202/leviathan23a/leviathan23a.pdf [Chen et al, 2023]: https://arxiv.org/pdf/2302.01318 Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths - The problem is well-motivated and successful solutions would be useful for LLM practitioners. - A large number of experiments were conducted to help readers evaluate the effectiveness of the proposed methods. - The proposed methods are simple modifications of existing methods, and thus are easy to understand and implement. 
Weaknesses: The presentation (especially organization and clarity) of this paper has a lot of room for improvement. For example, - Certain parts can be reordered to make the reading flow less disruptive to readers, e.g. the discussion of Algorithms 1 & 4 should be combined into a single Section 2. - "Algorithm 2 Supports Non-Injective Tokenizers" should be incorporated into the pseudocode of Algorithm 2. This is a crucial part for the correctness of this algorithm; however, the current verbose description is both hard to follow and lacking crucial details (more on the second point later). - Many of the discussions in the text are confusing and unclear. For example, - (099-104, left column): It appears to me that the condition $p(t) \leq q(t)$ for all $t \in T$ can only hold when $p = q$. Algorithm 1 being "optimal" in this case is not a terribly useful result. This paragraph seems to try to motivate Algorithm 4, but that can be done in a much simpler way by pointing out that any sample from $D - T$ is useless. - (082-089, right column): It appears to me that Algorithm 4 will sample from $T$ and thus not "sub-optimal". I don't understand why "we should accepted the token 'aa'". - (111-114, right column): It appears to me that Algorithm 2 simply can't work if the vocabulary conditions do not hold. I don't understand why it would instead "leading to a decreased acceptance rate" if the acceptance rate is undefined in this context. - (197-198, right column): "it does not guarantee that the output tokens are exactly the target tokens". I am confused about what "output tokens" means here as standard speculative decoding draws samples from the target distribution. What makes their samples not "the target tokens"? Or is it possible "output tokens" here really means drafter sample tokens (which might get rejected)? - The writing can be made much less verbose. 
For example, (174-178, left column) could have been a simple sentence "$T \neq D$ limits the ability of the target and draft models to communicate token IDs"; and the repeated reference to HuggingFace's wide adoption such as (040-042, right column) and the first paragraph of Section 5 could be just "our algorithms have already been implemented in the widely used HuggingFace transformers library". Other Comments Or Suggestions: - Line 12 of Algorithm 5 should be "if j < i ... or else sample $t$ from $p_x()$". - Tables 4 & 5 are presented as part of the main paper but are in the appendix. I am confident that the authors will be able to find room for these tables after condensing their writing and removing unnecessary content. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thank you for your thorough review and for underscoring several positive aspects of this work. We appreciate your noting that **“Benchmarks demonstrated the success of the proposed methods when Gemma-2-9B-IT is used as the target model,”** including the observation that **“Table 1 shows that with Gemma-2-9B-IT as the target model, Algorithms 2 & 4 can be faster than autoregressive decoding with vicuna-68m as the drafter, and sometimes even exceed the decoding speed of a Gemma-2-2B-IT drafter.”** We also value your statement that **“I checked the proofs in Appendix E. They are correct as far as I can tell.”** Furthermore, we appreciate your recognition that the proposed methods are **“easy to understand and implement,”** **“The experimental design, which mostly involves benchmarks, is sound”** and your emphasis on these: >**“Strengths** >- The problem is **well-motivated** and successful solutions would be **useful for LLM practitioners**. >- **A large number of experiments were conducted to help readers evaluate the effectiveness of the proposed methods.”** --- ### Improved Benchmarks Suggest Significant Speedups In Practice We've extended our benchmarks, as mentioned in [A1 for Reviewer a7E7 ](https://openreview.net/forum?id=vQubr1uBUw&noteId=6DDgLIbPNK), and resolved the variance issue in the number of new tokens by filtering out crashed runs before averaging. We intentionally include configurations where our methods fail—to highlight their *limitations* instead of cherry-picking the best cases, communicating that practitioners should be careful when selecting heterogeneous drafters. SLEM and TLI are highly effective in practice and have been widely used in the industry during the past months, which indicates their real-world impact. The algorithms **do not “underperform”**. Like any SD algorithm, their effectiveness is controlled by: 1. Acceptance rates, as Table 2 provides. 2. Ratio between the models’ forward latencies. 
Larger targets often lead to acceleration, as you noticed. There's also an implementation overhead that has been shown to be negligible in light of the significant empirical speedups. --- >"There is no explicit discussion on the correctness of Algorithm 2, but the correctness is also not difficult to prove." The algorithm verifies that the final output string exactly matches the string that the target model generates, hence the correctness (losslessness) is derived immediately. What discussion do you believe is missing? >I don't understand why "we should accepted the token 'aa'". What advantage do you get from rejecting the draft token ‘aa’? The target model can only generate strings that contain the character ‘a’. >It appears to me that Algorithm 2 simply can't work if the vocabulary conditions do not hold. I don't understand why it would instead "leading to a decreased acceptance rate" if the acceptance rate is undefined in this context. The acceptance rate is always well-defined but might be zero. >"it does not guarantee that the output tokens are exactly the target tokens". I am confused about what "output tokens" means here as standard SD draws samples from the target distribution. What makes their samples not "the target tokens"? Or is it possible "output tokens" here really mean drafter sample tokens (which might get rejected)? SD algorithms operate on two distributions, given by probability vectors corresponding to the *drafter distribution* and *target distribution*. These algorithms output tokens, which effectively define an *output distribution*. Previous works proved that the output distribution aligns with the target distribution (also known as *losslessness*). The fact that two tokens are sampled from the same distribution **does not imply** they are equal, as stated in the paragraph that you mentioned. Thank you so much for asking this question; it has already helped us to improve the paper. 
We'll add an extended exposition on speculative decoding to enhance the clarity of future revisions. >Algorithm 3 was not benchmarked Please see A1: https://openreview.net/forum?id=vQubr1uBUw&noteId=gIxgR1GrKk. >Line 12 of Algorithm 5 should be "if $j < i$ ... or else sample $t$ from $p_x$". Thanks so much for bringing to our attention this issue. Beyond citing the papers that introduced SD, we included a rephrased version of their algorithm rather than copying it verbatim, as you noticed. Regarding your proposed change, please note that it samples the last token from $r_x$ rather than $p_x$ even if all the drafts are accepted (i.e., $j = i$), and hence **is lossy**. We corrected the mistake by editing line 12: Sample $t \sim r_x$ for $r_x(t):=\frac{p_x(t)-\min\{p_x(t),q_x(t)\}}{1-\sum_{t'\in T}\min\{p_x(t'),q_x(t')\}}$ if line 9 ever rejected a token. Otherwise, sample $t\sim p_x$. --- We are also truly grateful for your additional detailed suggestions for enhancing the ordering and presentation. They've already been very helpful in improving the paper.
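For completeness, the corrected residual distribution in line 12 can be checked numerically. This is a minimal sketch with toy probability vectors; `p` and `q` stand in for $p_x$ and $q_x$ over a shared vocabulary and are hypothetical numbers:

```python
def residual(p, q):
    """r(t) = (p(t) - min(p(t), q(t))) / (1 - sum_t' min(p(t'), q(t'))),
    the distribution to sample from after a rejection; if no draft token
    was ever rejected, one samples directly from p instead."""
    mins = [min(pt, qt) for pt, qt in zip(p, q)]
    z = 1.0 - sum(mins)
    return [(pt - m) / z for pt, m in zip(p, mins)]

p = [0.5, 0.3, 0.2]  # target probabilities (toy 3-token vocabulary)
q = [0.2, 0.5, 0.3]  # drafter probabilities
print([round(x, 6) for x in residual(p, q)])  # -> [1.0, 0.0, 0.0]
```

All residual mass lands on the token the drafter under-proposed, which is exactly why sampling the all-accepted bonus token from $r_x$ instead of $p_x$ would be lossy.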
Summary: This paper addresses a key limitation in existing speculative decoding (SD) methods for large language models (LLMs): the assumption that the drafter and target models share the same vocabulary. The authors propose three novel lossless SD algorithms—String-Level Exact Match (SLEM), String-Level Rejection Sampling (SLRS), and Token-Level Intersection (TLI)—which remove this constraint and support heterogeneous vocabularies. The proposed methods preserve the target distribution and work with off-the-shelf models, eliminating the need for costly retraining. The paper presents thorough theoretical guarantees and empirical evaluations across summarization, programming, and long-context tasks. Notably, one of the proposed methods (SLEM) has already been adopted as the default for heterogeneous SD in Hugging Face Transformers, demonstrating real-world impact. Claims And Evidence: Yes Methods And Evaluation Criteria: **Dependence on Vocabulary Overlap:** Algorithm 4’s performance depends heavily on the intersection between the drafter and target vocabularies. In edge cases with low or no overlap, performance gains may vanish. Theoretical Claims: Yes Experimental Designs Or Analyses: - **Lack of analysis of computational overhead in Algorithm 3:** While Algorithm 3 (SLRS) is theoretically superior in acceptance rates, it may be impractical due to the exponential complexity of computing string probabilities ψ(t), especially for models with large vocabularies. - **Limited Evaluation of Algorithm 3:** The empirical evaluation focuses primarily on Algorithms 2 and 4. Although Algorithm 3 is interesting, its lack of experimental results makes it hard to judge its practical value. Supplementary Material: Yes Relation To Broader Scientific Literature: This idea has a broad impact in the era of speculative decoding Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Strengths:** 1. 
**Novel Contribution:** The work relaxes a long-standing constraint in speculative decoding—the requirement for vocabulary homogeneity—broadening its applicability significantly. 2. **Lossless and Theoretically Grounded:** All proposed algorithms are rigorously proven to be lossless, with clear formal definitions, acceptance rate bounds, and theoretical guarantees (e.g., Theorems 3.1, 3.2, 4.1). 3. **Practical and Open Source Impact:** The integration of Algorithm 2 into Hugging Face Transformers and its adoption as the default decoding method for heterogeneous vocabularies highlights immediate practical relevance and community validation. **Weaknesses:** See Above Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your detailed review and for highlighting so many strengths in our work. We appreciate your acknowledgment that **“this paper addresses a key limitation in existing speculative decoding,”** and that **“the paper presents thorough theoretical guarantees and empirical evaluations across summarization, programming, and long-context tasks.”** We also value your remark that **“this idea has a broad impact in the era of speculative decoding,”** and your observation that **“notably, one of the proposed methods (SLEM) has already been adopted as the default for heterogeneous SD in Hugging Face Transformers, demonstrating real-world impact.”** (– After this submission, TLI has also become the default in HF Transformers, as mentioned below.) Furthermore, we are glad you find **“Algorithm 3 is interesting”** and noted that **“Algorithm 3 (SLRS) is theoretically superior in acceptance rates.”** In particular, we are grateful that you recognize how these points underscore the significance of our contribution: >**“Strengths:** >- **Novel Contribution:** The work **relaxes a long-standing constraint** in speculative decoding—the requirement for vocabulary homogeneity—**broadening its applicability significantly**. >- **Lossless and Theoretically Grounded: All proposed algorithms are rigorously proven to be lossless, with clear formal definitions, acceptance rate bounds, and theoretical guarantees (e.g., Theorems 3.1, 3.2, 4.1).** >- **Practical and Open Source Impact: The integration of Algorithm 2 into Hugging Face Transformers and its adoption as the default decoding method for heterogeneous vocabularies highlights immediate practical relevance and community validation.”** Below, we address your concerns regarding the computational overhead of Algorithm 3 (SLRS) and the dependence of Algorithm 4 (TLI) on vocabulary overlap. --- ## A1. 
SLRS Proven Advantage The paper never claims that SLRS (algorithm 3) is practical *today*, with *existing* off-the-shelf models. Instead, we openly discuss its limitations in Section 3.4, where Lemma 3.3 transparently analyzes the tradeoffs of implementing SLRS with existing off-the-shelf models, which often have almost complete vocabularies (see ‘Vocabulary Constraints’ in Section 3.1 and Appendix D). Lemma 3.3 reveals how practitioners could design *new* vocabularies to facilitate SLRS. SLRS is mathematically proven to be: 1. Lossless (Theorem 3.2), as you mentioned in your review (“All proposed algorithms are rigorously proven to be lossless, with clear formal definitions, acceptance rate bounds, and theoretical guarantees”) 2. Increasing acceptance rates compared to Algorithms 1 and 2 (Table 2), as you mentioned in your review (“Algorithm 3 (SLRS) is theoretically superior in acceptance rates”) Your review mentions that SLRS is novel and addresses a long-standing key limitation. Therefore, we believe SLRS contributes to the research community by laying down theoretical foundations upon which future works can design vocabularies for heterogeneous drafters. Since heterogeneous drafters are a new research direction that has already shown potential (in this work and by its wide and quick adoption in practice) but has not been previously studied, such contributions could become fundamental. ## A2. TLI's Effectiveness Is Mathematically Proven + New Extensive Benchmarks Over Various Practical Setups The effectiveness of any speculative decoding algorithm is controlled by its acceptance rate, and TLI (Algorithm 4) is no exception. In edge cases of low or no alignment between the draft and target distributions, all the known speculative decoding algorithms are expected to fail, including those in this paper. Table 2 provides the readers with the expected acceptance rate of all our algorithms, including TLI. 
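One simple way to see why the supported probability mass, rather than the token count, is what matters: for any lossless verification whose drafts are restricted to the intersection $I$, the per-draft acceptance probability $\sum_{t\in I}\min(p(t), q'(t))$ is at most the target's mass on $I$. A toy sketch (hypothetical vocabularies and probabilities, not taken from the paper):

```python
def target_mass(p, intersection):
    """Upper bound on per-draft acceptance: for any drafter distribution q'
    supported on the intersection I, sum_{t in I} min(p(t), q'(t)) <= sum_{t in I} p(t)."""
    return sum(p[t] for t in intersection)

# Hypothetical target distribution over a toy vocabulary.
p = {"a": 0.02, "b": 0.40, "c": 0.40, "d": 0.06, "e": 0.06, "f": 0.06}

small_but_heavy = ["b", "c"]            # 2 tokens carrying 80% of the target mass
large_but_light = ["a", "d", "e", "f"]  # 4 tokens carrying only 20%

print(round(target_mass(p, small_but_heavy), 2))  # -> 0.8: small intersection, high ceiling
print(round(target_mass(p, large_but_light), 2))  # -> 0.2: larger intersection, low ceiling
```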
We can see that the acceptance rate depends on the probability mass that the intersection supports. Please note that the acceptance rate is governed by the supported probability rather than by the size of the intersection (namely, the number of tokens in the intersection). Nevertheless, practitioners in the past years have often been using BPE, WordPiece, Unigram, or SentencePiece to construct tokenizers that share a reasonably large intersection, thanks to their heuristics (see ‘Vocabulary Constraints’ in Section 3.1 and Appendices C and D). In practice, TLI is highly effective in various setups and, therefore, has recently become the default behavior in Hugging Face Transformers after they conducted a thorough and independent benchmark. Please see [Section 1 of our response to Reviewer a7E7 ](https://openreview.net/forum?id=vQubr1uBUw&noteId=6DDgLIbPNK) for details. --- Thanks so much for your support! We are eager to address any remaining concerns you might have. --- Rebuttal Comment 1.1: Comment: Thanks for the clarification. I have raised my score. Good luck. --- Reply to Comment 1.1.1: Comment: **Thank you so much for raising your score to recommend *acceptance*!** We greatly appreciate your engagement in the discussion and are pleased that our previous response addressed your concerns. We remain eager to learn from your feedback and improve the paper further. What is the remaining concern that we have not fully addressed yet? Thank you again for your time and attention.
Sampling from Binary Quadratic Distributions via Stochastic Localization
Accept (poster)
Summary: This work addresses the problem of sampling from binary quadratic distributions. The authors apply a stochastic localization framework and focus on a key component—the counting/expectation of the posterior distribution. To this end, they establish Poincaré inequalities for the posterior, from which they derive a spectral gap. Experiments further demonstrate that stochastic localization consistently improves sampling efficiency. Claims And Evidence: The introduction seems to overstate the novelty of the work by emphasizing the use of stochastic localization in binary quadratic distributions, which has already been widely applied in discrete sampling. The primary theoretical contribution of the paper is the quantification of Poincaré inequalities for the posterior distributions. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are well-chosen. The framework leverages rigorous theoretical analysis (e.g., Poincaré inequalities and spectral gap bounds) tailored for discrete MCMC samplers, while the benchmark datasets (from common combinatorial optimization problems) effectively capture the challenges of sampling from binary quadratic distributions. Theoretical Claims: I don't find problems. Experimental Designs Or Analyses: I am less familiar with the experimental aspects, so I only conducted a high-level review. Based on my assessment, the experimental design appears reasonable and generally supportive of the claims. Supplementary Material: I reviewed the appendix A-C Relation To Broader Scientific Literature: I don't know. Essential References Not Discussed: No, the paper appears to cite all the essential prior works. It covers key contributions in discrete MCMC sampling and stochastic localization, which are sufficient for understanding its context and contributions. 
Other Strengths And Weaknesses: The theoretical guarantees rely on a strong external field assumption that may not hold in all practical scenarios, potentially limiting the generality of the results. Other Comments Or Suggestions: It would be better to phrase the question in Line 66 as "Can SL reduce sampling difficulty in the binary quadratic distributions, as it does in continuous settings, by constructing easily samplable posterior distributions?" since there are various SL methods for discrete sampling tasks as discussed in Appendix A. Questions For Authors: - Could you provide the convergence rate/query complexity/iteration complexity explicitly? and compare it with existing results? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for these insightful comments. > Overstating the novelty by emphasizing the use of SL in binary quadratic distributions We appreciate the feedback on framing. While SL *concepts* have appeared in discrete settings (as discussed in Appendix A), prior works often focus on specific models (e.g., SK, Ising with specific structures) and employ model-specific techniques or assumptions. Our contribution lies in: - Developing an SL framework using **standard, general-purpose DMCMC samplers** for the posterior estimation step. - Providing **theoretical guarantees for general BQDs** without requiring additional model-specific structure beyond the quadratic form. This makes our analysis broadly applicable. We agree our main theoretical novelty is the rigorous quantification of Poincaré inequalities for the posterior distributions within this general BQD setting. We will **revise the introduction** to state this more clearly and accurately reflect the relationship to prior discrete SL work, emphasizing the generality of our approach and theory. > Satisfiability of assumption 4.1 As detailed in Remark 4.2, the external field $h$ in the posterior (Eq. 12) is $b+\frac{\alpha(t)Y_t}{\sigma^2t}$. Theorem 3.1 and the SL construction ensure that $|h|$ grows large as $t$ increases **with high probability** (explained around line 161). Therefore, Assumption 4.1 is **not a restrictive assumption on the problem instance, but rather a condition that holds naturally as a consequence of the SL dynamics** for sufficiently large $t$. We recognize that labeling it an "Assumption" caused confusion about its generality. We apologize and will **rename/rephrase this condition** (e.g., "Condition 4.1" or similar) and clarify its status in the revised version. 
> It would be better to phrase the question in Line 66 as "Can SL reduce sampling difficulty in the binary quadratic distributions, as it does in continuous settings, by constructing easily samplable posterior distributions?" since there are various SL methods for discrete sampling tasks as discussed in Appendix A. This is an excellent suggestion for better framing. We agree and will **revise the question in Line 66** accordingly in the next version. > Could you provide the convergence rate/query complexity/iteration complexity explicitly? and compare it with existing results? Analyzing the convergence rate of the *overall* SL process to the target distribution remains an open and challenging theoretical problem. This difficulty arises from the time-inhomogeneous nature of the process, as discussed in our response to `Reviewer 4aF7's W2`. We can analyze the operational complexity. Let $N$ be the dimension, $T$ be the number of SL iterations, and $M$ be the total MCMC step budget. - **Baseline DMCMC:** Methods like GWG, PAS, DMALA have complexity dominated by operations like gradient/difference calculations or matrix-vector products, typically scaling as $O(MN^2)$. - **SL + DMCMC:** SL adds two main steps per iteration: MC estimation (line 118, typically $O(N)$ using posterior samples) and SDE simulation (line 119, $O(N)$). Over $T$ iterations, the total overhead is $O(TN)$. The MCMC sampling itself uses the same total budget $M$, distributed across iterations. - **Comparison:** The total complexity for SL+DMCMC is roughly $O(MN^2 + TN)$. In typical high-dimensional settings relevant to MCMC ($N$ large), and practical choices of $T$ (e.g., $T \in \{256, 512, 1024\}$ in our experiments with $M$ up to 10,000), the $O(TN)$ overhead is **negligible** compared to the $O(MN^2)$ cost of the core MCMC sampling. Thank you for prompting this; we will **add this explicit computational complexity analysis and comparison** to the revised manuscript (likely in the Appendix).
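A back-of-the-envelope check of this negligibility claim (the dimension `N` below is an illustrative value we chose; `M` and `T` match the figures quoted in the rebuttal):

```python
# N = dimension (illustrative), M = total MCMC step budget, T = SL iterations.
N, M, T = 1_000, 10_000, 1_024

core_mcmc = M * N**2   # O(M N^2): core sampler cost (e.g., matrix-vector products)
sl_overhead = T * N    # O(T N): MC estimation + SDE simulation over all SL steps

ratio = sl_overhead / core_mcmc
print(f"SL overhead / core MCMC cost = {ratio:.1e}")  # ~1e-4, i.e., negligible
```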
Summary: This paper introduces a sampling method for binary quadratic distributions using stochastic localization. It is claimed to be the first theoretical work that extends stochastic localization to discrete MCMC samplers. They show polynomial mixing for Glauber dynamics and the MH algorithm. Some experiments are provided. Claims And Evidence: For the most part, though some claims lack empirical justification. For instance: 1. This paper does not provide explicit complexity bounds and only shows polynomial-time mixing, but claims that stochastic localization significantly speeds up sampling 2. Stochastic localization does not show a huge improvement in real-world problems given the experiments provided in the paper (0.1-0.5%). 3. This method is not a general-purpose framework and can only be applied to binary quadratic distributions, and how well it generalizes to other discrete sampling problems is not clear to me. 4. Spectral gap bounds do not show that stochastic localization is computationally faster than other methods. Methods And Evaluation Criteria: 1. This paper should provide a computational cost analysis and compare it to other MCMC methods. 2. No comparison to methods such as discrete HMC 3. Despite the fact that this paper proves polynomial-time mixing, there is no clear evidence that it performs better in real-world problems, and there is no runtime comparison to other MCMC methods. Does stochastic localization reduce the number of MCMC steps? Theoretical Claims: 1. The proofs look correct, but some assumptions (4.1) can be violated easily in many settings. It seems that if the external field is not large enough, the mixing may fail. 2. Theorem 3.1 is mostly based on the literature, but it still does not show that stochastic localization is beneficial in discrete setups. 3. The claim that the Poincaré inequality provides practical efficiency is not quite correct. It does guarantee ergodicity but does not necessarily imply better performance compared to other techniques. 
Experimental Designs Or Analyses: It has a variety of benchmarks and multiple MCMC methods are evaluated. Good ablation study, but a few things can be improved: 1. There is no computational cost analysis. 2. No comparison to methods such as discrete HMC, variational inference, etc. 3. This method is heavily dependent on hyperparameters such as the \(\alpha\)-schedule and step allocation. The ablation study does not cover different temperature settings in MCMC or the impact of dataset characteristics such as graph sparsity, problem size, etc. 4. Since it is claimed that this method outperforms baseline MCMC but the improvement is very small, some statistical testing to examine significance would be helpful. Supplementary Material: I scanned through the background and closely followed the proofs. Relation To Broader Scientific Literature: This paper extends stochastic localization to BQDs, building on the line of work on stochastic localization for continuous distributions. Essential References Not Discussed: Yes, references are sufficient. Other Strengths And Weaknesses: Strengths: 1. It is easy to follow, and the use of stochastic localization in a discrete setting is interesting. 2. Applying it to many datasets is interesting and can confirm the theoretical claims. Weaknesses: 1. While the polynomial mixing has strong theoretical results, the empirical improvements are not drastic. 2. The theoretical assumptions may not hold. For instance, assumption 4.1 requires |h| to grow large, which can be easily violated in many real-world problems. 3. Comparison is only done against MCMC-based models. 4. Despite the ablation studies, the hyperparameters must be studied more. For instance, do different settings change the performance drastically, or is the method robust? Other Comments Or Suggestions: Mentioned in the previous sections. Questions For Authors: 1. Could the authors justify how polynomial-time mixing implies practical efficiency? 2.
How does stochastic localization improve sampling intuitively? What happens if assumption 4.1 fails? 3. Could the authors please discuss runtime comparisons? 4. How do results change with different MCMC step allocations? The ablation study seems a bit unclear to me. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your valuable feedback. We have consolidated the main points and respond as follows:

> No complexity bounds of SL and cost analysis comparisons

SL introduces only minimal additional computation compared to MCMC methods. Please refer to our response to `Reviewer mA2w's last point`.

> Why can the results about the spectral gap and Poincaré inequality lead to practical efficiency?

The speed of DMCMC samplers is generally unknown, but SL decomposes the entire DMCMC sampling process into a series of posterior distribution samplings $P(X|Y_i)$ (see Algorithm 1). We prove that sampling from these posterior distributions satisfies a Poincaré inequality with high probability, and the existence of a Poincaré inequality ensures polynomial mixing, i.e., fast sampling.

> Satisfiability of Assumption 4.1

Please refer to our response to `Reviewer mA2w's second point`.

> Theorem 3.1 does not show that SL is beneficial in discrete setups.

Theorem 3.1 describes the convergence of the observation process, which *induces* the SL dynamics (line 130). It provides the theoretical underpinning justifying *why* Assumption 4.1 holds with high probability, thus ensuring the Poincaré inequality and fast posterior mixing are applicable. We will improve the description surrounding Theorem 3.1 in the revision to make this connection clearer.

> Extend SL to a general setting

While an algorithmic extension to general discrete settings is natural by adapting DMCMC samplers, the *theoretical analysis* (proving Poincaré inequalities) becomes significantly more complex (e.g., bounding the Dobrushin interdependence matrix for sample spaces with more than two states). We focused on the binary case to maintain consistency between our algorithms and theoretical guarantees. We acknowledge this limitation and will **add the extension to general discrete distributions as a future work direction** in the conclusion.
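For intuition, the decomposition into posterior samplings can be sketched as an outer loop over the observation process. This is a schematic only, not the paper's Algorithm 1: the DMCMC posterior sampling step is replaced by a toy mean-field estimate, and the linear schedule $\alpha(t)=t$ and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sl_outer_loop(posterior_mean, N, T, dt=1.0, sigma=1.0):
    """Schematic SL loop: each iteration estimates E[X | Y_t] (in the
    paper this is done by a DMCMC posterior sampler), then advances
    dY_t = E[X | Y_t] dt + sigma dB_t under the schedule alpha(t) = t."""
    y, t = np.zeros(N), 0.0
    for _ in range(T):
        m = posterior_mean(y, t)  # stand-in for the DMCMC sampling step
        y = y + m * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)
        t += dt
    return y, t

def toy_mean(y, t):
    # Toy mean-field stand-in for the posterior mean of a binary X.
    return np.tanh(y)

y, t = sl_outer_loop(toy_mean, N=8, T=50)
# As t grows, y / t should concentrate near a point of {-1, +1}^N.
print(np.round(y / t, 2))
```

Each pass through the loop is exactly one "posterior sampling" sub-problem of the kind our Poincaré results address.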
## Experiment Issues

> Marginal improvements on the results and statistical confidence

Please refer to our response to `Reviewer 4aF7's W1` regarding empirical results.

> Absence of Discrete HMC baseline and other methods

Thank you for this suggestion. To our knowledge, Discrete HMC is not a standard baseline in the discrete sampling literature or the benchmarks we referenced. If the reviewer could provide references or implementations suitable for BQDs, we are happy to consider it for future comparisons.

> Running time and MCMC steps comparison

In our experiments, SL and DMCMC samplers used identical MCMC steps for a fair comparison, which results in similar runtimes with an overhead of $O(TN)$ for SL. We will add concrete runtime comparison results in our revision.

> Dependence on $\alpha$-schedule, step allocation, sensitivity analysis on hyperparameters

In our main results, we use the GEOM(2,1) $\alpha$-schedule, exponential-decay step allocation, and uniform time discretization for the SDE. As detailed in Appendix D.2, we tune:

- SDE iteration parameters: initial/final noise scale, sample ratio for posterior expectation estimation, number of SDE iterations, and noise level $\sigma$. These parameters are essential for SDE iterations and were also optimized in SLIPS.
- Two additional parameters for step allocation: decay rate and minimum MCMC steps.

Given fixed DMCMC samplers, we believe this represents a minimal set of hyperparameters. Regarding sensitivity to some key hyperparameters:

- Figure 1 provides comparisons for different step allocation strategies.
- Tables 4-9 present ablation studies for $\alpha$-schedule, $K$, and $\sigma$ variations.

These results indicate that while performance varies (as expected), **SL consistently outperforms baselines across different settings, demonstrating robustness**.
> Impact of temperature settings in MCMC and datasets

We used the *identical* temperature annealing schedule from the DISCS benchmark for *all* methods (Appendix D.1) for a fair comparison. Regarding the impact of dataset characteristics, we studied:

- Graph sparsity effects in Table 1, using ER datasets of different densities, analyzed in line 356 under "Results on MIS"
- Problem size scaling in Table 3, showing results for different-sized datasets, analyzed in line 380 under "Results on MaxCut"

In all these diverse settings (varying graph densities and problem sizes), **SL consistently outperformed DMCMC.**

> Explanation of the ablation on step allocation

Our theory indicates that posterior sampling becomes faster as iterations progress, motivating the idea that adaptively allocating more MCMC steps to earlier iterations may be better than uniform allocation. Figure 1 tests this: exponential-decay allocation (blue bars, more steps early) generally outperforms uniform allocation (orange bars) for the same total MCMC budget, empirically validating the theoretical motivation. We will clarify this explanation in the caption/text.
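The exponential-decay allocation idea can be sketched as follows; the decay rate, minimum-step floor, and rounding below are assumed parameters for illustration, not the exact scheme tuned in Appendix D.2.

```python
def exp_decay_allocation(total_steps, T, decay=0.95, min_steps=1):
    """Split a total MCMC budget over T SL iterations, giving earlier
    (harder) posteriors more steps via geometrically decaying weights,
    with a per-iteration floor."""
    weights = [decay**i for i in range(T)]
    scale = total_steps / sum(weights)
    return [max(min_steps, round(w * scale)) for w in weights]

alloc = exp_decay_allocation(total_steps=10_000, T=100)
print(alloc[0], alloc[-1], sum(alloc))
```

The allocation is front-loaded (early iterations receive the most steps) while keeping the total budget roughly equal to the uniform-allocation baseline.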
Summary: The paper proposes a generic localization sampler for binary quadratic distributions. By simulating the observation process similar to [EAM23], the authors propose an unbiased scheme which is capable of sampling from the target faster than a generic MCMC scheme. Claims And Evidence: The authors provide proofs for their claims as well as some empirical verification. However, the theory for the estimator does seem to be rather preliminary (or perhaps I have not understood it well enough), and I have proposed numerous questions for the authors. Methods And Evaluation Criteria: Yes, the benchmarks are thorough and the comparison between methods seems fair. Theoretical Claims: The authors provide some rigorous spectral gap bounds under a heuristic assumption that the tilt generated by the observation measure is sufficiently large relative to the remaining terms. The proofs under this assumption are rigorous, while the heuristic holds almost surely asymptotically under Theorem 3.1, whose proof is also rigorous. I skimmed the proofs and they appear correct to me; the final results are also sensible. Experimental Designs Or Analyses: The paper benchmarks their method against the unlocalized samplers on a diverse suite of tasks. The results show convincingly that in practice, the localization scheme can lead to significant speed-ups. Supplementary Material: I did not have the opportunity to thoroughly review the supplementary material; I only skimmed the proofs. Relation To Broader Scientific Literature: The primary references are discussed; this is an extension of the stochastic localization sampling schemes from [EAM23], which also appeared in earlier/other works [CE23], etc. The paper also engages with a wide array of discrete space MCMC methods. Essential References Not Discussed: As far as I am aware, the most relevant references have been covered. 
Other Strengths And Weaknesses: My comments can be found in the other sections, but speaking broadly: ***Strengths*** The empirical results are very convincing. The spectral gap results are very nice and cover a wide range of algorithms. ***Weaknesses*** The theory is not truly end-to-end and relies on a heuristic assumption. I have asked some clarifying questions in the fields below. Other Comments Or Suggestions: Line 166: e nhances -> enhances Questions For Authors: My questions mainly relate to the theory of this work. 1. What are the missing ingredients for proving a bona fide spectral gap almost surely over all iterations, or at least for a sufficiently large tilt? 2. Similar to the question above, can the guarantees of Theorem 1 be made non-asymptotic? 3. Furthermore, if one only has the spectral gap holding with high probability over the randomized measure, what can one infer about the resulting estimator overall for a given iteration budget? 4. Is there any hope of being "adaptive" in this budget, in case the posterior measure is difficult to sample from? 5. How bad is the error from using a Monte Carlo estimator of the posterior mean (what type of error does it induce for the resulting sampler)? Is there any hope for computing closed forms in any of the instances given, similar to the work of El Alaoui et al. for the SK model? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are grateful to the reviewer for the thoughtful feedback.

## Q1

The core challenge stems from the path-dependent behavior of Brownian motion. Given the observation process $Y_t = \alpha(t)X + \sigma B_t$, we know that $\frac{Y_t}{\alpha(t)} - X = \frac{\sigma B_t}{\alpha(t)}$, which converges to 0 almost surely as $t$ approaches infinity. Unfortunately, establishing uniform convergence of this process presents significant difficulties. To guarantee a sufficiently large external field after iterations (which would ensure the spectral gap), we need $|\frac{Y_t}{\alpha(t)}| \geq 1 - \frac{|\sigma B_t|}{\alpha(t)} \geq c$ for some positive constant $c$ after a large time $t \geq T$. This requires establishing upper bounds on $\frac{|B_t|}{\alpha(t)}$. Analyzing the Law of the Iterated Logarithm reveals that we can only obtain upper bounds on $\frac{|B_t|}{\alpha(t)}$ after some $T(\omega)$ that depends on the specific sample path $\omega \in \Omega$. However, by applying Egorov's theorem, it is promising to show that for any $\varepsilon > 0$, there exists a subset $A$ with measure $P(A) < \varepsilon$ such that $\frac{|B_t|}{\alpha(t)}$ converges to 0 uniformly on $A^c$. This approach could potentially establish high-probability uniform convergence.

## Q2

Sure! From line 838:

> there exists $T$ large enough such that for $t \geq T$, we have $0\leq \Phi\left(\frac{\zeta}{\sigma}-\frac{\alpha(t)}{\sigma\sqrt{t}}\right)\leq\varepsilon$

(Note: We found typos in lines 833 and 837; the "$\pm$" symbols have been corrected to "$-$".)

This allows us to derive an explicit estimate of $T$ in terms of $\varepsilon$ based on the convergence rate of $\frac{\alpha(t)}{\sigma\sqrt{t}}$ and properties of the normal CDF.

## Q3

In practice, the algorithm proceeds with a *specific* realization $Y_t = y_t$. Theorem 3.1 guarantees that for a sufficiently large $t$, the generated $y_t$ will induce a large external field with high probability.
Consequently, the posterior distribution $q_t(X|Y_t=y_t)$ that we actually sample from using DMCMC will satisfy the conditions for the Poincaré inequality (Assumption 4.1) **with high probability**. This means that the DMCMC sampler used for the posterior estimation will mix polynomially fast **with high probability** for sufficiently large $t$.

## Q4

Your intuition is correct. Our theory suggests posterior sampling becomes easier over iterations $t$. This motivates allocating the MCMC budget adaptively. In Section 6.2, under "Impact of MCMC Step Allocation," we empirically validate this. We demonstrate that an adaptive allocation strategy (specifically, exponential-decay allocation) **achieves superior performance** compared to a uniform allocation across the vast majority of cases (see Figure 1), confirming the practical benefit of adapting the budget.

## Q5

Sure. Since DMCMC is an irreducible reversible Markov chain with finite state space $\{-1,1\}^N$, satisfying the conditions of Theorem 1.1 in [1], we can derive an error estimate for the estimator of the mean of $\nu_{\beta,h}$ that depends on the spectral gap:

$$P_{q}\left[\left\|\frac{1}{n}\sum_{i=1}^nX_i-E_{\nu_{\beta,h}}[X]\right\|<\varepsilon\right]\geq1-2e^{\gamma_{gap}/5}N_q\exp\left(-\frac{n\varepsilon^2\gamma_{gap}}{4\,\mathrm{Var}_{\nu_{\beta,h}}[X]\cdot(1+g(5\varepsilon/\mathrm{Var}_{\nu_{\beta,h}}[X]))}\right),$$

where $X_1\sim q$, $N_q=\left\|\frac{q}{\nu_{\beta,h}}\right\|_2$ measures the error from sampling before the Markov chain reaches stationarity, and the function $g(x)=\frac{1}{2}\left(\sqrt{1+x}-(1-x/2)\right)$.

Thank you for your insightful comment; we will add a discussion of this bound to the Appendix in the revision.

---

[1] Lezaud, P. Chernoff-type bound for finite Markov chains. Annals of Applied Probability, 1998: 849-867.

---

Rebuttal Comment 1.1: Comment: I thank the authors for their thorough response.
I am maintaining my score, although I would be happy to see the theoretical results improved to avoid dependence on Assumption 4.1. --- Reply to Comment 1.1.1: Comment: Thank you for your response. We sincerely apologize for the misunderstanding regarding 'Assumption 4.1': it should instead be termed a '**Fact**' rather than an 'Assumption'. In the SL framework, 4.1 holds because $|h|$ (in our algorithm, the $h$-value equals $b+\frac{\alpha(t)Y_t}{\sigma^2t}$) grows without bound over iterations (line 161, right column), ensuring it is satisfied after sufficiently many steps with high probability. We acknowledge that labeling this as an "assumption" was potentially misleading and will revise this in our manuscript. We hope this clarification might positively influence the score evaluation. Please let us know if you have additional concerns regarding the content. We appreciate your constructive comments and look forward to addressing any further questions to improve our work.
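This growth of the external field is easy to check numerically. Below is a small simulation under the assumed linear schedule $\alpha(t)=t$ with $\sigma=1$ (illustrative choices, not the paper's tuned schedule): the normalized error $\sigma|B_t|/\alpha(t)$ decays like $t^{-1/2}$, so $Y_t/\alpha(t)$ concentrates on $X$ and the induced $h$-value grows in magnitude.

```python
import numpy as np

rng = np.random.default_rng(1)

def normalized_error(x=1.0, t_max=100.0, dt=0.01, sigma=1.0):
    """Simulate Y_t = alpha(t) * x + sigma * B_t with alpha(t) = t and
    return the pathwise error |Y_t / alpha(t) - x| = sigma * |B_t| / t."""
    n = round(t_max / dt)
    b = np.cumsum(sigma * np.sqrt(dt) * rng.standard_normal(n))
    t = dt * np.arange(1, n + 1)
    return np.abs((t * x + b) / t - x)

err = normalized_error()
# Early times: the error sigma * |B_t| / t is large; by t = t_max it has
# decayed (like t**-0.5), so Y_t / t concentrates on x and the h-value
# above grows in magnitude roughly linearly in t.
print(float(err[0]), float(err[-1]))
```

This is only a pathwise sanity check of the almost-sure convergence, not a substitute for the uniform-convergence argument discussed in Q1.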
Summary: This paper studies stochastic localization for sampling from binary quadratic distributions. As a main theoretical contribution, the authors prove Poincaré inequalities for the sampling procedure from the (discrete) posterior distribution $q_t(x \mid y)$ in stochastic localization, and thus establish the convergence rate of the posterior sampling step. A key implication is that the spectral gap for posterior distribution sampling increases as the time $t \to +\infty$. This is achieved by an asymptotic argument. As a result, posterior distribution sampling becomes easier and easier in the later stage of stochastic localization. Claims And Evidence: Yes, the claims are generally well-supported by evidence. I like the ablation study in Section 6.2 that verifies the intuition that posterior sampling becomes easier as $t \to +\infty$. Methods And Evaluation Criteria: Yes, the methods and evaluation criteria make sense. Theoretical Claims: I think the theoretical claims make sense. I have not checked their proofs, though. Experimental Designs Or Analyses: Yes, I read through the experiment section; see comments in Other Strengths And Weaknesses. Supplementary Material: No, I didn't check the appendix. Relation To Broader Scientific Literature: This paper extends previous stochastic localization methods by El Alaoui & Montanari (2022) and Grenioux et al. (2024) to discrete distributions. Essential References Not Discussed: Not that I am aware of. Other Strengths And Weaknesses: 1. I think the empirical results are not strong enough. The experiments suggest that using stochastic localization for discrete distributions is better than directly using a discrete MCMC method. However, the performance gaps between them are tiny (I am assuming $\pm$xxx in the tables represents one standard deviation). Indeed, the baseline performances are always within one standard deviation of stochastic localization. 2.
Another weakness is that this paper actually does not prove how fast the samples produced by stochastic localization converge to the target distributions. The results in this paper only apply to the posterior sampling---an intermediate step in stochastic localization. Other Comments Or Suggestions: Line 166: "e nhances" -> "enhances" Questions For Authors: None at this moment. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for these insightful comments. ## W1 From an experimental perspective, the absolute improvements are indeed modest. However, we would like to emphasize two key points: 1. **Strong Baselines:** Our comparison is between standard DMCMC and SL combined with the *same* DMCMC sampler (SL+DMCMC). The base DMCMC samplers used are already highly effective, demonstrating strong performance on their own. As shown in Table 3, all DMCMC baselines significantly outperform the commercial solver Gurobi, indicating they are already operating near optimal performance levels for these challenging tasks. The fact that SL consistently provides further improvements, even on top of these powerful, well-established DMCMC methods, highlights its efficacy. 2. **Benchmark Standards:** Our experimental setup, comparison metrics, and implementation strictly follow the established DISCS benchmark [1]. As evidenced by Tables 5-7 in [1], even state-of-the-art DMCMC samplers often exhibit performance differences within one standard deviation of each other. Substantial performance overlap when considering one standard deviation is common in this domain. Therefore, comparing methods based on mean results is standard practice, as adopted in [1] and our work. Thanks for highlighting this point. In the revised version, we will **add text to better characterize the near-optimality of these baselines** to properly contextualize SL's performance improvements and clarify the standard deviation reporting convention. [1] Goshvadi K, Sun H, Liu X, et al. DISCS: a benchmark for discrete sampling[J]. Advances in Neural Information Processing Systems, 2023, 36: 79035-79066. ## W2 This is a profound but challenging problem. You are correct that our current theoretical results guarantee fast mixing (via Poincaré inequality) for the *posterior sampling step* within SL, but not the convergence rate of the *overall SL process* to the final target distribution. 
Analyzing the full SL process is difficult because $\frac{Y_{t}}{\alpha(t)}=X+\frac{\sigma B_{t}}{\alpha(t)}$ is not a time-homogeneous Markov process, making standard convergence analysis tools like functional inequalities hard to apply directly to the overall dynamics. One promising direction, as you note, involves analyzing the properties of $\frac{Y_{t}}{\alpha(t)}$. Since these random variables have densities with explicit expressions, calculating the KL-divergence between distributions at different times $t_1$ and $t_2$ might be feasible. This could potentially reduce the problem to estimating the ratio of partition functions of BQDs under varying external fields. A more general framework, perhaps using Wasserstein distance, might also be needed to characterize the convergence of the overall SL process. We acknowledge this limitation and consider establishing the convergence rate of the full SL sampler a key direction for **future work, which we will explicitly mention in the conclusion**. *(Typos addressed)*
MetaAgent: Automatically Constructing Multi-Agent Systems Based on Finite State Machines
Accept (poster)
Summary: The paper proposes MetaAgent, an approach to automatically construct multi-agent systems using finite state machines (FSMs). Instead of hand-coding roles and workflows, MetaAgent uses a prompt-driven “Designer” to: 1. Identify which agents (roles) are needed to complete a family of tasks. 2. Build a finite state machine to represent the multi-agent collaboration flow, with possible traceback (going back to earlier states if errors are found) and null-transitions (staying in the current state for iterative refinement). 3. Optimize the initially generated FSM by merging redundant states before deploying it to real tasks. Claims And Evidence: Claims: 1. MetaAgent can generate multi-agent systems automatically (as opposed to requiring human-coded instructions). 2. Finite state machines provide superior flexibility over purely linear or orchestrator-based pipelines. 3. The generated multi-agent systems are robust and match or exceed the performance of both (a) domain-specific multi-agent frameworks designed by humans and (b) other auto-generated frameworks. Evidence: 1. Each state is assigned one agent plus a separate condition-verifier mechanism that decides how to transition. This design is demonstrated in detail in examples (e.g., software development with states for requirements, coding, and testing). 2. On “Trivial Creative Writing” and “GPQA(Diamond)” question-answering, MetaAgent’s multi-agent system achieves higher accuracy (and coverage of correct answers) than other prompt-engineering baselines and the SPP auto-design approach. 3. MetaAgent produces a multi-agent system that nearly matches or surpasses specialized frameworks (e.g., DataInterpreter) and outperforms other auto-designed systems. 4. Outperforms MetaGPT (a well-known human-designed pipeline) and other auto-designed frameworks on a set of small projects (e.g., developing Snake, 2048, etc.), measured by functional checkpoints. 
Methods And Evaluation Criteria: Methods: An LLM-based “Designer” first creates specialized agents—each with its own system prompt and tool access—relevant to the target domain. It then builds a finite state machine (FSM) by defining states, each tied to one agent, along with transition conditions and “listeners” to control information flow. The initially generated FSM may contain redundant states, so a second LLM-driven pass merges or removes overlapping roles. Finally, when the FSM is deployed, the system processes a user’s query state by state, allowing agents to backtrack or refine outputs until a final condition is met. Evaluation Criteria: 1. Quantitative success rates on text-based tasks (e.g., how many questions are answered correctly in GPQA or how many writing tasks pass certain correctness checks). 2. Metrics on machine learning tasks such as F1-score, accuracy, or RMSE, aggregated into a “Normalized Performance Score (NPS).” Theoretical Claims: 1. The authors argue that many standard multi-agent frameworks (linear pipelines, decentralized debate, orchestrator-based) can be seen as special or “constrained” forms of an FSM (with fewer transitions or no null-transitions). 2. By allowing cycles (traceback) and conditional transitions, an FSM can capture more sophisticated workflows and error-handling mechanisms—leading to better coverage of real-world complexities. These claims are well-supported conceptually, showing how “traditional” multi-agent designs can be embedded in or derived from an FSM. That said, the paper does not delve deeply into formal language or automata theory beyond drawing analogies, so the theoretical underpinnings are relatively straightforward: it directly applies state machines to agent orchestration. Experimental Designs Or Analyses: The paper uses: 1. 
Comparative performance with (a) multiple baseline prompt-engineering paradigms (e.g., Chain-of-Thought, Self-Refine, SPP) and (b) well-known multi-agent frameworks (MetaGPT, AutoAgents). Multiple tasks that vary in complexity (short question-answering, creative writing, data science, and software dev). This broad coverage supports the authors’ claims of generality. 2. Cost analysis in tokens to highlight how the automatic design overhead compares to the cost of actually running tasks. They show that despite an up-front design cost (the LLM constructing the FSM), subsequent repeated usage can amortize that cost effectively. Strengths in design: 1. A variety of domain tasks confirm the FSM’s generality. 2. Clear success metrics (accuracy, coverage, or pass/fail of checkpoints). Potential limitations: 1. The tasks tested are mostly small-scale and may not stress extremely large or complicated real-world scenarios (e.g., building production-grade software or enormous data pipelines). 2. While the paper reports improved performance on multiple tasks, seeing more ablation or failure modes in very complicated contexts would be interesting. Supplementary Material: From the publicly available draft, the “Appendix” includes: 1. Prompts for building the multi-agent system (the “Designer” instructions). 2. Detailed tables with partial results or demonstrations of how states are merged. The authors use these appendices mostly to clarify the LLM prompts, optimization steps, and additional experimental details. Relation To Broader Scientific Literature: 1. The authors mention synergy with code interpreters, search engines, etc., continuing a line of research that integrates LLMs with external APIs or tool frameworks. 2. The paper builds on the growing idea that large language models can “prompt themselves” (or use meta-prompts) to define roles, states, transitions, etc.
Essential References Not Discussed: Within the discussion, the paper references many important works on multi-agent frameworks and tool-based LLMs (e.g., SPP, AutoAgents, Symbolic Learning, etc.). However, there are a few broader areas they only lightly mention or do not cite explicitly: 1. Formal methods for verifying the correctness of multi-agent systems: the authors mention automata (FSM) but do not elaborate on advanced formal verification techniques that might reduce error. 2. Additional references to “complex task orchestration” frameworks or thorough treatment of “self-play” from prior RL-based approaches might have offered more historical grounding. Other Strengths And Weaknesses: Strengths 1. Unified framework: They argue how finite state machines unify or generalize popular multi-agent pipelines. 2. Traceback and Null-Transitions: This is an elegant way to add “debugging” loops and multiple tool calls within each state. 3. Experimental breadth: They tested text tasks, machine learning tasks, and code-generation tasks, demonstrating generality. Weaknesses or Limitations 1. LLM Dependence: The entire pipeline’s success hinges on the correctness of the LLM used for designing the agents and transitions. The authors note performance degrades significantly when the LLM is replaced with a weaker model (e.g., GPT-3.5 vs. GPT-4). 2. Limited Depth on Formal Guarantees: While the authors position FSM as a powerful structure, they do not thoroughly discuss formal correctness or how reliably the condition verifiers handle ambiguous states in real, uncertain domains. 3. Cost–Benefit: Although the authors present cost analysis, real-world users might want more discussion or demonstrations of how well one can reuse a single “task-level” design across many similar sub-tasks without re-triggering the design step each time. 4. Minor: The fonts in Figures 1 and 2 are small. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. 
Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the reviewer’s appreciation of our finite state machine design as well as the thorough discussion and experiments. ## Re Reference Material: Thank you for your valuable feedback. To the best of our knowledge, our work is the first to introduce finite state machines for the automatic design of multi-agent systems. We appreciate the suggestion regarding formal verification techniques and RL-based approaches. In our revisions, we will expand the discussion to include relevant formal verification methods for multi-agent systems and provide a more comprehensive analysis of RL-based self-play approaches in the context of task orchestration. ## Re Weakness 1: Firstly, our framework is orthogonal to the foundation model's capabilities: a better foundation model leads to better performance. Even when using GPT-3.5 as the designer, our method shows good performance on the machine learning tasks. Moreover, we admit this is a reasonable concern, but when using an LLM as the designer of a multi-agent system (and in the LLM-agent domain generally), almost no existing work can avoid the influence of the foundation model's ability. Our framework empirically shows good performance on many kinds of tasks. ## Re Weakness 2: The condition verifier serves as a guide that utilizes the knowledge of the foundation designer model (by following state transition conditions) and helps the agent learn from feedback or evaluation results. The same point as Weakness 1 applies: as the foundation model becomes more powerful, our FSM becomes more reliable in uncertain domains. ## Re Weakness 3: As for 'reuse', the ML-Bench and software-development benchmarks already demonstrate the reuse ability of the designed FSM, because the FSMs used in these benchmarks are designed for general use in the task domain (e.g., software development). When deployed, the FSM handles different cases in the domain. Thus, these experiments demonstrate the 'reuse' of the FSM.
Summary: This paper proposes a novel framework, MetaAgent, for the automatic generation of multi-agent systems based on finite state machines. The framework comprises three key steps: (1) Agent Design: The designer model defines the roles and tools for each agent according to task descriptions; (2) Finite State Machine Design: The designer decomposes the task into multiple states and assigns them to appropriate agents; and (3) FSM Optimization: Agents are dynamically merged based on role distinguishability and tool assignment. For experiments, this paper selects a range of text-based tasks (e.g., trivial creative writing and GPQA) and real-world coding tasks (e.g., ML benchmarks and software development tasks). The experimental results demonstrate the effectiveness and efficiency of MetaAgent. Claims And Evidence: 1. In Section 3.3, in the paragraph titled “Decentralized Debate as FSM,” this paper claims that those methods do not support null-transitions, which is presented as a key difference. This claim is not well-supported. The definition of null-transitions is not difficult, and they can be implemented through manual definition or prompt rewriting. 2. In Section 3.3, in the paragraph titled “Coordinate with Orchestrator as FSM,” the authors claim that those methods can be considered as FSMs. So, what is the main difference between them and MetaAgent? Methods And Evaluation Criteria: 1. Does predefining tools for a specific domain limit the ability to build automated workflows? 2. In Section 3.4.3, with only the "merge" action available, is there a possibility that splitting might be necessary, for example, when agent1 and agent2 have similar roles in state 1 but different roles in state 2? 3. The construction of multi-agent workflows heavily relies on the designer’s abilities. In MetaAgent, the absence of hierarchical or tree-like decomposition places a high demand on the designer’s task decomposition skills.
Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: The baseline comparisons are inconsistent. Tables 2, 3, and 4 use different baselines. Why is only SPP compared in Table 2? And why does the software development task, also a real-world coding task, have a different baseline setup compared to ML-Bench? Supplementary Material: There is no supplementary material. Relation To Broader Scientific Literature: This paper’s key contribution is using FSMs to build multi-agent workflows. While others focus on multi-agent framework construction, like automatic evolution, this work uniquely introduces FSMs and highlights their advantages over other approaches. Essential References Not Discussed: None. Other Strengths And Weaknesses: **Strengths**: 1. The authors’ introduction of Finite State Machines for modeling multi-agent workflows is insightful and inspiring for the field. 2. The authors thoroughly discuss different types of multi-agent frameworks and analyze the distinctions between MetaAgent and them. **Weakness**: 1. This method heavily relies on the designer's capabilities for task decomposition and planning, and it does not provide any guarantee regarding the lower bound of the method's performance. Other Comments Or Suggestions: 1. The font size in Figure 1 is too small, resulting in poor readability. 2. In the second-to-last paragraph of section 3.4.3, which appendix is being referred to? 3. Sections 4.2 and 4.2.1 should be presented as parallel sections. 4. The explanation of the metrics would be clearer with the inclusion of formulas, such as for NPS. 5. In the second paragraph of section 4.4, “augment” should be corrected to “augments”. 6. Tables 9, 10, and 11 are disorganized. Questions For Authors: In section 4.3, does the calculation of cost include the refinement process during the construction (as mentioned in section 3.4.3)? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for the thoughtful feedback. We are encouraged that the reviewer agrees the Finite State Machine (FSM) is an inspiring method for the Multi-Agent System field and appreciates our theoretical analysis. # Re Weakness: Our framework is independent of the foundation model’s capabilities. Our method remains effective even with weaker models. For instance, when using GPT-3.5 as the designer, our approach still achieves strong results on Machine Learning benchmarks. Moreover, while concerns about the reliability of using an LLM as a designer in a Multi-Agent System are valid, to the best of our knowledge almost no existing work provides a provable guarantee, and our framework empirically shows good performance on many kinds of tasks. # Re Claim 1: Simply modifying the prompt is insufficient to enhance the performance of the LLM debate structure on text-based tasks. This is because our finite-state machine (FSM) framework incorporates structural features such as Null-Transition and State Traceback, which cannot be replicated through prompting alone. To empirically validate this, we conducted an experiment comparing a purely prompt-modified LLM debate with the traditional one. Specifically, we rewrote the LLM debate prompt, instructed the model to refine its responses, and evaluated performance on GPQA (Diamond). The modified prompt achieved a score of 0.56, only marginally better than the traditional one (0.54). This result demonstrates that mere prompting cannot match the performance of our FSM-based approach: it is the FSM structure itself that drives the improvement. Theoretically, the Null-Transition serves as a refinement mechanism, allowing agents to improve responses based on feedback. In contrast, Decentralized Debate follows a rigid sequence where agents present opinions without feedback opportunities, preventing output refinement. This limitation cannot be overcome through simple prompt modifications.
However, introducing a condition verifier to refine responses and select speakers naturally transforms the structure into a finite-state machine. # Re Claim 2: Our MetaAgent Method represents the full realization of a Finite State Machine (FSM), as it incorporates a specialized condition verifier that enables Null-Transition and flexible state transitions, including state traceback. Section 3.3 demonstrates that existing Multi-Agent Systems can be viewed as limited FSMs; the "Coordinate with Orchestrator" approach, which has a shared verifier, suffers from centralized decision-making that becomes computationally burdensome as states increase. In contrast, the FSM's decentralized architecture, with independent condition verifiers at each state, significantly improves scalability and adaptability. # Re Method 1: We evaluate the MetaAgent Method on ML and Software Development tasks using two predefined tools: Python Code Interpreter and Search Engine. Our experiments demonstrate that the designer LLM effectively selects and assigns tools to different agents. We also test the FSM with an expanded tool pool; the designer still makes sensible tool selections and maintains consistent performance. # Re Method 2: Including the ‘splitting’ action in our method is an interesting idea. However, in practice, we find the designer LLM always tends to split the given task into trivial sub-tasks, causing a high failure rate because the chain of execution becomes too long. To fix this drawback, we design the merge action to help the designer LLM rethink and revise the FSM. The ‘splitting’ action is not practical because the initial version of the finite state machine itself already tends to be overly fine-grained. # Re Experiment 1: We selected SPP since it is an auto-design multi-agent method. Based on your suggestion, we have added new experiments for the software development task using AutoGen, Task Weaver, and Open Interpreter.
This table shows the result: |Method|2048Game|SnakeGame|BrickBreakerGame|ExcelApp|WeatherApp|Avg| |------------------|-----------|------------|--------------------|-----------|-------------|------| |AutoGen|0.75|1|0|0|0|0.35| |Open Interpreter|0|0.5|0|0.25|0.25|0.2| |Task Weaver|0| 0.5|0|1|0|0.3| |MetaAgent|0.75|1|0.5|1|1|0.85| From the above table, we can observe that our method also outperforms existing baselines by a large margin. Note that we do not include Data Interpreter since it is a framework specifically designed for Data Science. As for the other baselines in the text-based tasks (direct prompt, CoT, CoT-SC, and Self-Refine), they are merely prompt-based, single-LLM methods rather than multi-agent frameworks, and they do not support tool-using functionality. They perform poorly and would not be a fair comparison with our method, so we do not include them in the real-world coding tasks. # Re Comment 2: In Appendix F. # Re Comment 4: The metric follows the DataInterpreter paper: NPS $= \frac{1}{1 + s}$ when a smaller raw score $s$ is better, or NPS $= s$ when a bigger $s$ is better. # Re Question: Yes. It is included in the ‘design’ stage.
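As a small illustration of the NPS normalization mentioned in Re Comment 4 (this reflects our reading of the description above, not necessarily the exact formula in the DataInterpreter paper):

```python
def normalized_score(s: float, lower_is_better: bool) -> float:
    """Map a raw metric s to a normalized performance score (NPS).
    When a smaller s is better (e.g., an error), use 1/(1+s) so that
    s=0 maps to 1; otherwise s is assumed already in [0, 1] and kept."""
    return 1.0 / (1.0 + s) if lower_is_better else s
```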
Summary: The paper introduces MetaAgent, a framework for automatically designing multi-agent systems using finite state machines (FSMs). The paper conceptualizes FSM within LLM agent design. The proposed method provides a traceback ability for solving complex tasks. The paper also develops an optimization approach to merge states for efficiency. Results on several text-based tasks demonstrate its performance compared with baselines. ## update after rebuttal Thanks for the responses from the authors. I would recommend that the authors add the discussions on external data and foundation model capabilities in the revised version. Claims And Evidence: Supported: Tool usage and traceback are validated via ablation studies in Table 6. The overall performance is also evaluated on several benchmarks. Not supported: The generalization ability is not tested across domains beyond text-based scenarios. Methods And Evaluation Criteria: 1. The design of the method is clear and easy to follow. 2. Benchmark limitations: only text-based benchmarks are used, while real-world multi-agent tasks also include planning and decision-making tasks, such as VirtualHome (http://virtual-home.org/documentation/master/get_started/get_started.html). Theoretical Claims: N/A Experimental Designs Or Analyses: 1. Though traceback helps with the performance, the paper does not seem to report its potential side effects, such as delays in obtaining results in practice. 2. The paper assumes LLMs can reliably decompose complex tasks, but does not evaluate failure cases (e.g., ambiguous user queries). Supplementary Material: Yes, A-C. Relation To Broader Scientific Literature: 1. Multi-agent system: Extends prompt engineering to multi-agent coordination. 2. Automatic learning: Combining AutoML with agent workflow design. Essential References Not Discussed: Several papers on flexible multi-agent collaboration architectures are not discussed. What are their relations with the FSM? [1] Li, Guohao, et al.
"CAMEL: Communicative Agents for 'Mind' Exploration of Large Language Model Society." Advances in Neural Information Processing Systems 36 (2023): 51991-52008. [2] Guo, Xudong, et al. "Embodied LLM Agents Learn to Cooperate in Organized Teams." Language Gamification Workshop, NeurIPS 2024. [3] Chen, Weize, et al. "AgentVerse: Facilitating multi-agent collaboration and exploring emergent behaviors in agents." arXiv preprint arXiv:2308.10848 (2023). Other Strengths And Weaknesses: Strengths: 1. State merging is a novel way to make multi-agent systems much more efficient. 2. FSM can potentially serve as a unified framework to develop more complex multi-agent systems, which has been discussed in this paper. Weaknesses: 1. Open-source models are not included and discussed. 2. The merging of states depends on one single LLM. The capability of this LLM will limit the performance. Other Comments Or Suggestions: The tables in Appendix C should be revised. Questions For Authors: 1. Is there any discussion on "the optimizing method does not need external data as well as numerous training steps"? If there is external data, will the optimization work better? 2. In ablation studies, the number of iterations is not clearly stated. The process of optimization is not included. Can you provide more information about this? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating the finite state machine as a unified framework for Multi-Agent Systems. # Re Reference: CAMEL is a simple two-agent chat system resembling a Decentralized Debate structure (Section 3.3). AgentVerse employs two cooperation structures: Horizontal, a Linear System where decisions are summarized, and Vertical, a Debate System where a solver and a reviewer iteratively refine decisions until reaching consensus. These structures are also discussed in Section 3.3. The paper "Embodied LLM Agents Learn to Cooperate in Organized Teams" introduces a Communication-Action model, where agents communicate first and act later. However, this model has drawbacks: agents receive no immediate feedback during the Action Phase, delaying behavior refinement, and the leader agent can only issue instructions during the Communication Phase. In contrast, the FSM structure enables immediate feedback after each action via condition verifiers. Agents refine actions on the spot, determine state transitions, and communicate continuously, ensuring seamless interaction throughout the process. # Re Question 1: The proposed optimization method relies on a self-iteration procedure, where the system refines itself over time. While incorporating external data could enhance this process by allowing the designer agent to learn from additional information, our observations indicate that most failure cases in the initial version of the FSM are caused by overly trivial agent and state assignments. Therefore, the most effective approach is to prompt the designer agent to self-refine the FSM by merging trivial agents and states, eliminating the need for external data. Our FSM structure is designed to handle general open-ended, real-world tasks, where user inputs can vary widely. This variability makes it difficult to collect or synthesize high-quality test cases.
As a result, relying on the self-iteration optimization method becomes not only effective but also the most practical approach for improving the FSM's performance in these complex, unpredictable scenarios. # Re Question 2: As presented in the optimization method (Section 3.4.3): "This iteration continues until no further states can be merged and the state set stabilizes." The ‘iteration’ in the ablation study refers to this optimization method. We will refine our description in the revision. # Re Evaluation: We have integrated a VirtualHome environment into the MetaAgent Framework and tested it with some housekeeping tasks. The following is a highlighted running log. **Agent Design**: LivingRoomAgent, RestRoomAgent, KitchenAgent, and BedroomAgent. **State Design** (state instructions shown): Take books from the living room, Clean rubbish in the living room, …, Take books from the bedroom **State Transition Example**: { "from_state": "1", "to_state": "2", "condition": "If books are taken from the living room" }, When deployed, this multi-agent system can plan and submit actions to the environment in the specific format. (example task: Clean the bedroom) The BedroomAgent first **plans**: Prioritized Cleaning Plan for the Bedroom: 1. **Organize Clothes:** - Fold and organize the clothespiles (15 clothespiles). - Hang the clothesshirts (3 clothesshirts) and clothespantss (5 clothespantss) in the closets. 2. **Clean Surfaces:** - Wipe down the desk, nightstands, coffeetable, and bookshelf. - Ensure the desk and nightstands are properly closed. … And then it **acts**: <ACTION> { "action": "Grab", "object": "clothespile", "object_id": 150 } </ACTION> … Several test cases show that the FSM-based multi-agent system can also understand the VirtualHome environment and interact with it properly. # Re Weakness 1: At the time of writing the paper, open-source models did not have a sufficient capability level to accomplish the complicated task of designing a Multi-Agent System.
To investigate the influence of foundation model quality, we also apply GPT-3.5 as a weaker model in the ablation study; its performance is lower than GPT-4o's but still comparable with other baselines. Recently, we also tried some powerful open-source models as designers (e.g., DeepSeek-V3), which also worked well on our test cases. # Re Weakness 2: Firstly, our framework is orthogonal to the foundation model capabilities. A better foundation model leads to better performance. Even when using GPT-3.5 as the designer (with GPT-4o as the executor), our method shows good performance on the Machine Learning Task (see the ablation study, Table 5). Moreover, while concerns about the reliability of using an LLM as a designer in a Multi-Agent System (or even within the LLM Agent domain) are valid, to the best of our knowledge, almost no existing work in the area can avoid the influence caused by a weaker foundation model.
Summary: This paper primarily discusses the automated construction of multi-agent systems. Its highlight is the introduction of the finite state machine (FSM) concept, incorporating null-transition states and state traceback into multi-agent systems. This allows the system to more flexibly address two issues: (1) when the current agent does not resolve a subtask as expected, and (2) when a downstream agent identifies problems with a previous agent, enabling traceback. Additionally, the paper outlines the relationships between three mainstream multi-agent approaches under this framework: Linear System, Decentralized Debate, and Coordinate with Orchestrator, explaining that they are all special cases of FSM. Based on the FSM idea, it introduces the process of building and optimizing multi-agent systems. Its effectiveness was measured in scenarios such as Trivia Creative Writing, GPQA, and Coding. Furthermore, cost analysis and ablation experiments were conducted, demonstrating a comprehensive exploration. Claims And Evidence: The paper mentions that external data is not needed, but the ablation experiment on "Reduce system redundancy through optimization" shows that "... a few iterations are required to make the system more robust. After testing the initial version of the multi-agent system on the pertinent test cases, the multi-agent system will be adapted in the aspect of agent and state design ...", which seems to be inconsistent. Methods And Evaluation Criteria: For text-based tasks, the definitions of the metrics are provided, but it is not mentioned how they are obtained. When evaluating the software development task, the authors apply "objective" checkpoint-based methods. It would be better if subjective evaluation metrics were reported alongside them. Theoretical Claims: N/A Experimental Designs Or Analyses: In the cost analysis, it is not clear why the number of tokens used by this method would be lower.
It stands to reason that additional verifiers would increase the use of tokens. Supplementary Material: Yes, I have briefly reviewed the supplementary material. Relation To Broader Scientific Literature: A framework to unify existing multi-agent systems, which is well discussed in Section 3.3. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The main highlight is using FSMs to unify current multi-agent systems and providing a way to automatically construct them. Other Comments Or Suggestions: See questions. Questions For Authors: 1. What would be the difference between a multi-agent system with a single verifier and MetaAgent, if the verifier knows the role of each agent? 2. How are the metrics for text-based tasks obtained? 3. What is the exact meaning of "iterations" in the ablation study? Is it case-driven? 4. Why does MetaAgent cost less despite having more verifiers? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the reviewer’s effort and appreciation of the discussion in Section 3.3. We believe the finite state machine has the potential to be a unified structure for Multi-Agent Systems. # Re Claims 1: Our optimization method is inherently self-iterative, meaning it does not rely on external training data. In the ablation study, the description highlights the motivation behind this design choice. Initially, we observed that the first version underperformed on test cases, which led us to develop this self-iteration approach as an optimization strategy. We will refine the description in the revision to improve clarity. # Re Question 1: This question explores why a Multi-Agent System, where agents are assigned different roles, can outperform a single LLM agent in certain tasks. Intuitively, assigning specific roles to agents activates their specialized knowledge related to those roles. Similarly, when all condition verification tasks are assigned to a single LLM, two main challenges arise. First, as the number of agents increases, a single condition verifier struggles to understand each agent’s situation. Second, dedicated condition verifiers, assigned to specific agents, can better capture and process the state-specific information, leading to more accurate and efficient verification. # Re Question 2: GPQA evaluates the accuracy of multiple-choice questions, while Writing assigns a score based on keywords. Each keyword has a corresponding list that includes its various forms, and as long as the generated text contains at least one of these variants, it is considered valid for scoring. When designing the objective evaluation criteria for software development tasks, we also select several bad cases for subjective evaluation. Through this process, we gradually update the objective evaluation criteria to make them more reasonable. # Re Question 3: No; it refers to the optimization method described in the method section.
We will refine the description in the revision. # Re Experiment Metric (also Question 4): For a batch of tasks (10+), the cost of the MetaAgent architecture is lower than that of other frameworks. The primary reason is that the FSM is more general: it only requires a one-time design for the task domain, whereas other frameworks necessitate case-by-case design (e.g., AutoAgents, SPP). As a result, MetaAgent incurs lower costs when handling a batch of tasks.
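The amortization argument can be made concrete with a toy cost model; all numbers here are hypothetical, not measurements from the paper:

```python
def batch_cost(design_cost: int, per_task_cost: int, n_tasks: int,
               redesign_each_task: bool) -> int:
    """Toy token-cost model: a one-time FSM design amortized over a batch,
    versus frameworks that redesign the workflow for every task."""
    designs = n_tasks if redesign_each_task else 1
    return designs * design_cost + n_tasks * per_task_cost

# Even with a higher per-task cost (extra verifier calls), the one-time
# design wins once the batch is large enough (hypothetical numbers).
fsm_cost = batch_cost(design_cost=5000, per_task_cost=1200, n_tasks=10,
                      redesign_each_task=False)
per_case_cost = batch_cost(design_cost=5000, per_task_cost=1000, n_tasks=10,
                           redesign_each_task=True)
```

This also answers the verifier-cost question above in toy form: the extra verifier tokens are folded into the per-task cost, yet the amortized design cost still dominates for per-case frameworks.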
Mitigating Heterogeneous Token Overfitting in LLM Knowledge Editing
Accept (poster)
Summary: This paper addresses the problem of heterogeneous token overfitting (HTO) in knowledge editing (KE) for large language models (LLMs). The authors identify that existing KE methods, which indiscriminately optimize cross-entropy loss across all tokens, lead to varying overfitting rates for different tokens, degrading reasoning capabilities. They propose OVERTONE, a token-level smoothing method that adaptively refines target distributions by blending ground-truth tokens with filtered model predictions. Experiments across four KE methods, two LLMs, and diverse benchmarks demonstrate OVERTONE’s effectiveness in improving portability and locality while maintaining reliability. ## update after rebuttal I keep my score as the final score. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes - I do not understand the relation between the portability loss and the underfitting degree. Why does the underfitting degree prove the model is overfit? It is not clear here. Supplementary Material: NA Relation To Broader Scientific Literature: This work advances knowledge editing (KE) by addressing token-level overfitting, a gap in prior KE methods (e.g., MEND, LoRA) that optimize generically across tokens. By integrating token-aware regularization and influence-function theory, it bridges fine-grained training dynamics with LLM robustness, offering a universal enhancement for KE frameworks. Essential References Not Discussed: There are some answer-level overfitting analyses that I think should be mentioned. Token-level overfitting is good to study, but answer-level is also important. - Neighboring Perturbations of Knowledge Editing on Large Language Models, ICML 2024 Other Strengths And Weaknesses: Strengths: - The analysis is good and tackles the issues well. - The experiments are adequate and convincing. - The identification of HTO as a key bottleneck in KE is a significant contribution.
The analysis of token-level loss dynamics provides a fresh perspective on overfitting in LLM editing. Weakness: - While the connection to DPO is intriguing, the paper does not empirically compare OVERTONE with DPO-based editing methods, leaving its practical advantages under-explored. - Some case studies could make the contribution clearer. Other Comments Or Suggestions: NA Questions For Authors: Some questions: - I remember that MQuAKE contains 2-4 hop questions; why do you only do the 2-hop ones? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We highly appreciate your effort and time spent reviewing our paper and thank you for your expertise and constructive comments. In the following, we address your comments and questions one by one. >**The relation between the portability loss and the underfitting degree (UD).** Yes, portability loss does not directly imply UD. In Sec 2.2, Fig 1 shows that high portability loss is attributed to overfitting. Toward a deeper understanding of such overfitting, in Sec 2.3, the (negative) UD (NUD) is defined to uncover the token-level HTO, which is a key contribution of this work, as acknowledged by the reviewer. NUD indicates that a token is overfitted when its training loss is too small. Specifically, NUD computes the difference between the token's training loss (under the edited model) and the greedy decoded token's pretrained loss (under the unedited model). The choice of greedy decoding is on purpose, as it reflects the unedited model's most confident knowledge, which was proper and valid before editing. By comparing the two, NUD indicates that the edited model is overly confident, and is therefore "overfitted". We will make this clearer. >**Related work on answer-level overfitting.** We thank the reviewer for bringing this interesting work to our attention. After reading it, we agree that the neighboring knowledge perturbation due to answer-level overfitting is insightful, and it would be interesting to explore bridging the two types of overfitting and building more principled solutions. We will highlight the referred paper, the connection, and this future direction in the revision. >**Empirical comparison with DPO.** Following the reviewer's suggestion, we train LoRA with DPO from EasyEdit, using the pre-edited model's old knowledge as the negative data. Due to time constraints, we only conduct Single Editing on ZsRE. We note that DPO performs worse, which we presume is due to the practical challenge analyzed in Sec 3.2.

| | | Rel. | Gen. | Por. | Loc. | Avg |
|--------|------|------|-------|-------|-------|-------|
| Llama2 | Ours | 100 | 94.31 | 61.16 | 87.2 | 85.67 |
| | DPO | 100 | 94.74 | 33.64 | 41.66 | 67.51 |
| Llama3 | Ours | 100 | 98.5 | 51.57 | 93.13 | 85.8 |
| | DPO | 100 | 97.77 | 19.61 | 10.58 | 56.99 |

>**Some case studies could make the contribution clearer.** Thank you very much for the suggestion. Due to time constraints, we will dive into more in-depth visualization of the OVERTONE effect, and of how OVERTONE helps multi-hop reasoning on MQuAKE, in the revised paper. >**MQuAKE contains 2-4 hop questions; why do you only do the 2-hop ones?** We apologize for the misleading statement. In our experiments we followed the MQuAKE official repo, DeepEdit, and EasyEdit to load the multi-hop questions. We did not manually filter out 3- and 4-hop questions. After rechecking the source documents carefully, we noted that "2-Hop" is inaccurate and should be "Multi-Hop". We will correct these descriptions in the revised paper.
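The NUD computation described in this rebuttal can be sketched as follows. This is a framework-free illustration over toy per-token distributions; the sign convention and normalization are our reading of the description above, not necessarily the paper's exact definition:

```python
import math
from typing import List

def token_nll(probs: List[float], token_id: int) -> float:
    """Negative log-likelihood of one token under a predicted distribution."""
    return -math.log(probs[token_id])

def negative_underfitting_degree(edited_probs: List[float],
                                 unedited_probs: List[float],
                                 target_id: int) -> float:
    """NUD sketch: the edited model's training loss on the target token minus
    the unedited model's loss on its own greedy (most confident) token.
    A clearly negative value signals the token is overfitted."""
    train_loss = token_nll(edited_probs, target_id)
    greedy_id = max(range(len(unedited_probs)), key=unedited_probs.__getitem__)
    pretrained_loss = token_nll(unedited_probs, greedy_id)
    return train_loss - pretrained_loss
```

For instance, if the edited model assigns the target token probability 0.9 while the unedited model's greedy token only had probability 0.7, the difference is negative, flagging overconfidence on that token.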
Summary: This paper investigates the Heterogeneous Token Overfitting problem in knowledge editing. The authors first analyze the root cause of this issue, attributing it to the training paradigm that indiscriminately optimizes the probabilities of all tokens. To address this, they propose OVERTONE, which refines the traditional loss function. The theoretical advantages of OVERTONE are demonstrated, and experiments show that it outperforms several baselines across diverse experimental settings. Claims And Evidence: The claims are supported by evidence. Methods And Evaluation Criteria: While OVERTONE effectively adjusts the target distribution by filtering out noise tokens, it may inadvertently introduce bias in scenarios such as knowledge conflicts, where the model's own predicted distribution could be unreliable. Theoretical Claims: I have checked the correctness of any proofs for theoretical claims. Experimental Designs Or Analyses: 1. I think OVERTONE can be applied to ROME (or MEMIT) to improve the loss function (Equation (4) in the original paper https://arxiv.org/pdf/2202.05262), but the experiment did not show the corresponding results. 2. This paper lacks a comparison with LTI ([1]), another method designed to alleviate overfitting. 3. The impact of varying the parameter filtering threshold n on the performance of the proposed method is not explored in the experiments. References: [1] https://openreview.net/forum?id=t8qcGXaepr Supplementary Material: I have reviewed the supplementary materials. Relation To Broader Scientific Literature: Prior work has identified the problem of overfitting in knowledge editing. The main contribution of this paper is to analyze this problem from the token-level, providing new insights into its underlying causes. The authors propose OVERTONE, a new method to mitigate overfitting. This work advances the field by offering more granular understanding of the problem. 
Essential References Not Discussed: No necessary related work has been omitted. Other Strengths And Weaknesses: Please see above comments. Other Comments Or Suggestions: Please see above comments. Questions For Authors: Recent research has begun exploring knowledge editing in the form of free text. Have the authors considered the problem of overfitting in this context? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We highly appreciate your effort and time spent reviewing our paper and thank you for your expertise and constructive comments. In the following, we address your comments and questions one by one. >**OVERTONE can be applied to ROME (or MEMIT) to improve the loss function (Eq 4 in the original paper).** Thank you for the insightful idea of extending OVERTONE to ROME. We noted two unique designs in ROME (and MEMIT) that make it differ from the four methods we studied. First, the impact of the auto-regressive loss, which OVERTONE alters, on ROME is weaker, in the sense that the MSE loss will determine the final parameter update. Second, ROME relies on random prefix augmentation, which affects overfitting as well. Given these facts, we plan to work on a more principled, augmentation-free, end-to-end way to extend OVERTONE in light of its principle. That is, we seek a better way to *smooth (relax) different token fitting adaptively* with the model's own knowledge, following the principle of OVERTONE. We will highlight this challenge, together with our future plan, in the revision. >**Comparison with LTI, another method designed to alleviate overfitting.** Both LTI and OVERTONE work on mitigating overfitting in knowledge editing. The conceptual similarity, from a high level, lies in adding pre-trained knowledge to the editing. But LTI explores a distinct direction, with its differences from ours lying in three folds. First, LTI explores in-context learning (ICL) to incorporate pre-trained knowledge into the editing data, while ours designs an adaptive token-level distribution mixing, in light of the token-level HTO dynamic. Second, LTI, which is primarily developed for ROME-based solutions, acts on both the latent representation and the output prediction loss. Ours, on the other hand, is agnostic to the editing method and alters the output prediction loss only.
Finally, LTI, same as ROME, relies on data augmentation, while ours does not include such a mechanism. Following the reviewer's suggestion, we will highlight these differences in the revised paper, and will explore bridging the two directions in our future work. >**Bias in knowledge conflicts and the model's own predicted distribution could be unreliable.** We agree that potential knowledge conflicts and general noise can be misleading. To reduce this risk, OVERTONE incorporates two mechanisms. First, the unreliable (noisy) part is filtered out. Second, mixing with the model's prediction is conducted only if the mixed distribution correctly assigns the ground truth label (i.e., training token) the highest probability (Eq 3). Finally, *provably solving the potential knowledge conflict* for knowledge editing is still an open question, and we will highlight this in the revision. >**Impact of filtering threshold $n$.** Mathematically, "without filtering" is equivalent to setting $n \rightarrow \infty$. As shown in Tab 3, this leads to worse performance. To further study how sensitive $n$ is, we follow the reviewer's suggestion and try a larger $n=1$ on LoRA, which is the default value from the Top-$n\sigma$ paper. This gives us an average performance of 85.49 (Rel: 100, Gen: 94.85, Por: 61.44, Loc: 87.01) on editing ZsRE, which is slightly lower than 85.67 from $n=0.5$. We believe this insensitivity is reasonable, considering that the correctness-checking mechanism will discard the mixing if it is misleading. We will add this discussion in the revised paper. >**Knowledge editing in free text form.** We thank the reviewer for bringing up this interesting direction. After checking related papers, we agree that free-form text can express more diverse knowledge, on which the editing can be important but also more challenging.
Considering the common practice of editing only a few pieces of knowledge at a time, we expect similar overfitting due to the small training size, and HTO because some pretrained knowledge can still be useful, making different parts (tokens) vary in how difficult they are to learn. Therefore, we believe that our method can shed light on this interesting problem. However, we also believe that the free-text form will add additional challenges to understanding and quantifying the overfitting. Therefore, we will explore this direction in our future work, and add this discussion, together with related papers, in the revised paper.
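The token-level smoothing discussed in this rebuttal (tail filtering, mixing the model's own prediction into the target, and the correctness check) can be illustrated with a minimal sketch. The function name, the tail-filter approximation on log-probabilities, and the parameter conventions below are assumptions for illustration, not the paper's exact Eq 3-4:

```python
import numpy as np

def smooth_target(onehot, model_probs, lam=0.1, n=1.5):
    # Hypothetical sketch of token-level adaptive smoothing.
    # 1) Filter the noisy tail of the model's own prediction: keep
    #    only tokens whose log-prob is within n std-devs of the max.
    logp = np.log(model_probs + 1e-12)
    keep = logp >= logp.max() - n * logp.std()
    filtered = np.where(keep, model_probs, 0.0)
    if filtered.sum() == 0.0:
        return onehot
    filtered = filtered / filtered.sum()
    # 2) Mix the filtered distribution into the hard one-hot target.
    mixed = (1.0 - lam) * onehot + lam * filtered
    # 3) Correctness check: use the mixed target only if it still
    #    ranks the ground-truth token highest; otherwise fall back.
    return mixed if mixed.argmax() == onehot.argmax() else onehot

# Ground truth is token 1; the unedited model also puts some mass on
# a plausible alternative (token 2), so a little probability is shared.
onehot = np.array([0.0, 1.0, 0.0, 0.0])
probs = np.array([0.05, 0.60, 0.30, 0.05])
pi_tar = smooth_target(onehot, probs)
```

With these made-up inputs the mixed target keeps the ground-truth token on top while granting a small probability to the related token; whenever mixing would demote the ground truth, the sketch falls back to the hard label, mirroring the correctness-checking mechanism described above.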
Summary: This paper investigates the challenge of heterogeneous token overfitting in knowledge editing of large language models, where different tokens in the target knowledge generalize at varying rates during selective parameter updates. To address this, the authors propose OVERTONE—a token-level smoothing approach that adaptively refines the training target for each token to mitigate overfitting while preserving unrelated pre-trained knowledge. The paper presents both a detailed theoretical analysis, which connects OVERTONE to concepts such as DPO, and extensive empirical evaluations on multiple benchmarks using models like LLaMA 2 and LLaMA 3. The results demonstrate that OVERTONE significantly enhances editing performance by improving the model’s reasoning (portability), generality, and locality with negligible computational overhead, offering a flexible plug-and-play solution that can complement existing KE methods. ## update after rebuttal The additional experiments have indeed resolved most of my doubts and provided more comprehensive support for the arguments in this paper. I am inclined to accept this paper, but since I initially gave it a score of 3, which means leaning towards accept, I will maintain the score of 3. Claims And Evidence: The paper's core claims about the effectiveness of the OVERTONE method are generally well-supported by extensive experimental evidence across multiple editing methods, models, and datasets. However, some claims lack sufficient supporting evidence: - Model-agnostic wide applicability: While the effectiveness has been demonstrated across 4 editing methods, testing was limited to only two LLaMA series models. Experiments on additional architectures such as Qwen or Mistral would strengthen the validation of the method's broad applicability. 
- Connection to DPO: The derivation linking OVERTONE to DPO is mathematically detailed; however, the practical implications of this connection are not supported by direct experimental comparisons. More empirical evidence demonstrating that the benefits of DPO carry over to the OVERTONE framework would help solidify this claim. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem at hand; I believe that aside from not considering a broader range of model architectures, both the approach and evaluations are reasonable. Theoretical Claims: I reviewed the proofs for all the propositions. In general, the derivations appear to align logically with standard techniques such as Taylor expansion, KL-divergence properties, and influence function analysis. However, several proofs rely on nontrivial assumptions—for example, assumptions about gradient isotropy, the convergence of pretrained models, and specific cosine similarity bounds—which may be strong in practice. These assumptions, along with the simplifications in the first-order approximations and the filtering mechanism, provide valuable theoretical intuition but warrant further empirical validation. Experimental Designs Or Analyses: The experimental designs and analyses were carefully structured and exhibit strong soundness and validity. However, there are some points which can be improved: - Although the experimental results demonstrate significant improvements across various metrics, the study would benefit from deeper statistical validation. Specifically, including statistical significance tests along with reporting the mean and variance from multiple experimental runs (e.g., using different random seeds) would enhance the robustness and persuasiveness of the conclusions. - Additionally, the experiments predominantly rely on LLaMA 2 and LLaMA 3, which share substantial architectural similarities.
This architectural limitation may affect the generalizability of the method to other model designs, such as Qwen and Mistral, or multimodal models. Expanding the evaluation to include a more diverse set of architectures would help in thoroughly validating the broader applicability of the proposed method. Supplementary Material: I've reviewed all the supplementary material. Relation To Broader Scientific Literature: This paper tackles the issue of heterogeneous token overfitting in knowledge editing for large language models by building on prior work in areas like selective fine-tuning and parameter-efficient methods (e.g., LoRA, ROME, and MEMIT). It extends traditional strategies such as label smoothing and early stopping by introducing a token-level adaptive smoothing approach that preserves the model’s pre-trained knowledge while integrating new information. The authors support their method with a theoretical analysis based on influence functions and draw connections to constrained optimization techniques like direct preference optimization (DPO). Overall, the approach not only improves the reliability, generalizability, and locality of edited models but also offers a versatile, model-agnostic framework that advances both the practical and theoretical understanding of LLM knowledge editing. Essential References Not Discussed: I think the paper adequately covers the essential references needed to understand the context for its key contributions. But I am not very familiar with this domain. Other Strengths And Weaknesses: The paper introduces a novel token-level adaptive smoothing approach that effectively mitigates heterogeneous token overfitting in knowledge editing for large language models. The method is supported by thorough theoretical derivations and extensive empirical evaluations on LLaMA 2 and LLaMA 3, showing significant improvements in reasoning capacity, generality, and locality.
However, the experimental validation is limited to a pair of closely related architectures, suggesting that further statistical validation and broader model evaluations are needed to confirm the method’s wide applicability and real-world effectiveness. Other Comments Or Suggestions: - Consider including statistical significance tests along with mean and variance from multiple runs (e.g., using different random seeds) to further validate the improvements and robustness of OVERTONE. Broader Model Evaluation: - Expanding the experimental evaluation to include architectures beyond LLaMA 2 and LLaMA 3 (such as Qwen or Mistral) would strengthen the claim regarding the method's model-agnostic benefits. Questions For Authors: - Your experiments primarily focus on LLaMA 2 and LLaMA 3 models. Have you evaluated or can you comment on how your approach scales to other architectures (e.g., Qwen or Mistral)? - Could you provide details on how the hyperparameters were selected for the experiments and discuss any observed sensitivity to these choices? - The paper draws a connection between your approach and DPO. Could you provide more insights or quantitative analyses on how this theoretical connection affects the overall performance of the OVERTONE method in practice? - While the experimental results show promising improvements, could you elaborate on the variability of these results? Specifically, have you conducted multiple runs with different random seeds and performed statistical significance tests to verify the robustness of your improvements? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We highly appreciate your effort and time spent reviewing our paper and thank you for your expertise and constructive comments. In the following, we address your comments and questions one by one.

>**More model architectures.**

We follow recent works (e.g., the EasyEdit survey) in studying the representative LLaMA family. Following the reviewer's suggestion, we further study Qwen2.5-3B-Instruct. Due to time constraints, we only experiment with it on ZsRE, using the hyperparameters from LLaMA-2 (which can be suboptimal). As shown at https://anonymous.4open.science/r/hto-overtone, OVERTONE again helps achieve better editing performance. For the reviewer's convenience, we paste the FT-M Single Edit results here.

| | Rel. | Gen. | Por. | Loc. | Avg |
|--------|------|-------|-------|-------|-------|
| FT-M | 100.0 | 99.3 | 50.98 | 73.13 | 80.85 |
| +Ours | 100.0 | 96.18 | 56.26 | 80.66 | 83.28 |

>**Comparison with DPO.**

Following the reviewer's suggestion, we train LoRA with DPO. We use the pre-edited model's old knowledge as the negative data. Due to time constraints, we only conduct Single Editing on ZsRE. We note that DPO performs worse, which we presume is due to the practical challenges highlighted in Sec 3.2.

| | | Rel. | Gen. | Por. | Loc. | Avg |
|--------|-------|------|-------|-------|-------|-------|
| Llama2 | Ours | 100 | 94.31 | 61.16 | 87.2 | 85.67 |
| | DPO | 100 | 94.74 | 33.64 | 41.66 | 67.51 |
| Llama3 | Ours | 100 | 98.5 | 51.57 | 93.13 | 85.8 |
| | DPO | 100 | 97.77 | 19.61 | 10.58 | 56.99 |

>**Statistical significance tests along with mean and variance from multiple runs.**

Our experiment design follows the convention in knowledge editing and is based on the widely-used EasyEdit. All metrics are averaged over different samples, each using a *different* initial value. The random seed is fixed to 42, so "standard" and "ours" use the identical initial value for the same sample.
Following the reviewer's thought, we tried seed 2025 and ran "ours" as in Tab 3, resulting in a new average of 85.44 (Rel 100, Gen 94.88, Por 60.41, Loc 86.47), which is very close to the reported one, confirming the effectiveness of our method. From a statistical test perspective, out of 72 comparisons ("standard" vs. "ours"), ours achieved better performance in 69 cases, a "significant" improvement based on a binomial test. Finally, we agree with the reviewer that conducting multiple runs on each sample (knowledge) would further enhance reliability, which is valuable but has been largely overlooked by the community. We will highlight this in the limitation section of the revised paper, and will follow this principle in our future work.

>**Hyperparameter selection and sensitivity.**

We didn't conduct an extensive hyperparameter tuning. Our current selection can be found in App B, and was made as follows: $\epsilon$ is set close to 0, and we tried 0.05 and 0.01; for the filtering threshold $n$, we tried $1$ (the default in the "Top $n\sigma$" paper) and a more aggressive $0.05$ for the simpler LoRA and FT. Finally, the mixing weight $\lambda$ was set to $0.1$ to encourage fast integration of $\pi_{flt}$ without tuning. We didn't notice a big difference when trying different $n$, which we found reasonable, as the *correctness-checking mechanism (Eq 4) will discard a smoothing if it is too misleading*. However, OVERTONE can be sensitive to $\lambda$, as a greater $\lambda$ makes our method more similar to standard training.
Summary: This paper proposes OVERTONE, a token-level smoothing method to address heterogeneous token overfitting (HTO) in knowledge editing (KE) for large language models (LLMs), enabling specific knowledge updates without compromising pre-trained capabilities. Experiments across multiple methods, LLMs, and scenarios show OVERTONE improves performance and versatility over previous KE approaches, with minimal computational overhead and an implicit DPO mechanism. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem at hand. Theoretical Claims: I did not check the theoretical claims thoroughly. Experimental Designs Or Analyses: The soundness and validity of the experimental designs and analyses seem to be appropriate. Supplementary Material: I have checked the supplementary parts except the proofs. Relation To Broader Scientific Literature: The key contributions of the paper build on prior work in knowledge editing (KE) and large language model (LLM) fine-tuning. The identification of heterogeneous token overfitting (HTO) as a critical issue in KE extends the understanding of overfitting in LLMs, which has been explored in works such as Zhang et al. (2024), who investigated overfitting in fine-tuning LLMs. Essential References Not Discussed: I think there are no essential related works missing from the paper that are critical to understanding the context of its key contributions. Other Strengths And Weaknesses: **Strengths:** * The paper is well written with clear motivations. * The paper conducts comprehensive experiments. **Weaknesses:** * Experimental results, particularly in Table 5 and Table 1, raise doubts due to lower portability compared to WISE and generalization declines in FT/LoRA. Please see questions. Other Comments Or Suggestions: N.A. Questions For Authors: 1.
What is meant by "We define underfitting degree (UD) as the difference between the pre-edited and running log-likelihood; negative UD indicates an overfitting"? Why does a negative UD represent overfitting? 2. Is the proposed solution (OVERTONE) adaptable to other fine-tuning methods, tasks, and datasets beyond knowledge editing tasks? Additionally, does it still perform well on long-text scenarios? 3. Why does portability, an emphasized metric for overfitting in this paper, perform significantly worse than the WISE method in many scenarios (e.g., Table 5)? Additionally, why is there a notable decline in generalization for FT and LoRA? 4. Why were methods like MEND, MEMIT, and MELLO not evaluated in Table 2? 5. In Table 3, why does adding filtering tail regions lead to a decrease in locality? Does this imply that some useful general information is being filtered out? 6. Minor: There is a grammatical error in lines 314-315: “*We next check where the improvement was made. from the table, the first gain was from improved portability.*” Ethical Review Concerns: N.A. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We highly appreciate your effort and time spent reviewing our paper and thank you for your expertise and constructive comments. In the following, we address your comments and questions one by one.

>**Why does negative UD (NUD) represent overfitting?**

NUD indicates that a *token* is overfitted, as its training loss is *too small*. Specifically, NUD computes the difference between the token's training loss (of the edited model) and the *greedy decoded* token's *pretrained* loss (of the unedited model). The choice of greedy decoding is on purpose, as it reflects the unedited model's *most confident knowledge that was proper and valid in the past*. By comparing the two, NUD indicates that the edited model is overly confident, and is therefore "*overfitted*". We will make this clearer.

>**Is OVERTONE adaptable to other fine-tuning methods and tasks? On long-text scenarios?**

OVERTONE is developed in light of HTO (i.e., editing knowledge is overfitted at different speeds). As per Sec 2.3, one cause of HTO is that knowledge editing (KE) involves few training data (such as a single one) and trains the model on the fixed data for many steps, which inevitably overfits. Similar concerns may arise in other tasks that seek *selective* updates of LLMs, such as machine unlearning, where OVERTONE could be applied. Moreover, when the training text is long, as the number of tokens to learn grows, we expect HTO to worsen, and OVERTONE to be helpful.

>**Why does portability perform significantly worse on WISE (Table 5)?**

Table 5 reports editing performance on LLaMA 3 with hyperparameters adopted from LLaMA 2, which can be suboptimal. As evidence, vanilla WISE also performs worse than on LLaMA 2 (WISE has an activation mechanism to determine whether edited parameters should be used, which needs additional tuning as well). Nonetheless, OVERTONE was able to help achieve a better editing-generality-portability trade-off, leading to a higher *average* performance.
>**Why a notable decline in generalization (gen) for FT and LoRA?**

We believe the generality (Gen) decrease is also caused by suboptimal hyperparameters: as shown in Fig 1, Gen doesn't encounter degradation due to HTO, as the change in Gen loss is nearly identical to that of the training loss. Therefore, when OVERTONE mitigates HTO, it slightly decreases Gen while achieving better portability and locality. Still, the *average* performance is improved, and we believe our method can benefit from a more extensive hyperparameter tuning.

>**Why were MEND, MEMIT, and MELLO not evaluated in Table 2?**

Table 2 showed that OVERTONE can help improve reasoning in more challenging scenarios. Therefore, we focused on FT and LoRA, two simple methods that *suffered more degradation* from HTO, to demonstrate the effectiveness of OVERTONE. We didn't include MEND, MEMIT, and MELLO because of their different mechanisms: MEND relies on a *large external dataset* to train its hypernet, MEMIT uses an objective other than maximizing the editing data likelihood, and MELLO is training-free.

>**Filtering tail regions makes locality decrease.**

This trend can be related to the definition of locality. Conceptually, perfect locality only requires that the edited model's prediction match its pretrained output, regardless of its correctness and usefulness. Therefore, without filtering the tail region of the model's *own prediction*, the model's pretrained "knowledge" dominates the target $\pi_{tar}$ to learn, leading to a higher locality. However, the tail region of the model's *own prediction* is usually noisy, and this noise can be harmful: as shown in Tab 3, both generality and portability decreased.

>**Typos and grammatical error.**

Thank you for catching the error! We will fix this mistake in the revised draft.
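The underfitting-degree (UD) sign check discussed in this thread amounts to simple arithmetic on two log-likelihoods. A minimal sketch, where the probabilities are made-up illustrative values rather than anything from the paper:

```python
import math

def underfitting_degree(p_pretrained_greedy, p_edited_token):
    # UD = pre-edit log-likelihood of the unedited model's greedy
    # (most confident) token minus the running log-likelihood of the
    # training token under the edited model. UD < 0 means the edited
    # model is more confident than the unedited model's best guess,
    # which is read as overfitting.
    return math.log(p_pretrained_greedy) - math.log(p_edited_token)

ud_over = underfitting_degree(0.70, 0.95)   # edited model too confident
ud_under = underfitting_degree(0.70, 0.50)  # still room to fit
overfitted = ud_over < 0
```

Here the first token has been pushed past the unedited model's most confident baseline (negative UD, flagged as overfitted), while the second still has positive UD and would keep being trained.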
Neural Encoding and Decoding at Scale
Accept (spotlight poster)
Summary: This article introduces a multimodal, multi-task model named "Neural Encoding and Decoding at Scale (NEDS)" for large-scale neural encoding and decoding. The model employs a novel multi-task masking strategy, enabling simultaneous bidirectional prediction between neural activity and behavior—predicting neural activity from behavior (encoding) and predicting behavior from neural activity (decoding). NEDS was pre-trained on a large-scale multi-animal dataset and fine-tuned on new animal data, demonstrating exceptional performance and generalization capabilities. Claims And Evidence: Convincing enough. Methods And Evaluation Criteria: The methodology presented in this paper bears resemblance to previous works such as POYO+ and NDT2. Moreover, the IBL repeated site dataset serves as a benchmark within the domain, underscoring the significance of this study in advancing the alignment research between neural and behavioral modalities. The contributions of this paper are thus noteworthy for their potential to enrich our understanding of the intricate interplay between neural activity and behavior. Theoretical Claims: The use of mask-based pre-training for Transformers is quite common. I am curious whether the three embeddings—Modality Embedding, Temporal Embedding, and Session Embedding—are truly effective. Can the author provide evidence to support this? Experimental Designs Or Analyses: The paper only compares two methods, POYO+ and NDT2, on a single dataset. Although some performance improvements were achieved, the overall framework does not differ fundamentally from prior work. The experimental section is also limited to evaluating performance in terms of encoding and decoding, lacking insightful scientific findings. Supplementary Material: No supplementary material in this submission. 
Relation To Broader Scientific Literature: NDT2 is a Transformer-based spatiotemporal encoder-decoder model that undergoes unsupervised pre-training through Masked Autoencoding (MAE). And POYO+ is a novel multi-task, multi-session neural decoding approach capable of decoding neural activity from different cell types and brain regions. This article combines encoding and decoding to form a neuro-behavioral conversion, an idea which I find unsurprising given its similarity to concepts already prevalent in the paper by Li et al. [1]. [1] Li Y, Lou A, Xu Z, et al. NeuroBOLT: Resting-state EEG-to-fMRI Synthesis with Multi-dimensional Feature Mapping[J]. Advances in Neural Information Processing Systems, 2025, 37: 23378-23405. Essential References Not Discussed: Li Y, Lou A, Xu Z, et al. NeuroBOLT: Resting-state EEG-to-fMRI Synthesis with Multi-dimensional Feature Mapping[J]. Advances in Neural Information Processing Systems, 2025, 37: 23378-23405. Other Strengths And Weaknesses: My main concern lies in the potential value of the contribution of the neuro-behavioral conversion presented in this article. Compared to previous works, this article seems more like an incremental project and does not address fundamental issues in the field, such as whole-brain alignment for neuro-behavioral conversion or conversion specific to the human brain, among others. Other Comments Or Suggestions: No. Questions For Authors: I don't have any more questions; I am looking forward to the author's response to the concerns I raised earlier. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal:

> The use of mask-based pre-training for Transformers is quite common. I am curious whether the three embeddings—Modality Embedding, Temporal Embedding, and Session Embedding—are truly effective...

While masked modeling is a common objective for training transformers, it is not yet clear which masking schemes work well for modeling neural activity and behavior. We aimed to address this question in our paper and with our proposed method. We appreciate the suggestion to ablate the different embeddings for NEDS. To do this, we performed an ablation study on single-session data (10 test sessions) to evaluate the importance of the temporal and modality embeddings. Session embeddings were not ablated, as they are essential for distinguishing data from different sessions. The ablation results, presented in the following table, demonstrate that the embeddings generally improve performance across the 5 tasks we evaluate (with a few exceptions). We want to note that these are single-session ablations and will be more sensitive to hyperparameters.

| | Encoding (bps) | Choice (Acc) | Block (Acc) | Wheel (R2) | Whisker (R2) |
|---|---|---|---|---|---|
| NEDS (multi-session) | **0.267** ±0.080 | **0.909** ±0.084 | **0.865** ±0.052 | **0.641** ±0.051 | **0.586** ±0.083 |
| NEDS (single-session) | 0.203 ±0.062 | 0.840 ±0.115 | 0.827 ±0.072 | 0.568 ±0.089 | 0.523 ±0.097 |
| NEDS (w/o modality embed) | 0.198 ±0.070 | 0.846 ±0.116 | 0.831 ±0.046 | 0.516 ±0.066 | 0.459 ±0.086 |
| NEDS (w/o temporal embed) | 0.243 ±0.070 | 0.842 ±0.121 | 0.835 ±0.053 | 0.523 ±0.082 | 0.478 ±0.085 |
| NEDS (w/o temporal + modality embed) | 0.246 ±0.069 | 0.859 ±0.106 | 0.830 ±0.063 | 0.530 ±0.067 | 0.469 ±0.095 |

(Choice, Block, Wheel, and Whisker are the decoding tasks.)

> The paper only compares two methods, POYO+ and NDT2, on a single dataset.
Although some performance improvements were achieved, the overall framework does not differ fundamentally from prior work. The experimental section is also limited to evaluating performance in terms of encoding and decoding, lacking insightful scientific findings.

Prior work in modeling spiking neural activity and behavior focuses on modeling a single direction of the relationship (i.e., decoding or encoding). NEDS introduces a new framework for modeling neural activity and behavior by jointly modeling the two modalities, allowing for simultaneous encoding and decoding at test time (POYO+ and NDT2 solely focus on decoding). We believe this multimodal approach is novel for this domain. We agree that more experiments are needed to extract scientific insights. However, encoding and decoding remain key tools for understanding what and how information is represented in the brain. For example, the International Brain-wide Map [1] used these methods to characterize how visual, sensory, and motor information are distributed across the mouse brain. We are excited to extend this with NEDS, leveraging large-scale, multi-animal data to capture richer information and reveal how neuronal functional profiles relate to brain anatomy, as shown in Figure 4.

> My main concern lies in the potential value of the contribution of the neuro-behavioral conversion presented in this article. Compared to previous works, this article seems more like an incremental project and does not address fundamental issues in the field, such as whole-brain alignment for neuro-behavioral conversion or conversion specific to the human brain...

We thank the reviewer for the suggested citation, which we would be happy to include. We want to emphasize that we strongly believe that our contribution is not incremental. Currently, there are very few foundation models trained on spiking data, contributing to the novelty of our approach.
Also, current foundation modeling approaches for spiking activity have primarily focused on decoding, whereas, to the best of our knowledge, no prior work has unified neural encoding and decoding. We note key differences from NeuroBOLT [2]: NeuroBOLT translates between neural modalities (fMRI and EEG) without modeling behavior, and uses separate encoders per modality, unlike NEDS, which uses a shared transformer to unify representations. We also feel that the critique related to whole-brain alignment does not apply to spiking recordings, which are spatially restricted. Whole-brain alignment is a more tractable problem in fMRI research due to its broader spatial coverage, but is not typically addressed in invasive electrophysiology.

[1] International Brain Laboratory, et al. "A Brain-Wide Map of Neural Activity during Complex Behaviour." bioRxiv (2024).

[2] Li Y, Lou A, Xu Z, et al. "NeuroBOLT: Resting-state EEG-to-fMRI Synthesis with Multi-dimensional Feature Mapping." Advances in Neural Information Processing Systems, 2025, 37: 23378-23405.
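The kind of multi-task masking this rebuttal describes — one shared model trained to predict neural tokens from behavior (encoding) and behavior tokens from neural activity (decoding) — can be sketched as follows. The mode names, mask ratio, and token representation are illustrative assumptions, not the actual NEDS implementation:

```python
import random

def sample_mask(tokens, mode=None):
    # tokens: list of (modality, value) pairs, modality in
    # {"neural", "behavior"}; returns a boolean mask where True marks
    # a token that is hidden and must be predicted from the rest.
    if mode is None:
        mode = random.choice(["encode", "decode", "random"])
    if mode == "encode":    # encoding: predict neural from behavior
        return [m == "neural" for m, _ in tokens]
    if mode == "decode":    # decoding: predict behavior from neural
        return [m == "behavior" for m, _ in tokens]
    # plain masked modeling across both modalities
    return [random.random() < 0.3 for _ in tokens]

seq = [("neural", 3), ("behavior", 0.1), ("neural", 0), ("behavior", -0.2)]
enc_mask = sample_mask(seq, mode="encode")
dec_mask = sample_mask(seq, mode="decode")
```

Sampling the mode per training example is one way a single shared transformer could learn both directions, so that at test time it can be queried as either an encoder or a decoder.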
Summary: This paper introduces Neural Encoding and Decoding at Scale (NEDS), a multimodal, multi-task model that simultaneously performs neural encoding (predicting neural activity from behavior) and neural decoding (predicting behavior from neural activity) by bridging behaviors and neural activity with a shared masked-training Transformer. The framework is evaluated on 83 mice and shows good results in single-session and multi-session cases. ## update after rebuttal Thanks for the responses. I keep my original score and hope to see these supplements and further discussion in the camera-ready version. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes, that's an empirical but interesting study. Experimental Designs Or Analyses: Yes, good. Supplementary Material: Yes, I've read all the supplementary materials. Relation To Broader Scientific Literature: Yes, some recent works have been involved for comparison. Essential References Not Discussed: No, the references are adequate. Other Strengths And Weaknesses: Strengths: That's valuable research focusing on handling neural encoding and decoding with a shared model. Substantial experiments show the effectiveness of multi-modal training in achieving good results. Weaknesses: 1) The evaluation was performed with a fixed set of 10 mice held out from the 83 mice, which may introduce randomness. 2) In my view, the work successfully validates the effectiveness of using multi-session data for pretraining and training shared models with brain and behavior data, instead of “scaling” for a large model. Therefore, the title may not be suitable for the content, though that’s cool with a short title. Other Comments Or Suggestions: 1. It could be better to extend the results to other recordings, or you may discuss the heterogeneity of the neuropixel data to show the capability of the model. 2.
Just one hold-out test is a bit thin, and it would be beneficial to do at least one or two more validations to make sure the results are stable, albeit at a cost. 3. Have you tested the impact of the token length? How do you prepare the continuous and discrete data to cover the overall behavior? 4. Is there any difference between the data from different labs (10 in total, mentioned in Section 3.1)? 5. It would be beneficial to give some details about the four kinds of behaviors, or give some references. 6. Have you introduced the design of the objective functions and some model details, such as the position embeddings? 7. Is it possible to test the performance in the animal-independent condition, which means testing the performance with fine-tuning on the 10 test mice? 8. Statistical analysis would make the results better. Questions For Authors: Please see the suggestions part. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal:

> It could be better to extend the results to other recordings, or you may discuss the heterogeneity of the neuropixel data to show the capability of the model.

We agree with the reviewer's suggestion to extend NEDS to other datasets. To address this, we are currently training NEDS on a primate motor task dataset to show its generalizability across different recording setups, species, and behavioral tasks. In future work, we are excited to extend our approach to larger and more diverse datasets.

> Just one hold-out test is a bit thin, and it would be beneficial to do at least one or two more validations to make sure the results are stable, albeit at a cost.

We agree that additional validation would strengthen the evaluation. Our results rely on a fixed hold-out set due to computational and experimental constraints, and while more rigorous cross-validation would provide a more accurate assessment of model differences, it is currently challenging given the scale of our datasets and models. The development of shared benchmarks will help enable more robust comparisons across different approaches; we plan to participate in such benchmarks in future work.

> Have you tested the impact of the token length? How do you prepare the continuous and discrete data to cover the overall behavior?

This is a great question and something we have not explored. We utilize the tokenization scheme from [1] for our model, where the neural data is binned at a specific resolution (20 ms) and then each time step is passed into the transformer as a token. We fix the context length to 2 seconds, aligned to movement onset, for all trials and animals. The model performance is more likely influenced by the time bin size than by the token length, as suggested in previous work [1]. We are currently training NEDS with two different bin sizes on a monkey dataset. We are excited to further explore this in future work.
For more details on how we tokenized continuous and discrete behaviors, please refer to Section 3.3, “Modality-Specific Tokenization,” in the main paper. At a high-level, the discrete data is transformed into tokens that are repeated multiple times to match the resolution of the continuous data. > Is there any difference between the data from different labs (total 10, mentioned in the section 3.1)? This is an interesting question! Part of the goal of the International Brain Laboratory (IBL) was to reproduce the same experiment and data across multiple labs. To do this, the IBL utilized a standardized experimental protocol and quality metrics to ensure consistent data quality across labs [2]. As can be seen in Figure 7 of [2], the authors conducted an experiment using all neurons in the IBL repeated site datasets to predict their brain region and lab identity. The findings indicate that while brain region identity can be reliably decoded from single-neuron profiles, lab identity cannot be inferred from the data. > It would be beneficial to give some details about the four kinds of behaviors, or give some references. We appreciate the feedback and will incorporate more descriptions of the behaviors into the final camera-ready. > Have you introduced the design of the objective functions and some model details, such as the position embeddings. Yes, details about the objective function and position embeddings can be found in Section 3.3, “Architecture,” of the main paper. > Is it possible to test the performance in the animal-independent condition, which means test the performance with fine-tuning on the test 10 mice? Currently, fine-tuning is required for evaluating these models to align the session and neuron-specific weights into a shared representation. Zero-shot performance of these models is an exciting future direction. > Statistical analysis would make the results better. We agree that statistical analysis would improve our results. 
To address this in the limited time window during the rebuttal, we are currently re-running our single-session analysis with different random seeds to determine whether model performance was significantly influenced by randomness. We plan to include these results in our response once the experiments are complete. [1] Pei, Felix, et al. "Neural Latents Benchmark'21: Evaluating latent variable models of neural population activity." arXiv preprint arXiv:2109.04463 (2021). [2] International Brain Laboratory, et al. "Reproducibility of in vivo electrophysiological measurements in mice." bioRxiv (2024): 2024-12. --- Rebuttal Comment 1.1: Comment: The authors haven't directly addressed most of my concerns but attributed them to future work and the camera-ready version. Especially: 1. The limitation of validation (using 10 mice held out from 83 mice). 2. The choice of key parameters, such as the token length. 3. Data heterogeneity across labs. 4. Statistical analysis. --- Reply to Comment 1.1.1: Comment: We would like to thank all the reviewers for their patience. **We are unable to post the updated results for all the reviewers** so we hope that they can look at this response. **Reviewer MUVE, geJL and MJ1y**: We thank the reviewer for suggesting an evaluation of our model’s **generalizability across diverse datasets, tasks, and unaligned data**. To this end, we tested NEDS on the MC-RTT primate motor task dataset [1], which differs significantly from the IBL visual decision-making dataset in several ways: (1) MC-RTT uses Utah arrays, whereas IBL relies on Neuropixels recordings; (2) MC-RTT involves monkeys, while IBL uses mice; (3) MC-RTT focuses on a random target reaching motor task, unlike the visual decision-making task in IBL; and (4) MC-RTT data is unaligned, compared to the trial-aligned structure in IBL. As the MC-RTT dataset contains only a single recording session, we compared the single-session variant of NEDS against a state-of-the-art MLP decoder. 
Encoding quality was measured via bits-per-spike (bps), and decoding performance for finger velocity was measured using R2. We utilized the MLP and data splits from https://github.com/seanmperkins/bci-decoders/. We tuned the learning rate of the MLP, keeping other parameters fixed to the defaults provided in the repository. We trained 30 random models for NEDS and chose the one with the best validation performance, similar to what we did in the main text. The results on the MC-RTT dataset (20 ms bins), summarized in the table below, show that NEDS works well across recording modalities, species, and tasks.

| Method | Encoding (bps) | Decoding (Vel R2) |
|---------------------|----------------|--------------------|
| MLP | NA | 0.66440 |
| Unimodal NEDS | 0.03711 | 0.65029 |
| Multimodal NEDS | **0.07168** | **0.71786** |

[1] Pei, Felix, et al. "Neural Latents Benchmark'21: Evaluating latent variable models of neural population activity." arXiv preprint arXiv:2109.04463 (2021). ---------------------------- **Reviewer MJ1y** (statistical analysis): We re-ran our single-session analysis with different random seeds to determine whether model performance was significantly influenced by randomness. The results, shown in the table below, indicate that performance is consistent across seeds for all tasks—except for whisker motion energy decoding, which exhibits some variability. We expect that multi-session pre-training would reduce this variability, as it is easy to overfit during single-session training.
| Seed | Encoding (bps) | Choice (Acc) | Block (Acc) | Wheel (R2) | Whisker (R2) | |----------|----------------------|----------------------|----------------------|--------------------|--------------------| | seed-42 | 0.202872 ± 0.06201 | 0.83986 ± 0.11547 | 0.82679 ± 0.07201 | 0.56819 ± 0.08910 | 0.52340 ± 0.09704 | | seed-43 | 0.20549 ± 0.07061 | 0.82951 ± 0.11135 | 0.82841 ± 0.07333 | 0.52912 ± 0.07930 | 0.45787 ± 0.10177 | | seed-44 | 0.18999 ± 0.06888 | 0.83449 ± 0.13023 | 0.82645 ± 0.05390 | 0.52050 ± 0.06860 | 0.47325 ± 0.09959 | | Average | 0.19945 ± 0.00830 | 0.83462 ± 0.00518 | 0.82722 ± 0.00105 | 0.53926 ± 0.02542 | 0.48484 ± 0.03427 |
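As a quick sanity check on the Average row above (assuming the ± value there is the sample standard deviation across the three seeds, shown here for the Choice column):

```python
import numpy as np

# Choice accuracy across seeds 42-44, taken from the table above.
choice_acc = np.array([0.83986, 0.82951, 0.83449])

mean = choice_acc.mean()
std = choice_acc.std(ddof=1)  # sample std across seeds
print(round(mean, 5), round(std, 5))  # → 0.83462 0.00518
```

The recomputed values match the reported Average row, consistent with the ± entries being across-seed sample standard deviations.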
Summary: This paper proposes NEDS, a multimodal, multi-task auto-encoder to learn meaningful representations of neural activity and behavior. In brief, the model is based on an encoder-only transformer that tokenizes spikes in a similar scheme to NDT (linear projection of binned spikes), as well as both continuous and discrete behaviors, then decodes the same information from a multi-head decoder. This configuration allows the authors to perform multiple tasks: predicting behavior from neural activity, neural activity from behavior, and solving different masking problems. The authors demonstrate their model on the International Brain Lab (IBL) dataset, showcasing that it performs better at many of these tasks than a slate of baseline models: linear decoders, reduced rank regression, unimodal models, POYO+ and NDT2. They find that the latent embeddings from the model correlate with which brain region the data came from, bringing confidence that it has learned a meaningful representation of the neural data. Claims And Evidence: Yes, the claims are well-substantiated and the evidence is clear and compelling. Methods And Evaluation Criteria: To evaluate this model, the key question is whether the proposed method works better than prior methods: they show this through extensive and convincing evaluation with an appropriate slate of benchmark models. Indeed, the extent of the improvements they demonstrate is remarkable compared to prior art; I was impressed with how much better this did than their well-tuned baseline. A secondary consideration is whether the model gives new insights, and their Figure 4 is an interesting proof of concept of that; though I should add that region decoding is a fairly artificial and rather easy task, it is nevertheless reassuring that this works. Theoretical Claims: No theoretical claims in this paper. Experimental Designs Or Analyses: I did check the soundness of their experiments and analyses.
Again, this is pretty well-trodden territory, the IBL dataset is a well-understood dataset, used prior in many papers, including in prior work from Zhang et al. (2024), and the dataset selection, model selection, etc. seem appropriate and in line with the literature. Supplementary Material: I had a brief look, this is mostly detailed methods, it seemed fine. There are not many details on the POYO+ baseline which is under review, which makes it hard to judge, but I trust that POYO+ is like Azabou et al. (2023), with some tweaks to the architecture. Relation To Broader Scientific Literature: The proposed method builds on prior work, including some of the insights from POYO (a multi-head decoder), NDT (tokenization scheme), and the universal translator from Zhang et al. 2024 (multi-mask scheme). The key innovation is using one model to co-embed behavior and spikes. This deserves a publication in and of itself. Essential References Not Discussed: None. Other Strengths And Weaknesses: There are nice figures, the text is well-written. It would be nice to see this applied to other datasets in a follow-up paper, but there's plenty here to warrant publication. Other Comments Or Suggestions: The reference to Azabou et al. (the original POYO paper) has the wrong date, it was published at NeurIPS 2023, not 2024. Questions For Authors: The paper is very clear, no further questions. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We appreciate that the reviewer found our paper well-written and our evaluation convincing. We agree that evaluating NEDS on additional datasets would make our paper stronger. In response, we are currently applying NEDS to a primate motor task dataset (MC-RTT [1]) to demonstrate its generalizability across different recording modalities, species, behavioral tasks, and data structures. We plan to include these results in our response once the experiments are complete. [1] Pei, Felix, et al. "Neural Latents Benchmark'21: Evaluating latent variable models of neural population activity." arXiv preprint arXiv:2109.04463 (2021).
Summary: The paper introduces Neural Encoding and Decoding at Scale (NEDS), a multimodal, multi-task model designed to simultaneously predict neural activity from behavior (encoding) and behavior from neural activity (decoding) using large-scale, multi-animal datasets. NEDS employs a novel multitask-masking strategy that alternates between neural, behavioral, within-modality, and cross-modality masking, implemented within a transformer-based architecture. The model is pretrained on the International Brain Laboratory (IBL) repeated site dataset, comprising Neuropixels recordings from 83 mice performing a visual decision-making task, and fine-tuned on held-out animals. The main findings include: (1) NEDS achieves state-of-the-art performance in both encoding and decoding compared to baselines like POYO+ and NDT2; (2) performance scales with pretraining data and model capacity; and (3) NEDS’s latent embeddings exhibit emergent properties, predicting brain regions with 83% accuracy without explicit training Claims And Evidence: The claims made in the submission are largely supported by clear and convincing evidence. The authors claim that NEDS outperforms existing large-scale models in both encoding and decoding, which is substantiated by quantitative comparisons with POYO+, NDT2, and linear baselines across tasks like choice, block prior, wheel speed, and whisker motion energy. The evidence includes performance metrics such as bits per spike (bps) for encoding and accuracy/$R^2$ for decoding, computed on 10 held-out animals. The claim of scalability with pretraining data is supported by the improved performance of multi-session NEDS over single-session NEDS. The emergent property of brain region prediction is convincingly demonstrated through a linear classifier achieving 83% accuracy on neuron embeddings. No claims appear problematic, as the results are consistently backed by experimental data and visualizations. 
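The multitask-masking strategy summarized above, which alternates between neural, behavioral, within-modality, and cross-modality masking, can be sketched roughly as follows (a toy illustration with an assumed token layout, not the paper's implementation): each training step samples one of the four modes and hides the corresponding tokens before reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps = 100
# Assumed layout: one neural and one behavioral token per time step.
modality = np.array(["neural", "behavior"] * n_steps)

def sample_mask(modality, rng):
    """Pick one masking mode for this step; True = token is masked out."""
    mode = rng.choice(["neural", "behavior", "within", "cross"])
    if mode in ("neural", "behavior"):
        # Mask one entire modality (decode it from the other).
        return modality == mode, mode
    # Otherwise mask a random subset of a randomly chosen modality.
    target = rng.choice(["neural", "behavior"])
    mask = (modality == target) & (rng.random(len(modality)) < 0.5)
    return mask, mode

mask, mode = sample_mask(modality, rng)
print(str(mode), int(mask.sum()))
```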
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-suited to the problem of modeling bidirectional relationships between neural activity and behavior. The multitask-masking strategy is a logical extension of masked modeling techniques, allowing flexibility for both encoding and decoding within a single framework. The use of the IBL dataset, with its standardized recordings across 83 mice, is an appropriate benchmark for evaluating scalability and generalization in systems neuroscience. Evaluation metrics—bits per spike for encoding and accuracy/$R^2$ for decoding—are standard and meaningful for assessing neural prediction tasks. The comparison with state-of-the-art models (POYO+, NDT2) and linear baselines ensures a robust evaluation. However, the reliance on trial-aligned data and simple behavioral variables (e.g., wheel speed, choice) may limit the generalizability to more complex tasks, though this is acknowledged as a limitation in Section 7. Theoretical Claims: The paper does not present formal theoretical proofs requiring verification. Experimental Designs Or Analyses: I reviewed the experimental designs and analyses in Sections 5 and 6, focusing on single-session, multi-session, and brain region classification experiments. The designs are sound: pretraining on 74 sessions and fine-tuning on 10 held-out sessions is a standard approach, and the train-validation-test split (70%-10%-20%) is appropriate. The ablation study on masking schemes (Appendix B) validly isolates the contribution of each component, showing that within-modality and cross-modal masking enhance performance. Hyperparameter tuning using Ray Tune (Appendix C) is rigorous. The brain region classification experiment is well-executed, using 5-fold cross-validation and comparing unimodal vs. multimodal embeddings. 
Supplementary Material: I reviewed the supplementary material in Appendices A-F, including the bits per spike metric (A), masking scheme ablation (B), model and hyperparameter details (C), model size effects (D), benchmark comparisons (E), and training details (F). The materials are very detailed and ensure reproducibility as well as the soundness of the experiments. Relation To Broader Scientific Literature: NEDS builds on prior work in neural encoding and decoding, its main algorithmic inspiration being He et al.'s 2022 work on Masked Autoencoders for computer vision tasks. Based on this idea, it extends unimodal approaches like POYO+ (decoding-focused) and NDT2 (decoding-capable) by unifying encoding and decoding, addressing a gap noted in the literature. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: The paper’s originality lies in its creative combination of masked modeling and multi-task learning, enabling a flexible, bidirectional model—a significant advance over task-specific predecessors. The clarity of explanation is strong. The potential for real-world impact, such as brain-computer interfaces, enhances its significance. The emergent brain region prediction also adds a valuable insight. Weaknesses: The reliance on mouse data and simple behavioral tasks (e.g., wheel speed, whisker motion) limits the demonstration of NEDS’s capability for complex behaviors like visual decoding. While the authors suggest extensions to other modalities (Section 7), the current scope feels narrow given the “foundation model” ambition. The computational constraint on hyperparameter tuning for multi-session models (Section 7) is a minor weakness, though mitigated by practical tuning on a subset. Other Comments Or Suggestions: N/A Questions For Authors: Generalizability to Complex Tasks: The evaluation focuses on simple behaviors (e.g., choice, wheel speed) in mice.
Have you tested or plan to test NEDS on more complex tasks, such as visual decoding or multi-step decision-making? Potential for unaligned data: The authors mentioned this in the limitations. Given POYO is capable of training on unaligned data and the claim that NEDS's potential for this extension, how would this be possible for NEDS and why isn't it explored in this work? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewer’s suggestion to evaluate the generalizability of our model on additional datasets, tasks, and unaligned data. Given the complexity of the neural recordings we analyze in our paper, spanning multiple brain regions and animals, we intentionally focused on a small set of well-defined, trial-aligned behaviors (e.g., choice) for our evaluation. We agree with the reviewer that it would be interesting to test NEDS across more complex datasets and tasks. To address this, we are currently training multiple baseline models and NEDS on the MC-RTT primate motor task dataset [1], which differs significantly from the IBL visual decision-making task and is also unaligned. We plan to include these results in our response once the experiments are complete. [1] Pei, Felix, et al. "Neural Latents Benchmark'21: Evaluating latent variable models of neural population activity." arXiv preprint arXiv:2109.04463 (2021).
CLARIFY: Contrastive Preference Reinforcement Learning for Untangling Ambiguous Queries
Accept (poster)
Summary: This paper addresses the query selection problem of PbRL. The authors propose a representation learning algorithm that embeds trajectories into high-dimensional vectors and enlarges the distance between unambiguous trajectory pairs. The authors compare their proposed method with existing PbRL methods. Claims And Evidence: The experiment results include a table for task performance and an ablation study showing that their method can increase the clarity of queries. These results confirm the efficacy of their method. In the meantime, this paper does not explicitly mention if the preference labels are collected in a single round or multiple rounds. If the preference labels are collected in multiple rounds, then the embedding space at round $T$ depends on previous rounds. In this setting, the method proposed in this paper can be considered an exploitation strategy. In the meantime, exploration is also important: it seems that allocating the label budget to ambiguous queries could be beneficial, as it might help us resolve the decision boundary. If the preference labels are collected in multiple rounds, could you please provide results for label efficiency? In other words, how does performance change as we get more preference labels? Methods And Evaluation Criteria: This paper compares methods using the task performance of offline RL, which is standard in this area. Theoretical Claims: No, I did not check the correctness of the proofs. Experimental Designs Or Analyses: Yes, I have checked the experiment results. 1. This paper overlooks the analysis of the exploitation-exploration tradeoff in the experiments. 2. Since the authors claim that their method mitigates the overfitting issue, please provide results for the quality of reward learning, i.e. accuracy on a test preference set. Supplementary Material: No.
Relation To Broader Scientific Literature: Embedding entire trajectories into a high-dimensional space and adjusting the embeddings based on the proximity between trajectories is an interesting idea for the offline RL and the RL literature. Essential References Not Discussed: No. Other Strengths And Weaknesses: Ambiguous queries do not have labels of "1" or "-1", so they are not included in the samples used for minimizing Eq. 6. The paper lacks an explanation for why minimizing Eq. 6 resolves representation collapse of ambiguous queries. Other Comments Or Suggestions: 1. In lines 199-200, I think the "positive" samples should be $\sigma_+$ and $\sigma'_+$. Questions For Authors: 1. I notice that the preference loss $\mathcal{L}_\text{reward}$ does not appear in Eq. (11). How do you get the reward function from the trajectory embeddings? What is the reason for not learning the representations with both representation losses and reward learning losses? 2. Are the preference labels collected in a single round or multiple rounds? Could you please provide results for the exploitation-exploration trade-off? 3. Can you provide results for reward fitting (i.e. accuracy on unseen preference samples)? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer, Thanks for your valuable and detailed comments. We hope the following response clears your concerns. **We conducted additional experiments and the results are shown in the [link](https://docs.google.com/document/d/e/2PACX-1vS0ZIKigh-syAaNtcr2Udzk8katEE6AtC0OA23Xveb1dUqzFtMws64U6o6GFUep_BTzQ0EaA770n88P/pub).** **Claim 1, Experimental Designs 1 and Q2: Preference collection and exploration-exploitation trade-off.** **A for Claim 1, Experimental Designs 1 and Q2:** The preference labels in our study are collected over multiple rounds, with the embedding space updated iteratively as new queries are selected. As for the exploration-exploitation trade-off, CLARIFY prioritizes exploitation by focusing on unambiguous queries, while its rejection sampling strategy (Sec. 4.2) inherently supports exploration. This strategy diversifies the queries by sampling from a distribution, avoiding overemphasis on the "clearest" pairs. As suggested, we conducted experiments to evaluate the exploration-exploitation trade-off. We compared CLARIFY against two baselines: (1) pure exploration (random query selection, referred to as "Random") and (2) pure exploitation (maximizing the density for clearly-distinguished queries $\rho_\text{clr}(d_\text{emb})$, referred to as "Exploitation"). As shown in Tables 1 and 2 in the supplement link, CLARIFY outperforms Random by over 20\% in success rate, and surpasses Exploitation by over 15\%. These results demonstrate an effective exploration-exploitation trade-off. **Claim 2: Label efficiency.** **A for Claim 2:** As suggested, we evaluate CLARIFY's label efficiency from 50 to 2000 queries. Table 3 in the supplement link shows that CLARIFY consistently outperforms baselines under various query numbers. In addition, we observed that the performance of CLARIFY with 100 queries approaches that of MR with 1000 queries, demonstrating CLARIFY's effectiveness in achieving high performance with fewer labels.
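The rejection sampling idea mentioned above can be sketched as follows (the acceptance rule here is an illustrative assumption; the paper's $\rho_\text{clr}(d_\text{emb})$-based procedure differs in detail): candidate pairs are accepted with probability that grows with their embedding distance, which favors well-separated pairs without always picking the single clearest one.

```python
import numpy as np

def select_queries(embeddings, n_queries, rng):
    """Accept a random candidate pair with probability proportional to its
    embedding distance (normalized by the largest pairwise distance)."""
    n = len(embeddings)
    dists = np.linalg.norm(embeddings[:, None] - embeddings[None, :], axis=-1)
    d_max = dists.max()
    chosen = []
    while len(chosen) < n_queries:
        i, j = rng.choice(n, size=2, replace=False)
        if rng.random() < dists[i, j] / d_max:  # rejection step
            chosen.append((int(i), int(j)))
    return chosen

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(50, 8))  # 50 toy trajectory embeddings
queries = select_queries(embeddings, n_queries=10, rng=rng)
print(len(queries))  # → 10
```

Because acceptance is stochastic rather than greedy, nearby pairs still have some chance of being queried, which is the exploration component of the trade-off discussed above.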
**Experimental Designs 2 and Q3: Reward fitting accuracy on the test set.** **A for Experimental Designs 2 and Q3:** As suggested, we evaluate CLARIFY's reward fitting accuracy on the test set. Table 4 in the supplement link reflects the effectiveness of reward fitting, showing that CLARIFY attains about 3\% higher accuracy than OPRL on most tasks. **W1: How Eq. 6 resolves representation collapse.** **A for W1:** The quadrilateral loss $\mathcal L_\text{quad}$ prevents representation collapse by enforcing preference-aware geometry: 1. For clear preferences, it creates a hyperplane separating good and bad trajectories (Proposition 5.2). 2. For ambiguous pairs, while $\mathcal L_\text{amb}$ minimizes their embedding distance, $\mathcal L_\text{quad}$'s contrastive gradient prevents trivial clustering of these pairs. To support this, we conduct an ablation study on $\mathcal L_\text{amb}$, as shown in Table 5 in the supplement link. Removing $\mathcal L_\text{quad}$ degrades performance by about 10\% on average, validating its necessity. **A for Other Comments:** Thank you for your keen attention to detail! We have corrected the positive and negative samples to $(\sigma_+, \sigma^\prime_+), (\sigma_-, \sigma^\prime_-)$ in the revised manuscript. **Q1: $\mathcal L_\text{reward}$ optimization.** **A for Q1:** To elaborate, we present an additional figure (Figure 1 in the supplement link) that illustrates the architecture of CLARIFY. As shown in the figure, the reward model is updated using the loss function $\mathcal L_\text{reward}$ (Eq. 2) during the reward learning phase, while the embedding space is updated according to Eq. 11 during the embedding learning phase. These two phases are strictly decoupled. Thus, the loss function $\mathcal L_\text{reward}$ is not incorporated into Eq. 11. Training the reward model solely with the Bradley-Terry loss is a standard approach in the PbRL literature [1,2,3]. 
This separation allows preference learning to focus on human feedback, while embeddings specialize in distinguishing trajectory pairs. Thanks again for the valuable comments. We hope our response has cleared your concerns. We are looking forward to more discussions. [1] Lee, Kimin, et al. "B-pref: Benchmarking preference-based reinforcement learning." arXiv preprint arXiv:2111.03026 (2021). [2] Shin, Daniel, et al. "Benchmarks and Algorithms for Offline Preference-Based Reward Learning." Transactions on Machine Learning Research. [3] Cheng, Jie, et al. "RIME: Robust Preference-based Reinforcement Learning with Noisy Preferences." International Conference on Machine Learning. PMLR, 2024.
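For reference, the Bradley-Terry preference loss discussed above has the following standard form (a minimal numpy sketch; the per-step rewards are illustrative, and Eq. 2 in the paper should be consulted for the exact formulation):

```python
import numpy as np

def bt_loss(r_preferred, r_rejected):
    """Negative log-likelihood that the preferred segment wins under the
    Bradley-Terry model, given per-step rewards for each segment."""
    logit = r_preferred.sum() - r_rejected.sum()
    return float(np.log1p(np.exp(-logit)))  # equals -log sigmoid(logit)

r_good = np.array([0.9, 1.1, 0.8])  # illustrative per-step rewards
r_bad = np.array([0.1, 0.2, 0.0])

# The loss is small when the reward model already ranks the pair correctly,
# and large when the ranking is flipped.
print(bt_loss(r_good, r_bad), bt_loss(r_bad, r_good))
```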
Summary: This paper presents CLARIFY, a method that selects unambiguous queries that humans can more easily label. It does this by learning a meaningful embedding space using two contrastive losses. This allows for weaker teachers to provide meaningful feedback on the selected trajectories. Experimental results in continuous control tasks show that CLARIFY can significantly improve labelling efficiency and improve policy performance. ## After Rebuttal The authors have nicely addressed my concerns. I will raise my score to a 4. Claims And Evidence: The only claim I have an issue with is their claim that Mu 2024 cannot be applied to offline settings (this claim is made on line 162). I have not looked into Mu 2024 deeply, but I do not see why it cannot be applied to offline settings with some simple modifications. Furthermore, I don't understand why the offline setting is important for the tasks the authors consider. The main claims about the performance gains of Clarify are solid. They conduct many experiments and it seems like Clarify can indeed improve performance in most settings. Methods And Evaluation Criteria: The experiments and evaluation criteria do make sense. This paper works to improve RLHF, and some of the earliest papers on RLHF focused on continuous control [1]. They evaluate based on the average ground-truth reward received, which makes sense. However, their experiments are all conducted on relatively simple continuous control environments. One concern I have is that their approach of only selecting easy samples to train on will work better in easy settings than in hard settings. This means their experimental setup may overestimate the utility of their method. [1] Christiano, Paul F., et al. "Deep reinforcement learning from human preferences." Advances in neural information processing systems 30 (2017). Theoretical Claims: no, I did not check them for correctness.
Experimental Designs Or Analyses: Yes, I did check the soundness of the experimental design. The experimental design and analysis are sound. They compare their algorithm with many baselines, and they conduct experiments in a wide variety of environments. In addition, they report five independent runs for each algorithm, and report standard deviation for all experiments. The main experimental results I am referring to are in Table 1. They also show that CLARIFY results in queries that are more distinguishable to humans (Figure 4), and these results also seem sound. Supplementary Material: I briefly read the appendix. Relation To Broader Scientific Literature: The authors' approach seems novel and very relevant to data selection for RLHF, which is a popular area at the moment. I think the whole CLARIFY framework is novel, but it needs more detailed comparison to Mu 2024. Essential References Not Discussed: Mu 2024 is cited, but not discussed in depth. Since Mu 2024 aims to accomplish a very similar goal as CLARIFY (albeit in the online setting), I think CLARIFY's novelty and contribution in comparison to Mu 2024 should be discussed in more detail. Other Strengths And Weaknesses: Strengths - The writing is easy to understand, and the presentation is nice - The approach seems to be novel - The experiments are thorough and their approach shows solid improvement compared to baselines. Weaknesses: - The authors do not compare in depth to Mu 2024 (why can’t it be included in the offline setting?) - There is no discussion of offline versus online approaches. Why do the authors sample queries from the offline dataset, rather than sampling new on-policy queries from the current policy? It seems like this would be a more effective approach in all of the experimental settings the authors consider. Other Comments Or Suggestions: none Questions For Authors: See the above sections. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer, Thanks for finding our paper nicely presented and novel, with thorough experiments and solid improvements. We hope the following response clears your concerns. **Claim and W1: Comparison to [1] and its offline applicability.** **A for Claim and W1:** - While both CLARIFY and [1] attempt to tackle ambiguous queries, their approaches are fundamentally different. [1] addresses ambiguous queries by learning diverse skills through unsupervised exploration. The requirement for online exploration makes it incompatible with offline settings, where agents cannot interact with the environment. In contrast, CLARIFY operates entirely offline by learning preference-informed trajectory embeddings through contrastive learning. This eliminates the need for online exploration, making CLARIFY more suitable for real-world scenarios where environment interaction is costly, risky, or unavailable. - Additionally, CLARIFY optimizes query distinguishability directly via embedding distances, whereas [1] relies on skill diversity as a proxy. This independence from skill diversity enhances CLARIFY's robustness in tasks with constrained skill exploration (e.g., limited state-space traversability), which eliminates the reliance on costly exploration for discovering diverse skills. **Methods 1: Task complexity and generalizability.** **A for M1:** We appreciate the reviewer’s concern regarding task complexity. Our experiments include challenging tasks, such as MetaWorld's peg-insert-side, which require multi-step reasoning and are more complex than prior RLHF [2] evaluations focused on locomotion. CLARIFY significantly outperforms baselines across these tasks, as shown in Table 1 of the paper, demonstrating its effectiveness beyond simple settings. **W2: Importance of offline setting and query sampling.** **A for W2:** Offline reinforcement learning assumes agents learn solely from a fixed pre-collected dataset without environment interaction.
It is critical for safety-sensitive domains (e.g., healthcare, autonomous systems), where real-time exploration is either unsafe or impractical. In such cases, agents must rely on pre-collected datasets for learning, and thus, queries should be sampled from this static dataset rather than generated on-policy. This aligns with CLARIFY's approach, which leverages offline datasets and contrastive learning to ensure the method's applicability in real-world, offline settings. We sincerely thank the reviewer again for the timely and valuable comments. We hope that our response and additional experimental results have cleared most of your concerns. [1] Mu, Ni, et al. "S-EPOA: Overcoming the Indistinguishability of Segments with Skill-Driven Preference-Based Reinforcement Learning." arXiv preprint arXiv:2408.12130 (2024). [2] Christiano, Paul F., et al. "Deep reinforcement learning from human preferences." Advances in neural information processing systems 30 (2017).
Summary: This paper proposes an offline PbRL framework, CLARIFY, to address challenges arising from ambiguous queries. The method learns a trajectory embedding space through contrastive learning and utilizes the learned embedding to maximize the selection of clearly distinguished queries via rejection sampling, improving human labeling efficiency and achieving state-of-the-art results in offline PbRL settings. ## Update After Rebuttal: I raised my score from 2 to 3 as the authors' rebuttal addressed most of my concerns. Claims And Evidence: The paper claims that the proposed embedding space is meaningful and coherent, as high-performance trajectories form one cluster, low-performance trajectories form another, and intermediate trajectories transition smoothly between them. The authors support this claim using Figures 2 and 3, which visualize the learned embedding spaces. My understanding is that Figure 2 represents the embedding space learned using only quadrilateral loss, while Figure 3 shows the embedding space learned with both ambiguity loss and quadrilateral loss. However, they appear similar. It would be helpful to highlight the failure points of the embedding space when using only ambiguity loss or only quadrilateral loss and demonstrate how combining these two losses addresses the problem. Methods And Evaluation Criteria: The proposed methods and benchmark datasets are reasonable for the problem or application at hand. Theoretical Claims: The theoretical claims appear reasonable and effectively cover the core concept proposed in the paper. Experimental Designs Or Analyses: Adding a naive baseline that simply rules out ambiguous sample pairs with "p = no_cop" and comparing its performance with the proposed method would help readers better appreciate the effectiveness of the proposed approach. Additionally, the proposed method introduces extra complexity to the PbRL algorithm by (1) learning an embedding space and (2) performing rejection sampling. 
Moreover, its final loss function involves several hyperparameters (e.g., $\lambda_{\mathrm{amb}}$, $\lambda_{\mathrm{quad}}$, $\lambda_{\mathrm{norm}}$). Comparing the computational costs and hyperparameter tuning efforts with other baselines would provide valuable insights into the true applicability of the proposed method. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: Designing effective reward learning within the PbRL framework is closely related to the broader scientific literature. Essential References Not Discussed: There is potentially related literature that incorporates rejection sampling into PbRL or Preference Optimization, such as [1] Statistical Rejection Sampling Improves Preference Optimization, Liu et al., ICLR 2024. Other Strengths And Weaknesses: Strengths: - The paper is well-written and easy to follow. - The claims are largely supported by visualizations and theoretical analyses. - The paper evaluates the method on diverse benchmark datasets, demonstrating its real-world applicability. Weaknesses: - The effect of each loss term in the final proposed loss is not empirically studied in depth. - The computational overhead of learning the embedding and the effort required for hyperparameter tuning are not thoroughly analyzed. Other Comments Or Suggestions: There is a typo in Line 179: 'amb' should be subscripted, e.g., $\mathcal{L}_{\mathrm{amb}}$. Questions For Authors: Q1. Why and how do ambiguous queries hinder the practical application of PbRL? Is this issue solely related to labeling efficiency, or does it also impact policy training within the PbRL framework? Q2. Could you elaborate on the differences between S-EPOA: Overcoming the Indistinguishability of Segments with Skill-Driven Preference-Based Reinforcement Learning, Mu et al., 2024, and the proposed method beyond the distinction between online and offline settings? Q3. 
How do the computational overhead and the effort required for hyperparameter tuning of the proposed method compare to those of other baselines? Q4. Could you visualize the failure points of the embedding space or the performance results when using only ambiguity loss or only quadrilateral loss and demonstrate how combining these two losses resolves the issue? Q5. How does the proposed method compare to a naive approach that simply rules out pairs with 'p = no_cop'? Code Of Conduct: Affirmed. Overall Recommendation: 3
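The rejection-sampling step this review asks about can be sketched in a few lines. This is a minimal illustration of the general idea only, not the authors' implementation: the function names, the fixed distance threshold, and the use of Euclidean embedding distance are all assumptions made for the sketch.

```python
import random

def embedding_distance(z1, z2):
    """Euclidean distance between two trajectory embeddings."""
    return sum((p - q) ** 2 for p, q in zip(z1, z2)) ** 0.5

def select_clear_queries(candidate_pairs, threshold, n_queries,
                         max_tries=10000, rng=None):
    """Rejection sampling over candidate query pairs: accept a pair only
    if its embeddings are farther apart than `threshold`, on the premise
    that well-separated embeddings are easier for a labeler to compare."""
    rng = rng or random.Random(0)
    selected = []
    for _ in range(max_tries):
        if len(selected) == n_queries:
            break
        z1, z2 = rng.choice(candidate_pairs)
        if embedding_distance(z1, z2) > threshold:  # accept; otherwise reject
            selected.append((z1, z2))
    return selected
```

In this toy form, the acceptance rate directly reflects how many candidate pairs exceed the distance threshold, which is one way to read the "query clarity ratio" discussed elsewhere in the reviews.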
Rebuttal 1: Rebuttal: Dear Reviewer, Thanks for your valuable and detailed comments. We hope the following statements clear your concerns. **We conducted additional experiments and the results are shown in the [link](https://docs.google.com/document/d/e/2PACX-1vQX0KIRCSWV8LrON718raf-d_BL75LRXMY5yB-Ts28kW0BZIVyWHan0kgw54vnZQtuxp1ODwe4IH1ws/pub).** **Claim, W1 and Q4: Embedding failure cases.** **A for Claim and Q4:** We would like to clarify that Figure 2(c) in the original paper visualizes embeddings trained with both losses rather than using only quadrilateral loss. As suggested, we visualize the failure modes in Figure 1 in the supplemental link. Specifically, using only $\mathcal L_\text{amb}$ causes erroneous clustering, where low and high return trajectory embeddings are intermixed (Figure 1(a) in the supplemental link). In contrast, using only $\mathcal L_\text{quad}$ yields densely packed embeddings with insufficient separation (Figure 1(b)). Combining both losses resolves these issues and leads to smooth, coherent clusters (Figure 1(c)). This illustrates their complementary roles in structuring the embedding space. **Experimental Design 1 and Q5: Naive baseline comparison.** **A for Experimental Design 1 and Q5:** As suggested, we compare CLARIFY with the naive approach that rules out pairs with `no_cop` preference labels, as shown in Table 1 in the supplemental link. CLARIFY outperforms the naive approach (the "Naive" method in the table) by over 50\% on most tasks. This demonstrates the effectiveness of CLARIFY. **Experimental Design 2, W2 and Q3: Computational costs.** **A for Experimental Design 2, W2 and Q3:** - As suggested, we analyze the computational cost and the effort required for hyperparameter tuning of CLARIFY, as shown in Table 2 in the supplemental link. CLARIFY incurs moderate computational overhead, roughly 2-3 times that of OPRL, primarily due to embedding learning.
While training time may increase, CLARIFY effectively identifies more clearly distinguished queries, accelerating the labeling process. - On the other hand, CLARIFY demands minimal hyperparameter tuning. In the original paper, key hyperparameters ($\lambda_\text{amb},\lambda_\text{quad}$) were fixed across tasks with robust performance. To support this, we conduct additional experiments to visualize the embeddings under various hyperparameter configurations, as in Figure 2 in the supplement link, illustrating the stability of embeddings under parameter variations. We have added these results in the revised version. **Q1: Ambiguity's impact on PbRL.** **A for Q1:** Ambiguous queries hinder PbRL primarily in two ways: 1. Labeling efficiency: Human teachers often struggle to differentiate between similar segments, leading to skipped labels (`no_cop`) that waste annotation effort. As in Table 3 of the original paper, only about 50% of queries receive clear preference labels, indicating significant inefficiencies. 2. Reward learning accuracy: Labelers may produce random or incorrect preferences when segments are only marginally different. This can introduce errors to the reward model, ultimately degrading policy performance. **Q2: CLARIFY vs S-EPOA [1].** **A for Q2:** While both methods tackle ambiguous queries, they differ fundamentally in their approaches. S-EPOA addresses indistinguishability via unsupervised exploration and skill-based query selection in online settings. In contrast, CLARIFY utilizes contrastive learning to create trajectory embeddings informed by preferences, which enables offline query filtering. One of the key advantages of CLARIFY is that it eliminates the need for online exploration, making it particularly suitable for real-world applications, where interaction can be costly or risky. Furthermore, CLARIFY directly optimizes query distinguishability rather than relying on skill diversity as a surrogate. 
This independence from skill diversity enhances CLARIFY's robustness in tasks with constrained skill exploration (e.g., limited state-space traversability), eliminating reliance on costly exploration for discovering diverse skills. **Answer to Essential References and Other Comments:** Thank you for your keen attention to detail! We have subscripted $\mathcal L_\text{amb}$ throughout the manuscript. Additionally, we have discussed [2] in the related work section. Thanks again for the valuable comments. We sincerely hope our additional experimental results and explanations have cleared your concerns. More comments on further improving the presentation are also very welcome. [1] Mu, Ni, et al. "S-EPOA: Overcoming the Indistinguishability of Segments with Skill-Driven Preference-Based Reinforcement Learning." arXiv preprint arXiv:2408.12130 (2024). [2] Liu, Tianqi, et al. "Statistical Rejection Sampling Improves Preference Optimization." The Twelfth International Conference on Learning Representations. --- Rebuttal Comment 1.1: Comment: Thank you for your thoughtful rebuttal. I appreciate the detailed responses to my questions and have increased my score to 3 in light of your clarifications. --- Reply to Comment 1.1.1: Comment: We would like to thank the reviewer for raising the score! We also appreciate the valuable comments, which helped us significantly improve the paper.
Summary: This paper presents CLARIFY, an offline preference-based reinforcement learning (PbRL) algorithm that leverages contrastive learning to organise the embedding space which is used to learn the reward function. During the reward-learning phase, CLARIFY alternates between learning a reward via Bradley-Terry and a contrastive objective that encourages preferred state-action pairs to cluster together. This paper additionally shows that under CLARIFY: i) the distance between two trajectory embeddings has a lower bound, and that ii) there is a hyperplane that separates the embeddings of all preferred trajectories, and all dispreferred trajectories. Experiments on Metaworld and DMControl tasks show that IQL policies using the reward learned by CLARIFY outperform recent offline preference-based learning baselines (including OPRL, PT, OPPO and LiRE) in all but one scenario. **Post-rebuttal update** The main concern about this paper was whether CLARIFY was robust to noisier labellers, with additional ablations showing that it indeed was. There were also additional concerns regarding the clarity of the text and the data flow of the method that were equally clarified during rebuttal. Claims And Evidence: The following claims are made in the paper: **C1. CLARIFY provides better performance than current offline PbRL baselines under non-ideal teachers** This claim is mostly supported by the thorough experiments in Table 1, where CLARIFY beats the baseline in all tasks except Metaworld's `peg-insert-side`. The robustness of CLARIFY to different non-ideal scripted teachers is not sufficiently investigated, however. The teacher presented in CLARIFY can skip trajectories whose ground-truth reward is close, but will never flip preferences (i.e., choosing a non-preferred trajectory over a preferred one), or skip preferences where the ground-truth rewards are large.
Even with the presented scripted teacher, it is hard to gauge the effect of $\epsilon$ on the expected performance. How does CLARIFY perform if $\epsilon$ is ~0.3? What happens if $\epsilon$ is zero (i.e., no trajectory is labelled as `no_cop`)? It is not necessary for CLARIFY to outperform the baselines in these situations, but it is important to characterise its behaviour so that the community can understand when it is best to use CLARIFY. **C2. These improvements are both due to the space separation induced by CLARIFY, and the rejection sampling technique presented in the paper.** Tables 4 and 6 show that the best results are obtained when all elements of CLARIFY ($L_{amb}$, $L_{quad}$, and rejection sampling) are active. However, this analysis is only carried out for two MetaWorld tasks and with $\epsilon=0.5$. Other epsilon values and a few DMControl tasks should also be analysed. In Table 4, it would make sense to compare against an uncertainty-based sampling method like the one used in PEBBLE (Lee et al 2021a in the paper's bibliography). Lastly, CLARIFY uses a Bidirectional Transformer; it is conceivable that the encoder of the transformer (which as far as I understand consumes the trajectories) is providing extra information to the policy. I would implement the policy as a causal-decoder-only transformer as an ablation to verify the model architecture is not behind the observed gains. (It is possible that my assumptions of how CLARIFY is implemented are mistaken; this paper could benefit from an architecture diagram.) **Post-rebuttal update** During rebuttal new experiments were added to address the concerns above. In particular, it became clear that CLARIFY is quite robust to $\epsilon$ changes and to other labellers (C.1).
Similarly, experiments were added to show that the observed performance improvements were not due to the use of a bidirectional transformer (C.2) Methods And Evaluation Criteria: Yes, the paper contains comparisons against recent baselines on commonly used tasks (MetaWorld and DMControl). Theoretical Claims: I did not have time to go through the proofs of Propositions 5.1 and 5.2, since they are contained in the appendices. But the claims derived from these propositions seem sound. I believe there is a typo in Proposition 5.2, $d(z^-, \mathcal{H})$ should be $\le \eta$ rather than $\le - \eta$. Experimental Designs Or Analyses: All the analyses and ablations make sense, apart from the issues discussed above. Supplementary Material: I did not, except for Algorithm 1. Relation To Broader Scientific Literature: CLARIFY is tackling the very challenging (and highly researched) problem of learning to solve a task without an explicit reward function from an offline dataset of interactions. In this context, the use of contrastive learning (which has been used elsewhere in Machine Learning to increase sample efficiency) is interesting. Essential References Not Discussed: None that I could find. Other Strengths And Weaknesses: **Other Strengths**: * The paper includes a comparison against actual human labellers for the `walker-walk` task (though unfortunately its performance is approximately half of the performance achieved with a non-ideal teacher, cementing the need of analysis with other non-ideal teachers). **Other Weaknesses**: * Query clarity ratio is not very clearly defined in the manuscript. I believe it is 1 - (ratio of `no_cop`) labels provided by human labellers? * Similarly, I am very confused by what is meant by clearly-distinguished queries in Table 3. * The Impact Statement is well thought out, but it does not mention possible negative consequences of offline PbRL (namely the ability for laymen to very easily program an agent to carry out malicious tasks). 
**Post-rebuttal update** Authors addressed all the above issues. I would urge authors to adopt the definitions of `clearly-distinguished queries` and `clarity-ratio` present in the rebuttal. Other Comments Or Suggestions: * What does `no_cop` stand for? Questions For Authors: * Q1. When is equation (9) used in Algorithm 1? Is it simply a pre-training for the bi-directional encoder? * Q2. The paper never really states what the distance metric $l$ in equation 5 is. I assume it is simply the Euclidean distance? Perhaps a more appropriate distance would be the cosine distance? In very high-dimensional spaces, it's very easy for two points to be very far apart. * Q3. What is the number of queries used for Table 1? This should be clearly stated. * Q4. How much do the results in Table 2 deteriorate under 50 queries? Does CLARIFY also reach ~77% of the IQL-with-ground-truth reward performance with only 100 samples for other tasks? * Q5. The error bars on Figure 4 are quite large, particularly for `walker-walk`; could you run a statistical significance analysis to verify that CLARIFY and OPRL are indeed different? * Q6. In Figure 5 what are the thresholds for Large Return Difference, Medium Return Difference, and Small Return Difference? * Q7. Using T-SNE visualisation to prove that the embeddings are robust to changes in the loss $\lambda$ is very unconvincing. T-SNE has its own hyper-parameters and it may be ignoring important differences in the underlying space when projecting to 2D. **Post-rebuttal update** Authors addressed all the above issues. I would urge authors to adopt the definitions of `clearly-distinguished queries` and `clarity-ratio` present in the rebuttal. Regarding Q2, the research would likely benefit from further investigation of the effects of different distances (beyond a TSNE projection), but perhaps this could be left for further work. Code Of Conduct: Affirmed. Overall Recommendation: 3
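The Bradley-Terry reward learning mentioned in this review's summary can be illustrated with a short sketch of the per-query preference loss. This is a hedged, generic rendering of the standard Bradley-Terry negative log-likelihood, not CLARIFY's actual training code; segment rewards are summarized here as scalar sums.

```python
import math

def bradley_terry_nll(r1, r2, pref):
    """Negative log-likelihood of one preference label under the
    Bradley-Terry model: P(segment 1 preferred) = exp(r1) / (exp(r1) + exp(r2)),
    where r1, r2 are the summed predicted rewards of the two segments and
    pref is 1 if segment 1 was preferred, 0 otherwise."""
    m = max(r1, r2)  # stabilize the log-sum-exp
    log_z = m + math.log(math.exp(r1 - m) + math.exp(r2 - m))
    return -(pref * (r1 - log_z) + (1 - pref) * (r2 - log_z))
```

When the two segments receive equal predicted rewards the loss is ln 2, and it shrinks as the reward model assigns a higher sum to the preferred segment.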
Rebuttal 1: Rebuttal: Dear Reviewer, Thanks for your valuable and detailed comments. **We conducted additional experiments and the results are shown in the [link](https://docs.google.com/document/d/e/2PACX-1vS7v9XEpXMFrH0skymO1RQUiXP2lcnnRoP114HpluBSSpvxE3vuRHNYJ1RwlggWB-rlihxrpdeVv53O/pub).** **C1.1: Robustness to non-ideal teachers.** **A for C1.1:** We evaluated CLARIFY with a new flipping teacher that randomly assigns preferences for close-reward queries. Table 1 shows that CLARIFY outperforms OPRL by over 20% in success rate, confirming its robustness to varied non-ideal feedback. **C1.2: CLARIFY's behavior across $\epsilon$ values.** **A for C1.2:** We evaluated CLARIFY with $\epsilon$ in 0$\sim$0.7. As shown in Table 2, at $\epsilon=0.3$, CLARIFY outperforms OPRL by 8~18\%. In contrast, at $\epsilon=0$, CLARIFY's query selection reverts to random sampling, leading to performance comparable to MR. This suggests that CLARIFY excels when distinguishing a query is difficult. **C2.1: Additional component analysis.** **A for C2.1:** We conduct component analysis on DMControl tasks with $\epsilon$=0.7. Tables 3 and 4 show that full CLARIFY ($\mathcal L_\text{amb}$ + $\mathcal L_\text{quad}$ + rejection sampling) consistently performs best, confirming component necessity. **C2.2: Comparison to PEBBLE.** **A for C2.2:** We compare CLARIFY to uncertainty-based sampling (PEBBLE). Table 5 shows that CLARIFY achieves better performance, demonstrating the effectiveness of our query selection strategy. **C2.3: Architecture impact.** **A for C2.3:** We illustrate our architecture in Figure 1, which shows that CLARIFY operates in two strictly decoupled phases. In the embedding training phase, the Bi-directional Decision Transformer encoder trains solely on trajectories using contrastive and reconstruction losses, without access to reward model or policy. In the reward learning phase, the reward model updates only on preference data without access to embedding information. 
This design prevents the encoder from leaking privileged information about future states or rewards. **Answer for Theoretical, Weaknesses, Other Comments:** - (**Theoretical**) Prop 5.2: corrected to $d(z^-,H)\ge\eta$. - (**Other Comments**) `no_cop` denotes cases where labelers consider queries too similar to specify a preference, resulting in a skipped label. - (**W1**) Query clarity ratio is defined as the proportion of clearly-distinguished queries to the total number of queries. - (**W2**) Clearly-distinguished queries are those where human preferences are clear (not `no_cop`). - (**W3**) Safety impact: A discussion of malicious use risks was added. **Q1, Q2, Q3, Q6: Clarity of the statement.** - **A for Q1:** Eq. 9 is integrated into the total training objective (Eq. 11) used in Algorithm 1 line 3. It is not a simple pretraining but trains the encoder continuously. - **A for Q2:** Distance metric $\ell$ is the Euclidean distance. We conduct additional experiments to compare the Euclidean and Cosine distances. Figure 2 shows that points in the embedding trained with Cosine distance cluster together, while Euclidean distance is more suitable for trajectory embedding. - **A for Q3:** Query number: 1000 for MetaWorld tasks, 500 for cheetah-run, 200 for walker-walk (Table 10). - **A for Q6:** The thresholds of Large, Medium, and Small in MetaWorld and DMControl tasks are 300/100/10 and 30/10/1 respectively. **Q4: Performance with fewer queries.** **A for Q4:** We evaluate CLARIFY with 50 to 2000 queries. Table 6 shows a slight performance decline for CLARIFY with only 50 queries, though it still outperforms MR. Table 7 shows CLARIFY's performance with 100 queries on various tasks, which reaches about 70\% of IQL's performance with ground truth rewards. **Q5: Statistical validation for walker-walk.** **A for Q5:** We conduct a statistical significance analysis for CLARIFY and OPRL. 
- Table 8 shows the 95% confidence intervals (CIs) of the query clarity ratio and label accuracy. The narrow intervals validate CLARIFY’s performance. Though walker-walk shows overlapping CI due to high environment stochasticity, CLARIFY's directional improvements in both metrics demonstrate the effectiveness of its query selection. - Additionally, we conducted independent two-sample t-tests comparing CLARIFY with OPRL. The experimental results in Table 9 show that CLARIFY achieves statistically significant improvements ($p<$0.05) in 5/6 tasks. These results confirm the advantage of CLARIFY's query selection over OPRL. **Q7: T-SNE visualization robustness.** **A for Q7:** Thanks for pointing out this issue, and we conduct visualizations across multiple random seeds (Figure 3), revealing consistent clustering patterns, indicating robust embeddings. Additionally, we provide PCA (Figure 4) visualizations to support our conclusion. We hope that our response has cleared most of your concerns. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed rebuttal and for all the additional and thorough experiments. I would also urge the authors to include Figure 1 in the manuscript. Based on the authors responses, I have raised my review score to a 3. --- Reply to Comment 1.1.1: Comment: We would like to thank the reviewer for raising the score! We also appreciate the valuable comments, which helped us significantly improve the paper's strengths.
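The independent two-sample t-tests reported in this rebuttal can be sketched as follows. This is an illustrative stdlib-only computation of Welch's t statistic and Welch–Satterthwaite degrees of freedom, not the authors' analysis script; a p-value would then be read off the t distribution (e.g., via `scipy.stats.t`).

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic and Welch-Satterthwaite degrees of
    freedom, for comparing two groups with possibly unequal variances."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    se2 = va / na + vb / nb                          # squared standard error
    t = (mean(sample_a) - mean(sample_b)) / se2 ** 0.5
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df
```

For per-seed success rates from two methods, `welch_t(scores_clarify, scores_oprl)` (hypothetical variable names) would give the statistic behind a claim like "$p < 0.05$ in 5/6 tasks".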
Doubly Protected Estimation for Survival Outcomes Utilizing External Controls for Randomized Clinical Trials
Accept (poster)
Summary: Estimating the average treatment effect (ATE) from both trial and external control datasets is challenging due to data heterogeneity, specifically covariate shift and outcome shift. This paper proposes a doubly protected estimation framework to address these challenges. 1. When the external control dataset is comparable to the trial dataset, the authors propose a doubly protected estimator, which corrects for covariate shift using density ratio weighting of baseline covariates. 2. When comparability is violated (i.e., outcome shift occurs), the method selectively "borrows" only a subset of the external control dataset that remains comparable to the trial data, improving robustness. The effectiveness of the proposed method is demonstrated via extensive simulation studies and illustrated through a real-world application in migraine treatment evaluation. ## update after rebuttal Thank you for the response and the additional experiments. I think the new results strengthen the paper and make the overall argument more convincing. I would like to keep my score as Accept. Claims And Evidence: The theoretical and empirical claims made in the paper are well-supported by rigorous derivations, asymptotic properties, and empirical validation through simulations. Methods And Evaluation Criteria: The proposed methodology and evaluation criteria are appropriate for the problem at hand: 1. The doubly protected estimator improves efficiency by integrating external controls while mitigating biases using density ratio weighting and DR-Learner. 2. The paper extends semiparametric efficiency theory to survival analysis, allowing dynamic selection of comparable external controls. 3. The approach is flexible, accommodating machine learning models to estimate survival curves without strong parametric assumptions. The ATE estimation is based on restricted mean survival time (RMST). 
However, when the survival curve remains at a high probability at $\tau$ (as seen in Figure 3A), RMST may underestimate survival differences by neglecting the tail distribution, which can significantly contribute to the total effect. The authors should discuss this limitation as future work. Theoretical Claims: I briefly reviewed the correctness of the proofs and did not identify any errors. The derivations are rigorous and well-structured. Experimental Designs Or Analyses: The experimental design is sound and well-structured: 1. The benchmarking against multiple baselines provides a comprehensive comparison. 2. The simulation study covers various settings, including different types of bias-generating mechanisms (selection bias, unmeasured confounding, and lack of concurrency). The only concern I have is that the feature generation process in simulations assumes independent features and linear relationships (moreover, the coefficients are all 1s) between the covariates, treatment assignment, and hazard function. Supplementary Material: There is no supplementary material. Relation To Broader Scientific Literature: This paper develops a flexible and data-adaptive (for covariate shift and outcome shift) framework that accounts for survival analysis datasets. The consistency and efficiency results are well grounded in the semiparametric efficiency literature. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Other strengths 1. The paper clearly articulates the problem, the proposed solution, and its impact on clinical trials. 2. The derivations are mathematically sound and insightful, providing clear intuition alongside proofs. 3. The simulations and real-world application convincingly support the claims. Other Comments Or Suggestions: I believe the correct term for Assumption 3.2 should be *informative censoring*, rather than *non-informative censoring*, as stated in the paper. This assumption acknowledges that event and censoring times are dependent and only become independent when conditioned on covariates.
In other words, knowing the censoring time provides information about the event time. However, this is merely a terminological distinction and does not affect the validity of the paper. Questions For Authors: Assumption 3.1 states $q_R(X)<1$, which implies $P(R=1\mid X) < P(R=0 \mid X)$. Should this instead be $P(R=1\mid X) < 1$, allowing for the trial population to be dominant in some covariate regions? Code Of Conduct: Affirmed. Overall Recommendation: 4
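Since much of the discussion above hinges on the restricted mean survival time, a minimal sketch of how RMST is computed from a step survival curve (e.g., a Kaplan-Meier estimate) may help: it is simply the area under $S(t)$ from $0$ to the cutoff $\tau$. The function and its input format are illustrative assumptions, not the paper's estimator.

```python
def rmst(jump_times, surv_probs, tau):
    """Restricted mean survival time: the area under a right-continuous
    step survival curve S(t) from 0 to tau. `jump_times` are the sorted
    event times where the curve drops, and `surv_probs[i]` is the value
    of S(t) on [jump_times[i], jump_times[i+1]). S(0) = 1 before any event."""
    area, prev_t, prev_s = 0.0, 0.0, 1.0
    for t, s in zip(jump_times, surv_probs):
        if t >= tau:
            break
        area += (t - prev_t) * prev_s
        prev_t, prev_s = t, s
    area += (tau - prev_t) * prev_s  # last piece up to the cutoff
    return area
```

The sketch also makes the reviewer's concern concrete: everything in the curve's tail beyond $\tau$ is ignored, so a curve that stays high at $\tau$ contributes nothing after the cutoff.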
Rebuttal 1: Rebuttal: Thanks for the careful reviews. Here are our detailed responses to your questions. **Methods And Evaluation Criteria** 1. The selection of the cutoff value $\tau$ for computing RMST is crucial in practice since the tail distribution after $\tau$ is neglected. Typically, the event rates at this cutoff value should exceed 10% to **ensure sufficient data for model development.** A common rule of thumb is to set $\tau$ to be around the 75%-80% quantile of observed event times. However, this choice varies case by case and should be informed by domain expertise with real-world data. A brief discussion on this will be included in the paper. **Experimental Designs Or Analyses** 1. We have included an additional set of simulations, where for RCT, the hazard is $\lambda_a(t\mid X) = \exp(-0.5a-0.2X_1-0.5X_2 -0.1X_3)$; for EC, $\lambda_0(t\mid X) = \exp(-0.1X_1-0.5X_2 -0.2X_3)$ (i.e., **the coefficients are not the same anymore**), and the covariates are generated by the multivariate normal distribution with pair-wise correlation 0.5 (i.e., **non-independent features**). The results are presented here (https://anonymous.4open.science/r/ICML2025-7977/fig4.png), and the main findings stay the same as before. 2. Moreover, **transformation on the covariates** (e.g., polynomial transformation) can be considered as data pre-processing to capture the non-linear relationship between the covariates and the time-to-event outcomes. Furthermore, our method allows the use of flexible survival models (e.g., survival random forest) for estimating the survival curves (e.g., $S_a(t\mid X)$), which could also capture the non-linear relationship. **Other Comments Or Suggestions**: We have changed Assumption 3.2 to *Informative censoring* as $T^{(a)}\perp C\mid X, A=a, R=r$ for $a=0,1$ and $r=0,1$. **Questions For Authors**: Thanks for catching this typo.
We have changed Assumption 3.1 (ii) to "$0<\pi_A(X),\pi_R(X)<1$ in the support of $X$", where $\pi_A(X)=P(A=1\mid X, R=1)$ and $\pi_R(X)=P(R=1\mid X)$. --- Rebuttal Comment 1.1: Comment: Thank you for the response and the additional experiments. I think the new results strengthen the paper and make the overall argument more convincing. I would like to keep my score as Accept. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to review our response and additional experiments. Your comments certainly strengthen our paper and will be incorporated in the final version.
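The simulation design described in this rebuttal uses a constant, covariate-dependent hazard of the form $\lambda(X)=\exp(\beta_a a + \beta^\top X)$; with a constant hazard, the event time is exponential with rate $\lambda(X)$ and can be drawn by inverse-transform sampling. The sketch below is a hedged illustration of that data-generating mechanism only; the function name and argument layout are assumptions.

```python
import math
import random

def draw_survival_time(x, coefs, a, treat_coef, rng):
    """Draw one event time under a constant covariate-dependent hazard
    lam(X) = exp(treat_coef * a + sum(coefs * x)); with a constant hazard
    the event time is Exponential(rate=lam), sampled here by
    inverse-transform sampling of -log(1 - U) / lam."""
    lam = math.exp(treat_coef * a + sum(c * xi for c, xi in zip(coefs, x)))
    u = rng.random()
    return -math.log(1.0 - u) / lam
```

With all covariates at zero and no treatment, the rate is $1$ and the sampled times average to about $1$, matching the exponential mean $1/\lambda$.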
Summary: Authors study the estimation of restricted mean survival time in a randomized controlled trial where external controls are leveraged to increase statistical power. Since there may be a conditional shift (outcome drift) between trial controls and external controls, it is well understood that doing this is not trivial and requires adjustment for it (e.g., reweighting external controls to mimic the trial controls' covariate distribution). Authors derive the efficient influence function for that estimand which motivates a doubly-robust estimator. They also propose an estimator (based on selective borrowing) that can operate when there are unmeasured confounders that drive the outcome drift between the trial and external controls. Claims And Evidence: Yes Methods And Evaluation Criteria: Synthetic experiments seem reasonable and the authors' methods perform better in cases where they are expected to. There are a few things I could not quite follow about the real-world experiments which I elaborated on below (Experimental Designs Or Analyses) Theoretical Claims: I did not check the proofs in detail, as they are rather lengthy. The parts I skimmed through seemed correct, and the resulting expressions make sense. As the authors mention, most of the techniques for deriving EIFs are borrowed from the literature, so I would not expect any mistakes in the results. Experimental Designs Or Analyses: Experiments seem exhaustive but this section can benefit from polishing & organization. For real-world experiments, what are CGAI and CGAG trials? Creating a table for abbreviations can help. I could not really understand the evaluation criteria for real-world experiments through PrSS. Where does that ground-truth threshold (-0.1) come from? Even if we take that as given, I had a hard time understanding what you were looking at. The real-world experiments are extremely important to justify the complicated theory in the paper.
I think you have a nice dataset & experimental setup, but just need to be much more clear about what you are doing & how you are evaluating. Supplementary Material: NA Relation To Broader Scientific Literature: One of the main drawbacks of this paper for me is its relevance/contributions to the broader machine learning community. It focuses on a very particular estimand that is relevant for causal inference from censored data, and it develops an efficient estimator which can leverage external controls using recipes from the efficient influence functions literature. I do see how this could be useful in practice, but have a feeling that this work would be a great fit for a statistics conference/journal, but it does not contribute much on the methodology/technical side that could be generalized to/used in other ML problems. Essential References Not Discussed: NA Other Strengths And Weaknesses: see above (Relation To Broader Scientific Literature) Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 2
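The doubly robust (AIPW-style) estimation this review discusses can be illustrated, for the simpler uncensored-outcome case, with a short sketch. This is the textbook augmented IPW estimator of the average treatment effect, shown only to convey the double-robustness idea; the paper's survival-outcome estimator with external controls is considerably more involved.

```python
def aipw_ate(y, a, ps, mu1, mu0):
    """Augmented IPW (doubly robust) estimate of the average treatment
    effect. y: outcomes; a: binary treatment indicators; ps: estimated
    propensity scores P(A=1|X); mu1, mu0: estimated outcome regressions
    E[Y|X, A=1] and E[Y|X, A=0]. The estimator is consistent if either
    the propensity model or the outcome model is correctly specified."""
    total = 0.0
    for yi, ai, ei, m1, m0 in zip(y, a, ps, mu1, mu0):
        term1 = m1 + ai * (yi - m1) / ei              # augmented treated mean
        term0 = m0 + (1 - ai) * (yi - m0) / (1 - ei)  # augmented control mean
        total += term1 - term0
    return total / len(y)
```

When the outcome regressions are exact, the correction terms vanish and the estimate reduces to the average of `mu1 - mu0`, which is the sense in which misspecifying the propensity model is harmless.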
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough assessment. We provide detailed responses to each of these points below. **Experimental Designs Or Analyses:** 1. We have **refined and reorganized the experiment sections** to align with the objectives of the simulation, including how to design the data generation mechanisms accordingly, competitors, evaluation metrics, results, and detailed discussions. 2. The names of the real-world datasets have been changed to **EVOLVE-1 study** and **REGAIN study** and made **consistent** throughout. The **EVOLVE-1 study** (Evaluation of Galcanezumab vs. Placebo in the Prevention of Episodic Migraine) serves as the *randomized clinical trial* (**CGAG** is its protocol name). The **REGAIN study** (Evaluation of Galcanezumab vs. Placebo in Patients with Chronic Migraine) serves as the *external controls* (**CGAI** is its protocol name). The references will be included in the paper. 3. The threshold $-0.1$ for the real data is chosen by **domain knowledge**; that is, reducing the RMST at month $\tau=6$ by $0.1$ is considered clinically meaningful. Further, the PrSS could be computed under **various other thresholds** and similar conclusions could be drawn; see https://anonymous.4open.science/r/ICML2025-7977/fig3.png. 4. To interpret the results in Panel (C) of Figure 3, we could use an example to illustrate. Suppose that we aim to **reach PrSS at most $0.6$ at month $6$**; our method “adapt” only needs to recruit **$100$ patients** for the placebo group (solid red line at month $6$), whereas the benchmark method “aipw” needs **at least $150$ patients** for the placebo group (dashed green line at month $6$). Therefore, our approach could attain similar levels of PrSS with **fewer patients** by leveraging the external controls and thus **shorten the patient enrollment period**, which could eventually accelerate the drug development for rare diseases. **Relation To Broader Scientific Literature** 1.
First, the proposed method can be generalized to **any estimand that is a function of the survival function $S_a(t)$** (e.g., mean or median of the survival time), not necessarily limited to one particular estimand, as the EIFs are derived for $S_a(t)$. Let the estimand of interest be $\theta_\tau(t)=\Phi_\tau(S_a(t))$; then the associated EIF for the estimand of interest can be obtained as $\psi_\theta(t) = d\Phi_\tau(q)/dq \cdot \psi_{S_a}(t)$ by Taylor expansion, where $\psi_{S_a}(t)$ is the EIF for $S_a(t)$, and the DR-learner is directly applicable to detect the outcome drifts in the estimand of interest. Thus, our proposed method should be **useful for any integrative (causal or not) analysis for survival outcomes**. We will emphasize this point in the paper. 2. **Survival problems are important in the machine learning (ML) community, with applications such as dropout and customer churn.** In many ML problems, there are heterogeneous data sources that can be integrated for the same task or domain adaptation; the critical issue is to handle data heterogeneity. Our proposed selective borrowing method offers a new perspective on integrative analysis for survival outcomes with a principled way to do inference, which could be a valuable contribution to the general ML community. We will include such discussions in the paper as well.
Summary: The paper introduces a new way to estimate treatment effects in survival analysis using external controls, which is especially helpful when clinical trials have small control groups, as in rare diseases. It introduces a doubly protected estimator for the restricted mean survival time (RMST) difference, combining doubly robust estimation to adjust for covariate shifts and a DR-Learner to mitigate outcome drift. By leveraging machine learning, the method flexibly models survival curves and selectively borrows external data while ensuring robustness. Empirical validation through simulations and a real-data application demonstrates its practical utility. Claims And Evidence: - **Doubly Protected Estimator for RMST Difference**: - Theorems 3.5 and 3.6 provide a theoretical foundation for constructing valid confidence intervals and ensuring asymptotic properties, which strengthens the claim. - **Handling Covariate Shifts and Outcome Drifts**: - The use of the density ratio to adjust for covariate shifts and the DR-Learner to address outcome drift is conceptually sound. These approaches are grounded in semi-parametric theory and machine learning. - **Asymptotic Properties**: - The authors claim to establish asymptotic consistency and efficiency improvements for their estimator. They use the efficient influence function, which is known to provide such guarantees under regularity conditions. - **Empirical Validation**: - The claim that the method performs well in simulations and a real-data application is supported by synthetic and real-data analyses of Galcanezumab for migraine headaches in Section 4. Methods And Evaluation Criteria: The claim that the method does not require stringent parametric assumptions is plausible, as the framework incorporates flexible machine learning techniques for survival curve approximation. Theoretical Claims: I have briefly reviewed the correctness of Theorems 3.4, 3.5, and 3.6, and they appear to be correct.
The derivations and proofs seem consistent with the theoretical framework and align with established principles in the field. Experimental Designs Or Analyses: For synthetic data, extensive simulations show robustness and efficiency gains compared to trial-only estimators and other methods. For real data: The method is applied to evaluate the efficacy of Galcanezumab in mitigating migraine headaches, illustrating its practical utility. Supplementary Material: Yes, I reviewed some of the proofs for the main theorems in the supplementary material, and they appear to be correct. Relation To Broader Scientific Literature: Doubly robust estimators are well-established in causal inference and missing data literature. These estimators are robust to misspecification of either the outcome model or the propensity score model, making them attractive for handling covariate shifts (e.g., Bang & Robins, 2005; Van der Laan & Rose, 2011). The authors extend doubly robust estimation to the context of survival outcomes with external controls. They use the density ratio of baseline covariates to adjust for covariate shifts and derive the efficient influence function for the restricted mean survival time (RMST) difference. This builds on semi-parametric theory (Tsiatis, 2006) and provides a principled framework for integrating external data while maintaining efficiency and robustness. Essential References Not Discussed: I think it's well discussed. Other Strengths And Weaknesses: The theoretical contributions, such as the derivation of the efficient influence function for the restricted mean survival time difference and the establishment of asymptotic properties, provide a rigorous foundation for integrating external data into survival analysis. The application of the method to evaluate the efficacy of Galcanezumab for migraine headaches demonstrates its practical utility. 
Other Comments Or Suggestions: - In the supplementary material, it would be easier to read if the authors restated each theorem and lemma before its proof, and in the main text, used one or two sentences to mention which appendix section contains each proof. - It would be better to include the synthetic experiment implementation code as supplementary material, rather than stating, "Our implementation codes will be made publicly available after the acceptance of this manuscript." (Additionally, "codes" should be revised to "code".) Questions For Authors: - It would be great if the authors could add a small paragraph on doubly robust estimators in the related work section. Code Of Conduct: Affirmed. Overall Recommendation: 3
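For readers from adjacent fields: the RMST that the reviewed estimator targets is the area under the survival curve up to a horizon $\tau$, $\mathrm{RMST}(\tau) = \int_0^\tau S(t)\,dt$. A small numeric sketch using the closed form for an exponential survival curve (illustrative only, not the paper's estimator; $\lambda$ and $\tau$ are made-up values):

```python
import numpy as np

# RMST(tau) = integral of S(t) from 0 to tau. For S(t) = exp(-lam * t)
# the closed form is (1 - exp(-lam * tau)) / lam, which we compare
# against a trapezoid approximation on a fine grid.

lam, tau = 0.5, 6.0
t = np.linspace(0.0, tau, 10001)
S = np.exp(-lam * t)

# Trapezoid rule: average adjacent survival values times step widths.
rmst_numeric = float(np.sum((S[1:] + S[:-1]) / 2.0 * np.diff(t)))
rmst_exact = (1.0 - np.exp(-lam * tau)) / lam

assert abs(rmst_numeric - rmst_exact) < 1e-4
```

In practice the same integral would be applied to an estimated survival curve (e.g., Kaplan-Meier or a model-based fit) rather than a known exponential.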
Rebuttal 1: Rebuttal: We thank the reviewer for the comments. Here is a detailed response to your concerns. **Other Comments Or Suggestions** 1. We will restate each theorem and lemma in the supplementary material. In the main text, we will **cross-reference the proof** of each theorem and lemma in the Appendix: "Theorem 3.4 is proved in Appendix A.1, Theorem 3.5 is proved in Appendix A.2, Theorem 3.6 is proved in Appendix A.3, Lemma 3.7 is proved in Appendix A.4, and Theorem 3.8 is proved in Appendix A.5". 2. We have already prepared the code along with an implementation example for the proposed method in the supplementary material. **Questions For Authors** 1. A short paragraph on **doubly robust estimators** will be included in the related work section: "However, existing integrative methods are limited by the assumption of the Cox model, either on the cause-specific or subdistribution hazard scale, which requires accurately modeling the survival probability. In recent years, semiparametric efficient and doubly robust estimators, which leverage the efficient influence function (Bickel et al., 1993; Tsiatis, 2006; van der Vaart, 2000; van der Laan & Robins, 2003), including estimating equation methodology (Hubbard et al., 2000; Robins & Rotnitzky, 1992; van der Laan & Robins, 2003) and targeted maximum likelihood estimation (van der Laan & Rubin, 2006; Rytgaard et al., 2022), have gained great popularity in many fields and are increasingly used to draw inference about treatment effects." --- 1. Bickel, P. J., Klaassen, C. A. J., Ritov, Y., & Wellner, J. A. (1993). Efficient and Adaptive Estimation for Semiparametric Models. Johns Hopkins University Press. 2. Tsiatis, A. A. (2006). Semiparametric Theory and Missing Data. New York: Springer. 3. van der Vaart, A. W. (2000). Asymptotic Statistics. Cambridge: Cambridge University Press. 4. van der Laan, M. J., & Robins, J. M. (2003). Unified Methods for Censored Longitudinal Data and Causality. Berlin: Springer. 5.
Hubbard, A. E., van der Laan, M. J., & Robins, J. M. (2000). Nonparametric locally efficient estimation of the treatment specific survival distribution with right censored data and covariates in observational studies. In Statistical Models in Epidemiology, the Environment, and Clinical Trials (pp. 135–177). Berlin: Springer. 6. Robins, J. M., & Rotnitzky, A. (1992). Recovery of information and adjustment for dependent censoring using surrogate markers. In AIDS Epidemiology (pp. 297–331). Berlin: Springer. 7. van der Laan, M. J., & Rubin, D. (2006). Targeted maximum likelihood learning. The International Journal of Biostatistics, 2(1). 8. Rytgaard, H. C., Gerds, T. A., & van der Laan, M. J. (2022). Continuous-time targeted minimum loss-based estimation of intervention-specific mean outcomes. The Annals of Statistics, 50(5), 2469–2491.
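The doubly robust property summarized in the proposed related-work paragraph can be illustrated with the classic AIPW estimator of a treated-arm mean: the estimate stays consistent when either the outcome model or the propensity model is misspecified, but not necessarily both. A hedged simulation sketch with an invented data-generating process (not the paper's survival estimator):

```python
import numpy as np

# Illustration of double robustness via AIPW for E[Y(1)].
# DGP (made up for this sketch): X ~ N(0,1), A ~ Bernoulli(sigmoid(X)),
# Y(1) = 1 + X + noise, so the target E[Y(1)] = 1.

rng = np.random.default_rng(1)
n = 200_000
X = rng.normal(size=n)
pi_true = 1.0 / (1.0 + np.exp(-X))     # true propensity P(A=1 | X)
A = rng.uniform(size=n) < pi_true
Y = 1.0 + X + rng.normal(size=n)       # potential outcome under treatment

def aipw(m, pi):
    # AIPW estimator: mean of m(X) + A/pi(X) * (Y - m(X)).
    return float(np.mean(m + A / pi * (Y - m)))

# Wrong outcome model (m = 0), correct propensity -> still consistent:
est1 = aipw(np.zeros(n), pi_true)
# Correct outcome model, wrong propensity (constant 0.5) -> still consistent:
est2 = aipw(1.0 + X, np.full(n, 0.5))

assert abs(est1 - 1.0) < 0.05 and abs(est2 - 1.0) < 0.05
```

Both estimates land near the true value 1, which is the "protection" that motivates extending these ideas to survival outcomes.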
Summary: The authors propose a "doubly protected" estimator for treatment-specific restricted mean survival time difference in RCTs, focusing on alleviating biases commonly encountered when employing additional (i.e., non-trial-derived) external control data. Their estimator accounts for both covariate shift and outcome drift, addressing some of the most common issues preventing the usage of external control data for such problems. Methodologically, the paper makes two main contributions: 1. Development of an integrative estimator under an assumption of comparability 2. Extension of the estimator from (1) to settings in which comparability may be violated In terms of theory, the authors prove two (main) theorems (Theorem 3.6 and Theorem 3.8) establishing the estimation error of the estimator from (1) and the variance of the estimator from (2). Empirically, the authors consider three simulation scenarios, each focused on a potential bias that might occur with external control data, including selection bias, unmeasured confounders, and lack of concurrency. The authors show that their adaptive estimator (2) performs well and can choose adaptively to what extent and which external control samples should be included for estimation, resulting in comparable bias to the trial-only estimator while achieving moderately reduced MSE and SE. Lastly, the authors consider an exemplary real-data application of their method on an RCT of a drug for episodic migraines. Claims And Evidence: The authors are relatively modest in their claims and do not (IMO) oversell, focusing primarily on the point that "This approach effectively incorporates external controls without introducing biases into the integrative treatment evaluation.", which I agree with based on the presented evidence. Methods And Evaluation Criteria: The general simulation setup makes sense. The real dataset also seems appropriate. I have several comments regarding the simulation design: 1. 
The authors seem to assume time-constant hazards (i.e., data is effectively simulated from an exponential distribution) throughout - coming from a survival perspective, this seems quite restrictive, especially given that some competing methods consider e.g., Weibull distributions in their simulation [1]. Just to be clear, I think this is fine for the censoring hazard function but not necessarily the event hazard function. I don't think the current ablations on $\beta_C$ are enough and would like to see simulations including non-time-constant hazards. 2. Similarly, given the fact that lack of accounting for time-varying outcome drift and time-varying covariate effects are presented as drawbacks of current methods in related work, I was surprised to not see the simulations directly addressing these. 3. The simulations in [1] also investigate changes in covariate effects separately and jointly with time-varying baseline hazard differences (see 1 + 2). Unless the authors have strong reasons for not investigating these, I think they would strengthen the simulations and thus the paper. [1] Li et al. (2023b) Theoretical Claims: I only skimmed the proofs and have no concerns. Experimental Designs Or Analyses: 4. Several experimental details regarding the empirical results are either missing or unclear: (i) which penalty term is used for step 2 in 3.3 in the experiments and how is the $\lambda$ tuned? (ii) How are the nuisance functions estimated throughout the experiments? Supplementary Material: I skimmed the proofs in the supplementary and reviewed the additional simulations in detail. Relation To Broader Scientific Literature: The first part of the proposed estimator (3.2) seems like a relatively straightforward (novelty-wise) extension of [2] to the survival outcome setting. 
Despite this, I think the adaptive proposed method (i.e., 3.3) which stems from a combination of 3.2 and the DR-Learner framework [3, 4], is interesting, given its low bias and low-moderate improvements in terms of MSE, relative to the trial-only estimator. [2] Gao et al. (2024) [3] Kennedy Edward (2020) [4] Kallus & Oprescu (2023) Essential References Not Discussed: None that I am aware of. Other Strengths And Weaknesses: Overall, I think this paper handles an interesting and timely problem, especially in the context of survival outcomes. While novelty could be higher, the paper itself is interesting, especially given the proposed estimators' basically non-existent cost in terms of bias. Despite this, I think there are several points (especially in the simulations) that need to be extended and or better explained (see e.g., also 4). Other Comments Or Suggestions: - Table 3, simulation scenario 2 is missing a right closed parentheses before the final curly brace. - AFAIK, the default ggplot2 color scheme is not particularly color-blind friendly, so I would suggest the authors switch their figures to a different palette. - Some colors are not matched between figures (e.g., Fig 1 top has acw in olive, bot. in green) and some figures use colors very close to another figure for something very different (e.g., Figure 2 has blue for sim. scenario two, while Fig. 1 bot. uses it for one of the estimators). - Some Figures use panels (Fig. 3) and some top bottom - visually, I think it's easier for readers to have it consistent. - The y-axis of Figure 2 is very (too, IMO) tight for "Relative Efficiency", presumably due to being forced to share it with the other facet - I would suggest relaxing this and likely making it symmetric for the relative efficiency. It may also make sense to flip the y-axis, to keep with the high -> good direction of the other facet ("Proportion of Borrowing"). Questions For Authors: 5. 
The second simulation scenario was a bit unclear to me, primarily due to two things: - Is there a particular reason why the cond. hazard function is not indexed with a? If that is intentional, given there is no $-0.5a$ term, what is the reason that term was left out? - What is the indicator on $R = 0$ doing? AFAIU, $U$ is sampled independently for $R=1$ and $R=0$, so why is the additional indicator needed? (as, also according to the text, $U$ reflects differences in the baseline hazards). 6. Some terms are never really defined; for things like [lack of] concurrency, having at least an informal definition and/or a citation would make it easier to read for people coming from adjacent fields (e.g., survival). Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for the careful review and kind words. We hereby provide one-to-one responses to your concerns. **Methods And Evaluation Criteria** 1. The current three settings represent three typical scenarios we often encounter in practice: Setting 1, where all the ECs are comparable after adjusting for covariate shift and should be included; Setting 2, where unmeasured confounding is present, no ECs are comparable, and the external data should not be used; Setting 3, where there is a lack of concurrency, only 1/3 of the ECs are comparable, and only that portion of the external data should be borrowed. 2. As suggested, we consider two additional simulation settings, and the results are presented here (https://anonymous.4open.science/r/ICML2025-7977/fig1.png). * (Setting 4) **different covariate effects**: for RCT, $\lambda_0(t\mid X)=\exp(-0.2X_1 -0.2X_2 -0.2X_3)$; for EC, $\lambda_0(t\mid X)=\exp(-0.5X_1-0.5X_2-0)$ * (Setting 5) **different time-varying hazards**: for RCT, $\lambda_0(t\mid X) = t \exp(-0.2X_1-0.2X_2 -0.2X_3)$; for EC, $\lambda_0(t\mid X) = 2t \exp(-0.2X_1-0.2X_2 -0.2X_3)$. 3. Both the proposed estimator and TransCox (Li et al. (2023b)) can handle differences in covariate effects and time-varying hazards. **However, TransCox is only valid under the Cox model.** If the conditional survival curve $S_a(t\mid X)$ does not follow a Cox model (e.g., under Settings Two and Three in the paper, if we integrate $U$ (or $\delta$) out of the hazards, the data-generating model is no longer a Cox model), TransCox will have large biases, whereas our proposed estimator still controls the bias due to its double robustness and achieves improved performance. **Experimental Designs Or Analyses** 1. The penalty term is chosen to be the **adaptive lasso** (Zou, 2006).
The conditional survival curves $S_a(t\mid X)$ and $S^C(t\mid X)$ for the event and censoring are modeled by the **Cox PH model**, and the propensities $\pi_R(X)$ and $\pi_A(X)$ are modeled by **SuperLearner** from the `SuperLearner` R package, an ensemble of logistic regression and random forest. These details will be added to the Simulation section. **Other Comments Or Suggestions** 1. The comments on the tables and figures are well received. We have updated the figure colors to the *Nature palette*. Also, the point shapes are now distinct across estimators and scenarios. The updated figures are provided here (https://anonymous.4open.science/r/ICML2025-7977/fig2/fig2_1.PNG; https://anonymous.4open.science/r/ICML2025-7977/fig2/fig2_2.PNG; https://anonymous.4open.science/r/ICML2025-7977/fig2/fig2_3.PNG). **Questions For Authors** 1. The conditional hazard function **should be indexed with a**. The original table only includes the hazard functions that were modified under each considered setting. To avoid confusion, we explicitly list the hazard function $\lambda_a(t \mid X, R)$ under each setting: * (Setting 1) $\lambda_a(t \mid X, R) = \exp(-0.5a - 1_p^T X \cdot 0.2)$; * (Setting 2) $\lambda_a(t \mid X, R) = \exp[-0.5a - 1_p^T X \cdot 0.2 + 3\{U + \mathbf{1}(R=0)\}]$; * (Setting 3) $\lambda_a(t \mid X, R) = \exp(-0.5a - 1_p^T X \cdot 0.2 + 3\delta \mathbf{1}(R=0))$. 2. Under Setting 2, we include $U$ in the hazards for the RCT as well to keep the variability of the hazards at the same level across the two datasets. In particular, for $R = 1$, $U \sim N(0,1)$ with zero mean is included in the hazard model, whereas for $R=0$, $U+1 \sim N(1,1)$ with non-zero mean is included, which is expected to **introduce more outcome drift for the external controls**. 3.
**Lack of concurrency** will be discussed further in the Introduction, where the FDA has drafted guidance documents on the use of external controls: "Lack of concurrency can occur when RCTs and ECs are collected in **different time periods** or under **varying healthcare settings**. Therefore, directly integrating ECs with RCTs without any adjustment could introduce biases into the treatment estimation". The FDA guidance reference will be added to the paper. --- 1. Li, Z., Shen, Y., and Ning, J. Accommodating time-varying heterogeneity in risk estimation under the Cox model: A transfer learning approach. Journal of the American Statistical Association, 118(544):2276–2287, 2023b. 2. Zou, H. The adaptive lasso and its oracle properties. Journal of the American Statistical Association, 101(476):1418–1429, 2006. --- Rebuttal Comment 1.1: Comment: I'd like to thank the authors for their rebuttal clarifications, additional simulation experiments, and miscellaneous fixes. Since these have addressed my main concerns and the additional simulation results are consistent with the ones previously performed in the paper, I am raising my score to Accept. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to review our rebuttal and new additional experiments, as well as for updating your score. Your comments certainly strengthen our paper and will be incorporated into the updated paper.
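The constant-hazard data-generating processes listed in these rebuttals can be simulated with inverse-transform sampling: for a time-constant hazard $\lambda$, the event time is $T = -\log(U)/\lambda$ with $U \sim \mathrm{Uniform}(0,1)$. A simplified illustrative sketch in the style of Setting 1 (not the authors' code; sample size and covariate distribution are assumptions):

```python
import numpy as np

# Sketch of drawing event times under a Setting 1-style constant hazard,
# lambda_a(t | X) = exp(-0.5*a - 0.2 * sum(X)), via inverse transform:
# the survival function is S(t) = exp(-lam * t), so T = -log(U) / lam.

rng = np.random.default_rng(0)
n, p = 5000, 3
X = rng.normal(size=(n, p))                # covariates (assumed N(0,1))
a = rng.integers(0, 2, size=n)             # treatment indicator

lam = np.exp(-0.5 * a - 0.2 * X.sum(axis=1))
T = -np.log(rng.uniform(size=n)) / lam     # event times

# Sanity check: T * lam is standard exponential, so its mean is ~1.
assert abs(float(np.mean(T * lam)) - 1.0) < 0.1
```

For the time-varying hazards in Settings 4-5 the same idea applies with the inverse cumulative hazard in place of $1/\lambda$.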
Topo-Miner: CRISPR-Enhanced DNA Computing for Accelerated Topological Feature Extraction
Reject
Summary: This paper presents Topo-Miner, a CRISPR-enhanced DNA computer designed for rapid and accurate topological feature extraction. The key contributions include CRISPR-enhanced DNA computing for TDA, a novel encoding of graph topology into DNA sequences, computational speedup over Ripser, integration with the TopoComp platform, etc. While the paper presents a compelling vision, its experimental validation is currently missing, and the theoretical claims regarding computational limits need more concrete justification. Claims And Evidence: 1. Topo-Miner significantly accelerates persistent homology computations (50x-200x speedup). Simulations suggest dramatic speedups over Ripser. However, no in vitro experimental validation has been conducted yet. 2. CRISPR-based DNA computing is reliable for TDA. DNA-based persistent homology is theoretically possible, but the paper does not demonstrate practical error rates in a wet-lab setting. The error correction model is well-structured but needs empirical verification. 3. Topo-Miner enables higher-order topology and string theory-inspired computations. While DNA encoding and CRISPR manipulations are promising, claims about approximating Calabi-Yau manifolds and AGI applications are highly theoretical. 4. Integration with STING (for GNNs) and TopoPath (for NP-hard problems) enhances broader applications. Well-supported by the paper. Maybe we can: 1. Clarify error rates and the experimental feasibility of DNA-encoded persistent homology. 2. Provide preliminary wet-lab results for at least small-scale CRISPR-based homology computations. Methods And Evaluation Criteria: The methodology is well-structured and highly novel, involving: 1. DNA encoding of graphs (nodes, edges, simplices). 2. CRISPR-mediated boundary operations. 3. Matrix reduction using dCas9/Cas12a, and 4. Tensor-based topological feature extraction. However, some critical issues remain: 1. Lack of empirical validation: All results are simulation-based. 2.
Scalability assumptions of DNA computing are not fully justified. 3. Comparisons to tensor-based TDA are missing. Some suggested improvements: 1. Show at least partial experimental verification of CRISPR-based boundary operations. 2. Compare Topo-Miner to tensor-based TDA approaches. 3. Discuss DNA strand scalability and reaction times in practical settings. Theoretical Claims: The paper makes strong theoretical claims, particularly: 1. CRISPR-based TDA reduces time complexity to O(n) for boundary operations. 2. Matrix reduction complexity drops from O(n³) to O(n²) or better. 3. DNA strand encoding provides better space efficiency. While these claims are plausible, they rely on idealized reaction conditions. Some Suggested Improvements: 1. Include error propagation analysis (e.g., how off-target CRISPR activity affects results). 2. Provide formal lower bounds for accuracy in practical settings. Experimental Designs Or Analyses: Simulation results are promising, but no real-world wet-lab experiments have been conducted. The in vitro validation plan is detailed, but there is no execution yet. Suggested Improvements: 1. Conduct at least one small-scale experimental validation before submission. 2. Provide quantitative benchmarks for DNA sequence errors. Supplementary Material: The supplementary material is comprehensive. Very Good! Relation To Broader Scientific Literature: The paper situates itself well in TDA, DNA computing, and CRISPR literature. But missing comparisons to MoELoRA-like hybrid TDA approaches. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths: 1. Highly novel fusion of TDA, CRISPR, and DNA computing. 2. Massive parallelism via DNA strands. Weaknesses: 1. Lack of empirical validation (no wet-lab results). 2. Speculative claims regarding AGI and string theory applications. 3. Scalability of DNA computing is assumed rather than proven. 
Other Comments Or Suggestions: The writing is clear and well-organized, but certain claims need tempering. Questions For Authors: 1. Have you performed any small-scale wet-lab experiments to validate CRISPR-based persistent homology? 2. What is the theoretical limit of DNA-based topological feature extraction—could it surpass traditional computing approaches for all cases? 3. How do you handle potential off-target effects in DNA-encoded boundary operations? Code Of Conduct: Affirmed. Overall Recommendation: 3
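For context on the classical baseline the speedup claims are measured against: the standard persistent-homology algorithm reduces a boundary matrix column by column over $\mathbb{Z}/2$, which is worst-case cubic in the number of simplices. An illustrative sketch on a filtered triangle (unrelated to the DNA implementation, and a textbook version rather than Ripser's optimized one):

```python
# Classical boundary-matrix reduction for persistence over Z/2.
# Columns are reduced left to right; each nonzero column must end with
# a unique lowest nonzero row index ("pivot").

def reduce_boundary(columns):
    """columns: list of sets of row indices (sparse Z/2 columns)."""
    low_to_col = {}                    # pivot row -> index of reduced column
    cols = [set(c) for c in columns]
    for j, col in enumerate(cols):
        while col and max(col) in low_to_col:
            col ^= cols[low_to_col[max(col)]]   # add earlier column mod 2
        if col:
            low_to_col[max(col)] = j
    return cols

# Filtered triangle: vertices 0-2, then edges 3-5, then the 2-cell 6.
boundary = [set(), set(), set(),      # vertices: empty boundary
            {0, 1}, {1, 2}, {0, 2},   # edges
            {3, 4, 5}]                # triangle bounded by the three edges
reduced = reduce_boundary(boundary)

# Exactly one edge column reduces to zero (a 1-cycle is born), and the
# triangle column has pivot 5, killing that cycle.
assert sum(1 for c in reduced[3:6] if not c) == 1
assert max(reduced[6]) == 5
```

The nested column additions are what make the worst case $O(n^3)$, which is the complexity the paper's molecular-parallelism argument targets.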
Rebuttal 1: Rebuttal: Dear Reviewer KA9Q, Thank you for your detailed and insightful review of our manuscript (Submission 14396). We appreciate your recognition of our work's vision and novelty, the positive comments on the supplementary material, and the constructive feedback, including the Weak Accept (3) recommendation. We understand the primary concern regarding the current absence of experimental validation and address this and other points below. **1. On Experimental Validation** We acknowledge that *in vitro* results are essential for ultimate validation. This paper focuses on establishing the necessary **theoretical and computational groundwork** for this novel approach – defining algorithms, analyzing performance, and demonstrating feasibility via simulation, which we believe is a critical first step before complex bio-computational experiments. Our simulations (Sec 5.1) provide strong preliminary support, being **rigorously calibrated using experimental parameters** from cited literature (e.g., Chen et al., 2013; Kleinstiver et al., 2016; Zhang & Winfree, 2009). This grounding in empirical measurements (kinetics, error rates) offers quantitative insights into likely system behavior and performance potential. The detailed experimental plan (Sec 5.2 & Supp) outlines our clear path to empirical verification. * **Planned Revision:** We will enhance the manuscript to more explicitly detail simulation calibration sources/methods, strengthening the link to experimental findings, and clearly position this paper as providing the foundational theory and computational validation preceding experiments. **2. On Theoretical Justification, Practical Errors, and Accuracy Bounds** We appreciate you finding our analysis plausible and the error model well-structured. * **Error Propagation & Off-Target Effects:** Our framework incorporates error sources (incl. 
off-target) and mitigation strategies (HiFi Cas, design - Sec 4.3), using literature estimates in models/simulations. * **Planned Revision:** Revise Sec 4.3, 4.4, & Supp D to more explicitly discuss error *propagation* analysis (incl. off-target impact) and how mitigation is modeled, linking to planned robustness analysis (Supp A). * **Accuracy Bounds:** The current proof (Sec 4.4, Supp D.4) provides a theoretical baseline. * **Planned Revision:** Clarify proof assumptions and state that deriving tighter bounds under experimentally-derived error rates is key future work. **3. On Scalability Assumptions** Scalability claims stem from the theoretical potential of molecular parallelism (Supp D.1/D.2) offering complexity advantages (e.g., $O(n^2)$ vs $O(n^3)$). * **Planned Revision:** Revise discussion (Sec 4.1/4.2/Conclusion) to explicitly acknowledge practical limits (kinetics, diffusion, cost), framing theoretical complexity as the paradigm's *potential* requiring experimental optimization. **4. On Missing Comparisons (Tensor-based TDA, Hybrid TDA)** Thank you for highlighting these areas. Our initial focus was the Ripser baseline. * **Planned Revision:** Add a **conceptual comparison** in Related Work/Discussion to Tensor/Hybrid TDA, contrasting computational paradigms and discussing potential distinct niches for Topo-Miner (e.g., extreme data scale, bio-integration). **5. On Speculative Claims (AGI, String Theory)** We agree these need careful framing. * **Planned Revision:** Revise Intro/Conclusion to clearly label these as **speculative, long-term possibilities** contingent on core technology success, illustrating potential impact. **Responses to Specific Questions** 1. **Small-scale wet-lab experiments:** None completed for this submission; the focus is theoretical/computational groundwork. The plan (Sec 5.2 & Supp) guides immediate next steps. 2. **Theoretical limit:** Unlikely universally superior. 
Potential niche advantage for specific large-scale problems vs classical scaling, balanced by biochemical limits (speed, errors, cost). *Revision:* Clarify this trade-off. 3. **Handling off-target effects:** Via HiFi Cas, gRNA/sequence design, modeling (Sec 4.3/Supp A); experimental quantification planned. **Conclusion** We believe Topo-Miner introduces a significant conceptual advance. This paper lays the necessary theoretical groundwork, algorithmic design, and strong calibrated simulation evidence for its feasibility. We are confident the proposed revisions—addressing scope, claims, context, errors, and scalability—will substantially improve the manuscript. **We believe this work offers a significant contribution by providing a rigorous foundation and validated computational feasibility study for a promising new computational paradigm**, justifying its value at this stage and paving the way for crucial experimental investigations. We hope the revised manuscript, strengthened by your feedback, warrants acceptance. Thank you again for your constructive and valuable feedback. Sincerely, The Authors
Summary: The paper introduces Topo-Miner, a computational framework leveraging CRISPR-enhanced DNA computing to accelerate topological data analysis (TDA). The proposed method encodes graph structures into DNA sequences and utilizes CRISPR to perform parallel boundary operations and matrix reductions, which are critical in computing persistent homology. The authors claim 50x-200x speedups over classical methods and suggest that the approach could revolutionize TDA by making it feasible for large-scale data. Claims And Evidence: - The claim of 50x-200x speedup is based purely on simulations with numerous assumptions rather than real-world experiments, making it highly speculative. - The paper asserts that CRISPR-based DNA computing can reliably execute matrix operations, but this remains unproven beyond small-scale proof-of-concept studies. - The claim that the system could generalize to AGI and string theory-inspired topological structures is overreaching and lacks theoretical justification. Methods And Evaluation Criteria: - The computational pipeline is well-structured, but benchmark comparisons are limited to Ripser, omitting other TDA tools like GUDHI or Dionysus. - The lack of wet-lab experiments weakens the credibility of the approach. A detailed experimental validation plan is outlined, but no results are provided. - There are no real-world datasets tested, only synthetic graphs and simulated results. Theoretical Claims: - The paper presents a time complexity reduction analysis suggesting improved scalability, but the assumptions about parallelism and reaction kinetics may not hold in practice. - The proof of lower-bound accuracy assumes ideal conditions for DNA hybridization and CRISPR targeting, ignoring real-world error rates and inefficiencies. - The error analysis does not consider long-term stability issues in DNA computing, such as strand degradation and off-target effects. 
Experimental Designs Or Analyses: - The entire experimental validation remains theoretical, with no actual biological implementation presented. - Simulations assume idealized CRISPR cleavage rates and perfect sequence specificity, which are not realistic. - The authors do not discuss the time and money required for leveraging DNA computing and CRISPR for TDA. Supplementary Material: No. Relation To Broader Scientific Literature: Not applicable. Essential References Not Discussed: Not applicable. Other Strengths And Weaknesses: Not applicable. Other Comments Or Suggestions: Not applicable. Questions For Authors: - How does Topo-Miner compare against GPU-accelerated TDA methods, which also provide speedups? - How do the time and money needed for wet-lab work impact scalability? Can your method truly be practical for large-scale graphs? - How do you account for errors in DNA hybridization and off-target CRISPR cleavage in practical implementations? Code Of Conduct: Affirmed. Overall Recommendation: 1
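The error-propagation concern raised in both reviews has a simple back-of-envelope form: if each molecular operation succeeds independently with probability $1-e$, a $k$-step pipeline is error-free with probability $(1-e)^k$, which decays quickly even for small $e$. A sketch with purely illustrative numbers (not measured CRISPR rates):

```python
# Back-of-envelope model of compounding per-step errors: assuming
# independent operations, P(pipeline error-free) = (1 - e) ** k.
# The per-step rate e = 0.01 below is illustrative, not a measured value.

def pipeline_success(e, k):
    return (1.0 - e) ** k

# A 1% per-step error rate is mild for one step but compounds quickly:
assert pipeline_success(0.01, 1) > 0.98
assert pipeline_success(0.01, 100) < 0.40   # (0.99)^100 ~ 0.37
```

This is why the reviews ask for empirical per-operation error rates: the number of sequential boundary/reduction steps multiplies their effect.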
Rebuttal 1: Rebuttal: Dear Reviewer 4ZuV, Thank you for your time and for providing a critical evaluation of our manuscript (Submission 14396). We acknowledge your recommendation for Reject (1) and have carefully considered the significant concerns raised regarding the speculative nature of our claims due to the reliance on simulations, the assumptions made in our theoretical models, the lack of experimental validation, missing comparisons, and practical considerations. This paper presents a **foundational theoretical framework and computational feasibility study** for Topo-Miner, a novel paradigm integrating CRISPR-DNA computing with TDA. Introducing such a radically new approach necessitates establishing the core concepts, algorithms, and potential viability *before* undertaking complex, resource-intensive wet-lab experiments. Standard practice in developing novel computational systems often involves initial theoretical modeling and simulation under simplifying assumptions to understand fundamental potential before layering all real-world complexities. We believe this foundational work, detailed herein and in the supplement, is a valuable contribution in itself. **1. On Simulation Basis, Assumptions, and Speculative Claims** We acknowledge performance claims derive from simulations and models involve simplifications. * **Simulation Calibration:** Crucially, simulations (Sec 5.1) were **calibrated using experimental kinetics** from literature (e.g., CRISPR rates - Kleinstiver '16; DNA kinetics - Chen '13), providing quantitative estimates of potential, not based on arbitrary assumptions. * **Assumptions:** Simplifying assumptions were used for initial theoretical analysis (e.g., accuracy proofs) to establish baseline potential, a standard step before incorporating full complexity. * **Speculative Claims:** We agree claims about performance and advanced applications (AGI/String Theory) require clearer framing. 
* **Planned Revision:** We will revise to: (a) Explicitly detail simulation calibration; (b) Clarify assumptions and their justification for this foundational stage; (c) Temper performance claims, framing as *calibrated potential*; (d) Clearly label AGI/String Theory as speculative, long-term possibilities. **2. On Lack of Experiments and Practicality (Time/Cost)** The absence of wet-lab results is acknowledged; this work necessarily precedes complex experiments. The plan (Sec 5.2 & Supp) outlines the next steps. Regarding time/cost (Your Question 2): * **Planned Revision:** Add discussion acknowledging current high cost/time. State that assessing practical scalability and cost-effectiveness requires data from planned experiments and is crucial future work. Frame the goal as exploring potential long-term scaling advantages for specific hard problems. **3. On Theoretical Concerns (Errors, Stability)** Error handling (Your Question 3 - hybridization/off-target) is included via mitigation strategies (HiFi Cas, sequence design - Sec 4.3) and modeling using literature error rates. * **Planned Revision:** Enhance error discussion (Sec 4.3, Supp D.3), clarifying current modeling. Note that detailed error *propagation* analysis and addressing long-term stability (e.g., degradation) are important future refinements, building upon this work and integrating experimental data. Clarify accuracy proof assumptions (Sec 4.4). **4. On Methodological Gaps (Benchmarks, Datasets)** * **Benchmarks:** Comparisons beyond Ripser are needed. Regarding GPU TDA (Your Question 1): * **Planned Revision:** Expand Related Work/Discussion with **conceptual comparison** vs GPU methods (and GUDHI/Dionysus). Contrast molecular vs. hardware parallelism and discuss potential distinct niches (e.g., extreme memory limits, bio-integration). State direct benchmarking requires experiments. * **Datasets:** * **Planned Revision:** Clarify rationale for initial synthetic data use (controlled testing). 
Real-world data tests follow core validation. **5. On Supplementary Material** We must respectfully clarify: **Comprehensive supplementary material (>20 pages) *was* submitted**, detailing theory, methods, protocols, simulation setup etc. We urge the reviewer to please re-verify access, as this contains essential details supporting our work. **Conclusion** We appreciate the rigorous critique. While acknowledging limitations like the lack of experiments, we believe this paper offers a valuable foundational contribution (framework, algorithms, calibrated simulation feasibility). Planned substantial revisions will address concerns regarding simulation clarity, tempered claims, comparisons, error discussion, practicalities, presentation (per other reviews), and the supplementary material status. We hope these improvements demonstrate the value of this groundwork. Sincerely, The Authors --- Rebuttal Comment 1.1: Comment: ***Re-posting as a rebuttal comment*** Thank you to the authors for the thorough rebuttal. However, several important concerns remain unresolved in the current version of the paper: 1. **Simulation-Only Validation and Idealized Assumptions**: While the authors clarified that the simulations are calibrated using empirical kinetics from literature, the results are still based on idealized conditions with no experimental or real-world datasets. The system's performance remains hypothetical and unvalidated. Without even small-scale wet-lab experiments or a test on practical data, the proposed speedups and accuracy claims remain speculative. 2. **Unclear Practical Viability and Cost Modeling**: The rebuttal addresses reaction kinetics and simulation calibration in good detail, but practical considerations such as error propagation, cost, throughput, and robustness of wet-lab implementations remain underexplored. 
For a method proposed as a paradigm shift in TDA, these real-world limitations are central to assessing feasibility—especially for scaling to large graphs. As noted in the review, the paper also does not quantify time or cost tradeoffs compared to GPU-based methods, which undermines its positioning in the broader ML and systems community. 3. **Limited Empirical Comparisons and Benchmarks**: The experimental evaluation remains limited to comparisons with Ripser. The rebuttal acknowledges this and proposes future additions, including comparisons with GPU-accelerated and tensor-based TDA tools (e.g., GUDHI, Dionysus), but they are not currently included. This weakens the empirical evidence for the method's claimed advantages and makes it difficult to contextualize the proposed approach within the existing landscape. The authors propose a large number of major revisions—including reorganizing the methodology, clarifying simulation assumptions, expanding benchmark coverage, reframing speculative claims, and improving presentation. These changes would significantly alter the content and framing of the paper. Given the scope of the proposed updates, I do not believe it is appropriate to adjust the overall recommendation without reviewing a revised version.
Summary: The paper presents Topo-Miner, a CRISPR-enhanced DNA computing framework designed to improve topological data analysis (TDA) by leveraging DNA computing’s parallelism and CRISPR-Cas systems' precision. The authors claim 50x-200x speedups over existing tools like Ripser and suggest broad applications. However, the paper lacks proper organization, formatting, and visual representation (figures), making it difficult to assess the clarity and rigor of the proposed methodology. Additionally, while the claims are supported by simulations, the absence of experimental validation further weakens its impact. Claims And Evidence: The paper makes ambitious claims, particularly: - Significant computational speedups (50x-200x) over traditional TDA tools: Supported by simulation-based results but lacks empirical verification - Ability to compute advanced topological features beyond persistent homology: Some justifications are provided - Potential applications across multiple disciplines: While the rationale is reasonable, the lack of experimental data makes these claims speculative Methods And Evaluation Criteria: The presentation of methods is fragmented and lacks clarity. Evaluation is primarily based on simulations, but no real experimental results are provided. The lack of structured benchmarks, formal experimental validation, and figures further detracts from the robustness of the methodology. Theoretical Claims: The paper provides complexity analyses. Experimental Designs Or Analyses: The paper lacks actual experimental results. Supplementary Material: The supplementary material includes descriptions of DNA encoding schemes, theoretical derivations, and experimental protocols. Relation To Broader Scientific Literature: The work draws from TDA (e.g., persistent homology), DNA computing, and CRISPR-based bio-computation. While the integration of these fields is conceptually interesting, the paper does not clearly position its contributions relative to existing work. 
Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Dear Reviewer Gzh8, Thank you for your review of our manuscript (Submission 14396) and the Reject (1) recommendation. We have carefully considered your feedback. We understand and acknowledge your concerns regarding the paper's presentation—specifically its organization, clarity, formatting, and lack of figures—which you rightly state hindered assessment, as well as the critical absence of experimental validation at this stage. **1. On Paper Presentation (Organization, Clarity, Formatting, Figures)** We sincerely apologize that the manuscript's presentation made it difficult to assess our methodology and its rigor effectively. We take this feedback very seriously; improving presentation clarity is a top priority for revision. **Planned Revision:** We commit to a **substantial revision** focused on presentation to allow for a clear evaluation: * **Reorganize Structure:** We will restructure the paper, particularly Section 3 (Methodology), ensuring a logical, coherent flow detailing the DNA encoding, CRISPR-based operations (boundary and matrix reduction), tensor computations, and decoding stages. This will address the fragmented presentation concern. * **Enhance Clarity & Precision:** We will revise the writing throughout for improved clarity, conciseness, and unambiguous technical descriptions. Complex concepts will be explained more straightforwardly, and terms defined consistently. * **Add Essential Figures:** Recognizing the lack of visual aids, we will introduce several key figures: * A formal diagram illustrating the overall Topo-Miner architecture/pipeline (replacing the current text-based Figure 1). * Visual examples clarifying the DNA encoding scheme for nodes, edges, and simplices. * Conceptual schematics illustrating the core mechanisms of CRISPR-based boundary and matrix reduction operations. 
* **Improve Formatting:** We will ensure consistent and professional formatting, including mathematical notation, adhering strictly to conference style guidelines. We are confident these revisions will significantly improve readability and facilitate a much clearer assessment of our framework. **2. On Lack of Experimental Validation** We acknowledge the **absence of *in vitro* results** is a major limitation of this submission. This initial paper focused on establishing the theoretical foundation, the novel computational design, and demonstrating potential feasibility/performance via carefully calibrated simulations, which we argue is a necessary *in silico* validation step before undertaking complex and resource-intensive wet-lab experiments for such an interdisciplinary approach. We appreciate you noting the supplementary material includes our detailed experimental protocols, outlining the concrete next steps for empirical validation which are central to our ongoing research. **Planned Revision:** The text will be revised to strictly delineate between simulation-based potential and the requirement for future empirical verification, accurately framing this work's contribution as providing the foundational design and theoretical basis. **3. On Claims and Evidence (Speedup, Advanced Features, Applications)** We recognize that without direct experimental data, claims regarding the precise magnitude of speedups, the realized capability for advanced feature computation (higher-order, string theory-inspired), and the breadth of applications remain **speculative**. **Planned Revision:** We *will* carefully **temper these claims**. Performance figures will be explicitly presented as *potential* outcomes suggested by our analysis and calibrated simulations under stated assumptions. Advanced features and applications will be framed as possibilities *contingent on experimental success*, illustrating potential scope rather than achieved results. **4. 
On Benchmarking and Positioning Relative to Literature** We agree benchmarking is currently limited and the paper's positioning needs sharpening relative to the rich TDA, DNA computing, and CRISPR literature. **Planned Revision:** We *will* significantly **expand the Related Work (Sec 2) and Discussion** sections. This will include clearer conceptual comparisons discussing Topo-Miner relative to other relevant TDA acceleration approaches (including tensor-based methods on classical hardware, GPU implementations) and alternative DNA computing strategies. We will highlight the unique aspects (molecular parallelism, programmability), potential advantages (e.g., scaling profile for specific problem types), and inherent challenges (kinetics, errors, cost) of our proposed molecular computing paradigm to better delineate its specific contribution and potential niche. **5. Conclusion** We appreciate your feedback, particularly the actionable comments on presentation. We are committed to the **major revisions** outlined above (presentation, claim framing, comparisons) to enable a clearer evaluation of Topo-Miner's novel framework. Sincerely, The Authors
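The "boundary and matrix reduction" operations the rebuttals above propose to implement molecularly correspond, on classical hardware, to the standard persistent-homology column reduction over Z/2. For context, a minimal illustrative sketch of that classical algorithm (not the molecular implementation, and not code from the paper):

```python
def low(col):
    # Index of the lowest 1 in a column (represented as a set of row
    # indices), or None for an empty (fully reduced) column.
    return max(col) if col else None

def reduce_boundary(columns):
    """Standard persistent-homology column reduction over Z/2.

    columns[j] is the set of row indices holding a 1 in column j of
    the boundary matrix; each surviving (low(col), j) entry is a
    birth-death pair of the filtration."""
    low_to_col = {}
    for j in range(len(columns)):
        col = columns[j]
        while col and low(col) in low_to_col:
            # Add (mod 2) the earlier column with the same lowest 1.
            col = col ^ columns[low_to_col[low(col)]]
        columns[j] = col
        if col:
            low_to_col[low(col)] = j
    return columns, low_to_col

# Filtration of a filled triangle: vertices 0-2, edges 3-5, face 6.
cols = [set(), set(), set(), {0, 1}, {1, 2}, {0, 2}, {3, 4, 5}]
reduced, pairs = reduce_boundary(cols)
print(sorted(pairs.items()))  # [(1, 3), (2, 4), (5, 6)]
```

On the triangle example, edge 5 reduces to the empty column (a 1-cycle is born) and the face pairs with it, killing the cycle; this sequential column-by-column dependence is the bottleneck that a massively parallel substrate would need to address.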
Summary: This paper presents a CRISPR-based DNA computing approach designed to accelerate persistent homology computations in topological data analysis (TDA). Specifically, the authors encode nodes, edges, and simplices as DNA molecules and leverage CRISPR to perform operations, thereby exploiting the massive parallelism of DNA computing to enhance computational efficiency. The authors claim that this method achieves a 50x-200x speedup over existing TDA tools. Additionally, they outline an in vitro experimental validation plan. ## update after rebuttal Thanks to the authors for the rebuttal. In my opinion, this work requires wet-lab experiments and precise in silico simulation results to support the effectiveness of the proposed method. I will keep my score. Claims And Evidence: The paper claims that the proposed method achieves a 50x-200x speedup in simulation experiments and presents Table 1 to support this claim by listing the computation times. However, all reported times in Table 1 are exact hundreds or thousands of seconds, which raises concerns about the reliability of the data, given that computational experiments inherently involve measurement variability. Methods And Evaluation Criteria: The paper lacks a clear definition of evaluation metrics for assessing the proposed algorithm. Theoretical Claims: N/A Experimental Designs Or Analyses: There should be a wet-lab experiment, but only a plan is provided. The only simulation experiment presented in this study raises concerns due to its implausibly uniform results. Supplementary Material: The submission does not include any supplementary material, and no source code is provided for reproduction. Relation To Broader Scientific Literature: DNA computation represents an emerging field at the forefront of molecular computing. The idea is novel, but the claims are not firmly supported. 
Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: The paper seems not ready for submission: + uses an odd template + Figure 1 is missing + "Def Node i": inconsistent representation (math environment or not?) + the abbreviations TDA and PH are defined twice, in the Introduction and in Related Works. The current manuscript demonstrates limited methodological engagement with core machine learning paradigms. While the technical contributions are noteworthy, their alignment with ICML's specific focus areas requires stronger justification. Questions For Authors: Why are the reaction kinetics of the involved molecules not considered, since they may be the main factor hindering computational efficacy? Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Dear Reviewer M9zw, Thank you very much for your time and for providing detailed critical feedback on our manuscript (Submission 14396). We sincerely appreciate the effort involved in reviewing our work. **1. Response to Question on Reaction Kinetics** Thank you for highlighting the crucial importance of reaction kinetics – your question prompted us to ensure this is clearer. Perhaps our manuscript did not emphasize this sufficiently, but **reaction kinetics *were* indeed explicitly and centrally considered in our simulations.** * **Clarification:** As detailed in Section 5.1, our performance estimates are derived directly from simulations that incorporate **experimentally measured rates** obtained from peer-reviewed literature for key molecular processes (e.g., CRISPR kinetics - Kleinstiver '16; DNA kinetics - Chen '13, Zhang '09). These published kinetics form the very basis for assessing efficacy and estimating speedup potential within our simulation framework. * **Planned Revision:** We will **significantly enhance Section 5.1** to make it unequivocally clear *how* these literature-derived kinetic parameters were integrated and directly influenced the timing estimates, ensuring this core aspect of our modeling is fully transparent. **2. On Simulation Data Reliability (`Table 1`)** We understand the concern regarding the round numbers in `Table 1` potentially suggesting a lack of reliability. We appreciate you pointing out this lack of clarity in our presentation. * **Explanation & Planned Revision:** These values represent **representative order-of-magnitude estimates** derived from our **kinetically-calibrated simulations**. They were rounded primarily to illustrate the potential **scaling trend and speedup magnitude** in a concise table, rather than representing precise timings showing statistical variability, which we agree is expected in computational experiments. 
We commit to **thoroughly revising Section 5.1 and the `Table 1` caption** to clarify the nature of these values (calibrated estimates). We will provide more detail on the estimation method and consider adding representative non-rounded data or ranges to the supplement, while reiterating that precise timings require experimental validation. **3. On Lack of Evaluation Metrics** We apologize for not explicitly defining the evaluation metrics used. Our assessment focused on: Speedup Factor (vs. Ripser), Accuracy (>95% via Bottleneck/Wasserstein), and Error Rate (<5% based on models). * **Planned Revision:** Based on your feedback, we will **add a dedicated subsection** to explicitly define these metrics and how they were assessed in our simulation studies. **4. On Presentation Issues** We sincerely appreciate you identifying specific presentation flaws (template, Fig 1, notation, abbr). We agree improvements are needed. * **Planned Revision:** We commit to a **major revision** addressing these points: adopting a standard template, **adding the requested Figure 1 diagram** and other illustrative figures, ensuring **consistent mathematical notation**, and correcting **duplicated abbreviations**. **5. On ICML Fit / ML Engagement** Thank you for raising the question of fit. We believe the work offers significant relevance to the ICML community. * **Justification:** TDA is increasingly vital for analyzing complex data ubiquitous in ML (graphs, geometric data). Addressing the **computational bottleneck** in TDA enables broader application *within* ML. Furthermore, we propose a **novel computational paradigm** (molecular computing) for algorithmic acceleration, aligning with ICML's interest in foundational algorithms and hardware. The integration via **STING (GNNs)** & **TopoPath (optimization)** provides direct links to core ML tasks. 
* **Planned Revision:** We will **significantly strengthen the Introduction and Discussion** to explicitly articulate these connections and better justify the paper's relevance to ML advancements. **6. On Lack of Experiments & Paper Readiness** We acknowledge the lack of *in vitro* results is a significant limitation at this stage. Introducing a radically new computational paradigm often necessitates establishing the theoretical underpinning and computational feasibility first. We believe this paper provides that essential groundwork: detailed theory (Supp. D), novel algorithms (Sec 3), error modeling (Sec 4.3), simulations calibrated with experimental kinetics (Sec 5.1), and a detailed roadmap (Sec 5.2, Supp.). We hope this context clarifies why we believe this foundational work is valuable, even preceding complex experiments. **Conclusion** Thank you once again for your thorough feedback and critical perspective. We acknowledge the need for significant revision, particularly regarding simulation clarity and overall presentation. We hope that the planned revisions result in a much-improved manuscript that clearly demonstrates the value of this foundational work. Sincerely, The Authors --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the rebuttal. In my own opinion, this work is not ready for publication, especially in the absence of wet-lab experiments and given that the in silico simulation results are calibrated estimates. I will keep my score. --- Reply to Comment 1.1.1: Comment: Thank you Reviewer M9zw for considering our rebuttal and providing your final assessment. We understand your position regarding the necessity of experimental validation to fully substantiate our findings based on the presented design and calibrated estimates. Obtaining this empirical data remains our top priority for future work. We appreciate your feedback throughout the review process.
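The rebuttals above describe timing estimates calibrated with literature kinetics. As a minimal sketch of how a first-order rate constant translates into a per-step time estimate, the following uses a placeholder rate value; it is not a parameter from the paper or the cited kinetics studies:

```python
import math

def completion_time(k_obs: float, fraction: float = 0.95) -> float:
    """Seconds for a first-order reaction, f(t) = 1 - exp(-k_obs * t),
    to reach the given completion fraction."""
    return -math.log(1.0 - fraction) / k_obs

# Hypothetical observed cleavage rate constant in 1/s (placeholder,
# chosen for illustration only).
k_cleave = 0.01
print(f"95% cleavage after ~{completion_time(k_cleave):.0f} s")  # ~300 s
```

Summing such per-step estimates over the reaction stages of a protocol is one simple way simulation-derived wall-clock figures can be obtained; actual timings would still require experimental validation.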
Adversarial Robustness via Deformable Convolution with Stochasticity
Accept (poster)
Summary: This paper introduces DCS (Deformable Convolution with Stochasticity), a novel adversarial defense method that integrates randomness directly into convolution operations to obscure gradient directions. By embedding stochasticity within the network architecture, DCS enhances robustness against both white-box and black-box attacks. The authors provide theoretical analysis and experimental validation across multiple datasets and adversarial settings. Claims And Evidence: The paper claims that DCS effectively mitigates gradient-based attacks, does not require post-training modifications, generalizes well across architectures, and is data-independent. These claims are supported through theoretical derivations, empirical results, and ablation studies. However, the notion of "data independence" needs clarification, as it appears to apply to hyperparameters rather than the full training process. Methods And Evaluation Criteria: The experiments are well-designed, covering a diverse range of attacks, datasets, and architectures. Evaluation metrics focus on robust accuracy, standard accuracy, and computational complexity. The approach is rigorous, but clearer documentation of the training process and hyperparameter choices would improve reproducibility. Theoretical Claims: The theoretical foundation is strong, but some areas need further elaboration. Lemma 2, in particular, lacks sufficient explanation, and its derivation should be expanded. Mathematical notation could also be refined to enhance clarity. Experimental Designs Or Analyses: The empirical results effectively demonstrate the effectiveness of DCS. However, terminology could be improved—using "pixel" to describe both feature maps and convolution kernels may cause confusion. Figure 1 also needs more explanation, especially regarding the meaning of the circles between Step 3 and Step 4. 
Supplementary Material: The appendix provides useful derivations and additional results, but the explanation of Lemma 2 should be expanded for better transparency. Relation To Broader Scientific Literature: The paper situates itself well within adversarial defense research but could better contrast DCS with other stochastic defenses, such as randomized smoothing or stochastic activation functions. Citing additional references on these topics would strengthen the discussion. Essential References Not Discussed: none Other Strengths And Weaknesses: A major strength is the seamless integration of adversarial defense within convolution operations, eliminating the need for post-hoc modifications. The evaluation is comprehensive, and the theoretical contributions are meaningful. However, issues with terminology, theoretical clarity, and figure explanations should be addressed. Other Comments Or Suggestions: Clarifying the training process and improving the explanation of key theoretical components would enhance the paper’s impact. More precise terminology and a clearer interpretation of Figure 1 would also help avoid confusion. Questions For Authors: Is Step 4 trained alternately or in a unified manner? Do the circles between Step 3 and Step 4 in Figure 1 represent an expansion of the potential distribution range? What aspect of the method is truly "data-independent"? How does DCS compare with other stochastic adversarial defenses? Would combining DCS with adversarial training further improve robustness? Ethical Review Concerns: n/a Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your detailed comments and your interest in the content of our experiments. We summarize and rebut your 8 major concerns in your comments. ## Re 0. The notion of "data independence" needs clarification.[Claims,Q3] Thank you for correcting our statement. "Data independence" means that the hyperparameters of the DCS layer are data-independent. However, the DCS layer still needs to be trained. We will revise it in the final version. ## Re 1. Clearer training process and hyperparameter choices.[Evaluation] Thank you for this nice concern. The hyperparameters of the experiments for all baselines and datasets are listed separately in Sec. 5.1. In terms of the location of the replaced DCS layer, we conduct an ablation study on DCS locations in Sec. B.3, where we changed the location of DCS and found that the second layer is the best for DCS. For clarification, we will add this setting to Sec. 5.1 in the final version as: - In our experiments, unless specifically labeled, DCS replaces the second convolutional layer. All other layers keep the original settings. ## Re 2. Explanation of Lemma 2.[Theorem,Supplementary] Thank you for your advice. We noticed 2 major concerns with the explanation of Lemma 2; we will improve them in the appendix as follows. **(1)** We note that there is no explanation of the symbols $\lfloor\cdot\rfloor$ and $\lceil\cdot\rceil$. This will be explained in line 570 when they first appear as: - $\lfloor\cdot\rfloor$ for rounding down and $\lceil\cdot\rceil$ for rounding up. **(2)** Regarding the proof of Lemma 2, we note that the step from Eq. (23) to Eq. (24) needs to be expanded: For the 3 terms on the right-hand side of Eq. 23, the goal is to leave only $n$ in the numerator part, so that the relationship between $p_u$ and $n$ becomes clear. 
We loosen the bounds by treating $-\lceil\cdot\rceil$ as $-(\cdot)$ and $\lfloor\cdot\rfloor$ as $(\cdot)$: - For the first term $$p_u \leqslant 1-\frac{n\alpha^2}{\lceil\alpha\rceil^2k^2}$$ - For the second term $$p_u\leqslant 1-(\frac{m^2\alpha^2}{k^2}+\frac{n\alpha}{k^2\lceil\alpha\rceil}-\frac{m^2\alpha}{k^2\lfloor\alpha\rfloor})\leqslant 1-(\frac{n\alpha}{k^2\lceil\alpha\rceil})$$ - For the third term $$p_u\leqslant 1-\frac{n\alpha^2}{k^2\lfloor\alpha\rfloor^2}$$ - We take the strictest of these bounds for $p_u$. Bringing in $\alpha=\frac{k}{S}$, the third term is found to be the strictest: $1-\frac{n}{S^2\lfloor\alpha\rfloor^2}$, as is shown in Eq. 24. After bringing the bounds in Eq. 24 into Eq. 22 and simplifying, Eq. 25 can be obtained. After transforming the rounding up and rounding down to the corresponding limits, Eq. 26 can be obtained. Finally, considering $\Delta^g \in [0,1]$, Eq. 26 becomes Eq. 27, which is consistent with Lemma 2. ## Re 3. Terminology[Experiment,W1] Thank you for the constructive suggestion. We will change "pixel" to "point" when referring to the DCS kernels in the final version. ## Re 4. Explanation of Figure 1.[Experiment,Q2] Thank you for your constructive comments. The circle marked as $\mathbb{K}$ in Figure 1 represents the potential distribution of all DCS kernels. DCS automatically samples a random kernel from $\mathbb{K}$ at each forward propagation. ## Re 5. Connections to other relevant domains.[RelatedWorks,Q4] Thank you for your valuable comments. We have compared DCS against other stochastic adversarial defense methods in Table 2. We divided the stochastic adversarial defenses into random weights and random structures. We are happy to add certified defenses and stochastic activation functions to the table and cite related work. Table 2 will then be expanded as the following table (only expanded part and ours). 
|Type|Method|CIFAR10||Imagenet|| |:-:|:-:|:-:|:-:|:-:|:-:| |||PGD|AA|PGD|AA| |Certified Defense|Cert-RA[1]|68.6|-|-|-| |Random Structure|stochastic activation functions[2]|67.4|-|-|-| ||DCS (ours)|**75.84**|75.46|52.38|66.79| ## Re 6. Is Step 4 trained alternately or in a unified manner?[Q1] Thank you for your valuable questions. To avoid misunderstanding, we first clarify that the random sampling processes in Figure 1 are not trainable. For the sampled kernels, we trained each kernel alternately. Only one kernel is trained during each forward propagation, regardless of whether it is normal adversarial training or GSAT. Therefore we consider this to be alternating training. ## Re 7. Would combining DCS with adversarial training further improve robustness?[Q5] Yes. AT will further improve robustness. Corresponding results were compared in Sec. 5.3.2 and Fig. 4(a). The results show that using AT or GSAT improves the performance of DCS. ### Reference [1] Certified Defenses for Adversarial Patches, ICLR 2020. [2] Adversarial Defense Via Data Dependent Activation Function and Total Variation Minimization, ICLR 2019. --- Rebuttal Comment 1.1: Comment: The authors have addressed my concerns. Thus, I increase my scores by one. --- Reply to Comment 1.1.1: Comment: We greatly appreciate your interest in the theoretical proofs and your time. It helped to make this paper more theoretically complete.
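The per-forward-pass kernel sampling described in Re 4 and Re 6 can be sketched as follows. This is a minimal NumPy illustration with hypothetical class and parameter names, not the authors' deformable-kernel implementation (which masks and deforms kernels and trains one sampled kernel per pass):

```python
import numpy as np

rng = np.random.default_rng(0)

class StochasticConv2D:
    """Keeps a pool of candidate kernels and samples one at random on
    every forward pass, so repeated inferences on the same input can
    follow different computational paths."""

    def __init__(self, num_kernels: int = 4, k: int = 3):
        # In the paper each kernel in the pool is trained (one per
        # forward pass); here they are fixed random weights.
        self.kernels = rng.standard_normal((num_kernels, k, k))
        self.k = k

    def forward(self, x: np.ndarray) -> np.ndarray:
        # Sample one kernel from the pool for this pass.
        w = self.kernels[rng.integers(len(self.kernels))]
        out_h, out_w = x.shape[0] - self.k + 1, x.shape[1] - self.k + 1
        out = np.empty((out_h, out_w))
        for i in range(out_h):
            for j in range(out_w):
                out[i, j] = np.sum(x[i:i + self.k, j:j + self.k] * w)
        return out

layer = StochasticConv2D()
x = rng.standard_normal((8, 8))
# Two passes on the same input generally use different kernels, which
# is what obscures gradient directions for an attacker.
y1, y2 = layer.forward(x), layer.forward(x)
print(y1.shape)  # (6, 6)
```

With a pool of size one the layer degenerates to an ordinary deterministic convolution, which makes the role of the sampling step easy to isolate.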
Summary: This paper proposes a random structural defense method called Deformable Convolution with Stochasticity (DCS) to improve the adversarial robustness of convolutional neural networks. DCS replaces fixed convolutional kernels with randomly sampled deformable kernels to reduce adversarial transferability between inference paths in a data-independent way. The authors theoretically analyze the trade-off between robustness and clean accuracy in DCS and propose a Gradient-Selective Adversarial Training algorithm to further enhance robustness. Claims And Evidence: The main claims are mostly supported by adequate evidence in terms of theoretical derivations and experimental results. However, the claim of generalization and data independence requires more empirical support beyond CIFAR and ImageNet. Methods And Evaluation Criteria: The white-box robustness evaluations against PGD and AutoAttack are reasonable, but additional experiments against black-box and especially adaptive attacks would give a more complete picture. Theoretical Claims: I checked the proofs in Appendix A. Experimental Designs Or Analyses: The experimental setup for evaluating adversarial robustness against PGD and AutoAttack is mostly sound. Supplementary Material: Yes, I reviewed this supplementary material. Relation To Broader Scientific Literature: The authors discuss several categories of related methods, including input/feature randomization, structure randomization, and stochastic networks. However, connections to other relevant domains like certifiable defenses could be drawn. Essential References Not Discussed: The related work section covers the most relevant prior work. Other Strengths And Weaknesses: Weakness: 1. Beyond PGD and AA, the paper lacks sufficient evaluation against other important and updated classes of attacks. 
In particular, more results are needed against common transfer-based and query-based black-box attacks, adaptive attacks specifically designed for randomized defenses, and attacks that incorporate adversarial examples in the training set. 2. While the data-independent framework is a key advantage, the paper lacks sufficient empirical evaluation of DCS's generalization to different datasets beyond CIFAR-10/100 and ImageNet. More extensive experiments on a diverse range of datasets would be necessary. 3. The computational efficiency and training/inference cost of DCS is not sufficiently addressed. Replacing fixed convolutions with randomly sampled deformable convolutions could incur nontrivial computational overhead. Other Comments Or Suggestions: The paper makes some good contributions in developing a randomized structural defense and providing theoretical insights, but has significant limitations in terms of generalization experiments, attack evaluations, analysis of computational efficiency and novelty. Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Your expert comments are constructive for our paper. We summarize and address the 4 major concerns raised in your comments.

## Re 0. Claim of generalization and data independence requires empirical support.[Claims,W2]

Thank you for suggesting additional experiments to validate our claim. To verify the sensitivity of the DCS hyperparameters to more distributions, we extended our experiments to STL-10[1] using ResNet18 as the baseline. The input size is kept the same as the raw images, i.e., $96$. The results are shown in the following table:

|Method|Clean|PGD|AA|
|:-:|:-:|:-:|:-:|
|baseline|61.95|38.65|35.49|
|DCS|**62.40**|**42.25**|**47.95**|

These results show the effectiveness of DCS on different distributions without changing the settings. We will include this experiment in the final version.

## Re 1. Additional experiments against attacks.[Evaluation,W1]

Thank you for helping us to refine our experiments. **(1)** We conducted adversarial robustness tests on CIFAR-10 using ResNet18 for the query-based, transfer-based and adaptive attacks that you mentioned. We expanded the results of Table 6 as

|Model|Clean|SQUARE(query-based)|Pixel(query-based)|transfer-FGSM(transfer-based)[2]|transfer-FGSM-base(transfer-based)[2]|BPDA(adaptive)[3]|BPDA+EOT(adaptive)[3]|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|RN18|89.11|80.53|86.32|84.95|65.71|78.66|77.78|
|WRN34|90.58|82.45|86.31|90.53|68.34|80.03|80.47|

In the table, the SQUARE and Pixel attacks are included in the initial paper. We added 4 columns to the table. The added attacks used the following settings:

- transfer-FGSM: base model=pretrained WRN50_2, epsilon=16/255
- transfer-FGSM-base: base model=pretrained baseline model, epsilon=8/255
- BPDA: epsilon=8/255, max steps=20, learning rate=0.5
- BPDA+EOT: epsilon=8/255, max steps=20, learning rate=0.5, EOT steps=3.

It can be seen that DCS appears to be robust against black-box attacks.
We attribute this to the fact that the random masking of the convolutional kernel by DCS also shields a portion of the adversarial perturbations. For adaptive attacks like BPDA and EOT-BPDA, DCS also shows high robustness. We attribute this to randomness in conjunction with gradient masking. We will include the expanded Table 6, along with its analysis, in the final version. **(2)** For attacks where adversarial samples are added to the training set, some predefined adversarial samples are added to the training set in both traditional adversarial training (AT) and GSAT. Related results of normal AT and GSAT can be found in Table 1 and Table 3.

## Re 2. Connections to other relevant domains like certifiable defenses could be drawn.[RelatedWorks]

Thanks for this nice suggestion. We will add certifiable defense to Table 2 in the final version as:

|Type|Method|CIFAR10||ImageNet||
|:-:|:-:|:-:|:-:|:-:|:-:|
|||PGD|AA|PGD|AA|
|Certified Defense|Cert-RA[4]|68.6|-|-|-|
|Random Structure|DCS (ours)|**75.84**|75.46|52.38|66.79|

## Re 3. Concern about computational overhead.[Q1]

Thanks for this nice concern. We recorded the training and inference times for DCS+GSAT vs. baseline+AT. We chose ResNet18 as the baseline. For training, we use the CIFAR-10 training set (50000 examples) with the following hyperparameters:

- epochs: 200
- batch size: 128
- optimizer: SGD
- weight decay: 5e-4
- initial learning rate: 0.1
- scheduler: multistep (lr/10 at epoch 60 and 120)

For inference, we use the CIFAR-10 test set (10000 examples) with batch size=1024. We recorded the entire time cost for training and inference in the table below.

|Model|Training Overhead/min|Inference Overhead/sec|
|:-:|:-:|:-:|
|baseline|213.47|5.94|
|DCS|379.96|6.67|

Despite the difference in training time, the time consumption during inference is very similar. The time gap in training is due to the introduction of additional parameters in the DCS layer.
However, during inference, the extra time consumption caused by the extra parameters is not as dramatic as in training, due to the absence of backpropagation. We would be delighted to add this comparison to the appendix in the final version.

### Reference
[1] An analysis of single-layer networks in unsupervised feature learning, Journal of Machine Learning Research - Proceedings Track 15, 215–223, 2011.
[2] Explaining and harnessing adversarial examples, International Conference on Learning Representations, 2015.
[3] Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, ICML 2018.
[4] Certified Defenses for Adversarial Patches, ICLR 2020.

--- Rebuttal Comment 1.1: Comment: Thanks for this response and additional experiments. Most of my concerns have been addressed, so I have increased my overall rating. --- Reply to Comment 1.1.1: Comment: We greatly appreciate your suggested additional experiments and your time. Your comments have helped us to present a more complete picture of DCS.
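For readers less familiar with EOT, the gradient-averaging step behind the BPDA+EOT settings quoted in the rebuttal (e.g., EOT steps=3) can be sketched with a toy example. This is a minimal illustration, not the paper's implementation; the function names and the noisy toy gradient are hypothetical stand-ins for a randomized defense.

```python
import numpy as np

def eot_gradient(grad_fn, x, eot_steps, rng):
    """EOT: average the gradient over several stochastic passes so the
    attack optimizes the expected loss of a randomized model."""
    grads = [grad_fn(x, rng) for _ in range(eot_steps)]
    return np.mean(grads, axis=0)

# Toy randomized "model gradient": the true gradient 2*x plus zero-mean
# noise standing in for the stochasticity of a randomized defense.
def noisy_grad(x, rng):
    return 2.0 * x + rng.normal(0.0, 0.1, size=x.shape)

rng = np.random.default_rng(0)
x = np.ones(4)
# With enough EOT steps the noise averages out toward the true gradient.
g = eot_gradient(noisy_grad, x, eot_steps=100, rng=rng)
```

With a small number of steps (as in the reported setting) the estimate is noisier; more steps trade attack cost for a better gradient estimate.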
Summary: This paper introduces deformable convolution with stochasticity (DCS) to enhance the adversarial robustness of deep neural networks. Unlike traditional random defense methods that inject randomness into input data, this work incorporates randomness directly into the network architecture by replacing fixed convolutional offsets with random masks. Through a theoretical analysis of the trade-off between robust accuracy and natural accuracy, the authors identify kernel size as a key factor in balancing this trade-off. Additionally, the paper proposes a new adversarial training strategy that enhances performance by selectively masking pixels. Experimental results across multiple datasets demonstrate that DCS achieves superior adversarial robustness and clean accuracy compared to existing baselines. ### update after rebuttal My concerns are well addressed during the rebuttal. I have updated my rating to 4: accept. Claims And Evidence: Yes, the claims are well-supported. The authors provide both empirical evidence and theoretical analysis to substantiate their findings. Methods And Evaluation Criteria: Yes, the proposed method is well-motivated by the theoretical analysis. The evaluation criteria are consistent with those widely adopted in adversarial robustness studies. Theoretical Claims: Yes, I have verified the correctness of Lemma 1 and Lemma 2, and they appear to be sound. Experimental Designs Or Analyses: Yes, the experimental design and analyses are reasonable. However, there is a potential missing aspect regarding the experimental analysis of stride in Eq. 12. While the authors mention that S is set to a fixed value to ensure the same output feature map dimensions, downsampling layers exist in networks such as ResNet-18. These layers could be replaced by DCS to conduct a more thorough ablation study on the impact of stride. 
Supplementary Material: The supplementary material includes proofs of theorems and additional experimental results, which contribute to the completeness of the work. Relation To Broader Scientific Literature: This paper builds upon research in deformable convolutions and random defense mechanisms. The application of deformable convolutions in adversarial robustness is a novel contribution, and the theoretical analysis of gradient similarity provides insights into the broader understanding of how randomness impacts model robustness. Essential References Not Discussed: No critical references appear to be missing. Other Strengths And Weaknesses: Strengths 1. This paper introduces an innovative strategy by embedding randomness into the network architecture rather than relying on data augmentation or noise injection. This design addresses key limitations of traditional random defense methods, such as data dependency and hyperparameter sensitivity. 2. The analysis of the trade-off between robust accuracy and natural accuracy provides valuable theoretical insights, potentially guiding the design of future random defense mechanisms. 3. The experiments convincingly demonstrate that DCS outperforms existing random defense methods in both adversarial robustness and natural accuracy. The consistency of results across multiple datasets and architectures further reinforces the effectiveness of the approach. Weaknesses 1. It is evident from Table 3 that GSAT is more unstable than standard adversarial training (AT). However, the paper does not provide a clear explanation for this instability. Since Algorithm 2 should be responsible for stability, and the only difference between GSAT and standard AT lies in Algorithm 1, further clarification is needed. Additionally, the connection between this instability and Eq. 12 is not well-discussed. 2. The robust accuracy of Non-AT baselines is not reported in Figure 4(a). 
Including this information would provide a more comprehensive comparison. Other Comments Or Suggestions: None. Questions For Authors: 1. Can you conduct an ablation study by replacing downsampling layers in networks such as ResNet-18 with DCS to further analyze the effect of stride? 2. Can you provide a more detailed explanation regarding the instability of GSAT observed in Table 3 and its potential connection to Eq. 12? 3. Can you include the robust accuracy results of Non-AT baselines in Figure 4(a) for a more comprehensive comparison? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your pertinent review and your interest. We summarize and address your review under 3 major concerns.

## Re 0. Ablation study by replacing downsampling layers.[Experiment,Q1]

This is an interesting problem. Your careful experimental design helps us to study the effect of stride within a fixed network structure. We compare the results of placing DCS at the downsampling layer (**layer 6**) vs. the closest normal convolution layers (layers 5 and 7). The results are shown in the table below.

|Layer|PGD|AA|
|:-:|:-:|:-:|
|5|68.12|70.72|
|**6**|**72.03**|**75.74**|
|7|69.36|72.11|

We notice that the performance of DCS on the downsampling layer is better than on the normal convolution layers. According to the lemmas, a larger stride $S$ helps to reduce the assumed small numbers $\epsilon_c$ and $\epsilon_l$ in the lemmas. This results in a smaller gradient similarity and output distance, and finally increases the robustness and clean accuracy. In addition, a larger $S$ also makes the bounds in both lemmas numerically looser in practice, which makes it easier to find a suitable $n$.

## Re 1. Explanation regarding the instability of GSAT in Table 3 and potential connection to the equations.[W1,Q2]

This is a constructive question. We will first explain the source of the instability in GSAT in connection with Eq. 13, and then analyse the potential connection of Table 3 with Eq. 12. **(1)** According to Eq. 13, GSAT modifies, by selection, the sample space sampled by the DCS layer in forward propagation during training. Below we explain the reason for the instability of GSAT shown in Table 3. We have demonstrated both theoretically and experimentally that paths in $\mathbb{X}^g$ and $\mathbb{X}^u$ are not easy to attack, while paths in $\mathbb{X}^s$ are very easy to attack with gradient-based algorithms (see Eqs. 7, 10 and Sec. 5.3.3 for details).
This means that when the network is attacked, the gradients generated by the paths in $\mathbb{X}^g$ and $\mathbb{X}^u$ will point in a different direction from the attacked gradients generated by the paths in $\mathbb{X}^s$. To minimize the influence of inaccurate attacked gradients in training, we select and remove paths in $\mathbb{X}^s$ from $\mathbb{X}$, and build the selected random space, as demonstrated in Eq. 13. This corresponds to step 11 in Algorithm 1. The selected random space is used for sampling paths in the forward and backward propagation of each training step separately. We understand that, using GSAT, the network is only partially optimized on the paths in $\mathbb{X}^s$ under each adversarial example. So when DCS samples paths in $\mathbb{X}^s$ at inference, there is a performance decrement compared to the other sampled paths. **This decrement causes the instability in GSAT.** The larger the probability of a performance drop in DCS, the greater the instability in GSAT. **(2)** As for the relationship between the results in Table 3 and Eq. 12, they are mainly connected through $n$. From Table 3, we can find that with larger $n$, GSAT becomes more unstable. We explain this as follows: when Eq. 12 gives **larger bounds** for $n$, which corresponds to larger $n$ in practice, the percentage of paths in the unselected random space increases. This increases the probability of a DCS performance decrement in networks trained with GSAT, which leads to greater instability. Thus, as shown in Table 3, $n$ should be small for a stable GSAT. The explanation of the instability will be added to the appendix, and the analysis of the connection between Table 3 and Eq. 12 will be added to the experiment section in the final version, to help audiences understand the nature of GSAT. --- In addition, we would like to argue that the paths in $\mathbb{X}^s$ are inherently difficult to optimize, given the difficulty of obtaining the correct gradients after a precise gradient-based attack.
Together with the results in Table 3, which demonstrate that the network remains robust even with the performance decrement, we consider the performance decrement to be acceptable.

## Re 2. Unreported robust accuracy of Non-AT baselines.[W2,Q3]

Thank you for your valuable suggestions. We supplemented the results of the Non-AT baseline in the table below. Cos in the table means gradient cosine similarity.

|Model|Clean|PGD|Cos|
|:-:|:-:|:-:|:-:|
|baseline w/o AT|95.12|0.04|1.00|

We will add these results as bars in Figure 4(a) in the final version. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed reply. Most of my concerns are addressed. --- Reply to Comment 1.1.1: Comment: We greatly appreciate your interest in the DCS and GSAT experiments and discussions, and your time. Your comments have helped us refine our analysis of DCS and GSAT.
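The gradient cosine similarity ("Cos") reported in the table above is a standard dot-product ratio between flattened gradient vectors; a minimal sketch (the function name is hypothetical, not the authors' code):

```python
import numpy as np

def grad_cos(g1, g2):
    """Cosine similarity between two flattened gradient vectors:
    1.0 for identical directions, 0.0 for orthogonal ones."""
    g1, g2 = np.ravel(g1).astype(float), np.ravel(g2).astype(float)
    return float(g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2)))

same = grad_cos([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])  # parallel gradients
orth = grad_cos([1.0, 0.0], [0.0, 1.0])            # orthogonal gradients
```

A Cos of 1.00 for the baseline (as in the table) means attack and inference paths share identical gradients, i.e., adversarial perturbations transfer perfectly.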
Summary: This paper introduces Deformable Convolution with Stochasticity (DCS), a defense method that injects randomness into convolutional layers by replacing fixed offsets with random masks, thereby creating a data-independent random space for deformed kernels. This paper provides a theoretical analysis using gradient cosine similarity to derive strict, data-independent bounds on the receptive field. This paper further enhances this approach with Gradient-Selective Adversarial Training (GSAT), which selectively masks pixels with similar gradient origins to reduce adversarial transferability. Extensive experiments on CIFAR and ImageNet demonstrate that DCS with GSAT significantly improves both clean and robust accuracy compared with other random defense methods. Claims And Evidence: The claims made in the submission are well-supported by clear and convincing evidence. Methods And Evaluation Criteria: In general, the evaluation criteria make sense for robust classification problems. For example, CIFAR-10, CIFAR-100 and ImageNet are widely acknowledged as benchmark datasets in this field. In addition, this paper uses a wide variety of attack methods to evaluate the robustness of baseline methods. However, since the proposed method introduces randomness, it is necessary to evaluate the proposed method using BPDA+EOT attack. Theoretical Claims: I reviewed the proofs provided for Lemma 1, which establishes the data-independent upper bound on the receptive field n, and the outline for Lemma 2 regarding the lower bound related to output inconsistency. The proof for Lemma 1 appears to be internally consistent and follows standard techniques in bounding gradient cosine similarity, though it relies on worst-case assumptions (e.g., setting Cg = 1) that simplify the analysis. One potential issue is that both proofs assume certain distributions and independence properties of the gradients that might not hold exactly in practice. 
Overall, the proofs are mathematically plausible. Experimental Designs Or Analyses: The experimental design was evaluated primarily on CIFAR and ImageNet datasets using standard architectures like ResNet18 and WideResNet34, and included tests under multiple attack scenarios. I checked the design of these experiments and found that they are sound in terms of using established datasets and widely accepted attack methods for benchmarking adversarial robustness. However, as mentioned before, it is necessary to evaluate the proposed method using BPDA+EOT attack. Supplementary Material: This paper does not contain any supplementary materials (here I assume the appendix is not supplementary material). Relation To Broader Scientific Literature: The paper builds on a rich body of literature in adversarial robustness by taking the idea of incorporating randomness to a structural level via deformable convolutions. Prior works (e.g., Xie et al., 2017; Li et al., 2019) have shown that randomness can hinder adversarial attacks, but they typically suffer from data dependency and require careful tuning of noise hyperparameters. This work extends those ideas by leveraging deformable convolutions (as introduced by Dai et al., 2017, and Zhu et al., 2018) to generate a randomized kernel space that is independent of input data. The proposed GSAT method also connects to and extends the body of research on adaptive adversarial training techniques by selectively mitigating gradient transferability issues, thereby refining existing defense strategies in a novel manner. Essential References Not Discussed: None. Other Strengths And Weaknesses: The proposed method is interesting and novel, which is significantly different from defense methods that inject randomness into data inputs. However, the proposed method is tailored for convolutional layers, and therefore it may require significant adaptations to be applied to other model architectures such as Transformers. 
Furthermore, defenses that incorporate randomness are vulnerable to adaptive attacks like BPDA [1] combined with EOT, which are designed to estimate the gradients despite the stochasticity. The proposed method should be evaluated against BPDA+EOT to demonstrate its robustness. [1] Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, ICML 2018. Other Comments Or Suggestions: None. Questions For Authors: Regarding experimental design, how did you ensure that the randomness introduced by DCS and GSAT is properly controlled and that the robustness improvements are not a result of gradient masking? I am particularly wondering what is the robust accuracy of the proposed method under BPDA+EOT attack compared to the baseline methods. In addition, could you explain in more detail how the selective masking of pixels is performed during training and how this impacts the clean accuracy and robustness? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your expert comments and your interest in the content of our experiments. We summarize and address the 6 major concerns raised in your comments.

## Re 0. Evaluate under BPDA+EOT attack.[Method,Experiment,W2,Q3]

Thanks for this nice concern. We evaluated DCS under BPDA and BPDA+EOT attacks on CIFAR-10 with the following hyperparameters: epsilon=8/255, max steps=20, learning rate=0.5, EOT steps=3. In the final version, we will expand Table 6 to include:

|Model|BPDA[1]|BPDA+EOT[1]|
|:-:|:-:|:-:|
|RN18|78.66|77.78|
|WRN34|80.03|80.47|

DCS is robust under BPDA+EOT attacks. We attribute this to randomness in conjunction with gradient masking.

## Re 1. Both proofs assume certain distributions and independence properties of the gradients that might not hold exactly in practice.[Theorem]

Thanks for this nice concern. In practice, the data should be normalized. We assume the worst case over all **normalized** data to ensure that both lemmas stand for any data distribution after normalization. This does not mean that the worst case will necessarily occur; therefore, in practice, the values of these two bounds are looser. We would like to emphasize that the worst-case assumptions in the proofs are specified on data distributions with infinitely many data points. This avoids the dependence of the bounds on specific data sets, and is the reason why the DCS setup can be data-independent.

## Re 2. Significant adaptations to apply DCS to other model architectures such as Transformers.[W1]

Thanks for this nice concern. We believe this question reveals a future direction in randomized structure defense. **(1)** We confirm that DCS is tailor-made for convolutional operations. We believe that convolution is still an important tool in image processing; the Transformer is out of the research scope of this work. **(2)** However, we briefly explored whether DCS fits the Vision Transformer (ViT). ViT uses a $16\times16$ convolution in patch embedding.
A large kernel size increases the lower bound of $n$ and hinders finding a suitable $n$. To avoid this, we note that the patch embedding can be split into multiple concatenated $3\times3$ convolutions[2]. Our baseline follows the settings in [2] and then replaces the second $3\times3$ convolution with DCS. We obtained the following results:

|Method|PGD|
|:-:|:-:|
|ViT-t+Conv[2]|32.31|
|ViT-t+DCS|**55.71**|

DCS works well with ViT-tiny, and we will add this experiment to the appendix. We would like to note that, due to time constraints, the network was only roughly trained on CIFAR-10 in two stages:

**Stage 1**: Fix a pretrained ViT-tiny and train the FC layer and convolutions from scratch with:
- epochs: 200
- batch size: 128
- optimizer: SGD
- weight decay: 5e-4
- initial learning rate: 0.01
- scheduler: multistep (lr/10 at epoch 50 and 100)

**Stage 2**: Adversarially finetune the entire network using GSAT with:
- epochs: 90
- batch size: 128
- optimizer: SGD
- weight decay: 5e-4
- initial learning rate: 0.01
- scheduler: cosine

## Re 3. How to ensure that the randomness introduced by DCS and GSAT is properly controlled?[Q1]

Thanks for this nice question. **(1)** For DCS, we control the randomness by $n$ through Lemmas 1 and 2. The other settings in DCS are determined by the replaced convolution layer, to keep the data dimensions constant. The stride stays the same. The kernel size is increased by $2$, while the padding is adapted accordingly. **(2)** For GSAT, the size of the random space is smaller but **fixed** after removing the masks, since the number of removed masks is unchanged in each step.

## Re 4. How to ensure robustness improvements are not a result of gradient masking?[Q2]

Thank you for your constructive comments on the additional experiment. To validate this, we manually cancel the randomness. Instead, we selected two fixed deformable convolutional kernels $X^i$ and $X^j$ ($n=4$) with no repeated points. $X^i$ is attacked and $X^j$ is used for inference.
The results are

|Method|PGD|
|:-:|:-:|
|DCN|52.23|
|Fixed|54.71|
|DCS|**62.93**|

The robustness mainly comes from the randomness. We will add this observation to the appendix.

## Re 5. How is selective masking performed and how does it impact the clean Acc and robustness?[Q4]

Thanks for this nice concern. **(1)** According to Algorithm 1, GSAT records the included points in the DCS kernel when generating the adversarial training examples. Then, all masks that unmask the recorded points are banned until the end of the current forward propagation. With the other masks, DCS will be trained with kernels sampled from $\mathbb{X}^g$ and $\mathbb{X}^u$. **(2)** From Eqs. (7, 10), $\mathbb{X}^g$ and $\mathbb{X}^u$ help to enlarge the gradient cosine similarity and minimize the output distances, which leads to higher robustness and clean Acc.

### Reference
[1] Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, ICML 2018.
[2] Early convolutions help transformers see better, NeurIPS 2021.

--- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their comprehensive rebuttal. My major concerns have been well-addressed: 1) The authors provide additional results to show that their method performs well under BPDA+EOT (which is used to check for gradient obfuscation). 2) The authors show that DCS fits the Transformer, which demonstrates the generalizability of the proposed method. Therefore, I am willing to increase my score to 4. --- Reply to Comment 1.1.1: Comment: We are very grateful for your constructive comments and your time. The additional experiments you suggested on BPDA+EOT and Transformer adaptation helped a lot in validating the generalizability of DCS. Your comment is instructive for the future study of stochastic structural adversarial defense.
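The selective-masking step described in Re 5 — sample $n$ kernel positions while banning masks that would unmask the recorded points — can be sketched in a few lines. This is a toy illustration under assumed representations (positions as $(i, j)$ tuples, a banned set of recorded points); the names are hypothetical, not the authors' code.

```python
import numpy as np

def sample_dcs_kernel(kernel_size, n, banned, rng):
    """Pick n distinct positions from a kernel_size x kernel_size window,
    skipping positions in `banned` (GSAT-style selective sampling)."""
    candidates = [(i, j) for i in range(kernel_size)
                  for j in range(kernel_size) if (i, j) not in banned]
    chosen = rng.choice(len(candidates), size=n, replace=False)
    return {candidates[k] for k in chosen}

rng = np.random.default_rng(0)
# Ban two "recorded" points, then sample a deformed kernel of n=4 points.
kernel = sample_dcs_kernel(kernel_size=5, n=4, banned={(0, 0), (2, 2)}, rng=rng)
```

Banning the recorded points keeps training away from the easily attacked paths in $\mathbb{X}^s$, at the cost of a smaller (but fixed-size) random space, matching the authors' description in Re 3.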
S2-Track: A Simple yet Strong Approach for End-to-End 3D Multi-Object Tracking
Accept (poster)
Summary: This paper proposes a novel end-to-end 3D multi-object tracking method, aimed at addressing complex scenarios in autonomous driving perception, such as occlusions and small object tracking. The authors decompose the existing end-to-end 3D MOT framework into three core components: query initialization, query propagation, and query matching, and introduce corresponding improvements for each part. Experimental results demonstrate that the proposed method achieves state-of-the-art performance on several datasets. Although the optimizations proposed by the authors in the three parts are combinations of existing techniques, they still effectively enhance the performance of the end-to-end multi-object tracking paradigm, which is quite impressive. ## Update after rebuttal Thank you for the authors' response, which has addressed most of my concerns. Although the authors have adopted some existing techniques, the performance improvement in their end-to-end tracking algorithm remains commendable in my assessment. Particularly noteworthy is its successful application in real-world factory settings. Additionally, I hope the code could be made open-access to advance research in the end-to-end MOT field. I keep my score. Claims And Evidence: Yes, the claims made in the submission are well-supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are well-suited for the problem of 3D multi-object tracking in autonomous driving. Theoretical Claims: Yes, I have checked the theoretical claims of the proposed method. Experimental Designs Or Analyses: Yes, I have checked the experimental designs and analyses. Supplementary Material: Yes, I have reviewed the experimental supplementary material, including Additional Details, Additional Results, and Additional Visualizations.
Relation To Broader Scientific Literature: The paper builds on recent advances in end-to-end query-based trackers and leverages ideas from 2D-to-3D perception and probabilistic modeling to address limitations in 3D MOT. Therefore, I believe one of the core contributions of this paper is that the performance of its proposed end-to-end multi-object tracking method surpasses both end-to-end and non-end-to-end methods. I think end-to-end approaches represent the future trend for both autonomous driving and multi-object tracking, and this paper validates the potential of end-to-end methods. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The proposed 2D-Prompted Query Initialization and Uncertainty-aware Probabilistic Decoder represent innovative improvements to existing end-to-end multi-object tracking methods. The integration of 2D information with 3D localization for query initialization sounds interesting. 2. The Hierarchical Query Denoising strategy is a novel contribution that addresses noise issues in training, providing a new solution for enhancing the robustness of end-to-end frameworks, although the core idea originates from DN-DETR. 3. Achieving state-of-the-art (SOTA) performance is highly significant for end-to-end multi-object tracking. Weaknesses: 1. I believe the authors should discuss the computational complexity (not only inference speed) of the proposed method in comparison to previous approaches, in order to more clearly highlight the advantages and disadvantages of their method. 2. The authors should specify whether other methods use full or reduced resolution, in order to make the comparison more fair. Other Comments Or Suggestions: See strengths and weaknesses. Questions For Authors: I noticed that the authors mentioned the inference speed of their proposed method is only 7.5 FPS on an NVIDIA A100. If deployed on an in-vehicle GPU, would it meet the requirements for real-time inference?
If not, how far is this end-to-end multi-object tracking approach from practical application? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your time and insightful feedback. We especially appreciate your recognition of our well-designed modules with innovative improvements, and that you found our performance validates the potential of end-to-end methods. We respond in detail below and will add these points to the revision.

> Q1: Discuss the computational complexity, in order to more clearly highlight the advantages and disadvantages of the method.

Thanks for your valuable suggestion! We provide the additional computational complexity of the proposed method. S2-Track only adds about 7.0% parameters and 2.4% FLOPs over PF-Track, while this trade-off results in a 12.3% improvement in AMOTA. Further improvements in tracking efficiency are a promising direction for future research. To highlight the disadvantages of our method, we have added a "Limitations and Future Work" section in the revision, hoping to inspire further exploration in this field.

| Method | FLOPs | Parameters |
| -------- | ----- | ---------- |
| PF-Track | 534G | 91.8 M |
| S2-Track | 547G | 98.3 M |

> Q2: Specify whether other methods use full resolution or reduced resolution.

Thank you for your valuable suggestion! The numbers in our tables are taken directly from the corresponding methods' papers or the official nuScenes leaderboard, and we report the highest values for each method. Based on your suggestion, we have added a column to the tables in the revision to include the corresponding resolutions, allowing readers to make a more comprehensive assessment. Below, we provide some of the resolution settings.

| S2-Track-F | PF-Track-F | S2-Track-S | PF-Track-S | ADA-Track | Sparse4Dv3 | DQTrack |
| ---------- | ---------- | ---------- | ---------- | ---------- | ---------- | ---------- |
| 1600 × 640 | 1600 × 640 | 800 × 320 | 800 × 320 | 1600 × 900 | 1408 × 512 | 1408 × 512 |

> Q3: If deployed on an in-vehicle GPU, would it meet the requirements for real-time inference?
If not, how far is this end-to-end multi-object tracking approach from practical application? Thank you for your comment! We have successfully implemented S2-Track on a real-world autonomous vehicle equipped with the NVIDIA Drive AGX Orin platform. By incorporating various engineering optimizations, such as TensorRT quantization, we have achieved real-time performance exceeding 20 Hz on the real-world vehicle. Additionally, we have provided a video showcasing the real-world results on an anonymous GitHub page: https://anonymous-github-8ab1cv.github.io/s2-track/. However, to achieve large-scale production and deployment for high-level autonomous driving (e.g., L3/L4), further improvements in efficiency or platform (e.g., NVIDIA Drive AGX Thor) will be required. Thanks again for your thoughtful feedback. We believe that your suggestions, along with our revision, have greatly enhanced the persuasiveness and completeness of our work. We hope our rebuttal can address your concerns.
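As a quick sanity check, the relative overheads quoted in the rebuttal (about 7.0% parameters, 2.4% FLOPs) follow directly from the FLOPs/parameters table:

```python
# Values from the PF-Track vs. S2-Track table in the rebuttal.
pf_flops, s2_flops = 534.0, 547.0      # GFLOPs
pf_params, s2_params = 91.8, 98.3      # millions of parameters

flops_overhead = (s2_flops / pf_flops - 1.0) * 100.0    # ~2.4 %
param_overhead = (s2_params / pf_params - 1.0) * 100.0  # ~7.1 %
```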
Summary: This paper presents a new method called S2-Track for 3D multiple object tracking (MOT), an essential component of the perception stack of autonomous driving systems. Existing methods adopt end-to-end query-based trackers to simultaneously detect and track objects, but they fail to track objects in complex scenarios such as occlusions and small target objects. To address these issues, the authors first summarize the current end-to-end 3D MOT framework by decomposing it into three parts and propose corresponding improvements to every part. Experiments on the nuScenes dataset show that the method achieves 0.663 AMOTA on the test split, surpassing the previous best end-to-end solution by 8.9%. Overall, S2-Track decomposes the current end-to-end 3D MOT framework into three parts and proposes corresponding improvements to every part, which improves AMOTA by 8.9% on the test split of the nuScenes dataset. Claims And Evidence: 1. As mentioned in the query initialization part, each query consists of a feature vector and a 3D location. I find that it is not explained clearly enough how object queries are initialized with just a 3D location in the 2D-Prompted Query Initialization section. Is the feature vector initialized randomly? 2. I find the Hierarchical Query Denoising section hard to follow. Without referring to the DN-DETR paper, it may be difficult to understand. More explanation would help readers grasp the core ideas. Methods And Evaluation Criteria: I think the proposed three improvements are indeed useful for the 3D multiple object tracking task. But I think all these improvements are ideas from other papers, and the authors simply combine them. I cannot figure out the connections between these improvements and think they are just separate tricks to improve the performance.
Theoretical Claims: I think most technical concepts are explained with appropriate detail and context, except for the Hierarchical Query Denoising section, which I find hard to understand. Experimental Designs Or Analyses: This paper follows the experimental setup from previous works. I believe the thorough ablation studies, revealing the specific contributions of each proposed improvement, and the overall superior performance compared to recent state-of-the-art methods demonstrate its effectiveness. Supplementary Material: I have read all the supplementary material the authors provided. Relation To Broader Scientific Literature: Prior work has shown that using depth information can significantly improve the localization of objects in 3D space. The proposed 2D-prompted query initialization leverages this by using predicted 2D object locations and depth information to guide the object detection process more effectively, thereby addressing the challenge of correctly initializing queries, which is a known limitation of previous transformer-based object detection methods. By using a probabilistic decoder that models and captures uncertainty, the approach is aligned with the idea that complex environments and real-world data require models that can quantify and deal with uncertainty in predictions. This helps improve robustness and performance, especially in tasks where the true object locations or classes are not easily discernible due to noise or occlusions. Essential References Not Discussed: There are no essential references that appear to be missing from the paper. Other Strengths And Weaknesses: The paper presents three efficient improvements over existing methods, but I think these proposed methods have already been broadly implemented in other computer vision fields. Other Comments Or Suggestions: None Questions For Authors: 1. How do you come up with these improvements? Are there some connections among these ideas? 2.
In Hierarchical Query Denoising section, there are few explanations. Can you explain the core ideas there more clearly? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your time and insightful feedback. We especially appreciate your recognition of our three useful improvements and superior tracking performance. We responded in detail below and will add them to the revision. > Q1: How to initialize object queries with 3D location. Thanks for your comment! We utilize the 3D location to initialize queries via the following steps: 1) normalize input coordinates to the [0, 2π] range; 2) generate frequency bands using exponential temperature scaling; 3) compute sine/cosine components for each dimension (X, Y, Z); 4) concatenate the encoded dimensions; 5) project the concatenated features through two linear layers with ReLU activation. We will include these implementation details in the revision. > Q2: More explanations of core ideas in HQD. Thanks for your suggestion! In complex 3D MOT scenarios, challenges such as occlusions and varying object sizes can hinder the learning and convergence of query-based methods. The slow convergence and suboptimal results stem from the instability of bipartite graph matching. To this end, we feed GT bounding boxes perturbed with noise into the decoder and train the model to reconstruct the original boxes, which effectively reduces graph-matching difficulty and leads to faster convergence. Moreover, we define hierarchical challenge levels for the perturbed queries to enhance the model's ability to handle diverse driving scenarios. We have included these explanations in the revision. > Q3: How do you come up with these improvements? Are there connections among these ideas? Thanks for your comment! As mentioned in the Introduction L71–101, with the goal of enhancing existing end-to-end trackers in complex driving environments, we first decompose the current query-based framework into three constituent parts: query initialization, propagation, and matching (**Fig. 1(b)**). Then we propose corresponding improvements for each part: PQI, UPD, and HQD.
These modules are connected by their shared foundation—the query-based framework, with all improvements targeting challenges posed by complex environments. As **Reviewer WzGu** acknowledged, "The three modules (PQI, UPD, HQD) address the tasks **in different stages of the query tracking lifecycle.**" > Q4: All these improvements are just other papers' ideas. Thanks for your comments! With respect, we do not agree. Current end-to-end trackers are still in the early stages and struggle with complex driving scenarios. In response, S2-Track comprehensively improves the existing framework. For the **PQI module**, we leverage predicted 2D locations and depth information to enhance query initialization. While previous works in detection have explored the use of depth information, none have leveraged it for query initialization. For the **UPD module**, the uncertainty issue has never been explored in 3D MOT, let alone addressed with an Uncertainty-aware Probabilistic Decoder for tracking. For the **HQD module**, although it draws inspiration from DN-DETR, we have improved it by introducing Hierarchical Query Denoising. As demonstrated in the ablations (Tab 5), our improvements outperform the original DN-DETR. **None of these proposed modules are just other papers' ideas.** Moreover, S2-Track is not a simple combination of these ideas; it delivers impressive tracking results, showcasing the potential of the end-to-end framework, which is acknowledged by all other reviewers: - **Reviewer WzGu**: "The design of UPD is **novel**... The framework is both simple and **strong**, .... It also achieves outperforming performance with the **refined transformer query mechanisms**." - **Reviewer WzGu**: "... incorporating **three novel modules**, ... brought by the **newly-designed modules**." - **Reviewer 4TdZ**: "(PQI&UPD) **represent innovative improvements**, (PQI) **sounds interesting**, (HQD) **is a novel contribution ..
providing a new solution**, (framework) **the core contributions...validates the potential of end-to-end methods**." Finally, we try to understand the reviewer's perspective. However, without S2-Track decomposing the current framework into constituent parts and proposing targeted improvements, it would be difficult for the community to grasp current limitations clearly. **If these problems were so easily solvable in tracking, how could S2-Track achieve such a significant improvement (+8.9% AMOTA) over the previous SOTA?** As **Reviewer 4TdZ** stated: "Effectively enhances the performance of the end-to-end multi-object tracking paradigm, **which sounds quite impressive.**" We sincerely appreciate the reviewer recognizing the substantial efforts behind our simple and strong framework. Thanks again for your thoughtful feedback and time. We believe that your suggestions, along with our revision, have greatly enhanced the persuasiveness and completeness of our work. We hope our rebuttal can address your concerns. --- Rebuttal Comment 1.1: Comment: We appreciate the reviewer's suggestion and have carefully considered it. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you for your response. However, we found your reply slightly unclear. If the message was not posted in the wrong chat box, we speculate that you intended to express agreement with the other reviewers' suggestions and to acknowledge our efforts in developing a simple yet strong end-to-end 3D MOT framework. We are glad that our rebuttal may have addressed your concerns, and we sincerely appreciate that you may consider updating your score. If you have any further questions or require additional clarification, we would be happy to provide more information. Thank you once again! Sincerely, Authors
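As a concrete reading of the five query-initialization steps listed in Q1 of the rebuttal above (normalize to [0, 2π], exponential-temperature frequency bands, per-dimension sine/cosine, concatenation, two-layer MLP with ReLU), here is a minimal NumPy sketch. The temperature, number of frequency bands, feature width, scene range, and random stand-in weights are illustrative assumptions, not values from the paper.

```python
import numpy as np

def encode_3d_location(xyz, num_freqs=64, temperature=10000.0, rng=None):
    """Sketch of the 5-step query initialization described in the rebuttal.
    xyz: (N, 3) coordinates already normalized to [0, 2*pi]."""
    rng = np.random.default_rng(0) if rng is None else rng
    # 2) frequency bands via exponential temperature scaling
    freqs = temperature ** (np.arange(num_freqs) / num_freqs)        # (F,)
    # 3) sine/cosine components for each dimension (X, Y, Z)
    angles = xyz[:, :, None] / freqs[None, None, :]                  # (N, 3, F)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)  # (N, 3, 2F)
    # 4) concatenate the encoded dimensions
    enc = enc.reshape(len(xyz), -1)                                  # (N, 6F)
    # 5) two linear layers with ReLU (random weights as stand-ins for learned ones)
    w1 = rng.standard_normal((enc.shape[1], 256)) * 0.02
    w2 = rng.standard_normal((256, 256)) * 0.02
    return np.maximum(enc @ w1, 0.0) @ w2                            # (N, 256) query features

# 1) normalize raw coordinates to [0, 2*pi] given an assumed scene range
raw = np.array([[10.0, -5.0, 1.5]])
lo, hi = np.array([-50.0, -50.0, -5.0]), np.array([50.0, 50.0, 5.0])
xyz = (raw - lo) / (hi - lo) * 2 * np.pi
q = encode_3d_location(xyz)
print(q.shape)  # (1, 256)
```

This mirrors the sinusoidal positional-encoding convention common in transformer detectors; the learned projection would of course be trained, not random.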
Summary: The paper aims to improve the existing end-to-end 3D multi-object tracking framework. Specifically, the authors propose 2D-prompted query initialization, an uncertainty-aware probabilistic decoder, and hierarchical query denoising. Experimental results on the nuScenes benchmark show the effectiveness of the proposed framework. ## update after rebuttal Please see the rebuttal comment below. Claims And Evidence: Yes. Methods And Evaluation Criteria: The proposed methods and evaluations are reasonable. Theoretical Claims: No theoretical claims and proofs involved. Experimental Designs Or Analyses: The proposed method is only evaluated on the nuScenes dataset, which may not be enough to demonstrate the effectiveness of the proposed method on the MOT task. Supplementary Material: The reviewer reviewed all the supplementary material, including the video. Relation To Broader Scientific Literature: The paper further improves the existing multi-object tracking framework by incorporating three novel modules. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strength: 1. The paper is well-written, with clear structure and illustrations. Weakness: 1. The proposed method is only evaluated on one dataset, which is not enough to show robustness in the multi-object tracking task. 2. The reviewer acknowledges the improvement brought by the newly-designed modules, but the general framework is still based on existing methods. The insights for this task and the community are limited. Other Comments Or Suggestions: The qualitative results in Figure 4(a) are hard to see. The reader cannot easily tell the ground truth. Maybe show the ground truth as a separate image, or a plain image without any annotation. Questions For Authors: 1. In the analysis of uncertainty (line 323-325, left column), the authors do not provide an analysis of the observation that other modules also effectively reduce the uncertainty.
This weakens the motivation of the designed UPD module, as other modules could also achieve that. Do the authors have insights into why this is the case? 2. In section 4.4.1, the authors conduct an ablation study on upper-bound and lower-bound thresholds. Does it require searching over parameter pairs? That would be very time-consuming. Also, these parameters may need to be re-searched for different data distributions. 3. The major concerns of the reviewer are listed in the Other Strengths And Weaknesses section. Although the limitations might not be easily addressed during rebuttal, the reviewer would appreciate any explanation or interpretation of these limitations. The reviewer may adjust the final rating after rebuttal based on the clarifications from the authors. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your time and insightful feedback. We especially appreciate your recognition of our effective framework with newly-designed modules and well-written paper. We responded in detail below and will add them to the revision. > Q1: The proposed method is only evaluated on the nuScenes dataset. Thanks for your suggestion! First, since the nuScenes dataset provides comprehensive detection and tracking tasks, along with evaluation leaderboards, most previous detection and tracking methods (e.g., DQTrack [ICCV23], PF-Track [CVPR23], and ADA-Track [CVPR24]) have only been evaluated on nuScenes. Therefore, following them, we also perform a fair comparison on this dataset. Second, we agree with the reviewer's point that methods evaluated on only one dataset have not demonstrated robustness across multiple datasets. To address this concern, we present additional evaluations on an in-house autonomous driving dataset, which is collected from real-world scenarios. The results show that our method effectively tracks objects in challenging environments, demonstrating the generalization and robustness of our S2-Track. Additionally, we have provided a video showcasing the real-world results on an anonymous GitHub Page: https://anonymous-github-8ab1cv.github.io/s2-track/.

| Method | MOTA | MOTP | RECALL |
| ------------ | ----- | ----- | ------ |
| PF-Track | 0.549 | 0.476 | 54.8% |
| **S2-Track** | 0.712 | 0.334 | 77.3% |

> Q2: The general framework is still based on existing methods. The insights for this task and the community are limited. Thank you for your comments! However, we respectfully disagree with the "limited insights". While S2-Track employs the query-based end-to-end framework like previous works, as mentioned in the Introduction, current end-to-end trackers are still in the early stages of development and are unable to effectively handle the various complex driving scenarios and achieve satisfactory tracking results.
Therefore, S2-Track comprehensively enhances the existing end-to-end 3D MOT framework, delivering impressively robust and accurate tracking results and demonstrating the potential of end-to-end frameworks. As **Reviewer 4TdZ** acknowledged, "The paper builds on recent advances in end-to-end query-based trackers and ... to address limitations in 3D MOT. **Therefore, I believe one of the core contributions of this paper is that the performance of its proposed end-to-end multi-object tracking method surpasses both end-to-end and non-end-to-end methods. I think end-to-end approaches represent the future trend for both autonomous driving and multi-object tracking, and this paper validates the potential of end-to-end methods.**" > Q3: Analysis of why other modules also effectively reduce the uncertainty. Thanks for your comment! Our **PQI module** leverages learned priors, i.e., 2D object locations and depth information, to enhance the initialization of queries, thus effectively **reducing the uncertainty in query initialization** and resulting in more accurate object localization and tracking. The **HQD strategy** introduces different levels of noise to the queries and then applies a denoising process, allowing the model to encounter varying magnitudes of noise (i.e., uncertainty) during training. This effectively helps the model **reduce uncertainty during query matching**, leading to more stable and accurate tracking performance. Although the motivation of these two modules is not uncertainty, they both help the model reduce uncertainty during query initialization and matching. Moreover, they are incorporated together with the UPD module, which aims to **reduce uncertainty during query propagation**. We will incorporate this discussion into the revision. > Q4: The parameters of HQD require searching. Thanks for your comment! The HQD module indeed has two thresholds that need to be set.
As shown in Tab 5, the impact of these two thresholds on the results remains relatively stable within a certain optimal range, meaning that we do not need to perform an extremely fine-grained search for their values. Simply identifying the approximate optimal range is sufficient. Furthermore, on our in-house autonomous driving dataset, we used the same values as those in nuScenes and also achieved satisfactory results. While a more detailed search might yield further improvements, the current settings already provide stable and satisfactory performance gains in most cases. > Q5: Add the GT in Fig 4. Thanks for your suggestion! We have carefully included the ground truth as separate images in Fig4 in the revision, making it easier for readers to understand. Thanks again for your thoughtful feedback. We believe that your suggestions, along with our revision, have greatly enhanced the persuasiveness and completeness of our work. We hope our rebuttal can address your concerns and sincerely appreciate that you can rejudge our efforts and potentially update your scores. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed rebuttal. The authors have addressed most of my concerns. I preserve my point of limited novelty, but acknowledge the improvement from newly-designs modules. I have also read reviews from other reviewers and author's rebuttal. I would like to change my rating to weak accept. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you for your thoughtful feedback and for updating your scores. We appreciate that you acknowledge the improvement from our newly-designed modules, and we hope our S2-Track will inspire future research in this field. Thank you! Sincerely, Authors
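To make the hierarchical query denoising idea from the rebuttals above concrete (perturb GT boxes at several challenge levels bounded by a lower and an upper threshold, then train the model to reconstruct the originals), here is a small NumPy sketch. The box format, the uniform noise model, and the threshold values are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def perturb_gt_boxes(boxes, num_levels=3, lower=0.1, upper=0.5, seed=0):
    """Hierarchical perturbation sketch: one noisy copy of the GT boxes per
    challenge level, with the noise magnitude interpolated between the
    lower and upper thresholds.
    boxes: (N, 6) array of (cx, cy, cz, w, l, h)."""
    rng = np.random.default_rng(seed)
    scales = np.linspace(lower, upper, num_levels)  # one noise scale per level
    groups = []
    for s in scales:
        noisy = boxes.copy()
        # shift centers proportionally to box size, jitter sizes multiplicatively
        noisy[:, :3] += rng.uniform(-s, s, boxes[:, :3].shape) * boxes[:, 3:]
        noisy[:, 3:] *= 1.0 + rng.uniform(-s, s, boxes[:, 3:].shape)
        groups.append(noisy)
    # the denoising objective would train the model to recover `boxes`
    return np.stack(groups)  # (num_levels, N, 6)

gt = np.array([[0.0, 0.0, 0.0, 4.0, 2.0, 1.5]])
noisy = perturb_gt_boxes(gt)
print(noisy.shape)  # (3, 1, 6)
```

The point of the hierarchy is that easier (lightly perturbed) and harder (heavily perturbed) denoising groups coexist in one training batch, which is consistent with the "approximate optimal range" behavior of the two thresholds described above.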
Summary: This paper proposes a simple yet strong end-to-end 3D multi-object tracking framework named S2-Track, which decomposes the tracking pipeline into three core modules: query initialization, propagation, and matching. Experiments show the effectiveness of each module in complex scenarios, including 2D-Prompted Query Initialization (PQI), Uncertainty-aware Probabilistic Decoder (UPD), and Hierarchical Query Denoising (HQD). The proposed framework is simple yet strong. It achieves excellent tracking performance when dealing with occlusions and small objects. ## update after rebuttal The authors have addressed most of my concerns; I decide to keep the positive rating. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. It shows that the proposed method works excellently in small object tracking. Theoretical Claims: No theoretical claim in this paper. Experimental Designs Or Analyses: Yes. For the performance comparison: on the nuScenes dataset, S2-Track achieves state-of-the-art performance with an AMOTA of 66.3%, outperforming previous methods by 8.9%. Supplementary Material: Yes. It provides a tracking video demo. I have a question here. It seems that even in the initial frame of the video, one very nearing lady is missed by the baseline PF-Track. I wonder if it is due to the difference in the detection part, rather than tracking. I mean the authors are supposed to use the same detection baseline to convince readers that the tracking part of the proposed method is better. Relation To Broader Scientific Literature: NA. Essential References Not Discussed: One of the 3D object detection methods that also follows a query-based paradigm and exploits a depth net is missing: 3DPPE, published in ICCV 2023. The authors are supposed to discuss it. Other Strengths And Weaknesses: Strengths: 1. The three modules (PQI, UPD, HQD) address the tasks in different stages of the query tracking lifecycle. These designs improve the tracking performance in complex scenarios.
Meanwhile, the design of UPD is novel, as it integrates uncertainty perception through a probabilistic attention mechanism. This allows the model to maintain stable predictions even in challenging scenarios such as occlusion, small targets, and distant objects. 2. The framework is both simple and strong, avoiding complex designs such as multi-stage tracking pipelines. It also achieves outperforming performance with the refined transformer query mechanisms. 3. Extensive ablation studies and visualization results show the effectiveness of each module. The results (66.3% AMOTA) on the nuScenes test set achieve SOTA tracking performance in query-based methods. Weaknesses: 1. The paper lacks quantitative analysis of performance degradation in extreme scenarios (e.g., heavy occlusion, low-light nighttime conditions). The paper also lacks the results of category-aware AMOTA (e.g., pedestrian vs. vehicle) on the nuScenes validation set. 2. Is the assumption of a Gaussian distribution reasonable, or are there other distributions that might be more suitable for UPD? Other Comments Or Suggestions: I suppose that the proposed method can be also generalized to some query-based multi-modal 3D object detection methods. And a few of them also report the tracking results. The authors are suggested to add an experiment to show its generalizability. Questions For Authors: Please refer to the above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your time and insightful feedback. We especially appreciate your recognition of our simple and strong framework with effective modules and SOTA tracking performance. We responded in detail below and will add them to the revision. > Q1: More analysis of other extreme scenarios. Thanks for your suggestion! In Table 4, we have already analyzed different occlusion situations, i.e., different visibilities. Here, we provide additional experiments under different weather and lighting conditions, and the results show that S2-Track is robust under different lighting and weather conditions, significantly boosting the performance under challenging rainy and nighttime scenes. The metric is AMOTA.

| Method | Day | Night | Sunny | Rainy |
| ------------ | ---------- | ---------- | ---------- | ---------- |
| PF-Track | 41.3 | 12.6 | 41.8 | 36.1 |
| **S2-Track** | 46.5(+5.2) | 19.7(+7.1) | 46.7(+4.9) | 42.6(+6.5) |

> Q2: Results of category-aware AMOTA (e.g., pedestrian vs. vehicle). Thanks for your valuable comment! We provide detailed category-aware AMOTA results on both the val and test sets for better comparisons, as previous methods and the leaderboard report comprehensive test-set results. The results show that S2-Track achieves larger improvements in more challenging categories, e.g., pedestrian.

| AMOTA | car | pedestrian | bicycle | bus | motorcycle | trailer | truck |
| --------------------- | ----------------- | ----------------- | ------- | ---- | ---------- | ------- | ----- |
| Val-PF-Track-CVPR23 | 57.9 | 41.5 | - | - | - | - | 40.3 |
| Val-**S2-Track** | 62.0(+4.1) | 47.0(+5.5) | 38.6 | 55.3 | 40.6 | 32.5 | 44.6 |
| Test-PF-Track-CVPR23 | 62.2 | 45.1 | 32.2 | 40.8 | 44.8 | 38.0 | 40.5 |
| Test-ADA-Track-CVPR24 | 66.4 | 53.4 | 33.4 | 38.2 | 48.4 | 43.7 | 35.9 |
| Test-**S2-Track** | 77.4 (+15.2/11.0) | 70.1 (+25.0/16.7) | 57.6 | 65.8 | 67.5 | 64.3 | 61.0 |

> Q3: Other distributions for UPD. Thanks for your suggestion!
We conducted additional experiments to explore different distributions. The results show that other distributions did not achieve satisfactory performance, which may be attributed to natural statistical properties following the central limit theorem, i.e., many natural phenomena (e.g., lighting variations, sensor noise) arise from the superposition of numerous small and independent effects, leading to a normal distribution.

| Distribution | Gaussian | Uniform | Exponential |
| ------------ | -------- | ------- | ----------- |
| AMOTA | 45.8 | 22.3 | 4.7 |

> Q4: Generalized to query-based 3D object detection. Thanks for your suggestion! We have already presented our detection results on the nuScenes test and val sets in Tables 9 and 10. As a framework designed for tracking, our model also achieves leading detection performance (62.7% mAP and 68.0% NDS on the test set), which clearly demonstrates our strong generalizability. While integrating our powerful modules into existing SOTA detection methods could lead to further improvements, due to computational resource constraints and the scope of this tracking paper, we leave this exploration to future work. > Q5: Missed reference: 3DPPE. Thanks for your comment! While 3DPPE [1] also involves depth priors in a query-based framework, it differs from S2-Track in several aspects. First, 3DPPE focuses on 3D object detection, whereas we tackle 3D MOT. Second, 3DPPE introduces 3D point positional encoding, while our PQI is designed for query initialization. Moreover, we also retain randomly initialized queries to explore missing objects. We will add this discussion into the revision. [1] 3DPPE: 3D Point Positional Encoding for Transformer-based Multi-Camera 3D Object Detection > Q6: One very nearing lady is missed by the baseline PF-Track. I wonder if it is due to the difference in the detection part. Thanks for your comment!
In Table 7, S2-Track achieves comparable results with both detection heads, PETR (the default head of PF-Track) and DETR3D (the default head of S2-Track), indicating that the missed lady in the demo is not caused by the detection component. In fact, if you carefully review the video, you will notice that the nearby lady is detected in the first frame. However, due to interference from other vehicles and pedestrians, her bounding box is lost in subsequent frames. This is precisely where our carefully designed modules enhance performance, enabling stable and robust tracking in challenging scenarios. Thanks again for your thoughtful feedback. We believe that your suggestions, along with our revision, have greatly enhanced the persuasiveness and completeness of our work. We hope our rebuttal can address your concerns.
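For readers unfamiliar with the kind of Gaussian output modeling ablated in Q3 above, the usual construction predicts a mean and log-variance per quantity and samples with the reparameterization trick, trained with a Gaussian negative log-likelihood. The toy NumPy sketch below illustrates only this general pattern; the weights, shapes, and function names are stand-ins and not the paper's UPD implementation.

```python
import numpy as np

def gaussian_head(feat, w_mu, w_logvar, rng):
    """Predict per-output mean and log-variance, then draw a
    reparameterized sample: sample = mu + std * eps, eps ~ N(0, I)."""
    mu = feat @ w_mu                 # predicted mean, (N, D)
    logvar = feat @ w_logvar         # predicted log-variance, (N, D)
    std = np.exp(0.5 * logvar)
    sample = mu + std * rng.standard_normal(mu.shape)
    return mu, logvar, sample

def gaussian_nll(mu, logvar, target):
    # negative log-likelihood of `target` under N(mu, exp(logvar)), up to a constant
    return 0.5 * np.mean(logvar + (target - mu) ** 2 / np.exp(logvar))

rng = np.random.default_rng(0)
feat = rng.standard_normal((5, 16))
w_mu = rng.standard_normal((16, 3)) * 0.1
w_lv = rng.standard_normal((16, 3)) * 0.1
mu, logvar, sample = gaussian_head(feat, w_mu, w_lv, rng)
# the mean is always at least as likely as any sampled point
print(gaussian_nll(mu, logvar, mu) <= gaussian_nll(mu, logvar, sample))  # True
```

Heavier-tailed or bounded alternatives (e.g., uniform, exponential) change only the likelihood term, which is what the distribution ablation above compares.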
unMORE: Unsupervised Multi-Object Segmentation via Center-Boundary Reasoning
Accept (poster)
Summary: The paper proposes a novel framework for unsupervised object segmentation. It proposes a two-stage solution by incorporating an objectness network that is trained on an object-centric dataset (ImageNet) to predict the existence, location, and boundary of each object, and a reasoning module to generate final predictions based on several heuristics. Experiments show that the proposed method outperforms existing baselines on the COCO and SA-1B datasets. Claims And Evidence: The paper's main claim is well supported by empirical performance gains on a wide range of benchmarks and metrics, suggesting the proposed method is effective. Methods And Evaluation Criteria: The authors conducted experiments on many common benchmarks such as COCO, SA-1B, and LVIS, following the standard practice of the community. The choices of benchmarks are sound. Theoretical Claims: There are no proofs or theoretical claims. Experimental Designs Or Analyses: The authors provide a set of comprehensive ablations. Supplementary Material: I checked the provided video. Relation To Broader Scientific Literature: This paper establishes a new state of the art in the field of unsupervised image segmentation by identifying the key limitations of existing approaches such as slot-based ones and clustering-based ones. The proposed two-stage solution is well-motivated and novel, and is a clear departure from existing methods. I see this work as having meaningful impact on the unsupervised segmentation community. Essential References Not Discussed: N/A Other Strengths And Weaknesses: There are small issues with regard to the completeness of experiments. For example, the authors did not include LVIS results of UnSAM in Table 2, while the UnSAM paper reported such results. Other Comments Or Suggestions: I'm curious about how the authors chose the selected metrics for reporting. It seems the results are mostly focused on AP, and only include AR_100. This setup largely follows CutLER.
However, it appears the authors of UnSAM focused more on AR. I'm wondering what the performance on AR_mask (not AR_mask_100) is on COCO. In particular, Table 1 of the UnSAM paper showed that their results (measured by AR_mask) are only marginally behind the supervised SAM baseline. Since this work offers considerable gains over UnSAM, it may be able to further reduce the gap between supervised and unsupervised methods when measured by AR. Incorporating these comparisons would further strengthen the contribution of this work. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's thoughtful comments and address all concerns below. An anonymous PDF with figures and tables is available at: <https://github.com/icml5450/icml5450/blob/main/FiguresTables.pdf> # Q1: Include UnSAM in Table 2 A1: We report zero-shot results of UnSAM in the attached ***Table 4*** (will replace Table 2 in the main paper), with two more metrics AR$^{box}$/AR$^{mask}$ as requested. We can see that UnSAM achieves the highest AR$^{box}$ or AR$^{mask}$ scores on all datasets, but its other important metrics are rather low. This is because UnSAM tends to oversegment objects, as also confirmed in the attached ***Table 1***, as well as qualitative results in ***Figure 5*** and ***Figure 6***. # Q2: Add AR$^{box}$ and AR$^{mask}$ in Table 1 A2: We present the attached ***Table 5*** (will replace Table 1 in the main paper) by adding three more metrics: AR$^{box}$, AR$^{mask}$, and "\# of pred obj.". The AR$^{box}$/AR$^{mask}$ scores are used in the original UnSAM paper to measure the average recall rate without limiting the number of predictions, but the AR$^{box}\_{100}$/AR$^{mask}\_{100}$ scores only consider the top 100 predictions per image and are commonly adopted for object segmentation. In addition, AP scores evaluate the ability to discover more objects with fewer trials (i.e., detections), constituting a balanced view between accuracy and recall. In this regard, existing object segmentation works typically focus on AP scores as they are more informative and less biased. From the attached ***Table 5***, we can see that UnSAM achieves very high AR$^{box}$/AR$^{mask}$ scores, primarily because it tends to predict an excessive number of objects by grouping granular image segments. This clearly explains its rather low scores on all other critical metrics commonly-used for object segmentation. 
This is also qualitatively validated in the attached ***Figure 5*** and ***Figure 6***, where UnSAM tends to generate oversegmented patches. We appreciate UnSAM's effort in effectively reducing the gap between unsupervised and supervised methods in terms of "unlimited detection", mainly measured by AR scores. Nevertheless, our method targets at comprehensive and accurate object discovery, measured by the balanced evaluation protocols for object segmentation. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I keep my recommendation for acceptance. The results and discussions with regard to UnSAM are insightful. --- Reply to Comment 1.1.1: Comment: Dear reviewer CHsG, Thank you for dedicating your time and effort to review our paper. We are grateful for your positive feedback and insightful suggestions, which have greatly contributed to the improvement of our manuscript. Best, Authors
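A toy illustration of the AR vs. AR_100 distinction discussed in A2 above (bipartite matching is not modeled here; `is_match` simply marks whether a prediction was matched to some ground-truth object, and all names are illustrative): a method that emits very many predictions can keep unlimited-detection AR high while AR_100 collapses once unmatched predictions crowd out the top-100 ranking.

```python
def recall_at_k(scores, is_match, num_gt, k=None):
    """Recall over all predictions (k=None, i.e. AR-style) or over the
    k highest-scoring predictions per image (AR_100-style with k=100)."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    if k is not None:
        order = order[:k]
    return sum(is_match[i] for i in order) / num_gt

# image with 6 GT objects: 100 high-scoring unmatched predictions,
# plus 6 low-scoring predictions that do match the GT objects
scores   = [0.9] * 100 + [0.2] * 6
is_match = [False] * 100 + [True] * 6
print(recall_at_k(scores, is_match, num_gt=6))         # 1.0  (unlimited AR)
print(recall_at_k(scores, is_match, num_gt=6, k=100))  # 0.0  (top-100 only)
```

This is the mechanism behind the observation that oversegmenting methods can score very high AR yet low AR_100 and AP.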
Summary: This paper proposes a multi-object segmentation approach that first trains objectness networks to identify the existence, object center, and object boundary of individual objects, and then uses the trained networks to discover objects in images without further training modules. The paper claims that the approach can discover multiple objects more accurately than baselines without having access to image annotations. ### Most of my concerns are addressed. I have updated my rating. Claims And Evidence: The claims are mostly supported by evidence. Methods And Evaluation Criteria: The proposed methods make sense for the problem. Theoretical Claims: No theoretical claims are provided. Experimental Designs Or Analyses: Experiments are properly designed to support the claims. Supplementary Material: No comment on Supplementary Material. Relation To Broader Scientific Literature: The object-centric representations discussed in the paper are different from those studied in slot-based methods. The object-centric representations learned by slot-based methods can be used not only for segmentation but also for generalized composition or generation [1][2][3]. The object-centric representations in the proposed paper are mainly for object discovery and segmentation. [1] Jiang, J. and Ahn, S., 2020. Generative neurosymbolic machines. Advances in Neural Information Processing Systems, 33, pp.12572-12582. [2] Wang, Y., Liu, L. and Dauwels, J., 2023, July. Slot-VAE: Object-centric scene generation with slot attention. In International Conference on Machine Learning (pp. 36020-36035). PMLR. [3] Wu, Y.F., Lee, M. and Ahn, S., 2024. Neural language of thought models. arXiv preprint arXiv:2402.01203. Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Strength** 1. The writing is smooth and easy to follow. 2. The paper proposes a pipeline to deal with multi-object segmentation without explicit human annotations. 3.
Extensive experiments are conducted to demonstrate the effectiveness of the approach. **Weakness** 1. The novelty seems somewhat limited considering that objectness networks have been widely studied in the literature. The proposed approach simply uses them for object discovery. 2. The no-supervision claim is somewhat overstated. Though the paper emphasizes that it is a fully unsupervised approach and does not require human labels, the approach does implicitly involve obtaining labels. First, the use of pretrained models makes the approach less unsupervised, and the accuracy of the object representation hinges on the performance of VoteCut; second, the approach **needs to create a twin negative sample by cropping the largest rectangle on background pixels excluding the tightest object bounding box**, as stated in the paper. Considering these points, is it fair to compare the proposed approach with fully unsupervised approaches? 3. Though the Multi-Object Reasoning Module is training-free by directly using the objectness network, it relies on random initialization and iterative updates of multiple bounding boxes. Does this scale well to images containing a large number of objects? Appendix A.16 studies the number of iterations the approach needs. Is there any analysis of segmentation speed? Other Comments Or Suggestions: For the novelty concern, I would like to hear more from other reviewers, and I am open to changing my rating. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's thoughtful comments and address all concerns below. An anonymous PDF with figures and tables is available at: <https://github.com/icml5450/icml5450/blob/main/FiguresTables.pdf> # Q1: Representations for segmentation and generation A1: This is an interesting point. First, this paper indeed mainly tackles unsupervised multi-object segmentation. Second, our object-centric representations hold potential for generative tasks. We could train a diffusion model to create object center and boundary fields given various prompts. This approach can be expanded to generate complex multi-object images by first creating individual objects and layouts, allowing for controlled sampling from the learned representations to ensure high-quality boundaries and shapes. However, this is non-trivial and left for future exploration. # Q2: Novelty of objectness networks A2: We will consider an alternative title "Unsupervised Multi-Object Segmentation via Center-Boundary Aware Reasoning", highlighting our core contribution to the challenging task of unsupervised multi-object segmentation and our key technique of center-boundary aware reasoning algorithm. Regarding objectness networks, as extensively discussed in Section 2 of the main paper, relevant works mainly use the concept of center or boundary in isolation to tackle fully-supervised tasks. In contrast, we extend the concepts of object existence, object center field, and object boundary field in a joint manner to tackle the challenging unsupervised task. Overall, our proposed two-stage pipeline, object-centric representation learning followed by multi-object reasoning, clearly departs from existing unsupervised methods such as slot-based approaches. # Q3: Hinges on VoteCut A3: To clarify, our object representation is independent of any specific self-supervised features or grouping strategies. 
We conduct the following ablation study on four types of pseudo-masks: - SelfMask[CVPRW22]: For each image, we employ the strong unsupervised saliency detection model SelfMask to predict a salient region as the pseudo label. - MaskCut: For each image, we use the first object discovered by MaskCut as the pseudo label. - VoteCut: It's used in our paper. - VoteCut+SAM: For each image, a rough mask is generated by VoteCut, and its bounding box is used as a prompt for SAM to predict the final pseudo mask. While this setup yields the best pseudo labels, SAM is a fully supervised model, so this ablation is for reference only. As shown in the attached ***Table 8***, our method is amenable to all types of rough masks, though their quality affects OCN$\_{disc}$ performance. While SAM scores highest, its improvement over VoteCut is not substantial, as it still relies on bounding box prompts from VoteCut. Importantly, our method does not depend on specific pretrained features, enabling the use of enhanced pretrained models in the future. # Q4: Twin negative sample A4: Different unsupervised methods leverage self-supervised features and raw images in various ways. In our approach, generating twin negative samples from rough masks is effortless and can be regarded as data augmentation without additional human annotations. Thus, this operation should not be seen as a weakness. # Q5: Fairness to fully unsupervised approaches A5: Since there is no official definition of what constitutes an unsupervised approach, the fairness criteria for comparison are derived from the extensive body of existing unsupervised methods, notably including recent works such as CutLER, CuVLER, and UnSAM. These methods share a core principle: avoiding the use of human labels while fully utilizing self-supervised features and the derived pseudo labels in various ways. 
Our method adheres strictly to this core principle, relying solely on self-supervised features without introducing any additional human annotations. To clarify, all baselines in our paper utilize pretrained self-supervised features. Therefore, it is evidently fair to compare our method with all the unsupervised baselines listed in our paper. # Q6: Scale to a large number of objects A6: We present a detailed evaluation on COCO* validation dataset based on object count in the attached ***Table 1***. We can see that, as the number of objects per image increases (e.g., $\geq5$ objects), our OCN$\_{disc}$/ OCN consistently outperforms all baselines by growing margins, showing the superiority of our method in dealing with a large number of objects. # Q7: Segmentation speed A7: Time consumption is detailed in the attached ***Table 3***. Our OCN$\_{disc}$ takes 10 hours to train the objectness network and is slower for Direct Object Discovery. However, our subsequent detector OCN requires only 30 hours to train, benefiting from the high-quality pseudo labels from OCN$\_{disc}$, while baseline detectors take over 60 hours. Ultimately, the inference speed of our OCN matches that of CutLER and CuVLER. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed rebuttal. Most of my concerns are addressed. I have updated my rating. --- Reply to Comment 1.1.1: Comment: Dear reviewer su1q, We sincerely appreciate your time, efforts, and positive feedback on our paper. Your valuable suggestions and insights have significantly helped us to improve our manuscript. Best, Authors
Summary: This paper presents OCN, a new two-stage framework for unsupervised multi-object segmentation in images. The proposed pipeline consists of two stages: the first stage involves learning three levels of object-centric representations—object existence, object center field, and object boundary distance field. In the second stage, a center-boundary aware reasoning algorithm is introduced to iteratively discover multiple objects in single images without relying on neural networks or human annotations. OCN demonstrates superior performance compared to existing unsupervised methods across six benchmark datasets, including COCO, achieving state-of-the-art results in object segmentation, especially in crowded scenes where other methods struggle. ## update after rebuttal: Thanks for the rebuttal. My concerns have been mostly addressed. I recommend accepting this paper. Great work! Claims And Evidence: The claim of superior performance compared to existing unsupervised methods is well-supported with quantitative results on 6 benchmark datasets, including COCO. Tables and comparisons are provided. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria in the paper make sense for the problem of unsupervised multi-object segmentation in single images. Theoretical Claims: Not applicable. Experimental Designs Or Analyses: The design is robust, and it is particularly important to evaluate whether the method performs exceptionally well in scenarios involving crowd images (i.e., cases with multiple objects), as demonstrated in Table 5 of Appendix A.8. Supplementary Material: The supplementary material includes details on training the object network and an ablation study analyzing the average number of iterations required. Relation To Broader Scientific Literature: I think the findings in this paper can be applied in the video domain, too. Essential References Not Discussed: The related work included is sufficient to understand the key contributions. 
Other Strengths And Weaknesses: Strengths: 1. The paper provides a detailed explanation of the method's motivation and description. The experiments are comprehensive, including comparisons with state-of-the-art methods across diverse benchmarks and an ablation study. 2. The method is both sound and intuitive. Using three levels of objectness to feed into the object reasoning network makes sense, and leveraging DINO's self-supervised features allows it to perform well in crowded scenarios. Weaknesses: 1. The three levels of object priors are heavily reliant on the pretrained features. This dependence could introduce biases from the training dataset, potentially limiting the model’s generalization capabilities. Other Comments Or Suggestions: The paper could provide more discussion of failure cases and limitations, which would give a more balanced view of the capabilities. Questions For Authors: 1. Does the method heavily rely on VoteCut when training the Objectness Network? Is there an ablation study to replace the pseudo-masks generated by other methods? 2. In Appendix A.16, the paper mentions that the average number of iterations in the object reasoning module is typically 10. Could this be considered time-consuming? Additionally, what is the average throughput of this module? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewer's thoughtful comments and address all concerns below. An anonymous PDF with figures and tables is available at: <https://github.com/icml5450/icml5450/blob/main/FiguresTables.pdf> # Q1: Applied on videos A1: We agree with the reviewer and conduct the following experiments on the YouTubeVIS-2021 dataset to verify the effectiveness of OCN for unsupervised video object segmentation. - Dataset: The YouTubeVIS-2021 dataset consists of 2,985 training videos and 421 val videos whose labels are held for competitions. Thus, we split the original training videos into two subsets: YouTubeVIS-2021 Train\# (2,795 videos) and YouTubeVIS-2021 Val\# (200 videos). - Baselines: 1) We compare with CutLER's extension to the video domain: VideoCutLER (CVPR24). 2) We also adapt CuVLER to the video domain: VideoCuVLER. 3) We adapt OCN to the video domain: VideoOCN. - Experiments: We follow VideoCutLER: Step-1: generate pseudo labels for unlabeled images. Step-2: generate synthetic videos with images and pseudo labels from Step-1. Step-3: train a Mask2Former model for video segmentation on synthetic videos and labels from Step-2. - Results: As shown in the attached ***Table 7***, the baselines VideoCutLER and VideoCuVLER are trained on 2 types of training sets, ImageNet and YouTubeVIS-2021 Train\#. VideoOCN consistently outperforms all baselines on most metrics without extensively tuning hyperparameters due to limited time. Qualitative results for video segmentation can be found in the attached ***Figure 7***. # Q2: Pretrained features and generalization A2: This is an insightful point. Like almost all pretrained models, the learned features always depend on the training datasets and can hardly generalize to extremely different domains due to the fundamental data-driven learning principle. 
As shown in the attached ***Table 4***, our OCN (trained on natural images) demonstrates excellent zero-shot performance on unseen datasets with diverse types of natural images. Nevertheless, for datasets with significant domain gaps (e.g., medical images), our learned object priors from natural images may not achieve comparable results as expected. To enhance generalization, one potential solution could be to increase the diversity of training datasets. However, this is a non-trivial task and is left for future exploration. # Q3: Failure cases and limitations A3: We present failure cases in the attached ***Figure 8*** and discuss limitations as follows. 1. The Direct Object Discovery of our OCN$\_{disc}$ takes time. It could be possible to leverage reinforcement learning techniques to learn an efficient policy network to discover objects. 2. Our method struggles to separate overlapping objects with similar textures, as shown in the attached ***Figure 8***. Adding additional language priors may help alleviate this issue. # Q4: Ablation study on pseudo-masks A4: We conduct the following ablation study on four types of pseudo-masks: - SelfMask[CVPRW22]: For each image, we employ the strong unsupervised saliency detection model SelfMask to predict a salient region as the pseudo label. - MaskCut: For each image, we use the first object discovered by MaskCut as the pseudo label. - VoteCut: It's used in our paper. - VoteCut+SAM: For each image, a rough mask is generated by VoteCut, and its bounding box is used as a prompt for SAM to predict the final pseudo mask. While this setup yields the best pseudo labels, SAM is a fully supervised model, so this ablation is for reference only. As shown in the attached ***Table 8***, our method is amenable to all types of rough masks, though their quality affects OCN$\_{disc}$ performance. While SAM scores highest, its improvement over VoteCut is not substantial, as it still relies on bounding box prompts from VoteCut. 
Importantly, our method does not depend on specific pretrained features, enabling the use of enhanced pretrained models in the future. # Q5: Time consumption and throughput A5: We present the time consumption in the attached ***Table 3***. Our OCN$\_{disc}$ takes 10 hours to train the objectness network and is slower for Direct Object Discovery. However, our subsequent detector OCN requires only 30 hours to train, benefiting from the high-quality pseudo labels from OCN$_{disc}$, while baseline detectors take over 60 hours. Ultimately, the inference speed of our OCN matches that of CutLER and CuVLER. Regarding the throughput, for each image on average, the number of initial proposals is 1122.7, whereas the number of predicted objects from OCN$_{disc}$ is 8.9. Most initial proposals have low existence scores and are discarded at the first iteration. The Non-Maximum Suppression (NMS) will also remove redundant proposals.
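A7 above notes that Non-Maximum Suppression removes redundant proposals before the final predictions. For readers unfamiliar with that step, here is a minimal sketch of standard greedy NMS (a generic illustration under my own naming — `nms`, the `[x1, y1, x2, y2]` box format, and the 0.5 threshold are assumptions, not the authors' implementation):

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes; boxes are [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedily keep the highest-scoring box, drop boxes overlapping it, repeat."""
    order = np.argsort(scores)[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) <= iou_thresh]
    return keep
```

Because suppression only requires overlap with an already-kept higher-scoring box, this pass is cheap even when the initial proposal pool is large (the rebuttal reports ~1122.7 initial proposals per image on average).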
Summary: The paper proposes OCN, which improves unsupervised multi-object discovery by introducing three objectness scores to measure existence, centers, and boundaries, along with a reasoning module to distinguish objects. The model is trained by bootstrapping rough masks from DINOv2 and refined through distillation with inductive biases, leading to superior performance over CutLER and CuVLER. Claims And Evidence: The paper does not focus on "object-centric representation" as it only learns object segments, unlike SlotAttention, which learns embeddings for each segment. The title and terminology should be corrected to reflect its focus on "unsupervised multi-object segmentation." The main contribution is improving unsupervised segmentation in scenes with many objects, but this is not thoroughly analyzed: - The tables present only average performance per dataset. Showing performance based on object count would be more informative. While the paper uses COCO*, a multi-object extension of COCO, the analysis is insufficient. - The figures display only object centers and boundaries for a single object, including those in Appendix A.13. More qualitative results demonstrating objectness in multi-object segmentation would be beneficial. Methods And Evaluation Criteria: 1. The paper aims for fine-grained object discrimination through objectness measurement, but the results are unconvincing due to the lack of performance analysis by object count and qualitative results on multi-object images, as mentioned above. 2. The model relies on rough masks for supervision, which may introduce errors if they fail to distinguish objects. It would help to show that while the original masks suffer from merging issues, refinement through objectness improves segmentation both qualitatively and quantitatively. 3. The paper introduces multiple complex modules beyond prior work. It should compare training and inference time, not just list trainable modules in Table 1. 4. 
While CutLER and CuVLER suffer from undersegmentation, UnSAM does not. Why does Table 2 omit a comparison with UnSAM? 5. Why is UnSAM’s performance in Table 1 so low? Can the authors justify this? Also, why reimplement its results instead of comparing directly with the benchmarks reported in the UnSAM paper? Theoretical Claims: N/A Experimental Designs Or Analyses: Mentioned above. Supplementary Material: Checked all results. Relation To Broader Scientific Literature: Object discrimination is a fundamental problem in computer vision with applications across various visual tasks, including scientific problems like cell segmentation. Essential References Not Discussed: Well-cited, as far as I know. Other Strengths And Weaknesses: I appreciate that the authors have updated the paper rather than simply resubmitting a rejected version, addressing prior reviews. Here are my thoughts on additional strengths and weaknesses after reading A.17. Technical contribution (strength): - While concerns about technical novelty are valid, the paper makes a reasonable contribution by demonstrating how combining objectness components improves multi-object discovery benchmarks. - Prior work is well surveyed and properly discussed in Sec 3.2. Presentation (weakness): - The presentation could be further refined for clarity and readability. - Figure 1 lacks clarity and should convey a clear message without relying on the text. - The ablation study in Table 3 is difficult to read. Instead of describing variants 1–8 in the text, clarify them directly in the table using checkmarks or dashes. - Visualizations in Figures 11–17 are strong. Consider curating some for the main paper to provide additional insights beyond numerical results. Adding more baselines, especially UnSAM, would further illustrate how OCN outperforms it qualitatively. Other Comments Or Suggestions: Mentioned above. Questions For Authors: Questions 1-5 in "Methods And Evaluation Criteria." 
Visual comparison between OCN and UnSAM (extension of Figures 11–17). Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's thoughtful comments and address all concerns below. An anonymous PDF with figures and tables is available at: <https://github.com/icml5450/icml5450/blob/main/FiguresTables.pdf> # Q1: Title and terminology A1: Thanks for this advice. We will consider an alternative title "Unsupervised Multi-Object Segmentation via Center-Boundary Aware Reasoning". In addition, we will ensure that the relevant terminology throughout the paper is updated accordingly. # Q2: Table and Figure based on object count A2: We present a detailed evaluation on COCO* validation dataset based on object count in the attached ***Table 1*** with two more metrics AR$^{box}$/AR$^{mask}$ as requested by reviewer **CHsG**. We can see that, when the number of objects in each image is rather small (e.g., [0 - 4]), the results of top-performing baselines VoteCut/CuVLER are comparable to our method, all yielding high scores. However, as the number of objects per image increases (e.g., $\geq5$ objects), our OCN$_{disc}$/ OCN consistently outperforms all baselines by growing margins, demonstrating the superiority of our method in dealing with challenging crowded images. Notably, UnSAM achieves high AR$^{box}$/AR$^{mask}$ scores (used in the original UnSAM paper to measure the average recall rate without limiting the number of predictions), but its AR$^{box}\_{100}$/AR$^{mask}\_{100}$ scores (only considers the top 100 predictions per image and commonly adopted for object segmentation) are clearly lower. This is because UnSAM focuses on excessively partitioning images by clustering granular segments, which sacrifices the accuracy of object discovery, but tends to oversegment objects. This is also qualitatively validated in attached ***Figure 5*** and ***Figure 6***. We present more qualitative results in the attached ***Figure 1*** for multi-object reasoning. # Q3: Refinement through objectness A3: This is an insightful point. 
We compare VoteCut and our OCN$\_{disc}$ on COCO train2017 and ImageNet val splits. As shown in the attached ***Table 2***, our OCN$\_{disc}$ is on par with VoteCut on ImageNet val, validating that our OCN$\_{disc}$ indeed learns valid objectness from rough masks generated by VoteCut on the train split of ImageNet. Since most images of ImageNet have a single object, it is expected that our OCN$\_{disc}$ performs similarly to the pseudo label generator VoteCut. However, on the challenging COCO train2017, our OCN$\_{disc}$ clearly outperforms VoteCut, validating that the learned (refined) objectness by our OCN$\_{disc}$ can better deal with undersegmentation on multi-object images, whereas VoteCut cannot. As shown in the attached ***Figure 2***, rough masks from VoteCut on both COCO train2017 and ImageNet val are prone to undersegmentation, while OCN$\_{disc}$ shows a stronger ability to distinguish multiple objects. # Q4: Training and inference time A4: We report the training and inference time in the attached ***Table 3***. Our OCN$\_{disc}$ takes 10 hours to train the objectness network and is slower for Direct Object Discovery. However, our subsequent detector OCN requires only 30 hours to train, benefiting from the high-quality pseudo labels from OCN$_{disc}$, while baseline detectors take over 60 hours. Ultimately, the inference speed of our OCN matches that of CutLER and CuVLER. # Q5: Add UnSAM to Table 2 of the main paper A5: We report zero-shot results of UnSAM in the attached ***Table 4*** (will replace Table 2 in the main paper), with two more metrics AR$^{box}$/AR$^{mask}$ as requested by reviewer **CHsG**. We can see that UnSAM achieves the highest AR$^{box}$ or AR$^{mask}$ scores on all datasets, but its other important metrics are rather low. This is because UnSAM tends to oversegment objects, as also confirmed in the attached ***Table 1***. 
# Q6: UnSAM in Table 1 of the main paper and its reproduction A6: To clarify, all results of UnSAM in our paper are based on its official checkpoints and code. We will rephrase sentences. In the attached ***Table 5*** (will replace Table 1 in the main paper), we add three more metrics: AR$^{box}$, AR$^{mask}$, and "\# of pred obj.". Again, we can see that UnSAM achieves very high AR$^{box}$/AR$^{mask}$ scores, primarily because it tends to predict an excessive number of objects. This clearly explains its rather low scores on all other critical metrics commonly used for object segmentation. # Q7: Figure 1 improvement A7: We present an updated version in the attached ***Figure 3*** which will replace Figure 1 of the main paper. # Q8: Table 3 improvement A8: We present an updated version in the attached ***Table 6*** which will replace Table 3 of the main paper. # Q9: Visualizations in Figures 11–17 A9: We present new visualizations in the attached ***Figure 4/5/6*** by re-organizing existing materials and adding results of UnSAM on both COCO* validation and zero-shot datasets. We will include them in the main paper for better illustration.
An Error Analysis of Flow Matching for Deep Generative Modeling
Accept (spotlight poster)
Summary: This paper presents the first end-to-end analysis of Continuous Normalizing Flows (CNFs) built upon Flow Matching. The theoretical results demonstrate that the generated distribution is guaranteed to converge to the true distribution under a mild assumption. Furthermore, the convergence rate is significantly improved assuming a mild Lipschitz condition on the target score function. ## update after rebuttal My assessment has not changed. The authors have successfully addressed the raised issues, and my recommendation remains an "Accept". Claims And Evidence: All the claims are supported by the theoretical results in this paper. Methods And Evaluation Criteria: There are no experiments in this paper. Theoretical Claims: I checked the proofs for the theoretical claims and did not find any mistakes. Experimental Designs Or Analyses: There are no experiments in this paper. Supplementary Material: There is no supplementary material in this submission. Relation To Broader Scientific Literature: The consistency of FM is mainly based on a mild assumption, i.e. boundedness, which justifies the use of CNFs based on FM. Theorem 1.7 highlights the effectiveness of CNFs based on FM in learning the underlying smooth distribution. Essential References Not Discussed: I find the literature review comprehensive. Other Strengths And Weaknesses: This paper relaxes the strong assumptions on the underlying velocity field made in previous work. Other Comments Or Suggestions: None. Questions For Authors: Is assuming early stopping equivalent to considering $\sigma_{min}$ as in the original FM paper? Can the analysis be simplified for some predefined small $\sigma_{min}$ where we resort to a noisy approximation of the target distribution? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive evaluation. **1. Is assuming early stopping equivalent to considering $\sigma_{min}$ as in the original FM paper? Can the analysis be simplified for some predefined small $\sigma_{min}$ where we resort to a noisy approximation of the target distribution?** **A:** Thank you for providing the insightful question. Your clarification makes sense. The analysis can be simplified when the stopping time is pre-defined. However, as we would prefer the generated distribution to converge to the target distribution, we let the stopping time converge to $0$ as the sample size increases.
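For readers outside this subarea, the connection the question draws can be made concrete. In the optimal-transport path of the original FM paper (recalled here from Lipman et al. (2023) as background, not taken from this submission), the conditional probability path is

$$p_t(x \mid x_1) = \mathcal{N}\!\left(x;\; t\,x_1,\; \big(1-(1-\sigma_{\min})\,t\big)^2 I\right),$$

so the flow terminates at $t=1$ in $\mathcal{N}(x_1, \sigma_{\min}^2 I)$, a fixed Gaussian smoothing of the target. Early stopping at time $1-\epsilon$ plays an analogous smoothing role, except that the smoothing level is allowed to shrink to $0$ as the sample size grows.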
Summary: This paper presents an analysis of Continuous Normalizing Flows (CNFs) built upon Flow Matching (FM) for deep generative modeling. It proves the generated distribution of FM converges to the target distribution in the Wasserstein-2 distance for general target distributions with bounded support. The convergence rate is significantly improved under a mild Lipschitz condition of the target score function. Claims And Evidence: The claims made in the paper appear to be well-supported by theoretical analysis, mathematical proofs, and assumptions outlined throughout the text. Below is an evaluation of the main claims and their supporting evidence: 1. End-to-End Error Analysis: Theorem 1.6, 1.7. 2. Improved Convergence Rate Under Lipschitz Conditions: Theorem 1.7. Methods And Evaluation Criteria: Yes. Theoretical Claims: I have skimmed the proofs and believe they are correct. Experimental Designs Or Analyses: None. Supplementary Material: None. Relation To Broader Scientific Literature: FM is highlighted as a pivotal development that enhances the training and efficiency of CNFs. Prior works on FM and interpolated transport paths, notably by Liu et al. (2023) and Karras et al. (2022), are referenced, indicating that this paper builds on and extends these concepts. This situates the work within a lineage of research that seeks to improve existing flow models by addressing their sampling efficiency. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths: 1) It presents itself as the first end-to-end error analysis of CNFs using FM. This novelty is significant, as it fills a gap in the existing literature regarding the theoretical underpinnings of FM and its implications for generative models. 2) The authors focus on mild assumptions, such as bounded support and Lipschitz conditions, which increases the applicability of their findings to a wider range of practical situations. 
This accessibility is a strength as it allows for broader implications in real-world applications. Weaknesses: 1) There are no experiments to support the theoretical results. Other Comments Or Suggestions: None. Questions For Authors: 1) In your analysis, you emphasize the importance of the Lipschitz continuity of the velocity field. Can you elaborate on how varying the Lipschitz constant impacts the convergence rate and the overall performance of the model? 2) What are the most pressing open questions or future research directions that you believe need to be addressed to further advance the understanding and applicability of FM in generative modeling? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive evaluation and insightful questions. **1. There are no experiments to support the theoretical results.** **A:** We appreciate the reviewer’s concern regarding the absence of experiments. Our primary focus in this work is to establish a rigorous theoretical foundation for Flow Matching (FM), providing theoretical guarantees for its effectiveness. **2. In your analysis, you emphasize the importance of the Lipschitz continuity of the velocity field. Can you elaborate on how varying the Lipschitz constant impacts the convergence rate and the overall performance of the model?** **A:** As stated in Theorem 4.1, an increase in the Lipschitz constant leads to greater neural network complexity. This, in turn, expands the hypothesis class, making estimation more challenging and potentially impacting the convergence rate. **3. What are the most pressing open questions or future research directions that you believe need to be addressed to further advance the understanding and applicability of FM in generative modeling?** **A:** One crucial direction is to relax the current assumptions while maintaining convergence guarantees, thereby broadening the applicability of FM.
Summary: This paper presents the first comprehensive analysis of Continuous Normalizing Flows (CNFs) based on Flow Matching. The theoretical results establish that the generated distribution converges to the true distribution under a mild assumption. Additionally, the convergence rate is notably improved when a mild Lipschitz condition is assumed on the target score function. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: I have reviewed the proof for Theorems 1.6 and 1.7 and find no mistakes. Experimental Designs Or Analyses: NA Supplementary Material: NA Relation To Broader Scientific Literature: These results not only enrich the toolbox for theoretical analysis of Flow Matching but also provide new perspectives and methods for practical applications. Essential References Not Discussed: There are some references that should be cited: [1] Gat I, Remez T, Shaul N, et al. Discrete flow matching. NeurIPS, 2024. [2] Shi Y, De Bortoli V, Campbell A, et al. Diffusion Schrödinger bridge matching. NeurIPS, 2023. [3] Klein L, Krämer A, Noé F. Equivariant flow matching. NeurIPS, 2023. Other Strengths And Weaknesses: Strength 1: The paper is well-structured and easy to follow. The authors have conducted a thorough literature review, covering key works on Flow Matching and Diffusion, including some of the most recent advancements in the field. Strength 2: The results presented in the paper include consistency, generalization bounds, and sample complexity bounds, which comprehensively address the key theoretical questions related to this line of methods. Notably, most of the results are derived under mild assumptions, making the findings both robust and broadly applicable. Weakness 1: The results depend on specific assumptions (bounded support, Lipschitz continuity) which may not hold in all practical scenarios. Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive evaluation. **1. The results depend on specific assumptions (bounded support, Lipschitz continuity) which may not hold in all practical scenarios.** **A:** We appreciate the reviewer’s insightful comments on the assumptions of bounded support and Lipschitz continuity. These conditions are standard in the literature, but we acknowledge the importance of exploring more general settings. In future work, we aim to extend our analysis to relax these assumptions and investigate their impact on our results.
Summary: This paper provides an analysis of flow matching. The authors prove that generative models based on flow matching converge to the target distribution under mild assumptions. ## update after rebuttal ## I have reviewed the rebuttal and decided to maintain the original score. Claims And Evidence: I'm very unfamiliar with this domain, although I tried my best to understand its content and provide thoughtful reviews. Methods And Evaluation Criteria: The paper does not propose specific methods or evaluation criteria, as it is a theoretical contribution. The validity of the results depends on the correctness and rigor of the proofs rather than empirical benchmarks. Theoretical Claims: The discussion of the theoretical aspects in the paper appears to be reasonable. Experimental Designs Or Analyses: This is a theory-related paper, with no experiments provided. Supplementary Material: Yes. Relation To Broader Scientific Literature: The key contributions of this paper are closely related to the broader scientific literature in generative modeling. Essential References Not Discussed: I did not identify any major related works that are missing. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: The introduction is too brief. Many existing works have analyzed FM, and the introduction should clearly explain how this work differs from previous theory-related works. The authors should provide an overview at the end of the introduction. Currently, the transition from the main contributions directly to "1.1 Assumptions" is abrupt, making it unclear what this part aims to achieve. Both Section 2 and Section 3 are titled "Preliminaries", which is confusing. Would it not be more logical to merge them into a single section? Questions For Authors: Please address my concerns or correct me if there is anything wrong in other comments. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable suggestions. **1. Many existing works have analyzed FM, and the introduction should clearly explain how this work differs from previous theory-related works.** **A:** While recent works [1,2,3] have analyzed ODE-based generative models, they typically assume that the score function or velocity function is uniformly Lipschitz over all time steps. In contrast, our analysis drops these assumptions, which makes the analysis more general. **2. The authors should provide an overview at the end of the introduction.** **A:** Here is a brief overview, which we will include in our revision. In Preliminaries, we introduce the necessary notations and background on Flow Matching (FM) and Continuous Normalizing Flows (CNFs). Section 4 analyzes the approximation error that arises when using neural networks to approximate the true velocity field. Section 5 studies the estimation error in learning the velocity field from data. Section 6 studies the discretization error introduced when solving the ODE flow numerically. **3. Currently, the transition from the main contributions directly to "1.1 Assumptions" is abrupt, ... Would it not be more logical to merge them into a single section?** **A:** We agree that the transition is currently abrupt and will revise the structure to ensure readability. Specifically, we will either integrate the key assumptions into the contributions section or provide a brief motivation before introducing them. [1] Albergo, M. S., Boffi, N. M., and Vanden-Eijnden, E. Stochastic interpolants: A unifying framework for flows and diffusions. [2] Chen, S., Daras, G., and Dimakis, A. G. Restoration-degradation beyond linear diffusions: A non-asymptotic analysis for ddim-type samplers. [3] Lu, C., Zheng, K., Bao, F., Chen, J., Li, C., and Zhu, J. Maximum likelihood training for score-based diffusion odes by high order denoising score matching.
DCTdiff: Intriguing Properties of Image Generative Modeling in the DCT Space
Accept (poster)
Summary: This paper introduces a new paradigm in diffusion models by using DCT coefficients, specifically low-frequency components, as operands instead of pixel or latent representations. Inspired by JPEG compression, this method aims to improve efficiency. The model achieves 512x512 resolution without latent representations. The paper also claims that diffusion models can be interpreted as spectral autoregression. Claims And Evidence: The core idea is interesting, which is the primary reason why I recommend this paper, but some claims lack convincing theoretical or experimental support. **Lack of qualitative evidence**: Unlike related works such as VAR (Tian et al., 2024) and LCM (Luo et al., 2023), this paper does not provide enough qualitative results in the main text. Typically, such studies include extensive uncurated results to prevent cherry-picking concerns. However, the main paper lacks qualitative figures, and the appendix only includes Figs. 9-11, missing key results for FFHQ 512. **Overclaim on spectral autoregression**: Unlike VAR, which is truly autoregressive in nature, the proposed model is not structurally autoregressive. Diffusion models naturally generate low-frequency components early and high-frequency components later, a well-known phenomenon (Patashnik et al., ICCV 2023). This claim lacks novelty and does not provide practical benefits for DCTdiff. Methods And Evaluation Criteria: The core idea of using DCT coefficients as diffusion model inputs is strong. However, evaluation has weaknesses. The metrics and baselines are limited (only FID and UViT), which may not ensure a fair comparison. UViT was chosen for parameter consistency, but there is no clear reason why other pretrained models were not evaluated. A broader evaluation with models trained on the same datasets (even if #Parameters, GFLOPs, and training steps differ) would improve the study. 
Additional metrics, such as Inception Score, could be included, but more qualitative results would be the most impactful improvement. Theoretical Claims: Theorems 5.1 and 5.2 appear correct but do not significantly contribute to the main claims. Theorem 5.1: States that diffusion removes high-frequency components first and low-frequency components later. This is well known and not specific to DCTdiff. Theorem 5.2: Reformulates existing DCT-based upsampling methods in a mathematical form. This principle is already widely used and not novel. Overall, these theorems do not offer strong contributions. Experimental Designs Or Analyses: The main experiments and analyses are generally sound. Supplementary Material: Reviewed the supplementary material, particularly looking for more qualitative results, but found them insufficient. More results should be provided. Relation To Broader Scientific Literature: The paper mainly compares DCTdiff with UViT-based latent diffusion using FID scores. However, the advantages of DCTdiff may extend beyond what is discussed. For example, latent diffusion models using VAEs typically produce high-quality images but introduce issues with encoder-decoder invertibility. Hong et al., NeurIPS 2024, propose a correction algorithm for this, suggesting that DCTdiff could bypass such issues entirely. ### refs: Hong, Seongmin, et al. "Gradient-free Decoder Inversion in Latent Diffusion Models." Advances in Neural Information Processing Systems 37 (2024): 82982-83007. Essential References Not Discussed: The claim that "diffusion models are spectral autoregression" is not novel. Unlike true autoregressive models like VAR, diffusion models do not have structural autoregression. They just *tend* to make low-frequency in the early stages, and then *tend* to refine high-frequency in the later stages. The authors should cite Or Patashnik et al. (ICCV 2023) and tone down this claim. ### ref: Patashnik, Or, et al. 
"Localizing object-level shape variations with text-to-image diffusion models." Proceedings of the IEEE/CVF international conference on computer vision. 2023. Other Strengths And Weaknesses: The study tackles an important problem, and its ideas could extend to many future works, such as Video diffusion models. This makes it a promising research direction. Other Comments Or Suggestions: The authors should provide more qualitative results, and it would be better that they go to the first figure. Questions For Authors: 1. Why are qualitative results so limited in the main paper and appendix? Can you provide more uncurated samples? 2. Why were only UViT-based baselines used? Can you compare DCTdiff against more diverse models, even if their training settings differ? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for the reviews and constructive suggestions. **Q1: the main paper lacks qualitative figures, and the appendix only includes Figs. 9-11, missing key results for FFHQ 512** We will add more qualitative samples in the appendix, including the randomly drawn ones, and compare them with the baseline as well. Surely, the qualitative results on FFHQ 512x512 will be added to the paper, too. **Q2: Theorems 5.1 and 5.2 appear correct but do not significantly contribute to the main claims. Theorem 5.1: States that diffusion removes high-frequency components first and low-frequency components later. This is well known (Patashnik et al., ICCV 2023) and not specific to DCTdiff. Theorem 5.2: Reformulates existing DCT-based upsampling methods in a mathematical form.** Theorem 5.1 provides a theoretical foundation for understanding how noise affects different frequency components in the diffusion process. It formalizes the observation that high-frequency signals are more susceptible to noise perturbations due to the energy concentration properties of the DCT. This insight is crucial for justifying why DCTdiff exhibits a faster noise accumulation process than pixel diffusion, ultimately motivating the introduction of the SNR scaling method. Regarding Theorem 5.2, we present an alternative derivation that differs from the existing approach in the literature. We believe that DCT-based upsampling holds significant potential in deep learning applications. We consider moving Theorem 5.2 to the appendix while emphasizing its potential in the main paper. In summary, we will refine the writing in these two paragraphs by adding proper citations and explicitly highlighting the message we want to deliver in this paper. **Q3: The metrics and baselines are limited (only FID and UViT), which may not ensure a fair comparison. 
A broader evaluation with models trained on the same datasets (even if #Parameters, GFLOPs, and training steps differ) would improve the study. Additional metrics, such as Inception Score, could be included.** In addition to UViT, we have also provided the results on the baseline DiT (please refer to our paper). The reason for choosing these two base models is that they are well-known models and widely used in the community. Evaluating DCTdiff on more base models is valuable as long as the base model is easy to implement. For example, we have preliminarily implemented DCTdiff using the UNet architecture, and the results are promising (Table 11). We believe that DCTdiff is a general method to be applied in different diffusion networks. Table 11. FID of UNet-based DCTdiff on CIFAR-10 (NFE=100). | training steps | 200k | 300k | 400k | | --- | --- | --- | --- | | DCTdiff (UNet) | 5.06 | 4.88 | 4.48 | Regarding extra metrics, we have added IS, precision, recall, CMMD (as suggested by reviewer VaLT) to evaluate the models. The results are shown in Tables 3, 4, 5. Table 3. Comparison between UViT and DCTdiff on CIFAR-10 using DDIM sampler (NFE=100) | | FID ↓ | CMMD ↓ | IS ↑ | precision ↑ | recall ↑ | | --- | --- | --- | --- | --- | --- | | UViT | 5.05 | 0.052 | 7.08 | 0.668 | 0.589 | | DCTdiff | 4.25 | 0.043 | 7.70 | 0.660 | 0.606 | Table 4. Comparison between UViT and DCTdiff on FFHQ 128 using DPM-solver (NFE=100) | | FID ↓ | CMMD ↓ | IS ↑ | precision ↑ | recall ↑ | | --- | --- | --- | --- | --- | --- | | UViT | 9.18 | 0.610 | 3.54 | 0.648 | 0.485 | | DCTdiff | 6.50 | 0.470 | 3.67 | 0.668 | 0.512 | Table 5. Comparison between UViT and DCTdiff on AFHQ 512 using DPM-solver (NFE=100) | | FID ↓ | CMMD ↓ | IS ↑ | precision ↑ | recall ↑ | | --- | --- | --- | --- | --- | --- | | UViT (latent) | 10.86 | 0.373 | 11.00 | 0.547 | 0.496 | | DCTdiff | 8.76 | 0.335 | 11.00 | 0.632 | 0.496 | **Q4.
Latent diffusion models using VAEs typically produce high-quality images but introduce issues with encoder-decoder invertibility. Hong et al. propose a correction algorithm for this, suggesting that DCTdiff could bypass such issues entirely** Thanks for suggesting the case of encoder-decoder invertibility. We do agree that DCT is a flexible and invertible method of image representation, which differs from the neural network-based image tokenizers. We will include this discussion with paper [1] in the final version of our paper. [1] Hong, Seongmin, et al. "Gradient-free Decoder Inversion in Latent Diffusion Models." NeurIPS (2024). --- Rebuttal Comment 1.1: Comment: Thanks for the additional experiments and clarifications. For Q1, I think it's a strong point that your method can generate 512×512 samples without using a latent autoencoder. It’s a clear advantage, and I wish the qualitative samples for FFHQ 512 were shared via an anonymous link in the rebuttal. Still, I appreciate the added results. I would like to keep my score as is.
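The invertibility point made in this exchange (a DCT needs no learned decoder, unlike VAE tokenizers) can be checked in a few lines with SciPy. This is a generic sketch of ours, not code from the paper:

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))  # any H x W x C array stands in for an image

# Orthonormal 2-D DCT applied per channel, then its exact inverse.
coeffs = dctn(img, axes=(0, 1), norm="ortho")
recon = idctn(coeffs, axes=(0, 1), norm="ortho")

# Unlike a VAE encoder-decoder pair, the round trip is lossless
# up to floating-point precision -- no inversion correction needed.
assert np.allclose(img, recon)
```

A learned autoencoder such as SD-VAE only reconstructs approximately, which is exactly the gap that the cited decoder-inversion work has to correct for.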
Summary: In this work, the authors introduce a novel idea to model images in their frequency spaces with diffusion models. The authors show that they can use Diffusion Transformer architectures to model the frequencies of images in a smart way without changing the architecture. There are several new observations on how to achieve this task regarding the need for new scaling etc. A series of experiments shows that the method outperforms a pixel-based baseline approach, achieving a good level of generation quality. Claims And Evidence: The submission introduces a new method that brilliantly leverages several observations from compression. These combinations of known tricks from JPEG with generative modelling are non-trivial and compelling: - The observation that DCT blocks correspond to VIT patches is clever and insightful - The zigzag flattening (originally introduced in JPEG) together with the reconstruction FID metric that is used for estimating the best trade-off between generation quality and compression rate (speed) introduces great controllability of those two crucial elements in generative modeling with diffusion models. Methods And Evaluation Criteria: The benchmarks used for the evaluation are sufficient in size; the authors even include some high-resolution datasets. Nevertheless, the method is only compared with the baseline model with the same architecture run in the pixel space. This is acceptable given that the contribution is limited solely to the change of modeling space, but it might be seen as a limitation of the evaluation. Theoretical Claims: There are 3 Theorems with "Sketch of Proofs" in the submission, while there are detailed proofs in the appendix. I did not check the correctness of all proofs, but I am unsure if Theorem 5.1, which discusses the connection between pixel-based diffusion and autoregressive modeling, is really necessary for this work.
I see how it might be a bit relevant to the proposed approach, but it does not provide theoretical grounds for the proposed method. Experimental Designs Or Analyses: - The evaluation of upsampling is limited to a comparison with baseline pixel upsampling. This is fine as a proof of concept, but please note that novel diffusion architectures (e.g. DeepFloyd IF) use more advanced approaches also employing several diffusion steps. - The final evaluation is a comparison to the baseline UViT architecture on a number of different datasets including some high-resolution ones. In those experiments DCTdiff consistently outperforms pixel-space UViT. However, the observed results are far from state-of-the-art (e.g. FID on FFHQ256 2.19 for StyleGAN from 2022 vs 5.08 from this work). Supplementary Material: I reviewed the Ablation Studies presented in Appendix B, which highlight the tradeoff between the final performance and training speed in DCTdiff when switching different hyperparameters. No code is provided with the submission, even though the abstract states that it is. Relation To Broader Scientific Literature: This submission discusses the broader scientific literature well. Essential References Not Discussed: I think the one important reference missing is [1], where the authors combine a hierarchical VAE with a diffusion model by using a Discrete Cosine Transform (DCT) to create low-dimensional pseudoinputs, which are then modeled via a diffusion-based prior to efficiently approximate the aggregated posterior and enhance latent space utilization. [1] Kuzina, Anna, and Jakub M. Tomczak. "Hierarchical VAE with a Diffusion-based VampPrior." Transactions on Machine Learning Research. Other Strengths And Weaknesses: Strengths: - The proposed method is novel, sound, and makes a lot of sense. - This work introduces several smart observations from frequency space analysis and compression methods into generative modelling.
Weaknesses: - The evaluation of the methods is limited to a comparison with the baseline approach in the pixel space. Other Comments Or Suggestions: Small suggestions: - I don't know how it was defined, but the bold math letters (e.g. x'_y and so on in line 121 right) are hard to read. Questions For Authors: - The scaling method introduced in the work is a challenging task. The proposed Entropy-Consistent Scaling is reasonable, but have you considered using a different terminal distribution for the diffusion model instead of a Gaussian? Maybe something like a Pareto distribution would be more suitable for the unusual distribution defined by DCT? Such an approach might also help for the SNR Scaling. Code Of Conduct: Affirmed. Overall Recommendation: 4
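For readers unfamiliar with the zigzag flattening praised in this review: it orders each block's DCT coefficients along anti-diagonals so that low frequencies come first, which is what makes truncating trailing coefficients a controllable quality/speed knob. A minimal sketch of ours (the function names are illustrative, not from the paper):

```python
import numpy as np

def zigzag_order(n=8):
    """JPEG-style zig-zag scan order for an n x n block, as (row, col) pairs.

    Cells are sorted by anti-diagonal (r + c); within a diagonal the
    traversal direction alternates, exactly as in the JPEG standard.
    """
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def zigzag_flatten(block):
    """Flatten a square block so low-frequency coefficients come first."""
    return np.array([block[r, c] for r, c in zigzag_order(block.shape[0])])

# The DC coefficient (0, 0) leads, followed by progressively higher frequencies.
print(zigzag_order(8)[:7])
# -> [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2), (0, 3)]
```

Dropping the tail of this sequence discards only high frequencies, which is the generation-quality vs. compression-rate trade-off the review highlights.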
Rebuttal 1: Rebuttal: We sincerely appreciate the insightful suggestions and comments provided by the reviewer. **Q1: The benchmarks used for the evaluation are sufficient in size, authors even include some high-resolution datasets. Nevertheless the method is only compared with the baseline model with the same architecture run in the pixel space** In addition to the comparison with pixel-based diffusion models, we also compared DCTdiff with the latent diffusion model UViT which utilizes the VAE of StableDiffusion for image compression. Please refer to Table 3 of our paper for the details. **Q2: Theorem 5.1 might be a bit relevant to the proposed approach, but it does not provide theoretical grounds for the proposed method.** Theorem 5.1 provides a theoretical foundation for understanding how noise affects different frequency components in the diffusion process. It formalizes the observation that high-frequency signals are more susceptible to noise perturbations due to the energy concentration properties of the DCT. This insight is crucial for justifying why DCTdiff exhibits a faster noise accumulation process than pixel diffusion, ultimately motivating the introduction of the SNR scaling method. We will improve the writing of Section 5.3 to highlight the connection. **Q3: The evaluation is a comparison to the baseline UViT on a number of different datasets including some high-resolution ones. In those experiments DCTdiff consistently outperforms pixel-space UViT. However, the observed results are far from state-of-the-art (e.g. FID on FFHQ256 2.19 for StyleGAN from 2022 vs 5.08 from this work).** Thanks for the interesting observation. GANs have been a strong class of models that generate high fidelity images (often measured by FID) (more evaluations can be found in https://paperswithcode.com/sota/image-generation-on-ffhq-256-x-256) yet do not produce diverse images. 
The main advantages of diffusion models are that they are stable to train and have higher diversity (often measured by recall) than GANs (see papers [1][2]). Our DCTdiff actually achieves better FID than many other diffusion-based models, e.g. [3] and [4]. [1] Dhariwal, Prafulla, and Alexander Nichol. "Diffusion models beat gans on image synthesis." NIPS. 2021. [2] Boutin, Victor, et al. "Diffusion models as artists: are we closing the gap between humans and machines?." ICML 2023. [3] Rombach, Robin, et al. "High-resolution image synthesis with latent diffusion models." *CVPR*. 2022. [4] Kim, Dongjun, et al. "Soft Truncation: A Universal Training Technique of Score-based Diffusion Model for High Precision Score Estimation." ICML. 2022. **Q4 I think the one important reference missing is [1] where authors combine a hierarchical VAE with a diffusion model by using a Discrete Cosine Transform (DCT) to create low-dimensional pseudoinputs, which are then modeled via a diffusion-based prior to efficiently approximate the aggregated posterior and enhance latent space utilization** Beyond explicit signal compression, it is inspiring to see that DCT can be treated as a low-dimensional latent representation and applied to the VampPrior framework to obtain a flexible prior distribution modeled by diffusion. We will add the discussion of [1] to our literature review. [1] Kuzina, Anna, and Jakub M. Tomczak. "Hierarchical VAE with a Diffusion-based VampPrior." Transactions on Machine Learning Research. **Q5: Have you considered using different terminal distribution for the diffusion model instead of gaussian? Maybe something like Pareto distribution would be more suitable for the unusual distribution defined by DCT? Such an approach might also help for the SNR Scaling** We sincerely appreciate the reviewer providing this insightful suggestion. In our DCTdiff work, we follow the Gaussian prior of diffusion to ensure a fair comparison.
But we do think that the Pareto distribution, which exhibits a power law (the same as the image spectral distribution), is very likely to be a great solution for DCT distribution modeling. Moreover, the Laplace distribution is also worth exploring for DCT modeling. We will investigate this topic in our next work. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. Regarding Q3, to clear up the misunderstanding, I am familiar with the limitations of GANs, and by no means do I want to advocate in their favor. What I wanted to highlight is that this work is far from recent state-of-the-art results for image generation. In fact, it is even far from such results from 3 years ago, as highlighted by the FID comparison on the FFHQ dataset. I don't have any further questions, and I intend to keep my initial evaluation score.
Summary: The paper introduces an end-to-end diffusion modeling framework in the frequency space, instead of in the original pixel space. It shows that the DCT (discrete cosine transform) space could be an effective and near-lossless compression for diffusion modeling, mitigating pixel redundancy and enabling efficient scaling to 512×512 image generation without requiring auxiliary VAEs. The authors propose a pipeline for token preparation for Diffusion Transformers, accompanied by adjustments to hyperparameters such as noise schedules and loss re-weighting. Experimental results on UViT and DiT architectures show that DCT-based diffusion models outperform pixel-space and latent-space counterparts on FID scores and training efficiency. Claims And Evidence: - "*suggesting its potential for both discriminative and generative tasks*" (Lines 67-68). - The paper only evaluates some generative tasks (unconditional or class-conditional image synthesis), while the capability of DCT space on other generative (e.g. image editing, inpainting and restoration) and discriminative (e.g. image classification and segmentation) tasks is still unknown. - Recent studies show that intermediate representations within diffusion networks are effective for discriminative tasks through generative pre-training, observed in both pixel-space (DDAE, arXiv:2303.09769; DDPM-seg, arXiv:2112.03126) and latent-space models (l-DAE, arXiv:2401.14404; REPA, arXiv:2410.06940). Does DCT-based modeling retain similar properties? - "*outperforms the pixel-based and latent diffusion models regarding generation quality and training speed*" (Lines 62-64). - The comparison is restricted to latent diffusion using SD-VAE, an outdated compression model. Many modern image tokenizers (VA-VAE, arXiv:2501.01423; MAETok, arXiv:2502.03444) can also achieve near-lossless reconstruction (rFID < 0.5) and provide compact, diffusion-optimized spaces. 
Can the DCT space outperform these modern tokenizer specifically designed for "*more diffusible*" latent representations? - I understand that complete training on large datasets like ImageNet256x256 is too resource-intensive. However, as mentioned above, recent tokenizer advancements have reduced costs (e.g. 10-20x faster convergence). Therefore, please, as much as you can afford (e.g. even with limited training in a few hundred epochs), provide some results on standard benchmarks like ImageNet-256x256. - The baselines also appear under-optimized. For example, I can achieve an FID of 4.5 on unconditional CIFAR-10 with a 100-NFE DDIM sampler, whereas Table 2 reports FIDs of 5.29-6.23. This suggests the pixel- and latent-space models may not be fully converged, undermining the claimed outperformance. Methods And Evaluation Criteria: The proposed method is technically sound and novel, but its applicability appears limited to ViT-based backbones (e.g. UViT, DiT). - UNet-based architectures remain competitive, particularly for unconditional and class-conditional image generation on datasets like CIFAR-10, FFHQ, and ImageNet-64 (EDM2, arXiv:2312.02696). Can the proposed DCT-based diffusion be adapted to UNet backbones and also outperform pixel-space counterparts? - ViT-based architecture, on the other hand, is more preferable when involving text modalities (e.g. text-to-image synthesis) and scaling to larger network capacities. However, the provided evaluations align more closely with UNet strengths rather than ViT advantages. Theoretical Claims: I do not find any obvious errors in the theoretical analysis. Experimental Designs Or Analyses: Please refer to the second discussion in "Claims And Evidence" section. Supplementary Material: I have reviewed the supplementary material, mostly about the experimental details, ablation studies, and qualitative results. 
Relation To Broader Scientific Literature: N/A Essential References Not Discussed: The essential related works are mostly discussed. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: Typo: "UViT" in Table 5 should be "DiT". Questions For Authors: After reading the paper carefully, I acknowledge that the DCT-based space is indeed a preferable replacement for the raw pixel-space (if the claimed "*potential for both discriminative and generative tasks*" is correct and sound). However, as mentioned above, I doubt whether it is better than modern VAE/AE-based image tokenizers. Those tokenizers compress images with a higher compression ratio, offering a more compact and "*diffusible*" latent space for diffusion modeling, and may optionally preserve semantic information for unified generation-and-understanding tasks, particularly for multi-modal models. Btw, I think training an auxiliary tokenizer on the DCT space (instead of pixel-space) also sounds reasonable. **2025/04/04: [Replying to Reply Rebuttal Comment by Authors]**: Thank you for the additional discussion. Regarding the performance and efficiency comparison, I acknowledge the authors' point that evaluating DCT-space designs against latent-space sampling involves complexities influenced by factors such as model size, patch size, and NFEs. The revised Tables 2a & 2b offer improved clarity. For the final version, I suggest incorporating more direct visualizations, similar to the performance trade-off curves (e.g., Fig. 1, 8, 9 presented in EDM2 [arXiv:2312.02696]), to further enhance the presentation. I also accept the explanation that the implementation is particularly well-suited for Transformer-based models. The current UNet-based comparison, while noted as incomplete, is acceptable and does not critically impact my overall assessment. Regarding the UViT baselines, I am glad the authors confirm that the reimplementation is strong and valid.
The patch_size in UViT on 512x512 is indeed 4 (instead of the common practice ps=2 in modern architectures). I apologize for the false alarm. Based on the clarifications provided, I am raising my rating to Weak Accept (actually not very crucial since other reviews are all positive). I strongly recommend that the authors dedicate effort to reorganizing the results presentation and refining the training details in the final version for improved clarity. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for the detailed and helpful reviews. **Q1: The paper only evaluates some generative tasks, while the capability of DCT space on other generative and discriminative tasks is still unknown** We will first rephrase this sentence to avoid any misunderstanding. Our work primarily aims to explore the feasibility of performing diffusion in the DCT space for image generation, with a particular focus on generative capabilities. While we acknowledge the potential discriminative properties of DCT-based modeling, we believe these properties are primarily derived from the generative pre-training mechanism rather than the information format itself. This could also explain why both pixel and latent diffusion models exhibit similar characteristics. We leave the investigation of the discriminative, editing, inpainting and restoration ability of DCT-based diffusion models as future work. **Q2: The comparison is restricted to latent diffusion using SD-VAE, an outdated compression model, Can the DCT space outperform these modern tokenizer?** First, we will rewrite this claim by clarifying that the latent diffusion is SD-VAE based, making the scope of our claim clearer. Second, Our goal is not to beat these emerging tokenizers with dedicated designs, we explore how far the image diffusion modeling can go without a pretrained tokenizer. Moreover, our work does not conflict with image tokenizers, e.g. we totally agree that exploring DCT tokenizer is promising, and the technique of these new tokenizers can also be used to develop DCT tokenizer. **Q3: Can you provide some results on ImageNet-256?** Given the limited GPUs, we have prioritized our experiments on the other tasks: scaling up, extra evaluations, and explorations of UNet-based DCTdiff. **Q4: The baseline on CIFAR-10 appears under-optimized** For fair comparison, our implementation of UViT and DCTdiff on CIFAR-10 used patch size (ps=4) (different from ps=2 in original UViT paper). 
If we apply ps=2 on DCTdiff, the block size will be 1, yielding only the DC frequency and preventing frequency reweighting. To address your concern, we have implemented another baseline of UViT using ps=2 (256 tokens). Correspondingly, we increase the model size of DCTdiff to ensure the same computational complexity for a single network forward pass. Results in Tables 9 and 10 show that DCTdiff is significantly better than UViT in terms of FID. We acknowledge that the FID of 5.05 on UViT is still higher than the 4.5 you tested (possibly due to differences in warmup steps, machines, software, etc.), but our code and all trained models will be released for reproducibility. Table 9. Extra CIFAR-10 benchmark, FID-50k using DDIM sampler | | 100 | 50 | 20 | 10 | | --- | --- | --- | --- | --- | | UViT (small, 256 tokens) | 5.05 | 6.24 | 17.83 | 73.05 | | DCTdiff (mid_deep, 64 tokens) | **4.25** | **4.54** | **5.96** | **11.17** | Table 10. Extra CIFAR-10 benchmark, FID-50k using DPM-solver | | 100 | 50 | 20 | 10 | | --- | --- | --- | --- | --- | | UViT (small, 256 tokens) | 4.82 | 4.85 | 4.92 | 10.72 | | DCTdiff (mid_deep, 64 tokens) | **4.40** | **4.43** | **4.56** | **8.82** | **Q5: Can the proposed DCT-based diffusion be adapted to UNet backbones and also outperform pixel-space counterparts?** It is an interesting question. Although we think that the Transformer has many advantages over UNet regarding the image DCT implementation: ease of (1) Cb Cr 2x subsampling, (2) elimination of high frequencies, and (3) frequency loss reweighting, we have investigated the possibility of UNet-based DCTdiff using the ADM codebase. Concretely, we remove the Cb Cr 2x subsampling and loss reweighting in the preliminary experiment, and just convert the RGB channels to YCbCr followed by a DCT transform; the resulting frequencies yield the input tensor with shape (32, 32, 3) on CIFAR-10. We trained the UNet-based DCTdiff for 400k steps; the FID is shown in Table 11, and the results are promising.
We believe further exploration of UNet-based DCTdiff will lead to better generation quality. Table 11. FID of UNet-based DCTdiff on CIFAR-10 (NFE=100). | training steps | 200k | 300k | 400k | | --- | --- | --- | --- | | DCTdiff (UNet) | 5.06 | 4.88 | 4.48 | **Q6: ViT-based architecture is more preferable when scaling to larger network capacities** We provide the scaling experiments below. Table 6. FID on CIFAR-10 using DDIM sampler, ps=4 | | NFE=100 | NFE=50 | NFE=20 | | --- | --- | --- | --- | | UViT (small) | 7.25 | 8.45 | 21.18 | | DCTdiff (small) | 6.51 | 6.62 | 7.87 | | | | | | | UViT (mid) | 6.23 | 7.88 | 20.48 | | DCTdiff (mid) | 5.02 | 5.21 | 6.81 | | | | | | | UViT (mid, deep) | 6.05 | 7.33 | 20.27 | | DCTdiff (mid, deep) | 4.25 | 4.54 | 5.96 | Table 7. FID on FFHQ 128 using DPM sampler | | NFE=100 | NFE=50 | NFE=20 | | --- | --- | --- | --- | | DCTdiff (small) | 6.50 | 6.55 | 7.72 | | DCTdiff (mid) | 5.13 | 5.20 | 6.19 | | DCTdiff (mid, deep) | 4.98 | 5.05 | 5.94 | --- Rebuttal Comment 1.1: Comment: Thank you for providing a detailed rebuttal and addressing many of the points raised in the initial reviews. I appreciate the effort, particularly the additional evaluations and the preliminary investigation into UNet-based DCTdiff. However, several key concerns regarding the experiments persist, which prevent me from fully endorsing the paper at this stage. - Table 2 indicates that under common sampling configurations (e.g., NFE=100-250, as often used for ADM and DiT), DCTdiff does not appear to offer a speed advantage compared to Latent UViT and can, in fact, be considerably slower. - While I am grateful for the UNet-based results in Table 11, the evaluation lacks a comparison against a standard UNet baseline. The reported FID=4.48 (at 400k) seems worse compared to common baselines (e.g., the official DDIM paper reported FID=4.16 with NFE=100, for a 35.7M DDPM Ho's UNet after 400k * 256 / 50000 = 2000 epochs). 
- Concerns regarding the under-optimization of the baselines used still linger, particularly in Tables 6 and 9. For example, based on common results, a standard UViT (patch_size=2, 44M #params) trained for 1200 epochs on CIFAR-10 should readily achieve an FID_50k around 4.5 using 100 DDIM steps. The reported FIDs appear significantly weaker than this.
- Alignment with common practices: There seem to be instances where experimental configurations for baselines might deviate from common practices. For example, the response to Reviewer VaLT (Q2) mentions using 256 tokens for the 512x512 UViT baseline. This implies a patch_size of 4 (512 / 8 downfactor / 4 patchsize = 16, 16x16 = 256 tokens), which represents a considerably lower downsampling rate than typically used in latent UViT/DiT (downfactor=8, patch_size=2). Ensuring fairness in comparisons is vital, aligning not only #params but also these choices (like patch sizes and downsampling rates) with standard practices. It is currently challenging for the reader to judge the fairness and significance of the reported gains.

In conclusion, while I appreciate the novelty of exploring diffusion models in the DCT space, the highlighted experimental limitations regarding sampling speed, UNet performance, reported baselines, and overall experimental fairness prevent me from raising my score. Therefore, I maintain my score of Weak Reject. However, I acknowledge the paper's interesting direction and the authors' constructive engagement. I would not strongly object to its acceptance.

---

Reply to Comment 1.1.1:

Comment: We are glad to hear that some of the reviewer's concerns have been addressed, and we thank you for recognizing the novelty of our work. To address the remaining concerns, we would like to add one more discussion, along with a kind reminder of **the factual errors** in the above comments.
**Q1: The inference time of DCTdiff is slower than latent UViT on the 512x512 benchmark at large NFE**

To again highlight our fairness, we reported the wall-clock time without considering sampling quality (and we fully stand by those results). However, when considering inference time at comparable generation quality, our DCTdiff demonstrates clear advantages: **latent UViT requires 20 mins to achieve FID 10.89, whereas DCTdiff achieves FID 8.04 in just 9 mins on FFHQ 512.**

Table 2a. Wall-clock inference time and FID on **AFHQ** 512.

| | NFE=100 | NFE=50 | NFE=20 | NFE=10 |
| --- | --- | --- | --- | --- |
| UViT (latent) | **20m 14s (FID 10.86)** | 13m 24s (FID 10.86) | 9m 18s (FID 11.94) | 7m 57s (FID 28.31) |
| DCTdiff | 47m 50s (FID 8.76) | 23m 53s (FID 8.87) | **9m 34s (FID 10.05)** | 4m 47s (FID 21.05) |

Table 2b. Wall-clock inference time and FID on **FFHQ** 512.

| | NFE=100 | NFE=50 | NFE=20 | NFE=10 |
| --- | --- | --- | --- | --- |
| UViT (latent) | **20m 14s (FID 10.89)** | 13m 24s (FID 10.94) | 9m 18s (FID 11.31) | 7m 57s (FID 23.61) |
| DCTdiff | 47m 50s (FID 7.28) | 23m 53s (FID 7.09) | **9m 34s (FID 8.04)** | 4m 47s (FID 19.67) |

**Q2: The UNet-based DCTdiff (initial trial) underperforms UNet pixel diffusion**

As requested by the reviewer, we conducted a preliminary exploration of UNet-based DCTdiff during the rebuttal period, with the aim of evaluating its feasibility. As mentioned, we did not implement (1) Cb Cr 2x subsampling or (2) frequency loss reweighting due to the inherent constraints posed by the fixed input shape of UNet and regular Conv kernels (see the table below). This also justifies our decision to explore Transformer-based models for DCTdiff, an approach acknowledged by reviewer 5jtj as clever and insightful. It is important to note that our initial experiment was intended solely to assess the viability of DCTdiff with UNet, rather than to achieve optimal performance.
Contrary to the reviewer's assessment, we believe that exploration in this direction remains promising, given dedicated designs for the input space and the application of dilated convolution kernels.

Implementation comparison of DCTdiff on Transformer and UNet.

| | Transformer | UNet |
| --- | --- | --- |
| Cb Cr 2x subsampling | easy | difficult |
| frequency loss reweighting | easy | difficult |
| elimination of high frequencies | easy | difficult |

**Q3 and Q4: The reviewer raises concerns about the optimization of the UViT baseline and questions the fairness of our experiments, noting that our reimplementation yields a slightly worse FID on CIFAR-10 compared to his/her reported 4.5.**

- To ensure a fair comparison, we evaluated UViT and our DCTdiff using the same parameter settings as recommended in the UViT paper. We found that using batch_sz=256 yields a much better FID on the CelebA 64 dataset than the original UViT using batch_sz=128 (**our reimplementation achieved FID 1.57, much better than the 2.87 reported in the UViT paper**). **This demonstrates the high quality of our baseline implementation.** Given this finding, we use batch_sz=256 in all benchmarks except for ImageNet and the 512x512 datasets (see Table 6 of our paper). We have confirmed that the difference on CIFAR-10 (5.05 vs. 4.5) is caused by the batch size setting (128 vs. 256), but we used the same batch_sz=256 for both UViT and DCTdiff for a fair comparison. Once again, we emphasize that our code and checkpoints will be fully released for reproducibility. We would appreciate it if the reviewer could revisit all our experimental settings (detailed in Table 6 of our paper) regarding the fair comparison with UViT.
- Notably, on CIFAR-10, our DCTdiff (FID 4.25) still outperforms the baseline (FID 4.5) reported by the reviewer.
- The official UViT implementation for 512x512 datasets indeed uses 256 tokens (downfactor=8, patch_size=4), **not the configuration mentioned by the reviewer** (downfactor=8, patch_size=2).
Please check https://github.com/baofff/U-ViT/blob/main/configs/imagenet512_uvit_large.py
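As a sanity check on the Q1 speed/quality tradeoff, the linear per-image cost model quoted in these rebuttals for the 512x512 benchmarks (34 GFLOPs per step plus a fixed 1240-GFLOP VAE decoder for latent UViT, versus 133 GFLOPs per step for DCTdiff) can be computed directly; this is a minimal sketch, with function names chosen here for illustration:

```python
# Per-image inference cost model quoted in the rebuttals (512x512 benchmarks):
#   latent UViT: 34 GFLOPs per diffusion step + 1240 GFLOPs for the VAE decoder
#   DCTdiff:     133 GFLOPs per diffusion step, no decoder
def uvit_latent_gflops(nfe):
    return 34 * nfe + 1240

def dctdiff_gflops(nfe):
    return 133 * nfe

for nfe in (10, 20, 50, 100):
    print(nfe, uvit_latent_gflops(nfe), dctdiff_gflops(nfe))
# Reproduces the bracketed GFLOPs in Table 2; DCTdiff is cheaper only at NFE=10.

# Crossover NFE where the two costs are equal: 34*n + 1240 = 133*n
crossover = 1240 / (133 - 34)
print(round(crossover, 1))  # 12.5
```

The crossover at roughly 12-13 steps matches the tables above: DCTdiff is faster at NFE=10 but slower at NFE=20 and beyond.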
Summary: The paper proposes DCTdiff, which models images in the discrete cosine transform (DCT) space. The paper discusses the design space of DCTdiff and reveals interesting properties of image modeling in the DCT space, such as the spectral autoregressive nature of pixel diffusion models.

Claims And Evidence: The paper claims that "DCT Upsampling Outperforms Pixel Upsampling". However, it is only compared against interpolation methods and not any super-resolution methods. Table 4 shows that the training cost for DCTdiff is lower than UViT in terms of GFLOPs. However, the actual wall-clock time will differ due to GPU optimizations. What is the inference time comparison for the two models? The gap in performance between the two methods diminishes as the NFE increases, especially for class-conditional generation (Table 2). So the advantage of the proposed method is not clear when scaling test-time compute.

Methods And Evaluation Criteria: The proposed method is quantitatively evaluated only using the FID metric. How about Inception Score and Precision/Recall [1] scores? Better metrics have been proposed in recent years, such as CMMD [2]. How about the performance comparisons using CMMD?

[1] Karras, T., Laine, S., and Aila, T. A style-based generator architecture for generative adversarial networks. CVPR, 2019.

[2] Sadeep Jayasumana, Srikumar Ramalingam, Andreas Veit, Daniel Glasner, Ayan Chakrabarti, Sanjiv Kumar. Rethinking FID: Towards a Better Evaluation Metric for Image Generation. CVPR, 2024.

Theoretical Claims: Yes, no major concerns.

Experimental Designs Or Analyses: Missing experiments on the scalability of the proposed method. Does the performance improve with increasing model size?

Supplementary Material: Yes, fully.

Relation To Broader Scientific Literature: The key contribution of the method is to use the DCT space for image modeling instead of the pixel space.
However, prior work has already shown that the DCT space is effective for image generation, so the advantage of the proposed method over prior work is not clear.

Essential References Not Discussed: The results do not compare against the prior work DCTransformer [1], which also uses the DCT space for image modeling. How is the proposed method better? A detailed comparison of the similarities and differences with [1] would elucidate the advantages of the proposed DCTdiff.

[1] Charlie Nash, Jacob Menick, Sander Dieleman, and Peter W Battaglia. Generating images with sparse representations. ICML, 2021.

Other Strengths And Weaknesses: The paper does not discuss the limitations of the proposed method.

Other Comments Or Suggestions: Typo - Table 5 should be DiT.

Questions For Authors: I would like to see answers specifically to the following questions in the rebuttal.

1. How does the proposed method compare to the closely related DCTransformer? Is training a DiT-based model with the dense DCT image from DCTransformer better?
2. How about performance comparisons using Inception Score, Precision/Recall scores, and the CMMD metric?
3. What is the wall-clock inference time comparison?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We thank the reviewer for the valuable comments, which help improve the paper.

**Q1: The paper claims that "DCT Upsampling Outperforms Pixel Upsampling". However, it is only compared against interpolation methods**

The upsampling we mean in the paper is indeed interpolation. We will make this statement clearer.

**Q2: What is the inference time comparison?**

Pixel diffusion and DCTdiff share the same GFLOPs and inference time (Table 1).

Table 1. Wall-clock time (NFE=100, 10k samples, one A100)

| | CIFAR-10 | CelebA 64 | FFHQ 128 |
| --- | --- | --- | --- |
| UViT | 2m 54s | 5m 06s | 5m 14s |
| DCTdiff | 2m 58s | 5m 11s | 5m 12s |

Comparing latent diffusion and DCTdiff, the inference time differs because

- GFLOPs of UViT (latent) = 34*NFE (diffusion) + 1240 (decoder)
- GFLOPs of DCTdiff = 133*NFE (diffusion)

The decoder of the VAE is expensive, but DCTdiff has larger complexity in diffusion, as DCTdiff has 1024 tokens while UViT has 256 tokens. Table 2 shows that DCTdiff is faster at low NFE but slower at high NFE.

Table 2. Wall-clock inference time on AFHQ 512 (10k samples, A100 GPU). GFLOPs are appended in brackets.

| | NFE=100 | NFE=50 | NFE=20 | NFE=10 |
| --- | --- | --- | --- | --- |
| UViT (latent) | 20m 14s (4640) | 13m 24s (2940) | 9m 18s (1920) | 7m 57s (1580) |
| DCTdiff | 47m 50s (13300) | 23m 53s (6650) | 9m 34s (2660) | 4m 47s (1330) |

**Q3: The performance gap diminishes as NFE increases**

In most cases, our DCTdiff has significantly lower FID (20%~30%) than the base model. We believe the small improvement on ImageNet 64 is due to the small model capacity, since we have limited GPUs for scaling on ImageNet. However, we do provide scaling experiments on other datasets (Tables 6 and 7).

**Q4: How about evaluation using IS, Precision/Recall and CMMD?**

Table 3.
Comparison between UViT and DCTdiff on CIFAR-10 using DDIM sampler (NFE=100)

| | FID ↓ | CMMD ↓ | IS ↑ | precision ↑ | recall ↑ |
| --- | --- | --- | --- | --- | --- |
| UViT | 5.05 | 0.052 | 7.08 | 0.668 | 0.589 |
| DCTdiff | 4.25 | 0.043 | 7.70 | 0.660 | 0.606 |

Table 4. Comparison between UViT and DCTdiff on FFHQ 128 using DPM-solver (NFE=100)

| | FID ↓ | CMMD ↓ | IS ↑ | precision ↑ | recall ↑ |
| --- | --- | --- | --- | --- | --- |
| UViT | 9.18 | 0.610 | 3.54 | 0.648 | 0.485 |
| DCTdiff | 6.50 | 0.470 | 3.67 | 0.668 | 0.512 |

Table 5. Comparison between UViT and DCTdiff on AFHQ 512 using DPM-solver (NFE=100)

| | FID ↓ | CMMD ↓ | IS ↑ | precision ↑ | recall ↑ |
| --- | --- | --- | --- | --- | --- |
| UViT (latent) | 10.86 | 0.373 | 11.00 | 0.547 | 0.496 |
| DCTdiff | 8.76 | 0.335 | 11.00 | 0.632 | 0.496 |

**Q5: Missing experiments on model scalability**

We performed scaling experiments on CIFAR-10 and FFHQ 128 during the short rebuttal period.

Table 6. FID-50k on CIFAR-10 using DDIM sampler, patch_sz=4

| | NFE=100 | NFE=50 | NFE=20 |
| --- | --- | --- | --- |
| UViT (small) | 7.25 | 8.45 | 21.18 |
| DCTdiff (small) | 6.51 | 6.62 | 7.87 |
| | | | |
| UViT (mid) | 6.23 | 7.88 | 20.48 |
| DCTdiff (mid) | 5.02 | 5.21 | 6.81 |
| | | | |
| UViT (mid, deep) | 6.05 | 7.33 | 20.27 |
| DCTdiff (mid, deep) | 4.25 | 4.54 | 5.96 |

Table 7. FID-50k on FFHQ 128 using DPM sampler

| | NFE=100 | NFE=50 | NFE=20 |
| --- | --- | --- | --- |
| DCTdiff (small) | 6.50 | 6.55 | 7.72 |
| DCTdiff (mid) | 5.13 | 5.20 | 6.19 |
| DCTdiff (mid, deep) | 4.98 | 5.05 | 5.94 |

**Q6: Compare DCTransformer with DCTdiff**

DCTdiff differs from DCTransformer in several key aspects: probabilistic modeling, image representation, network, and image tokenization. We summarize their differences in Table 8. Overall, DCTdiff offers a straightforward approach to generative modeling of image frequencies. Performance-wise, DCTdiff achieves FID 7.28 on FFHQ while DCTransformer has FID 13.06.
The only overlap between DCTransformer and DCTdiff is the use of the DCT transform and the YCbCr color transform. But these well-known JPEG compression techniques cannot be credited to DCTransformer.

Table 8. Differences between DCTransformer and DCTdiff

| | DCTransformer | DCTdiff |
| --- | --- | --- |
| Probability modeling | Autoregression (each conditional distribution is further factorized into 3 distributions, see their Eq(2)) | Diffusion |
| Image representation | tuples (channel, position, value) | Y, Cb, Cr |
| Network | a Transformer with 1 encoder and 3 decoders used for predicting channel, position, value, respectively | a single diffusion Transformer |
| use quantization | yes (has information loss) | no |
| DCT block size | fixed (8/16) | flexible for different resolution generation |

**Q7: What is the limitation of this paper?**

The limitation is that we did not explore other generative applications (e.g., image inpainting) or discriminative tasks. Also, a frequency-oriented Transformer architecture and super-resolution image generation were not covered in this paper. We think these are promising directions for future work.

---

Rebuttal Comment 1.1:

Comment: Thanks to the authors for the rebuttal. It addressed some of my concerns.

**DCT Upsampling**: How about a comparison to super-resolution methods in terms of quality and complexity? Why was it not compared?

**Scaling**: What is the complexity of the small, mid, and mid-deep models? It is difficult to compare the results without knowing the complexity of the three models.

**Inference time**: Table 2 shows that DCTdiff has slower inference time due to the larger FLOPs resulting from the higher resolution (more tokens). But it has better quality in terms of FID, as mentioned in the reply to reviewer vBsQ. However, a fairer comparison is using a larger model for UViT with the same NFE and a similar inference time. Is DCTdiff better than using a larger UViT model?
**Comparison with DCTransformer**: Is the quantitative comparison on FFHQ between two models of the same complexity?

**Limitations**: The authors mentioned the limitations of the paper in terms of experiments, but I was referring to the limitations of the method. Is using the DCT transform always better for image generation? Are there any tasks where DCT-based diffusion is not better than pixel-based diffusion?

---

Reply to Comment 1.1.1:

Comment:

**Q1: DCT Upsampling: compare it with super-resolution methods**

Initially, we indeed thought that DCT upsampling could be applied in cascade diffusion for super-resolution generation. For example, ADM generates a 512x512 image by first generating a 128x128 image, then using pixel interpolation to get a coarse 512x512 image, which is finally refined by a super-resolution model. We tried to replace the pixel interpolation with DCT upsampling in this approach. However, the checkpoint loading of the super-resolution model (released by ADM) was problematic, which prevented us from further implementation. We will continue trying to solve this issue and add the super-resolution results in the final version of our paper.

**Q2: What is the complexity of the small, mid and mid-deep models?**

Due to the 5000-character limitation, we had to leave out these details. They are shown in Table 6a.

Table 6a. Model parameters and training GFLOPs on CIFAR-10. UViT and DCTdiff share the same settings and GFLOPs.

| | hidden_dim | depth | #params | GFLOPs |
| --- | --- | --- | --- | --- |
| UViT (small) | 512 | 12 | 44M | 2.87 |
| UViT (mid) | 768 | 16 | 130M | 8.45 |
| UViT (mid-deep) | 768 | 20 | 161M | 10.44 |

**Q3: A fairer comparison is using a larger model for UViT with the same NFE and a similar inference time**

Thanks for the insightful suggestion. From Table 2, we first see that inference time and GFLOPs are strongly correlated.
To answer your question, we can compare the FID of UViT and DCTdiff under the same GFLOPs and NFE. Although training a larger UViT on 512x512 is time-consuming, we can still do the comparison on CIFAR-10 (Table 6a). The results show that DCTdiff achieves better FID than UViT under the same NFE and GFLOPs.

Table 2. Inference time on AFHQ 512 (10k samples). GFLOPs are shown in brackets.

| | NFE=100 | NFE=50 | NFE=20 | NFE=10 |
| --- | --- | --- | --- | --- |
| UViT (latent) | 20m 14s (4640) | 13m 24s (2940) | 9m 18s (1920) | 7m 57s (1580) |
| DCTdiff | 47m 50s (13300) | 23m 53s (6650) | 9m 34s (2660) | 4m 47s (1330) |

Table 6a. FID on CIFAR-10 using DDIM sampler. Inference GFLOPs are shown in brackets.

| | NFE=100 | NFE=50 | NFE=20 |
| --- | --- | --- | --- |
| UViT (small) | 7.25 (287) | 8.45 (143) | 21.18 (57) |
| DCTdiff (small) | 6.51 (287) | 6.62 (143) | 7.87 (57) |
| | | | |
| UViT (mid) | 6.23 (845) | 7.88 (422) | 20.48 (169) |
| DCTdiff (mid) | 5.02 (845) | 5.21 (422) | 6.81 (169) |
| | | | |
| UViT (mid, deep) | 6.05 (1044) | 7.33 (522) | 20.27 (209) |
| DCTdiff (mid, deep) | 4.25 (1044) | 4.54 (522) | 5.96 (209) |

**Q4: Comparison with DCTransformer**

From Table 3 of the DCTransformer paper, we know that DCTransformer used a much larger network (473M parameters) than our DCTdiff (130M) on the FFHQ benchmark. This is not surprising to us because DCTransformer applies 1 encoder + 3 decoders, while our DCTdiff uses a single decoder-only architecture.

**Q5: Method limitations**

The only case where DCTdiff did not outperform pixel diffusion (UViT) is CelebA 64, as we have shown and discussed in our paper. Regarding the limitations of our method, our most important finding is that image DCT modeling has the challenge of dealing with the 'power-law' property (low frequencies have much larger magnitudes than high frequencies).
This power-law property pushes us to propose entropy-consistency scaling and SNR scaling (to deal with the low energy of high frequencies) before the Gaussian perturbation. By contrast, pixel diffusion does not require these two extra operations. However, as mentioned by reviewer 5jtj, replacing the Gaussian prior with a Pareto prior might be a better choice for DCT diffusion modeling, because image frequencies and the Pareto distribution share a similar 'power-law' shape. Applying a Pareto prior potentially enables us to remove the entropy-consistency scaling and SNR scaling operations and likely yields better generation quality.

As highlighted in the title of our paper, we aim to deliver the message to researchers that image modeling in the frequency domain is truly promising. While some recent works [1] [2] have noticed the potential of the spectrum, spectral image modeling still lacks sufficient attention compared to traditional pixel modeling. We plan to continue investigating image DCT modeling and hopefully bring more insights to the community.

[1] "Frequency Autoregressive Image Generation with Continuous Tokens." arXiv:2503.05305.

[2] "NFIG: Autoregressive Image Generation with Next-Frequency Prediction." arXiv:2503.07076.

**Q6: Results on ImageNet 64 by scaling from small to mid**

In our initial rebuttal, we said that "we believe the small improvement on ImageNet 64 is due to the small model capacity." We have now verified this hypothesis by scaling the model from small to mid. The resulting FID comparison is 4.69 (DCTdiff, mid) vs. 5.85 (UViT, mid). Full results will be added to our paper. We hope our responses have addressed your concerns and doubts.
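The power-law property discussed above can be illustrated numerically. The following is a small synthetic sketch (our construction, using a Brownian-sheet surrogate for a natural image rather than real data, and an orthonormal DCT-II built from scratch): low-frequency DCT coefficients of a smooth, long-range-correlated signal dwarf the high-frequency ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (row j corresponds to frequency j)
    i = np.arange(n)
    M = np.cos(np.pi * (2 * i[None, :] + 1) * i[:, None] / (2 * n))
    M *= np.sqrt(2.0 / n)
    M[0] /= np.sqrt(2.0)
    return M

# Brownian-sheet surrogate for a natural image: smooth, long-range correlated
img = np.cumsum(np.cumsum(rng.standard_normal((64, 64)), axis=0), axis=1)
img /= np.abs(img).max()

M = dct_matrix(64)
F = np.abs(M @ img @ M.T)   # 2D DCT coefficient magnitudes

low = F[:8, :8].mean()      # lowest 8x8 frequency band
high = F[32:, 32:].mean()   # highest-frequency quadrant
print(low / high)           # ratio far above 1: low frequencies dominate
```

This magnitude gap is exactly what motivates rescaling (or a heavy-tailed prior) before adding Gaussian noise of uniform scale across frequencies.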
Unlocking the Capabilities of Large Vision-Language Models for Generalizable and Explainable Deepfake Detection
Accept (poster)
Summary: This paper proposes a novel framework that leverages Vision-Language Models (VLMs) for deepfake detection, addressing their current limitations in forensic analysis. The core innovation is a three-component approach: (1) a knowledge-guided forgery adaptation module that aligns VLM semantic space with forensic features through contrastive learning, (2) a multi-modal prompt tuning framework that optimizes visual-textual embeddings for improved localization and explainability, and (3) an iterative refinement strategy enabling evidence-based reasoning through multi-turn dialog. The architecture integrates a Knowledge-guided Forgery Detector with a Large Language Model, allowing the system to not only detect and localize deepfakes across diverse manipulation types but also provide natural language explanations about the detected forgeries, significantly advancing both generalizability and interpretability in deepfake detection. Claims And Evidence: The paper's claims are partially supported by evidence, with several limitations: 1. The multi-turn dialogue capability claim is prominently featured in the abstract and conclusion but receives minimal validation in the main paper. Figure 5 shows only single-turn examples, and the paper notes that "Additional multi-turn dialogue examples are provided in the supplementary material." However, no appendix or supplementary material is found. Without these examples, this key claimed contribution lacks sufficient evidence. 2. The "knowledge-guided forgery adaptation module" lacks clear documentation of what specific external manipulation knowledge is incorporated. While Section 3.1 describes the mechanism, it doesn't detail the nature of the textual descriptions ($D _ {real}$ and $D _ {fake}$) that serve as the knowledge source. 3. 
The forgery localization capability is demonstrated qualitatively through GradCAM visualizations in Figure 4, but no quantitative evaluation of localization accuracy is provided, making it difficult to objectively assess this capability. 4. The ablation study in Table 8 evaluates the Reference-based Optimization Process but doesn't fully isolate the contribution of each of the three claimed novel components, particularly the multi-modal prompt tuning framework's specific impact. 5. The data simulation process described in Section 3.3 is used for training, but there's insufficient evaluation of how well this simulated data represents real-world deepfakes, which could affect generalization claims. Methods And Evaluation Criteria: The proposed methods generally align with the deepfake detection problem, but several evaluation aspects raise concerns: 1. For evaluation datasets, the authors appropriately use standard benchmarks (FF++, CDF2, DFD, DFDCP, DFDC) and conventional metrics (AUC and AP), which is reasonable for comparative assessment. 2. The cross-dataset and cross-manipulation evaluations are particularly appropriate for testing generalization capabilities, which is a critical challenge in deepfake detection. 3. However, for a framework claiming explainability as a key contribution, there's no quantitative evaluation of explanation quality. While textual outputs are shown in Figure 5, no metrics assess whether these explanations accurately identify the manipulation technique or forgery characteristics. 4. For the claimed localization capability, only qualitative GradCAM visualizations are provided without quantitative localization accuracy metrics, making it difficult to objectively compare with other localization approaches. 5. The paper emphasizes multi-turn dialogue as a key capability. However, it neither establishes a standardized evaluation protocol to measure its effectiveness in deepfake analysis nor provides any examples of such interactions. 6. 
The forgery data simulation approach for training relies on Poisson blending of affine-transformed real images, but there's limited analysis of whether this adequately represents the artifacts found in modern AI-generated deepfakes, potentially limiting real-world applicability. Theoretical Claims: This paper does not present significant theoretical claims requiring formal mathematical proofs. The work is primarily empirical, focusing on the design and evaluation of a VLM-based framework for deepfake detection. The mathematical formulations presented in the paper (Equations 1-6) employ standard techniques commonly used in machine learning: - Equations 1-2 describe similarity computations between visual and textual features - Equation 3 uses the established Dice loss for segmentation - Equation 4 employs standard binary cross-entropy loss for classification - Equation 5 uses cross-entropy loss for the LLM - Equation 6 describes a basic blending process for generating training data These formulations appear correctly applied for their intended purposes, but they don't constitute novel theoretical contributions requiring verification of mathematical proofs. Experimental Designs Or Analyses: I examined several aspects of the experimental methodology, finding both strengths and limitations: 1. The dataset selection and evaluation protocols using standard benchmarks (FF++, CDF2, DFD, DFDCP, DFDC) with established metrics (AUC, AP) follow sound practices in the field. 2. The cross-dataset evaluation appropriately tests generalization capability, training on FF++ real data and testing across multiple datasets. 3. However, the LLM evaluation process lacks methodological clarity. The paper states they "utilize the LLM's output ('Yes' or 'No') to classify authenticity" but doesn't explain how they handle cases where LLM outputs may be nuanced or ambiguous rather than strictly binary. 4. 
The LVLM-based method comparison in Table 3 shows baseline performances that are surprisingly low (e.g., Qwen2-VL at ~48% AUC on some datasets, near random chance), raising questions about implementation fairness or model configuration. 5. The training data simulation using Poisson blending of affine-transformed images (Section 3.3) may not adequately represent artifacts found in modern AI-generated deepfakes, yet this potential limitation isn't acknowledged. 6. The ablation studies, while informative, don't systematically isolate the contribution of each of the three claimed core components, particularly for the multi-modal prompt tuning framework. Supplementary Material: No appendix or supplementary material is provided in the paper. Relation To Broader Scientific Literature: This paper's contributions relate to several research streams: 1. **Deepfake detection methods**: Traditional approaches (Li et al., 2020; Shiohara & Yamasaki, 2022; Nguyen et al., 2024) focused on data augmentation, feature consistency, and frequency domain analysis. This work acknowledges their limitations in capturing human knowledge about forgery characteristics and proposes VLMs as a solution. 2. **Vision-Language Models**: Builds upon recent LVLM architectures like BLIP-2 (Li et al., 2023), LLaVA (Liu et al., 2024), and MiniGPT-4 (Zhu et al., 2024), but adapts them specifically for forensic analysis --- a departure from their general image understanding focus. 3. **Multimodal forensics**: Extends work like FakeShield (Xu et al., 2024) and FKA-Owl (Liu et al., 2024b), which also applied LVLMs to forgery detection, by introducing the knowledge-guided forgery detector and incorporating detailed localization capabilities. 4. **Prompt tuning literature**: The forgery prompt learning approach builds upon prompt tuning techniques (Lester et al., 2021; Liu et al., 2022) but adapts them for the multimodal forensic context. 5. 
**Explainable AI**: While works like FFAA (Huang et al., 2024) also explored explainable forgery analysis, this paper's integration of multi-turn dialogue capabilities represents an evolution in interactive forensic analysis. Essential References Not Discussed: The paper adequately cites relevant related works, with no major omissions. Other Strengths And Weaknesses: **Strengths:** 1. The integration of VLMs with deepfake detection addresses a genuine need for improved generalization and explainability in forensic analysis. 2. The knowledge-guided approach acknowledges an important gap in current methods: the difficulty of capturing human forensic knowledge through data augmentation alone. 3. The reference-based optimization process shows promising results for enhancing feature robustness. 4. The paper demonstrates versatility by evaluating both cross-dataset and cross-manipulation scenarios, which is crucial for real-world applications. **Weaknesses:** 1. The performance improvements, while consistent, are relatively modest (average 1.34% AUC improvement) given the complexity of the proposed framework. 2. The technical description of the knowledge acquisition process is vague --- specifically how the "learnable context" is integrated with predefined real/fake descriptions. 3. The framework's complexity (multiple interconnected components) may hinder practical deployment compared to simpler approaches. 4. The paper doesn't adequately analyze failure cases or where the approach struggles, which would provide valuable insight into its limitations. 5. The training data simulation approach may not capture the sophisticated artifacts produced by state-of-the-art deepfake generators, potentially limiting real-world effectiveness. 6. The lack of quantitative metrics for both localization accuracy and explanation quality makes it difficult to objectively assess two of the framework's key claimed capabilities. Other Comments Or Suggestions: 1. 
**Typos and clarifications needed:** - Abstract: "due to the misaligned of their knowledge" should be "misalignment" - Section 3.2: "$E _ {forgery} \in \mathbb{R}^{{n _ f} \times C _ {emb}}$" - $n _ f$ is not properly defined 2. **Technical clarifications needed:** - The "learnable context" mentioned in Section 3.1 needs more detailed explanation - The specific format of prompts used during LLM evaluation should be explicitly shown - The process for converting LLM outputs to binary decisions for AUC calculation requires clarification 3. **Evaluation suggestions:** - Include quantitative metrics for localization accuracy - Provide more examples of multi-turn dialogues in the main paper - Compare computational efficiency and inference time with existing methods - Consider human evaluation for explanation quality 4. **Missing discussion:** - Analysis of potential failure cases would strengthen the paper Questions For Authors: 1. Could you provide quantitative evaluation metrics for the localization accuracy of your approach? The current paper only shows qualitative visualization without objective metrics to compare against other localization-capable methods. 2. What specific "external manipulation knowledge" is incorporated into your framework? Section 3.1 mentions "real and fake image descriptions" but doesn't detail their content or source, making it difficult to assess this key component. 3. The multi-turn dialogue capability is prominently claimed but not demonstrated. Could you explain how you evaluate dialogue quality, and how your approach specifically enables multi-turn reasoning beyond what existing LVLM methods provide? 4. In Table 3, several baseline LVLM methods (e.g., Qwen2-VL at 47.99% average AUC) perform near random chance. What explanation do you have for these unexpectedly low baselines? 5. 
How does your forgery data simulation approach using Poisson blending of affine-transformed images capture the artifacts produced by modern AI-generated deepfakes? This seems critical for real-world generalization. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: **1) Multi-turn Dialogue Capabilities:** See response to KySN Q1. **2) Learnable Context:** Textual descriptions are generated via GPT-4, validated by human annotators. Examples include “Inconsistent head poses” or “Mismatched skin texture”. These annotations are available in https://anonymous.4open.science/r/DFDGPT-8E5C. **3) Quantitative Evaluation of Localization Accuracy and Explanation Quality:** We thank the reviewer for the valuable suggestion. As suggested, we have conducted a quantitative evaluation for both the localization and explanation capabilities of our approach using two metrics: Text Localization Accuracy (TLA) and Cosine Semantic Similarity (CSS). TLA measures the consistency between the tampered region descriptions produced by our LLM and the ground-truth localization annotations, by using the Dice coefficient. To objectively assess the explanation quality, following FakeShield (ICLR’25), we calculate the CSS by computing the cosine similarity between the high-dimensional semantic vector representations of the generated explanation text and the corresponding ground-truth text. For this evaluation, we trained both our approach and a fine-tuned version of PandaGPT on our synthetic forgery dataset. The results, summarized in the table below, indicate that our method achieves significantly higher localization accuracy and explanation quality than PandaGPT. We appreciate the reviewer's suggestion, and we will incorporate these quantitative evaluation metrics and the corresponding results into the revised manuscript. 
| | CDF1 | | CDF2 | | DFDC | | DFDCP | |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| | TLA | CSS | TLA | CSS | TLA | CSS | TLA | CSS |
| PandaGPT | 0.6239 | 0.7666 | 0.6241 | 0.7717 | 0.6389 | 0.7606 | 0.6220 | 0.7846 |
| Ours | 0.7762 | 0.8532 | 0.7755 | 0.8498 | 0.7662 | 0.8235 | 0.7842 | 0.8370 |

**4) Component Contribution:** We appreciate the opportunity to clarify the contributions of each module in our framework. In Table 5, we present ablation experiments on our three primary modules, which validate their individual effectiveness. The Reference-based Optimization Process (ROP) is specifically designed to enhance the training stability and feature discriminability of the Knowledge-guided Forgery Detector (KFD). To isolate its contribution, we compare its performance with and without ROP. The ROP is not directly connected to other components (LLM and LoRA), and its benefits are implicitly propagated through KFD’s refined features during training. We will explicitly clarify ROP’s role in Section 3.1 to avoid ambiguity. **5) Simulation Limitations:** See response to MUQX Q4. **6) AUC Calculation by LLM Output:** See response to MUQX Q1. **7) Implementation Fairness about Qwen:** We note that Qwen-VL is a general-purpose visual question answering model and is not specifically designed for deepfake detection, which leads to its lower performance. To ensure fairness, all methods are evaluated under identical pre-processing conditions (32 frames per video, 224×224 resolution) and consistent evaluation protocols. Furthermore, we will release our code to facilitate reproducibility. **8) Inference Time:** See response to KySN Q5. **9) Failure Cases and Limitations:** Our approach faces limitations in the training strategy.
The alternating training strategy for multi-turn dialogue introduces domain gaps: general-purpose VQA datasets prioritize object-centric reasoning, whereas fine-grained forgery detection requires localized artifact analysis. This misalignment occasionally results in a decrease in forgery detection performance (see Table 2). Failure cases are available at https://anonymous.4open.science/r/DFDGPT-8E5C. To address these limitations, we will construct domain-specific forgery QA datasets with spatially grounded annotations. We will add a discussion section to elaborate on limitations and future work. **10) SOTA Generators Tested:** We evaluated various deepfakes generated by state-of-the-art (SOTA) models, including StyleGAN-3, DiT-XL/2, Stable Diffusion, etc. For experimental results, refer to the response to MUQX Q3. We thank all reviewers for their constructive feedback. Revisions will address every point rigorously. --- Rebuttal Comment 1.1: Comment: I appreciate the authors’ detailed response during the rebuttal phase. Although it improved my understanding in some areas, it does not sufficiently shift the overall strength or novelty of the submission to warrant a change in score. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer’s acknowledgment of our detailed responses during the rebuttal phase. We respectfully request further consideration of our contributions, which we believe offer significant novelty and strength in several key areas: **Comprehensive Multi-Modal Framework:** Our work introduces a novel LVLM-based deepfake detection framework that integrates three key components: a knowledge-guided forgery detector, a multi-modal prompt tuning mechanism, and an iterative refinement strategy for multi-turn dialogue.
Unlike previous methods that focus solely on spatial or frequency-domain artifacts, our framework leverages external forensic knowledge and fine-grained prompt embeddings to bridge the gap between visual cues and textual descriptions. This architecture enables our model not only to classify images as real or fake but also to generate localized, human-readable explanations of forgery regions. **Visual-Textual Consistency for Deepfake Detection:** Our approach capitalizes on the strong visual–textual representations learned by large-scale pretrained models, specifically the CLIP visual encoder within ImageBind. CLIP, trained on billions of image–text pairs, inherently captures rich semantic and fine-grained visual features. We leverage this capability by aligning the visual features extracted from input images with corresponding textual embeddings that describe pristine and potentially manipulated content. After fine-tuning with SBI image–text pairs, our approach is able to detect deepfake artifacts accurately. **Robustness Across Diverse Forgery Scenarios:** We have conducted extensive experiments on multiple benchmarks, including FF++, CDF1, CDF2, DFD, DFDCP, DFDC, and DF40. Our approach achieves state-of-the-art AUC values under cross-dataset evaluations. Moreover, our approach effectively handles various forgery methods, ranging from conventional face-swapping to entirely synthesized forgeries. In doing so, it substantially advances the current performance envelope of deepfake detection techniques. **Explainability and Multi-Turn Dialogue Capability:** Beyond detection accuracy, our approach supports multi-turn dialogues that allow users to inquire about specific image content and forgery regions. This interactive capability not only enhances transparency but also contributes to the overall explainability of the detection process—a critical need in forensic applications.
Our extensive qualitative and quantitative evaluations (e.g., through GradCAM visualizations, CSS metric, and video-level AUC/AP metrics) further substantiate this contribution. In summary, we believe that the integration of multi-modal forensic knowledge, the visual-textual consistency, and the novel multi-turn dialogue capability collectively represent a significant step forward in the field of deepfake detection. We respectfully hope that these clarifications will prompt a favorable re-assessment of our submission. Thank you for your consideration.
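For concreteness, the TLA and CSS metrics described in point 3) of the rebuttal above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the toy mask, and the use of generic embedding vectors are assumptions.

```python
import numpy as np

def dice_coefficient(pred_mask, gt_mask):
    """Dice overlap between predicted and ground-truth binary region masks,
    used here as a stand-in for Text Localization Accuracy (TLA)."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:  # both masks empty: define the overlap as perfect
        return 1.0
    return 2.0 * np.logical_and(pred, gt).sum() / denom

def cosine_semantic_similarity(vec_a, vec_b):
    """Cosine similarity between two semantic embedding vectors,
    as used for the Cosine Semantic Similarity (CSS) metric."""
    a = np.asarray(vec_a, dtype=float)
    b = np.asarray(vec_b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy example: identical masks and identical embeddings give a perfect score.
mask = np.array([[0, 1], [1, 1]])
print(dice_coefficient(mask, mask))                        # 1.0
print(cosine_semantic_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
```

In practice, CSS would be computed on sentence-embedding vectors of the generated and ground-truth explanation texts rather than on toy vectors.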
Summary: This paper proposes leveraging LLMs and VLMs to improve model generalization and explainability. This is achieved by a two-stage pipeline: a knowledge-guided detection stage that uses human priors to generate feature embeddings, and a stage that feeds these embeddings to an LLM to output detection results. The experimental results show that it successfully incorporates the capacity of LLMs into deepfake detection. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes, it makes sense. Theoretical Claims: N/A Experimental Designs Or Analyses: It is partially reasonable. I have a few suggestions.

1. Fig. 5 is a crucial experiment to validate one of the most important advantages of using LLMs & VLMs, namely their explainability. Hence, it should be conducted on a larger scale, at least not limited to in-dataset scenarios. In addition, you should have appropriate strategies for evaluating the accuracy of the generated text; otherwise, how do you know if the output text is appropriate?
2. More recent SOTA methods should be compared, for example [1], [2], [3]. Among them, [1] also discusses the usage of SBI, which is deployed in this paper.
3. More datasets are recommended. The datasets used in Tab. 1 cover only three types, i.e., CDF, DFDC, and DFD. You may include more datasets like DF40 for better illustration.

[1] Can We Leave Deepfake Data Behind in Training Deepfake Detectors? // NeurIPS'24
[2] Exploring Unbiased Deepfake Detection via Token-Level Shuffling and Mixing // AAAI'25
[3] DiffusionFake: Enhancing Generalization in Deepfake Detection via Guided Stable Diffusion // NeurIPS'24

Supplementary Material: N/A Relation To Broader Scientific Literature: It is relevant to generalizable deepfake detection. Essential References Not Discussed: Please refer to Experimental Designs Or Analyses. Other Strengths And Weaknesses: The method employed is rather simple, essentially following a contrastive learning approach.
However, the problem it addresses is intriguing and holds practical significance. My primary concern is that the experiments may be somewhat insufficient; please refer to Experimental Designs Or Analyses. Other Comments Or Suggestions: **update after rebuttal** The authors have partially addressed my concerns; therefore, I retain my original rating. Questions For Authors: Notably, SBI can only simulate face-blending artifacts. Therefore, it cannot provide forgery clues about generative artifacts. How can your method deal with fake images generated by entire-face synthesis without blending? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **1) AUC Calculation by LLM Output:** To ensure rigorous and reproducible evaluation of text-level AUC, we implemented a deterministic rule-based parsing strategy for extracting binary labels ("yes"/"no") from model outputs. If the output contains "yes" or "is deepfake", the frame is labeled fake. If the response contains "no" or "not deepfake", the frame is labeled real. If none of the keywords are detected, the response is labeled real by default. The experimental results in Tables 3, 4, 5, and 7 demonstrate the effectiveness of our scheme. **2) Cross-dataset Evaluation of LLM & VLM:** We appreciate the suggestion. A figure illustrating the cross-dataset evaluation has been added to the anonymous GitHub repository and is available at https://anonymous.4open.science/status/DFDGPT-8E5C. We will incorporate this figure into Section 4.3. **3) Comparison with Recent SOTA:** Similar to Table 2, we add more comparisons against recent SOTA detection methods across various test sets. The new results are listed in Table A below and will be incorporated into our manuscript. ProDet is evaluated using publicly available code, while the results for other approaches are obtained from their original publications.

Table A. Generalization performance across various datasets.

|Methods|Venue|CDF2||DFDC||CDF1||DFDCP||
|-|-|-|-|-|-|-|-|-|-|
|||AUC|AP|AUC|AP|AUC|AP|AUC|AP|
|ProDet|NIPS’24|92.62|96.05|71.52|72.80|94.48|96.66|82.83|88.89|
|RepDFD|AAAI’25|89.94|-|80.99|-|-|-|95.03|-|
|CFM|TIFS’24|89.65|-|80.22|-|-|-|-|-|
|ED|AAAI’24|93.60|-|75.40|-|-|-|90.20|-|
|UDD|AAAI’25|93.13|-|81.21|-|-|-|88.11|-|
|Ours|-|94.71|93.59|79.12|77.69|97.62|97.67|91.81|88.26|

**4) More Datasets:** As suggested, we have used the DF40 dataset to further evaluate the generalization capability of our approach. The DF40 dataset comprises several synthetic deepfake datasets that are generated using real images in FF++. We use these datasets to evaluate our method’s cross-manipulation detection ability.
Notably, our approach continues to exhibit robust detection performance against state-of-the-art deepfake generation techniques (e.g., FSGAN, E4S, LIA, StyleGAN, DDIM, PixArt-α, etc.).

Table B. Generalization performance across various deepfake methods. The models are all trained on the FaceForensics++ dataset and then evaluated on unseen types of deepfakes.

|Face-swapping|||||||||
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
||UniFace|SimSwap|InSwapper|FSGAN|FaceDancer|BlendFace|e4s|FaceSwap|
|SBI|89.02|93.22|88.52|89.62|78.18|95.09|86.36|94.37|
|CADDM|86.86|90.41|78.65|88.86|76.54|90.75|87.92|97.96|
|Ours|90.61|90.97|87.64|93.75|82.97|92.10|94.68|93.34|

|Face-reenactment|||||||||
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
||PIRender|OneShot|HyperReenact|FOMM|FS_vid2vid|TPSMM|MCNet|LIA|
|SBI|81.81|87.54|65.31|88.05|83.72|82.13|83.47|89.22|
|CADDM|77.37|85.05|69.26|84.77|72.86|71.35|73.40|69.67|
|Ours|88.29|90.45|81.55|93.34|71.56|78.14|81.48|99.99|

|Entire Face Synthesis|||||||||
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
||VQGAN|StyleGAN2|StyleGAN3|StyleGAN-XL|DDIM|DiT-XL/2|PixArt-α|RDDM|
|SBI|91.50|97.91|97.91|23.26|99.56|79.04|98.78|53.66|
|CADDM|99.99|100.00|100.00|98.69|98.10|79.90|99.74|98.59|
|Ours|99.99|100.00|100.00|100.00|99.93|94.18|100.00|86.59|

**5) Limitations about Simulation:** Yes. The SBI-based synthetic forgery pipeline is primarily designed to simulate face-blending artifacts by focusing on modeling boundary inconsistencies. Although it is effective in detecting certain forgeries, its performance degrades when applied to fully synthesized images generated by models such as StyleGAN-XL and RDDM, as evidenced by the experimental results in our response to Q3. While our approach is built on SBI, the integration of pre-trained knowledge from the LVLM has notably enhanced its generalized detection ability across various types of forgeries (see Table B). In future work, we will incorporate model-related artifacts into the forgery generation process to further improve detection performance.
We will include a dedicated discussion section to further elaborate on these limitations and outline potential directions for future work. --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal. I have a few questions:

1. **AUC Calculation by LLM Output:** According to your statement, the final prediction is obtained through a deterministic rule-based parsing strategy, resulting in binary output. In other words, your prediction lacks confidence scores, which may introduce a significant issue. First, is it not the case that the LLM might generate ambiguous judgments, such as “Maybe it is a deepfake, but I'm not sure”? Would this lead to a misalignment between the final results and the LLM output? Secondly, could the absence of confidence scores prevent the model from distinguishing between easy and hard samples, thus impairing its understanding capability? Finally, without confidence scores, with only 0 and 1 as outputs, it would be impossible to utilize different thresholds to plot the ROC curve in terms of FPR and TPR. Similarly, it would not be feasible to plot the PR curve. Consequently, how do you calculate the AUC and AP?

2. In Table A, the AUC for "Ours" is the best, but the AP is lower than that of ProDet. Why is this the case?

3. The learned DFD capability of your LLM model during training entirely depends on the blending clues generated by the SBI. In other words, the LLM has never seen any model-based synthetic artifacts. How, then, does it learn to differentiate model-based artifacts?

--- Reply to Comment 1.1.1: Comment: Thanks for your comment. Below are our detailed responses. **(1) AUC Calculation by LLM Output:** Although our LLM prediction does not provide a confidence score for a single frame, we can still compute the forgery confidence score for each video by calculating the proportion of frames that are classified as fake.
Below is our detailed response to the reviewer’s concerns: 1) **Ambiguous Judgments:** Our experiments on 7,000 images from the CDFv2 dataset have shown that our system does not produce ambiguous judgments. During fine-tuning, we enforced a standardized response protocol that guides the LLM to generate clear, binary outputs. Although the LLM is capable of producing ambiguous statements in principle, our controlled fine-tuning has significantly minimized such occurrences, and no ambiguous outputs were observed in our experiments. 2) **Absence of Confidence Scores:** At the frame level, the LLM outputs a binary decision; however, the associated Vision-Language Model (VLM) is capable of generating a continuous confidence score for each frame. Moreover, we can incorporate this frame-level confidence information into the LLM through a text-based Q&A process (evidenced by our screenshots in the supplementary material). For video-level evaluation, we calculate the forgery probability as the ratio of frames classified as fake, yielding a continuous score in the range [0,1]. This video-level confidence score enables us to compute ROC and PR curves, thereby allowing us to accurately calculate both AUC and AP. 3) **AUC and AP Calculation:** Our evaluation metrics (AUC and AP) are calculated at the video level. For each video, we uniformly sample 32 frames and define the video’s forgery probability as the proportion of frames classified as fake. Consequently, each video receives a confidence score in the range [0, 1], which allows us to compute AUC and AP accurately. **(2) Discrepancy between AUC and AP:** In Table A, our method achieves the highest AUC while exhibiting a slightly lower AP compared to ProDet. We attribute this discrepancy primarily to the differences in how these metrics are calculated.
AUC (Area Under the ROC Curve) weighs all false positives equally, whereas AP (Average Precision) weighs false positives at a threshold $\tau$ with the inverse of the model’s likelihood of outputting any scores greater than $\tau$ [1]. This phenomenon, where a method shows high AUC but relatively lower AP, has also been observed in other studies [2, 3], further underscoring that the two metrics capture different aspects of detection performance. [1] McDermott, M., Zhang, H., Hansen, L., Angelotti, G., & Gallifant, J. (2024). A closer look at auroc and auprc under class imbalance. NIPS, 37, 44102-44163. [2] Nguyen, Dat, et al. "Laa-net: Localized artifact attention network for quality-agnostic and generalizable deepfake detection." CVPR. 2024. [3] Yan, Zhiyuan, et al. "Transcending forgery specificity with latent space augmentation for generalizable deepfake detection." CVPR. 2024. **(3) Model-Based Artifacts:** Our approach generalizes beyond the blending cues generated by the SBI process due to three key aspects: 1) **Blending Operations:** The SBI pipeline inherently incorporates blending operations. This allows our model to effectively detect forgeries in datasets like FF++, CDF, and DFDC, where blending is a common post-processing operation. 2) **Convolution and Up-sampling–like operations in SBI:** In generating the SBI dataset, we apply various image processing techniques such as blurring and scaling. These operations involve convolution and up-sampling, which can introduce artifacts similar to those found in model-based synthetic images. This exposure helps the model learn discriminative features that extend beyond simple blending clues and mimic the artifacts typically seen in fully synthesized images. 3) **Pretrained CLIP Visual Encoder:** Our approach leverages the CLIP visual encoder from ImageBind, which has been pretrained on large-scale image–text pairs. 
Several recent studies have demonstrated that fine-tuning such models can yield high detection accuracy for synthetic images and can learn to discriminate model-based artifacts—even without task-specific training [4, 5]. Consequently, after fine-tuning with SBI image–text pairs, our LVLM is able to detect not only blending artifacts but also model-based artifacts. [4] Ojha, U., Li, Y., & Lee, Y. J. (2023). Towards universal fake image detectors that generalize across generative models. CVPR, 24480-24489. [5] Khan, S. A., & Dang-Nguyen, D. T. (2024). Clipping the deception: Adapting vision-language models for universal deepfake detection. ICMR 2024, 1006-1015.
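The AUC/AP discrepancy discussed in point (2) can be reproduced with a toy ranking. The helper functions below are an illustrative sketch, not the authors' evaluation code; with two positives among ten negatives, one ranking attains higher AUC yet lower AP than the other.

```python
def auc_from_ranking(ranking):
    """Rank-based AUC: fraction of (positive, negative) pairs ranked correctly.
    `ranking` lists labels (1 = fake, 0 = real) ordered by descending score."""
    n_pos = sum(ranking)
    n_neg = len(ranking) - n_pos
    wins, negs_seen = 0, 0
    for label in ranking:
        if label == 1:
            wins += n_neg - negs_seen  # negatives ranked below this positive
        else:
            negs_seen += 1
    return wins / (n_pos * n_neg)

def ap_from_ranking(ranking):
    """Average precision: mean of precision@k taken at each positive hit."""
    hits, precisions = 0, []
    for k, label in enumerate(ranking, start=1):
        if label == 1:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / hits

# Two positives among ten negatives, ordered by descending score.
model_a = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]  # positives at ranks 1 and 12
model_b = [0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]  # positives at ranks 3 and 4
print(auc_from_ranking(model_a), ap_from_ranking(model_a))  # 0.5, ~0.583
print(auc_from_ranking(model_b), ap_from_ranking(model_b))  # 0.8, ~0.417
```

Model B wins on AUC while Model A wins on AP, mirroring the cited observation that the two metrics weigh false positives differently under class imbalance.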
Summary: This paper introduces a method based on large vision language models (LVLMs) for the task of deepfake detection. To this end, the authors proposed a number of modules to enhance LVLM performance on deepfake detection, including a knowledge-guided forgery adaptation module (KFD), a multi-modal prompt tuning framework, and an iterative refinement strategy. As for data, the authors prepared their multimodal training data based on the real videos in FF++. In experiments, the authors conducted comprehensive experiments, including intra-dataset evaluation, cross-dataset evaluation, cross-manipulation evaluation of KFD, GradCAM visualization, etc. Experimental results demonstrate the effectiveness of the proposed method. ## update after rebuttal The authors have partially addressed my concerns in the rebuttal phase. However, based on the authors' responses, I believe that a significant portion of the main paper content requires substantial revision, such as the missing demonstrations for multi-turn dialogue capabilities and details for metric calculations. I feel uncertain whether such changes can be properly reflected in the revised version of the paper. Therefore, I will maintain my original rating and recommend that the authors consider resubmitting the revised paper to a future conference. Claims And Evidence: Some of the claims are not fully supported by experimental results. For example, the authors claimed that their method not only supports deepfake detection but also facilitates multi-turn dialogues in Section 4.3. However, the results in Figure 5 seem to cover only single-turn dialogues. Methods And Evaluation Criteria: In this paper, the authors mainly used the criterion of video-level Area Under the Receiver Operating Characteristic Curve, namely video-level AUC.
Based on my understanding, the video-level AUC requires a probability score for each video, and such a probability score is usually calculated by averaging the probability scores of sampled frames in each video. However, in this paper, the authors' method can only provide 'yes' or 'no' for each sampled frame, namely a one-hot vector instead of a probability score. How do the authors calculate the video-level AUC then, by considering the fraction of 'yes' responses in all responses of sampled frames? If so:
- How do the authors extract 'yes' from models' responses, with ChatGPT or manual rules?
- Is it still fair to compare with other SOTA methods, which calculate video-level AUC based on the mean probability score of sampled frames?
- How many sampled frames are used per video?

I believe such discussions are required. Theoretical Claims: N/A Experimental Designs Or Analyses: Most of the experiments are sound and valid. However, I have some concerns over the results in Cross-Manipulation Evaluation of KFD, namely Table 2. Based on my understanding, the training of CADDM typically requires fake data. How did the authors train CADDM with only real data in this section? Supplementary Material: The supplementary material is missing, though mentioned in Dialogues Visualization of Section 4.3. Relation To Broader Scientific Literature: The task of this paper is to perform generalized deepfake detection based on LVLMs. Firstly, this could expand the applicability of LVLMs to the area of deepfake detection, further fostering the development of AGI. Besides, this could also boost the development of deepfake detection, by presenting models of better performance. Essential References Not Discussed: The authors have conducted a comprehensive literature review. Other Strengths And Weaknesses: - One major weakness of the proposed method is its inference time.
Using the Vicuna-7B model for inference could significantly slow down the inference speed compared with other SOTA methods, which typically contain fewer than 1B parameters. It is recommended that the authors discuss the inference time and overall throughput. - I did not find the specific details for "An iterative refinement strategy enabling multi-turn dialog for evidence-based reasoning", which is mentioned in the abstract. I feel like this paper is still unfinished. Other Comments Or Suggestions: None. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: **1) Multi-turn Dialogue Capabilities:** We appreciate your feedback. Following the strategy in AnomalyGPT, the alternating training strategy (Section 3.3, Implementation Details) inherently preserves Vicuna-7B’s multi-turn dialogue capabilities. While Figure 5 illustrates single-turn examples for clarity, the supplementary material (available at https://anonymous.4open.science/r/DFDGPT-8E5C) includes multi-turn dialogue screenshots. We apologize for the submission negligence and confirm full accessibility of all examples post-publication. **2) Video-level AUC Calculation for LLM:** The video-level AUC is computed by aggregating frame-level binary outputs. For each video, we sample 32 frames uniformly and calculate the ratio of “yes” responses (indicating “fake”). This ratio serves as the video’s probability score of being fake. To convert model outputs into binary labels ("yes"/"no"), we implemented a deterministic rule-based parsing strategy. If the output contains "yes" or "is deepfake", the frame is labeled fake. If the response contains "no" or "not deepfake", the frame is labeled real. If none of the keywords are detected, the response is labeled real by default. This is the common strategy among SOTA LLM-based methods, and we use the same sampling strategy (32 frames/video) and aggregation (mean pooling) for a fair comparison. We will add this clarification in Section 4.1. **3) CADDM Data Simulation:** Thank you for identifying this mistake. Yes, CADDM used fake data for training. We have corrected it in Table 2. **4) Inference Time:** We add the evaluation of inference time on the CDF2 dataset. The inference time of our method is mainly consumed by the LLM we used, as shown in Table A. Our VLM-only variant takes more inference time than CADDM but achieves much better precision. Our full method, incorporating both the VLM and LLM, takes more time than FAK-Owl but also achieves much better precision.
In addition, FAK-Owl can only provide binary (yes/no) responses and lacks multi-turn dialogue capabilities. Overall, despite taking more inference time, our method achieves better precision and enables explainability and generalization.

||Inference time per frame (s)|AUC|
|:-:|:-:|:-:|
|CADDM|0.026|85.68|
|Ours-VLM|0.059|97.62|
|FAK-Owl|0.642|69.84|
|Ours-LLM|1.211|95.97|

**5) Iterative Refinement Strategy:** The iterative refinement strategy refers to the alternating training between the deepfake detection task and the general visual dialogue task, as described in Section 4.1 (Implementation Details). This strategy enables our model to retain multi-turn reasoning capabilities by optimizing the forgery detection loss and the dialogue loss cyclically (see the supplementary materials at https://anonymous.4open.science/r/DFDGPT-8E5C).
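The rule-based parsing and video-level aggregation described in this rebuttal can be sketched as follows. The function names, the keyword precedence, and the toy responses are illustrative assumptions; the rebuttal specifies only the keyword sets and the default-to-real fallback.

```python
def parse_frame_label(response: str) -> int:
    """Deterministic rule-based parsing of an LLM response into 1 (fake) / 0 (real).
    Checking the negative keywords first is an assumption of this sketch."""
    text = response.lower()
    if "not deepfake" in text or "no" in text:   # note: "no" also matches "not"
        return 0
    if "is deepfake" in text or "yes" in text:
        return 1
    return 0  # no keyword detected: default to real

def video_score(frame_responses):
    """Video-level forgery probability: fraction of sampled frames labeled fake."""
    labels = [parse_frame_label(r) for r in frame_responses]
    return sum(labels) / len(labels)

def auc(fake_scores, real_scores):
    """Rank-based AUC over video-level scores (ties count as 0.5)."""
    wins = sum(1.0 if f > r else 0.5 if f == r else 0.0
               for f in fake_scores for r in real_scores)
    return wins / (len(fake_scores) * len(real_scores))

fake_video = ["Yes, this is deepfake.", "Yes.", "No.", "Yes."]
real_video = ["No.", "This is not deepfake.", "Yes.", "No."]
print(video_score(fake_video))  # 0.75
print(video_score(real_video))  # 0.25
```

With continuous video-level scores in [0, 1], standard ROC/PR machinery then applies; the `auc` helper above is the rank-based formulation.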
A Chaotic Dynamics Framework Inspired by Dorsal Stream for Event Signal Processing
Accept (poster)
Summary: Current state-of-the-art event stream processing methods are data-driven deep learning methods. Although these models have achieved high accuracy, they are heavily dependent on the structure of the training dataset. At a time when event sensors are not yet popular and there is a lack of large-scale event stream training data, these methods cannot be directly deployed in the real world. Thus, event stream data processing requires novel processing methods. This paper presents an event signal processing framework inspired by the dorsal visual pathway of the brain. The proposed framework uses chaotic dynamics to represent event data and combines this representation with traditional classification networks to perform event classification, achieving superior performance. Claims And Evidence: The claims made in the paper are well-supported by clear and convincing evidence. The theoretical justifications, experimental results, and comparisons with prior work effectively validate the proposed approach. The methodology is sound, and the conclusions drawn are consistent with the presented data. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-suited to the problem at hand. The methodological choices are well-justified, and the evaluation is conducted using appropriate benchmark datasets and metrics. The experiments are designed carefully, and the comparisons with baseline methods are meaningful. Overall, the paper adopts a sound approach to assessing the proposed method’s effectiveness. Theoretical Claims: The paper proposes a chaotic dynamical framework for processing event signals, utilizing a CCNN to encode event signals. In this approach, polarity-invariant event signals are encoded as periodic signals, while polarity-changing event signals are encoded as chaotic signals. The paper provides a comprehensive validation of the proposed theory, from theoretical derivations to experimental analyses.
The theoretical claims appear to be well-supported, with logical derivations and rigorous justifications. The experimental results further reinforce the correctness of the theoretical framework, demonstrating its effectiveness in handling event signals. Overall, the paper presents a solid theoretical foundation and empirical validation for the proposed method. Experimental Designs Or Analyses: The paper employs a dorsal-stream-inspired chaotic dynamical framework to process event signals, generating a dorsal-stream-inspired event representation. This representation is then used as input to conventional deep learning models for object recognition experiments. The proposed approach achieves state-of-the-art performance on certain datasets, further validating the effectiveness of the framework. The experimental design is well-structured, covering multiple datasets and providing a comprehensive comparison with recent state-of-the-art methods. Additionally, the paper includes a complexity analysis of the model and IoU experiments, further reinforcing the feasibility of the proposed approach. The results are clearly presented, and the conclusions are well-supported by the experimental findings. Overall, the study provides a solid theoretical foundation and empirical validation for the proposed method. Supplementary Material: I have reviewed the supplementary materials, and I appreciate that the authors have open-sourced their code. This significantly enhances the reproducibility of the proposed method and experiments, contributing to the transparency and reliability of the research. Moreover, making the code publicly available benefits the broader research community by facilitating further exploration and development in this area. Relation To Broader Scientific Literature: The paper builds upon prior work in event-based vision and chaotic dynamics, drawing inspiration from the dorsal visual stream to develop a novel event representation. 
Previous studies have explored event-based feature extraction and object recognition, but this work uniquely integrates a chaotic dynamical framework to encode event signals, distinguishing it from conventional approaches. Essential References Not Discussed: The paper provides a thorough review of relevant literature and appropriately cites key prior works related to event-based vision, chaotic dynamical systems, and biologically inspired representations. Based on my review, I did not identify any essential references that are missing. The citations effectively contextualize the proposed approach within the broader scientific literature. Other Strengths And Weaknesses:

**Strengths:** This paper refers to the dynamic visual cognition mechanism of the brain and proposes an event representation method based on chaotic dynamics. Generally, the results are interesting, and the process is correct to the best of my knowledge. The paper is well organized and clearly written.

**Weaknesses:** The author does not adequately describe the dynamic visual pathways of the real brain, which limits the reader's understanding of the paper. In the dynamic analysis section, the author only provides phase space plots and equilibrium point analysis, and lacks more rigorous analysis methods such as Lyapunov exponents.

Other Comments Or Suggestions: It is recommended to use the Lyapunov exponent to analyze the dynamic characteristics of the CCNN neuron, which is a more reliable approach. Questions For Authors: 1. What is the correspondence between the proposed framework and the real brain? Event cameras mimic the three-layer structure of the peripheral retina in humans. What part of the visual cognition process does CCNN correspond to? 2. The variables in Eq. (4) lack explanation, such as exp(-ae) and exp(-af); is V(ek) the inputted event data? 3. In Eq.
(6) the authors say "Thus, CCNN neurons output a periodic sequence Y(k) under constant stimulation, with the frequency of the period determined by the intensity of the input stimulus." However, in event stream processing, isn't the input signal a boolean variable of polarity? Therefore, in the framework proposed in this article, the frequency of the output periodic signal is constant, right? 4. I observed that Fig. 4(d) and Fig. 4(e) seem to have transient behavior, which is unnecessary when showing the dynamic characteristics. 5. In pg. 5, col. 2, line 248, the authors say "Using the Taylor series expansion of exp(x)". This sentence may cause misunderstanding. Which variable in the paper does x refer to? 6. In pg. 5, col. 2, line 256, the authors say, "In this case, E(k)>0 can be expressed as......". Why should E(k) be greater than zero? 7. Can the author analyze the stability of the equilibrium point: is it a saddle point or a focus point? 8. Could the author provide detailed parameter settings for CCNN, CWT and subsequent training models? This will be helpful for reproducing the work. 9. Could you please clarify the role of F(x, y) in the wavelet transform experiment in Equation (15)? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your thorough evaluation and valuable feedback on our manuscript. We are also grateful for the constructive suggestions, which have helped us further refine the theoretical derivations, experimental design, and analysis in our paper. In response to your comments, we have revised and supplemented our manuscript, including providing a more detailed description of the biological background, conducting a more rigorous dynamic characteristics analysis (such as Lyapunov exponent analysis), and offering further clarification on key equations and experimental parameters. We believe these improvements enhance the clarity, completeness, and impact of our work. Below, we provide detailed responses to each comment and explain the corresponding revisions. **W1:** The author does not adequately describe the dynamic visual pathways of the real brain, which limits the reader's understanding of the paper. In the dynamic analysis section, the author only provides phase space plots and equilibrium point analysis, and lacks more rigorous analysis methods such as Lyapunov exponents. \ **A1:** Thank you for your valuable feedback. We have supplemented the discussion on the dynamic visual pathways of the brain to enhance the understanding of the biological visual system. Additionally, we have incorporated discussions and calculations of the Lyapunov exponent to provide a rigorous mathematical analysis of the dynamic characteristics of the CCNN neuron. **A1:** The chaotic dynamic framework proposed in this paper includes CCNN, CWT, and LPF. The CCNN is a brain-inspired network based on the primary visual cortex V1. After processing the event signals with CCNN, CWT and LPF are applied for analysis and extraction, enabling the detection of dynamic objects. This process corresponds to the brain's processing from V1 to MT.
**A2:** In Equation (4), $U(k)$ represents the modulation input, $Y(k)$ is the continuous output, $E(k)$ is the dynamic threshold, and $e^{-\alpha_f}$, $e^{-\alpha_e}$ denote the exponential decay factors that record the previous input states. $V(e_k)$ represents the input event signal data. **A3:** The polarity of the event signal is a Boolean value that can only be 0 or 1. Therefore, the frequency of the periodic signal that the CCNN outputs for the event data is constant. **A4:** Thank you for your valuable suggestions. We will remove the transient time periods in Figures 4(d) and (e), keeping only the steady-state behavior to display the dynamic characteristics. **A5:** Thank you for your suggestion. This paper performs a Taylor expansion of $e^{-(U(k)-E(k))}$, retaining only the first two terms, where $x$ refers to $-(U(k)-E(k))$. We will revise the explanation in the paper accordingly. **A6:** Thank you for your reminder. This is a minor wording error in the paper. It should be revised to 'In this case, $E(k)$ can be expressed as...'. The magnitude of $E(k)$ cannot be determined. **A7:** To analyze the stability of the equilibrium point, we differentiate the left-hand side of the equilibrium equation $E(k)^2-(U(k)-2)E(k)-\frac{V_E}{1-e^{-\alpha_e}}=0$, that is, $J=\frac{dF}{dE}=\frac{d}{dE}\left(E^2-(U(k)-2)E-\frac{V_E}{1-e^{-\alpha_e}}\right)=2E-(U(k)-2)$. At the equilibrium point $E^\ast$, $J(E^\ast)=2E^\ast-(U(k)-2)$. When $E^\ast=\frac{U(k)-2-\sqrt{(U(k)-2)^2+\frac{4V_E}{1-e^{-\alpha_e}}}}{2}$, $J(E^\ast)<0$, indicating that the equilibrium point is stable (an attracting point); when $E^\ast=\frac{U(k)-2+\sqrt{(U(k)-2)^2+\frac{4V_E}{1-e^{-\alpha_e}}}}{2}$, $J(E^\ast)>0$, indicating that the equilibrium point is unstable (possibly a saddle point). **A8:** Parameters of the CCNN model: $\alpha_f = 0.1$, $\alpha_e = 1.00$, $V_E = 50$, $U(0) = 0$, $E(0) = 0$, $Y(0) = 0$.
Parameters of the CWT: 'gaus1' is used as the base wavelet, with a scale range of 10. Training parameters: The data is split into training, validation, and test sets in a 3:1:1 ratio, with the random seed set to 2024. The model was trained for 5 epochs, with a batch size of 16. During optimization, the cross-entropy loss function was used, and the Adam optimizer was applied with an initial learning rate of 1e-4. To alleviate overfitting, a Dropout layer was added before the fully connected layers, and early stopping was incorporated during training. **A9:** Let $F(x,y)$ be the generative function of the event representation. The input $S_{ij}$ is the sum of the real part of the coefficient matrix, corresponding to different types of event sequences. It is convolved with the low-pass filter $H(f)$, and the coordinate points corresponding to event sequences where $S_{ij} >0$ are assigned a value of 255, while others are assigned a value of 0, enabling the detection of moving objects. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' rebuttal, which addresses all my concerns. I keep my original rating. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your recognition of our revisions and valuable feedback throughout the review process. We are glad to hear that our responses addressed your concerns adequately. Thank you once again for your time and constructive suggestions, which have significantly strengthened the quality of our work. We will carefully incorporate all remaining edits in the final version.
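As a concrete illustration of the neuron behavior described in A1 and the parameters listed in A8, the following minimal sketch simulates an unconnected CCNN-style neuron under constant stimulation. The recursion itself is an assumption (a standard PCNN-like update; the paper's exact Eq. (4) is not reproduced in this thread), and `ccnn_run` is a hypothetical helper name, not the authors' code.

```python
import math

# Sketch of an unconnected CCNN-style neuron, assuming a standard
# PCNN-like recursion (the paper's exact Eq. (4) is not shown here):
#   U(k) = exp(-alpha_f) * U(k-1) + S            (modulated input)
#   Y(k) = 1 / (1 + exp(-(U(k) - E(k))))  if U(k) > E(k), else 0
#   E(k) = exp(-alpha_e) * E(k-1) + V_E * Y(k)   (dynamic threshold)
# Parameters follow the rebuttal's A8: alpha_f = 0.1, alpha_e = 1.0,
# V_E = 50, zero initial state.

def ccnn_run(stimulus, alpha_f=0.1, alpha_e=1.0, v_e=50.0):
    u = e = y = 0.0
    outputs = []
    for s in stimulus:
        u = math.exp(-alpha_f) * u + s
        y = 1.0 / (1.0 + math.exp(-(u - e))) if u > e else 0.0
        e = math.exp(-alpha_e) * e + v_e * y
        outputs.append(y)
    return outputs

# Constant stimulus (event polarity fixed at 1): the neuron fires, the
# threshold E jumps by roughly V_E*Y, then decays until the neuron can
# fire again, giving the repeating fire/refractory pattern claimed in A1.
ys = ccnn_run([1.0] * 200)
```

Under these assumptions the output is a bounded, repeating fire/refractory pattern, consistent with the rebuttal's claim that constant-polarity input yields a periodic CCNN output.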
Summary: This paper proposes a chaotic dynamical framework inspired by the dorsal visual pathway for processing event signals and generating stable and generalizable event representations. By integrating it with deep neural networks, the authors achieved high accuracy on multiple event-based object classification datasets while demonstrating efficient inference. The work exhibits strong completeness in theoretical derivation, experimental validation, and cross-dataset generalization analysis. Claims And Evidence: The claims presented in this paper are well-supported through rigorous theoretical derivations and experimental validations. The mathematical modeling of CCNN is robust, and its chaotic properties are demonstrated through phase space analysis. The experimental results on multiple datasets, including N-Caltech101, N-CARS, N-MNIST, and ASL-DVS, indicate superior classification performance compared to existing methods. Additionally, the proposed method achieves a low inference time of 2.1ms per sample, demonstrating computational efficiency. Overall, the claims are substantiated with strong theoretical and empirical evidence, making the conclusions credible. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-designed and appropriate for event signal processing. The CCNN model presents a novel approach for event stream encoding, while CWT enhances the event representation. The benchmark datasets (N-Caltech101, N-CARS, etc.) are widely used, ensuring fair comparisons. The evaluation metrics (classification accuracy, IoU, inference time) comprehensively assess the framework’s performance. Furthermore, comparisons with various existing methods, including voxel-based and ANN-based approaches, validate the proposed framework’s advantages. The methodology is sound and well-justified, with appropriately chosen evaluation criteria. 
Theoretical Claims: The theoretical derivations are clear, and the chaotic behavior of CCNN is rigorously analyzed through mathematical modeling and phase space analysis. However, some assumptions in the derivations, particularly in Equations (4)–(7), could be more explicitly stated. Additionally, a discussion on the generalization of the chaotic encoding approach to other tasks (e.g., motion segmentation, object tracking) would further strengthen the theoretical contributions. Overall, the theoretical claims are well-founded, albeit with minor areas for clarification. Experimental Designs Or Analyses: The experimental design is comprehensive and well-structured, covering multiple datasets (N-Caltech101, N-MNIST, N-CARS, ASL-DVS) to ensure the robustness and generalization of the proposed framework. The training and evaluation protocols (ResNet-34 pretrained model, Adam optimizer) are appropriate for the task. The results consistently demonstrate the superiority of the proposed method in accuracy, robustness, and computational efficiency compared to prior works. Supplementary Material: The supplementary material provides additional experimental results and implementation details, which are well-organized and contribute to a better understanding of the proposed framework. Relation To Broader Scientific Literature: The paper is well-situated within the broader scientific literature on event-based vision and bio-inspired neural computation. It extends existing event representation methods, integrates insights from biological visual processing, and aligns with research on spiking neural networks (SNNs) and chaotic dynamics. The combination of chaotic dynamics and event camera processing is novel and contributes valuable insights to the field, paving the way for further advancements in neuromorphic computing. 
Essential References Not Discussed: The paper cites most of the essential prior works, but additional references on chaotic neural networks in neuromorphic computing and self-supervised learning approaches for event streams could further strengthen the literature review. Including these references would provide a more comprehensive discussion of the related work. Other Strengths And Weaknesses: Strengths The biologically inspired chaotic dynamical model integrated with neuromorphic vision sensor data provides novel theoretical insights and methodological frameworks for event signal processing. This interdisciplinary approach offers valuable inspiration for both computational neuroscience and computer vision. Comprehensive experimental design, including comparative studies across multiple event-based datasets, effectively validates the method's universality. Theoretical derivations and visualizations complement each other, enhancing the credibility of conclusions. Weaknesses The lateral brain diagram shows misalignment in the circular annotation, and the left arrow in the right-side network model is not centered. Layout adjustments are recommended to improve readability. In Figure 4, overlapping axis labels are observed in the first two subplots, requiring precision adjustments. Theoretical Completeness: The stability proof in Section 3.2 regarding "periodic outputs from constant event signal inputs" is overly concise. Critical derivation steps should be supplemented. Other Comments Or Suggestions: Clarifying the assumptions in the CCNN equations and providing inference speed comparisons across different hardware platforms would enhance the paper’s practical relevance. Questions For Authors: 1. Figure 1(a): The purpose of the two rectangular boxes in the Event image is unclear. Do their spatial positions correspond to specific event-triggering patterns? Why were these regions selected for annotation? 2. 
Biological Relevance of Figure 3: While visualizing human brain motion recognition processes, the text lacks explanations linking these results to dorsal visual pathway functions. Please supplement biological interpretations of these visualizations. 3. The paper claims CCNN is inspired by the primary visual cortex but fails to clarify how model parameters relate to biological mechanisms. Biological constraints during model design should be explicitly discussed. 4. The authors highlight real-time inference efficiency but provide no hardware deployment tests. Has the model been tested in real-world scenarios? What is the actual inference latency? 5. Could the authors provide detailed hyperparameter selection criteria (e.g., learning rate, batch size) and their impact on results? Are the improvements dataset-specific or task-agnostic? 6. What are the computational resource requirements and training time? Are there optimizations for resource-constrained environments? 7. The current method is primarily used for classification. Can it be extended to other tasks (e.g., object detection, optical flow estimation)? 8. Why does PointNet++ have a longer runtime despite having fewer parameters? Shouldn't fewer parameters imply a shorter runtime? Why does the method in this paper have more parameters than the Frame-based method yet a shorter runtime? 9. Have you considered combining CCNN with SNN (Spiking Neural Networks) to further enhance biological plausibility? 10. Why was no comparison made with Event Transformer (EvT)? Have you considered using Transformer for event representation? 11. Is CCNN still effective with low data quantities? Have small-sample learning experiments been conducted? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your thorough evaluation and valuable feedback on our paper. We are pleased that our chaotic dynamical framework and experimental results have been recognized, and we are grateful for the insightful questions that have helped us further improve the paper. In response to your comments, we have made the necessary revisions, and we believe these improvements will enhance the quality and impact of our work. Below, we address each of your comments in detail, providing clarifications and additional analyses where necessary. **A1:** Thank you for the reminder. The annotations in Figure 1(a) highlight the superiority of the event representation generated by our framework. While the pyramid in the original image shows numbers 1-6, our framework clearly restores their contours, unlike event representations from end-to-end networks, which fail to preserve these details. **A2:** The chaotic dynamic framework proposed in this paper includes CCNN, CWT, and LPF. CCNN is a network inspired by the primary visual cortex (V1) of the brain. After the event signals are processed by CCNN, they are further analyzed and extracted using CWT and LPF for dynamic object detection, which corresponds to the brain's processing from V1 to MT. **A3:** The unconnected CCNN comprises modulated input $U(k)$, continuous output $Y(k)$, and dynamic threshold $E(k)$. When $U(k)>E(k)$, the output is $Y(k) = \frac{1}{1+e^{-(U(k)-E(k))}}$, indicating excitation. After stimulation, $E(k)$ increases, requiring a stronger stimulus for the next output, mimicking the neuronal refractory period. Siegel observed chaotic behavior in the primary visual cortex of cats under periodic stimulation [1]. Similarly, CCNN exhibits periodic signals under constant stimulation and chaotic signals under periodic stimulation, adhering to this biological constraint. [1] Siegel, R. M. Non-linear dynamical system theory and primary visual cortical processing. 
Physica D: Nonlinear Phenomena, 42(1-3):385–395, 1990. **A4:** Thank you for your reminder. The testing has been conducted on a local workstation, and the experimental results demonstrate the model's potential for real-time applications. Future work will focus on further validation in real-world scenarios. **A5:** Hyperparameters include batch size 16, cross-entropy loss, Adam (lr=1e-4), 40% Dropout before FC layers, and early stopping. A smaller batch size aids generalization, cross-entropy improves accuracy, Adam ensures stable convergence, and Dropout with early stopping prevents overfitting. The appendix and open-source code include the parameter settings, facilitating reviewers and readers in reproducing and improving this work. **A6:** The model was tested on a workstation with an Intel Core i9 CPU, NVIDIA RTX 4060 GPU (8GB VRAM), and 16GB RAM. Training on a small dataset (N-Caltech101/N-CARS, ~50,000 samples) took 12-18 minutes, while a large dataset (N-MNIST/ASL-DVS, >200,000 samples) took 30-60 minutes. To optimize for resource-constrained environments, techniques like pruning, quantization, knowledge distillation, lightweight architectures, and mixed-precision training can be used to reduce computational demands and improve efficiency. **A7:** The chaotic dynamic framework proposed in this paper processes event signals to obtain a general event representation, which can be extended to other tasks. We also plan to apply this event representation method to more event-based tasks in future work. **A8:** Parameter count affects memory and training time but not inference speed. PointNet++ runs slower due to costly neighborhood queries, while dense CNNs benefit from optimized parallelism. Our method (21.9M params) achieves 2.1ms inference on an RTX4060 via a regularized convolutional design (MACs = 3.7G, 7.5% lower than PointNet++) and PyTorch + TensorRT optimizations.
**A9:** We propose a hybrid CCNN-SNN framework that replaces the ResNet-34 classifier with SNNs to reduce computational energy consumption. By integrating the STDP mechanism, this architecture achieves adaptive learning capabilities while enhancing biological plausibility and interpretability. Future investigations will prioritize systematic evaluation of its low-power computing performance and neuro-inspired operational principles. **A10:** We acknowledge EvT's strengths in global spatiotemporal modeling but exclude it from comparisons due to its computational inefficiency with high-resolution event data. Our current focus on efficiency-generalization trade-offs motivates the proposed CCNN-Transformer hybrid architecture for enhanced event representation in complex scenarios. **A11:** We have not yet conducted systematic small-sample learning experiments, but CCNN’s chaotic dynamics allow it to maintain good generalization with limited data. Future research will include small-sample experiments and explore combining CCNN with meta-learning to enhance adaptability in low-data scenarios.
Summary: The methods combining event cameras and deep learning mainly involve integrating traditional deep learning techniques with the high temporal resolution and low latency characteristics of event cameras, aiming to process the event stream data. However, the limitation of existing methods for event cameras is their heavy reliance on data structures, which restricts the stability and generalization ability of the models. These models may not adapt well to different tasks or scenarios, leading to unstable performance in real-world applications. This paper proposes a chaotic dynamics signal processing framework inspired by the dorsal visual pathway of the brain. It utilizes the Continuous Coupled Neural Network (CCNN) to encode the event stream, encoding polarity-changing event sequences as chaotic signals. Continuous wavelet transforms are then used to analyze the dynamic states of CCNN neurons and establish high-order mappings of the event stream. ## update after rebuttal The authors' rebuttal has resolved my concerns. I think my previous score is high enough, and I will keep my rating. Claims And Evidence: The claims made in the submission are well-supported by clear and convincing evidence. The experimental results are comprehensive, with appropriate comparisons to existing methods, and the statistical analyses are thorough. The theoretical justifications are sound, and the cited literature is relevant and up-to-date. Overall, the evidence provided strongly supports the paper’s conclusions. Methods And Evaluation Criteria: The proposed methods are well-designed and appropriate for the problem at hand. The paper provides a clear explanation of the methodology, with sufficient theoretical justifications and algorithmic details. The evaluation criteria are well-chosen, using relevant benchmark datasets and standard performance metrics.
Additionally, the experiments include comprehensive comparisons with state-of-the-art methods, which further validate the effectiveness of the proposed approach. Theoretical Claims: The paper proposes a generalizable event representation, validated across multiple datasets. Comparative experiments with state-of-the-art methods show competitive performance, achieving the highest accuracy on certain datasets. The experimental results support the proposed theoretical claims. Experimental Designs Or Analyses: The experimental design is well-structured, covering multiple datasets and providing a comprehensive comparison with state-of-the-art methods from recent years. Additionally, the paper includes a complexity analysis of the model and IoU experiments, further validating the feasibility of the proposed approach. The results are clearly presented, and the conclusions are well-supported by the experimental findings. Supplementary Material: Yes, I reviewed the supplementary material, and I appreciate that the authors have open-sourced their code. This significantly enhances the reproducibility and credibility of the paper, allowing for further validation and potential extensions of the proposed method. Relation To Broader Scientific Literature: The paper provides a comprehensive discussion of related work, thoroughly analyzing the principles, advantages, and limitations of frame-based event representations, contrast maximization-based event representations, and end-to-end network-based event representations. Inspired by the dorsal visual pathway in the primary visual cortex, the study introduces CCNN to encode event signals, employs CWT to analyze the dynamic properties of neurons, and finally utilizes LPF to extract information about moving objects. The comparison with existing methods is extensive, effectively highlighting the proposed approach’s generality and robustness. 
Essential References Not Discussed: The paper provides a thorough discussion of prior work, covering key contributions in the field. It appropriately cites and compares relevant studies on event representation methods, ensuring a comprehensive contextual understanding. No essential references appear to be missing. Other Strengths And Weaknesses: Strength The advantages of this proposed chaotic dynamics signal processing framework include improved stability and generalization, dynamic state analysis, high-order mapping capabilities, improved performance in real-world applications. The structure is well-organized and logical, with the design principle clearly and appropriately explained. Weakness The author did not explain some of the parameters in the brain-inspired model, especially certain setting parameters in the CCNN, which play an important role in understanding the brain's visual mechanisms. Other Comments Or Suggestions: In Figure 4, the x-axis labels for subfigures (a), (b), and (c) should be corrected to ‘Iterations’. Additionally, in Table 3, the value in the second row, fifth column should be changed from ‘70000’ to ‘100000’. Questions For Authors: 1. Many parameters in the CCNN model are not explained, such as Y(k), VE, exp(-af), and exp(-ae) in Equation (4). 2. What is the relationship between the unexplained parameters in Equation (4)? Please provide relevant explanations. 3. In Equation (7), how should the value of parameter K be set? 4. It seems that E(0) and VE determine the period of the output signal. What effect do the settings of these parameters have on the results? 5. In Equation (8), why is E(K+1) = E(K)? What is the significance of this setting? 6. In Equation (9), why is VE > 0 and αe > 0? What is the impact of this on the calculation results? 7. In Equation (11), what is the rationale for choosing the Gaussian function? 8. In Fig.
6(a), what is the relationship between the heatmap corresponding to the chaotic sequence and the final computation results of the CCNN? 9. The author has already presented the CCNN; why is there further research on the CWT? What is the purpose of this? 10. What is the relationship between the Low-pass Filter and the brain-inspired mechanism of CCNN? Why is it necessary to conduct relevant experiments? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude for your in-depth review of our paper and your valuable feedback. We greatly appreciate your recognition of the proposed method and experimental results, and we also thank you for raising some important questions that will help us further improve the quality of the paper. In response to your comments, we have made several revisions and additions to the manuscript, and we provide detailed answers to the concerns you raised below. We believe that, with your feedback, the paper will be further enhanced. **W1:** The author did not explain some of the parameters in the brain-inspired model, especially certain setting parameters in the CCNN, which play an important role in understanding the brain's visual mechanisms. \ **A1:** The unconnected CCNN comprises modulated input $U(k)$, continuous output $Y(k)$, and dynamic threshold $E(k)$. The terms $e^{-\alpha_f}$ and $e^{-\alpha_e}$ denote exponential decay factors that record the previous input states. $V_E$ is the weight factor for adjusting the neuronal potential. When $U(k)>E(k)$, the output is $Y(k) = \frac{1}{1+e^{-(U(k)-E(k))}}$, indicating excitation. After stimulation, $E(k)$ increases, requiring a stronger stimulus for the next output, mimicking the neuronal refractory period. Siegel observed chaotic behavior in the primary visual cortex of cats under periodic stimulation [1]. Similarly, CCNN exhibits periodic signals under constant stimulation and chaotic signals under periodic stimulation, adhering to this biological constraint. [1] Siegel, R. M. Non-linear dynamical system theory and primary visual cortical processing. Physica D: Nonlinear Phenomena, 42(1-3):385–395, 1990. **C1:** In Figure 4, the x-axis labels for subfigures (a), (b), and (c) should be corrected to ‘Iterations’. Additionally, in Table 3, the value in the second row, fifth column should be changed from ‘70000’ to ‘100000’.
\ **A1:** Thank you for your careful review. We have revised the x-axis labels in subfigures (a), (b), and (c) of Figure 4 as suggested and corrected the value in the second row, fifth column of Table 3 to ensure accuracy. These changes have been incorporated into the revised manuscript. **A1:** Thank you for your reminder. In Equation (4), $U(k)$ represents the modulation input, $Y(k)$ is the continuous output, and $E(k)$ is the dynamic threshold. The terms $e^{-\alpha_f}$ and $e^{-\alpha_e}$ denote exponential decay factors that record the previous input states. $V_E$ is the weight factor for adjusting the neuronal potential. **A2:** When the modulation input $U(k)$ exceeds the dynamic threshold $E(k)$, the output $Y(k)$ is given by $\frac{1}{1+e^{-(U(k)-E(k))}}$, which corresponds to the neuron generating an excitatory potential in response to sufficient stimulation. Upon the next stimulus, the dynamic threshold increases, and the model requires a stronger input to produce an output, mimicking the refractory period of biological neuronal cell membranes. **A3:** In Equation (7), $k$ is a parameter that controls the input. When the event signal input follows the periodic sequence {0,1,0,1,$\cdots$,0,1}, its equivalent formula is $\sin(\frac{k\cdot \pi}{2})$, where $k$={0,1,2,$\cdots$, n}. **A4:** Both $E(0)$ and $V_E$ can affect the period of the output signal. However, the setting of these parameters does not impact the results, as long as the CCNN output is a periodic signal when the input polarity of the event signal remains constant. **A5:** Under the stimulation of periodic signals, the CCNN exhibits complex chaotic dynamic characteristics. This paper discusses its equilibrium state, where a balance point exists in the output, such that $E(k+1)=E(k)$. **A6:** This section discusses the equilibrium point of the CCNN neuron.
When $V_E >0$, $\alpha_e >0$, and $4\frac{V_E}{1-e^{-\alpha_e}}>0$, with $\Delta>0$, the equation has a solution, and an equilibrium point exists. This discussion does not affect the computational results. **A7:** Gaussian wavelets are widely used in time-frequency analysis due to their smoothness and good time-frequency locality. They are well-suited for analyzing periodic and chaotic signals generated by CCNN neurons. **A8:** The heatmap of the output sequence is a visualization of the sequence after CWT waveform analysis, showing the value distribution of the real part of the coefficient matrix. The sum of the real part of the coefficient matrix for chaotic sequences is positive, while for periodic sequences, the sum is negative. A low-pass filter is then applied to extract the coordinate points of the periodic sequence. **A9:** The Low-pass Filter is a module proposed in this paper's chaotic dynamic framework and is not directly related to the brain-like mechanism of the CCNN. It is used to extract the coordinate points corresponding to the constant polarity event sequences, enabling the detection of moving objects.
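The sign argument in the equilibrium discussion can be checked numerically: the equilibrium equation $E^2-(U-2)E-\frac{V_E}{1-e^{-\alpha_e}}=0$ has two roots, and the derivative $J=2E-(U-2)$ evaluates to $\mp\sqrt{\Delta}$ at them. The sketch below uses $V_E=50$ and $\alpha_e=1$ as in the rebuttal's parameter settings; the value of $U$ is illustrative, since the analysis holds for any $U$.

```python
import math

# Numeric check of the equilibrium analysis: roots of
#   F(E) = E^2 - (U-2)E - V_E/(1 - exp(-alpha_e)) = 0
# and the sign of J(E) = 2E - (U-2) at each root.
U, V_E, alpha_e = 1.0, 50.0, 1.0     # U is an illustrative value
c = V_E / (1.0 - math.exp(-alpha_e))
disc = (U - 2.0) ** 2 + 4.0 * c      # discriminant; > 0 whenever V_E, alpha_e > 0
e_minus = ((U - 2.0) - math.sqrt(disc)) / 2.0
e_plus = ((U - 2.0) + math.sqrt(disc)) / 2.0

J = lambda e: 2.0 * e - (U - 2.0)
# J(e_minus) = -sqrt(disc) < 0 (stable root) and J(e_plus) = +sqrt(disc) > 0
# (unstable root), matching the sign argument in the rebuttal.
```

Since $J(E^\ast)=\pm\sqrt{\Delta}$ exactly, the smaller root always has $J<0$ and the larger always has $J>0$ whenever $\Delta>0$, for any admissible parameter choice.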
Summary: This paper proposes a chaotic dynamics framework inspired by the dorsal visual pathway of the brain for processing event camera signals. By encoding event streams into periodic or chaotic signals using Continuous Coupled Neural Networks (CCNN) and analyzing dynamic states via Continuous Wavelet Transform (CWT), the framework integrates traditional classification networks for object recognition. Experiments demonstrate state-of-the-art classification accuracy on datasets such as N-Caltech101 (84.3%) and N-CARS (99.9%), with high inference efficiency (472 samples/sec). The authors emphasize the framework’s generalization and stability advantages and have open-sourced the code. ## update after rebuttal The authors' rebuttal has resolved my concerns. I think my previous score is high enough, and I will keep my rating. Claims And Evidence: The paper's main contributions are well-supported by experimental evidence. It introduces a chaotic dynamics-based event representation using CCNN, validated through mathematical modeling and phase-space trajectory analysis. Event mapping with CWT effectively distinguishes stable and dynamic events, as demonstrated by heatmaps of transformed matrices. Experimental validation on multiple datasets shows superior classification accuracy compared to baseline methods, highlighting strong generalization and stability, further confirmed by IoU evaluation. While the results are convincing, further discussion on the applicability of CCNN’s chaotic properties in different data scenarios would be beneficial. Methods And Evaluation Criteria: The proposed method adopts appropriate evaluation criteria, including classification accuracy, IoU, and computational complexity (number of parameters, MACs, and inference time). The choice of datasets covers both static (N-MNIST) and dynamic (N-CARS) event classification tasks, ensuring a comprehensive evaluation.
One potential improvement would be to analyze the impact of different time window lengths on the temporal representation. Theoretical Claims: The paper’s theoretical foundation is based on the dynamical equations of CCNN, providing a mathematical proof of periodic and chaotic signal responses. The nonlinear analysis, including equilibrium point derivation and Taylor expansion, is reasonable. Experimental Designs Or Analyses: The experimental setup is well-structured, primarily evaluating classification accuracy and model complexity. A ResNet-34 network pre-trained on ImageNet is used for classification, with a clear training strategy. However, the study does not investigate the impact of different CCNN coupling strengths on the quality of event representations. Including such an analysis could provide deeper insights into the generalizability of the proposed method. Supplementary Material: The supplementary material provides open-source code along with detailed implementation details and parameter settings, ensuring the reproducibility of the experiments. This significantly enhances the transparency of the study, enabling other researchers to replicate the experimental results and further validate the effectiveness of the proposed method. Moreover, making the code publicly available contributes to the advancement of the field, facilitating future research to build upon and extend this work. Relation To Broader Scientific Literature: This work is closely related to event-based data representation methods (HATS, EST, TORE, TOKEN) and biologically inspired computing (Spiking Neural Networks). The paper provides a clear literature review of mathematical, deep-learning-based, and bio-inspired event processing approaches, highlighting their limitations, such as strong dependence on specific data structures. The proposed method introduces a novel perspective by incorporating chaotic dynamics modeling, which is rarely explored in event vision research. 
Essential References Not Discussed: The paper cites most relevant literature but could benefit from additional references on chaotic dynamics in computational neuroscience, such as Lorenz systems and Hindmarsh-Rose models for visual cortex modeling, as well as event camera applications in low-power embedded systems to explore the method’s practical deployment potential. Other Strengths And Weaknesses: Strengths: 1. The biologically inspired integration of dorsal stream mechanisms with chaotic dynamics presents a novel event representation framework with theoretical significance. 2. Outperforms existing methods significantly on multiple mainstream datasets while achieving high inference efficiency, demonstrating practical deployment potential. 3. Robustness is validated through cross-dataset experiments, particularly showing stability in dynamic event streams. 4. Provides mathematical modeling and visualization analyses of CCNN and CWT. Weaknesses: 1. The biological plausibility of CCNN’s connection to the dorsal stream is insufficiently supported, lacking explanations of how it mimics the spatiotemporal encoding mechanisms of the "where pathway." 2. No comparisons with bio-inspired models (e.g., spiking neural networks) to justify the unique advantages of dorsal stream inspiration. Other Comments Or Suggestions: The mathematical analysis section could benefit from further discussions on the stability and bifurcation behavior of nonlinear systems to strengthen the theoretical foundation. Questions For Authors: 1. How does the neuronal dynamics of CCNN specifically simulate dorsal stream functions (e.g., motion perception, spatial localization)? Are there neuroscientific experiments supporting this design? 2. In Equation (6), does the derived periodic parameter k directly correlate with physical properties of event streams (e.g., velocity, direction)? How is it leveraged to enhance classification performance? 3.
Why was the Gaussian wavelet selected as the CWT basis function? What advantages does it offer over alternatives (e.g., Morlet wavelet)? 4. The low-pass filter in Equation (14) sets negative values to 255 and positive values to 0. Could this lead to information loss? Is there a more refined thresholding strategy? 5. Table 3 shows improved IoU with increasing event density. Does this imply the framework relies on dense events? How is its performance optimized for sparse event streams? 6. In Table 2, the model’s parameter count (21.9M) exceeds EV-VGCNN (0.8M), yet inference is faster. Is this due to architectural design (e.g., parallelization)? Please elaborate. 7. In Fig. 8, why do the other methods not work well? Why does the author's method work better? 8. I have observed that the results of different methods vary greatly between datasets, as shown in Fig. 1. Can the authors explain what differences in the datasets lead to such results? What advantages of the proposed method can alleviate such differences? 9. Does the open-sourced code include CCNN training details? If not, how can reproducibility be ensured? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your time and effort in reviewing our paper. We are grateful for your constructive feedback and insightful questions, which have helped us refine our work and clarify its contributions. We acknowledge your concerns regarding the theoretical foundations, biological plausibility, and certain experimental analyses, and we appreciate the opportunity to further elaborate on these aspects. In this response, we address each of your comments in detail, providing clarifications, additional discussions, and further justifications where necessary. We hope that our responses effectively resolve your concerns and further demonstrate the significance and robustness of our proposed framework. **A1:** The CCNN simulates the encoding mechanism of primary visual cortex neurons. The proposed chaotic dynamics framework, consisting of three modules—CCNN, CWT, and LPF—mimics the dorsal stream's function in extracting dynamic object location information. **Experimental Evidence:** Siegel observed complex electrical signal fluctuations in the primary visual cortex neurons of cats under periodic signal stimulation, revealing the presence of chaotic behavior in the mammalian primary visual cortex [1]. Building on this, Liu further modified the dynamic threshold adjustment mechanism of PCNN and proposed CCNN [2]. This model exhibits periodic signals under constant stimulation and chaotic signals under periodic stimulation, aligning with findings from biological experiments. [1] Siegel, R. M. Non-linear dynamical system theory and primary visual cortical processing. Physica D: Nonlinear Phenomena, 42(1-3):385–395, 1990. [2] Liu, J., Lian, J., Sprott, J. C., Liu, Q., and Ma, Y. The butterfly effect in primary visual cortex. IEEE Transactions on Computers, 71(11):2803–2815, 2022. **A2:** The period $k$ has no physical relationship with the event stream (such as velocity or direction) and cannot be used to improve classification performance.
The derived period $k$ is used to demonstrate that the CCNN generates periodic signal outputs when subjected to constant stimuli. **A3:** The Gaussian wavelet is widely used in time-frequency analysis. Compared to the Morlet wavelet, it offers superior smoothness and better time-frequency localization, making it particularly suitable for analyzing periodic and chaotic signals generated by CCNN neurons. **A4:** This design does not lead to information loss. Setting negative values to 255 is intended to extract the coordinates of event signal points with constant polarity, thereby detecting moving objects. A more refined threshold design will be presented in future papers. **A5:** As event density increases, the proposed framework captures finer textures, improving IoU. While denser events enhance performance, the framework is not dependent on them. For sparse events, effective denoising and enhancement techniques may be needed to optimize IoU. **A6:** Our model uses ResNet-34 with 21.9M parameters, more than EV-VGCNN (0.8M), but parameter count mainly affects memory usage and training time, not inference speed. The faster inference is due to: (1) Efficient 3×3 convolutions and residual connections optimizing computation flow; (2) GPU-optimized deep learning frameworks (cuDNN) accelerating ResNet inference; (3) Convolutional optimizations like Winograd transformations reducing computational complexity. In contrast, EV-VGCNN's fully connected layers may limit parallelism, leading to slower inference. **A7:** The LIF method suppresses many event points due to an unreasonable threshold, leading to an incomplete event pattern. The Time Surface captures motion trajectories but introduces redundancy, affecting IoU detection. Event Count maps event accumulation to pixel values, causing inconsistencies that impact pattern integrity. 
This paper proposes a framework that converts events with constant and periodic polarity changes into periodic and chaotic signals via CCNN, then processes them using CWT and LPF. Event coordinates with constant polarity are set to 255 to extract valid event signals effectively. **A8:** Dataset size and category count significantly impact classification performance. N-MNIST, N-CARS, and ASL-DVS, with fewer categories and high intra-class similarity, enable effective feature learning and high accuracy. In contrast, N-Caltech101's limited samples, diverse categories, and large intra-class variation increase training difficulty and lower accuracy. This study introduces a chaotic dynamic framework inspired by the dorsal visual pathway, generating robust event representations. Combined with lightweight ResNet34, it achieves high accuracy across all four datasets. **A9:** The appendix and open-source code provide the full CCNN implementation, including network architecture and hyperparameters. This documentation enables readers to reproduce the results with high confidence.
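The thresholding behavior described in A4 and A8/A9 (the low-pass filter step of Equation (14): negative values mapped to 255, non-negative values to 0) can be sketched in a few lines of numpy. The coefficient sums below are hypothetical illustration values, not outputs of the actual CCNN/CWT pipeline:

```python
import numpy as np

# Hypothetical per-pixel sums of the real part of each pixel's CWT coefficient
# matrix. Per the rebuttal, periodic (constant-polarity) sequences yield
# negative sums and chaotic sequences positive ones.
coef_sum = np.array([[-3.2, 5.1],
                     [0.7, -1.4]])

# Low-pass-filter step as described: negative values become 255 (foreground,
# i.e. constant-polarity event coordinates), non-negative values become 0.
mask = np.where(coef_sum < 0, 255, 0).astype(np.uint8)

# Coordinates of constant-polarity events, i.e. candidate moving-object pixels.
coords = np.argwhere(mask == 255)
```

The binary map makes the pattern easy to feed to a downstream classifier; any refinement of the threshold (as promised in A4) would replace the simple `coef_sum < 0` test.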
Understanding the Unfairness in Network Quantization
Accept (poster)
Summary: This work unveils the potential risk that network quantization exacerbates unfairness in model accuracy among various groups. By theoretical analysis and empirical experiments with both Post-Training Quantization (PTQ) and Quantization-Aware Training (QAT), this work identifies several observations, including that the group White has a smaller performance drop and that PTQ better preserves fairness. The authors also verify several unfairness mitigation schemes, including geometric transformations and random erasing, and demonstrate that these data augmentation techniques could help mitigate the unfairness caused by quantization. Claims And Evidence: Most claims made in the submission are well supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem or application. Theoretical Claims: I have checked the correctness of the proofs for theoretical claims in the main text and did not find any issues. Experimental Designs Or Analyses: I have checked the soundness/validity of experimental designs. Some issues include: - The work broadly uses QAT and PTQ to represent the quantization methods used in the analysis. However, these are two major categories instead of concrete techniques. For QAT, I know from Section 3.1 that signed symmetric uniform weight-only quantization is used, but it is unclear which concrete settings the authors follow. I recommend adding references to specific methods, such as PACT/LSQ for QAT, or QDrop/BRECQ for PTQ. - In addition, this work only verifies the fairness problems under weight-only quantization settings. The generalization of the conclusion to weight-activation quantization remains unknown.
[1] PACT: Parameterized Clipping Activation for Quantized Neural Networks, arXiv 2018 [2] Learned Step Size Quantization, ICLR 2020 [3] QDrop: Randomly Dropping Quantization for Extremely Low-bit Post-Training Quantization, ICLR 2022 [4] BRECQ: Pushing the Limit of Post-Training Quantization by Block Reconstruction, ICLR 2021 Supplementary Material: Yes, I have checked all supplementary material, including the text and codes. It's good to see that all codes are provided to ensure reproducibility. Relation To Broader Scientific Literature: This is a brand new direction and research question. The key contributions are novel, while some fairness metrics and mitigation schemes follow previous work and are cited. Essential References Not Discussed: As many representative works in quantization and fairness are cited, I think the related works section in this paper is essential to understanding the key contributions of the paper. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: - The title on each page should be revised to be the title of this paper instead of the placeholder "Submission and Formatting Instructions for ICML 2025" - Some typos could be fixed (Table 1 under QAT int8 ResNet18 CIFAR-10 "01.3" $\rightarrow$ "1.3", line 317 "is" $\rightarrow$ "are", line 423 "adopted on for" $\rightarrow$ "adopted for") Questions For Authors: - If the data imbalance is to blame for the unfairness, would data-free quantization methods with balanced synthetic datasets solve the problems? - From the results shown in Table 1, fairness metrics variation across different models is more significant than in different quantization settings (especially on VGG19). Any ideas or discussions on this phenomenon? Code Of Conduct: Affirmed. Overall Recommendation: 2
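The signed symmetric uniform weight-only quantization the review refers to can be made concrete with a minimal numpy sketch. This uses a simplified max-abs scaling rule; concrete methods such as LSQ or BRECQ choose the scale and grid differently, so treat this as an illustrative assumption rather than the paper's exact scheme:

```python
import numpy as np

def quantize_symmetric(w, bits=8):
    """Signed symmetric uniform quantization of a weight tensor.

    The scale is set by the maximum absolute weight, and values are rounded
    onto a signed integer grid in [-(2^{b-1}-1), 2^{b-1}-1].
    """
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale, scale  # dequantized weights and the step size

w = np.array([-1.0, -0.25, 0.1, 0.5, 1.27])
w_q, s = quantize_symmetric(w, bits=8)
err = np.linalg.norm(w - w_q)  # the ||Δw|| appearing in the paper's bounds
```

Lowering `bits` enlarges the step size and hence `err`, which is exactly the quantity the authors' Theorems 4.1 and 5.1 feed into their unfairness bounds.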
Rebuttal 1: Rebuttal: Thank you for kindly evaluating that "the key contributions are novel." We also sincerely appreciate your constructive suggestions, and believe that the additional experiments and explanations can address your concerns. The new experimental results are available at https://anonymous.4open.science/api/repo/Rebuttal-30C1/file/Reviewer%20QYq3.pdf?v=f7137c6b. - Typos: Thank you for your thorough review and for pointing out these formatting and typographical issues. We have made another careful polish to the paper. - Issue 1: To enhance clarity, we specify that our work adopts the quantization method described in [1], which serves as a standard approach in both PyTorch and TensorFlow. This method is widely used in practice and aligns with the default settings in major deep learning frameworks. As suggested, we have also added references to specific quantization methods, including PACT/LSQ for QAT and Q-drop/BRECQ for PTQ, in the Related Work section to improve clarity. Moreover, our theoretical analysis follows general quantization principles and is applicable to different quantization methods (see Table 1 in the link). Specifically, in the final step of the derivation in Theorems 4.1 and 5.1, we only need to replace $\Vert \Delta w \Vert$ with the corresponding quantization error of the specific method—for instance, when applying BRECQ[2], one would replace $\Vert \Delta w \Vert$ with the Frobenius norm of the difference between the full-precision weights and the quantized weights. - Issue 2: We would like to clarify that our theoretical analysis for weight-only quantization (WOQ) can be directly generalized to weight-activation quantization (WAQ). This generalization is supported by the findings in [3], specifically Theorem 1, which shows that the impact of activation quantization on the target loss can be transformed into a similar effect as weight quantization. 
Based on this, we have derived the theoretical bounds for WAQ, which reveal that the upper bound of the excessive loss $G(a)$ introduces two additional terms compared to WOQ: $\sqrt{n}\frac{\tilde{w}^*_{max}s_{max}}{2+s_{max}}\cdot\Vert g_{w^*}\Vert+\frac{1}{2}n\left(\frac{\tilde{w}^*_{max}s_{max}}{2+s_{max}}\right)^2\cdot \text{Tr}(H_{w^*})$. The coefficient $\frac{\tilde{w}^*_{max}s_{max}}{2+s_{max}}$ has a stronger impact on fairness compared to the WOQ case. To further validate this theoretical analysis, we have conducted experiments with WAQ and the results are consistent with our theoretical findings (see Table 2 in the link). We have included these theoretical analyses, proofs, and the corresponding experimental results and discussions in the revised paper. - Q1: Data-free quantization methods with balanced synthetic datasets may help mitigate unfairness caused by data imbalance. As a preliminary exploration, we have conducted experiments with GDFQ[4]. Specifically, for a task involving $n$ classes, the generator uniformly samples labels $y∈${$0, 1, \dots, n−1$} during training to ensure a balanced class distribution in the synthetic data. The results show that this approach can mitigate unfairness to some extent (see Table 3 in the link). However, it is important to note that data-free quantization is primarily a technique designed to enable quantization in the absence of training data, rather than a direct solution to address data imbalance. In contrast, data augmentation explicitly targets class imbalance. - Q2: We acknowledge that the variation in fairness metrics across different models suggests that, in addition to the two key factors identified in our theoretical analysis, fairness in quantized models may also be influenced by other potential factors, including model architecture. To further study this, we conducted additional ablation experiments to explore more factors affecting fairness in quantized models. 
Our results indicate that model architecture, optimization algorithm, and hardware selection all play a role, with the VGG architecture, Mini-batch SGD, and Ada L4 GPU particularly exacerbating unfairness (see Tables 4-6 in the link). In particular, under int4, VGG exhibits more severe unfairness than ResNet due to its architectural characteristics. Lacking residual connections, VGG is more prone to gradient vanishing and exploding under quantization, amplifying subgroup gaps. What's more, VGG’s uniform layer structure accumulates quantization errors more significantly, whereas ResNet’s residual connections help mitigate these effects by preserving information flow. Your insights are valuable to us, and we sincerely appreciate your reconsideration of our paper. [1] Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference, CVPR 2018. [2] BRECQ: Pushing the Limit of Post-Training Quantization by Block Reconstruction, ICLR 2021. [3] QDrop: Randomly Dropping Quantization for Extremely Low-bit Post-Training Quantization, ICLR 2022. [4] Generative Low-Bitwidth Data Free Quantization, ECCV 2020.
Summary: They use data augmentation to mitigate the unfairness caused by quantization in models trained on imbalanced datasets. Claims And Evidence: Convincing. Methods And Evaluation Criteria: They make sense. Theoretical Claims: Correct. Experimental Designs Or Analyses: Sound. Supplementary Material: Yes. Relation To Broader Scientific Literature: Imbalanced data, model compression. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The content of this article is very interesting; however, I would suggest that the title be more strictly limited, perhaps to the field of face recognition, racial differences, etc. In addition, since the article addresses the problem through data augmentation, the title or abstract needs to reflect the limitations of the datasets used. Other Comments Or Suggestions: My concerns are that 1. Takeaway 2 is obvious, such as "Class imbalance is to blame for unfairness" in line 275, and might not contribute significantly. 2. Takeaway 3 "Although quantization-aware training always provides a better overall performance guarantee, deterioration in fairness induced by imbalanced datasets towards protected attributes is much more severe" is interesting enough, but the problems that were found do not seem to be well addressed. The case in Table 3 when $n$=20 seems similar to the baseline in Table 1. Questions For Authors: 3. Tables 2-3 lack the baseline model (Table 1) for comparison, making them hard to read directly. 4. Table 3 presents a good ablation of the problems that were found, but they do not seem to be well addressed; the results are not very stable with different $n$, which should be further discussed. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Many thanks for kindly evaluating that "the content of this article is very interesting." We also sincerely appreciate your valuable feedback, and believe that the additional experiments and explanations can address your concerns. The additional experimental results are available at https://anonymous.4open.science/api/repo/Rebuttal-30C1/file/Reviewer%204FiB.pdf?v=27c3e413. - Q1: We would like to clarify that our work goes beyond merely stating that "class imbalance causes unfairness" by offering detailed theoretical insights and empirical evidence to deepen the understanding of how class imbalance interacts with quantized model fairness, ultimately guiding the development of effective mitigation strategies. To enhance the clarity and impact of our conclusions, we have revised the manuscript to highlight the novel aspects of our analysis, such as the specific mechanisms by which quantization exacerbates unfairness in imbalanced datasets and the effectiveness of proposed mitigation strategies. To further address your concerns, we have included a new ablation study section after Section 5 to explore potential factors influencing fairness in quantized models. The results indicate that model architecture, optimization algorithm, and hardware selection all have potential impacts on fairness (see Tables 1-3 in the link). Specifically, the VGG architecture, Mini-batch SGD, and Ada L4 GPU tend to exacerbate unfairness in quantized models. - Q2: We believe there might be a misunderstanding concerning the purpose of the ablation study in Table 3. The primary aim of this ablation is to investigate how the choice of $n$ influences the mitigation of unfairness. In our experiments, the strategies with $n=20$ and $n=3$ proved ineffective, resulting in outcomes close to the baseline reported in Table 1.
In contrast, the strategy with $n=10$ closely approaches the optimal performance of the proposed method (refer to the results labeled 'RE' in Table 2), effectively mitigating unfairness while narrowing the fairness gap between PTQ and QAT. This sensitivity to the choice of $n$ can be attributed to the impact of mask size on the quality and diversity of augmented data. Larger mask sizes, such as $n=20$, may excessively obscure critical image features, impairing the model's ability to learn. Conversely, smaller mask sizes, like $n=3$, might not provide sufficient diversity to capture the inherent variability of the data. Therefore, an intermediate value like $n=10$ strikes an optimal balance, enhancing both the quality and diversity of augmented data, which leads to more robust and effective fairness improvements. To further substantiate our findings, we have conducted a finer-grained sensitivity analysis on mask size $n$ over the range {$3, 5, 8, 10, 12, 15, 20, 30, 40$} (see Table 4 in the link). We have revised the manuscript to provide a detailed discussion of these results, aiming to clarify any potential misunderstandings. - Q3: Thank you for your insightful feedback regarding Tables 2 and 3. To address this, we have revised Tables 2 and 3 in our manuscript to include the baseline model results, facilitating clearer and more direct comparisons. - Q4: Many thanks for your positive feedback on the ablation study presented in Table 3. We would like to clarify that the ablation study in Table 3 is not designed to identify the optimal $n$, but rather to validate the superiority of a dynamic, randomized approach over static configurations.
Specifically, this ablation demonstrates that the random selection strategy for the mask size $n$—defined within the range {$3, 4, \dots, 20$}, as derived from the optimal configuration in [1]—outperforms fixed choices for $n$. Since the selection strategy for the mask size $n$ directly impacts the quality and diversity of augmented data, these findings emphasize that, beyond the amount of augmented data, the quality of augmentation also plays a crucial role in mitigating unfairness. Our results indicate that the random selection strategy not only increases the volume of augmented data but also enhances its diversity, leading to more robust and effective fairness improvements. In contrast, fixed choices for $n$ tend to fail to capture the inherent variability in the data, resulting in less stable and suboptimal fairness improvements. To further support our conclusions, we have conducted a finer-grained analysis on fixed choices of mask size $n$ over the range {$3, 5, 8, 10, 12, 15, 20, 30, 40$} and compared it to the random selection strategy over the range {$3, 4, \dots, 20$} (see Table 4 in the link). The results show that the random selection strategy is indeed the most effective in mitigating unfairness. We have revised the manuscript to clarify this point more explicitly and have included the additional experiments in the appendix. We truly value your feedback and are deeply grateful for your continued support. [1] Random Erasing Data Augmentation, AAAI 2020.
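The random mask-size strategy discussed in this rebuttal can be sketched as follows. This is a simplified stand-in for Random Erasing (Zhong et al., AAAI 2020), using square masks and a fixed fill value only; the function and parameter names are illustrative, not the authors' code:

```python
import random
import numpy as np

def random_erase(img, n_range=(3, 20), fill=0, rng=random):
    """Erase a randomly placed n x n patch, with n drawn uniformly per call.

    Drawing n from {3, ..., 20} on each call is the "dynamic" strategy the
    rebuttal argues for, as opposed to a fixed mask size.
    """
    h, w = img.shape[:2]
    n = rng.randint(n_range[0], n_range[1])   # dynamic mask size
    y = rng.randint(0, h - n)                 # random top-left corner
    x = rng.randint(0, w - n)
    out = img.copy()
    out[y:y + n, x:x + n] = fill
    return out, n

img = np.ones((32, 32), dtype=np.float32)
aug, n = random_erase(img, rng=random.Random(0))
```

Each augmented sample then sees a differently sized occlusion, which is the diversity the rebuttal credits for the more stable fairness improvements.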
Summary: Network quantization, a widely studied model compression method, effectively converts floating-point models to fixed-point models with negligible accuracy loss. Despite its success in reducing model size, it can exacerbate fairness issues across different dataset groups. This paper examines Post-Training Quantization (PTQ) and Quantization-Aware Training (QAT), identifying two primary factors causing these fairness issues through theoretical analysis and empirical verification. The study reveals that while QAT maintains higher accuracy at lower bit-widths, it performs worse than PTQ in terms of fairness. Additionally, simple data augmentation methods can mitigate these fairness issues, especially in cases of class imbalance. Experiments on imbalanced datasets (UTK-Face, FER2013) and balanced datasets (CIFAR-10, MNIST) using ResNet and VGG models validate these findings. Claims And Evidence: The paper supports its claims through both experimental and theoretical evidence. Both the proofs and experiments effectively fulfill the role of supporting the claims made in the paper. However, some key experiments are included in the appendix. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem. Theoretical Claims: I checked all proofs. The proofs are simple and clear. Experimental Designs Or Analyses: I think the experiments in the appendix should be in the main body of the paper. Supplementary Material: I read the whole proof and the experiments in the appendix. Relation To Broader Scientific Literature: This is a new topic. Essential References Not Discussed: No. Other Strengths And Weaknesses: The paper offers innovative perspectives and analysis on quantization, particularly addressing the impact of different quantization methods on various classes, which is a novel topic. The detailed analysis and experiments provided for this issue are very convincing. However, the paper has several shortcomings: 1.
It lacks further analysis of different methods for Post-Training Quantization (PTQ) and Quantization-Aware Training (QAT). 2. I believe that the experiments in the appendix should be included in the main body of the paper, while some of the current main-body experiments should be moved to the appendix. 3. The conclusion of the paper should focus more on providing theorems rather than analyzing data distribution, as datasets can vary significantly in their distributions. Judgments based on whether a single dataset is balanced or not are not sufficient. Other Comments Or Suggestions: The content from the quantization to Theorems 4.1 and 5.1 in this paper is excellent, elaborating on the mathematical principles behind the unfairness caused by quantization. However, I find the subsequent analysis of the relationship between the numerical characteristics of datasets and quantization inappropriate. Therefore, I suggest that the authors analyze the numerical characteristics of datasets in relation to quantization case by case on different datasets. In the experiments, move the experiments from the appendix to the main body, focusing mainly on the validity of Theorems 4.1 and 5.1. Questions For Authors: 1. Even for the same Post-Training Quantization (PTQ), there are currently many quantization methods. It is unclear whether different quantization methods would affect the results of this paper. If possible, please provide proofs and experiments to demonstrate this. 2. For model quantization, besides the quantization of parameters, there is also the quantization of activations. It would be helpful to provide an analysis of how the quantization of activations impacts the results. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you sincerely for commenting that "the detailed analysis and experiments provided for this issue are very convincing." We also truly appreciate your constructive suggestions. We have conducted additional experiments and provided further explanations to address your concerns. The additional experimental results are available at https://anonymous.4open.science/api/repo/Rebuttal-30C1/file/Reviewer%20c7bP.pdf?v=e20ea28c. - W1&Q1: Our theoretical analysis follows general PTQ and QAT principles and is designed to be applicable to various quantization methods. Specifically, in the final step of the derivation in Theorems 4.1 and 5.1, we only need to replace $\Vert \Delta w \Vert$ with the corresponding quantization error of the specific method—for instance, when applying BRECQ[1], one would replace $\Vert \Delta w \Vert$ with the Frobenius norm of the difference between the full-precision weights and the quantized weights. These theorems also indicate that the larger the quantization error of a method, the more severe the resulting unfairness. To further address your concerns, we have conducted additional experiments utilizing BRECQ for PTQ and LSQ[2] for QAT in the appendix of the revised manuscript, and the results are consistent with our original findings (see Table 1 in the link). - W2&W3&Suggestions: Thanks for your constructive suggestion. Actually, before submitting our manuscript, we faced a dilemma regarding the arrangement of the experiments between the main body and the appendix. In our previous submissions, we primarily focused on investigating the relationship between numerical characteristics of datasets (i.e., gradient norms and Hessian traces) and the fairness of quantized models through case-by-case analyses across different datasets, as you suggested. Moreover, we proposed mitigation strategies by introducing a regularization term in the loss function to penalize differences in gradient norms and Hessian traces across classes. 
However, several previous reviewers questioned the practicality of the extensive discussion on Theorems 4.1 and 5.1. They argued that focusing on the numerical characteristics of datasets lacked intuitive and in-depth insights. Specifically, they pointed out that it was unclear what factors directly influence the gradient norms and Hessian traces, and they suggested that more theoretical and experimental investigation was needed. Additionally, they noted that the mitigation strategies based on gradient norms and Hessian traces, although effective, were impractical for real-world applications due to the high cost. To address these concerns, we added further theoretical and experimental analysis of gradient norms and Hessian traces in the main body. We found that class imbalance significantly impacts gradient norms and Hessian traces. Consequently, we replaced the unfairness mitigation strategy with a simpler and more efficient data augmentation approach. These changes from our previous submission led to the current paper. We greatly appreciate your suggestion to reconsider the arrangement of the content. In the revised paper, to further strengthen the validity of Theorems 4.1 and 5.1, we have moved the experimental results from Appendix C.4, which demonstrate the validity of these theorems on different datasets, to the main body. Meanwhile, we retained Lemma 4.2, Corollary 4.3, and Lemma 4.4, but relocated their theoretical analysis and experimental verification to the appendix for better clarity and focus. - Q2: We would like to clarify that our theoretical analysis for weight-only quantization (WOQ) can be directly extended to weight-activation quantization (WAQ). This extension is supported by the findings in [3], specifically Theorem 1, which shows that the impact of activation quantization on the target loss can be transformed into a similar effect as weight quantization. 
Based on this, we have derived the theoretical bounds for WAQ, which reveal that the upper bound of the excessive loss $G(a)$ introduces two additional terms compared to WOQ: $\sqrt{n}\frac{\tilde{w}^*_{max}s_{max}}{2+s_{max}}\cdot\Vert g_{w^*}\Vert+\frac{1}{2}n\left(\frac{\tilde{w}^*_{max}s_{max}}{2+s_{max}}\right)^2\cdot \text{Tr}(H_{w^*})$. The coefficient $\frac{\tilde{w}^*_{max}s_{max}}{2+s_{max}}$ has a stronger impact on fairness compared to the WOQ case. To further validate this theoretical analysis, we conducted experiments with WAQ, and the results are consistent with our theoretical findings (see Table 2 in the link). We have included these theoretical analyses, proofs, and the corresponding experimental results and discussions in the revised paper. Your insights are valuable to us, and we greatly appreciate your continued support. [1] BRECQ: Pushing the Limit of Post-Training Quantization by Block Reconstruction, ICLR 2021. [2] Learned Step Size Quantization, ICLR 2020. [3] QDrop: Randomly Dropping Quantization for Extremely Low-bit Post-Training Quantization, ICLR 2022.
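As an illustrative aside to the PTQ discussion in this rebuttal: the quantity that replaces $\Vert \Delta w \Vert$ when applying BRECQ, the Frobenius norm of the difference between full-precision and quantized weights, can be sketched in a few lines. The uniform symmetric quantizer and the toy weight matrix below are assumptions for illustration only, not the exact setup of the paper or of BRECQ:

```python
import math

def uniform_quantize(w, n_bits=4):
    """Symmetric uniform quantization of a weight matrix (list of rows)."""
    w_max = max(abs(x) for row in w for x in row)
    n_levels = 2 ** (n_bits - 1) - 1          # e.g. 7 positive levels for 4 bits
    scale = w_max / n_levels if w_max > 0 else 1.0
    return [[round(x / scale) * scale for x in row] for row in w]

def frobenius_error(w, w_q):
    """|| w - w_q ||_F : the quantization-error norm entering Theorems 4.1 / 5.1."""
    return math.sqrt(sum((a - b) ** 2
                         for row, row_q in zip(w, w_q)
                         for a, b in zip(row, row_q)))

# Toy full-precision weights (illustrative values, not from the paper).
w = [[0.81, -0.42, 0.13], [-0.95, 0.27, 0.66]]
err4 = frobenius_error(w, uniform_quantize(w, n_bits=4))
err8 = frobenius_error(w, uniform_quantize(w, n_bits=8))
# Coarser quantization yields a larger error norm.
assert err4 > err8
```

Consistent with the rebuttal's reading of Theorems 4.1 and 5.1, the coarser bit-width yields the larger error norm, and a larger quantization error is what the theorems associate with more severe unfairness.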
Summary: The paper investigates the fairness implications of network quantization, focusing on two widely used algorithms: Post-Training Quantization (PTQ) and Quantization-Aware Training (QAT). The authors identify two key factors that exacerbate unfairness in model accuracy across different groups: the gradient norm of the group loss function and the trace of the group loss function's Hessian matrix. They theoretically analyze and empirically validate these factors, showing that class imbalance leads to distinct values of these factors among different attribute classes, which in turn exacerbates unfairness. The paper also compares PTQ and QAT, finding that QAT, while generally preserving higher accuracy at lower bit-widths, exacerbates unfairness more severely than PTQ. To mitigate this unfairness, the authors propose and evaluate simple data augmentation techniques, demonstrating their effectiveness in reducing disparate impacts of quantization. Claims And Evidence: The claims made in the paper are generally well-supported by both theoretical analysis and empirical evidence. The authors provide a detailed theoretical framework to explain how gradient norms and Hessian traces contribute to unfairness in quantized models. They also conduct extensive experiments on multiple datasets (UTK-Face, FER2013, CIFAR-10, and MNIST) and models (ResNet and VGG) to validate their findings. The empirical results align well with the theoretical predictions, showing that groups with smaller datasets experience larger gradient norms and Hessian traces, leading to greater accuracy degradation after quantization. However, one potential issue is the reliance on synthetic imbalanced datasets (Imbalanced-CIFAR-10 and Imbalanced-MNIST) to validate the impact of class imbalance. While these datasets help illustrate the theoretical points, their artificial nature may limit the generalizability of the findings to real-world scenarios. 
The authors could strengthen their claims by including more naturally imbalanced datasets (e.g., iNaturalist). Methods And Evaluation Criteria: The methods proposed in the paper are appropriate for the problem at hand. The authors use standard quantization techniques (PTQ and QAT) and evaluate their impact on fairness using well-established fairness metrics. The choice of datasets (UTK-Face, FER2013, CIFAR-10, and MNIST) is reasonable, as they cover both imbalanced and balanced scenarios, allowing the authors to demonstrate the impact of class imbalance on fairness. The evaluation criteria, particularly the fairness metrics, are well-defined and appropriate for measuring the disparate impacts of quantization across different groups. The authors also provide a clear explanation of how these metrics are derived and why they are suitable for their analysis. Theoretical Claims: The theoretical claims in the paper are well-formulated and supported by rigorous proofs. The authors provide detailed derivations for the upper bounds of excessive loss in both PTQ and QAT, and they clearly explain how these bounds relate to the gradient norms and Hessian traces. The proofs are presented in the appendix and appear to be correct, though I did not verify every step in detail. One minor point is that the authors could provide more intuition or discussion around the theoretical results, particularly for readers who may not be familiar with the mathematical details. For example, explaining why the interaction terms in QAT lead to greater unfairness compared to PTQ could help make the theoretical insights more accessible. Experimental Designs Or Analyses: The experimental design is sound and well-executed. The authors conduct experiments on multiple datasets and models, covering both imbalanced and balanced scenarios. They also perform ablation studies to validate the effectiveness of data augmentation techniques in mitigating unfairness.
The results are presented clearly, with appropriate visualizations (e.g., accuracy plots, fairness metric tables) to support the findings. One potential improvement would be to include more detailed ablation studies on the data augmentation techniques. For example, the authors could explore different augmentation strategies or hyperparameters to see how they affect the fairness of the quantized models. Additionally, the authors could provide more insights into why certain augmentation techniques (e.g., geometric transformations vs. random erasing) perform better in specific scenarios. Supplementary Material: Yes, all of them Relation To Broader Scientific Literature: N/A Essential References Not Discussed: None Other Strengths And Weaknesses: **Strengths:** 1. The paper addresses an important and timely issue in machine learning, namely the fairness implications of model compression techniques like quantization. 2. The theoretical analysis is rigorous and provides clear insights into the factors that contribute to unfairness in quantized models. 3. The empirical evaluation is thorough, covering multiple datasets, models, and quantization methods. 4. The proposed data augmentation techniques are simple yet effective, and the authors provide clear evidence of their impact on fairness. **Weaknesses:** 1. The reliance on synthetic imbalanced datasets (Imbalanced-CIFAR-10 and Imbalanced-MNIST) may limit the generalizability of the findings to real-world scenarios. 2. While the authors compare PTQ and QAT, they do not explore other quantization techniques (e.g., mixed-precision quantization) that might have different fairness implications. 3. The authors propose data augmentation techniques to mitigate unfairness, but they do not explore other potential mitigation strategies (e.g., reweighting, adversarial training). Other Comments Or Suggestions: Please refer to the above comments. Questions For Authors: 1. 
The paper identifies gradient norms and Hessian traces as key factors contributing to unfairness in quantized models. However, are there other potential factors (e.g., model architecture, optimization algorithms) that could also influence fairness in quantized models? 2. The paper focuses on fairness in the context of classification tasks. Have the authors considered whether their findings might extend to other types of tasks, such as generative models? 3. The authors mention that QAT exacerbates unfairness more severely than PTQ due to the interaction between gradient norms and Hessian traces. Could the authors provide more intuition or a simplified explanation for why this interaction leads to greater unfairness in QAT compared to PTQ? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate your acknowledgment that “the paper addresses an important and timely issue, the theoretical analysis is rigorous, and the empirical evaluation is thorough.” We believe that our experimental results strongly support our theoretical findings. In response to your concerns, we have conducted further experiments and confirmed the consistent effectiveness of our method. The detailed results are available at https://anonymous.4open.science/api/repo/Rebuttal-30C1/file/Reviewer%20Tpv5.pdf?v=917ec334. - W1: Our intention in conducting experiments on Imbalanced-CIFAR-10 and Imbalanced-MNIST was to facilitate **direct** comparisons with CIFAR-10 and MNIST, highlighting the impact of class imbalance on fairness. In addition, we also evaluated our method on UTK-Face and FER2013, two naturally imbalanced datasets. To further address your concern, we have included new experiments on iNaturalist 2017, a real-world, highly imbalanced dataset, in the revised version. Our findings remain consistent across all datasets (see Figures 1-2 and Table 1 in the link), reinforcing the real-world generalizability of our conclusions. - W2: We claim that our method is applicable to **any quantization precision**, including mixed-precision quantization (MPQ), as demonstrated by our quantization error analysis. While MPQ assigns different bit-widths to different layers or channels, the quantization error at each layer follows the same statistical properties as in uniform-precision quantization. To validate this, we have conducted experiments with MPQ on different models and datasets in the appendix (see Table 2 in the link). We have also added a discussion in Section 3.1 to clarify this point. - W3: We confirm that other strategies may also help address unfairness, and we have conducted experiments on reweighting and adversarial training (see Tables 3-4 in the link). Our results show that these methods can indeed help mitigate unfairness.
We have expanded the discussion in the appendix. Regarding our choice of data augmentation as a mitigation strategy, our motivation is to provide a **more intuitive validation** of our theoretical conclusion that the exacerbation of unfairness in quantized models is related to class imbalance. To achieve this, we adopted geometric transformation and random erasing at the data processing stage, as these methods have been shown to be both effective in prior research and computationally efficient. In addition, we also explored training-stage strategies by introducing a regularization term in the loss function to penalize differences in gradient norms and Hessian traces across classes. However, this approach is less effective and more costly than data augmentation. - Q1: Our study specifically focuses on gradient norms and Hessian traces, both theoretically and experimentally, as they **directly impact** optimization stability and sensitivity to quantization errors. In addition, fairness in quantized models can be influenced by multiple factors. To further investigate this, we have included a new ablation study in the appendix that explores additional factors influencing fairness in quantized models. The results indicate that model architecture, optimization algorithm, and hardware selection all have potential impacts on fairness (see Tables 5-7 in the link). Specifically, the VGG architecture, Mini-batch SGD, and Ada L4 GPU tend to exacerbate unfairness in quantized models. - Q2: We believe that our fairness measurement method can be **directly generalized** to generative tasks, as it relies on the loss difference before and after quantization, with the cross-entropy loss in classification replaced by the appropriate loss function for generative models. To validate this, we have conducted experiments on VAE in the revised paper, using Evidence Lower Bound (ELBO) loss as the evaluation metric (see Table 1 in the link). 
Our results indicate that the fairness trends observed in classification tasks remain consistent in generative models, further supporting the generalizability of our findings. - Q3: Thank you for your valuable suggestion. To provide more intuition, QAT exacerbates unfairness more than PTQ due to the dynamic interaction between gradient norms and Hessian traces under quantization constraints. Since QAT applies quantization throughout training, gradient updates must adapt to quantization-induced noise, leading to optimization in a more distorted loss landscape. In regions with high Hessian traces, the steep loss surface amplifies the effect of large gradient norms, causing uneven updates across subgroups. In contrast, PTQ quantizes only after full-precision training, avoiding these interaction effects and resulting in relatively lower unfairness. We have added a more detailed discussion in the revised paper and believe this addition improves the accessibility of our theoretical insights. We greatly appreciate your thorough review and thank you for reconsidering our paper.
Improved Lower Bounds for First-order Stochastic Non-convex Optimization under Markov Sampling
Accept (poster)
Summary: This paper studies non-convex stochastic optimization when the data is generated from a Markov chain. This is unlike most papers on the topic where one usually assumes that the noise process affecting the gradients is an i.i.d. process. The goal of this paper is to establish information-theoretic lower bounds on the number of samples needed to achieve an $\epsilon$-accurate stationary point (in expectation). For both countable and finite-state Markov chains, the paper provides tighter lower bounds than those existing. For the latter case, an SVRG-like algorithm is proposed that additionally maintains estimates of the entries of the stationary distribution of the underlying Markov chain. It is shown that this algorithm achieves the $O(1/\epsilon^2)$ rate for this setting. Claims And Evidence: The claims made in the paper are all rigorously supported by detailed proofs. Methods And Evaluation Criteria: This is a theoretical paper, and there is not much to evaluate here. Theoretical Claims: I skimmed through the main proof ideas and they appear correct to me. Experimental Designs Or Analyses: There are no experiments in this paper. This is by no means a limitation since the focus of the paper is to establish fundamental lower bounds. Supplementary Material: I went over the proof for the MaC-Sage algorithm. For the lower bound analyses, however, I only skimmed the main proof strategy. Relation To Broader Scientific Literature: The main contribution of the paper refines the existing literature on Markovian stochastic optimization by deriving tighter lower bounds than those available previously. The authors do a good job of positioning their contributions in this context. Essential References Not Discussed: Relevant references are all well cited and discussed. Other Strengths And Weaknesses: Strengths --------------- - Time-correlated Markovian data shows up in a variety of stochastic approximation problems. Compared to their i.i.d. 
counterparts, much less is understood for such settings. In this regard, I find the contribution of the paper significant in improving the understanding of what is fundamentally achievable in the non-convex setting. - While the MaC-SAGE algorithm is essentially very similar to variance-reduced algorithms like SAG, SAGA, and SVRG, I still find it very useful to know that such an algorithm is minimax optimal in the finite-state Markov setting. - I think the ideas used for proving upper and lower bounds in this paper can find much broader applicability beyond just non-convex optimization. There are no particular weaknesses that I can find. Other Comments Or Suggestions: In the finite-state setting, I have a comment regarding the MaC-SAGE algorithm. The algorithm needs to keep track of the number of occurrences of each state. I was wondering if this can be avoided using the following idea. Suppose the mixing time of the underlying Markov chain is $\tau$. Then, the subsampled sequence $s_0, s_{\tau}, s_{2\tau}, \ldots$ is "almost" i.i.d. with high probability. Now suppose one uses exactly the same algorithm as one would in the i.i.d. case, but updates it once every $\tau$ time-steps. In this way, one effectively runs the algorithm on i.i.d. data. So one should expect pretty much the same guarantees as in the i.i.d. case, inflated by a delay of $\tau$, since one now effectively uses $T/\tau$ samples, where $T$ is the total number of samples. Isn't this going to recover the same guarantees as MaC-SAGE, since it appears from Theorem 4.4 that the final error bound is the i.i.d. bound scaled by $\tau$? More generally, unless I am mistaken, given data from an ergodic Markov chain, one could run any optimization algorithm (with no modification) on a subsampled data set (where the subsampling gap is informed by the mixing time of the Markov chain) and achieve the same guarantees as in the i.i.d. case, with $T$ replaced by the number of effective samples $T/\tau$.
Some discussion on this matter would be very helpful. Essentially, I want to understand whether there is any need to develop new optimization algorithms for the Markov setting, or would subsampling suffice. Questions For Authors: I have some clarifying questions. I am happy to raise my score once they have been addressed. - Q1) The assumption of bounded variance in Assumption 2.2. seems a bit weird to me. In particular, at any given time $t$, the distribution of the state $s_t$ has not yet converged to the stationary distribution. Thus, $\nabla F(x)$ is not the expected value of $g(x; s_t)$. Isn't this correct? If yes, $\mathbb{E} \Vert g(x; s_t) - \nabla F(x) \Vert^2_2$ is not really the variance of $g(x; s_t)$ (unlike the i.i.d. case). What does it mean to assume a uniform bound then on this expectation? Wouldn't this bound depend on $x$, more generally, in the Markov case? Perhaps the authors can use the TD learning example presented earlier to validate this assumption. - Q2) How is the step-size sequence chosen to arrive at the result in Theorem 4.4.? This does not seem to be specified in the statement of Theorem 4.4., nor in the description of MaC-SAGE. Does it require knowledge of the mixing time $\tau$? If yes, it brings me back to my earlier comment of simply sub-sampling the data based on the knowledge of $\tau$. - Q3) The lower bounds pertain to the number of samples needed to ensure that the *expected* value of the gradient is below a specified tolerance. Can similar lower bounds be derived if one seeks guarantees with high-probability? To be more precise, given a failure probability $\delta$, suppose we wish to determine the minimum number of samples $N(\delta, \epsilon)$ needed to ensure that $\Vert \nabla F(x_t) \Vert $ is below $\epsilon$. Is it possible to characterize $N(\delta, \epsilon)$ using the techniques developed in this paper? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Comments on subsampling**: We sincerely thank the reviewer for initiating such an interesting discussion. In the following, we briefly present our understanding, which we hope provides some insight into the comments. We think the reviewer’s intuition is correct and is aligned with ours. However, we would like to emphasize that simply using a subsampled sequence might not be enough to achieve the same rate as MaC-SAGE. This is because even in the i.i.d. case, vanilla SGD suffers from a slower rate. In other words, to get a faster rate as MaC-SAGE does, variance-reduced techniques are needed. For the finite-state case, the main difference between Markov sampling and uniform sampling (i.e., the finite-sum i.i.d. case considered in the literature) lies in the fact that the stationary distribution $\Pi$ is unknown and non-uniform, which makes its estimation necessary before applying existing algorithms designed for the i.i.d. case. Once it is guaranteed that the estimate of the stationary distribution converges no slower than the algorithm’s rate in the i.i.d. case, one can directly apply algorithms for the i.i.d. case (e.g., SAG, SVRG) to the Markovian case with the final rate scaled by $\tau$. That is also the idea behind MaC-SAGE, where we maintain $y_t$ as an estimate of $\Pi$. Please also refer to lines 300-310 in the right column for discussion. Finally, it is worth noting that the analysis for the Markovian case is non-trivial because of the correlation among time-dependent samples, although algorithmically the rate seems similar (up to $\tau$) to the i.i.d. case. **Q1**: We thank the reviewer for bringing this to our attention. Actually, we are able to weaken this strong assumption by instead taking the expectation according to the stationary distribution $\Pi$, i.e., $\mathbb{E}_{s \sim \Pi}\Vert g(x;s) - \nabla F(x) \Vert^2 \le \sigma^2$, which is then similar to the bounded-variance assumption in the i.i.d. setting.
We note that considering this modified assumption does not affect our lower bound results (i.e., Theorem 3.1 still holds), as the proof of Lemma A.3 holds for any distribution of the chain. **Q2**: The stepsize $\gamma_t$ is chosen as line 1026, which only needs the knowledge of hitting time. We will add its expression to the main text in our updated version. **Q3**: We think our proof techniques could be used to obtain high-probability results. Specifically, in Appendix A.3, eq. (19) holds almost surely, which hence indicates line 652 holds almost surely. Therefore, Theorem 4.2 is valid with probability one. For Theorem 3.1, similarly, we have shown Lemma 5.2, which is a high-probability result (see line 799 for details). In the proof, we set $\delta = 0.5$ to get the final expectation bound (see lines 810-814). Thus, if we leave $\delta$ as a parameter, a high-probability version can be characterized. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my comments. I am satisfied with the rebuttal and increase my score to '4'.
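For concreteness, the $\tau$-subsampling intuition discussed in this exchange can be checked empirically. Below is a minimal Python sketch with an illustrative 2-state chain; the transition matrix, the stand-in value for $\tau$, and the sample counts are all assumptions for illustration, not quantities from the paper:

```python
import random

random.seed(0)

# Illustrative 2-state chain (an assumption, not from the paper).
P = [[0.9, 0.1], [0.2, 0.8]]   # transition probabilities
pi = [2 / 3, 1 / 3]            # its stationary distribution
tau = 20                       # crude stand-in for the mixing time

def step(s):
    """One transition of the chain from state s."""
    return 0 if random.random() < P[s][0] else 1

# Run the chain, then keep every tau-th sample: s_0, s_tau, s_2tau, ...
T = 200_000
s, raw = 0, []
for _ in range(T):
    raw.append(s)
    s = step(s)
sub = raw[::tau]

def cond_freq(seq):
    """Empirical P(next state = 0 | current state = 0)."""
    num = den = 0
    for a, b in zip(seq, seq[1:]):
        if a == 0:
            den += 1
            num += (b == 0)
    return num / den

# Consecutive raw samples are strongly correlated (P(0 -> 0) = 0.9) ...
assert abs(cond_freq(raw) - 0.9) < 0.01
# ... while tau-subsampled samples look nearly i.i.d. from pi, so an i.i.d.
# algorithm run on `sub` effectively uses T / tau samples, as the review notes.
assert abs(cond_freq(sub) - pi[0]) < 0.03
```

As the rebuttal points out, this makes subsampled SGD match i.i.d.-style guarantees up to the $\tau$ inflation, but reaching the faster MaC-SAGE rate still requires variance reduction and an estimate of the unknown stationary distribution $\Pi$.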
Summary: This paper studies the sample complexity of stochastic optimization for smooth, non-convex functions when the noise variables form a Markov chain instead of being i.i.d. The authors obtain a lower bound of $\Omega(\tau\epsilon^{-4})$ for stationary Markov processes with a countable state space, where $\tau$ is the hitting time of the Markov chain, and a lower bound of $\Omega(\tau\epsilon^{-2})$ for finite-state Markov chains, where $\tau$ is the hitting time of the Markov chain. A new algorithm, MaC-SAGE, is proposed that nearly matches the lower bound in the case of finite-state Markov chains. The work provides theoretical complexity bounds and an efficient algorithm for optimization with Markovian sampling. Claims And Evidence: All statements and theorems given by the authors in the main text are proved; the proofs are given in the appendix. Methods And Evaluation Criteria: The analysis of sample complexity in first-order stochastic optimization is well-established. The authors provide an algorithm-independent lower bound on sample complexity for non-convex functions with Markovian noise and, in the case of a finite-state Markov chain, propose an algorithm that nearly matches this bound. Theoretical Claims: 1. In the proof of Theorem B.3, the step $$ \sum_{k=1}^{T-1}\|P^{k}-\mathbf{1}\Pi^T\|_{\infty} \leq c_0 \tau_{mix} $$ used in the second and third claims is not explained (see line 912). To me, there should be an additional factor $|S|$ coming from $$ \sum_{k=1}^{T-1} \|P^{k}- \mathbf{1} \Pi^T\|_{\infty}, $$ since the supremum might be attained on different rows of $P^{k}$ for different indices $k$. If this is not the case, please provide a more detailed derivation. Otherwise I do not see how this fact directly follows from Lemma B.2. 2. In Corollary B.4 the second inequality has a dependence on $\sigma$, but $\|v_i\|_{\infty} = 1$ in this case. Experimental Designs Or Analyses: Not applicable. Supplementary Material: I've reviewed all the supplemental materials.
Relation To Broader Scientific Literature: This paper builds on existing research in stochastic optimization by strengthening lower bounds for non-convex optimization under Markov sampling, improving on the work of [2] by establishing a tighter $\Omega(\tau \varepsilon^{-4})$ bound that matches the upper bound obtained in [1]. This extends previous results for independent noise [3] to a more complex Markovian setting where dependencies between samples affect the convergence rate. In addition, the authors refine the lower bound for finite Markov chains to $\Omega(\tau \varepsilon^{-2})$ and propose a new MaC-SAGE algorithm that nearly matches the obtained lower bound. [1] Dorfman, Ron, and Kfir Yehuda Levy. "Adapting to mixing time in stochastic optimization with markovian data." International Conference on Machine Learning. PMLR, 2022. [2] Even, Mathieu. "Stochastic gradient descent under Markovian sampling schemes." International Conference on Machine Learning. PMLR, 2023. [3] Arjevani, Yossi, et al. "Lower bounds for non-convex stochastic optimization." Mathematical Programming 199.1 (2023): 165-214. Essential References Not Discussed: No essential references are missed. Other Strengths And Weaknesses: The paper provides an improvement in lower bounds for non-convex optimization under Markovian sampling. Its originality lies in tightening existing bounds and proposing the MaC-SAGE algorithm, which nearly matches the new lower bound for finite-state Markov chains. However, the paper considers only discrete Markov noise, while in [1] an upper bound is given for non-convex functions and an arbitrary ergodic Markov chain admitting a stationary distribution. Also, in [2] uniformly ergodic Markov noise is considered and the lower bound is given for strongly convex functions. At the same time, it is not clear to me where exactly the current construction relies on the fact that we are working with a finite state space in the proofs of Section 4.2.
In particular, factors relating $\tau_{mix}$ and $\tau_{hit}$ might depend on the size of the space $|S|$. [1] Arjevani, Yossi, et al. "Lower bounds for non-convex stochastic optimization." Mathematical Programming 199.1 (2023): 165-214. [2] Beznosikov, Aleksandr, et al. "First order methods with markovian noise: from acceleration to variational inequalities." Advances in Neural Information Processing Systems 36 (2023): 44820-44835. Other Comments Or Suggestions: The paper contains a few typos: 1. Line 595-596: should be $\|h_i(x)\|_{\infty}$ in the first norm. 2. In Section 2.3 it is written that $g(x;s) := \nabla f(x,s)$ and then the authors use one or the other notation in the text, e.g., in Assumption 4.1 or in formula (20) in the supplementary material. Different notations for the same object make it difficult to understand the text. 3. In Lemma A.2, somewhere there is an index $t$ in $s$ and somewhere there is not; 4. In Assumption 4.1 it is written that $\|g(x,s_t) - \nabla F(x)\| \leq \sigma^2$. Is it an expectation missing or we rely on an almost sure bound here? Questions For Authors: Where exactly the current construction relies on the fact that we are working with a finite state space in the proofs of Theorem 4.2? If there are some hidden constants depending on the cardinality of the space where the Markov chain runs, I suggest to trace them explicitly and provide them in the statement. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## Response to "Theoretical Claims" 1. The proof of Theorem B.3: We think the proof of Theorem B.3 is correct and we will add details in the updated version. We explain in detail how line 912 is derived as follows: First, we derive line 907 from line 906: Denoting $v_{max} := \max_i \Vert v(i) \Vert_{\infty}$, we have $$ \begin{aligned} \sum_i \pi_i \Vert v(i) \Vert_{\infty} \sum_j \vert P^k - \mathbf{1}\Pi^T\vert_{i,j} \Vert v(j) \Vert_{\infty} &\le \sum_i \pi_i \Vert v(i) \Vert_{\infty} \sum_j \vert P^k - \mathbf{1}\Pi^T\vert_{i,j} \, v_{max} \\ &\le \sum_i \pi_i \, v_{max} \sum_j \vert P^k - \mathbf{1}\Pi^T\vert_{i,j} \, v_{max} \\ &= v_{max}^2 \sum_i \pi_i \sum_j \vert P^k - \mathbf{1}\Pi^T\vert_{i,j} . \end{aligned} $$ where due to display issues, we use $\vert A \vert_{i,j}$ to denote $|a_{ij}|$. Since by definition of the matrix infinity norm (i.e., for a matrix $A$, $\Vert A \Vert_{\infty} := \max_i \sum_j |a_{ij}|$), for any $i$, $\sum_j \vert P^k - \mathbf{1}\Pi^T \vert_{i,j} \le \Vert P^k - \mathbf{1}\Pi^T \Vert_{\infty}$, we conclude line 907 by further noting $v_{max}^2 = \max_i \Vert v(i) \Vert_{\infty}^2$ and $\sum_i \pi_i = 1$. Then, note that for the finite-state case, since Lemma B.2 holds for any initial distribution $\mu$, setting $\mu = 1_{s_0 = i}$, meaning the chain starts from state $i$ with probability one, yields $$ \sum_{k=0}^T \max_i d_{TV}(P^k(\cdot \mid s_0 = i), \Pi) \le c_0 \tau_{mix}. $$ Moreover, by definition of the total variation distance, i.e., $d_{TV}(p,q) := (1/2) \sum_{z}|p(z) - q(z)|$, we conclude $\Vert P^k - \mathbf{1}\Pi^T \Vert_{\infty} = 2 \max_{i} d_{TV}(P^k(\cdot \mid s_0=i), \Pi)$. Combining all these facts gives the second term of line 912.
To see the first term of line 912, which is the bound for the first term in (23), we simply set $k=0$ in line 900, which is then bounded by $\max_i \Vert v(i) \Vert_{\infty}^2 \Vert I - \mathbf{1}\Pi^T \Vert_{\infty}$. 2. Typo in Corollary B.4: We apologize for that typo. The second bound in Corollary B.4 should not depend on $\sigma$ and we will fix it in the updated version. ## Response to "Other Strengths And Weaknesses" **About the effect of the finite state space cardinality on the results**: We note that the hitting time in Theorem 4.2 is usually a function of the cardinality of the state space in practice. That is to say, the number of states affects the sample complexity by means of the hitting time captured in our theorem. How the cardinality relates to the hitting time varies case by case. ## Response to "Other Comments Or Suggestions" 1. Typo on line 595: We apologize for the typo. We will fix it in the updated version. 2. Notation of $g$: In the paper, we consistently define $g(x;s) = \nabla f(x;s)$, i.e., $g$ is the first-order stochastic gradient sampled from the Markov chain. In the updated version, we will clarify such notations for easier understanding. 3. Notation in Lemma A.2: We will drop all subscripts $t$ in the updated version. 4. About Assumption 4.1: There is no expectation on the norm; the bound is assumed to hold almost surely. That also distinguishes the construction for the finite-state case from the countable-state case. Please refer to lines 282-292 for a detailed discussion. ## Response to "Questions for Authors" We note that in our construction, the cardinality of the finite Markov chain is $\Omega(\tau)$ due to the fact that there are at least $\tau$ states between $v^*$ and $s^*$ (see Figure 1). Since the hitting time of the chain is usually a function of the cardinality, our bound hence implicitly depends on the state-space cardinality.
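The identity $\Vert P^k - \mathbf{1}\Pi^T \Vert_{\infty} = 2 \max_i d_{TV}(P^k(\cdot \mid s_0 = i), \Pi)$ invoked in the derivation above follows directly from the definitions of the matrix infinity norm and the total variation distance. A small numerical sanity check in Python; the 2-state chain is an illustrative assumption, not the construction from the paper:

```python
def mat_mul(A, B):
    """Plain dense matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_pow(P, k):
    """P^k, with P^0 = I."""
    R = [[float(i == j) for j in range(len(P))] for i in range(len(P))]
    for _ in range(k):
        R = mat_mul(R, P)
    return R

P = [[0.9, 0.1], [0.2, 0.8]]   # illustrative chain; its second eigenvalue is 0.7
pi = [2 / 3, 1 / 3]            # its stationary distribution

for k in range(6):
    Pk = mat_pow(P, k)
    # ||P^k - 1 pi^T||_inf : maximum absolute row sum.
    inf_norm = max(sum(abs(Pk[i][j] - pi[j]) for j in range(2)) for i in range(2))
    # 2 * max_i d_TV(P^k(. | s_0 = i), pi), with d_TV(p, q) = (1/2) sum_z |p(z) - q(z)|.
    two_tv = 2 * max(0.5 * sum(abs(Pk[i][j] - pi[j]) for j in range(2)) for i in range(2))
    assert abs(inf_norm - two_tv) < 1e-12   # the identity holds exactly
    # Geometric mixing: here the deviation decays as 0.7^k from ||I - 1 pi^T||_inf = 4/3,
    # which is what keeps sums like the one in line 912 of order tau_mix.
    assert abs(inf_norm - (0.7 ** k) * (4 / 3)) < 1e-9
```

The check is definitional for the identity itself; the geometric-decay assertion illustrates why the summed deviations stay bounded by a mixing-time quantity, as in Lemma B.2.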
Summary: The paper proves a lower complexity bound of $\Omega(\tau_{mix} \varepsilon^{-4})$ for smooth, non-convex stochastic optimization under Markovian noise with countable states. For a finite state space, a lower complexity bound of $\Omega(\varepsilon^{-2})$ is also given, and a method that matches the lower bound up to logarithmic factors is proposed. Claims And Evidence: The analysis supports the claims well. The lower bound improves on the previous result of Even 2023 under the same setup. The lower bound is also tight, in the sense of matching the upper bound in prior works. Methods And Evaluation Criteria: No numerical studies are presented. Theoretical Claims: The proof is not checked carefully. The proof seems to follow similar works in optimization and lower complexity bound analysis. Experimental Designs Or Analyses: Not applicable. Supplementary Material: Supplementary material is briefly reviewed. Relation To Broader Scientific Literature: The work adds an important piece to stochastic optimization under the non-convex setting. Essential References Not Discussed: Related works are relatively complete. Other Strengths And Weaknesses: Overall, I think the paper makes good contributions to the broad stochastic optimization field, where many ML methods operate under Markov chains. The results are presented clearly, and the proofs are intuitive to understand. The consideration of a finite state space is a good addition to the results, and the proposed algorithm, although similar to the classical variance-reduced method for non-convex problems, yields a strong convergence rate in the Markovian noise setting. However, the writing might not be friendly enough. For example, I am a little confused about the definition of zero-respecting algorithms in Section 2.3. It seems this algorithm class is general, but what are the exceptions? Additionally, this work focuses on smooth functions; I wonder what the results would be under a more generic assumption, i.e., bounded gradients?
Other Comments Or Suggestions: See above. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the positive and encouraging comments on our paper. In the updated version, we will improve the writing to make it clearer and easier to follow for readers. **For zero-respecting algorithms we consider in the paper**, we note that zero-respecting algorithms require the initial point $x_0$ to be zero, i.e., $x_0 = 0$, due to (5). That is to say, random initialization is not allowed for zero-respecting algorithms. However, it is worth noting that such a limitation on the initialization is also seen in lower bound analyses in the literature [Even’23, Arjevani et al’23, Beznosikov et al’24, Duchi et al’12], and it does not affect the convergence result of the algorithm. We are willing to generalize our results to randomized algorithms in the future. **About extension to the bounded gradient assumption**: We appreciate the reviewer’s suggestion. Extension to the bounded gradient case is interesting, and we hope to address it in the future. We would also like to note that smoothness is a common assumption for convergence analysis in the optimization literature [Ghadimi and Lan’13, Even’23, Roy et al’22, Dorfman and Levy’22]. Therefore, our results are applicable to optimization problems under standard assumptions.
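To make the zero-respecting property discussed above concrete (an illustrative sketch only; the quadratic objective, step size, and dimensions below are made up, not taken from the paper): plain gradient descent started from $x_0 = 0$ never activates a coordinate that no observed gradient has touched.

```python
import numpy as np

def support(v, tol=1e-12):
    """Indices of the non-zero coordinates of v."""
    return {i for i in range(len(v)) if abs(v[i]) > tol}

# Made-up smooth objective f(x) = 0.5 * ||A x - b||^2 whose gradients
# only ever touch coordinates 0 and 1 (columns 2 and 3 of A are zero).
A = np.array([[1.0, 2.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
b = np.array([1.0, -1.0])
grad = lambda x: A.T @ (A @ x - b)

x = np.zeros(4)          # zero-respecting algorithms must start at x_0 = 0
seen = set()             # union of supports of all observed gradients
for _ in range(50):
    g = grad(x)
    seen |= support(g)
    x = x - 0.1 * g      # plain gradient descent is zero-respecting

# The iterate's support never escapes the coordinates revealed by gradients.
assert support(x) <= seen
```

This is exactly the behavior exploited in hard-instance constructions: the algorithm can only "unlock" coordinates as fast as the sampled gradients reveal them.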
Summary: This paper studies the lower bound on the sample complexity of general first-order algorithms for stochastic non-convex optimization problems under Markov sampling. They first show that for samples drawn from a stationary Markov chain with countable state space, the sample complexity is at least $\Omega(\epsilon^{-4})$. Moreover, for finite-state Markov chains, they show an $\Omega(\epsilon^{-2})$ lower bound on the sample complexity and propose a new algorithm that is proved to (nearly) match the lower bound. Claims And Evidence: The claims are supported by clear and convincing evidence. The lower bounds are derived via carefully constructed functions and Markov chains, while the algorithm’s analysis rigorously addresses Markovian sampling. Methods And Evaluation Criteria: The construction of the hard instance is standard in the literature. The design of the near-optimal algorithm seems new and interesting to me. Theoretical Claims: I did not check the proof line by line, but the technique is standard. Experimental Designs Or Analyses: There is no experiment. Supplementary Material: There is no supplementary material. Relation To Broader Scientific Literature: As stated in the paper, the previous lower bound under this setting is $\Omega(\tau \epsilon^{-1})$. This paper improves it to $\Omega(\tau \epsilon^{-4})$. I think it is a huge improvement and helps us understand the boundary of first-order algorithms. Essential References Not Discussed: None. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: 1. Page 3, Left, Equation (2): It is better to explicitly give the definition of $g(\cdot)$ 2. Page 3, Left, line 151: Is it $\bar{s}_{t}$? 3. Page 3, Right, line 163: It is better to give the dimensions of $x_{t, i}$ and $x_t$ 4. Page 4, Left, line 185: What is support? 5. Page 4, Left, line 205: Is $x_{t+1/2}$ used in the iteration? 6. Page 4, Definition 2: It seems to me $\tau_w$ should be defined with $\inf$ 7. 
Page 5, Equation (7): Should it be $|S_T(\mathcal{A})|$? 8. Page 5, Line 239: Can you explain more on when $N^{\epsilon}_{s}(M, \Delta, L, \sigma^2, \tau)$ is lower bounded by $N_T$, i.e., $N^{\epsilon}_{s}(M, \Delta, L, \sigma^2, \tau) \geq N_T$? 9. Why is the sample complexity of Algorithm 1 $O(\tau \epsilon^{-2})$? 10. The introduction of the algorithm class could use more explanation. Questions For Authors: 1. In terms of the sample complexity gap, what is the core difference in the analysis between the infinite-state Markov chain and the finite-state Markov chain? 2. Why consider different oracles for the infinite-state Markov chain and the finite-state Markov chain? 3. What is the key difference when constructing the hard instance under this Markov sampling scheme? 4. How does the number of states affect the performance of MaC-SAGE? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: 1. By using $:=$, we actually define $g(\theta; s, s’) := (\phi(s)^T \theta - r(s, s’) - \gamma \phi(s’)^T \theta)\phi(s)$. 2. We will fix it in the updated version. 3. We thank the reviewer for the suggestion. By our notation, we mean $x_{t,i}$ to be the $i$-th point at the $t$-th iteration, whose dimension is $d$. We denote by $x_t$ the collection of all updated points $x_{t,1},\dots, x_{t,M}$, and when $M=1$, $x_t = x_{t,1}$. The reason we define such multiple query points is to include a broader class of algorithms, e.g., momentum-based algorithms. We present the example of Randomized ExtraGradient in line 204, where $M=2$, meaning two points are maintained and updated at every iteration. In the updated version, we will further clarify this. 4. In our setting, the support of a vector is the collection of all non-zero coordinates, i.e., $support(x)= \\{ i : 1 \le i \le d,\ x[i] \ne 0 \\}$. 5. We apologize for the typo. $x_{t+1/2}$ should be used in the update of $x_{t+1}$ in line 206. We will fix it in the updated version. 6. We apologize for the typo of the missing $\inf$. We will fix it in the updated version. 7. We will fix it in the updated version. 8. By the definition of $N_s^{\epsilon}$, to ensure $N_s^{\epsilon} \ge N_T$ with $N_T$ being some constant, we only need to guarantee 1) the existence of an oracle (due to $\sup$ over $O_s$); 2) the existence of a function (due to $\sup$ over $\mathcal{F}$); 3) for any algorithm $\mathcal{A}$ (due to $\inf$ over $\mathbf{A}_{zr}$), 4) the smallest number of samples used by $\mathcal{A}$ before the expected gradient norm of its output first falls below the level $\epsilon$. 9. Denote by $\tilde{x}_T$ the point which realizes the minimization, i.e., $\tilde{x}_T \in \arg\min_{t \le T} \mathbb{E}\Vert \nabla F(x_t) \Vert^2$. Then, in order to force $\mathbb{E}\Vert \nabla F(\tilde{x}_T) \Vert \le \epsilon$, it suffices by Jensen's inequality to force $\mathbb{E}\Vert \nabla F(\tilde{x}_T) \Vert^2 \le \epsilon^2$. 
Setting the bound on the right-hand side of Theorem 4.4 to $\epsilon^2$ gives $T = \tilde{O}(\tau \epsilon^{-2})$. Moreover, since only one sample is drawn at each iteration, this concludes that the sample complexity of Algorithm 1 is $\tilde{O}(\tau \epsilon^{-2})$. 10. We appreciate the reviewer’s suggestion. We will provide more detailed explanations of the algorithm class in the updated version. Roughly speaking, the algorithm class we consider in the paper captures almost all popular first-order methods in the literature. At every iteration, the algorithm is allowed to take the whole history of previous samples and $x_0,\dots, x_t$ to generate $x_{t+1}$ following (5), which is satisfied by almost all first-order methods in the literature. Note that (5) generalizes [Even’23], where $x_{t+1}$ is linearly spanned by previous points and sampled gradients. **Q1**. The core difference lies in how the stochastic gradient $g$ is constructed in the two settings. Assumption 2.2 for countable-state chains is weaker than Assumption 4.1 for the finite case (see lines 282–292). To address this, we design a countable-state Markov chain (Figure 2) by splitting $v^*, w^*$ into substates $v_1^*, v_2^*, w_1^*, w_2^*$ and define $g$ via (20). Unlike the finite case (Figure 1), where the transition from the parent of $v^*$ to $v^*$ happens with probability one, in the countable case, $v_1^*, v_2^*$ share the same parent, and the transition splits probabilistically: to $v_1^*$ with probability $q$, and to $v_2^*$ with $1-q$. Equivalently, the chain transitions to $v^*$, then flips a coin to select $v_1^*$ or $v_2^*$. This added randomness, combined with (20), makes the event “$prog_0(x)$ increases by one” succeed with probability $q$ (as shown in Lemma 5.2). As a result, more samples are needed—compared to the finite case—to ensure $prog_0(x) = d$ and hence that $\Vert \nabla F(x) \Vert$ is sufficiently small. See lines 333–359 for details. **Q2**. 
The main reason for considering different oracles is to bridge the gap between our lower bounds and upper bounds in different settings. Essentially, we are searching for tight bounds under different settings. While we could get $\epsilon^{-4}$ for the finite case under the more general Assumption 2.2, we aim to prove that the improved bound $\epsilon^{-2}$ is tight under a more specific/easier setting for the finite state space case. This is motivated both by practical considerations and by the existing upper bounds. We then propose a new algorithm such that the sample complexity can be improved to $\epsilon^{-2}$ under this stricter but practical Markov-chain class, together with a matching lower bound. **Q3**. Please refer to Q1. **Q4**. We note the hitting time is a function of the number of states, which differs across Markov chains. Therefore, the convergence rate of MaC-SAGE is affected on a case-by-case basis.
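As a worked complement to item 9 of the rebuttal above (assuming, as stated there, that Theorem 4.4 bounds the best expected squared gradient norm by $\tilde{O}(\tau/T)$), the passage from the squared bound to the stated sample complexity is a one-line application of Jensen's inequality:

```latex
\mathbb{E}\Vert \nabla F(\tilde{x}_T) \Vert
  \;\le\; \sqrt{\mathbb{E}\Vert \nabla F(\tilde{x}_T) \Vert^{2}}
  \;\le\; \sqrt{\tilde{O}(\tau/T)}
```

Requiring the right-hand side to be at most $\epsilon$ yields $T = \tilde{O}(\tau \epsilon^{-2})$, and since one sample is drawn per iteration, this is also the sample complexity.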
Summary: This paper presents sample complexity lower bounds for stochastic gradient descent under a Markovian sampling assumption. In particular, there are two theorems in the paper showing lower bounds of $\Omega(\epsilon^{-4})$ for Markov chains with countably infinite state space and $\Omega(\epsilon^{-2})$ for finite state space chains. The results also depend on a constant $\tau$, which is an upper bound on the Markov chain's hitting time. In addition, the paper presents a new algorithm called MaC-SAGE for finite state space chains whose sample complexity upper bound matches the lower bound in the paper. Claims And Evidence: See Theoretical Claims section. Methods And Evaluation Criteria: N/A Theoretical Claims: I have read through the proofs of Theorems 3.1, 4.1, and 4.4. I have some concerns about the correctness of their argument for the main results. 1. I think the convergence rate in Theorem 4.4 may depend on the mixing time as well. The last line of the convergence proof of MaC-SAGE on page 19 asserts that $\tau_{\text{mix}} \preceq \tau_{\text{hit}}$ citing [Levin & Peres, 2017]. My understanding is that it is not true in general that the mixing time can be bounded by the hitting time. For such a bound to hold, Theorem 10.22 in [Levin & Peres, 2017] requires reversibility of the Markov chain and some holding probability ($P(x, x) \geq 1/2$) to ensure aperiodicity. 2. In the proof of Theorem 3.1 (lines 780-793), the authors seem to argue that the sum $\sum_{l\le t} B_{l}$ of the indicators $B_{l}$ that progress was made at time $l$ can be written as a sum of i.i.d. Bernoulli random variables, based only on the fact that there is at most one 1 within a time interval of length $\tau/2$. I don't think this is true since the $B_{l}$'s are defined according to the trajectory of a Markov chain. This part of the analysis must be fixed for me to recommend acceptance, since otherwise the validity of their key result, Thm. 
3.1, the improved lower bound for Markovian first-order methods, is not justified. 3. Also, I have a question on their new algorithm MaC-SAGE and the upper bound $O(\tau/T)$ on the best-case expected squared gradient norm. The algorithm seems to be a version of SAG (if not the same) in the Markovian setting, and [Even '23] already establishes a matching upper bound. Also, the additional reference [Powell and Lyu '24] establishes a similar upper bound with a random target time in place of the hitting time for regularized MISO run on general recurrent data samples. This work also handles constrained nonconvex optimization. Compared to these existing results, I do not see a compelling reason to introduce a SAG-like algorithm if the only purpose is to obtain the upper bound $O(\tau/T)$. What additional advantage does the proposed algorithm and the accompanying result provide? Does the proposed algorithm show competitive performance against these algorithms? Experimental Designs Or Analyses: N/A Supplementary Material: I read the proofs of Theorems 3.1, 4.2, and 4.4. Relation To Broader Scientific Literature: The paper's main contributions are lower bounds for SGD with Markovian sampling and non-convex objectives. The prior known lower bound of [Even, 2023] was of the order $\Omega(\epsilon^{-1})$. This paper tightens this to $\Omega(\epsilon^{-4})$ for Markov chains on countably infinite state spaces and $\Omega(\epsilon^{-2})$ for finite state spaces. This is on par with similar results for i.i.d. sampling (e.g. [Arjevani et al. 2023]). 
Essential References Not Discussed: The authors are encouraged to compare their results (on the upper bound) with the results in the following papers on stochastic optimization with dependent data: [1] William Powell, Hanbaek Lyu, "Stochastic optimization with arbitrary recurrent data sampling", ICML 2024 [2] Ahmet Alacaoglu and Hanbaek Lyu, "Convergence of First-Order Methods for Constrained Nonconvex Optimization with Dependent Data", ICML 2023 Especially, Thm. 3.8 in [1] seems to establish the matching upper bound $O(\tau/T)$, where in fact $\tau$ is replaced by the random target time, a quantity that is generally smaller than the hitting time. Please also see my question on the upper bound in the theory section. Other Strengths And Weaknesses: **Strengths:** 1. The paper presents two new lower bounds for SGD with Markovian sampling which were not available in the literature. For the countable state space case, the dependence on $\epsilon$ matches that of i.i.d. sampling. The dependence of the lower bound on $\tau$ demonstrates the additional difficulty of the problem for poorly behaved Markov chains. The bounds for non-convex problems are much tighter than previous results. 2. The version of SAG in Algorithm 1 has sample complexity matching the lower bound for finite state chains. It also works by maintaining an approximation of the stationary measure, which is interesting and new to my knowledge. **Weaknesses:** 1. More detail in some of the proofs would also be helpful. The construction of the Markov chains for the lower bounds seems somewhat vague, especially for the countably infinite case on page 13; a more explicit definition of the transition kernel would be helpful. Defining the transitions to states $v_1^*$ and $w_1^*$ conditional on being in $\{v_1^*, v_2^*\}$ and $\{w_1^*, w_2^*\}$, as is done below (20), is a bit confusing to me. 2. I don't understand how the Markovian dependence of the data sampling is handled in the proofs. 
For instance, I am unsure how we reach the conclusion (19) on page 12 in the proof of Theorem 4.2. It is also not clear to me how the construction on page 10 guarantees the hitting time is bounded above by $\tau$. See also the comments in the Theoretical Claims section. Other Comments Or Suggestions: I am reserving my recommendation due to (1) the question/gap in the proof of the lower bound and (2) the novelty of the proposed algorithm and upper bound. I am willing to revisit my score if my concerns are successfully addressed. Questions For Authors: 1. There are two separate lower bounds for countably infinite and finite state space chains. However, the proofs don't seem to rely on the cardinality of the state space. It seems like both constructions can be done with a finite state space whose size depends on $\tau$. The key driver of the difference in the two theorems seems to be the strengthening of the bounded variance assumption. Why are the theorems separated based on infinite vs. finite state space? 2. Also, for a countably infinite state space, the assumption of bounded hitting time seems very strong. I think the stationary measure would have to be supported on a finite subset of $S$ for this to hold, which effectively reduces to the finite state space case. Can another assumption be used for countably infinite state spaces? 3. The paper mentions a number of times that the Markov chains considered are assumed to be stationary. Why is this necessary? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: **TC1**: After double-checking, we modified the theorem. Now the rate scales with $\max ( \tau_{mix},\tau_{hit})$, but note that MaC-SAGE remains optimal (up to constants). **TC2**: We clarify that we are **not** claiming the $B_l$s are i.i.d. Bernoulli r.v.s; rather, we claim the $z_i$s (see their definition in the following) are i.i.d. Bernoulli r.v.s. We explain it as follows: First, we construct a Markov chain in which there are two states $v^*, w^*$ such that $\tau/2$ steps must be taken to commute between them. Then, we further introduce randomness at states $v^*, w^*$ to make sure that, in the ideal case when $\tau/2$ steps are taken, the event “$prog_0(x)$ increases by one” happens with probability at most $q$. To do this, we split $v^*$ (and $w^*$) into two substates $v_1^*, v_2^*$ (and $w_1^*, w_2^*$) and construct the gradient by (20). By Lemma A.2, in every $\Omega(\tau)$ steps there is at most one $B_l$ equal to one, with success probability (if it can succeed) at most $q$. To see line 790: in the ideal case, at least $\tau / 2$ steps are needed to increase $prog_0(x)$ by one, and this succeeds with probability at most $q$. That is to say, when $v^*$ or $w^*$ is visited, a coin is flipped, and the flip’s result determines whether or not the event “$prog_0(x)$ increases by one” succeeds. The results of the coin flips are independent across the times when $v^*$ or $w^*$ is visited. Therefore, the $z_i$s record the results of the coin flips, which are i.i.d. **TC3**: To highlight novelty: [Even’23] assumes a uniform stationary distribution $\pi$ ($\pi_i = 1/n$), while [Powell & Lyu’24] requires prior knowledge of $\pi$. In contrast, MaC-SAGE requires no information about $\pi$ and allows it to be non-uniform. We design $y_t$ to estimate $\pi$ and address this challenge. **W1**: We explain our construction of the Markov chains by referring to Figures 1 and 2. 
Actually, transition probabilities are not critical for proving the lower bounds; only the chain's structure matters. In the countable-state case (Figure 2), for the $S\setminus S’$ part (orange circle), starting from $v^*$ (the union of $v_1^*, v_2^*$, red dashed circle), the chain can move only in one direction until hitting $s^*$; similarly from $s^*$ to $v^*$. The number of states along each path between $v^*$ and $s^*$ is $\tau/2$. The structure of the subchain in $S’$ (blue circle) is flexible—for example, it may be a complete graph. This construction ensures that commuting between $v^*$ and $w^*$ requires at least $\tau/2$ steps, while maintaining ergodicity of the chain. The substates $v_1^*, v_2^*$ lie within $v^*$, and similarly for $w^*$. Figure 2 shows: 1) exactly one common parent state of $v_1^*, v_2^*$; 2) conditioned on the parent, the transition to $v_1^*$ happens with probability $q$, and to $v_2^*$ with $1 - q$. Equivalently, upon reaching $v^*$, a coin is flipped to determine $v_1^*$ (w.p. $q$) or $v_2^*$ (w.p. $1-q$). The conditional probability below (20) defines this flip. **W2**: The constructed Markov chain and the $f$ in (16) force any algorithm to iterate at least $\Omega(\tau)$ steps to make an increase in $prog_0(x)$. Then by Lemma 5.1, $\Omega(\tau d)$ samples are needed to make $\nabla F$ small (see lines 360-380). To see the boundedness of the hitting time, we note that by our construction, as long as the hitting time of the subchain supported on $S’$ (blue circle) is $O(\tau)$, then, since the hitting time of the subchain in the orange circle is $O(\tau)$, the hitting time of the whole chain is $O(\tau)$. **Q1**: We note our lower bounds rely on the hitting time, which is a function of the cardinality of the (finite) state space. Due to the space limit, please refer to our responses to Q1&2 of Reviewer eLLJ about the differences and the reasons for separating Theorems 3.1&4.2. **Q2**: We acknowledge the limitation of the finite hitting time assumption. 
We explain the idea of how to construct a chain with bounded mixing time and hope to present it in future work. First, we design a cyclic-like subchain in the orange circle with the number of states between $v^*$ and $s^*$ being $O(\sqrt{\tau})$. Second, we consider any subchain in the blue circle whose mixing time is $O(\tau)$. Restricted to the orange or the blue subchain, the mixing times are $O(\tau)$. Then the mixing time of the composed chain is upper bounded by the larger of the mixing times of the orange and blue components. This is shown by [Madras & Randall’02] for reversible chains, and we conjecture it is also true for non-reversible chains. [1] Madras, Neal, and Dana Randall. Markov chain decomposition for convergence rate analysis. Annals of Applied Probability, 2002. **Q3**: We acknowledge that our proof techniques only suit stationary chains. But we note that many applications (e.g. TD learning, RL) fall into the stationary case. We also highlight that this is the first time improved lower bounds are provided for Markov sampling, with a new algorithm showing the optimality of our bounds. Extending to the non-stationary case is interesting, and we hope to address it in future work.
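As a self-contained numerical aside (a hypothetical toy chain, not the construction from the paper): the expected hitting times that these bounds revolve around can be computed for any finite chain from the first-step linear system $h_{target} = 0$ and $h_i = 1 + \sum_k P_{ik} h_k$ for $i \ne target$.

```python
import numpy as np

def expected_hitting_times(P, target):
    """Expected number of steps to first reach `target` from each state,
    via the first-step linear system h = 1 + Q h on the non-target states."""
    n = P.shape[0]
    idx = [i for i in range(n) if i != target]
    Q = P[np.ix_(idx, idx)]            # transitions among non-target states
    h_sub = np.linalg.solve(np.eye(len(idx)) - Q, np.ones(len(idx)))
    h = np.zeros(n)
    h[idx] = h_sub
    return h

# Hypothetical 3-state directed cycle: 0 -> 1 -> 2 -> 0 with probability 1.
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
h = expected_hitting_times(P, target=2)
print(h)  # hitting times into state 2: 2 steps from state 0, 1 step from state 1
```

Solving this system on each subchain separately is one way to sanity-check $O(\tau)$ hitting-time claims for composed constructions like the one sketched above.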
Latent Variable Estimation in Bayesian Black-Litterman Models
Accept (poster)
Summary: The paper proposes Bayesian models for portfolio management with good theoretical results and empirical validation. From what I can understand, the contribution of the work is the computational efficiency of portfolio management. Could the code be provided to do this? Given no code, I see no academic value for this work. The empirical validation is far too small to allow for any reasonable examination of the academic value of this work. ## update after rebuttal I have raised my score due to the code release. Claims And Evidence: The claims and evidence are sufficient. Methods And Evaluation Criteria: The methods and evaluation are sufficient. Theoretical Claims: There is no justification given as to why Bayesian comes up in portfolio management. Experimental Designs Or Analyses: The experimental design seems sufficient for the paper. Supplementary Material: No Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: I'm not sure how or why this work is relevant or useful to the ICML community or the machine learning community. Questions For Authors: Please answer how this is relevant to the ICML community. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: >**Reviewer's Comment:** Could the code be provided to do this? Given no code, I see no academic value for this work. We provide code for reproducibility and the latest paper revision in this **[anonymous dropbox folder](https://www.dropbox.com/scl/fo/i13bhu138gjk76cf5r44v/ACog6LpDbYdBQbB87jtJA-Q?rlkey=mpv3b4xgbmr1cohaen6roxoyf&st=zndq6hnx&dl=0).** **On Code Availability.** We originally withheld the code due to industrial IP auditing procedures. However, per the reviewer's request, we have now completed the necessary internal reviews and are able to share an anonymized version of the code for reproducibility purposes. >**Reviewer's Comment:** The empirical validation is far too small to allow for any reasonable examination of the academic value of this work. We have extended the backtest period of the experiment on the DJI dataset from 20 years to 30 years, as shown in Table 2 (Section 4) and Figure 5 (Appendix G.4). We have also added a turnover rate analysis (Appendix G.6) to further demonstrate our model's outperformance. We note that our experiments aim to prove the concept—specifically, to show that this machine learning model can be readily implemented and is useful (compared to the benchmarks). The two datasets we choose are practically useful as they provide some benefits compared to a larger set of stocks: ease of trading, a smaller impact of slippage (due to higher liquidity), fewer trades per period (thus lower transaction costs), lower managerial costs, and lower computational costs. We need to clarify that the backtest period should be carefully chosen to avoid unfair comparisons. Specifically, although the DJIA covers a long history, some stock data might be inaccessible. Omitting them (as many portfolio studies do) could introduce selection bias because delisted stocks often underperform. 
Thus, we believe our experimental results now (20 yrs for sector ETFs; 30 yrs on the Dow Jones Index) are sufficient for showing the outperformance of our models. >**Reviewer's Comment:** There is no justification given as to why Bayesian comes up in portfolio management. As we mention in **Why Bayesian?** in the related works (Appendix C), the Bayesian framework has long been advocated and used in portfolio management. We state: *“To address the parameter estimation risk in traditional portfolio optimization shown by (Markowitz, 1952; Kalymon, 1971), Barry (1974); Klein & Bawa (1976); Brown (1976) advocate Bayesian framework upon prior information in portfolio optimization.”* Moreover, it has been shown that the Bayesian framework has several benefits, as we state: *“Foundational works by (Jorion, 1986) and (Black & Litterman, 1992) demonstrate how Bayesian shrinkage improves covariance estimation, reducing overfitting and highly sensitive weight in Markowitz-style allocations (Meucci, 2005; DeMiguel et al., 2009).”* We hope this background information clarifies the use of the Bayesian framework. >**Reviewer's Comment:** Please answer how this is relevant to the ICML community. From an essential and practical perspective, our work aims to take different types of data under different scenarios — some data is given, while others are not — to make predictions and decisions. Thus, it addresses machine learning problems using machine learning methods, and should be considered relevant to the general ICML community. Specifically, 1. **Different types of data** include: raw data, feature data extracted from raw data, feature data involving additional information, and heuristic expert knowledge (views). One of the major focuses in our model design is to capture the effects of these data through Bayesian networks with latent variables, a common methodology in machine learning research. These data are described in the introduction (page 1), Sec. 3.2, 3.3, and 3.4 (page 3-5). 2. 
**Different scenarios** refer to Sec. 3.3, where investor views are observed, and Sec. 3.4, where no subjective views are given. In the scenario of Sec. 3.4, our model is showcased in two configurations for handling different types of data. These scenarios are described in the paragraph before Sec. 3.1 (page 3) and the paragraph after Remark 3.3 (page 5). 3. **Predictions** refer to the posterior probability distribution of unobserved asset returns given data, estimated by each designed model. The prediction problems are described in problems 1, 2, 3 (page 2, 4, 5). The prediction models are Def 2.2, 3.3, 3.4, and 3.6, and they make predictions in Lemma 2.1, Corollary 3.1.1, 3.2.1, and 3.3.1, respectively. 4. **Decisions** refer to the solution of the portfolio optimization problem (Def. 2.1, page 2) based on the estimations of each designed model, including Lemma 2.1, Thm 3.1, 3.2, and 3.3. The portfolio optimization problem is also a popular topic in machine learning research. From this perspective, we deem our work to fall within the scope of machine learning and to satisfy the topics of interest (https://icml.cc/Conferences/2025/CallForPapers) of ICML. --- Rebuttal Comment 1.1: Comment: I have raised my score given the code release. --- Reply to Comment 1.1.1: Comment: We are very happy that our revisions and clarifications have met your expectations. Thank you again for your detailed review! Your constructive comments have greatly improved this draft.
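For context on the baseline these models extend (this is the standard textbook Black-Litterman posterior mean, not the paper's feature-integrated version; the two-asset numbers are invented purely for illustration):

```python
import numpy as np

def black_litterman_mean(pi, Sigma, P, q, Omega, tau=0.05):
    """Classical Black-Litterman posterior mean:
    mu = [(tau*Sigma)^-1 + P' Omega^-1 P]^-1 [(tau*Sigma)^-1 pi + P' Omega^-1 q]."""
    A = np.linalg.inv(tau * Sigma)           # precision of the equilibrium prior
    Oinv = np.linalg.inv(Omega)              # precision of the views
    return np.linalg.solve(A + P.T @ Oinv @ P, A @ pi + P.T @ Oinv @ q)

# Made-up two-asset example: equilibrium prior returns and covariance,
# plus one relative view "asset 0 outperforms asset 1 by 2%".
pi = np.array([0.04, 0.03])
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])
P = np.array([[1.0, -1.0]])      # one view row
q = np.array([0.02])
Omega = np.array([[0.0001]])     # small Omega = high confidence in the view
mu = black_litterman_mean(pi, Sigma, P, q, Omega)
print(mu)  # posterior mean tilted toward the view relative to the prior pi
```

With a confident view, the posterior spread between the two assets ends up close to the stated 2%, which is the usual sanity check for this formula.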
Summary: The paper extends the classical Black-Litterman model by incorporating asset features. In the traditional model, investor views and their associated uncertainty are assumed to be given. The author proposes leveraging asset features to estimate both investor views and their uncertainty. Two models are introduced: the first assumes that asset features influence both investor views and uncertainty through a shared hyperparameter, while the second allows asset features to directly influence investor views. Numerical experiments demonstrate that the proposed approach outperforms the Markowitz and market-index baselines. Claims And Evidence: The paper is well written and uses abundant figures to demonstrate the proposed graphical models. Methods And Evaluation Criteria: Yes. The metrics used in the paper for portfolio selection are standard. Theoretical Claims: I checked some of the proofs and did not find obvious errors. Experimental Designs Or Analyses: - **Features Used in Experiments:** What asset features are used in the experiment section? Since the proposed method requires additional input (asset features) compared to the baseline methods, one could easily construct a portfolio that outperforms the baselines by simply assigning greater weight to the tech sector. - **Choice Between Configurations:** Can the authors provide a discussion on how to choose between the two proposed configurations? Since, in practice, practitioners primarily need a formula to compute portfolio weights, such a discussion would be valuable. Similarly, what kinds of asset features are most suitable for the proposed method? Supplementary Material: No. Relation To Broader Scientific Literature: The paper extends the Black-Litterman model and is related to that literature. Essential References Not Discussed: The paper cites sufficient papers in the literature. 
Other Strengths And Weaknesses: **Strengths:** The paper proposes a Bayesian framework for incorporating asset features, and the resulting formulas have closed-form expressions. This is advantageous and practical, as it allows for easy computation. Other Comments Or Suggestions: See Experimental Designs Or Analyses. Questions For Authors: See Experimental Designs Or Analyses. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for the reviews. The latest revision is readily available in the **[anonymous dropbox folder](https://www.dropbox.com/scl/fo/i13bhu138gjk76cf5r44v/ACog6LpDbYdBQbB87jtJA-Q?rlkey=mpv3b4xgbmr1cohaen6roxoyf&st=zndq6hnx&dl=0).** Any changes made from the submitted version are highlighted in blue in this updated draft. ## Features Used in Experiments >**Reviewer's Comment:** What asset features are used in the experiment section? The asset features we use in our experiments are listed in Appendix G.3. They include common indicators (e.g., EMA, MACD, RSI, ...) used in financial analysis. The idea behind selecting these features is to keep things generic, following configuration 1 (as explained in Remark 3.3). We describe our feature usage in **Backtest Task** (page 8): *“In the model, the prior is set as traditional Markowitz model and the features are selected based on nine generic indicators (Table 5) derived from asset-specific data.”* >**Reviewer's Comment:** Since the proposed method requires additional input (asset features) compared to the baseline methods, one could easily construct a portfolio that outperforms the baselines by simply assigning greater weight to the tech sector. We would like to clarify that our method does not necessarily require "additional input": - Similar to the features used in our experiment, they can be extracted from existing raw data (e.g., price and volume). In this case, the feature selection does not favor any particular sector and is broadly applicable to various assets. - That said, in some cases, if an investor has hindsight regarding a range of assets (rather than individual ones), they could include non-asset-specific features (e.g., the QQQ index representing the tech sector) as part of the model. The incorporation of these features should follow configuration 2 (also as explained in Remark 3.3), alongside the generic asset-specific features. 
In our experiment with the sectors dataset, our model does not assign greater weight to the tech sector "XLK". The weight of "XLK" is, in fact, lower than that of "XLP" or “XLF” for most of the time. We have added an example asset allocation visualization (Figure 7, Appendix G.5, https://imgur.com/CCXyE7T) to demonstrate this. ## Choice Between Configurations >**Reviewer's Comment:** Can the authors provide a discussion on how to choose between the two proposed configurations? Yes, we have added an explicit discussion of the choice between configurations. The details are below: - In the original paper version, we discussed the distinctive characteristics of the two configurations in the paragraph after Remark 3.3 (page 5): > *“We showcase the feature-integrated Black-Litterman network as two configurations: one incorporating Effect 1 and another incorporating Effect 2. Intuitively, the first one better captures generic features while the second one more effectively handles the non-asset-related features.”* - To make this concept more explicit in practice, we have added a statement after the above paragraph: > *“This implies that, in practice, if an investor takes generic features of assets (e.g. indicators derived from the time series of each asset, as shown in our experiment), configuration 1 should be used. If an investor takes features not specific to individual assets (e.g. interest rates), configuration 2 should be used. The two configurations are not contradicting, so one can take both types of features and incorporate them correspondingly.”* >**Reviewer's Comment:** in practice, practitioners primarily need a formula to compute portfolio weights, such a discussion would be valuable. Similar to the usage of the Black-Litterman (BL) Formula (Theorem 2.1), the practitioner's formula for computing portfolio weights is provided by Theorem 3.2 and Theorem 3.3, corresponding to each configuration. 
>**Reviewer's Comment:** Similarly, what kinds of asset features are most suitable for the proposed method? Regarding which asset features are suitable, we do not limit the choice of data in this paper and only distinguish data types for the purpose of choosing which configuration to use. For example, in our experiment, we randomly take 9 indicators of each asset price as generic features (see Appendix G.3 on page 25) to demonstrate the practice of configuration 1. In a feature-driven model like ours, the feature engineering literature [1][2] suggests taking diversified (uncorrelated) data to reduce multicollinearity, so that the model can provide robust mean/variance estimation and portfolio optimization decisions. However, detailed methods for the feature selection problem in our model are not the focus of our work, so we leave them for future work. [1]. Gujarati, Damodar, and Dawn Porter. "Multicollinearity: What happens if the regressors are correlated." Basic econometrics 363 (2003). [2]. Alin, Aylin. "Multicollinearity." Wiley interdisciplinary reviews: computational statistics 2, no. 3 (2010): 370-374.
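On the diversification point above, a minimal sketch (our own illustration for this thread, not code from the paper; the indicator series are hypothetical stand-ins) of flagging highly correlated feature pairs before fitting a feature-driven model:

```python
import numpy as np

def correlated_pairs(features, names, threshold=0.9):
    """Return (name_i, name_j, corr) for feature pairs whose absolute
    Pearson correlation exceeds `threshold`."""
    corr = np.corrcoef(features, rowvar=False)
    pairs = []
    for i in range(corr.shape[0]):
        for j in range(i + 1, corr.shape[1]):
            if abs(corr[i, j]) > threshold:
                pairs.append((names[i], names[j], float(corr[i, j])))
    return pairs

# Hypothetical indicator series; "MACD" is constructed to be nearly
# collinear with "EMA" so that the check flags exactly that pair.
rng = np.random.default_rng(0)
ema = rng.normal(size=200)
macd = 0.98 * ema + rng.normal(scale=0.05, size=200)
rsi = rng.normal(size=200)
flagged = correlated_pairs(np.column_stack([ema, macd, rsi]),
                           ["EMA", "MACD", "RSI"])
print(flagged)
```

Any pair whose absolute correlation exceeds the threshold would be a candidate for removal or orthogonalization before estimation.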
Summary: The paper presents a new formulation for the well-known Black-Litterman model, introducing a Bayesian reinterpretation of the model for portfolio optimization, eliminating the need for subjective investor views and their associated uncertainties. The authors analyse the problem from a theoretical perspective and numerically validate their findings. --- Post-rebuttal: The authors provided adequate responses to my concerns. I will maintain my (positive) score. Claims And Evidence: All the claims are supported by evidence. Methods And Evaluation Criteria: Yes, the evaluation criteria are coherent with the scope of the proposed model. Theoretical Claims: I didn't carefully check the proofs. Experimental Designs Or Analyses: The experimental validation is limited but coherent with the scope of the work. Supplementary Material: I looked at the supplementary material, which consists of the proofs of the statements, the related works, and some experimental details. Relation To Broader Scientific Literature: This work is relevant to the specific literature on this portfolio optimization model. Essential References Not Discussed: To the best of my knowledge, relevant literature is presented. Other Strengths And Weaknesses: I think this work is very well presented, even if, due to space constraints, the authors often omitted discussions of their results and choices. Other Comments Or Suggestions: My only comment pertains to the highly specific target and scope of the work, which may make it less suitable for the general audience of ICML. Questions For Authors: No questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the reviews. The latest revision is readily available in the **[anonymous dropbox folder](https://www.dropbox.com/scl/fo/i13bhu138gjk76cf5r44v/ACog6LpDbYdBQbB87jtJA-Q?rlkey=mpv3b4xgbmr1cohaen6roxoyf&st=zndq6hnx&dl=0).** Any changes made from the submitted version are highlighted in blue in this updated draft. >**Reviewer's Comment:** My only comment pertains to the highly specific target and scope of the work, which may make it less suitable for the general audience of ICML. From a practical perspective, our work aims to take different types of data under different scenarios — some data is given, while others are not — to make predictions and decisions. Thus, it addresses machine learning problems using machine learning methods, and should be considered relevant to the ICML community. Specifically, in our work, 1. **Different types of data** include: raw data, feature data extracted from raw data, feature data involving additional information, and heuristic expert knowledge (views). One of the major focuses in our model design is to capture the effects of these data through Bayesian networks with latent variables, a common methodology in machine learning research [1][2][3]. These data are clearly and consistently described in the introduction (page 1), Sec. 3.2, 3.3, and 3.4 (page 3-5). 2. **Different scenarios** refer to Sec. 3.3, where investor views are observed, and Sec. 3.4, where no subjective views are given. In the scenario of Sec. 3.4, our model is showcased by two configurations for handling different types of data. These scenarios are clearly and consistently described in the paragraph before Sec. 3.1 (page 3) and the paragraph after Remark 3.3 (page 5). 3. **Predictions** refer to the posterior probability distribution of unobserved asset returns given data estimated by each designed model. The prediction problems are described in problems 1, 2, 3 (page 2, 4, 5). 
The prediction models are Def 2.2, 3.3, 3.4, and 3.6, and they make predictions in Lemma 2.1, Corollary 3.1.1, 3.2.1, and 3.3.1, respectively. 4. **Decisions** refer to the solution of the portfolio optimization problem (Def. 2.1, page 2) based on the estimates of each designed model, including Lemma 2.1, Thm 3.1, 3.2, and 3.3. The portfolio optimization problem is also a popular topic in machine learning research [4][5][6][7][8]. From this perspective, we deem our work to fall under the general scope of machine learning and satisfy the topics of interest (https://icml.cc/Conferences/2025/CallForPapers) of ICML. [1]. Anandkumar et al. (2013). Learning linear Bayesian networks with latent variables. ICML. [2]. Xie et al. (2016). Diversity-promoting Bayesian learning of latent variable models. ICML. [3]. Lorch et al. (2021). DiBS: Differentiable Bayesian structure learning. NeurIPS. [4]. Agarwal et al. (2006). Portfolio management via the Newton method. ICML. [5]. Qiu et al. (2015). Robust portfolio optimization. NeurIPS. [6]. Ito et al. (2018). Online portfolio selection with cardinality constraints: Regret bounds. NeurIPS. [7]. Tsai et al. (2023). Data-dependent bounds for online portfolio selection. NeurIPS. [8]. Lin et al. (2024). Globally optimal m-sparse Sharpe ratio portfolios. NeurIPS. >**Reviewer's Comment:** due to space constraints, the authors often omitted discussions of their results and choices. The general conclusion of this work is stated in Appendix A: *“We propose ...... real-world datasets (Section 4).”* We also have more detailed discussions of the results of each problem in Sec. 3: - For Problem 2 (Sec 3.3), the discussion includes Remark 3.2 (classical Black-Litterman is a special case), Remark C.1/D.1 (the prediction has a ground truth limit), and a summary at the end of Sec. 3.3 (page 5). 
- For Problem 3 (Sec 3.4), the discussion of the SLP-BL model includes Remark 3.4 (equivalency to classical Black-Litterman) and a summary at the end of **Configuration 1** (page 6). - For Problem 3 (Sec 3.4), the discussion of the FIV-BL model includes Remark C.2/D.2 (equivalency to the SLP-BL model) after Thm. 3.3 and a summary at the end of **Configuration 2** (page 8). - Due to the complex nature of configuration 2, we also discuss the further assumptions (conjugate prior) after Remark C.2 (page 7). To clarify the choice between configurations discussed in Section 3.4, we have added a guiding statement for practitioners at the end of page 5. *“This implies that, in practice, if an investor takes generic features of assets (e.g. indicators derived from the time series of each asset, as shown in our experiment), configuration 1 should be used. If an investor takes features not specific to individual assets (e.g. interest rates), configuration 2 should be used. The two configurations are not contradicting, so one can take both types of features and incorporate them correspondingly.”*
Summary: The paper removes the need for heuristic investor views while maintaining a Bayesian framework. It makes the Black-Litterman model more data-driven, robust, and automated. Claims And Evidence: Claims are well supported. Methods And Evaluation Criteria: The methods and evaluation criteria are reasonable, but the testing is limited to small asset universes (11-38 assets). To assess robustness, the model should be evaluated on larger datasets like those in Asness et al. (2013) and Fama-French (2015). Additionally, turnover analysis and net-of-transaction-cost performance are missing, which are crucial for real-world applicability. Expanding the analysis to start at least in the 1990s and cover a much larger universe of assets would help assess the model’s robustness across more market regimes and economic conditions. - Asness, C. S., Moskowitz, T. J., and Pedersen, L. H. (2013). Value and Momentum Everywhere. The Journal of Finance, 68(3):929–985. - Fama, E. F. and French, K. R. (2015). A Five-Factor Asset Pricing Model. Journal of Financial Economics, 116(1):1–22. Theoretical Claims: The proofs (Theorems 3.1, 3.2, 3.3) appear correct, following standard Bayesian inference. Experimental Designs Or Analyses: The backtests (2004-2024) are not enough (for the equities portfolio), and the small asset universe (38 assets) limits generalizability. As noted earlier, turnover, transaction costs, a longer backtest (1990s onward), and larger datasets (Asness et al. 2013, Fama-French 2015) should be included for robustness. Supplementary Material: No Relation To Broader Scientific Literature: The paper extends Bayesian Black-Litterman (Kolm & Ritter) by making views data-driven and explains its contributions well. Essential References Not Discussed: All the essential references are well discussed. Other Strengths And Weaknesses: No Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for the insightful questions and reviews. The latest revision is readily available in the **[anonymous dropbox folder](https://www.dropbox.com/scl/fo/i13bhu138gjk76cf5r44v/ACog6LpDbYdBQbB87jtJA-Q?rlkey=mpv3b4xgbmr1cohaen6roxoyf&st=zndq6hnx&dl=0).** Any changes or modifications made from the submitted version are highlighted in blue in this updated draft. >**Reviewer's Comment:** The backtests (2004-2024) are not enough (for the equities portfolio), and the small asset universe (38 assets) limits generalizability. As noted earlier, turnover, transaction costs, a longer backtest (1990s onward), and larger datasets (Asness et al. 2013, Fama-French 2015) should be included for robustness. In response to the suggestion of **backtesting periods**, we have added an extended version of our experiment on the Dow Jones Index to backtest from 1994 to 2024 (previously 2004-2024). As demonstrated in Table 2 (https://imgur.com/KcE5QxV), the general results remain consistent, with our model continuing to show (even greater) outperformance compared to the traditional Markowitz model. This outperformance is likely attributed to the more stable portfolio weights provided by our Bayesian-based model. We have added an example asset allocation visualization (Figure 7, Appendix G.5, https://imgur.com/CCXyE7T) to demonstrate this. Per the request for **turnover analysis**, we have added a new Appendix G.6 to compare the turnover rates between our SLP-BL model and the benchmark Markowitz model. The analysis includes Table 6 (https://imgur.com/Azl39Y0) presenting the average turnover rates, as well as visualizations of example turnover rates for both models (Figure 8 and 9, https://imgur.com/nqZ3rkk). Specifically, the average turnover rate for all the SLP-BL models is 24.79 on the DJI dataset and 19.51 on the sectors dataset, while for all the Markowitz models, the rates are 50.24 and 48.08, respectively. 
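For reference, the turnover rates compared above can be computed with a simple convention (this is our own minimal sketch, not the authors' code; conventions vary, e.g. some definitions divide the sum by two):

```python
import numpy as np

def turnover(w_prev, w_new):
    """One-period turnover: total absolute change in portfolio weights
    at a rebalance (plain-sum convention)."""
    return float(np.abs(np.asarray(w_new) - np.asarray(w_prev)).sum())

# Hypothetical three-asset portfolio rebalanced once.
w_before = np.array([0.5, 0.3, 0.2])
w_after = np.array([0.4, 0.4, 0.2])
print(turnover(w_before, w_after))
```

Averaging this quantity over all rebalances gives the per-model figures reported in a table like Table 6.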
We would like to note that our experiments aim to prove the concept—specifically, to prove that this machine learning model could be readily implemented and useful (compared to the benchmarks): - In the theoretical sections, we provide closed-form solutions for every model in our work. In experiments, we use Thm. 3.2 for the SLP-BL model, showing it is readily implemented. - We take the asset sets, including the 41 DJI stocks and the 11 sectors, to represent the samples of at least large-cap equities. These sets are practically used as they provide some benefits compared to a larger set of stocks: ease of trading, smaller impact of slippage (due to higher liquidity), fewer trades per period (thus lower transaction costs), lower managerial costs, and lower computational costs. - The monthly rebalance setting is also designed to enhance these benefits (particularly reduce the impact of transaction costs) while maintaining a similar performance compared to weekly or daily rebalance. Thus, in response to the reviewer’s suggestion regarding **asset universes** and **transaction costs**, we choose a smaller — but representative — set of assets for their practical usefulness, including the advantage of lower transaction costs. Our proof-of-concept experiment design also aims to minimize the effect of transaction costs, without explicitly considering them. Lastly, we need to clarify that the backtest period should be carefully chosen to avoid unfair comparison. Specifically, although the DJIA covers a long history, some stock data might be inaccessible. For example, companies like American Can Company, Navistar International Corporation, and USX Corporation were part of the DJIA before May 1991 but have since been delisted or replaced, making their data difficult to retrieve. Omitting these stocks (as many portfolio studies do) could introduce selection bias because delisted stocks, in the aftermath, often underperform. 
This is particularly concerning when backtesting over earlier years, as more data is unavailable. Notably, we observed better performance in the benchmark equal-weighted portfolio (EQW) but worse performance in other benchmarks - we suspect that the EQW may be benefiting from this selection bias. In terms of fair comparison, we believe our experimental results (20 yrs for sector ETFs; 30 yrs on the Dow Jones Index) are now sufficient for demonstrating the outperformance of our models.
CUPS: Improving Human Pose-Shape Estimators with Conformalized Deep Uncertainty
Accept (poster)
Summary: The paper introduces CUPS, a video-based HMR approach with uncertainty quantification. Specifically, the method uses GLoT as a base model to extract global and local features from videos. An adversarial loss is defined on the output meshes during training, similar to VIBE. The discriminator output (from a sigmoid layer) estimates the uncertainty of the predictions (whether they belong to the real dataset or not). Next, the paper applies the statistical tools developed by Barber et al. to calibrate the uncertainties. The paper presents two theoretical bounds and demonstrates state-of-the-art performance on standard benchmarks. Claims And Evidence: The claims are generally supported, including achieving state-of-the-art results and improved generalizability. The proofs also follow prior references. Methods And Evaluation Criteria: The paper uses a recent paper (GLoT) as a base model, which is appropriate. Additionally, it tests the model on popular benchmarks like 3DPW and Human3.6M, following prior research. The evaluation metrics also follow the standards in this field. Theoretical Claims: All proofs follow prior research from Barber et al. 2023. Experimental Designs Or Analyses: All experiments were reviewed, and they are sound. Supplementary Material: I cross-checked the proofs with prior works from Barber et al. 2023 but did not thoroughly check for typing errors. The last three sections also provide helpful information about computational cost and some details of training, testing, and comparisons, such as the hold-out set used for calibration. All works are appropriately cited, and I found no issues. Relation To Broader Scientific Literature: Uncertainty quantification and conformal prediction are valuable tools for machine learning tasks, especially for HMR, where a large portion of the SMPL parameter space does not represent a plausible pose. 
As a result, obtaining the uncertainty and calibrating it can significantly improve the estimation performance. Such approaches are often found in medical papers, where uncertainty is paramount. However, more recent papers have adopted uncertainty estimation in HMR, showing significant performance gains. Therefore, given the context of recent research, this paper is timely and addresses an interesting topic. Essential References Not Discussed: All references in the manuscript are adequate. However, I am interested in the authors' opinions on other uncertainty prediction approaches, like RLE [1] or similar works, and how they could be adapted into their framework. [1] Li, Jiefeng, et al. "Human pose regression with residual log-likelihood estimation." Proceedings of the IEEE/CVF international conference on computer vision. 2021. Other Strengths And Weaknesses: ### Strengths 1. The paper provides a creative combination of existing ideas, resulting in improvements. 2. The paper is well-written. ### Weaknesses 1. The paper does not fully explore the limitations of calibration methods, which often involve memory and computation costs. More information on this (beyond the supplementary material) would be appreciated. 2. Some parts are unnecessarily explained, especially where they are not the paper's main contribution. Given the recency of the tools, this could be acceptable. 3. No comparisons are provided with other uncertainty modeling papers. 4. The results of the method are not very surprising, as adding any uncertainty-aware module may result in better performance. 5. The contributions are somewhat limited, especially given similar works like [1]. The paper's main focus, uncertainty modeling and calibration, is not experimented upon, and there are minimal comparisons with other approaches. References: [1] Zhang, Harry, and Luca Carlone. "CHAMP: Conformalized 3D Human Multi-Hypothesis Pose Estimators." arXiv preprint arXiv:2407.06141 (2024). 
Other Comments Or Suggestions: I suggest including the values of hyperparameters, conformal dataset size, and architectural details in the manuscript or supplementary materials. Please see my other comments for more suggestions/questions. Questions For Authors: 1. Could you include the hyperparameter's chosen values in the manuscript or supplementary materials? 2. How impactful is the choice of feature distance function? How important is the choice of the temperature? And did you consider using rotational distance instead of L2 distance of rotations? 3. Why is the beta defined with different values across frames? Specifically, in the problem formulation in line 160 (left), you define a separate beta for each frame. If it represents the shape parameters, should it not be the same for all frames? 4. I understand if it is not possible to incorporate other uncertainty estimation approaches and provide their results, but could you elaborate and discuss how they could be implemented into your pipeline? and if it would be an appropriate choice? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank Reviewer wgxo for their thoughtful feedback and insightful questions. We especially appreciate their recognition of our **creativity** and **theoretical soundness**. Below, we address their comments in detail. > On calibration’s computational cost: Calibration in CUPS is **lightweight**. We hold out a small subset (<1000 samples) from the training data as a calibration set. In practice, we only use 500 calibration samples (i.e., about 8 forward-pass batches), and once the conformity threshold is computed, it remains fixed. No additional inference-time cost is incurred. > On hyperparameters and distance function: Thank you for pointing this out. We **will include a full list of hyperparameters** in the appendix. The only one currently missing is the **temperature**, which is set to 20. While both the temperature and the ablated parameter $\lambda$ affect the strength of the score loss, $\lambda$ has a greater impact. Regarding the distance function, since we operate in a high-dimensional feature space, rotation-based metrics are inapplicable. An alternative we explored was cosine similarity, which also works well. > On alternative uncertainty methods and adapting other backbones: We appreciate the reference to RLE [1]. RLE formulates pose estimation as distribution matching, embedding uncertainty into the architecture via flow matching. In contrast, CUPS uses MC Dropout to emulate a probabilistic output space but remains modular—*any probabilistic/generative backbone like RLE could be used within the Conformal Prediction (CP) framework*. CP’s model-agnostic nature allows CUPS to **adapt to new backbones with minimal changes while maintaining theoretical coverage** guarantees. For example, across all metrics, CUPS outperforms Dwivedi et al. (2024), which uses learned occlusion confidences for pose uncertainty measurement but lacks shape uncertainty modeling and theoretical coverage guarantees. 
Unlike methods that inject robustness via heavy architectural modifications and constraints, CUPS provides **unified, reliable uncertainty quantification for both pose and shape** with minimal overhead from conformal prediction. Thus, when it comes to selecting other models for our pipeline, we would *still use conformal prediction* but might incorporate probabilistic (generative) models, which give multi-hypothesis outputs efficiently. > On comparison with CHAMP: Thank you for highlighting this. We cite CHAMP in Section 2, as it inspired CUPS. However, CUPS addresses key limitations noted in CHAMP’s paper: - Pose-only: CHAMP focuses solely on pose estimation, whereas CUPS extends conformal prediction to pose-shape models, enabling richer and more expressive representations of human motion. - Theoretical rigor: CHAMP’s application of CP to **non-exchangeable** datasets like human motion videos leads to mostly empirical coverage guarantees since the CP assumptions are violated. CUPS builds on recent advances in CP **beyond exchangeability**, providing a **rigorous theoretical framework** that accounts for the structure and characteristics of video-based datasets. > On shape parameter consistency (Line 160): We apologize for any confusion we might have caused. The reviewer is right in pointing out that the shape parameters should remain consistent across frames, but for the sake of **mathematical formulation**, especially with the use of a sequential transformer model, the pose and shape parameters are output as a **sequence**. While we do not enforce each frame’s shape to be exactly the same, empirical outputs show consistent shapes across frames, and it might be interesting to **explore losses that regularize shape consistency**. > Misc. We thank the reviewer for pointing out the redundancy in some of the method introductions. We will make our writing more concise. The main uncertainty-based baselines we compare with are Dwivedi et al. 
(2024) and 3DMB (Biggs et al., 2020), which we outperformed by a noticeable margin. Other uncertainty-based baselines, which we are exploring right now, include swapping out GLoT with different probabilistic HMR models. Adding uncertainty itself **does not necessarily improve the results**. Our main contribution is a framework that quantifies the uncertainty with theoretical rigor while backpropagating the uncertainty into the pose learning process with delicate designs that improve the final outputs. Most UQ methods such as CP do not "close the loop"; they assume a well-trained model, and **very few prior works have explored incorporating uncertainty into the learning process.** As Reviewer aBRZ said, CUPS "offers novel synergies to advance **safety critical** vision systems." --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed rebuttal and for addressing my questions, particularly regarding the RLE and its incorporation into the CP framework. These clarifications have resolved my primary concerns. I have no further questions at this time. I am revising my recommendation after considering the authors' responses and the other reviews. I recommend Minor Acceptance, conditional upon incorporating these clarifications into the manuscript or supplementary materials. This work demonstrates sufficient novelty for video HMR within the context of this conference and offers a valuable foundation for future research in this area. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate Reviewer wgxo for raising the score and for acknowledging that CUPS **offers a valuable foundation for future research**. We will make sure to incorporate the comments and clarifications into the final version.
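As background for the calibration cost discussed in this thread, a minimal sketch of the standard split-conformal threshold computation (our own illustration under a plain exchangeability assumption; CUPS additionally uses weighted calibration for non-exchangeable video data, which is omitted here):

```python
import numpy as np

def conformal_threshold(cal_scores, alpha):
    """Split-conformal threshold: the ceil((n+1)(1-alpha))-th smallest
    calibration nonconformity score, giving >= 1 - alpha marginal
    coverage under exchangeability."""
    scores = np.sort(np.asarray(cal_scores))
    n = len(scores)
    k = min(int(np.ceil((n + 1) * (1 - alpha))), n)
    return float(scores[k - 1])

rng = np.random.default_rng(1)
cal = rng.exponential(size=500)   # e.g. ~500 held-out samples, as in the rebuttal
tau = conformal_threshold(cal, alpha=0.1)

# On fresh samples from the same distribution, empirical coverage is ~90%.
fresh = rng.exponential(size=10_000)
print((fresh <= tau).mean())
```

Once `tau` is computed from the held-out scores it stays fixed, which is why no extra inference-time cost is incurred.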
Summary: This paper introduces CUPS, a method that integrates conformal prediction with deep uncertainty learning for 3D human pose-shape estimation from monocular videos. The key innovation lies in training an end-to-end deep uncertainty function alongside the reconstruction model, which serves as a conformity score for constructing prediction sets with statistical guarantees. By addressing non-exchangeability in video data through weighted conformal calibration, CUPS achieves state-of-the-art performance on benchmarks, while providing theoretical bounds on coverage gaps. The method is validated through extensive experiments, ablation studies, and in-the-wild tests. ## update after rebuttal As the authors address most of my and the other reviewers' concerns, especially the final clarifications on multi-hypothesis aggregation, alignment, and the additional quantitative truncation test, I have decided to keep my recommendation of Weak Accept. It would be interesting to also test the method on a more powerful backbone. Claims And Evidence: The claims are well-supported: - Good performance is evidenced by quantitative comparisons. The effectiveness of uncertainty-aware training is demonstrated via ablation studies (e.g., training-time ensemble augmentation). - Some mathematical proofs are provided to support the correctness of the theorems. Theoretical coverage guarantees are derived for non-exchangeable data, with empirical validation. Minor improvements could include clarifying the practical implications of the coverage bounds. Methods And Evaluation Criteria: - The methodology is sound: the global-local transformer architecture aligns with recent advances, and the integration of adversarial training for uncertainty scoring seems interesting. - Evaluation metrics (MPJPE, PA-MPJPE, Accel) and datasets are standard for 3D pose estimation. The inclusion of in-the-wild tests strengthens practical relevance. 
May I ask: - **(training & inference)** Is it true that multiple training samples $H$ are drawn during training, but only a **single** inference is performed at test time on standard benchmarks (e.g., Tab. 1)? Is MC Dropout multi-hypothesis generation with set selection according to the DUF an optional feature, or does inference also sample several hypotheses and aggregate them with the DUF? - **(Degenerated discriminator)** GAN studies find that the discriminator's role is to aid the generator's learning and that it will eventually degrade. However, the discriminator in this paper seems to work well. Could the authors elaborate and provide some insight? Theoretical Claims: From my perspective, the proofs for Theorems 1–3 are logically structured. The adaptation of Barber et al.’s framework to handle non-exchangeable data is appropriate. - **($\beta$ distribution)** Theorem 3 assumes that the conformity scores follow a Beta distribution. How can this assumption be validated in practice? Experimental Designs Or Analyses: Experiments are comprehensive, covering several datasets, baselines, and ablation studies. May I ask: - **(Comparisons with multi-hypo methods)** The authors do not seem to clarify why they discuss, but do not compare with, multi-hypothesis aggregation methods. - **(Figs. 1 & 6 multi hypos)** These do not seem very meaningful, as e.g. the hands are not aligned with the clear image; I would expect to see the hypotheses span along the depth axis in the side view. - **(GAN DUF)** Introducing an adversarial loss is indeed meaningful, but could the instability of adversarial training potentially lead to difficulties in model convergence? Additionally, to what extent does the adversarial loss impact the results? I am not referring to $\lambda$ in Fig. 5 but to the GAN training settings and hyperparameters. Corresponding ablation experiments could be conducted to explore these questions. - **(CP under occlusion)** Could an analysis be done on 3DPW-occlusion (as in PARE) and 3DPW-truncation (as in NIKI)? How would the results change? 
Supplementary Material: The appendices provide necessary details on exchangeability definitions, proofs of the theorems, coverage proofs, and dataset details. Relation To Broader Scientific Literature: This work bridges three key research threads: conformal prediction, 3D human pose-shape estimation, and deep uncertainty quantification, offering novel synergies to advance safety-critical vision systems. Essential References Not Discussed: To my knowledge, the paper cites the relevant essential works. A few additional works could also be discussed, including probabilistic multi-hypothesis methods (MHEntropy, ICCV'23) and uncertainty-aware HPE (though 2D, PlausibleUncertainties DER, ICCV'23). Other Strengths And Weaknesses: **[Strengths]** - Reasonable integration of conformal uncertainty learning and multi-hypothesis HMR. - Superior empirical results across datasets. - Also provides some practical contributions (e.g., MC dropout seems well suited to BERT-like masking for DUCS construction). **[Weaknesses]** - Limited discussion of the **computational overhead** of multi-hypothesis DUCS prediction. - **(Corner hard cases)** The model's ability to quantify uncertainty under extreme scenarios like severe occlusions or rapid motion is crucial. While tested on in-the-wild videos, the paper does not explicitly evaluate performance in these situations. Other Comments Or Suggestions: Please see other sections. Questions For Authors: Please see the above. Ethical Review Concerns: NA. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank Reviewer aBRZ for their thoughtful feedback and insightful questions. We especially appreciate their recognition of the **strength of our experiments** and **practical and theoretical contributions**, offering **novel synergies to advance safety-critical vision systems**. Below, we address their comments in detail.

> On training augmentation and multi-hypothesis aggregation:

The reviewer is correct: we compare single-hypothesis outputs across methods for **fairness**, as many baselines are single-output by design. MC Dropout is optional and provides two key advantages: (1) it naturally yields multiple hypotheses, and (2) using conformity scores, we can aggregate them via a weighted average that suppresses low-quality predictions: $\bar{x} = \sum_i w_i \cdot x_i, \text{ where } w_i \propto \text{conformity score}(x_i)$. To clarify its benefit, we provide a comparison below showing how multi-hypothesis aggregation (H=20, cutting off samples below the calibrated threshold) improves performance over the single-sample case.

| Improvement on | PA-MPJPE | MPJPE | MPVPE | Accel |
|----------------|----------|-------|-------|-------|
| 3DPW           | -0.9     | -1.2  | -1.1  | -0.2  |
| MPI-INF-3DHP   | -1.1     | -2.3  | n/a   | -0.1  |
| Human3.6M      | -1.6     | -2.2  | n/a   | -0.2  |

As we can see, multi-hypothesis aggregation improves the results further. The GPU usage is under 12 GB.

> On the score function and hyperparameters:

While the score function resembles a discriminator, it is fundamentally different from a GAN setup. Its role is not to classify real/fake samples but to **rank predictions** based on conformity. 
We use a weaker adversarial loss and backpropagate the score loss only once every 100 SMPL updates to ensure training stability; more frequent updates (e.g., every 10–50 steps) lead to instability, as we will discuss in the final version. MC Dropout also induces **distributional oscillations** during training, which can prevent full convergence of the score function. This is acceptable: conformal prediction requires only that the conformity score be consistent across calibration samples, not fully converged.

> On the choice of the beta distribution:

We chose the beta distribution based on empirical evidence:
- Calibration scores were fitted well via MLE. Please check out this [new link](https://sites.google.com/view/cups-occlusion-supp/home) for **plots of fitted vs. theoretical Beta alignment**.
- The Kolmogorov–Smirnov test failed to reject the beta hypothesis.

This choice allows for analytical tractability in deriving theoretical bounds while remaining grounded in the observed data distribution.

> On hard OOD examples and occlusions:

Some in-the-wild videos we show are fast-paced and more challenging than the training data. While a few examples in the paper may not illustrate this well, we encourage the reviewer to visit [our website](https://sites.google.com/view/champpp) for full videos. These show **more diversity in foot/hand motion and depth variation**. We also add **two new visualizations** highlighting CUPS's robustness under heavy occlusions on a separate [anonymous website](https://sites.google.com/view/cups-occlusion-supp/home). CUPS does occasionally inherit failure cases from its GLoT backbone, but it requires only minimal architectural changes. Importantly, since CP is **model-agnostic**, stronger backbones can be **easily swapped in** for future improvements while preserving theoretical guarantees. 
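The Beta-fit validation described above (an MLE fit followed by a Kolmogorov–Smirnov check) can be sketched in a few lines of SciPy. This is a self-contained illustration, not the paper's actual pipeline: the synthetic `scores` array and its `(2.0, 5.0)` shape parameters are stand-ins for real calibration conformity scores.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-in for calibration conformity scores in (0, 1);
# the real scores would come from the trained score function.
scores = rng.beta(a=2.0, b=5.0, size=500)

# MLE fit of a Beta distribution with its support fixed to [0, 1].
a_hat, b_hat, loc, scale = stats.beta.fit(scores, floc=0, fscale=1)

# One-sample Kolmogorov-Smirnov test against the fitted Beta:
# a large p-value means we fail to reject the Beta hypothesis.
ks_stat, p_value = stats.kstest(scores, "beta", args=(a_hat, b_hat, loc, scale))
print(round(a_hat, 2), round(b_hat, 2), round(p_value, 3))
```

On real scores, a small KS statistic and a non-tiny p-value would support (though not prove) the Beta assumption of Theorem 3.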
> On missing related work:

We thank the reviewer for flagging the two missing works: MHEntropy introduces entropy-based methods for hand pose-shape recovery, especially effective under occlusion. As discussed above, while CUPS does not explicitly model occlusions (we assume the underlying backbone is somewhat robust), we could further improve CUPS by incentivizing diverse outputs during training via entropy regularization. This could potentially yield more robust and diverse outputs. Plausible Uncertainties focuses on 2D pose regression. While useful, our choice of conformal prediction offers broader applicability, minimal architectural modification, and theoretical guarantees, making it better suited for general 3D human pose-shape tasks.

> On set prediction's computation cost:

Multi-hypo prediction in CUPS is **lightweight**. We achieve, on average, 20 ms/segment on video data on a single V100 GPU. This is because MC Dropout itself is not memory-expensive. Other approaches, such as diffusion-based probabilistic backbones, might consume more memory, but the bottleneck does not come from CP or calibration.

---

Rebuttal Comment 1.1: Comment: I sincerely thank the authors for the careful feedback. There are still some concerns I want to clarify:
- Multi-hypo methods do not only report multiple outputs; they also report the result of a single aggregated prediction, as in D3DP. Yet most of them are skeleton-based and use different settings.
- For multi hypos, I still find the quality unsatisfactory, as there are many cases where the hypotheses are not aligned with the image and show unwanted diversity.
- For the occlusion/truncation sensitivity test, I would encourage including quantitative results instead of only qualitative ones.

I will likely keep my scores and follow the other review discussions. Thanks.

---

Reply to Comment 1.1.1: Comment: We sincerely appreciate Reviewer aBRZ for their feedback and for acknowledging the effort of our reply. 
- Regarding multi-hypo fairness, we apologize for any confusion we might have caused. We do indeed aggregate the multiple hypotheses, and the performance gain above is achieved by this aggregation. We did not put this in the original manuscript because it is unclear whether this aggregation is fair to single-hypothesis methods. However, we will add it to the final version.
- Regarding some unsatisfactory results, we acknowledge that some results are off due to the backbone we are using. For many test cases from OOD in-the-wild videos, using just the backbone model, GLoT, the alignment was worse -- CUPS actually **improved the alignment** here with the E2E conformity score function. While CUPS's score function improves the backbone performance **by a noticeable margin**, the improvement will not be unbounded. However, CUPS has two key properties that alleviate this issue:
  - CUPS is **modular**, which lets us swap in stronger backbones to achieve better alignment results. This involves minimal framework change, and we are working on incorporating a diffusion-based model as CUPS's backbone.
  - CUPS's DUCS ranks and filters out "bad" hypotheses. One of our main contributions is the ability to **score the outputs with mathematical guarantees**. While some outputs are off (i.e., unwanted diversity), they will be downweighted during the aggregation step, potentially **reducing the impact of bad outputs.**
- Regarding occlusion/truncation tests, we have completed larger-scale quantitative experiments. We re-ran the Table 1 experiments (using the same trained model), but with the bottom 25% of every input image sequence truncated. 
| Dataset | Setting | PA-MPJPE | MPJPE | MPVPE | Accel |
|---------------|-----------------|----------|-------|-------|-------|
| 3DPW          | w/o truncation  | 48.7     | 76.2  | 91.7  | 6.9   |
| 3DPW          | w/ truncation   | 50.7     | 79.7  | 94.8  | 7.0   |
| MPI-INF-3DHP  | w/o truncation  | 61.3     | 92.8  | n/a   | 7.2   |
| MPI-INF-3DHP  | w/ truncation   | 62.1     | 93.2  | n/a   | 7.8   |
| Human3.6M     | w/o truncation  | 44.0     | 63.8  | n/a   | 3.5   |
| Human3.6M     | w/ truncation   | 46.1     | 64.9  | n/a   | 3.7   |

Despite the 25% truncation, the performance drop is small, and in many cases the results remain better than the untruncated baselines in Table 1, indicating CUPS's **robustness** to occlusion. We will try more of the datasets Reviewer aBRZ mentioned, such as 3DPW-Occlude and 3DPW-Truncate, and incorporate the results in the final version. We hope our answers have further clarified your concerns and questions. Please let us know if you have further comments. Thanks.
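The conformity-weighted aggregation discussed earlier in this thread, $\bar{x} = \sum_i w_i \cdot x_i$ with a calibrated cutoff, can be sketched as below. The hypotheses, scores, threshold, and fallback rule here are toy assumptions for illustration, not outputs of the actual model.

```python
import numpy as np

def aggregate_hypotheses(hypotheses, scores, threshold):
    """Conformity-weighted average, dropping hypotheses whose score
    falls below the calibrated threshold."""
    hypotheses = np.asarray(hypotheses, dtype=float)
    scores = np.asarray(scores, dtype=float)
    keep = scores >= threshold
    if not keep.any():                     # fall back to the best hypothesis
        keep = scores == scores.max()
    w = scores[keep] / scores[keep].sum()  # normalize kept scores to weights
    return (w[:, None] * hypotheses[keep]).sum(axis=0)

# Toy example: 4 hypotheses of a 3-dim parameter vector; the last one
# is a low-quality outlier that the threshold should suppress.
hyps = [[1.0, 0.0, 0.0],
        [1.2, 0.0, 0.0],
        [0.8, 0.0, 0.0],
        [9.0, 9.0, 9.0]]
scores = [0.9, 0.8, 0.7, 0.1]
agg = aggregate_hypotheses(hyps, scores, threshold=0.5)
print(agg)
```

Because the low-scoring hypothesis is filtered before averaging, the aggregate stays close to the consensus of the three high-conformity hypotheses.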
Summary: This paper presents CUPS, an approach to infer 3D human shapes and poses from videos. At its core is a deep uncertainty function that is trained jointly with 3D pose estimation and computes a conformity score to optimize the pose prediction at inference. Experimental results on different datasets and metrics demonstrate that the proposed method outperforms existing baseline methods on human pose estimation tasks. Claims And Evidence: This paper conducts experiments on different datasets including 3DPW, MPI-INF-3DHP, and Human3.6M, and the experimental results illustrate the effectiveness of the approach. Methods And Evaluation Criteria: The method was evaluated on different metrics including PA-MPJPE, MPJPE, and MPVPE. The results show that CUPS outperforms existing methods. Theoretical Claims: I have checked the Definition and Theorems 1–12, which are technically solid. Experimental Designs Or Analyses: The experimental analysis is sound. This paper conducts experiments on different datasets including in-the-wild videos, and also provides experimental analysis (e.g., empirical coverage) to verify the proposed method. Supplementary Material: I have reviewed Supplementary Material parts A–F. Relation To Broader Scientific Literature: This paper discusses its relations to existing methods in the Related Work. Essential References Not Discussed: The references are good. Other Strengths And Weaknesses: Strengths: This paper is well-written and easy to follow. The paper solves an interesting problem of 3D human pose and shape prediction from videos. Weaknesses: Some components in the paper are not thoroughly verified. For example, how do the global and local transformers improve the results? How are the global and local features defined, and how are they decoupled? Other Comments Or Suggestions: Cite and discuss CHAMP: Conformalized 3D Human Multi-Hypothesis Pose Estimators. 
Questions For Authors: How does the deep uncertainty function decouple pose and shape for motion correction? For example, we can adjust the pose or shape to make the prediction aligned with the ground truth in training. What’s the advantage of the deep uncertainty function over the pose discriminator used in VIBE? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank Reviewer Xj5H for their thoughtful feedback and insightful questions. We especially appreciate their recognition of the **strength of our experiments**, the **theoretical contributions** of our work, and the **effectiveness** of the approach. Below, we address each of the reviewer’s comments in detail. > On comparison with CHAMP: Conformalized 3D Human Multi-Hypothesis Pose Estimators Thank you for highlighting this connection. We are indeed aware of CHAMP and have cited it in Section 2. CHAMP served as a key inspiration for our work, as it is among the few papers that explore conformal prediction for human pose estimation. However, CUPS addresses two major limitations noted in CHAMP: - Pose only: CHAMP focuses solely on pose estimation, whereas CUPS extends conformal prediction to pose-shape models, enabling richer and more expressive representations of human motion. - Theoretical soundness: CHAMP’s application of CP to **non-exchangeable** datasets like human motion videos leads to mostly empirical coverage guarantees since the **CP assumptions are violated**. On the other hand, CUPS builds on recent advances in **CP beyond exchangeability**, providing *a rigorous theoretical framework* that accounts for the structure and characteristics of video-based datasets. > On the global and local transformer components: We apologize for any confusion. As cited in the paper, CUPS builds upon the GLoT backbone (Shen et al., 2023), which introduced the global and local transformer modules. While these are not novel contributions of CUPS (and this should be correctly represented in the paper), we summarize their roles here for clarity: - The global transformer captures *long-range temporal dependencies* to ensure consistency in human motion across frames. - The local transformer focuses on *fine-grained temporal dynamics*, refining predictions by modeling short-term variations around mid-frames. 
The combination of these modules produces a decoupled global-local representation. For implementation details, please refer to GLoT’s [official codebase link](https://github.com/sxl142/GLoT/blob/main/lib/models/GLoT.py#L53). > On the deep uncertainty function (DUF) for pose and shape: CUPS applies conformal prediction to the output SMPL parameters, thereby **quantifying uncertainty over both pose and shape**. The learned conformity score function *implicitly* decouples the two: when either pose or shape is off, the conformity score may be low; when only one is off, the score may still be high depending on the calibration distribution. While it is possible to explicitly decouple pose and shape by using two separate conformity scores (and thus two CP procedures), doing so would complicate the theoretical analysis. Modeling the *interdependencies* between pose and shape would be necessary to maintain valid performance bounds. > On comparison with VIBE’s discriminator: Thank you for raising this point. While both VIBE’s motion discriminator and CUPS’s DUF are trained adversarially, they serve distinct roles: - VIBE’s discriminator relies on **motion priors** from AMASS, introducing supervision from external datasets. - CUPS’s DUF, in contrast, is **self-supervised**. It is trained using an ensemble of predictions generated via Monte Carlo dropout, requiring no additional data. This self-supervised setup makes CUPS **more modular and efficient**, introducing minimal changes to the backbone architecture. Moreover, because CP is model-agnostic, CUPS can be easily adapted to other backbones beyond GLoT while retaining its theoretical guarantees.
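As background for the coverage guarantees referenced throughout this thread, the basic split-conformal calibration step can be sketched as follows. This is a minimal illustration under the standard exchangeability assumption (the non-exchangeable video setting discussed above requires adjusted bounds), and the uniform scores are synthetic stand-ins for real nonconformity scores.

```python
import numpy as np

def conformal_quantile(nonconformity, alpha):
    """Finite-sample quantile used in split conformal prediction:
    for exchangeable data, a fresh test score is <= this value with
    probability at least 1 - alpha."""
    s = np.sort(np.asarray(nonconformity, dtype=float))
    n = len(s)
    k = int(np.ceil((n + 1) * (1 - alpha)))  # conservative finite-sample rank
    k = min(k, n)                            # guard against very small alpha
    return s[k - 1]

rng = np.random.default_rng(1)
cal = rng.uniform(size=999)                  # synthetic calibration scores
q = conformal_quantile(cal, alpha=0.1)

# Empirical check of the marginal coverage guarantee on fresh draws.
test = rng.uniform(size=10_000)
coverage = (test <= q).mean()
print(round(q, 3), round(coverage, 3))
```

With `alpha = 0.1` and 999 calibration points, the empirical coverage on fresh exchangeable draws should land near 90%, matching the marginal guarantee.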
Summary: This paper introduces a novel method for human pose and shape estimation, utilizing the SMPL representation, from video sequences. The proposed approach incorporates conformalized deep uncertainty modeling, which allows for the generation of multiple samples, in contrast to the single-output methods commonly found in the literature. The uncertainty is theoretically calibrated, providing a safety guarantee for robotics and other downstream applications. The experiments are comprehensive and demonstrate state-of-the-art performance compared to existing baselines. Claims And Evidence: The authors claim to propose a new method that provides both human pose and shape, along with the conformalized uncertainty scores. These claims are supported by detailed methods and extensive experiments demonstrating the performance. Methods And Evaluation Criteria: The high-level approach involves a transformer-based architecture that predicts the parameters of the SMPL representation, utilizing a global regressor and a local corrector that focuses on different parts of the video sequences. The uncertainty is predicted by training a neural network that takes both the input X and the output Y, providing a probability score between 0 and 1. This design is similar to a discriminator, with the authors employing a similar loss function to supervise the network. The authors also address the important issue of "calibration" to ensure that the output uncertainty score is well-calibrated within a probabilistic framework. Detailed and reasonable proofs are provided both in the main text and the supplementary material. Theoretical Claims: The architecture of the network follows standard deep learning practices, such as Transformers and MLPs. The main theoretical contribution lies in the calibration of uncertainty, which is sound and well-justified. Experimental Designs Or Analyses: The experiments are extensive, and the results are impressive. 
Under uncertainty modeling, the method achieves state-of-the-art performance across nearly all metrics when compared to the baselines. Supplementary Material: The proof in the supplementary materials is comprehensive, providing detailed insights into the designs and underlying theories. Relation To Broader Scientific Literature: A human pose and shape estimator with uncertainty scores can be applied in safety-critical areas, such as robotics, enabling a wide range of applications. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Uncertainty modeling is a highly valuable yet underexplored area in the literature. Typically, we focus on achieving high scores while overlooking the inherent ambiguity within neural networks. This method offers an additional measure to obtain robust outputs and certification with uncertainty, which is an important contribution to the community. Other Comments Or Suggestions: N/A Questions For Authors: 1. The multiple samples seem to lack diversity. For instance, in Figure 6, there is no sample covering the foot. What is the main reason for this, and how can this issue be addressed to improve the method further? 2. The motivation for using conformal prediction to model uncertainty is not entirely clear. This method requires exchangeable input data. While the related work discusses alternative methods that use, for example, explicit confidence values, the authors do not provide a summary of their limitations or how this paper's method stands out. What led the authors to choose this particular uncertainty modeling approach over others? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank Reviewer Xcn3 for their thoughtful feedback and insightful questions. We particularly appreciate their recognition of the **theoretical soundness** of our work, **the need for uncertainty prediction in human pose estimation in the community**, and the applicability to **safety-critical areas**. Below, we respond to their comments in detail. > On the lack of diversity in Figure 6: We appreciate the reviewer’s observation. CUPS was trained on multiple human pose-shape datasets, many of which, such as Human3.6M, were collected in controlled indoor environments. This may limit the diversity of foot and lower-body movements represented during training. The bottom example in Figure 6 depicts an outlier instance with highly dynamic motion that is not well-represented in the training set. Despite this, CUPS performs reasonably well, **tracking both pose and shape in such out-of-distribution (OOD) scenarios**. We note that Figure 6 presents only static snapshots. We encourage the reviewer to explore [our website](https://sites.google.com/view/champpp), where full prediction videos are available. These showcase significantly **more diversity in predicted meshes**—including many cases where foot motion is well captured. To improve diversity further, two directions are promising: (1) Incorporating more in-the-wild video data during training to reduce reliance on constrained datasets like Human3.6M, and (2) Introducing entropy regularization into the output ensemble [1], encouraging greater variability in the predictions. > On our choice of the conformal prediction (CP) framework: This is an excellent question. We selected CP due to its **flexibility and generalizability**. CP is a distribution-free framework for uncertainty quantification that can be applied to any machine learning model (Angelopoulos & Bates, 2021; Shafer & Vovk, 2008). 
Crucially, even without strict exchangeability (*as is the case with human video data*), CP enables us to estimate a theoretical lower bound on performance by leveraging the properties of the calibration dataset. This has two key advantages: - Theoretical guarantees: CP allows us to offer probabilistic performance guarantees via conformity scores—effectively certifying the reliability of predictions. - Modularity: CUPS requires only minimal changes to the backbone architecture. Because CP is model-agnostic, one could replace the GLoT-based backbone with other architectures in the future to improve accuracy, while still benefiting from CUPS’s theoretical guarantees. Alternative approaches, such as Dwivedi et al. (2024), introduce learned occlusion confidences but lack the theoretical coverage guarantees that CP offers. Moreover, Dwivedi et al. (2024)’s method is only applied to pose, while uncertainty quantification for shape estimation remains unexplored. CUPS provides a unified method for uncertainty quantification for both pose and shape since CP is used in SMPL space. Other prior work relies on injecting robustness via additional constraints, which often demands *substantial modifications* to the prediction pipeline. In contrast, CUPS offers **robust and theoretically grounded predictions with minimal architectural overhead**. [1] Chen, Rongyu, Linlin Yang, and Angela Yao. "Mhentropy: Entropy meets multiple hypotheses for pose and shape recovery." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for providing a detailed explanation to address the questions. I have no further questions and would like to keep my scores leaning toward acceptance. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate Reviewer Xcn3 for their acknowledgement of our contributions and recommendation of acceptance. We will incorporate the discussed points in our final version.
Outlier Gradient Analysis: Efficiently Identifying Detrimental Training Samples for Deep Learning Models
Accept (oral)
Summary: This paper introduces a novel approach called "Outlier Gradient Analysis" for identifying detrimental training samples in deep learning models. The authors establish a conceptual bridge between influence functions (a traditional method for assessing training data impact) and outlier detection in the gradient space. The key innovation is transforming the computationally expensive task of calculating influence functions, which requires inverting the Hessian matrix, into a more efficient outlier detection problem in the gradient space. Claims And Evidence: The paper makes several key claims, which are generally well-supported by evidence: 1. **Detrimental samples can be identified as outliers in the gradient space**: The authors provide strong theoretical justification through Observation 3.1 and Hypothesis 3.2, establishing that detrimental samples are typically a minority in the training set and can be detected as outliers in the gradient space. 2. **Outlier Gradient Analysis is more computationally efficient than traditional influence functions**: The authors provide empirical evidence through running time comparisons (referenced in Section 8 and detailed in Appendix C.3), showing that their method is significantly faster than influence function approaches that require Hessian computation or approximation. 3. **Outlier Gradient Analysis performs competitively or better than existing methods**: This claim is well-supported through extensive experiments across multiple domains: - On synthetic datasets, the method achieves 96-98% accuracy in identifying mislabeled samples, outperforming all baselines (Table 1). - On CIFAR-10N and CIFAR-100N, the method consistently ranks among the top performers across different noise settings (Table 2). - On NLP fine-tuning tasks, the method outperforms all baselines on 3 out of 4 GLUE datasets and matches the best baseline on the fourth (Figure 3). 
- On LLM influential data identification, the method achieves perfect scores for both AUC and Recall metrics (Table 3). Methods And Evaluation Criteria: The methodology is sound and well-justified. The authors clearly explain the theoretical foundation of their approach, establishing the connection between influence functions and outlier detection in the gradient space. The transformation is elegantly formulated and the implementation details are thoroughly described. The experimental setup is comprehensive, covering a diverse range of applications and model architectures. The authors also conduct ablation studies on key hyperparameters and provide running time analyses to demonstrate computational efficiency. Theoretical Claims: The paper makes several theoretical claims, primarily in Section 3, which establishes the connection between influence functions and outlier detection in the gradient space. Experimental Designs Or Analyses: The experimental designs are thorough and well-executed. The authors evaluate their approach across a diverse range of applications, model architectures, and datasets, demonstrating its versatility and effectiveness. Supplementary Material: I have read the Appendix. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: 1. Although this work conducted experiments on LLMs, I find it strange that they only used LLMs for classification tasks. It would make more sense to experiment with LLMs' generation tasks. This is my biggest concern, and if the authors can address this concern, I would be happy to raise the score. 2. Sensitivity to outlier detection algorithm. The performance of the method depends on the choice of outlier detection algorithm and its hyperparameters. While the paper explores different options (iForest, L1/L2-norm), a more systematic analysis of this dependency would be valuable. 
Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer MZJ6, Thank you for your efforts in reviewing our work, we are grateful for your insights. We provide answers to the questions raised, below: - **Although this work conducted experiments on LLMs, I find it strange that they only used LLMs for classification tasks. It would make more sense to experiment with LLMs' generation tasks. This is my biggest concern, and if the authors can address this concern, I would be happy to raise the score.** Thank you. We would like to point out that the current LLM benchmarks are not classification tasks, but generation tasks. Each of the 3 influential data identification tasks (Math With Reasoning, Math Without Reasoning, and Sentence Transformations) we considered from past work (Kwon et al, 2024) have 10 classes/categories of subtasks (e.g. Sentence Transformation can have 10 different types of natural language transformations and Math problems can have 10 different categories of word problems) but are still generation tasks. More details regarding these benchmarks are provided in Appendix B.1.4 and B.1.5. In general, we agree with the reviewer that one of the challenges associated with influential data identification in LLMs is the lack of more complex and varied benchmarks. This is currently an evolving field, and designing benchmarks is challenging because ground-truth influence labels need to accurately reflect the model's inductive bias for a test sample (i.e. the model should find the training samples most influential for a particular test example). Past work does this by making the train-test problem sub-categories very similar. We are now working on designing better LLM influence identification benchmarks for future work, and hope to test our outlier gradient analysis methods on these benchmarks as well. ___ - **Sensitivity to outlier detection algorithm. The performance of the method depends on the choice of outlier detection algorithm and its hyperparameters. 
While the paper explores different options (iForest, L1/L2-norm), a more systematic analysis of this dependency would be valuable.** Thank you for the great question. Regarding outlier analysis algorithms, in the paper, we have utilized 4 outlier analysis algorithms: iForest (main paper), L1 norm (main paper), L2 norm (main paper), and OneClassSVM (Appendix C.9) for outlier gradient analysis. For all 4 of these algorithms, the general trend of outlier gradient analysis improving model performance against competitive baselines in the noisy learning regime can be observed. Our primary aim in choosing these algorithms was their high computational efficiency and minimal number of hyperparameters. Regarding hyperparameter sensitivity, we had provided details and additional experiments on hyperparameters in **Appendix C**, which we also discuss below. More specifically, our outlier analysis algorithm has two hyperparameters: (1) the trimming budget $k$ which is the number of samples to remove, and (2) the hyperparameters of the outlier detection algorithm being used (e.g. iForest). 1. For the first hyperparameter of the trimming budget $k$, we conduct additional experiments while varying the value of $k$ from 2.5% to 12.5% for all 4 noise settings of CIFAR-10N. These results are provided in **Table 5** of **Appendix C.2**. As can be seen, the highest values across each noise regime are obtained by outlier gradient analysis (L2 norm thresholding at 12.5% for Aggregate and Random; and L2 norm thresholding at 2.5% for Worst), indicating its broad suitability. These results also show that the budget of 5% is a good choice for the trimming budget, leading to desirable performance in most cases. 2. The second hyperparameter choice is only a facet of the iForest outlier analysis algorithm and constitutes the number of trees being used in the algorithm. 
Note that this is because for the L1 and L2 norm thresholding approaches, we do not have a norm threshold that needs to be chosen manually, since setting the budget automatically decides the threshold. For the iForest algorithm, we have provided results for varying the number of tree estimators on CIFAR-10N in **Table 8** of **Appendix C.4**. As can be observed, the performance of outlier gradient analysis remains stable across the board when the number of trees/estimators are varied, indicating low sensitivity of this hyperparameter to final results obtained. ___ Thank you once again for helping improve our paper, we appreciate it. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their responses during the rebuttal period. I now have a deeper understanding of the details in the paper, and I will accordingly raise my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer MZJ6, Thank you for engaging with us and for all your efforts spent in reviewing our work, they are greatly appreciated. Regards, Authors.
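To make the norm-thresholding variant discussed above concrete, a minimal numpy-only sketch is shown below. The gradient matrix, noise scale, and 5% trimming budget are toy assumptions (a real run would use per-sample last-layer gradients from the trained model), and the iForest variant is omitted.

```python
import numpy as np

def trim_by_gradient_norm(grads, budget):
    """Flag the `budget` fraction of samples with the largest gradient
    L2 norms as candidate detrimental samples (norm-thresholding variant)."""
    grads = np.asarray(grads, dtype=float)
    norms = np.linalg.norm(grads, axis=1)
    k = max(1, int(budget * len(grads)))
    flagged = np.argsort(norms)[-k:]                     # top-k largest norms
    keep = np.setdiff1d(np.arange(len(grads)), flagged)  # retained samples
    return flagged, keep

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(95, 8))  # typical per-sample gradients
noisy = rng.normal(0.0, 6.0, size=(5, 8))   # mislabeled samples -> large gradients
grads = np.vstack([clean, noisy])

flagged, keep = trim_by_gradient_norm(grads, budget=0.05)
print(sorted(flagged.tolist()))  # the 5 injected large-norm rows should be flagged
```

As noted in the rebuttal, setting the budget automatically determines the norm threshold, so this variant has no separate threshold hyperparameter.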
Summary: This paper proposed Outlier Gradient Analysis, establishing a theoretical bridge between influence functions (a common tool for this task) and outlier detection in the gradient space. The key insight is that detrimental samples can be effectively identified as outliers in the gradient space without computing the Hessian matrix, a major computational bottleneck in traditional influence function approaches. The method employs Isolation Forest, L1-norm, and L2-norm thresholding for outlier detection and demonstrates strong performance across CV and NLP. # **update after rebuttal** I would like to thank the authors for their sincere efforts to address my concerns. I have increased my score. Claims And Evidence: The paper's claims are generally well-supported by empirical evidence. 1. Gradient outliers correlate strongly with detrimental samples, validated through synthetic datasets where ground truth is known (showing 96-98% detection accuracy). 2. The approach outperforms baseline methods on vision tasks. 3. Computational efficiency claims are substantiated with runtime measurements showing orders of magnitude speedup over traditional influence methods. However, evidence could be strengthened: - The LLM influence task shows perfect scores (1.0 AUC/Recall), which may point to limited task difficulty or a ceiling effect. Methods And Evaluation Criteria: 1. The use of varied datasets spanning different domains (vision, NLP, LLMs) demonstrates generalizability. 2. In CV tasks, the evaluation protocol of detecting and removing detrimental samples followed by retraining is a sensible approximation of real-world application scenarios. 3. As a concern, I suggest that the authors expand their comparison beyond just influence function baselines. In the area of learning with noisy labels, there are many mature sample selection methods with objectives similar to the proposed approach. 
For example, using the small-loss criterion from early-stopped models [1] to select clean samples, or leveraging whether models can consistently learn a sample as a criterion to identify mislabeled examples [2]. I recommend incorporating these methods into your comparison to evaluate both selection accuracy and computational efficiency. [1] Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels, NeurIPS 2018. [2] Late Stopping: Avoiding Confidently Learning from Mislabeled Examples, ICCV 2023. Theoretical Claims: The paper primarily presents an intuitive conceptual transformation rather than formal theoretical proofs. The core theoretical claim is Hypothesis 3.2, which establishes a connection between detrimental sample identification and outlier detection in gradient space. This hypothesis is supported by empirical evidence rather than formal proof, which is reasonable given the nature of the problem. The justification in Section 3.2 about why the gradient term should be decisive in determining whether a sample is detrimental is logical. The validity of Observation 3.1 (that detrimental samples are a minority in converged models) is crucial to the approach and appears empirically sound, though not rigorously proven. Experimental Designs Or Analyses: 1. The synthetic experiments in Section 4 provide clear validation of the core hypothesis with controlled conditions. 2. The CIFAR-10N/100N experiments appropriately assess performance across varying noise regimes. 3. The ablation studies on trimming budget k and iForest parameters are valuable. 4. The computational complexity analysis is good enough. That is enough; other minor issues do not concern me. 
Supplementary Material: I reviewed all supplementary material, which provided valuable additional details including: - Full results with standard deviations for vision experiments - Additional ablation studies for trimming budget and iForest parameters - Running time experiments and complexity analysis - Experiments with ResNet-18 and ImageNet - Comparison with additional noisy learning baselines etc. Relation To Broader Scientific Literature: This work relates to several research directions in the machine learning literature: 1. It extends influence function research by providing a more computationally efficient method for the specific task of detrimental sample identification. 2. It connects to the noisy label learning literature by offering an effective approach for identifying mislabeled samples. 3. It contributes to data-centric AI by focusing on improving model performance through data quality rather than model architecture. 4. It relates to the outlier detection literature (with which I am less familiar). The authors appropriately position their work within these research areas and have a decent related-work section. Essential References Not Discussed: There is relevant research on sample importance/informativeness evaluation and data valuation that deserves discussion in relation to the authors' proposed method (influence functions). These fall into two main categories: 1. Data Valuation Methods: These approaches provide alternative frameworks for quantifying sample importance: [1] LAVA: Data Valuation without Pre-Specified Learning Algorithms, arXiv 2023. [2] Data-OOB: Out-of-bag Estimate as a Simple and Efficient Data Value, ICML 2023. [3] Data Banzhaf: A Robust Data Valuation Framework for Machine Learning, AISTATS 2023. [4] Training data influence analysis and estimation: A survey, Machine Learning, 2024. 2. 
Data Pruning Methods: These methods are relevant, and some of them directly address problems similar to identifying detrimental samples: [5] Robust and Fully-Dynamic Coreset for Continuous-and-Bounded Learning (With Outliers) Problems, NeurIPS 2021. [6] Robust Data Pruning under Label Noise via Maximizing Re-labeling Accuracy, NeurIPS 2023. [7] Feature Distribution Matching by Optimal Transport for Effective and Robust Coreset Selection, AAAI, 2024. [8] Instance-dependent Early Stopping, ICLR 2025. Other Strengths And Weaknesses: **Strengths:** 1. The paper is a good read, and everything is easy to locate along the way. 2. The authors smartly narrowed down the problem. Instead of trying to calculate exactly how much each sample influences the model (like traditional methods do), they simply focus on identifying which samples are harmful. This simplification turns a complex calculation into a straightforward binary problem. 3. The proposed method is tested on CV/NLP/LLM, and it works. **Weaknesses:** Influence functions provide a richer quantification (in float) of each sample's importance, potentially enabling a wider range of applications beyond sample selection. In this sense, it might seem reasonable that these methods aren't directly comparable to simpler sample selection techniques. However, while the authors cleverly narrowed down the problem to avoid the computational complexity of influence functions, this simplification creates the need for broader comparisons. The paper would be in a stronger position if it included comparisons with: sample selection methods for learning with noisy labels and data pruning for training efficiency. These methods may have similar objectives (identifying samples to remove for different reasons) but use different approaches. Other Comments Or Suggestions: 1. Move the runtime evaluation table to the main text, as it is a key contribution of the work. 2. 
Minor typos and formatting issues: - Line 116: "as a the discrete version" → "as the discrete version" Questions For Authors: 1. For the LLM influential data identification task, can iForest estimators scale to different tasks with many classes? 2. Have you explored using other outlier detection algorithms beyond iForest and L1/L2 norms? 3. The approach currently requires computing gradients for all training samples. Why not consider approximation or sampling techniques? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer r7gY, Thank you for your thoughtful review and feedback, we appreciate it. We have answered the questions raised below: - **The paper would be in a stronger position if it included comparisons with: sample selection methods for learning with noisy labels and data pruning for training efficiency** Thank you for the great suggestion. Along with comparisons with other influence function baselines and simpler noisy label correction methods, we had compared with a number of sample selection methods designed for noisy learning in **Appendix C.7 (Table 11)**. Approaches for noisy learning can be categorized into (1) methods that either change the loss function or model architecture or (2) those that identify noisy samples and remove/relabel them for improving model performance (Algan & Ulusoy, 2021). While the main paper has results for the latter category, we compare with the former category on CIFAR-10N (all 3 noise settings) in Table 11 of Appendix C.7. As can be observed, outlier gradient analysis is the top performer across these methods as well. While we had thought of also comparing with training data efficiency methods, we could not undertake a fair comparison as several methods opt for reducing the size of the training set as much as possible while ensuring that performance on the reduced set remains as close as possible to that on the original set. However, the goal in our work is to specify a small trimming budget and increase performance as much as possible, meaning that the research questions addressed by these approaches are fundamentally different. ___ - **For the LLM influential data identification task, can iForest estimators scale to different tasks with many classes?** Currently, each of the 3 influential data identification tasks (Math With Reasoning, Math Without Reasoning, and Sentence Transformations) we considered from past work (Kwon et al., 2024) has 10 classes/categories of subtasks (e.g. 
Sentence Transformation can have 10 different types of natural language transformations and Math problems can have 10 different categories of word problems). More details regarding these benchmarks are provided in Appendix B.1.4 and B.1.5. While the reviewer makes a great suggestion of scaling to even more classes, one of the challenges associated with influential data identification in LLMs is the lack of more complex and varied benchmarks. This is currently an evolving field, and designing benchmarks is challenging because ground-truth influence labels need to accurately reflect the model's inductive bias for a test sample (i.e. the model should find the training samples most influential for a particular test example). Past work does this by making the train-test problem categories very similar. We are now working on designing better LLM influence identification benchmarks for future work, and hope to test our outlier gradient analysis methods on these benchmarks as well. ___ - **Have you explored using other outlier detection algorithms beyond iForest and L1/L2 norms?** In the paper, we have utilized 4 outlier analysis algorithms: iForest (main paper), L1 norm (main paper), L2 norm (main paper), and OneClassSVM (Appendix C.9) for outlier gradient analysis. For all 4 of these algorithms, the general trend of outlier gradient analysis improving model performance against competitive baselines in the noisy learning regime can be observed. Our primary aim in choosing these algorithms was their high computational efficiency and minimal number of hyperparameters. ___ - **The approach currently requires computing gradients for all training samples. Why not consider approximation or sampling techniques?** The reason we did not opt for approximating or sampling gradients is the ease with which they are available for deep learning models trained via backpropagation. Basically, we can obtain gradients in one pass as the model trains, after each backpropagation step. 
Owing to the ease of gradient access, we did not optimize this further via approximation. Moreover, approximation/sampling would lead to some reduction in performance as opposed to using the original first-order gradients. As an aside, the Hessian is not available during training since it contains second-order information. ___ - **Moving table of runtime evaluation to the main text and minor typos**: Thank you for pointing these out. We will incorporate these suggestions into the revision as requested. ___ Thank you once again for all your time and effort in reviewing our work, and helping strengthen our contributions. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their sincere efforts to address my concerns. I am inclined to increase my score by +1 (as a result, 4). I would like to see all the revisions regarding my comments in the updated version. --- Reply to Comment 1.1.1: Comment: Dear Reviewer r7gY, We are grateful for your engagement and are happy to hear that your concerns were addressed. We will definitely incorporate the suggested revisions in our paper as promised. Thank you once again for all your efforts, we appreciate it. Regards, Authors.
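The one-pass per-sample gradient availability described in the rebuttal above can be illustrated with a toy stand-in; here plain logistic regression is used in place of a deep network, and every name, shape, and value is an illustrative assumption rather than the authors' implementation:

```python
import numpy as np

def per_sample_gradients(X, y, w):
    """Per-sample gradients of the logistic loss w.r.t. weights w.
    Row i holds grad_i = (sigmoid(x_i . w) - y_i) * x_i."""
    p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities, one per sample
    return (p - y)[:, None] * X        # one gradient row per training sample

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))          # toy training features
y = (X[:, 0] > 0).astype(float)        # toy binary labels
w = np.zeros(5)                        # current model weights
G = per_sample_gradients(X, y, w)      # all 100 gradients in a single pass
print(G.shape)                         # (100, 5)
```

Summing the rows of `G` recovers the full-batch gradient `X.T @ (p - y)`, which is why these per-sample terms come essentially for free during gradient-based training.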
Summary: This paper addresses the challenge of identifying training samples that negatively impact deep learning model performance. The authors draw a connection between identifying detrimental training samples using influence functions and outlier detection in the gradient space. This connection leads to a Hessian-free formulation, reducing the computational cost associated with calculating the inverse of the Hessian matrix.   The authors propose an "outlier gradient analysis" approach. They validate this approach on synthetic datasets and demonstrate its effectiveness in noisy label correction for vision models. They also show its applicability to data selection for fine-tuning NLP models and influential data identification for Large Language Models (LLMs). Claims And Evidence: Claim 1: The paper builds a bridge between identifying detrimental training samples via influence functions and outlier detection on the gradient space of samples. Evidence: The paper dedicates Section 3.2 to "Bridging Influence Estimation and Outlier Analysis," detailing the conceptual transformation from influence function-based detrimental sample identification to outlier detection in the gradient space. Hypothesis 3.2 explicitly states the existence of outlier analysis algorithms for detecting detrimental samples in the gradient space. The proposed outlier gradient analysis approach is then detailed in Section 3.3.   Assessment: The claim is well-supported. The paper provides a clear explanation and justification for this connection. Claim 2: The "transformation features a straightforward and Hessian-free formulation, and reduces the computational cost associated with the Hessian matrix and its inverse."   Evidence: The paper emphasizes that outlier gradient analysis "not only features a straightforward and Hessian-free formulation" but also "reduces the computational cost associated with the Hessian matrix and its inverse." 
The method operates directly on the gradient space, avoiding the computation and inversion of the Hessian. Algorithm 1 outlines the approach, which does not involve Hessian calculations. Experiments in Section 8 discuss computational complexity and running time, showing that outlier gradient analysis is computationally efficient. Table 7 provides a comparison of computational complexities, highlighting that outlier gradient analysis has a lower complexity than Hessian-based methods.   Assessment: The claim is convincingly supported by the presented methodology and experimental results. In summary, the claims made in the submission are generally well-supported by the evidence provided. Methods And Evaluation Criteria: The authors propose "outlier gradient analysis," which connects the identification of detrimental samples using influence functions to outlier detection in the gradient space. This method is designed to address the computational challenges of traditional influence functions, which require Hessian matrix inversion. The choice of outlier detection algorithms like Isolation Forest is justified based on efficiency and effectiveness.   Evaluation Criteria: The paper uses a combination of synthetic datasets and real-world noisy label datasets (CIFAR-10N, CIFAR-100N). For the synthetic data, they measure ground-truth outlier predictive accuracy and performance gain. For real-world datasets, they evaluate the accuracy of noisy label correction. They also extend their evaluation to data selection for fine-tuning NLP models (GLUE datasets) and influential data identification for Large Language Models, using appropriate metrics (AUC and Recall).   Rationale: These evaluation choices are relevant because they cover a range of scenarios, from controlled synthetic environments to more complex real-world applications in computer vision and natural language processing. 
The use of noisy label datasets is particularly relevant to evaluating the method's ability to identify detrimental samples. Theoretical Claims: The primary theoretical claim revolves around "Hypothesis 3.2" and its connection to the proposed "outlier gradient analysis." Hypothesis 3.2: "There exist outlier analysis algorithms capable of detecting detrimental samples in the gradient space."   The authors build a conceptual bridge between influence functions and outlier detection in gradient space. They argue that detrimental samples, which negatively impact model utility, can be considered outliers in the gradient space. This is supported by Observation 3.1, which states that in a converged model, most training samples contribute positively, while detrimental samples are a minority.   The paper doesn't provide formal proofs in the mathematical sense for Hypothesis 3.2. Instead, it offers a logical argument and empirical evidence to support it. The argument relies on the observation that detrimental samples are analogous to outliers and the justification that gradients play a decisive role in determining a sample's influence.   While the paper doesn't offer formal proofs, the logical reasoning and empirical results provide strong evidence for the validity of the central theoretical claim. Experimental Designs Or Analyses: The experimental designs and analyses are sound and appropriate for evaluating the proposed method across different tasks and datasets. Supplementary Material: The supplementary material includes additional details on: * **Additional Related Work:** This section discusses data-centric learning research beyond detrimental sample identification and influence estimation, such as datamodels, data efficiency, data pruning, model pruning, strategies for recourse, antidote data augmentation, feature selection, active learning, and poisoning attacks. It also mentions the extension of training sample influence to generative models like diffusion models. 
* **Detailed Information on Datasets and Model Training:** This section provides details on the synthetic datasets, CIFAR-10N and CIFAR-100N vision datasets, GLUE binary classification NLP datasets, and benchmark datasets for influential data identification in LLMs. It also describes the ResNet-34 architecture, RoBERTa NLP transformer model, and Llama-2 LLM, along with implementation details and parameter values for label correction baselines, influence-based baselines, and the outlier gradient analysis approach. * **Code and Reproducibility:** This section provides a link to the open-source repository containing the code, instructions, and implementation details. It also specifies the hardware and software used for the experiments. Relation To Broader Scientific Literature: The key contributions of this paper are related to the broader scientific literature in the following ways: * **Influence Functions:** The paper builds upon the existing body of work on influence functions, a technique used for estimating the impact of training data on model predictions. It addresses the computational limitations of influence functions, specifically the high cost of inverting the Hessian matrix, which becomes a bottleneck for large-scale deep learning models. * **Data-Centric Learning:** The research aligns with the growing field of data-centric learning, which focuses on improving model performance by manipulating the training data rather than the model architecture. The problem of identifying and removing detrimental samples is a core challenge in this area. * **Outlier Detection:** The paper connects the problem of detrimental sample identification to the field of outlier detection. By framing detrimental samples as outliers in the gradient space, the authors leverage existing outlier detection algorithms to solve the problem of identifying harmful data points. 
Essential References Not Discussed: Gradient-based anomaly detection: There is a body of work that directly uses gradients for anomaly detection, where anomalies are identified based on their gradient patterns. This line of work is closely related to the paper's idea of using gradients to identify outliers, and discussing it would provide a broader context. References: - Huang, Rui, Andrew Geng, and Yixuan Li. "On the importance of gradients for detecting distributional shifts in the wild." Advances in Neural Information Processing Systems 34 (2021): 677-689. - Kwon, Gukyeong, et al. "Backpropagated gradient representations for anomaly detection." Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXI 16. Springer International Publishing, 2020. - Chen, Jinggang, et al. "Gaia: Delving into gradient-based attribution abnormality for out-of-distribution detection." Advances in Neural Information Processing Systems 36 (2023): 79946-79958. - ElAraby, Mostafa, et al. "GROOD: GRadient-aware Out-Of-Distribution detection in interpolated manifolds." arXiv preprint arXiv:2312.14427 (2023). Other Strengths And Weaknesses: Strengths: - Originality: The paper presents an original approach by connecting influence functions with outlier detection in the gradient space. This is a novel way to address the computational challenges of influence functions and offers a new perspective on identifying detrimental training samples. The idea of using outlier analysis for this purpose is creative and potentially impactful. - Significance: The problem of identifying detrimental training samples is a significant challenge in data-centric learning. The proposed method has the potential to improve the efficiency and scalability of influence estimation, making it more applicable to large-scale deep learning models. 
This could have a substantial impact on various applications, including noisy label correction, data selection, and model interpretation. - Clarity: The paper is generally well-written and the proposed method is explained clearly. The authors provide sufficient background information and motivation for their approach. - Thorough Evaluation: The authors evaluate their method on synthetic datasets, noisy label correction for vision, data selection for NLP, and influential data identification for LLMs. This comprehensive evaluation demonstrates the broad applicability and effectiveness of the proposed approach. Weaknesses: - Limited Exploration of Outlier Detection Methods: While the paper justifies the use of Isolation Forest, it does not thoroughly explore or compare a wider range of outlier detection algorithms. There might be other more suitable algorithms that could further improve the performance or efficiency of the proposed method. - Lack of Discussion on Failure Cases: The paper could include a more detailed discussion of potential limitations and failure cases of the proposed method. Understanding when and why the method might not perform optimally is crucial for a comprehensive analysis. Other Comments Or Suggestions: Further Analysis of Outlier Detection Algorithms: The paper justifies the use of Isolation Forest (iForest) but could benefit from a more detailed analysis and comparison of other outlier detection algorithms. This would provide a more comprehensive understanding of the impact of different outlier detection techniques on the proposed method. Questions For Authors: How sensitive is your method to the choice of hyperparameters, such as the number of trees in iForest or the threshold for L1/L2 norm methods? Please provide more guidance on how to choose appropriate hyperparameter values for different datasets and model architectures. A discussion on the robustness of the method to hyperparameter settings would be valuable. Code Of Conduct: Affirmed. 
Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer p9JH, Thank you for your insightful review, comments, and suggestions. We answer the questions raised, below: - **Essential References Not Discussed**: Thank you for providing these references on gradient-based outlier/anomaly detection, we appreciate it. As the reviewer correctly points out, these are closely related to our work in a technical sense (but explore different research questions), and we will be sure to include and discuss these in the revision. ___ - **Lack of Discussion on Failure Cases**: We can definitely aim to add more discussion on the limitations of our approach in the main paper. While outlier gradient analysis is useful in cases where training data can be noisy, it might not be as useful if the data is already very high quality and there are no outlying gradient samples. However, this might not be the case in the real-world unless some steps have already been taken to ensure high data quality. Furthermore, outlier analysis algorithms have a fundamental limitation of how to specify the budget for outlier detection, which is a non-trivial hyperparameter optimization problem. While this is a common problem with little consensus across the entire field of outlier analysis, our methods inherit this limitation as well (although our methods work well for different budget thresholds, as shown in additional experiments in the Appendix C.2). ___ - **Further Analysis of Outlier Detection Algorithms**: Thank you for this suggestion. In our paper, we have utilized 4 outlier analysis algorithms: iForest, L1 norm thresholding, L2 norm thresholding, and OneClassSVM (Appendix C.9) for outlier gradient analysis. For all 4 of these algorithms, the general trend of outlier gradient analysis improving model performance against competitive baselines in the noisy learning regime can be observed. 
Also, note that our primary aim in choosing these algorithms was their high computational efficiency and minimal number of hyperparameters. More complex approaches generally tend to be slower (e.g. those based on deep learning) and thus wouldn't be as useful for outlier analysis of the gradient space. However, we are happy to include any other useful outlier detection methods that the reviewer would like us to. ___ - **How sensitive is your method to the choice of hyperparameters, such as the number of trees in iForest or the threshold for L1/L2 norm methods? Please provide more guidance on how to choose appropriate hyperparameter values for different datasets and model architectures. A discussion on the robustness of the method to hyperparameter settings would be valuable.** Thank you for the great question. We had provided details and additional experiments on hyperparameters in **Appendix C**, which we also discuss below. More specifically, our outlier analysis algorithm has two hyperparameters: (1) the trimming budget $k$, which is the number of samples to remove, and (2) the hyperparameters of the outlier detection algorithm being used (e.g. iForest). 1. For the first hyperparameter of the trimming budget $k$, we conduct additional experiments while varying the value of $k$ from 2.5% to 12.5% for all 4 noise settings of CIFAR-10N. These results are provided in **Table 5** of **Appendix C.2**. As can be seen, the highest values across each noise regime are obtained by outlier gradient analysis (L2 norm thresholding at 12.5% for Aggregate and Random; and L2 norm thresholding at 2.5% for Worst), indicating its broad suitability. These results also show that 5% is a good choice for the trimming budget, leading to desirable performance in most cases. 2. The second hyperparameter applies only to the iForest outlier analysis algorithm and is the number of trees (as the reviewer correctly pointed out). 
Note that this is because for the L1 and L2 norm thresholding approaches, we do not have a norm threshold that needs to be chosen manually, since setting the budget automatically decides the threshold. For the iForest algorithm, we have provided results for varying the number of tree estimators on CIFAR-10N in **Table 8** of **Appendix C.4**. As can be observed, the performance of outlier gradient analysis remains stable across the board when the number of trees/estimators is varied, indicating that the final results have low sensitivity to this hyperparameter. ___ We would like to thank the reviewer again for all the time and effort spent on the review, and for helping improve our work.
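For concreteness, here is a minimal sketch of how the norm-thresholding and iForest variants discussed in the rebuttal above could be applied to a per-sample gradient matrix. This is an illustrative reconstruction under stated assumptions (the toy gradient matrix, function names, and budget value are all ours), not the authors' exact implementation:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def trim_by_norm(grads, budget, ord=2):
    """Flag the top-budget fraction of samples by gradient norm (L1 or L2)."""
    norms = np.linalg.norm(grads, ord=ord, axis=1)
    k = max(1, int(round(budget * len(grads))))
    return np.argsort(norms)[-k:]            # indices of the k largest norms

def trim_by_iforest(grads, budget, n_estimators=100, seed=0):
    """Flag roughly the top-budget fraction of most anomalous samples."""
    forest = IsolationForest(n_estimators=n_estimators,
                             contamination=budget,
                             random_state=seed).fit(grads)
    return np.flatnonzero(forest.predict(grads) == -1)

rng = np.random.default_rng(0)
grads = rng.normal(size=(200, 16))           # toy per-sample gradient matrix
grads[:10] += 6.0                            # inject 10 "detrimental" samples
flagged = trim_by_norm(grads, budget=0.05)   # 5% trimming budget -> 10 samples
print(sorted(flagged))                       # recovers the injected indices 0-9
```

Note that, as the rebuttal states, the budget alone fixes the threshold for the L1/L2 variants, while iForest additionally takes the number of trees (`n_estimators`) as a hyperparameter.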
Summary: The paper introduces a simple yet powerful alternative to traditional influence functions by leveraging outlier detection in gradient space. This method—Outlier Gradient Analysis—provides a scalable, efficient, and accurate way to identify harmful training samples, with broad utility across diverse deep learning domains. Extensive experiments demonstrate the method’s good performance in both accuracy and computational efficiency. Claims And Evidence: **Claim 1: The majority of training samples positively contribute to the model’s utility, and detrimental samples are a much smaller subset than beneficial samples** Although this conclusion is evident, it would be more rigorous if there were quantitative results to support this claim. **Claim 2: Gradient-space outliers correspond to detrimental training samples.** The authors validate this key hypothesis on both **synthetic datasets** (linear and non-linear) and show that detrimental samples are clearly separable in gradient space. Due to the absence of a theoretical equivalence proof, this key hypothesis should be validated on **real datasets**. **Claim 3: Outlier Gradient Analysis is computationally efficient and outperforms other baselines.** Experiments are provided across domains including CIFAR-10N/100N and LLM benchmarks. Methods And Evaluation Criteria: The method addresses a significant gap in existing approaches. Influence functions, while effective, are computationally expensive due to the need for Hessian matrix inversion, especially in deep models. By shifting to gradient-space outlier detection, the authors propose an approach that avoids this bottleneck, making it both more scalable and efficient. This transformation from influence functions to outlier analysis is logically justified, and the chosen outlier detection algorithms are simple and efficient. However, this transformation requires more comprehensive validation on real datasets, or the authors should provide theoretical proof. 
Theoretical Claims: This paper does not furnish formal theoretical claims. Experimental Designs Or Analyses: The experimental designs and analyses in the paper are generally sound and provide strong evidence for the effectiveness of the proposed Outlier Gradient Analysis method. The authors carefully design experiments across multiple domains (synthetic, vision, NLP, LLMs) and use appropriate evaluation metrics. Supplementary Material: I have read the additional results and experiments section within the supplementary materials. Relation To Broader Scientific Literature: None Essential References Not Discussed: None Other Strengths And Weaknesses: **Strengths** The method is simple and efficient. The idea of avoiding computing the Hessian matrix is innovative. **Weaknesses** The technical details have not been adequately elaborated. The article lacks a discussion on the limitations of the methodology. Other Comments Or Suggestions: I have no other comments or suggestions. Questions For Authors: **Q1: Could you elaborate on the computational specifics of Outlier Gradient (L1) and Outlier Gradient (L2)?** **Q2: Why is there an additional summation symbol in Equation 1?** **Q3: Avoiding the computation of the Hessian matrix can accelerate calculations, but it is strange that only calculating the third term of Equation 1 improves accuracy.** Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer 91NL, Thank you for your insightful review and feedback. We answer the questions raised below: - **Q1: Could you elaborate on the computational specifics of Outlier Gradient (L1) and Outlier Gradient (L2)?** Thank you for your question. Our outlier gradient approach (Algorithm 1) takes in as input a trimming budget $k$ and an outlier analysis algorithm $\mathcal{A}$. Here, $\mathcal{A}$ can be any outlier analysis algorithm, including L1 and L2 norm thresholding. For these two algorithms, we compute the gradient of the training samples, and then simply select the samples with the top-$k$% norm values (i.e. based on the budget) as outliers. For L1 (or L2) norm thresholding, the L1 (or L2) norm values need to be computed, but this is a simple and computationally efficient tensor operation. ___ - **Q2: Why is there an additional summation symbol in Equation 1?** The additional summation symbol is simply aggregating the loss for each of the validation/training set samples (i.e. samples from either $V$ or $T$) on which training sample $z_j$'s influence is being measured. As the loss is additive, the derivative of the loss is also additive, and hence can be summed over to compute the overall loss contributions made by the individual training sample ($z_j$) we are computing influence for. This summation over the loss is used in past work on influence functions for model performance measurement, such as [1,2], among others. ___ - **Q3: Avoiding the computation of the Hessian matrix can accelerate calculations, but it is strange that only calculating the third term of Equation 1 improves accuracy** Thank you for the great question. One intuitive reason for this (which past work has also found) is that the Hessian is not mandatory for influence analysis in all cases. 
While (Koh and Liang, 2017) pioneered influence functions based on the Hessian, other work has considered influence functions without relying on the Hessian, such as TracIn [3], TracIn-Last [4], VAE-TracIn [5], BoostIn [6], Hydra [7], etc. Note that TracIn here is the Gradient Tracing baseline we compare with in the paper as well. TracIn and its variant approaches simply calculate a vector inner product on the gradient space without computing the Hessian. Our outlier gradient analysis approach pursues the Hessian-free approach in a novel manner by discovering detrimental training samples using the outlyingness of the gradient terms. ___ - **The key hypothesis should be validated on real datasets**: For real-world datasets (with a large number of classes) and large models, visualizing the gradient space (which will be very high-dimensional) requires undertaking aggressive approximations and is not possible in 2D/3D without losing useful information that describes outlyingness. Thus, we visualized the gradient space for simpler models and synthetic datasets as these are controllable and allow us to demonstrate our hypothesis, while real-world datasets could be affected by unknown factors. However, we would like to emphasize that the success of our methods on downstream real-world datasets (all the experiments in our paper) showcases the benefits and efficacy of outlier gradient analysis (thereby validating it) on real-world datasets and large models as well. ___ - **The article lacks a discussion on the limitations of the methodology**: Thank you for the suggestion. We can definitely aim to add more discussion on the limitations of our approach in the main paper. While outlier gradient analysis is useful in cases where training data can be noisy, it might not be as useful if the data is already very high quality and there are no outlying gradient samples.
However, this might not be the case in the real-world unless some steps have already been taken to ensure high data quality. Furthermore, outlier analysis algorithms have a fundamental limitation of how to specify the budget for outlier detection, which is a non-trivial hyperparameter optimization problem. While this is a common problem with little consensus across the entire field of outlier analysis, our methods inherit this limitation as well (although our methods work well for different budget thresholds, as shown in additional experiments in Appendix C.2). ___ Thank you once again for your help in strengthening our paper, we appreciate it. ___ **References**: 1. Understanding black-box predictions via influence functions. ICML 2017. 2. DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models. ICLR 2024. 3. Estimating training data influence by tracing gradient descent. NeurIPS 2020. 4. First is better than last for language data influence. NeurIPS 2022. 5. Understanding instance-based interpretability of variational auto-encoders. NeurIPS 2021. 6. Adapting and evaluating influence-estimation methods for gradient-boosted decision trees. JMLR 2023. 7. Hydra: Hypergradient data relevance analysis for interpreting deep neural networks. AAAI 2021. --- Rebuttal Comment 1.1: Comment: Thank you for your response. Most of my concerns have been addressed, except for the point that the key hypothesis should be validated on real datasets. Why can't high-dimensional data be projected into a low-dimensional space for visualization? Although the results may not be as perfect as those on synthetic data, it should still roughly validate the hypothesis of the paper. If it is not validated on real datasets, how can you prove that the results on downstream real data can be attributed to the hypothesis you proposed? After all, there is a significant gap between real datasets and synthetic data. 
--- Reply to Comment 1.1.1: Comment: Dear Reviewer 91NL, We would like to thank you for engaging with us and helping improve our contributions. As requested, we have undertaken additional experiments on two of our NLP datasets (SST2 and QNLI with RoBERTa as the model) to discuss the validity of our outlier gradient hypothesis on real datasets with detrimental/noisy samples: - For both the SST2 and QNLI datasets, we take the full gradient space (with 2048 dimensions) and apply iForest for outlier analysis of this gradient space. **We find that for (a) SST2, iForest detects noisy/detrimental training samples with an accuracy of _90.11%_ and for (b) QNLI, iForest detects noisy/detrimental training samples with an accuracy of _85.55%_. For both these real-world datasets, we can observe that our hypothesis is validated, as detected outliers correspond very highly to whether a training sample is noisy/detrimental or not.** ___ - We had also stated in our rebuttal above that dimensionality reduction from the full gradient space to 2D or 3D might lose important outlyingness information. To validate this, we take the full 2048 dimensional gradient space for both datasets and reduce dimensionality using PCA. For ease of visualization, we randomly sample 100 samples and plot the top-2 PCA components for **SST2 (provided here: https://anonymous.4open.science/r/icml-rebuttal-2025/2d_grad_sst2.png)** and for **QNLI (provided here: https://anonymous.4open.science/r/icml-rebuttal-2025/2d_grad_qnli.png)**. The legend also shows whether a sample is noisy or not. **As can be observed from these figures, it is not possible to either algorithmically or manually detect outliers for such a low dimensional gradient space. 
Thus, the key takeaway here is that outlyingness might not be observed after aggressive dimensionality reduction, even with high outlier detection performance on the original high-dimensional gradient space.** ___ We hope these additional experiments alleviate your concerns. Thank you once again for your efforts; we are grateful. Regards, Authors.
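To make the L1/L2-norm thresholding described in Q1 above concrete, here is a minimal sketch of the L2 variant (the function and variable names are ours, for illustration only, not from the paper's code):

```python
import numpy as np

def outlier_gradients_l2(grads, budget_pct):
    """Flag the top-budget_pct% of training samples by L2 gradient norm
    as outliers (the L2-norm thresholding instantiation of the outlier
    analysis algorithm A described in Q1)."""
    norms = np.linalg.norm(grads, axis=1)           # one L2 norm per sample
    k = max(1, int(len(norms) * budget_pct / 100))  # trimming budget k
    cutoff = np.sort(norms)[-k]                     # k-th largest norm value
    return norms >= cutoff                          # boolean outlier mask

# Toy example: the third sample has a much larger gradient than the rest.
grads = np.array([[0.1, 0.0], [0.0, 0.2], [5.0, 5.0], [0.1, 0.1]])
mask = outlier_gradients_l2(grads, budget_pct=25)
print(mask.tolist())  # [False, False, True, False]
```

The L1 variant differs only in using `np.linalg.norm(grads, ord=1, axis=1)`; in both cases the cost is a single pass over the per-sample gradients, consistent with the "computationally efficient simple tensor operation" claim above.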
Modified K-means Algorithm with Local Optimality Guarantees
Accept (poster)
Summary: This paper generalizes necessary and sufficient conditions for local optimality of a solution of the continuous relaxation of the k-means problem; they generalize these conditions from the case using the Euclidean dissimilarity (Peng & Xia, 2005) to Bregman divergence. Similarly, they then use these observations to design extensions to the k-means algorithm that take effect when the k-means algorithm converges to a partition where a point is equidistant to at least two cluster centers. The returned partition is then a local optimum of the continuously relaxed k-means problem. Experiments show that their methods achieve better objective values in certain synthetic and real world settings. Claims And Evidence: Given the work of (Peng & Xia, 2005), I think the claim there is still a gap between convergence and local optimality is a little misleading. However, this is not widely known. Having skimmed that paper, I think this paper does a much better job of presenting the issue to readers not intimately familiar with optimization, in particular, cutting plane methods. Methods And Evaluation Criteria: The methods and evaluation criteria make sense. Theoretical Claims: I have not checked any of the proofs rigorously. Experimental Designs Or Analyses: No. Supplementary Material: No. Relation To Broader Scientific Literature: I think more time should have been devoted to distinguishing the results of this paper from that of (Peng & Xia, 2005), both the theory and the algorithms. For example, is the only difference between Lemma 2.1 (Peng & Xia, 2005) and Theorem 4.2 of this paper the difference in choice of dissimilarity, or does the proof use different techniques? Using the Euclidean dissimilarity, how do the algorithms presented here compare to the cutting plane method from (Peng & Xia, 2005)? - I understand implementing that method may be a lot of work so even an intuitive understanding would be useful.
For example, does the cutting plane method break down when the Bregman divergence is used? Essential References Not Discussed: (Peng & Xia, 2005) is briefly mentioned but should have been discussed a lot more. Other Strengths And Weaknesses: The main strength of this paper is a much simpler algorithm for returning continuously/discretely locally optimal partitions compared to the cutting planes method of (Peng & Xia, 2005), as well as addressing the common misunderstanding that convergence implies local optimality. The experimental results are promising but the runtimes seem very slow. I would have liked to have seen more results in a runtime-constrained setting where each algorithm has a fixed time budget. The main weakness is the similarity to (Peng & Xia, 2005) and the lack of discussion on this point. Other Comments Or Suggestions: Adding the number of iterations to Table 1 etc. would be useful. I assume the time complexity of D-LO-K-means++ is quadratic in the number of clusters because, as k increases, the number of iterations increases linearly. Having the number of iterations would let me verify this. This is interesting in and of itself, since for kmeans++ the runtime appears linear in the number of clusters. Questions For Authors: 1) In section 5.3.2, the results suggest that the k-means algorithm seems to converge to C-local optimal solutions in most real-world datasets. Is there a way you could formalize this? Maybe through a smoothed analysis? This would be useful to know either way. 2) There are many techniques for speeding up k-means (minibatches/coresets/triangle inequality tricks etc.), would any of them compose nicely with your methods? Either in theory or practice? This would make this work more significant as vanilla k-means is often considered to be too slow for massive datasets. 3) See previous questions regarding (Peng & Xia, 2005). Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer fHBw for their review. **1. High-level comparison with Peng & Xia:** **R1:** Both of our works study the K-means problem, but our focus is on convergence of Lloyd's algorithm, whereas they consider completely different methods to obtain local minima, to ultimately find a global optimal solution. Given their focus on global optimization, we agree that perhaps our paper "does a much better job of presenting the issue": in Section 4, as part of their experiments, it is stated that "we run the K-means algorithm to find a local optimum of (1.5)", which our paper clearly shows does not hold in general. Other methods exist to solve the K-means problem with varying convergence guarantees, but Lloyd's algorithm is clearly the dominant method, so our focus was placed on studying and establishing the convergence of Lloyd's algorithm, in particular, to a local minimum for practical reasons. Minimizing a concave function over a polytope (as in the K-means problem) is NP-hard, and according to (Peng & Xia, Section 4: Numerical Experiments), there seems to be no guarantee that their method will not have to traverse all vertices (clusterings) in the worst case scenario, with their algorithm never obtaining a globally optimal solution in any of their experiments. **2. Lemma 2.1 and Theorem 4.2:** **R2:** Theorem 4.2 is not simply a theoretical observation, but played a crucial role in the design of our modification of Lloyd's algorithm and the proof of its convergence. For Theorem 4.2, much effort was placed on finding a minimum number of conditions which could be easily verified and implemented within Lloyd's algorithm, also without increasing its per-iteration computational complexity. We do not want to downplay the fact that we consider Bregman divergences, but we believe the differences between our results run much deeper. **3.
Empirical comparison with Peng & Xia:** **R3:** We compared our algorithms with (Peng & Xia, Section 3.2.1)'s algorithm to find D-local optimal solutions, which is not based on Lloyd's algorithm, but rather on repeatedly performing single-point swaps to cluster assignments; results can be found in Section 2 of https://anonymous.4open.science/r/ICML-Kmeans-F32E/Additional_Experiments.pdf, where we have also included the number of iterations for the K-means based algorithms. From these results, we observe that D-LO-K always achieves the minimum mean error (blue), whereas D-LO-P&X is consistently much slower than all other methods (red). **4. Runtime constrained experiments:** **R4:** Our response to Reviewer BPFU (R2 paragraph 2) establishes that our methods will always perform at least as well as Lloyd's method under an iteration or time constrained setting. In our experiments, we never had a maximum iteration stopping rule, simply letting our methods run until convergence. Inspired by your request, we plot the objective function through time for D-LO in Section 3 of our attachment, where in black it is identical to K-means, and in red is when, after K-means has converged, it begins to call Function 2. From these plots, if we limit iterations to around 200, we can still achieve approximately 15% objective function value improvement with 3.5-5x algorithm speedup. **5. Theoretical understanding of K-means converging to C-local optimal solutions:** **R5:** Assume the data X is sampled from an absolutely continuous probability distribution, e.g. normally distributed, and D is the squared Euclidean distance. The cluster centers, being weighted averages of elements of X, are also absolutely continuously distributed. For the K-means algorithm to not converge to a C-local optimum, it needs to converge to a clustering P where there exists a point x and two clusters c_1 and c_2 such that d(x,c_1)=d(x,c_2), where d is the Euclidean distance.
This means that c_1 and c_2 need to lie on the surface of the same sphere, which has measure (probability) 0. **6. Composing with other techniques:** **R6:** Minibatch K-means is not a variant of Lloyd's algorithm, but of stochastic gradient descent, so our analysis is not applicable. From (Bottou & Bengio, 1994, Section 3.4), the method seems to converge to a local minimum almost surely. In practice, minibatch K-means can be faster but with a sacrifice in quality (Web-Scale K-Means Clustering, Sculley, 2010, Figure 1). Using a coreset $Y\subset X$ of size m<N instead of X in Lloyd's algorithm, our method can be applied to generate a locally optimal cluster C*, whose loss function is no worse than $(1+\epsilon)$ times the loss over $X$. The per-iteration complexity of Theorem 4.5 will be reduced from $O(Nkd)$ to $O(mkd)$ (for squared Euclidean norm), and will certainly result in faster compute time. Given that Elkan's method is still performing Lloyd's algorithm, but in a (computationally) optimized way by using the triangle inequality to reduce the number of distance computations, our method can also be used with Elkan's method.
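As a concrete illustration of the single-point move test underlying D-local optimality, here is a minimal sketch for the squared Euclidean case (this is our own simplified rendering for the rebuttal, not Function 2 verbatim; names are ours). It uses the standard exact formula for the change in the K-means objective when one point moves between clusters:

```python
import numpy as np

def move_gain(x, mu_src, n_src, mu_dst, n_dst):
    """Exact change in the K-means objective (SSE) from moving point x
    out of a cluster with mean mu_src and size n_src into one with mean
    mu_dst and size n_dst. Negative => the move strictly improves."""
    if n_src <= 1:
        return np.inf  # moving would empty the source cluster
    return (n_dst / (n_dst + 1) * np.sum((x - mu_dst) ** 2)
            - n_src / (n_src - 1) * np.sum((x - mu_src) ** 2))

def is_d_local(X, labels, k, tol=1e-12):
    """True iff no single-point reassignment strictly decreases the SSE."""
    means = [X[labels == j].mean(axis=0) for j in range(k)]
    sizes = [int((labels == j).sum()) for j in range(k)]
    for n, x in enumerate(X):
        j = int(labels[n])
        for j2 in range(k):
            if j2 != j and move_gain(x, means[j], sizes[j],
                                     means[j2], sizes[j2]) < -tol:
                return False
    return True

X = np.array([[0.0], [1.0], [2.0], [10.0]])
print(is_d_local(X, np.array([0, 1, 1, 1]), k=2))  # False: moving the point 1 helps
print(is_d_local(X, np.array([0, 0, 0, 1]), k=2))  # True
```

The gain expression is the exact single-point update (as in Hartigan-style moves), so no objective recomputation is needed per candidate move.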
Summary: This paper investigates the local optimality properties of the K-means clustering algorithm and proposes modifications that guarantee local optimality in both continuous and discrete senses. The authors introduce theoretical results that highlight scenarios where the standard K-means algorithm does not always converge to a local minimum and propose an improved version, LO-K-means, which achieves local optimality while maintaining the same computational complexity as K-means. The method is evaluated on synthetic and real-world datasets, demonstrating improved convergence properties. Claims And Evidence: yes Methods And Evaluation Criteria: The choice of dataset sizes is relatively small. As suggested, larger datasets (N > 1000) should be considered to assess scalability. Theoretical Claims: The notation and presentation in some places (e.g., Appendix C) could be more refined for clarity. Experimental Designs Or Analyses: The paper would benefit from including larger datasets, as suggested, to evaluate the scalability of LO-K-means. Supplementary Material: Yes, appendix C Relation To Broader Scientific Literature: While the use of Bregman divergences is mentioned, a more detailed comparison with alternative clustering approaches (e.g., spectral clustering, Gaussian mixture models) would strengthen the discussion. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. Strong theoretical contributions with clear mathematical formulations. Weaknesses: 1. Larger datasets are needed to validate scalability. 2. Some theoretical claims (e.g., Equation 1 motivation) require better clarification. Other Comments Or Suggestions: Appendix C contains an incorrect reference: "(see Appendix C for full details)." This suggests a lack of careful proofreading. The authors should ensure consistency in citations and references. 
Questions For Authors: What is the primary motivation for Equation (1), and could it be presented in a more intuitive manner? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer bhiT for their suggestions and the positive feedback, that our work has "strong theoretical contributions with clear mathematical formulations". **1. Test on larger datasets (N > 1000):** **R1:** We want to highlight that several experiments were done for datasets with N>1000: Wine Quality (N=6,497), Yeast (N=1,484), Predict Students’ Dropout and Academic Success (N=4,424), and News20 (with N=2,000). We refer Reviewer bhiT to Appendix E.2, where all of the details of our real-world datasets are contained. Given that these are all real-world datasets, we thought that perhaps the reviewer desired experiments also using large-sample synthetic datasets, as our synthetic dataset experiments were only initially done for N up to 300. Additional large-sample experiments on synthetic datasets can be found in Section 1 of https://anonymous.4open.science/r/ICML-Kmeans-F32E/Additional_Experiments.pdf. These experiments were done with datasets of up to 20,000 samples. Given that our C-local method produces more modest improvements compared to our D-local method, we did these experiments with C-LO-K-means, with the K-means++ initialization. We observe that even with these large-sample datasets, C-LO-K-means can outperform the standard K-means algorithm. **2. Comparing with alternative clustering approaches (e.g., spectral clustering, Gaussian mixture models):** **R2:** This work is focused on improving Lloyd's algorithm. We attempted to consider a general setting (weighted K-means using Bregman divergences), but unfortunately not all clustering methods could be considered in this first paper. We do hope to generalize our results though to other clustering problems in future work. 
Spectral clustering for graphs, which clusters data points based on their connectivity, can be accomplished by clustering the eigenvectors of the K smallest eigenvalues of the Laplacian using the K-means algorithm, so for this problem, our method can be applied to improve the clustering stage. There are similarities between the K-means algorithm and the EM algorithm, used for estimating Gaussian mixture models, i.e., in the E step, data points are assigned to clusters, and the M step computes the cluster centers. The possible extension of our work to EM algorithms naturally interests us. Knowledge that the data points are sampled from a mixture of Gaussian distributions is fully exploited in the EM algorithm to estimate and maximize the model parameters' log-likelihood function, whereas Lloyd's algorithm is much simpler (e.g. no covariance estimation). Our method is not attempting to change this about Lloyd's algorithm, so we can still expect its performance on GMM to be better than Lloyd's algorithm, but not to be able to compare to the EM algorithm in terms of the complexity of clusters that it can generate. **3. Equation 1 motivation:** **R3:** Given that the motivation for this work is about the local optimality of Lloyd's algorithm, it was most convenient to formulate the K-means problem as a mathematical optimization problem for our analysis. Similar formulations can be found, for example, in (Selim and Ismail, K-Means-Type Algorithms: A Generalized Convergence Theorem and Characterization of Local Optimality, 1984, Equation 1) and (Peng and Xia, A Cutting Algorithm for the Minimum Sum-of-Squared Error Clustering, 2005, Equation 1.2). Given that we study both continuous and discrete local optimality, we wanted to clearly isolate the difference between (P1) and its continuous relaxation (P2), which motivated the separation of constraints into sets S1 in (P1) and S2 in (P2). 
Given that we consider general Bregman divergences, we also needed to consider the domain of the cluster centers, written as $R$, which is no longer always simply $\mathbb{R}^d$ as is the case with the squared Euclidean distance. **4. Appendix C reference:** **R4:** We apologize for the confusion regarding "(see Appendix C for full details)" in Appendix C. In the body we tried to include as much of our full counterexample as possible, which is contained in the appendix, so this exact paragraph is contained in both places. We also noticed this, and removed it from our current version of the paper. Given that our main focus was on the technical correctness of our claims, this type of silly mistake was able to sneak through. We hope that we have properly answered all of your questions, and that in particular, you are satisfied with our use of N>1000-sample datasets in our experiments. We would greatly appreciate it if you would be able to consider increasing your overall recommendation score.
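To illustrate the role of the Bregman divergence and the center domain R discussed above, here is a minimal sketch of Lloyd's algorithm with a pluggable divergence (our own illustrative code, not the paper's implementation). It relies on the well-known fact (Banerjee et al., 2005) that for any Bregman divergence the assignment step changes but the center update remains the arithmetic mean; for generalized KL, the domain R is the positive orthant rather than all of R^d:

```python
import numpy as np

def d_sqeuclid(x, c):
    return float(np.sum((x - c) ** 2))

def d_gen_kl(x, c):
    # Generalized KL (I-divergence): requires x, c > 0, so the center
    # domain R is the positive orthant rather than all of R^d.
    return float(np.sum(x * np.log(x / c) - x + c))

def lloyd_bregman(X, centers, d, iters=20):
    """Lloyd's algorithm with a pluggable Bregman divergence d(x, c):
    the assignment step uses d, the center update is always the mean."""
    for _ in range(iters):
        labels = np.array([min(range(len(centers)),
                               key=lambda j: d(x, centers[j])) for x in X])
        centers = np.array([X[labels == j].mean(axis=0)
                            for j in range(len(centers))])
    return labels, centers

X = np.array([[1.0], [1.2], [8.0], [9.0]])
labels, centers = lloyd_bregman(X, np.array([[1.0], [9.0]]), d_gen_kl)
print(labels.tolist())  # [0, 0, 1, 1]
```

Swapping `d_gen_kl` for `d_sqeuclid` recovers standard K-means on the same data, with the same final clustering here.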
Summary: This paper considers a (natural) notion of local-optimality for the k-means problem, and shows that Lloyd's algorithm can lead to solutions that are not locally optimal. Generally when anyone discusses Lloyd's algorithm, they often claim that Lloyd's gets stuck in a "local minima" so this result is interesting. The paper further shows the necessary and sufficient conditions for a solution to be locally optimal and proposes a simple modification of Lloyd's algorithm, basically by augmenting it with a local-search step at the end, to provide an algorithm with guaranteed local minima. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: I went over the proofs and they mostly seem correct. It's unlikely there is any major flaw in the claims... Experimental Designs Or Analyses: Yes Supplementary Material: Yes, the first page of the Supplementary Material. Relation To Broader Scientific Literature: Yes, the paper's contributions are related to the scientific literature. At this point, Lloyd's algorithm is so commonly encountered that giving a link to a single (or a few) specific paper is not required. Essential References Not Discussed: There is a HUGE swath of work related to Local search algorithms for the k-means problem, starting with the paper *A local search approximation algorithm for k-means clustering* by Kanungo et.al (2004). Given that the paper (essentially) proposes a local-search to be done after Lloyd's iterations converge, it makes sense that the paper at least discusses how prior works on Local Search for k-means are related to this. Honestly, it felt like this paper could have come out in the 1980's (soon after the Selim and Ismail 1984 paper) and there's no reason to believe that it would have been very different... Other Strengths And Weaknesses: Yes, the contributions are somewhat interesting but I feel it falls short of an ICML 2025 paper... 
Other Comments Or Suggestions: Line 25 on the right hand paragraph should be *Grunau and Rozhon confirm that* instead of *Gruanu and Bock confirm that* Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and suggestions, and are happy that they found our result to be interesting. **1. Local search algorithms:** **R1:** Similar to K-means++, Kanungo et al. present a heuristic to initialize centers, with a guarantee that the objective value (distortion) of their initialization is no greater than 9+epsilon times worse than the optimal objective function value. After using their local-search initialization, they suggest to use Lloyd's algorithm. This work is complementary to ours, as it is focused on how to start solving the K-means problem, whereas our work can be viewed as how to finish solving the K-means problem. Our work is not dependent on how the K-means solution is initialized so, following scikit-learn, we considered both K-means++ and randomly choosing data points as centers, which seem to be the most popular choices. K-means++ also bounds its initialization, though in expectation and as a function of K. How to initialize centers is adjacent to our work, though given the natural trade-off between speed and accuracy, the simplicity of K-means++ likely plays a major role in its popularity, as it is very easy to implement, whereas in Kanungo et al.'s implementation of their method in their Experimental Results section, they were required to make simplifications to their method, seemingly losing its theoretical guarantees. In particular, Kanungo et al.'s method defines a large set of candidate centers C such that it contains centers which can form an $\epsilon$-approximation. Initializing S as a random sampling of K cluster centers from C, their approach consists of randomly swapping out clusters from S with clusters from C and seeing if the distortion is improved.
This work presents their results in terms of "stability", but this should not be confused with some notion of the local optimality of the K-means problem, as it is in reference to only their initialization heuristic and the swapping of candidate cluster centers. Our method on the other hand is not a heuristic. The "search" aspect of our method, if we want to call it that, deterministically verifies if the condition for local optimality of the K-means problem holds after Lloyd's algorithm has terminated. If we find that it has not converged to a local minimum, meaning that we have found that the objective function is guaranteed to strictly decrease by moving a point to a different cluster, we do this operation, and then continue running Lloyd's algorithm. In Kanungo et al., they assume that "Lloyd's algorithm eventually converges to a locally optimal solution", so our method can be used to guarantee that this holds, perhaps strengthening the arguments in their work. **2. 1980's paper:** **R2:** We agree, in that it is remarkable that after over 40 years we are able to point out an error in the convergence analysis of the K-means algorithm and properly address the non-convergence of this method. We think that our work is important for the community to show that these types of "folklore" results cannot always be blindly trusted. **3. Grunau and Rozhon:** **R3:** Thank you for pointing that out. We have made the correction. **4. Falls short of an ICML 2025 paper:** **R4:** Our work brings a clear and deeper understanding of the K-means algorithm, and presents a simple method to improve its solution while guaranteeing local optimality. Given the large number of methods which use Lloyd's algorithm as a base to solve the K-means problem, and it being itself "by far the most popular clustering algorithm used in scientific and industrial applications" (Pavel Berkhin.
Survey of clustering data mining techniques, 2002), we believe our work is important and has significant impact. We would be happy to answer any further questions, but if we have satisfied your concerns we would appreciate it if you would consider increasing your overall recommendation of 2.
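For concreteness, the deterministic verification described above boils down to detecting points whose two nearest centers are equidistant after Lloyd's algorithm stops; a minimal sketch for the squared Euclidean case (our own simplification for this discussion, not Function 1 verbatim):

```python
import numpy as np

def equidistant_ties(X, centers, tol=1e-9):
    """Return (point index, (cluster, cluster)) for every point whose two
    nearest centers are (numerically) equidistant -- exactly the situation
    in which plain Lloyd's convergence can stop short of a local optimum."""
    ties = []
    for n, x in enumerate(X):
        d = np.sum((x - centers) ** 2, axis=1)  # squared distances to centers
        order = np.argsort(d)
        if abs(d[order[0]] - d[order[1]]) <= tol:
            ties.append((n, (int(order[0]), int(order[1]))))
    return ties

# 1D example: the middle point sits exactly between the two centers.
X = np.array([[0.0], [2.0], [4.0]])
centers = np.array([[1.0], [3.0]])
print(equidistant_ties(X, centers))  # [(1, (0, 1))]
```

The check is a single pass over the points, so it does not increase the per-iteration complexity of Lloyd's algorithm.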
Summary: The paper shows that the traditional K-means algorithm does not always converge to a local optimum (by a 1D counterexample). The paper proves the conditions for K-means to converge to a local optimum. By modifying the termination conditions of K-means (adding a new step), we can guarantee convergence to either a continuous (C-local) or discrete (D-local) local optimum. Claims And Evidence: The claims are well-supported by theoretical analysis and experiments. Methods And Evaluation Criteria: This paper is a simple modification based on existing algorithms (K-means, K-means++). The experiments were tested on synthetic and real-world datasets. Overall, the method and evaluation both make sense. Theoretical Claims: I checked the proof and it looks solid. Experimental Designs Or Analyses: Overall, the experiments covered all the theoretical claims. I'm not very familiar with the more detailed experimental design so I can't comment. I haven't tried to reproduce the result. Supplementary Material: The proof. Relation To Broader Scientific Literature: K-means has some variants, such as X-means and G-means: Pelleg, Dan, and Andrew Moore. "X-means: Extending K-means with Efficient Estimation of the Number of Clusters." ICML’00. Citeseer, 2000. Hamerly, Greg, and Charles Elkan. "Learning the k in k-means." Advances in neural information processing systems 16 (2003). This remains an open question: Will they converge to a local optimum? Are there similar improvements? Can the authors comment from their experience on the performance of such algorithms? Essential References Not Discussed: None. It is just a simple modification of the K-means algorithm. Other Strengths And Weaknesses: Strengths: -very natural setting, and very natural optimization question for k-means. -This paper gives the first rigorous disproof of K-means’ local optimality and a practical fix. -The algorithms and conclusions are simple and elegant. 
One possible weakness is that the improvement from C-LO is not significant, while D-LO adds considerable computational overhead. Other Comments Or Suggestions: Overall, I believe this is a solid contribution. Questions For Authors: One quick question: Isn't the modification essentially a tie-breaker? Could randomly selecting neighbors/SGD also make the K-means algorithm converge to the local optimum with high probability? (Your algorithm is deterministic, which is an advantage. It would be better if there is a discussion on randomness.) Have people studied this version and what is it known as in the literature? It seems like a simple modification that people must have studied? Thanks! Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful review and positive comments, namely that "the claims are well-supported by theoretical analysis and experiments", "the algorithms and conclusions are simple and elegant", and that "overall...this is a solid contribution". **1. X-means & G-means:** **R1:** In our work, which considers the classic K-means problem, we assume that K is given. X-means tries to find the optimal K based on the Bayesian information criterion (BIC). The goal of their work is somewhat adjacent to ours, but we verified that our algorithm can be used within their method: Step 1 (Improve-Params) consists of running conventional K-means to convergence, so this can be improved using our method. Step 2 (Improve-Structure) splits clusters in two, does local 2-means for each pair of clusters, which again can use our method, and then uses BIC to determine whether to keep the 2 children clusters or the original cluster. This work is using a heuristic method, searching for the best K over a given range, based on a statistical criterion. We note that in terms of minimizing the clustering error, one would always choose the largest possible K, so this work does not present any type of "local optimality guarantee" for the choice of K from an optimization perspective, nor for the K-means problem for the chosen K. As described in their paper, G-means is another "wrapper around K-means", trying to find the appropriate choice of K by running the K-means algorithm for consecutively higher K until a statistical test is satisfied. Somewhat similarly to X-means, clusters are split into two, with their centers computed using K-means, so our K-means algorithm can also be applied within the G-means algorithm to improve the accuracy of their method. 
G-means tests whether the within-cluster data is sufficiently Gaussian, hence it is highly dependent on the squared Euclidean loss, whereas our work does not rely on any assumptions on the distribution of the underlying data, while considering general Bregman divergences. Similarly, X-means, using BIC, requires the log-likelihood of the data, which they calculate assuming the data is spherical Gaussian. **2. Improvement using C-LO & computation overhead of D-LO:** **R2:** Up until now there was no simple method to verify the quality of the solution of Lloyd's algorithm. Generating a C-local minimum is fast, and if Lloyd's algorithm has converged to a C-local minimum, it only requires a single call to Function 1 to guarantee it. D-local is a stronger notion of optimality than C-local (Proposition 2.5), but naturally it is generally slower. The only difference between our methods and Lloyd's algorithm occurs after Lloyd's algorithm has converged. If Lloyd's algorithm does not converge to a local minimum, our methods will perform additional iterations which are all guaranteed to strictly decrease the objective function. Therefore, D-local's potentially slow performance can be controlled by setting a maximum iteration limit. With any fixed time or iteration budget, our methods will perform as well as Lloyd's algorithm: If Lloyd's algorithm converges within the budget, our methods will output a solution with a lower objective function if they can perform additional iterations, or else our methods' solutions will match Lloyd's. If Lloyd's does not converge in time, our methods' solutions will again exactly match Lloyd's algorithm. In order to directly improve the computational overhead of D-LO, we developed an Accelerated D-LO algorithm (Accel-D-LO), see Section 3 of https://anonymous.4open.science/r/ICML-Kmeans-F32E/Additional_Experiments.pdf, where this new heuristic is tested, with a demonstration of the previous paragraph's message.
When running D-LO-K-means, instead of simply choosing the first value of $n$ and $k_2$ such that $\Delta_1(n,k_1,k_2)<0$ in Function 2, Accel-D-LO finds the $n$ and $k_2$ which minimize $\Delta_1(n,k_1,k_2)$, moving the cluster assignment to the adjacent vertex that decreases the objective function value the most. We observe that this simple heuristic speeds up D-LO-K-means 2-3X while still guaranteeing convergence to a D-local minimum. Since our method is a slight modification of the K-means algorithm, we also direct Reviewer BPFU to our rebuttal for Reviewer fHBw, where we discuss how our method is also compatible with techniques that can speed up the K-means algorithm such as using coresets and Elkan's method. **3. Randomly selecting neighbors/SGD for tie-breaking:** **R3:** Our focus on a deterministic rule was to present the simplest algorithm with our desired theoretical guarantees, but yes, finding all (or a subset) of tie-breakers and then randomly selecting one would still maintain our convergence guarantees. In our initial algorithms, we simply used the first tie-breaker that we found. We refer Reviewer BPFU to our rebuttal for Reviewer fHBw where we discuss minibatch K-means, which is an SGD-type method for the K-means problem.
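To make the best-improvement step concrete, here is a minimal sketch of the kind of move selection Accel-D-LO is described as performing: among all (point, cluster) pairs, pick the reassignment that decreases the K-means objective the most. The function name is illustrative, and the use of the standard squared-Euclidean single-point move identity (which accounts for the centroid shift) is our assumption about the intended computation, not the authors' Function 2:

```python
import numpy as np

def best_improvement_move(X, labels, centers):
    """Find the single-point reassignment that most decreases the K-means
    objective (illustrative sketch of the Accel-D-LO idea)."""
    best = (0.0, None, None)  # (objective change, point index, new cluster)
    counts = np.bincount(labels, minlength=len(centers)).astype(float)
    for n in range(len(X)):
        k1 = labels[n]
        if counts[k1] <= 1:
            continue  # never empty a cluster
        for k2 in range(len(centers)):
            if k2 == k1:
                continue
            # Exact change in objective when moving X[n] from k1 to k2,
            # using the standard single-point move identity.
            d_out = counts[k1] / (counts[k1] - 1) * np.sum((X[n] - centers[k1]) ** 2)
            d_in = counts[k2] / (counts[k2] + 1) * np.sum((X[n] - centers[k2]) ** 2)
            delta = d_in - d_out
            if delta < best[0]:
                best = (delta, n, k2)
    return best  # delta < 0 certifies a strictly improving move
```

A negative returned delta certifies a strictly improving move even after Lloyd's algorithm has converged, which is what distinguishes the post-convergence phase of the method from plain Lloyd's.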
Beyond Communication Overhead: A Multilevel Monte Carlo Approach for Mitigating Compression Bias in Distributed Learning
Accept (poster)
Summary: The paper introduces a Multilevel Monte Carlo compression scheme that leverages biased compressors to construct unbiased gradient estimates. The proposed approach aims to combine the empirical efficiency of biased compressors (Top-k, bitwise compression) with the theoretical guarantees of unbiased methods. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: yes Experimental Designs Or Analyses: yes Supplementary Material: no Relation To Broader Scientific Literature: satisfactory Essential References Not Discussed: none Other Strengths And Weaknesses: Strength: 1. The paper is well-written and easy to follow. 2. The paper introduces an innovative way to bridge biased and unbiased compression techniques. 3. The authors provide thorough theoretical analysis of their method, including detailed proofs of unbiasedness and variance bounds. Weakness: 1. The paper does not provide explicit per-iteration time cost comparisons. Other Comments Or Suggestions: none Questions For Authors: 1. As acknowledged in the paper, the MLMC approach trades bias for increased variance, which might impact performance in some scenarios, such as setups with a small number of machines. Is this the reason why, with 4 machines, the performance gain is not that large? 2. Can the authors provide per-iteration time costs for different schemes? 3. Is MLMC in Figure 3 based on Algorithm 2 or 3? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their positive evaluation of our paper and for the constructive feedback. Below, we address the questions raised: **1. Scaling with number of machines** You are right that the performance gains from MLMC grow with the level of parallelization. When using only 4 machines, variance reduction is not as effective, so the gains are naturally smaller. However, as shown in our scalability plots (e.g., Figure 1 ($M=4$) compared to Figure 2 ($M=32$)), MLMC achieves significant speedups, smaller errors, and improved communication efficiency in large-scale distributed settings. This is a key strength of our method. It retains the empirical efficiency of biased compressors while providing the unbiasedness and theoretical guarantees needed for scalable, stable learning in high-parallelism regimes. Moreover, higher parallelization induces a better variance reduction effect, which makes our MLMC estimator even better. We will clarify this in the revised paper to better highlight the scaling advantages of MLMC. **2. Computational overhead** That is a good point. The computational overhead of MLMC is comparable to standard methods such as Top-k and AdaGrad, and is often negligible relative to the overall training time. Similar works also focus less on the computational overhead since it is often negligible compared to the overhead introduced by the communication. Specifically, using top-$k$, for example, incurs $O(d\log(k))$ computational complexity per iteration and per machine while our adaptive MLMC method (Alg. 3) incurs $O(d\log(d))$, which is a very small difference in practical scenarios. In more detail, using top-$k$ requires finding the $k$ largest elements, which costs $O(d\cdot\log(k))$ in each iteration and for each machine. In contrast, our adaptive MLMC method (Alg. 
3) with top-$k$ requires sorting the vector first, costing $O(d\cdot\log(d))$, and computing the probabilities and constructing the MLMC estimator, which costs $O(d)$ in total (for computing the norm of the vector, similar to AdaGrad, and for picking the $l$-th largest element, which costs $O(1)$ since the vector is sorted). However, you are right that this is worth discussing to improve clarity, and we will add it to our paper. **3. Clarification on Figure 3** Thank you for pointing this out. The results in Figure 3 correspond to Algorithm 2. We will clarify this in the figure caption and text to avoid confusion. **4. Experiments** We ran new experiments, including NLP experiments using BERT on SST-2. These are anonymously available at "https://anonymous.4open.science/r/ICML2025MLMC-5346". Thank you again for your constructive feedback! --- Rebuttal Comment 1.1: Comment: I thank the authors for the response and providing the new comments. I like how the paper combines the empirical efficiency of biased compressors with theoretical guarantees of unbiased ones (and goes beyond importance sampling). The response clarified my concerns. I will maintain my score. --- Reply to Comment 1.1.1: Comment: Dear reviewer, We thank you sincerely for your comment and for acknowledging the novelty and contribution of our paper beyond importance sampling.
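The per-iteration construction described in the rebuttal above (sort the vector once in $O(d\log d)$, then sample one compression level with probability proportional to $\|g^l - g^{l-1}\|$, which for top-$k$ levels is the $l$-th largest magnitude) can be sketched as a single-sample estimator. This is our illustration under the convention $g^0 = 0$, with the proportional probability rule as an assumption, not the authors' exact Algorithm 3:

```python
import numpy as np

def mlmc_topk_estimate(g, rng):
    """Single-sample MLMC estimate of g from top-l compressors (sketch).
    Level l keeps the l largest-magnitude entries, so g^l - g^(l-1) is a
    single coordinate and ||g^l - g^(l-1)|| is the l-th largest magnitude."""
    order = np.argsort(-np.abs(g))   # one sort: O(d log d)
    mags = np.abs(g[order])
    total = mags.sum()
    if total == 0:
        return g.copy()
    p = mags / total                 # p_l proportional to ||g^l - g^(l-1)||
    l = rng.choice(len(g), p=p)      # sample a level
    est = np.zeros_like(g)
    idx = order[l]
    est[idx] = g[idx] / p[l]         # (g^l - g^(l-1)) / p_l
    return est                       # unbiased: the levels telescope to g
```

Note that for top-$k$ levels this collapses to a coordinate importance-sampling scheme, which is exactly the reduction discussed in another review thread below; the sketch also makes the cost claim visible, since everything after the single sort is $O(d)$ plus an $O(1)$ lookup.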
Summary: The work proposes to consider Multilevel Monte Carlo (MLMC) in distributed learning to mitigate the problems with the analysis of biased compressors. The work introduces a novel Multilevel Monte Carlo (MLMC) compression scheme that leverages biased compressors to construct statistically unbiased estimates. Claims And Evidence: The paper in general is written pretty well. **Major:** 1. Unfortunately, I have concerns about the paper. The authors highlighted the possibility of using an interesting mechanism. However, the proof presented in Parallelization 3.4, the second paragraph, is not complete. The MLMC is unbiased and satisfies the Assumption hidden in Line 117, but nothing has been analyzed in terms of variance with respect to Assumption 2.2. For me, the proof is a bit artificial and the authors should elaborate way more on the analysis. Essentially, I don't like the fact that the original estimator is replaced by another and nothing has been said in terms of its variance. 2. I'm pretty skeptical that, if one uses L=2 with the identity mapping for L=2 and the TopK[k=1] compressor for L=1, the \alpha=k/d does not come into the rate (2) at all. However, the authors claim: "Note that since our MLMC gradient estimates are unbiased, a similar error bound to Eq. (2) holds" 3. It's great that you have auxiliary Lemmas, but please formulate and prove the convergence theorem in detail (either for the convex or non-convex case). **Minor:** 4. Please use the notation in Line 135 to highlight the estimator of \nabla f_i. Your notation is slightly overloaded because you have both f_i(x) and f_i(x,z) 5. Please elaborate more on the fact that rate (2) is optimal for the convex setting in terms of the rate for Stochastic Gradient Descent. 6. Please, if Assumption 3.2 is too restrictive, use "Ahmed Khaled, Othmane Sebbouh, Nicolas Loizou, Robert M. Gower, and Peter Richtarik, Unified Analysis of Stochastic Gradient Methods for Composite Convex and Smooth Optimization, arXiv:2006.11573, 2020" 7.
Please elaborate more on the rate for non-convex cases. The rates 1/T to 1/sqrt{T} depend on the assumptions and methods. 8. > This way, although the compressed gradients can be biased, their MLMC estimators are always unbiased. I can take the expectation of (5) and get line 187, right column. But whether the estimator is biased or not will depend on X^L. (please rephrase) 9. Please be more concrete in Definition 3.1 and specify that C^i, i \in [L], i \ne L can be any compressor (1) or (2). 10. Please add experiments in the convex setting (quadratics or logistic regression) by selecting the step-size according to theory, to demonstrate the correctness of your method. Methods And Evaluation Criteria: In the applied optimization sense, there are experiments with training ResNet-18. In a more restrictive setting, trying to eliminate humans from the loop during training, there are no such experiments. Theoretical Claims: Appendix A, D. Experimental Designs Or Analyses: Yes. In the applied optimization sense, the methods sound good. Supplementary Material: Yes. Appendix and source code. Relation To Broader Scientific Literature: The proposed methodology is interesting and has serious potential to revolutionize how we think about ways to mitigate problems with the analysis of biased compressors. Essential References Not Discussed: All related works are properly introduced and utilized. Other Strengths And Weaknesses: The paper is well written, but requires more elaborate work on the theory and minor polishing. Other Comments Or Suggestions: No. Questions For Authors: See Claims And Evidence section. Ethical Review Concerns: None. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful and encouraging feedback, and for pointing out areas where the theoretical analysis could be clarified. We respond to each concern below and will revise the paper accordingly. **1. Variance Analysis and Convergence Guarantee** We appreciate your observation regarding the variance and convergence analysis. Regarding the variance, we presented the full derivations of the variance of the MLMC estimators as part of the calculation of the optimal probability distribution (which is optimized to minimize the variance) in Appendices B,C, and D (see Eq. (33, 44, 55)). For a more practical example, in Lemma 3.6 we show that the variance of our MLMC estimator in the exponential distribution case (see Assumption 3.5) is given by $O(1/rs)$, where $r$ is the exponential decay rate of the vector's elements, which is better than the $O(d/s)$ variance of rand-$k$ when $1/r < d$ (i.e., when the decay rate is sufficient, which is the more interesting scenario). Note that when the vector is nearly uniform, we have $1/r \approx d$ and the variances will be comparable in this case, as expected. Regarding convergence, please note that since our MLMC gradient estimators are unbiased by construction, convergence follows by the standard SGD convergence analysis, where the only difference is the additional variance introduced by the compression, as we state in lines 186-200. However, we agree that making these derivations explicit would strengthen the presentation, and we will add this to the paper as you suggested to make it clearer. Specifically, the formal convergence theorem for Alg. 2 and Alg. 3 will hold under Assumptions 2.1 (smoothness) and 2.2 (bounded variance) and will guarantee similar error bounds (for the convex and nonconvex cases) as the ones in Theorem 2.1 and Eq. (2), only with $\sigma_{comp}+\sigma$ in place of $\sigma$, where $\sigma_{comp}$ depends on the compressor and thus on $\alpha^l, l\in[L]$ (see e.g. 
Eq. (60) in Appendix D). We formalize the convergence theorem as follows. Theorem (convex case). Under Assumptions 2.1 (smoothness) and 2.2 (bounded variance), Alg. 2 (nonadaptive MLMC compression) guarantees the following error bound: $O(\frac{1}{T}+\frac{\sigma_{comp}+\sigma}{\sqrt{MT}})$. Although the proof closely follows the standard SGD convergence proof, we will formalize it and add it to the paper for completeness. A similar theorem and proof follow for the nonconvex case, and we will add them as well. **2. Bounds dependence on compression constants** We believe the reviewer’s concern refers to the apparent disappearance of level-specific compression constants (i.e., $\alpha_{t,i}^l$) in the final rate. This is a good point. As we mentioned in the previous point, these constants appear in the variance introduced by compression, i.e., in $\sigma_{comp}$ (see, e.g., Eq. (60) in Appendix D), which appears in the final convergence rate. We agree with the reviewer's suggestion to add this to the main paper to make it clearer, and we will incorporate this in the final version. **3. Minor Points** We thank the reviewer for the helpful suggestions regarding notation, definitions, and related clarity issues. We will revise the manuscript accordingly to improve clarity. * You are correct that the unbiasedness of our MLMC estimator depends on the highest level, $L$. Since we define $C^L(v):=v$, i.e., there is no compression, our MLMC method produces unbiased estimates of the true gradient. * Regarding the convergence rates of convex and non-convex SGD, we formalized the assumptions in our paper, but we will elaborate more on this and on the optimality of the bounds to improve clarity, as you suggested. * Regarding eliminating humans from the loop in experiments, we ran additional NLP experiments using the AdamW optimizer, which employs an adaptive learning rate and alleviates the need for extensive learning rate tuning.
The results are anonymously available at "https://anonymous.4open.science/r/ICML2025MLMC-5346". Thank you again for your constructive feedback!
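For concreteness, the unbiasedness step that the convergence claim in point 1 rests on can be written as a one-line telescoping identity. This is a standard single-sample MLMC calculation under the conventions $g^0 := 0$ and $C^L(v) := v$ mentioned above, and is our sketch rather than a quotation of the paper's derivation (a level $J$ is sampled with probabilities $p_l > 0$):

$$
\mathbb{E}\!\left[\hat g\right]
  = \mathbb{E}\!\left[\frac{g^{J}-g^{J-1}}{p_{J}}\right]
  = \sum_{l=1}^{L} p_l \,\frac{g^{l}-g^{l-1}}{p_l}
  = g^{L} = \nabla f(x),
\qquad
\mathbb{E}\!\left[\|\hat g\|^{2}\right] = \sum_{l=1}^{L} \frac{\|g^{l}-g^{l-1}\|^{2}}{p_l}.
$$

Minimizing the second moment over $\{p_l\}$ on the simplex gives $p_l \propto \|g^{l}-g^{l-1}\|$ by Cauchy-Schwarz, which is consistent with the adaptive probability rule discussed for Algorithm 3; whether this matches the paper's Eqs. (33, 44, 55) term for term is our assumption.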
Summary: The article presents a new compression method that uses the MLMC algorithm to turn biased compressors into unbiased ones. Claims And Evidence: The claims in the paper are correct and verified. Methods And Evaluation Criteria: The proposed methods are proved under generally accepted assumptions on the target function. The algorithms are validated on the ResNet + CIFAR-10 problem, which is common. Theoretical Claims: The proofs and facts appear to be correct. But I'm left unclear about one thing that seems like it should definitely be clarified: how do these approaches differ from importance sampling (see Sec 2.2 from Beznosikov et al)? It looks like all these compressors can be reduced to a simpler form. Let me provide examples: __Bit-wise compressors:__ Here, the difference $g^l - g^{l-1}$ is used, and from the proposed compressor, it follows that $C(g) = \frac{1}{p^l} (-1)^{b_0} b_l 2^{-l}$, meaning that with probability $\sim 2^{-l}$, the $l$-th bit, multiplied by $2^l$, is sent. Essentially, we assign weights from the simplex to all bits and sample them non-uniformly, but according to some prior distribution $p$. __TopK:__ Similarly, we send the coordinate $j$ with probability $\sim |g_j|$, as follows from formula (11). It's not entirely clear why it's written so complicatedly, as it is essentially equivalent to: $p_j = |g_j| / \| g \|_1$. Again, we assign weights from the simplex to each coordinate and sample the coordinate according to $p$, resulting in a regular unbiased compressor. Therefore, we use not a uniform distribution but $p$. According to my calculations, for such a compressor $\omega = \sum_j 1/p_j$. And I don't understand why it is a new approach. Maybe I'm wrong! I think it's important to explain! Experimental Designs Or Analyses: 1) Basic ResNet18+CIFAR10 training gives more than 90 percent on test. Such experiments do not make sense: we have lost 20% of quality for all operators.
It seems that if we use less aggressive compression, MLMC will lose to TopK. 2) Please add to the comparison operators that compute the importance sampling of coordinates. 3) Still, ResNet+CIFAR10, although a classic benchmark, is outdated. It would be interesting to see heavier tasks that require real distributed computation (ResNet on CIFAR trains in a few hours on a laptop). I recommend, for example, BERT training or Llama fine-tuning. Supplementary Material: I briefly checked. Relation To Broader Scientific Literature: The authors propose a new way of compression. It doesn't make a big breakthrough either way. Moreover, the applicability of these approaches and the substantive difference from existing ones are questionable. Essential References Not Discussed: All references necessary for understanding the article are provided. Other Strengths And Weaknesses: I can't recommend for acceptance just yet. Other Comments Or Suggestions: - Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful and detailed feedback. Below, we address each concern and clarify the relationship between our MLMC framework and IS. **1. MLMC vs. Importance Sampling** We thank the reviewer for the insightful observation regarding the similarity between our MLMC construction and importance sampling (IS) in specific settings. That is an excellent point! We agree that in certain simple cases (bit-wise, Top-k) the MLMC estimator indeed reduces to an IS-like scheme, as you correctly pointed out. However, we respectfully argue that MLMC is not merely an instance of IS, but rather a significantly more general and natural framework for constructing unbiased estimators from biased compressors. In fact, IS can be viewed as a special case of MLMC, where sampling is performed non-uniformly over coordinates, as you have stated. MLMC provides a systematic multilevel hierarchy over increasingly accurate (less compressed) estimators, and forms unbiased estimates by applying Monte Carlo sampling over the differences between successive levels. This telescoping structure is particularly well-suited to biased compressors, where compression is naturally available at varying levels of fidelity. **Importantly, MLMC offers several advantages beyond IS:** * It can be applied immediately to any sequence of biased compressors, with no need for any manual design of coordinate-level sampling probabilities or the structure of the communicated entity, which IS requires. For top-$k$, e.g., IS sends $\frac{1}{p_l} \cdot g_l$ w.p. $p_l \propto |g_l|$ to achieve the same result, and while it's straightforward in this case, it might not be as straightforward or even feasible for more complex compressors, as we elaborate below. Also, please note that this intuition regarding IS with top-$k$ was enabled by MLMC. * It is compatible with complex structured compressors that do not admit a coordinate-wise decomposition, and where IS is not naturally defined.
For example, ECUQ [1] and Round-to-Nearest (RTN) [2,3] involve structured quantization (e.g., entropy constraints, grid-based rounding) for which the MLMC framework does not naturally decompose into an IS-like scheme. In such cases, it is unclear whether or how suitable IS can be defined, whereas MLMC applies seamlessly. MLMC enables these compressors to be used in a principled way to construct unbiased estimators, with automatic adaptation over compression levels. * From a practical perspective, MLMC is also more flexible and intuitive: one can simply define a sequence of biased compressors with increasing accuracy, and MLMC provides a plug-and-play mechanism for building an unbiased gradient estimate without needing to manually tune the probabilities or derive the communicated entity. We will revise the manuscript to clarify these points and include a detailed discussion comparing MLMC and IS, including when they coincide and when MLMC provides a strictly richer modeling framework. We thank you again for raising this important point. **2. Experiments** We acknowledge the reviewer’s concern regarding the ResNet18+CIFAR-10 setting. We chose this standard benchmark to align with prior works (e.g., EF21). Regarding accuracy, 90% requires use of Adam, LR scheduling, and more, which we did not employ as our focus was to isolate compression effects. We fully agree that larger-scale settings will better demonstrate the advantages of our method, and we ran additional NLP experiments. The results are available at "https://anonymous.4open.science/r/ICML2025MLMC-5346". **BERT Top-k**: (See repository). We evaluated our adaptive MLMC method using Top-k compression on the SST-2 benchmark using BERT finetuning. We used the AdamW optimizer. The folder includes 2 files showcasing the accuracy vs. \#Gbit communicated, and accuracy vs. iteration (#steps), both for M=4 machines and for k={0.01n, 0.05n, 0.1n, 0.5n}. 
We evaluated our MLMC method against EF21-SGDM, Top-k, Rand-k, and SGD (we keep this terminology, for clarity, with a slight abuse of notation, but note that the underlying optimizer for all is AdamW). As is evident from these plots, our MLMC method enjoys the fastest convergence for the same #Gbit, and it enjoys similar convergence to that of (uncompressed) SGD, and still performs better than other methods, for the same number of steps. These results (in addition to the ones in the paper) demonstrate the strong advantage and efficiency of our method compared to existing methods **even for less aggressive compression**, both in communication efficiency and convergence rate, across different tasks like CV and NLP. Thank you again for your constructive feedback! [1] Dorfman et al., “DoCoFL: Downlink Compression for Cross-Device Federated Learning,” ICML 2023. [2] Gupta et al., “Quantization Robust Federated Learning for Efficient Inference on Heterogeneous Devices,” TMLR 2017. [3] Dettmers et al., “GPT3.int8(): 8-bit Matrix Multiplication for Transformers at Scale,” NeurIPS 2022. --- Rebuttal Comment 1.1: Comment: > IS As I said in the review, I don't see much difference from the IS. The authors' response did not add anything new. There is no compressor in the paper that is not IS. Are there any at all? Moreover, let us look at 220-230 (right): if there are compressors that are MLMC but not IS, then we can't do that (lines 220-230), and we have to compute 2 compression operators instead of one, and we can't say anything about efficiency. Am I right? > Experiments on ResNet I ran a simple experiment with EF21 with a momentum of 0.9, a step size of 0.01, and compression of 1%, and easily reached an accuracy of 85%. In any case, experiments where the accuracy of the final result is 10-20% worse than what can be obtained by simple methods without strong tuning look strange. For me it's like reporting: "all methods are bad, but ours is the best among the worst!" > BERT Top-k: Thank you!
But I can't open the link. I've tried several times, and I don't understand why that is. Are the results of these experiments the same as on ResNet? Is the quality of the training close to good results? None of my questions were addressed in the authors' response. Therefore, I maintain my opinion of rejection. --- Reply to Comment 1.1.1: Comment: Dear reviewer, Thank you for responding to our rebuttal, and for engaging in this discussion with us. **IS vs. MLMC** We reiterate that our MLMC method *strictly generalizes* IS, i.e., there are compressors for which IS is **not naturally defined** while our MLMC method works seamlessly. We provide the following examples: **Round-to-Nearest (RTN) compression** [1,2]: this method quantizes each element by rounding it to the nearest level on a fixed grid. The spacing of this grid is controlled by a quantization step-size. Namely, given a vector $w$, its RTN compression, $\tilde{w}$, is given by: $\tilde{w} = \delta\cdot \mathrm{clip}(\mathrm{round}(w/\delta),-c,c)$, where the function "round" rounds each element to its nearest integer, and the quantization step-size is given by $\delta=\frac{2c}{2^b-1}$, where $b$ is typically $1,2,3,4,\ldots$ A *smaller* $b$ corresponds to *more aggressive* compression. **No** natural IS interpretation exists here. **Entropy-Constrained Uniform Quantization (ECUQ)** [3]: this compressor works by efficiently finding the largest number of uniformly spaced quantization levels for a given vector such that the entropy of the quantized vector (after applying entropy encoding like Huffman coding) stays within a specified bandwidth budget. **No** natural IS interpretation exists here either. Interestingly, the IS interpretation of MLMC compression seems to arise for sparsification-based compression, like top-$k$ or bit-wise compression, but it **does not hold** for quantization-based compression, like RTN or ECUQ.
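Taking the RTN formula quoted above literally, a minimal sketch is a few lines of NumPy. Note that the clip bound $c$ is applied in integer (grid) units, exactly as the formula is stated; whether that matches the conventions of [1,2] is our reading, not something the thread confirms:

```python
import numpy as np

def rtn(w, b, c):
    """Round-to-Nearest quantization as written in the comment above:
    w~ = delta * clip(round(w / delta), -c, c), with delta = 2c / (2**b - 1).
    Smaller b means a coarser grid, i.e., more aggressive compression."""
    delta = 2 * c / (2 ** b - 1)
    return delta * np.clip(np.round(w / delta), -c, c)
```

For example, with $b=2$ and $c=1$ the step-size is $\delta = 2/3$, so every input is snapped to the grid $\{-2/3, 0, 2/3\}$, and out-of-range values saturate at $\pm 2/3$.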
We ran additional experiments on BERT fine-tuning with SST-2 comparing RTN compression with our MLMC method (with RTN-based compression). The levels of our MLMC-RTN are defined by $b$, which appears in $\delta$ and determines the quantization step-size (i.e., the extent of compression). We provide *test accuracy vs. number of steps* and *test accuracy vs. #Gbit communicated* plots for varying levels of $b$ (and hence, compression). See link below. These experiments demonstrate that our MLMC method achieves better final accuracy, faster convergence, and better communication efficiency, even though now the difference $g^l - g^{l-1}$ is not trivial (as it is in, e.g., MLMC-top-$k$). Moreover, regarding the efficiency of $g^l - g^{l-1}$, during our experiments we also calculated the average sampled MLMC level (which we denote in the paper by $l$, and which is equivalent to $b$ in the MLMC-RTN case), and it turns out that the average level sampled is around $b\sim1.2$, **i.e., $g^l - g^{l-1}$ includes only 1-2 distinct values on average**. This makes sense since, by construction, the probability of sampling lower levels (more aggressive compression) is higher than that of higher levels, and this is consistent with classic MLMC methods. This implies that our method mostly samples lower levels (which are much cheaper to communicate) and few higher levels, but it utilizes this information very efficiently to mitigate bias and achieve superior performance across all criteria (accuracy, convergence, and communication efficiency), as our experiments show. Specifically, for additional clarity, we also provide a graph comparing our MLMC-RTN method with RTN with $b=2$. Even though the average level of MLMC-RTN is $b=1.2$ (compared to $b=2$ of regular RTN), and it is thus more communication-efficient, it also achieves better accuracy and convergence. We thank you again for pointing out the connection to IS!
We promise to include these new experiments and a discussion of the connection between IS and our MLMC method in the paper. **ResNet** We thank you for taking the time to run this. Our results on ResNet in our specific setting are consistent with the results obtained in previous work, with similar test accuracy; see, e.g., **Fig. 13, right-most plot** and **Fig. 15, right-most plot** in [4] (EF21). In any case, our NLP experiments achieve good results (more than 90%), and this is a harder setting, which demonstrates that our method works and achieves better performance across different tasks. **Link** The link works for us; it may need time to load (or try a different browser). In any case, we created a new anonymous repository with the results here "https://anonymous.4open.science/r/ICML2025_2-98B2/", and a Dropbox with the results (anonymized account, cannot be traced back to us) here, just in case: https://www.dropbox.com/scl/fo/lmlnm9i4m51cqs185j3wh/AM6WqDl_DTLR4PI5W3dBX1I?rlkey=wr845klp30qd9ghkqmry3krhy&st=uedxj7v8&dl=0 [1] Gupta et al., “Quantization Robust Federated Learning for Efficient Inference on Heterogeneous Devices,” TMLR 2017. [2] Dettmers et al., “GPT3.int8(): 8-bit Matrix Multiplication for Transformers at Scale,” NeurIPS 2022. [3] Dorfman et al., “DoCoFL: Downlink Compression for Cross-Device Federated Learning,” ICML 2023. [4] Richtárik et al. "EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback", NeurIPS 2021.
Summary: This paper introduces a novel Multilevel Monte Carlo (MLMC) compression scheme that leverages biased compressors to construct statistically unbiased estimates. The proposed algorithm effectively bridges the gap between biased and unbiased methods, combining the strengths of both. The empirical results show that the proposed algorithm outperforms the baselines. Theoretical analysis shows that the proposed algorithm can reduce the variance incurred by the compression. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Most of the proposed methods make sense for the problem. For the proposed methods: I think there should be some discussion about the implementation of the proposed algorithm in a real-world distributed environment. I understand that the hardware resources are limited and the experiments in this paper seem to be simulations. However, a discussion of the implementation is still necessary. I have one concern: for Algorithm 3, the adaptive probability distribution requires computing the compression at all levels, hence incurring a heavy computational overhead if the number of levels is large. Theoretical Claims: I've skimmed the proofs and they seem correct to me. Experimental Designs Or Analyses: For the experiments, I have some concerns: 1. The experiments are very small for distributed training. I would recommend CIFAR-100 or even larger datasets such as ImageNet. 2. The experiments are limited to CV models. I would recommend adding some NLP (transformer) experiments. 3. Although not covered by the theoretical analysis of convergence, I would like to see some experiments on how the proposed compressor works with the Adam (actually AdamW) optimizer. 4. All the experiments only show accuracy vs. #Gbit communicated. I strongly recommend adding plots of accuracy vs.
steps, so that we could see the gap between the compressed methods and the optimal (final) accuracy of full-precision SGD. Supplementary Material: I've skimmed the proofs and they seem correct to me. Relation To Broader Scientific Literature: There is nothing related to the broader scientific literature. Essential References Not Discussed: The references look good to me. Other Strengths And Weaknesses: Overall, the idea seems very interesting and makes sense. My major concerns are about the experiments. Other Comments Or Suggestions: Please refer to the comments on the proposed method and experimental designs above. Questions For Authors: Please refer to the comments on the proposed method and experimental designs above, and try to resolve my concerns. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the positive evaluation of our contributions, including the novelty of our MLMC compression scheme and the theoretical analysis. We address the concerns below and will incorporate these improvements into the final version. **1. Implementation** We appreciate the reviewer’s suggestion to discuss real-world implementation. In our experiments, since hardware is limited, we used the "Multiprocessing" package to run multiple processes in parallel, each representing a different machine. This method accurately simulates a real-world parallel optimization scheme that runs on multiple machines. We will add this to the paper. **2. Computational Overhead** That is a good point. However, the adaptive sampling step in Algorithm 3 does **not** introduce significant additional computational overhead. For example, in the case of top-$k$ or $s$-top-$k$, we only need to compute each $\Delta_{t,i}^l$ (which is some norm) once per iteration, similar to what optimizers like AdaGrad already do when computing the full gradient norm. Furthermore, these norms are computed over **disjoint segments**, since $\sqrt{\Delta_{t,i}^l} = ||g_{t,i}^l - g_{t,i}^{l-1}||$, which is equivalent to the absolute value of the $l$-th largest element of $v_{t,i}$ (in top-$k$) or the norm of the segment of length $s$ with the $l$-th largest norm (in $s$-top-$k$). Therefore, the total computational cost is *identical* to that of computing the norm of the *full* gradient, as is done in existing adaptive methods like AdaGrad. A similar smart computation of the probabilities can be done for other compressors. For example, in bit-wise compressors, $g_{t,i}^l - g_{t,i}^{l-1}$ corresponds to the sign-bit and the $l$-th information bit. Moreover, the number of compression levels does not have to be linear in the dimension of the compressed entity, but could be logarithmic (which is less general, but still works). 
This is the "classical" case of MLMC [1], in which the quality of the "levels" (which is *inversely* correlated with the extent of compression in our case) increases exponentially and thus induces a logarithmic number of levels. Also, the computational overhead (even when calculating multiple compressions) is usually negligible compared to the overhead introduced by communication, which is the main motivation behind this work and prior works. This has been discussed extensively in prior work [2,3]. **3. Experiments** We acknowledge that our current experiments are on modest-sized vision tasks, due to limited hardware. Although these are the standard benchmarks used in prior works, we agree that broader empirical validation is beneficial. We ran additional experiments. The results are available at "https://anonymous.4open.science/r/ICML2025MLMC-5346". We recommend downloading the repository. It includes two folders, as we elaborate below: **BERT Top-k**: We evaluated our adaptive MLMC method using Top-k compression on the SST-2 benchmark using BERT finetuning. We used the AdamW optimizer. The folder includes 2 files showcasing accuracy vs. #Gbit communicated and accuracy vs. iteration (#steps), both for M=4 machines and for k={0.01n, 0.05n, 0.1n, 0.5n}. We evaluated our MLMC method against EF21-SGDM, Top-k, Rand-k, and SGD (we keep this terminology, for clarity, with a slight abuse of notation, but note that the underlying optimizer for all is AdamW). As is evident from these plots, our MLMC method enjoys the fastest convergence for the same #Gbit communicated, enjoys convergence similar to that of (uncompressed) SGD, and still performs better than the other methods for the same number of steps. These results (in addition to the ones in the paper) demonstrate a strong advantage and efficiency of our method compared to existing methods, both in communication efficiency and convergence rate, across vastly different tasks like CV and NLP. 
**RESNET CIFAR10 Top-k**: We evaluated our adaptive MLMC method using Top-k compression on CIFAR-10 using ResNet-18 against EF21-SGDM, Top-k, Rand-k, and (uncompressed) SGD. The folder includes 4 files showcasing the accuracy vs. #Gbit and accuracy vs. iteration (#steps), both for M=4 and M=32 machines and for k={0.001n, 0.005n, 0.01n, 0.05n}. Our method enjoys a significant advantage over comparable methods in terms of communication efficiency, convergence speed, and final accuracy. Our method's advantage grows with the number of machines. In the accuracy vs. iteration plots, uncompressed SGD eventually surpasses all methods, as expected, although our method is comparable when compression is not too extreme while other compression methods still experience performance degradation. Thank you again for your constructive feedback! [1] Giles, "Multilevel Monte Carlo methods", 2013. [2] Konecny, "Federated learning: Strategies for improving communication efficiency", 2018. [3] Wang, "A field guide to federated optimization", 2021
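As a concreteness check on the disjoint-segment argument in the rebuttal above, here is a minimal sketch (our illustration, not the paper's implementation) of a single-sample unbiased MLMC estimator whose levels are top-$l$ truncations of a vector: the level increments live on disjoint coordinates, so every $\Delta^l$ is obtained at the cost of one full-norm pass, and the rule $p_l \propto \sqrt{\Delta^l}$ below is a hypothetical stand-in for the paper's adaptive distribution.

```python
import numpy as np

def mlmc_topk_estimate(v, rng):
    """Single-sample MLMC estimator whose levels are top-l truncations of v.

    Level l keeps the l largest-magnitude entries, so the level increment
    g^l - g^{l-1} touches exactly one coordinate and sqrt(Delta_l) equals the
    magnitude of the l-th largest entry. The sampling rule p_l ~ sqrt(Delta_l)
    is a hypothetical choice for illustration only.
    """
    order = np.argsort(-np.abs(v))        # coordinates by decreasing magnitude
    sqrt_deltas = np.abs(v[order])        # all sqrt(Delta_l) in one full pass
    p = sqrt_deltas / sqrt_deltas.sum()   # level-sampling distribution
    l = rng.choice(len(v), p=p)           # pick level l+1, increment at order[l]
    est = np.zeros_like(v)
    est[order[l]] = v[order[l]] / p[l]    # g^0 = 0, so estimate = (g^{l+1} - g^l) / p_l
    return est                            # unbiased: E[est] = sum of increments = v
```

Averaging many draws recovers the original vector, which is the unbiasedness property the rebuttal relies on.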
Convergence of Mean-Field Langevin Stochastic Descent-Ascent for Distributional Minimax Optimization
Accept (spotlight poster)
Summary: This paper studies the mean-field Langevin (stochastic) descent-ascent (MFL-DA) algorithm for solving distributional minimax optimization problems. The authors demonstrate that the infinite-particle limit of discrete-time MFL-DA is able to converge to the unique stationary point of the problem with a convergence rate of $\frac{1}{\epsilon}\log\frac{1}{\epsilon}$ measured in squared 2-Wasserstein distance. The authors show applications of their result to finding mixed Nash equilibria of zero-sum games, generative adversarial networks, and mean-field neural networks. Claims And Evidence: The claims seem clear and well-supported. Methods And Evaluation Criteria: Not applicable since this is a theory paper. Theoretical Claims: I did not go over the details of the proof but the overall claims seem to be consistent with the literature and the proof technique seems sound from a high level. Experimental Designs Or Analyses: Not applicable. Supplementary Material: I only looked at the overall proof strategy and did not review the details. Relation To Broader Scientific Literature: This paper contributes to a long line of work of applications of the mean-field Langevin dynamics for optimization in the space of probability distributions. As discussed in the paper, this is (to my knowledge) the first discrete-time convergence guarantee for mean-field Langevin descent-ascent for solving distributional min-max problems. Both the algorithm and the problem are of significant interest to the community. Essential References Not Discussed: Most essential references are discussed in the paper. I think the authors can also discuss [1] where the mean-field Langevin algorithm is used for optimization over signed measures. Specifically, that paper contains ideas for going beyond pessimistic LSI constant estimates which can be useful for the results here as well, and similar to this paper, they also need a two-timescales approach for their analysis. [1] G. Wang, A. 
Mousavi-Hosseini, L. Chizat. "Mean-Field Langevin Dynamics for Signed Measures via a Bilevel Approach." NeurIPS 2024. Other Strengths And Weaknesses: **Strengths**: Solving minmax optimization on the space of distributions is a fundamental problem, and it is nice to have discrete-time convergence guarantees for the mean-field Langevin descent-ascent algorithm. Also, the paper is mostly well-written and easy to read. **Weakness**: * A main concern for me is that there is almost no discussion on the role of other parameters besides $\epsilon$ in the convergence rate. A major weakness of this type of mean-field Langevin analysis is that convergence requires $\tau$ to be at least linearly small with ambient dimension $d$. When plugged into the pessimistic LSI bound, this implies $\alpha$ that is exponentially small in $d$, and thus a convergence rate that is exponentially large in $d$. This is a drawback of this type of analysis and not a weakness of this paper in particular, but I think it is better to be explicitly discussed. * Similarly, the role of other parameters is not clear/made explicit. For example, is it better to have $\eta_1 \gg \eta_2$ or $\eta_1 \ll \eta_2$? How do quantities like $\alpha$, $\sigma^2$, and $\zeta$ enter the final convergence rate? While these questions might be answered by following certain quantities in the appendix, I think it would be nice to have summaries of the convergence rate in the main text. * The convergence analysis yields a bound on $\mathcal{L}$ and the Wasserstein distance to $\mu^*$ and $\nu^*$. Can such bounds be turned into a bound on the suboptimality $E(\mu_K,\nu_K) - E(\mu^*,\nu^*)$? I am asking this in particular since bounding this suboptimality is possible for distributional minimization problems with mean-field Langevin. * Additional suggestions are discussed below. Other Comments Or Suggestions: Please see below. Questions For Authors: 1. 
It seems that $\frac{1}{\alpha_1}$ should be replaced with $\alpha_1$ in Corollary 1, otherwise we can drive the Wasserstein distances to zero by simply letting $\alpha_1 \to 0$. 2. What is the optimal value of $\lambda$ for Theorems 1, 2, and Corollary 1? 3. Does Assumption 4 hold point-wise for all $\theta$ and $\omega$? I believe the expectation is over the random noise for estimating $g_k$ and $h_k$, which still leaves out $\theta$ and $\omega$. 4. Do we expect the constants that appear in the paper, such as those in Assumptions 1-4 and in Proposition 3 to be dimension-dependent in typical settings? 5. Equation (8) seems to have a typo, why are you using the expectation notation? 6. Typo in the first line of Equation (9), I believe $\nu$ should be $\nu’$. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful and detailed feedback! Below, we address the comments and questions point-by-point. **Pessimistic LSI bound**: We will explicitly comment on the weakness of this type of analysis, and we are trying to overcome this in our ongoing research. **Explicit dependence on other parameters**: In the revised version, we will include a dedicated paragraph discussing the roles of parameters, and here we provide a sketch. **On dimension $d$ and regularization parameters**: We agree that a more explicit discussion of the role of parameters in the convergence rate would be beneficial. Viewing $\tau,\eta_1,\eta_2$ as small numbers, in Appendix A.3 on page 11, the remainders $r_{g2}, r_{h2}$ due to the second moment are $\max\\{O(1),\tau d\\}$; similarly, $r_{g3}, r_{h3}$ are $\max \\{ O(1),\tau^{3/2}d^{3/2} \\} $. Substituting them into $\Gamma_0, \Gamma_1, \Gamma_2$, we get $\Gamma_0=\max\\{O(1), \tau d,\tau^{2}d^{2},\frac{d}{\alpha^{1/2}\tau} \\}$ and $\Gamma_{1(2)}=\max\\{O(1),\tau d,\tau^2d^2\\}$. Substituting these into $R_1$ in (10), we get $R_1=O(\frac{d^2\eta_1}{\tau^3\alpha^3})$. Setting $R_1=\epsilon$ and choosing $\eta_1= O(\frac{\epsilon \tau^3 \alpha^3}{d^2})$, we obtain a sample complexity $K=O(\frac{d^2}{\epsilon \tau^4 \alpha^4}\log \frac{1}{\epsilon})$. Compared with the sample complexity $K=O(\frac{d^2}{\epsilon \tau^2 \alpha^2}\log \frac{1}{\epsilon})$ in [1] for the MFLD of minimization problems, the higher orders of $\tau, \alpha$ arise because MFL-SDA is a two-timescale algorithm: the overall complexity of the algorithm depends on the slower descent step. **On other parameters:** The variances $\zeta,\sigma^2$ only affect the $O(1)$ term in $\Gamma_i$. The parameters $\eta_1,\eta_2$ correspond to the fast descent and fast ascent regimes similar to [3], and thus their choice depends on the instance and the user's emphasis on the descent/ascent part. 
**Another standard of suboptimality**: Our problem is to find a saddle point rather than a maximum (minimum); hence $E(\mu_K,\nu_K)-E(\mu^*,\nu^*)$ may not apply to our analysis. ## Addressing References Not Discussed: Thank you for pointing this out! We will cite this paper in the revision and investigate if their Theorem 5.2 can inspire a better LSI bound for our problem. ## Addressing Questions: **A1.** Thank you for catching this: the Talagrand inequality should be $W_2^2(\mu_k,\mu^*)\leq \frac{2}{\alpha}\mathrm{KL}(\mu_k|\mu^*)$. **A2.** $\lambda$ corresponds to the Lyapunov function similar to [2] [3], and it is not an explicit hyperparameter in our algorithm. **A3.** Yes, it holds point-wise for all $\theta$ and $\omega$. **A4.** See our response on the dimension above. **A5, A6.** Thank you for noticing this. We will correct the notation. We thank you very much again for the constructive feedback and helpful suggestions! ### Reference: [1] Nitanda, A., Wu, D., & Suzuki, T. (2022). Convex analysis of the mean field Langevin dynamics. In International Conference on Artificial Intelligence and Statistics (pp. 9741-9757). PMLR. [2] Lu, Y. (2023). Two-scale gradient descent ascent dynamics finds mixed Nash equilibria of continuous games: A mean-field perspective. In International Conference on Machine Learning (pp. 22790-22811). PMLR. [3] Yang, J., Kiyavash, N., & He, N. (2020). Global convergence and variance-reduced optimization for a class of nonconvex-nonconcave minimax problems. arXiv preprint arXiv:2002.09621.
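The parameter bookkeeping in the rebuttal above reduces to a scalar recursion of the form $\mathcal{L}_{k+1} \le (1-2\eta_1\tau\alpha)\mathcal{L}_k + C\eta_1^2$. The sketch below (with schematic constants of our own choosing, not the paper's) checks numerically that taking $\eta_1 \propto \epsilon$ and $K = \Theta(\frac{1}{\epsilon}\log\frac{1}{\epsilon})$ drives the bound below $\epsilon$.

```python
import math

def iterate_bound(L0, eta, tau, alpha, C, K):
    """Iterate the per-step bound L_{k+1} = (1 - 2*eta*tau*alpha) * L_k + C * eta**2."""
    L = L0
    for _ in range(K):
        L = (1 - 2 * eta * tau * alpha) * L + C * eta ** 2
    return L

def complexity_for(eps, L0=10.0, tau=0.5, alpha=0.5, C=4.0):
    """Choose eta so the bias floor C*eta/(2*tau*alpha) equals eps/2, and K so
    the geometric part (1 - 2*eta*tau*alpha)^K * L0 drops below eps/2;
    this gives K = O((1/eps) * log(1/eps))."""
    eta = eps * tau * alpha / C
    K = math.ceil(math.log(2 * L0 / eps) / (2 * eta * tau * alpha))
    return eta, K
```

The two halves of the bound sum to at most $\epsilon$, which is the mechanism behind the stated $O(\frac{1}{\epsilon}\log\frac{1}{\epsilon})$ sample complexity.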
Summary: This paper studies the convergence rate of discrete-time mean-field Langevin stochastic descent-ascent for min-max problems in distributional optimization under log-Sobolev inequality condition. The authors claim that the derived convergence rate is near-optimal compared to its Euclidean counterpart. The paper includes two examples: zero-sum games and mean-field neural networks. However, no experimental results are provided. Claims And Evidence: The theoretical results are consistent with the authors' claims. Methods And Evaluation Criteria: Not applicable. Theoretical Claims: I did not verify the details of the proofs. Experimental Designs Or Analyses: Not applicable. Supplementary Material: No. Relation To Broader Scientific Literature: The topic is important to optimization theory and has significant relevance to the machine learning community Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. The topic is interesting. 2. The paper is easy to read. Weaknesses: 1. The analysis and results appear elementary and straightforward, closely following previous works such as Kim et al. (2024), Yang et al. (2020), Chen et al. (2022), Suzuki et al. (2023), and Nitanda et al. (2022). Specifically, the variable lambda in the Lyapunov function seems unnecessary, as it can be set to 1 in this case. Given this, the results appear quite direct based on current results in MFLD, limiting the paper’s technical contribution. 2. What are the advantages of the MFL-SDA algorithm compared to MFL-AG and MFL-ABR? The lack of comparison makes the motivation unconvincing. A discussion of their relative strengths and weaknesses would improve clarity. 3. The two provided examples are restricted to two-layer neural networks, and no experimental evidence is given. I would expect more interesting applications and empirical results to strengthen the paper’s impact. 
Other Comments Or Suggestions: In Eq (9), the definitions should read $L_1(u) = \max_{v'} E(u,v') - \min_{u'}\max_{v'} E(u',v')$ and $L_2(u,v) = \max_{v'} E(u,v') - E(u,v)$. Questions For Authors: See my comments in weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments! Below, we respond to the concerns and clarify the novelty and contributions of our work. **On our analysis**: We would like to clarify that both our proof technique and the resulting conclusions differ from the aforementioned papers in several fundamental ways. First, these prior works (*Kim et al. (2024)*, *Chen et al. (2022)*, *Suzuki et al. (2023)*, *Nitanda et al. (2022)*) tackle the discrete-time MFLD problem by first establishing convergence in continuous time (typically via Wasserstein gradient flow) and then controlling the discretization error as a separate step. In contrast, our proof directly analyzes the per-step improvement using Taylor expansion and remainder estimates. This not only provides a more elegant and adaptable proof framework, but also yields stronger results, as detailed in the next point. Moreover, our results also differ in several key aspects. *Kim et al. (2024)* do not study MFL-SDA. To the best of our knowledge, their analysis of other algorithms does not extend to our established convergence rate of MFL-SDA. While *Yang et al. (2020)* study minimax optimization in Euclidean space, and our analysis shares some structural similarities with theirs, the time-discretization error analysis of the probability functional on distributional space requires substantially more sophistication -- we devote pages of proofs to controlling this error, whereas in the Euclidean case, it follows directly from smoothness assumptions. The other cited references--*Chen et al. (2022)*, *Suzuki et al. (2023)*, and *Nitanda et al. (2022)*--focus on minimization problems, whereas minimax problems present more complexity due to the interaction between two players. **On the Lyapunov function**: As you said, $\lambda$ can be fixed to 1. Yet, our chosen form of the Lyapunov function follows what is commonly used in previous works on minimax problems in Euclidean space (*Yang et al. 
(2020)*) and in continuous-time MFLD (*Lu (2023)*). **Comparison with MFL-AG and MFL-ABR:** First, our analysis of stochastic gradient descent-ascent with last-iterate convergence and the inexact gradient analysis closely aligns with practical implementations. Second, MFL-AG achieves a sample complexity $O(\epsilon^{-O(1/\alpha)})$ to reach an $O(\epsilon)$-approximation of the saddle point, where the bound on the LSI constant $\alpha$ can be small--potentially even of order $1/(2+d/2)$. In contrast, our convergence analysis for MFL-DA establishes a sample complexity of $O(\frac{1}{\epsilon}\log\frac{1}{\epsilon})$. MFL-ABR is a double-loop algorithm; it is shown that the outer loop has an $O(\frac{1}{\epsilon}\log\frac{1}{\epsilon})$ sample complexity, but it needs an inner loop and the total complexity is not specified. **On Experimental Results and Applications:** The primary aim of our work is to establish convergence guarantees for commonly used stochastic gradient descent-ascent algorithms. We acknowledge the value of empirical validation, and we have included a numerical experiment based on Example 2 in our paper. We apply our algorithm to the nonlinear instrumental variable (NPIV) regression problem $$ \min_f \max_g E[g(Z)(Y-f(X))-\frac{1}{2}g(Z)^2+\lambda R(f)] $$ where $R$ is a regularizer. Using the problem setup and datasets in [1,2], we compare our algorithm with a classic series-approximation (SA) method [3] based on out-of-sample MSE (lower values indicate better performance) and average R² (higher values indicate better performance). *Engel Curve* | Method | Average MSE | Average R² | | ------------------------- | ----------- | ---------- | | MFL-SDA | **0.00698** | **0.256** | |SA | 0.00730 | 0.218 | *Returns to schooling* | Method | Average MSE | Average R² | | ------------------------- | ----------- | ---------- | | MFL-SDA | **0.1494** | 0.0799 | | SA | 0.1626 | 0.1543 | **4. Clarification of Equation (9):** Good catch! 
We will revise Equation (9) in the updated version. Thank you again for your feedback, and we hope our response clarifies the contribution of our paper. ### References: [1]Hausman, J. A., Newey, W. K., & Powell, J. L. (1995). Nonlinear errors in variables estimation of some Engel curves. *Journal of Econometrics*, *65*(1), 205-233. [2] Card, David. Estimating the return to schooling: Progress on some persistent econometric problems. *Econometrica* 69.5 (2001): 1127-1160. [3] Newey, Whitney K., and James L. Powell. Instrumental variable estimation of nonparametric models. *Econometrica* 71.5 (2003): 1565-1578.
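To make the NPIV saddle objective above concrete, here is a minimal gradient descent-ascent sketch on its linear specialization $f(x)=ax$, $g(z)=bz$ (our illustration with synthetic data; the paper's implementation uses mean-field parameterizations and real datasets).

```python
import numpy as np

def npiv_gda(Z, X, Y, eta=0.1, K=500):
    """Gradient descent-ascent on the linear specialization of the NPIV saddle
    objective  min_a max_b  E[ b*Z*(Y - a*X) - (b*Z)^2 / 2 ],
    i.e. f(x) = a*x and g(z) = b*z, a linear stand-in for the mean-field f, g."""
    Szx, Szy, Szz = (Z * X).mean(), (Z * Y).mean(), (Z * Z).mean()
    a, b = 0.0, 0.0
    for _ in range(K):
        a += eta * b * Szx                    # descent on a: grad_a = -b * E[ZX]
        b += eta * (Szy - a * Szx - b * Szz)  # ascent on b: grad_b = E[Z(Y-aX)] - b * E[Z^2]
    return a                                  # fixed point: b = 0, a = E[ZY]/E[ZX]
```

The saddle point enforces the instrumental moment condition $E[Z(Y-aX)]=0$, so the iterates converge to the classical IV estimate even when $X$ is confounded.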
Summary: This paper analyzes a natural algorithm for distributional min-max optimization, which consists in taking alternating Langevin steps. The main contribution of the paper is theoretical analysis of this algorithm for the case where the gradients are exact as well as the case where the gradients are inexact. Their analysis avoids the more standard approach which first analyzes the continuous-time flow and then bounds the discretization error. The main application is to mean-field networks. Claims And Evidence: I am seriously concerned about their claim, after Theorem 1, that the bias term $R_1$ is $O(\eta_1)$. In fact, when I examine the definitions of $\Gamma_1, \Gamma_2, \Gamma_3$, it looks to me that they are in fact $O(1/\eta_1)$ (this is coming from the final terms in the definitions of $r_{g2}, r_{h2}$). Unless there is a typo, this would seem to indicate that the bias term $R_1$ is in fact $O(1)$, meaning that their main result only shows that the objective doesn't increase more than $O(1)$ throughout the trajectory. I assume that this problem extends to Theorem 2, but the relevant constants don't seem to be defined in Section A.3 (where are they?). Another -- although less critical -- issue is Assumption 3 on the log-Sobolev constants. Although they consider several applications of their results, they don't demonstrate any settings where their assumptions hold. Of course, verifying their assumptions for neural networks is likely far too difficult, but as a sanity check, I would like to have seen an example of a setting where Assumption 3 actually holds. I think this would make the paper stronger. Finally, I am probably confused, but in Corollary 1, I don't understand why there must be unique $\mu^*, \nu^*$ that the algorithm even converges to? What am I missing here? Methods And Evaluation Criteria: Yes. Theoretical Claims: I checked some of the proofs in the appendix, but not all of them and not in full detail. 
I didn't find any serious issues, and at a high level the results seem plausible. The main concern is the dependence of the bias term on the step-sizes $\eta_1, \eta_2$; see above. Experimental Designs Or Analyses: There were no experiments, but I don't think that's necessary. Supplementary Material: See above. Relation To Broader Scientific Literature: The paper is trying to push forward on discrete-time analysis of distributional min-max problems. This is certainly a worthy problem, and it seems their contribution would be novel. However, I am seriously concerned about the bias terms. Essential References Not Discussed: None that I am aware of. Other Strengths And Weaknesses: If the bias term were truly $O(\eta_1)$, the paper would be a strong theoretical contribution. In fact, it would probably even imply a new analysis of discrete-time Langevin dynamics, since that seems to be a special case of their setup. But this is exactly why I am a bit skeptical of their results -- it's hard to imagine that they found a completely new analysis of Langevin dynamics that simultaneously holds in their more general setup. If the bias term is indeed only $O(1)$ then I don't think the paper is strong enough, as it merely asserts the objective doesn't blow up. Other Comments Or Suggestions: - I don't follow equation 24. - Equation 46 uses $\tilde \mu$ without defining it (reading onwards I suppose that it is meant to be a distribution coming from the mean-value theorem). Questions For Authors: Please address the issue about the size of the bias term $R_1$. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your critical feedback, especially your sharp observations about the order of the bias term. Below, we address your concerns. $\newcommand{\bE}{\mathbb{E}} \newcommand{\o}{\omega}$ ## Addressing Weaknesses **Bias term of second moment**: Indeed, the norm control in the submitted version (adopted from [3, Lemma 1]) is sub-optimal. Below, please find a stronger control, adopted from [2, Lemma 1]. The corrected expression for $r_{g2}$ should be $r_{g2}= 2M^2_1+2\tau^2\bE_{\mu_0}[\\|\\theta_0\\|^2_2]+4M^2_1+ \tau(4\eta_2M^2_1+4\tau d)$ and similarly for $r_{h2}$. They are of order $O(1)$, thereby the bias term $R_1$ in Theorem 1 is $O(\eta_1)$. $$ \begin{aligned} &\bE_{\nu_{k+1}}[\\|\o_{k+1}\\|^2]\\\\ =&\bE_{\nu_k\otimes \rho}[\\|\o_k+\eta_2 h_k+\sqrt{2\eta_2\tau}\xi^2_k\\|^2]\\\\ =&\bE_{\nu_k}[\\|\o_k\\|^2]+2\bE_{\nu_k\otimes \rho}[\langle \o_k,\eta_2 (h_k+\tau\o_k-\tau\o_k)+\sqrt{2\eta_2 \tau}\xi^2_k\rangle]+\bE_{\nu_k\otimes \rho}[\\|\eta_2h_k+\sqrt{2\eta_2 \tau} \xi^2_k\\|^2]\\\\ \leq&\bE_{\nu_k}[\\|\o_k\\|^2] + 2\eta_2 M_1 \bE_{\nu_k}[\\|\o_k\\|]-2\eta_2\tau \bE_{\nu_k}[\\|\o_k\\|^2]+2\eta^2_2( M^2_1 + \tau^2 \bE_{\nu_k}[\\|\o_k\\|^2]) +2\eta_2\tau d\\\\ \leq& (1-2\eta_2\tau +\frac{\eta_2\tau}{2}+2\eta^2_2\tau^2) \bE_{\nu_k}[\\|\o_k\\|^2] + 2\eta_2 M^2_1/\tau +2\eta^2_2 M^2_1 + 2\eta_2\tau d\\\\ \leq& (1-\eta_2\tau)\bE_{\nu_k}[\\|\o_k\\|^2] +\eta_2(2M^2_1/\tau+2\eta_2M^2_1+2\tau d), \end{aligned} $$ where the first equality uses the fact that $\xi_k^2$ is an independent zero-mean normal; the first inequality follows the fact that $\\|h_k+\tau\o_k\\| = \\|\nabla \frac{\delta J}{\delta \nu} \[\mu_{k+1}, \nu_k\](\o_k)\\| \leq M_1$; the second inequality is because $2M_1\bE_{\nu_k}[\\|\\o_k\\|]\leq \frac{\tau}{2} (\bE_{\nu_k}[\\|\o_k\\|])^2 + 2M^2_1/\tau \leq \frac{\tau}{2}\bE_{\nu_k}[\\|\o_k\\|^2]+2M^2_1/\tau$ and last inequality holds for $\tau<1/(4\eta_2)$. 
Hence, we obtain $$ \begin{aligned} \bE_{\nu_k}[\\|\o_k\\|^2] & \leq (1-\eta_2\tau)^k \bE_{\nu_0}[\\|\o_0\\|^2]+ 2\eta_2 (M_1^2/\tau + \eta_2 M_1^2 + \tau d )\sum_{j=0}^{k-1}(1 - \eta_2\tau)^j\\\\ &\leq \bE_{\nu_0}[\\|\o_0\\|^2]+ \frac{2(M^2_1/\tau + \eta_2 M^2_1 + \tau d)}{\tau}, \end{aligned} $$ The remaining part of the proof stays the same. Hence, we can directly check that the remaining constants $r_{gi}$, $r_{hi}$ as well as $\Gamma_0$, $\Gamma_1$, $\Gamma_2$ are all of order $O(1)$. Since $\eta_1$ and $\eta_2$ are of the same order, we conclude that the bias term $R_1= \frac{\lambda(\Gamma_2\eta_2^2+(1-2\eta_2\tau\alpha)(\Gamma_1+\Gamma_0)\eta_1^2)+\Gamma_1\eta_1^2}{\eta_1\tau\alpha}$ is of order $O(\eta_1)$. Regarding your question on the definition of constants in (18), they are defined in the proof of Theorem 2 (Line 1214, etc.). We apologize for not explicitly displaying them. **Assumptions of LSI in examples**: Indeed, we have provided two applications in Section 4, with the LSI being verified in Proposition 3 and Proposition 4, respectively. This follows a line of work like [1, proposition 5.1] and [2, Appendix A]. The setups therein are satisfied in our context. **Existence and uniqueness of optimal $\mu^\*,\nu^\*$ :** The primal objective $J(\mu, \nu)$ is convex in $\mu$ and concave in $\nu$. With an additional KL (or entropy) regularization, the regularized objective $E(\mu, \nu)$ becomes strongly convex in $\mu$ and strongly concave in $\nu$. This ensures the existence and uniqueness of the mixed Nash equilibrium $(\mu^*, \nu^*)$; please refer to the detailed discussion before equation (3) in the manuscript as well as [2, Proposition 2.1]. ## Addressing Questions **A1**. 
To get (24), the first line follows directly from equation (23); the second line follows from the definition of $T$; the third line follows from the Taylor expansion of $\log {\rm det}$, where $\bar{\o}_k$ is the point that achieves the mean value; and the fourth line follows from the property of the trace operator and the bound ${\rm Tr}(h_k^2) =\\|\nabla h_k\\|_F^2 = (\\|\nabla^2 \frac{\delta J}{\delta \nu}\\|_F - \tau)^2 \leq (M_2 + \tau)^2$. **A2**. Yes, you're absolutely right. Thank you so much again for your insightful and constructive feedback. We hope our responses alleviate your concerns. ### References: [1] Chizat, L. Mean-field Langevin dynamics: Exponential convergence and annealing. *Transactions on Machine Learning Research*, 2022. [2] Suzuki, T., Wu, D., and Nitanda, A. Mean-field Langevin dynamics: Time-space discretization, stochastic gradient, and variance reduction. *Advances in Neural Information Processing Systems*, 36, 2024. [3] Nitanda, Atsushi, Denny Wu, and Taiji Suzuki. Convex analysis of the mean field Langevin dynamics. *International Conference on Artificial Intelligence and Statistics*, 2022. --- Rebuttal Comment 1.1: Comment: Thank you for your response; it does seem like this addresses the order-of-the-bias issue. I also apologize for missing Propositions 3 and 4. I have updated my score accordingly. --- Reply to Comment 1.1.1: Comment: Thank you very much for your thoughtful comments and for taking the time to revisit your evaluation. We're glad to hear that the updated version addresses the issue regarding the order of the bias.
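Since the discrete-time updates discussed above are just alternating noisy gradient steps on particles, the scheme can be illustrated on a toy entropy-regularized bilinear game (our construction, not one of the paper's examples), whose unique regularized equilibrium has both first moments equal to zero:

```python
import numpy as np

def mfl_sda(K=500, N=2000, eta1=0.05, eta2=0.05, tau=0.1, c=1.0, seed=0):
    """Particle discretization of mean-field Langevin descent-ascent (sketch).

    Toy objective (our choice):
        E(mu, nu) = E_mu[theta] * E_nu[omega]
                    + (c/2) E_mu[theta^2] - (c/2) E_nu[omega^2],
    whose entropy-regularized saddle point satisfies E[theta] = E[omega] = 0.
    Each step is one noisy gradient descent (theta) / ascent (omega) pass."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(2.0, 1.0, N)      # particles approximating mu_k
    omega = rng.normal(-1.0, 1.0, N)     # particles approximating nu_k
    for _ in range(K):
        g = omega.mean() + c * theta     # gradient of dE/dmu at each theta-particle
        theta = theta - eta1 * g + np.sqrt(2 * eta1 * tau) * rng.normal(size=N)
        h = theta.mean() - c * omega     # gradient of dE/dnu at each omega-particle
        omega = omega + eta2 * h + np.sqrt(2 * eta2 * tau) * rng.normal(size=N)
    return theta, omega
```

With many particles, the empirical first moments spiral into the equilibrium at zero, mirroring the last-iterate convergence the theorems establish at the mean-field level.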
Summary: The paper analyzes a Langevin-type scheme for finding equilibria in mean-field games under convexity and smoothness assumptions. The rates obtained scale as $\widetilde{O}(1/\varepsilon)$, which agrees with the rate in Euclidean space. An extension to stochastic gradients is also considered. Claims And Evidence: The main selling point of this work is the $\widetilde{O}(1/\varepsilon)$ rate, under standard assumptions on the boundedness of (derivatives of) differentials of the objective. This result is state-of-the-art compared to prior work (which generally has higher complexity). There is also an extension to stochastic gradient oracles when the oracle has bounded error in the second and third moments. The authors provide solid proofs for their claims. Methods And Evaluation Criteria: This is not applicable to this paper. Theoretical Claims: The results make sense and the proofs appear rigorous; I checked all the results (without verifying all technical details). Experimental Designs Or Analyses: The paper is theoretical in nature and therefore does not contain an experimental component. However, some ramifications for typical applications are discussed. Supplementary Material: I skimmed the proof in the appendix. However, it is rather technical and I did not have the opportunity to review all the details. Relation To Broader Scientific Literature: The paper is related to prior work on mean-field games, in particular to Langevin-type algorithms for computing equilibrium points. The main claim of this paper is that it improves the rates compared to those prior works (at least with respect to the inverse accuracy dependence). Earlier work had a super-linear dependence on $\varepsilon$ or required an inner-outer loop complexity. Essential References Not Discussed: As far as I am aware, the most relevant references have been covered. 
Other Strengths And Weaknesses: **Strengths:** I believe this work is impactful as it makes a significant improvement to the rate estimate for an important statistical problem. The rates are examined in the context of various applications, with interpretable and clear results. **Weaknesses:** The result is mainly theoretical, as in general a particle-algorithm is needed for implementability (which complicates the rate). This is agreed upon by the authors; it is unclear to me whether the analysis scheme in this paper can be preserved after particle discretization. It would be helpful if the authors included other important parameters in their rate estimates, such as the dimensions. Other Comments Or Suggestions: Line 143: “We set the scaling factor the weight decay the regularization” does not make sense, please revise. Equation 9 exceeds the column spacing. This occurs in multiple other places; please fix this. Questions For Authors: What is the dependence of the result on the dimension, and the condition numbers (Hessian bounds, etc.)? How does it compare to prior work? It is a bit strange that third-moment boundedness is needed for the stochastic gradient oracle, when normally it is only the second moment that is required. Can the authors comment if they believe this requirement is fundamental? Although the paper attains a sharp rate, this is for the mean-field version of the algorithm. It is not clear whether this type analysis can be carried out using this framework for a finite particle algorithm (accounting for additional error of the particles). Can the authors comment? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your thorough and constructive feedback. Below, we address the main points raised. $\newcommand{\bs}{\boldsymbol}$ ## Addressing Weaknesses: **Particle Discretization**: For this problem, we have verified that our method is feasible in the particle setting. As an illustration, we briefly summarize the adjustments needed in our proof technique under the particle approximation setting: We represent the joint distributions $\bs\mu,\bs\nu$ by $N$ finite particles $\bs\theta=[\theta^i]^N_{i=1}$ and $\bs \omega = [\omega^i]^N_{i=1}$. In this case, the LSI has the form adapted from [1, Lemma 7] $$ \frac{\tau^2}{N} \sum^N_{i=1} \mathbb E_{\bs\nu_k}[\|h_i(\bs\omega_k)-\nabla \log \bs\rho_k(\bs\omega_k)\|^2_2]\geq 2\alpha \tau {\mathcal L_2^N}(\bs\mu_{k+1},\bs\nu_k)+O(\frac{1}{N}) $$ where $\bs\mu_k$ and $\bs\nu_k$ denote the distributions of $\bs\theta$ and $\bs\omega$ in the $k$-th iteration, respectively; $h_i$ denotes the partial derivative w.r.t. the $i$-th particle; $\mathcal{L}_2^N$ denotes the Lyapunov function for the counterpart on the product space; $\bs\rho$ denotes the Gibbs distribution; $O(\frac{1}{N})$ comes from the propagation of chaos. Other log-Sobolev inequalities used in our setting have similar forms. Thus, most derivations in the manuscript involving Taylor expansions can be extended to analyze the joint distribution of particles, but with an additional approximation error $O(1/N)$ due to weak particle interactions. The per-step improvement would be of the form $$ \mathcal L(\bs\mu_{k+1},\bs\nu_{k+1})\leq (1-2\eta_1\tau\alpha)\mathcal L(\bs\mu_{k},\bs\nu_{k}) + O(\eta_1^2) + O(\frac{\eta_1}{N}) $$ where the $O(\frac{\eta_1}{N})$ term comes from the propagation of chaos. Building on our current line of thought, we believe this approach can be extended to the stochastic gradient setting while maintaining the same sample complexity order as well.
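For completeness, iterating this per-step improvement and summing the geometric series (constants suppressed; this is a standard step, stated here only as a sketch) yields the end-to-end bound $$ \mathcal L(\bs\mu_{K},\bs\nu_{K})\leq (1-2\eta_1\tau\alpha)^{K}\mathcal L(\bs\mu_{0},\bs\nu_{0}) + O\big(\tfrac{\eta_1}{\tau\alpha}\big) + O\big(\tfrac{1}{N\tau\alpha}\big), $$ since the accumulated per-step error $O(\eta_1^2)+O(\eta_1/N)$ is amplified by at most $\sum_{j\geq 0}(1-2\eta_1\tau\alpha)^j \leq (2\eta_1\tau\alpha)^{-1}$.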
However, due to the extensive computations involved, we will present a complete proof of our results in an upcoming extended journal version. **Parameters**: Due to the space limit, please refer to our response to Reviewer 5VGF for a detailed analysis of how the sample complexity in Theorems 1 and 2 depends on various parameters, including the dimension $d$. Notably, the sample complexity of MFL-AG in [2] is $O(\epsilon^{-O(1/\alpha)})$, which is exponential in the LSI constant $\alpha$, hence in $d$ under their conservative bound $O(1/(2+d/2))$ on $\alpha$, whereas our bound does not suffer from this exponential dependence. The Hessian bound $C_1$ appears in the bias term $\Gamma_i=O(C_1)$. As for the condition number, adopting the notion of effective condition number in [3, Theorem 2.1], our bound has a similar dependence in the case of zero-sum games. ## Addressing Questions: **Line 143 and Equation (9)**: Thank you for pointing this out. We will correct the issues in the updated version of the manuscript. **Boundedness of the third moment**: This requirement arises from the presence of $\mathbb{E}_{\mu_k}[\|g_k\|_2^3]$ in our remainder term. Therefore, in the stochastic gradient part, to ensure that this remainder term remains uniformly bounded, we need to impose a bound on the third-order moment. In fact, [1] assumed that the stochastic gradient is uniformly bounded with respect to any $\omega$ and $\theta$, which is a stronger assumption than ours. Thank you again for your valuable feedback! We will correct the typos and formatting issues as you suggested. ### References [1] Suzuki, T., Wu, D., & Nitanda, A. (2023). Convergence of mean-field Langevin dynamics: time-space discretization, stochastic gradient, and variance reduction. Advances in Neural Information Processing Systems, 36, 15545-15577. [2] Kim, J., Yamamoto, K., Oko, K., Yang, Z., & Suzuki, T. (2023). Symmetric mean-field langevin dynamics for distributional minimax problems. arXiv preprint arXiv:2312.01127.
[3] Lu, Yulong. Two-scale gradient descent ascent dynamics finds mixed Nash equilibria of continuous games: A mean-field perspective. International Conference on Machine Learning. PMLR, 2023. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. Although the authors have indicated that a particle discretization analysis is forthcoming in an extended edition of this paper, I believe such a result is integral to this type of theoretical analysis. As a result, I remain lukewarm on the current draft and will opt to maintain my score. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their thoughtful feedback. We fully agree that a particle discretization analysis is important for this line of theoretical work. To draw a full picture of the particle case, we write out a detailed setting as follows: We set the distributions $\mu,\nu$ using $N$ finite particles $\boldsymbol \theta=[\theta^i]^N_{i=1}$ and $\boldsymbol \omega = [\omega^i]^N_{i=1}$ given by $\mu_\theta=\frac{1}{N}\sum_{i=1}^N \delta_{\theta^i}$ and $\nu_\omega=\frac{1}{N}\sum_{i=1}^N \delta_{\omega^i}$. In this case, the algorithm becomes: **For** $k=1,2,\ldots, K-1$ **do**: **For** all particles $i=1,2,\ldots,N$ sample $\xi_k^{\mu,i}\sim {\cal N}(0, I_d), \xi_k^{\nu,i}\sim {\cal N}(0, I_d)$ **do**: $\theta^i_{k+1} \leftarrow \theta^i_k- \eta_1 (\nabla \frac{\delta J}{\delta \mu} [\mu_k,\nu_k](\theta^i_k)+\tau \theta^i_k)+\sqrt{2\eta_1 \tau}\xi^{\mu,i}_k.$ $\omega^i_{k+1} \leftarrow \omega^i_{k} + \eta_2(\nabla \frac{\delta J}{\delta \nu}[\mu_{k+1},\nu_k](\omega^i_k)+\tau \omega^i_k)+\sqrt{2\eta_2\tau}\xi^{\nu,i}_k.$ The main differences between the particle case and the mean-field case concern these three aspects: 1.
Change of probability spaces: Although $\mu_\theta=\frac{1}{N}\sum_{i=1}^N \delta_{\theta^i}$ and $\nu_{\omega}=\frac{1}{N}\sum^N_{i=1} \delta_{\omega^i}$ can be seen as mixtures of atomic measures on $\mathbb R^d$, what we actually work with is the parameter space of $[ \theta^i_k ]^N_{i=1}$ and $[\omega^i_k ]^N_{i=1}$, namely $\mathbb R^{Nd}$. So the per-step update in Lemma 1 becomes an iteration on the space $\mathcal P(\mathbb R^{Nd})$, which requires a subtler analysis. 2. Change of Gibbs measures: As noted in point 1, the target measure is no longer a particle approximation but an element of $\mathcal P(\mathbb R^{Nd})$; as in Kim et al. (2024), there is a change in the definition of the Gibbs operator, $$ \mathcal K^+[\mu]\propto \mu^{\otimes N}\exp(-N\sum^N_{i=1}g_k^i). $$ Hence the log-Sobolev inequality must also change accordingly. 3. During the update process, the correlation between a particle and itself cannot be omitted, and this correlation is larger when $N$ is smaller. Hence, following our proof analysis, these are the parts that we need to modify: **Boundedness of gradients**: Since every Wasserstein gradient becomes a weighted sum of the gradients of the individual particles, e.g., eq. (38) becomes $$ \mathbb E_{\nu_k}[\|\omega_k\|^2_2]= \frac{1}{N}\sum^N_{i=1} \mathbb E_{\nu_k}[\|\omega^i_k\|^2_2], $$ and the iteration becomes $$ \mathbb E_{\nu_k}[\|\omega^i_k\|^2_2]=\mathbb E_{\nu_{k-1}\otimes\rho}[\|\omega^i_{k-1}+\eta_2h_k+\sqrt{2\eta_2\tau}\xi_k^i\|^2_2], $$ where $h_k$ is defined as in the algorithm. Hence, all parts that contain a gradient pick up an additional term depending on $N$. **Correspondence in second-order variation**: For example, in eq. (52), when $\mu,\nu$ are both particle measures, it becomes $$ \frac{1}{N}\sum_{i,j=1}^N (H(\omega_{k+1}^i,\omega_{k+1}^j)-H(\omega_{k+1}^i,\omega_k^j)-H(\omega_k^i,\omega_{k+1}^j)+H(\omega_k^i,\omega_k^j)).
$$ When $i=j$, the dependence of a particle on itself introduces a bias term $O(\eta)$. **Failure of LSI**: Due to the suboptimality of the Gibbs operators, the LSIs in eq. (36) and eq. (64) fail. However, [1] gave a solution to this case in Lemma 7, which has the form $$ \frac{\tau^2}{N} \sum^N_{i=1} \mathbb E[\|h_{\omega_k}-\nabla \log \nu_{\omega_k}\|^2_2]\geq 2\alpha \tau {\mathcal L_2}(\mu_{\theta_{k+1}},\nu_{\omega_k})+O(\frac{1}{N}) $$ After these clarifications, most of the derivations we carried out, such as the Taylor expansions, remain safe after replacing the mean-field case with the particle case. However, since all parts of our proof need to be rewritten in a form involving the particle number $N$, on top of the three main differences above, this additional analysis is rather involved, potentially doubling or even tripling its size. For this reason, and to maintain clarity within the scope of a conference submission, we have chosen to defer this component to an extended version. We appreciate the reviewer’s understanding and thoughtful consideration. ### References [1] Suzuki, T., Wu, D., & Nitanda, A. (2023). Convergence of mean-field Langevin dynamics: time-space discretization, stochastic gradient, and variance reduction. Advances in Neural Information Processing Systems, 36, 15545-15577.
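To make the two-timescale update rule concrete, here is a minimal NumPy sketch of one particle step (a toy illustration, not our exact scheme: the oracles `grad_mu`/`grad_nu` are stand-ins for $\nabla \frac{\delta J}{\delta \mu}$ and $\nabla \frac{\delta J}{\delta \nu}$ on the empirical measures, a bilinear test objective is assumed, and the weight-decay term is taken with the usual confining sign in both updates):

```python
import numpy as np

def particle_step(theta, omega, grad_mu, grad_nu, eta1, eta2, tau, rng):
    """One two-timescale particle Langevin step: descent in mu, ascent in nu.

    theta, omega : (N, d) particle arrays representing mu and nu.
    grad_mu(t, theta, omega) : stand-in for the gradient of dJ/dmu at t,
    evaluated on the empirical particle measures (grad_nu is analogous).
    """
    N, d = theta.shape
    g_th = np.stack([grad_mu(t, theta, omega) for t in theta])
    theta_new = (theta - eta1 * (g_th + tau * theta)
                 + np.sqrt(2 * eta1 * tau) * rng.standard_normal((N, d)))
    # The nu-particles ascend using the freshly updated mu-particles (mu_{k+1}).
    g_om = np.stack([grad_nu(w, theta_new, omega) for w in omega])
    omega_new = (omega + eta2 * (g_om - tau * omega)
                 + np.sqrt(2 * eta2 * tau) * rng.standard_normal((N, d)))
    return theta_new, omega_new

# Toy bilinear game J(mu, nu) = E_mu[theta] . E_nu[omega]:
# the first-variation gradients are the opposite cloud's mean.
grad_mu = lambda t, th, om: om.mean(axis=0)
grad_nu = lambda w, th, om: th.mean(axis=0)

rng = np.random.default_rng(0)
theta = rng.standard_normal((50, 2))
omega = rng.standard_normal((50, 2))
for _ in range(200):
    theta, omega = particle_step(theta, omega, grad_mu, grad_nu,
                                 0.05, 0.05, 0.5, rng)
```

On this toy game the contraction from the weight decay keeps both particle clouds bounded while the noise maintains the entropic spread.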
Knowledge-Guided Wasserstein Distributionally Robust Optimization
Accept (poster)
Summary: This paper investigates distributionally robust optimization (DRO), focusing on Wasserstein distance-based DRO (W-DRO) while introducing a novel knowledge-guided cost function to further enhance the robustness and performance of DRO frameworks. The authors provide an extensive and thorough review of DRO, elaborating on its properties, and specifically analyzing the mathematical underpinnings of W-DRO. Building on this theoretical foundation, the authors propose a Knowledge-Guided Transport Cost (KGTC), which incorporates knowledge coefficients that reflect prior knowledge about the data or task to guide the transport cost within the Wasserstein ball. By embedding these knowledge-guided adjustments, the model aims to achieve better generalization and robustness to distributional shifts. The proposed method is validated theoretically and empirically on linear regression and binary classification tasks, demonstrating promising improvements over existing methods. Claims And Evidence: This is a rigorous paper; most of the claims are backed with theoretical support. Only more intuition would be helpful. Methods And Evaluation Criteria: No. Theoretical Claims: I didn't check the details of the proofs. Experimental Designs Or Analyses: I have checked the experimental designs, which are quite reasonable. Supplementary Material: I have gone through the theoretical results part. Relation To Broader Scientific Literature: This paper enhances W-DRO with a novel regularization, which improves performance under extreme situations. Essential References Not Discussed: No additional references needed. Other Strengths And Weaknesses: Strengths - Theoretical Rigor and Solid Analysis: - The paper presents strong theoretical foundations with rigorous and logical arguments supporting the development of the Knowledge-Guided Transport Cost (KGTC).
- The analytical treatment of both standard DRO and Wasserstein DRO is comprehensive, offering valuable insights into the properties and limitations of existing approaches. - The extension toward knowledge-guided variants is well-motivated from a theoretical perspective, making a notable contribution to the DRO literature. - Comprehensive Coverage of Use Cases: - The authors consider multiple scenarios, including both strong and weak transfer settings, showcasing the generality and adaptability of their proposed framework. - The method is applied to both regression (linear regression) and classification (binary classification) tasks, which highlights its versatility and potential for broader applications. - Promising Empirical Results: - The numerical experiments support the theoretical claims, demonstrating significant performance gains over standard DRO formulations. - The improvements observed in diverse tasks highlight the practical potential of incorporating knowledge-guided components into DRO. Weaknesses and Areas for Improvement - Lack of Discussion on Convergence and Statistical Guarantees: - One notable omission is the absence of a discussion regarding convergence rates and statistical guarantees of the proposed method. - While the method is shown to be effective in terms of robustness and performance, readers would benefit from understanding whether there are provable bounds on convergence speed or finite-sample guarantees that back up the empirical observations. - Intuition Behind Knowledge-Guided Cost Function Needs Clarification: - Although the knowledge-guided cost function is mathematically well-defined, there is a lack of intuitive explanation regarding why and how incorporating multiple knowledge coefficients leads to better transport cost estimation and performance gains. A worked example with multiple knowledge coefficients would help clarify the practical utility of the approach.
- Practical Applicability and Generalization to Real-World Scenarios: - While the theoretical and empirical studies are strong, the real-world applicability of the knowledge-guided cost function is underexplored. - It would be beneficial to provide concrete real-world scenarios or case studies where domain knowledge is readily available and can be leveraged for improved robustness (e.g., healthcare, finance, or autonomous systems). - Computational Complexity and Overhead Not Addressed: - A major concern is the potential computational overhead introduced by incorporating the knowledge-guided cost function. - Intuitively, the use of knowledge coefficients and the corresponding adjustments in transport cost computation could lead to higher space and time complexity, especially in large-scale problems or high-dimensional settings. - The paper does not discuss the computational implications, nor does it provide empirical measurements (e.g., runtime, memory usage) to reassure readers that the method remains computationally feasible. - Adding an analysis or even a simple comparison of computational efficiency between standard W-DRO and KGTC-enhanced W-DRO would significantly strengthen the practical relevance of the paper. Other Comments Or Suggestions: Please see the weaknesses. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the encouraging and detailed feedback. Here we list our responses to the weaknesses and questions raised by the reviewer. **W1: Lack of Convergence and Guarantees** We acknowledge the absence of an explicit discussion on convergence rates and statistical guarantees in this paper. However, the primary focus of our work is to establish the theoretical equivalence between shrinkage-based transfer learning and the Wasserstein Distributionally Robust Optimization (WDRO) framework, and the statistical properties are beyond its scope. This perspective provides a unified approach to analyzing transfer learning problems. Many prior studies on specific cases of our proposed method have already explored the convergence of the optimal estimator as the radius shrinks; e.g., [1]. We plan to leverage the WDRO framework to establish formal statistical guarantees in future work. **W2 I: Intuition on Cost Function** The standard WDRO cost function takes the form $c(x,x') = \Vert x-x'\Vert_q^2$, allowing perturbations of the covariate in all directions. However, if we believe that prior knowledge $\theta$ serves as a trustworthy proxy for $β^*$ (the true optimum of Problem (SO), Line 125, Right), then it is natural to minimize perturbations in the predictive direction of $\theta$. Specifically, we constrain the perturbed point $x'$ so that the discrepancy between the predictions of $x$ and $x'$ under $θ$ is small, i.e., $θ^\top x' ≈ θ^\top x$. Since $x$ is a point in the empirical measure, this ensures that the transported measure $P_N$ is mapped to distributions that preserve predictions under $θ$. Consequently, taking the worst case over the ambiguity set yields estimators that perform at least as well as using $θ$ naively. **W2 II: Multi-Source Knowledge** In applications such as electronic health records, multiple clinical trials are conducted in different hospitals.
These source domains typically represent majority but distinct populations. When the target domain involves a mixed-ethnicity population, it is reasonable to expect that the target estimator should be some combination of estimators derived from the source domains. This is naturally captured by our penalty term; in the case of two sources, it takes the form $\inf_{κ_1,κ_2}\Vertβ-κ_1θ_1-κ_2θ_2\Vert_p,$ allowing the model to automatically search for the best combination of source knowledge. We refer to this as a “multi-source ensemble” of prior knowledge, where the learning process profiles and distills useful information from multiple sources. **W3: Practical Applications** Thanks for your advice. To illustrate the applicability of our KG-WDRO framework in the real world, we apply it to the TransGLM dataset [2] on 2020 U.S. election results at the county level. Counties are labeled '1' if the Democratic candidate won, and '0' otherwise. We assess KG-WDRO vs. TransGLM by classifying county-level outcomes in eight target states, using data from the remaining states as source knowledge. The cleaned dataset includes 3111 counties and 761 standardized predictors across 49 states. Using 2100 counties as the source, we predict results in eight target states (~100 counties each). KG-WDRO outperforms TransGLM in 5 of 8 states, reducing overall logloss by 7.6%, see table (https://figshare.com/s/e00c2d14f2c15ac02ed9). Both transfer learning methods significantly outperform the standard WDRO estimator. We will add this experiment to our main text. **W4: Computational Complexity** Theorem 3.2 transforms the infinite-dimensional problem (Line 240) into a tractable convex program (Line 242). Given that the number $M$ of external sources is finite, the convex program takes the form $\inf_{β,κ}\Vert \mathbf{y}-\mathbf{X}β\Vert_2+\sqrt{δ}\Vertβ-κ_1θ_1-...-κ_Mθ_M\Vert_p,$ where $κ=[κ_1,...,κ_M]^\top\in\mathbb{R}^M$.
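As a concrete illustration, here is a minimal sketch of this convex program for $p=2$ on synthetic data (using a generic SciPy solver purely for illustration; our experiments solve it with CVXPY and Mosek):

```python
import numpy as np
from scipy.optimize import minimize

def kg_wdro_fit(X, y, thetas, delta):
    """Minimize ||y - X beta||_2 + sqrt(delta) * ||beta - Theta^T kappa||_2
    jointly over (beta, kappa): the p = 2 instance of the convex program.
    thetas has shape (M, d), one prior-knowledge direction per row."""
    n, d = X.shape
    M = thetas.shape[0]

    def obj(z):
        beta, kappa = z[:d], z[d:]
        return (np.linalg.norm(y - X @ beta)
                + np.sqrt(delta) * np.linalg.norm(beta - thetas.T @ kappa))

    # Powell is derivative-free, so the norm kinks are not a problem.
    res = minimize(obj, np.zeros(d + M), method="Powell")
    return res.x[:d], res.x[d:], res.fun

rng = np.random.default_rng(0)
d, n = 5, 20
beta_star = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = X @ beta_star + 0.1 * rng.standard_normal(n)
# One informative prior (the truth) and one pure-noise prior.
thetas = np.vstack([beta_star, rng.standard_normal(d)])
beta_hat, kappa_hat, val = kg_wdro_fit(X, y, thetas, delta=1.0)
```

With an informative prior among the sources, the free coefficients $κ$ let the fit lean on that prior without being forced to.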
Setting $κ=[0,...,0]^\top$ reduces the penalty term to a $p$-norm regularization, like Lasso or Ridge. When $κ$ is free, there are $M$ additional parameters. Consequently, the total number of parameters increases from $d$ (the dimension of $\beta$) to $d+M$. As long as the number of knowledge sources remains finite—which is a reasonable assumption given the cost of experiments—the parameter size of the program remains of order $O(d)$ in high dimensions. Its computational complexity remains roughly the same as that of traditional regularization techniques. In our experiments, the KG-WDRO optimization problem is efficiently solved using CVXPY with Mosek. Additionally, we conduct an empirical study to demonstrate that their runtimes remain similar (https://figshare.com/s/9af171f8a0dcc3eb32da). [1] Blanchet, J., Murthy, K., & Si, N. (2022). Confidence regions in Wasserstein distributionally robust estimation. Biometrika, 109(2), 295-315. [2] Tian, Y., & Feng, Y. (2023). Transfer learning under high-dimensional generalized linear models. Journal of the American Statistical Association, 118(544), 2684-2697. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed responses provided by the authors. Most of my concerns are addressed. Hence, I stay positive on the acceptance of this paper.
Summary: This work introduces a framework for transfer learning called Knowledge-Guided Wasserstein Distributionally Robust Optimization. In the face of the overly conservative behavior of WDRO, the proposed framework adapts the Wasserstein ambiguity set using external knowledge (augmenting the transport cost function with a penalty term involving the prediction discrepancy). They establish the equivalence between KG-WDRO and shrinkage-based estimation methods, and demonstrate the effectiveness of KG-WDRO in improving small-sample transfer learning through numerical simulations. ## update after rebuttal After reading other reviews and the rebuttals, I decide to maintain my score. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: I checked part of them. Experimental Designs Or Analyses: I checked the experimental part. Supplementary Material: No Relation To Broader Scientific Literature: N/A Essential References Not Discussed: I think the references are relatively sufficient. Other Strengths And Weaknesses: **Strengths:** - Integrates prior knowledge into WDRO for linear regression and binary classification. Provides equivalence between KG-WDRO and shrinkage-based estimation methods. Interprets a broad range of knowledge transfer learning approaches through the lens of distributional robustness. - The perspective of the research is supported by theoretical advancements and empirical evidence. - The experimental section presents extensive results, qualitative and quantitative. **Weaknesses:** * Lack of consistency in writing. It is hard to follow the connection between transfer learning and WDRO: the authors mention WDRO in the title, then suddenly mention transfer learning at the beginning of the abstract, without further discussing how the proposed framework benefits transfer learning. * Notation is used before it is introduced. * In Example 1, what is the meaning of $\delta, \kappa$? * In Section 2.1.
What does the notation $\pi (A \times \mathbb{R}^d )$ mean? * Lack of justification and ablation experiments for the selection of the hyperparameters $\delta$ and $\lambda$. Other Comments Or Suggestions: No Questions For Authors: - In line 20, the authors mention "Our method constructs smaller Wasserstein ambiguity sets"; the reader may want a quantitative analysis of this. Can it be further discussed? - The authors mention ''This framework mitigates the conservativeness of standard WDRO''. According to my understanding, the conservativeness of vanilla WDRO is due to worst-case optimization, and it seems relatively easy to bypass the worst case if one wants to alleviate the conservativeness of WDRO. Do the authors demonstrate the challenge of directly using some vanilla methods (for example, linear interpolation between the worst case and a random case)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the encouraging and detailed feedback. **W1: Inconsistent Writing** We acknowledge that the connection between WDRO and transfer learning may not be immediately clear. We will refine our writing to ensure a smoother and more intuitive transition between these two concepts. The main contribution of our paper is to establish that a broad class of shrinkage-based transfer learning objectives can be equivalently formulated as a WDRO problem. This provides a distributionally robust perspective on transfer learning. The title includes the phrase “knowledge-guided”, which refers to a type of transfer learning known as *domain adaptation*: adapting models trained on a source domain to perform well on a related target domain. In our framework, the prior knowledge is represented by the model parameters learned from the source domain, and the optimization in the target domain is guided by this prior knowledge. Applying the DRO principle allows us to rigorously derive a penalized estimation framework, while also ensuring robustness to distributional shifts. **W2: Notations** We thank the reviewer for pointing out these oversights. We will add the relevant definitions to the main text. The notation $δ\geq 0$ denotes the radius of the Wasserstein ball centered at the empirical measure. The free variable $κ\in\mathbb{R}$ in the optimization is interpreted as the projection coefficient of $β$ onto $θ$. In $π(A\times\mathbb{R}^d)$, $π$ denotes a probability measure on the product space $\mathbb{R}^d \times \mathbb{R}^d$, and $A$ is a Borel measurable subset of $\mathbb{R}^d$. **W3: Hyperparameter Tuning** We use cross-validation (CV) to tune the hyperparameters $δ$ and $λ$. In the future, our goal is to develop an automated hyperparameter selection method, akin to the methods introduced in (Blanchet et al, 2019a) (Line 462, Left) and (Blanchet et al, 2022) (Line 476, Left), building on the theoretical framework of WDRO.
For the ablation study, we conducted experiments where we set $λ=0$ (no transfer), reducing KG-WDRO to a standard WDRO formulation. As shown in the upper two subfigures of Figure 1, our method consistently improves regression error and classification accuracy when the correlation between the prior $θ$ and the true $β$ is as low as 0.3. In this revision, we include a new ablation study in a high-dimensional regression setting, which will be added to the main text. We fix the value of $δ$ either to $δ^*$, where $δ^*$ is tuned via CV on a standard WDRO estimator, or to $δ=3$. Using these fixed values, we fit KG-WDRO estimators across a grid of $λ$. The resulting out-of-sample (OOS) performances are plotted in the figure (https://figshare.com/s/05b069e330136338fa4e). When the correlation between the prior $θ$ and the true $β^*$ is high, setting $λ\to∞$ yields the best OOS performance. As the correlation decreases, smaller values of $λ$ lead to better results. This finding highlights the need to include $λ$ to control the extent of bias, and aligns with the intuition that stronger correlations warrant larger values of $λ$. Red dots in the plot represent the OOS performance obtained via CV. The $λ$-coordinates of these red dots follow the trend of the curves. The red dots lie above the curves, indicating improved performance when $δ$ is tuned. **Q1: Smaller Ambiguity Set** Yes. For the case when $λ=∞$, we can upper bound the minimax objective (Line 239, Left) by $\inf_{α\in\mathbb{R}} \mathbb{E}_{\mathbb{P}_N}(Y-(\alpha θ)^\top X)^2$, which is the in-sample risk of the prior $θ$ adapted to the target data. In standard WDRO, this upper bound is trivially given by $\mathbb{E}(Y^2)$. We can show that the ambiguity set becomes strictly smaller when $λ=∞$ compared to when $λ=0$. We will provide these detailed quantitative discussions on the ambiguity set in the new draft.
**Q2: Mitigating Conservativeness** The goal of KG-WDRO is to ensure robustness against statistical noise while avoiding over-conservativeness. One could simply use the empirical risk minimization solution, $β_{ERM}$, from Problem (ERM) (Line 133) to prevent over-conservatism, but in a small-sample regime computing $β_{ERM}$ is often infeasible without introducing bias, such as that induced by standard WDRO. Even in cases where $β_{ERM}$ is computable, its out-of-sample performance is typically poor due to the high uncertainty associated with limited data. Linear interpolation between the worst case and the random case may also fail: it would inherit both high uncertainty (from the random distribution) and high conservatism (from the worst-case distribution), leading to a suboptimal trade-off between bias and variance. Overall, there is no free lunch if no additional information is provided. In contrast, our framework leverages prior knowledge in a structured manner to mitigate conservativeness while maintaining robustness, thereby offering a principled alternative to simple interpolation-based approaches.
Summary: The authors believe that traditional Wasserstein Distributionally Robust Optimization (WDRO) has a conservative tendency, which can lead to suboptimal performance. They argue that in real-world scenarios, prior knowledge can be leveraged to enhance model performance and robustness, and they note that integrating prior knowledge into the Wasserstein Distributionally Robust Optimization framework remains an open question. In this work, the authors introduce a novel framework called Knowledge-Guided Wasserstein Distributionally Robust Optimization (KG-WDRO), which utilizes external knowledge (parameters) to adjust the Wasserstein ambiguity set by constraining the transportation costs along directions indicated by the prior knowledge. The authors summarize that this strategy allows the model to concentrate uncertainty in areas where the prior knowledge is less reliable, effectively enhancing the robustness of knowledge-guided generalization. Claims And Evidence: 1. Introducing prior knowledge is a very good question, but the authors' explanation of it in the early part of the paper is quite vague. Methods And Evaluation Criteria: Is this method still viable in high-dimensional knowledge scenarios, such as neural networks? Lack of more practical scenarios. Theoretical Claims: The theoretical framework is built on strong assumptions and has notable limitations. Additionally, the authors adopt significant hypotheses in their analyses, such as the definition of $\delta$ in Line 702. Why is $\delta$ defined in this particular manner? Experimental Designs Or Analyses: The numerical results presented need to incorporate more real-world scenarios for better applicability and relevance.
Supplementary Material: N/A Relation To Broader Scientific Literature: n/a Essential References Not Discussed: n/a Other Strengths And Weaknesses: Weakness: 1. Introducing prior knowledge is a very good question, but the authors' explanation of it in the early part of the paper is quite vague. 2. The authors use linear regression and binary classification problems to validate the proposed method. Is this appropriate? Linear regression and binary classification are not very complex problems, and there are already many methods to solve them. To validate KG-WDRO, the authors should provide problems that match its complexity. 3. There is an error in the formula on line 182; "inf" and "min" represent different concepts. 4. From Theorem 3.2, it appears to resemble a regularization method, making it difficult to determine from a theoretical perspective whether the improvement in performance is due to the prior knowledge or the method proposed by the authors. What is the impact of the amount of prior knowledge on performance improvement? 5. Is this method still viable in high-dimensional knowledge scenarios, such as neural networks? Other Comments Or Suggestions: n/a Questions For Authors: n/a Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the encouraging and detailed feedback. **W1: Explanation of Using Prior Knowledge.** Our transfer learning approach falls under *Domain Adaptation*, which adapts models trained on a source domain to perform well on a related target domain with limited labeled data. A key application is in clinical trials, where the binary outcome $Y\in\{0,1\}$ indicates treatment success or failure, and the high-dimensional covariate $X$ encodes a patient’s physical and health conditions along with treatment details. Data scarcity is common—especially for underrepresented populations. To address this, we leverage a classifier trained on a majority group (parameterized by $θ$) as a reference to estimate a classifier for the minority group (parameterized by $β$). This knowledge-guided transfer learning reduces uncertainty by anchoring the search for $β$ in the direction of $θ$. We will include this example in the new draft to clarify our setting. **W2: Model Complexity** We would like to emphasize that in our settings, due to the inherently small sample size or lack of labeled data in the target domain, it is necessary to limit the complexity of our machine learning models. For example, in clinical trials, there are usually only ~100 observations. Therefore, neural-network-type methods may overfit. The tractable reformulation we obtained for generalized linear models and SVMs provides valuable insights into what shrinkage-based or penalized estimation for transfer learning objectives should look like. This distinguishes our approach from many previous works, as discussed in Table 1. Furthermore, as we will elaborate below, the techniques underlying KG-WDRO can be directly extended to more complex machine learning models, including neural networks. **W3: Typo** Thanks for catching this. We will correct the $\inf$ to $\min$ in the statement on (Line 182, Left).
**W4: Connection to Regularization.** Yes, we acknowledge that the proposed method can be interpreted as a form of regularization, and in fact, this tractable reformulation is a key contribution of our paper. It allows us to efficiently solve the infinite-dimensional minimax problem as a convex program. This suggests that the performance improvement is directly attributable to the integration of prior knowledge into the cost function, which translates into a regularization/penalty term that encourages collinearity with the prior. Therefore, both the prior knowledge and our method play important roles. Regarding the impact of the quality of prior knowledge, the upper two subfigures in Figure 1 demonstrate that our method improves regression mean squared error and classification accuracy even when the correlation between the prior knowledge and the true parameter is as low as 0.3, with hyperparameters selected via cross-validation. **W5: High-dimensional Models.** Our method remains applicable in high-dimensional settings. We can apply this principle to neural networks as follows: suppose we pre-trained a multilayer perceptron (MLP) on the source domain. We can fix all the hidden layers and treat the output layer's parameters as prior knowledge, denoted by $θ$. Then, on the target domain, we fine-tune the MLP's output layer using the KG-WDRO objective, effectively re-learning the output layer on the target domain but using the previous output layer as knowledge guidance. Mathematically, suppose the pretrained MLP is represented by $f(X) = θ^\top h(X)+b$, where $h:\mathbb{R}^d\to\mathbb{R}^k$ denotes the nonlinear hidden layers; then the KG-WDRO framework for the MLP on the target data is to solve the optimization problem $\inf_{β,c,κ}\Vert \mathbf{y}-β^\top h(\mathbf{X})-c\Vert_2+\sqrt{\delta}\,\Vert[θ,b]-κ[β,c]\Vert_p,$ by taking the hidden layers $h(\cdot)$ as fixed.
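To make the objective above concrete, here is a minimal numerical sketch (our illustration, not the authors' implementation) with $p=2$, synthetic data, and a generic L-BFGS-B solver; all names, dimensions, and values are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
H = rng.normal(size=(130, 10))            # fixed hidden features h(X), k = 10
theta, b = rng.normal(size=10), 0.5       # prior knowledge: pretrained output layer
y = H @ theta + b + 0.1 * rng.normal(size=130)
delta = 0.1                               # ambiguity-set radius (user-chosen)

def objective(z):
    # z packs [beta (10), c, kappa]
    beta, c, kappa = z[:10], z[10], z[11]
    fit = np.linalg.norm(y - H @ beta - c)                               # ||y - beta^T h(X) - c||_2
    prior = np.linalg.norm(np.append(theta, b) - kappa * np.append(beta, c))
    return fit + np.sqrt(delta) * prior

z0 = np.concatenate([theta, [b, 1.0]])    # warm start at the prior, kappa = 1
res = minimize(objective, z0, method="L-BFGS-B")
beta_hat, c_hat = res.x[:10], res.x[10]
```

The warm start at $(\theta, b, \kappa=1)$ zeroes the prior-penalty term, so the solver only moves away from the prior when the data-fit term rewards it, which mirrors the knowledge-anchoring intuition above.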
Following this, we conduct an additional experiment (https://figshare.com/s/29e12260c87f084eeb54) comparing KG-MLP with a naive MLP in a setup similar to Figure 2 of the Main Text, which will be added to the new version of the paper. Specifically, we consider an 11-dimensional setting. The response $y$ is the sum of 5 nonlinear ($\sin, e^{-x^2}, \log(|x|+1), \tanh$ and an interaction) and 5 linear basis functions. The linear coefficients for source and target are generated using the same method as in Figure 2 for different correlations. We use a 2-layer MLP with 11 and 10 nodes in the two layers. The pretrained MLP was trained on 5,000 data points, then KG-MLP was fine-tuned on 130 target points, while the naive MLP used the same data. We see that KG-MLP outperforms the naive MLP consistently, especially when correlation is high. **Lack of Real Data** We refer the reviewer to **W3** in the rebuttal to Reviewer 85zy for the added real-data analysis. **$\Delta$ on Line 802** Here $\Delta$ is just a free variable of the $d$-dimensional space $\mathbb{R}^d$ representing the perturbation $x-x'$ in the ambiguity set; we will revise the appendix for better exposition. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. In the context of distribution optimization, introducing a prior parameter to facilitate a robust transition is not a novel approach. The initial configuration for parameter construction can be challenging to control, which is likely why the authors have introduced a new parameter, beta, to relieve this issue. The construction of $\theta$ and its varying control should be key to the idea. In optimization, this involves the variance question. It seems that the authors lack such analysis. Overall, I find that the optimization solution does not offer significant new insights. The use of parametric optimization with robust control is a well-established method.
Furthermore, the results are primarily applied to a limited set of simple cases, lacking broader applications. As a result, I see few advancements for our ML community. ---------I apologize for not realizing that rebuttal discussions cannot be submitted through the official comments button; the author remains unseen.--------------------
Summary: The paper introduces a transfer-learning variant of Wasserstein Distributionally Robust Optimization. Given some external knowledge, which the authors represent by a vector $\theta$, they construct an ambiguity region based on a Wasserstein distance with $\theta$-dependent cost function. This makes the ambiguity region less pessimistic. They show that the resulting optimization problems are equivalent to several known forms of regularization. ## Update after rebuttal I appreciate the authors response. My review remains unchanged. Claims And Evidence: The claims are supported by proofs and the approach is well-motivated. Methods And Evaluation Criteria: The proposed methods are adequate. Theoretical Claims: I did not check the proofs. Experimental Designs Or Analyses: The experimental setup seems reasonable. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: WDRO is a popular optimization framework that has been related to several forms of regularization. This work introduces a WDRO-variant for transfer learning and relates it to several regularization measures. Essential References Not Discussed: - Other Strengths And Weaknesses: Strengths: * The paper is very well written and pleasant to read * The results are very elegant. The proposed criteria seem intractable, but are rewritten to more tractable-looking regularized problems. Weaknesses: * The proposed setup introduces an additional parameter $\lambda$ without any guidance on how to choose this value. Other Comments Or Suggestions: - Questions For Authors: Suppose the prior knowledge $\theta$ is *unhelpful* for the learning problem at hand, how much does this hamper the learning? Will the method, given sufficient data, still converge to the optimum? Does this depend on whether $\lambda=\infty$ or not? When setting $\lambda=\infty$, we are basically just adding some constraints to the optimization problem. 
Can this really be considered transfer learning? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the encouraging and detailed feedback. Here we list our responses to the weaknesses and questions suggested by the reviewer. **W1: Selection of $\lambda$.** The additional parameter $\lambda$ is introduced to model the decision maker's confidence in using the prior knowledge $\theta$ as a proxy for the parameter of interest $\beta$. Similar to how the budget constraint $\delta$ is specified, $\lambda$ can be selected using data-driven methods, such as grid-search cross-validation over the pair $(\delta, \lambda)$, as demonstrated in Simulation 2 of Section 4.2 (Line 372, Right). **Q1: Unhelpful $\theta$ and Convergence.** Let $\beta^*$ denote the solution to the stochastic optimization (SO) problem in the target domain (Line 125, Right). When the correlation between $\beta^*$ and $\theta$ is small, we can employ the weak-transferring mechanism developed in Section 3.3.2 (Line 250, Right). In such cases, the data-driven selection of $\lambda$ is expected to yield small values, effectively reducing the KG-WDRO problem to a nearly standard DRO formulation, thereby not hampering the learning. This intuition on the positive relationship between the informativeness of the prior knowledge and the size of $\lambda$ is demonstrated in the figure (https://figshare.com/s/05b069e330136338fa4e) through the ablation study on $\lambda$ in Section **W3** of our rebuttal to Reviewer LRuU. For sufficiently large datasets, following the approach in (Blanchet et al., 2022) (Line 476, Left), we can select the Wasserstein ball radius on the order of $n^{-1}$, i.e., $\delta_n = C/n$ for some constant $C > 0$. This ensures that the KG-WDRO solution, $\beta_{\text{KG-DRO}}$, converges to the optimum $\beta^*$ at an optimal rate of $O(n^{-1/2})$. Although this result has not been formally proven, we aim to establish it in future work.
Notably, this $O(n^{-1/2})$ convergence rate holds for all $\lambda \in [0,\infty]$, including $\lambda=\infty$. **Q2: Optimization and Transfer Learning when $\lambda = \infty$.** When $\lambda = \infty$, our formulation is equivalent to adding equality constraints on the support of the perturbation. Specifically, if we define the perturbation as $\Delta = x - x'$, where $x'$ is the perturbed value of $x$, then the constraint $\Delta^\top \theta = 0$ must hold. This does not turn the learning problem into a constrained optimization; rather, it constrains the support of the perturbation. Moreover, we do not think that transfer learning is at odds with solving optimization problems. While our goal is to perform transfer learning (leveraging prior knowledge for the target domain), our computational method is based on solving optimization problems. To draw an analogy, in deep learning the objective might be classification, yet the method relies on minimizing loss functions through gradient descent or similar optimization techniques.
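The grid-search cross-validation over $(\delta, \lambda)$ mentioned in **W1** can be sketched as follows. This is purely illustrative: `kg_wdro_fit` is a hypothetical placeholder (a ridge-like surrogate), not the actual KG-WDRO solver, and the grids and data are made up:

```python
import itertools
import numpy as np

def kg_wdro_fit(X, y, delta, lam):
    # placeholder surrogate standing in for the real KG-WDRO solver
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + (delta + lam) * np.eye(d), X.T @ y)

def cv_select(X, y, deltas, lams, k=5):
    # k-fold grid-search CV over the pair (delta, lambda)
    n = len(y)
    folds = np.array_split(np.arange(n), k)
    best, best_err = None, np.inf
    for delta, lam in itertools.product(deltas, lams):
        err = 0.0
        for fold in folds:
            tr = np.setdiff1d(np.arange(n), fold)
            beta = kg_wdro_fit(X[tr], y[tr], delta, lam)
            err += np.mean((y[fold] - X[fold] @ beta) ** 2)
        if err < best_err:
            best, best_err = (delta, lam), err
    return best

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=100)
choice = cv_select(X, y, deltas=[0.01, 0.1, 1.0], lams=[0.1, 1.0, 10.0])
```

With the real solver plugged in, a small selected `lam` would correspond to the weak-transferring regime described in Q1.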
RIFLEx: A Free Lunch for Length Extrapolation in Video Diffusion Transformers
Accept (poster)
Summary: This paper proposes a training-free method for video extrapolation. It argues that existing extrapolation strategies, originally developed for text and image generation, fail on videos because of temporal repetition and slow motion. It analyzes the frequency components in positional encoding, isolating individual frequency components by zeroing out others and fine-tuning the target video model. It finds that high frequencies capture short-term dependencies and induce temporal repetition, while low frequencies encode long-term dependencies but lead to motion deceleration. Furthermore, this paper identifies a consistent intrinsic frequency component across different videos from the same model, which primarily dictates repetition patterns among all components during extrapolation. Based on this observation, the paper proposes lowering the intrinsic frequency so that it remains within a single cycle after extrapolation. In addition, this technique can also be applied to spatial extrapolation. Experiments are conducted on state-of-the-art video diffusion transformers, including CogVideoX-5B and HunyuanVideo, for 2x extrapolation. Besides the training-free method, the paper also explores the possibility of fine-tuning, which improves the sample quality and extends to 3x extrapolation. ## update after rebuttal The rebuttal has addressed my concerns and questions. Since I already gave accept, I will keep the score. Claims And Evidence: 1. Claim: A comprehensive understanding of video length extrapolation. Evidence: Qualitative results show that the positional encoding based previous extrapolation methods give either repeated frames or slow motion. Quantitative results show that there exists a consistent intrinsic frequency component across different videos from the same model. 2. Claim: A training-free extrapolation solution by reducing the intrinsic frequency. Evidence: Qualitative results show that reducing the intrinsic frequency indeed helps in video extrapolation. 3.
Claim: 2x extrapolation in the training-free manner and 3x extrapolation in the fine-tuning manner. Evidence: Qualitative results show that fine-tuning is necessary for 3x extrapolation. The fine-tuning uses 20,000 original-length videos and 1/50,000 of the pre-training computation. Methods And Evaluation Criteria: The method is simple yet effective. By only reducing the intrinsic frequency for those video generation models, the extrapolation length is significantly longer. The effectiveness is demonstrated quantitatively by NoRepeat Score and Dynamic Degree, and qualitatively by the supplementary videos. Theoretical Claims: I checked the frequency analysis in RoPE, which makes sense. Experimental Designs Or Analyses: The experiments look sound. Supplementary Material: I checked all of them. Relation To Broader Scientific Literature: In a broader sense, I believe this pre-context aware video extrapolation is also related to video prediction. The key of both areas is how to generate the next frames with the context of existing frames. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: If only 20,000 videos are needed, how to select them? Does the selection make any difference? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank Reviewer DWAu for the recognition of our work. The further questions are addressed as follows. ### Q1: If only 20,000 videos are needed, how to select them? Does the selection make any difference? The 20K videos in this paper were randomly sampled without selection. Fine-tuning aims to adapt the model to modified frequencies, which theoretically requires no data bias. To validate this, we add an experiment where we independently train models on two distinct randomly sampled datasets. We then perform three sampling runs with different random seeds and apply a two-sample $t$-test to compare the performance of the two models. As demonstrated in Rebuttal Table A, the statistical analysis reveals no significant performance difference between the models at the $95$% confidence level (all two-sample $t$-test $p$-values > $0.05$, $α=0.05$). **Rebuttal Table A.** Performance comparison between Model A and Model B, trained on two independent randomly sampled datasets based on the CogVideoX-5B architecture. Evaluation is conducted on a 165-sample subset of VBench with three sampling runs, reporting the mean ± standard deviation. The p-values are derived from a two-sample $t$-test. | Metric| Model A / Model B| P Value| |--|--|--| |NoRepeat Score| 81.21 $\pm$ 7.572 / 81.82 $\pm$ 6.414 | 0.9212| |Dynamic Degree| 53.24 $\pm$ 3.493 / 62.03 $\pm$ 7.650 | 0.1444| |Imaging Quality| 60.30 $\pm$ 0.8279 / 58.96 $\pm$ 0.8874 | 0.1270| |Overall Consistency| 25.26 $\pm$ 0.2974 / 25.21 $\pm$ 0.2053| 0.7994|
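For reference, the two-sample $t$-test used for Rebuttal Table A can be reproduced as follows; the three-seed scores here are made-up placeholders, not the actual VBench numbers:

```python
from scipy.stats import ttest_ind

# Made-up three-seed scores for one metric (NOT the actual VBench numbers)
model_a = [80.1, 75.3, 88.2]
model_b = [79.5, 88.9, 77.1]

t_stat, p_value = ttest_ind(model_a, model_b)  # two-sided, pooled-variance t-test
significant = p_value < 0.05                   # reject H0 only if p < alpha = 0.05
```

With only three runs per model, the test has low power, so a large p-value (as in Table A) is best read as "no evidence of a difference" rather than proof of equivalence.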
Summary: This paper focused on video length extrapolation in Video Diffusion Transformers. The authors provided a comprehensive understanding of video length extrapolation by analyzing the role of frequency components in RoPE. Furthermore, a minimal yet effective method named RIFLEx is proposed to prevent repetition by reducing intrinsic frequency. Experimental results show that RIFLEx achieves high-quality 2× extrapolation on state-of-the-art video diffusion transformers in a training-free manner. Claims And Evidence: This paper claims that "generating even longer videos with temporal coherence remains a major challenge and existing length extrapolation methods lead to temporal repetition or motion deceleration." and validates this by experimental results. Methods And Evaluation Criteria: A minimal yet effective method named RIFLEx is proposed to prevent repetition by reducing intrinsic frequency. Qualitative and quantitative evaluations are conducted. Theoretical Claims: No theoretical claims Experimental Designs Or Analyses: Yes, a minimal yet effective method named RIFLEx is proposed to prevent repetition by reducing intrinsic frequency. Supplementary Material: Yes, the demo video part Relation To Broader Scientific Literature: This paper focused on video length extrapolation in Video Diffusion Transformers. The authors provided a comprehensive understanding of video length extrapolation by analyzing the role of frequency components in RoPE. Furthermore, a minimal yet effective method named RIFLEx is proposed to prevent repetition by reducing intrinsic frequency. Experimental results show that RIFLEx achieves high-quality 2× extrapolation on state-of-the-art video diffusion transformers in a training-free manner. Essential References Not Discussed: No Other Strengths And Weaknesses: Pros: 1. A minimal yet effective method named RIFLEx is proposed to prevent repetition by reducing intrinsic frequency. 2. This paper is well written and easy to follow. 3. 
Experimental results verify the effectiveness of the proposed method. Cons: 1. The experimental results shown in the supplementary materials display that some cases may still suffer from temporal inconsistency and may cause the camera to switch during playback. Could the authors give some explanation? 2. Why is the effect not good when the insertion multiple is greater than 3? So how do we choose the appropriate multiples according to different models? Other Comments Or Suggestions: Please refer to Weaknesses for more details Questions For Authors: Please refer to Weaknesses for more details Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank reviewer Cp7v for the valuable comments. We address the concerns as follows. ### Q1: The experimental results shown in the supplementary materials display that some cases may still suffer from temporal inconsistency and may cause the camera to switch during playback. Could the authors give some explanation? We appreciate the reviewer's attention to this detail and would like to clarify that this may be a potential misunderstanding. In fact, **multi-scene generation with camera transitions is a desirable and essential capability for video synthesis**. To achieve this, HunyuanVideo's training dataset is specifically curated to include diverse scene transitions (see Dense Description in Section 3.2 of HunyuanVideo[1]). As demonstrated on HunyuanVideo's project page[2] (e.g., the example in Row 2, Column 1), HunyuanVideo excels at "breaking monotonous cinematography with seamless director-level shot transitions." This capability represents a significant advancement in video synthesis. **Our method preserves the base model's ability to generate both multi-scene videos (Videos 1-3, 9 in the supplementary materials) and single-scene videos (Videos 4-8)**. Importantly, even in multi-scene generation, our method still maintains long-term temporal consistency—for instance, Figure 1 demonstrates consistent identity preservation across two distinct scene transitions. We will clarify this scene transition capability in Section 4.2 of the final version. [1] Kong, Weijie, et al. "Hunyuanvideo: A systematic framework for large video generative models." arXiv preprint arXiv:2412.03603 (2024). [2] https://aivideo.hunyuan.tencent.com/. ### Q2: Why is the effect not good when the insertion multiple is greater than 3? So how do we choose the appropriate multiples according to different models? #### Q2-1: Why is the effect not good when the insertion multiple is greater than 3?
The limitation of a 3× insertion multiple stems from a fundamental trade-off in positional encoding dynamics: an excessive reduction in frequency leads to a diminished ability to discriminate between sequential positions. Specifically, a larger extrapolation factor $s$ leads to smaller frequencies $\theta_k$ (see Eqn. (8)). When $\theta_k$ becomes excessively small, the position difference term $\Delta = \cos((p+1)\theta_k) - \cos(p\theta_k)$ diminishes to near-zero values, thereby losing positional discriminability. Empirically, we find the threshold occurs at $\theta_k' \le 2\pi/(3L)$ when $s=3$. Furthermore, **this upper limit is inherent to the positional encoding mechanism and is consistent across existing pretrained video diffusion models** based on our experiments. We will add the above detailed explanation in the "Maximum extent of extrapolation" part of the final version. #### Q2-2: So how do we choose the appropriate multiples according to different models? We clarify that **we do not select different $s$ for different models**. Instead, the extrapolation length is determined by the user's requirements, provided it remains below the upper limit. For example, in the paper, we demonstrate results for generating videos with multipliers of $2$, $2.3$, and $3$. Based on the analysis in Q2-1, the extrapolation limit is consistent across current models, eliminating the need for specific designs tailored to different models. We hope you might find the response satisfactory, and we would be delighted to clarify any further concerns you might have.
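The discriminability argument in Q2-1 is easy to check numerically. In this sketch, $L$ and $p$ are illustrative values (not taken from any specific model): the adjacent-position difference $\cos((p+1)\theta) - \cos(p\theta)$ collapses toward zero as the frequency is reduced for larger extrapolation factors $s$.

```python
import math

L, p = 128, 10  # illustrative training length and position

def adjacent_diff(s):
    # reduced frequency theta' = 2*pi/(L*s), as in the non-repetition condition
    theta = 2 * math.pi / (L * s)
    return math.cos((p + 1) * theta) - math.cos(p * theta)

# |diff| shrinks as s grows, i.e. adjacent positions become harder to tell apart
diffs = {s: adjacent_diff(s) for s in (2, 3, 6)}
```

For small angles the difference behaves like $-\theta\sin((p+\tfrac12)\theta) \approx -p\,\theta^2$, so it decays quadratically as the frequency is lowered, which is why pushing $s$ past ~3 quickly erodes positional discriminability.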
Summary: This paper focuses on a challenging question: how to perform length extrapolation for a trained video diffusion model? After some systematic analyses, they identify a quantity, named the intrinsic frequency, that governs the extrapolation behavior of a video diffusion model. They then propose RIFLEx, which reduces the intrinsic frequency when generating longer videos. This method is effective at preserving motion consistency and preventing content repetition. The method enables high-quality 2× extrapolation in a completely training-free manner and supports 3× extrapolation with minimal fine-tuning on original-length videos. ## update after rebuttal I appreciate that the authors take time to provide additional explanations. My concerns are fully addressed. I will keep my original score and support its acceptance. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes Theoretical Claims: N/A - The paper doesn't present formal proofs for theoretical claims. Experimental Designs Or Analyses: Yes, the experimental designs are sound and comprehensive. Supplementary Material: Yes Relation To Broader Scientific Literature: This work effectively builds upon some previous research directions, like position embedding in diffusion models. Essential References Not Discussed: Related work on repetition issues in autoregressive video generation models Other Strengths And Weaknesses: Strengths: 1. The solution is simple and effective. 2. The overall writing is good and easy to follow. Weaknesses: 1. Limited exploration of extrapolation factors beyond 3× 2. The identification method for intrinsic frequency relies on visual inspection rather than a theoretical analysis. Other Comments Or Suggestions: See Other Strengths And Weaknesses Questions For Authors: See Other Strengths And Weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate Reviewer JAfh for the acknowledgement of our contributions. ### Q1: Missing related work on repetition issues in autoregressive video generation models Unlike diffusion models, autoregressive video generation models typically quantize videos into discrete tokens and generate video content through next-token prediction in an autoregressive manner. Previous works have demonstrated great performance in such models [1–8]. For example, NÜWA [4] employs VQ-GAN for tokenization and generates videos using a 3D transformer encoder-decoder framework. More recently, VideoPoet [5] tokenizes images and videos with a MAGVIT-v2 encoder and autoregressively generates videos using a decoder-only transformer based on a pretrained large language model. While autoregressive video models can theoretically extend sequences indefinitely through next-token prediction [9-11], recent studies reveal their tendency to degenerate into repetitive content generation [5,11]. In this work, we present a principled approach to video length extrapolation that effectively generates novel temporal content in diffusion-based frameworks. Although our method is developed for video diffusion transformers, the underlying mechanism governing position embedding periodicity may also offer insights for addressing repetition challenges in autoregressive video generation. Thank you for highlighting this important direction. We will incorporate the above discussion in the related work section in the final version. We would appreciate any additional references the reviewer could suggest that we may have missed.
[1] GODIVA: Generating Open-Domain Videos from Natural Descriptions [2] VideoGPT: Video Generation using VQ-VAE and Transformers [3] CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers [4] NÜWA: Visual Synthesis Pre-training for Neural Visual World Creation [5] VideoPoet: A Large Language Model for Zero-Shot Video Generation [6] VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation [7] Generative Multimodal Models are In-Context Learners [8] Emu3: Next-Token Prediction is All You Need [9] Loong: Generating Minute-level Long Videos with Autoregressive Language Models [10] NUWA-Infinity: Autoregressive over Autoregressive Generation for Infinite Visual Synthesis [11] Long Video Generation with Time-Agnostic VQGAN and Time-Sensitive Transformer ### Q2: Limited exploration of extrapolation factors beyond 3× Thank you for the suggestions. In this work, we primarily focus on achieving length extrapolation at minimal cost based on pre-trained video diffusion models. As discussed in the main text, the 3× limitation stems from a diminished ability to discriminate sequential positions due to excessive frequency reduction. To further extend beyond 3× extrapolation, it is promising to investigate the mechanism of positional encoding during training, specifically tailored for extrapolation. We believe our findings could provide valuable insights for this direction, and we will include a more detailed discussion in Section 5. ### Q3: The identification method for intrinsic frequency relies on visual inspection rather than a theoretical analysis. Thank you for the insightful comment. In our current work, we primarily adopt an empirical approach—visual inspection—for intrinsic frequency identification when adapting the pre-trained video diffusion transformer. While this approach is effective for adaptation, we agree that establishing a theoretical foundation for intrinsic frequency identification is crucial.
Achieving this would require fundamental research into how intrinsic frequencies emerge during the pre-training process, potentially through analysis from a training-from-scratch perspective. We sincerely thank the reviewer for highlighting this direction, and we will address this in our future work. We will add the above points to the discussion section.
Summary: This work solves the problem of repetitiveness in long video generation from a new perspective. This work first analyzes and experiments with the frequency components of the video positional encoding RoPE, and concludes that the period of a frequency component directly affects the periodicity of certain characteristics of the generated video, and the frequency component closest to the video repetition period has the greatest impact. Therefore, based on this idea, this work defines an intrinsic frequency component, which has a period close to the video repetition period. Then, directly reducing this frequency component can alleviate the problem of video repetition. This method can be used in the latest and most advanced video generation models, such as CogVideoX and HunyuanVideo, which have their corresponding intrinsic frequency components. Therefore, for video extrapolation, this method supports two modes: 1) In the training-free manner, the "free lunch" is to directly find the corresponding intrinsic frequency component, and then directly reduce this frequency component according to the extrapolation factor. 2) For a larger extrapolation factor, a very small number of training samples can be used to fine-tune the video generation model to adapt to the reduced frequency component. The author conducted extensive experiments on this method in CogVideoX and HunyuanVideo, and found that it outperformed other existing video extrapolation methods in both quantitative evaluation and quality assessment. This work alleviates the problem of video extrapolation from a completely new perspective at the lowest cost, which is of great significance to the field of video generation. ## update after rebuttal I would like to thank the authors for their responses in the rebuttal, which addressed my concerns. I think this is a good paper and should be accepted.
Claims And Evidence: - High frequencies capture rapid movements and short-term dependencies, inducing temporal repetition, while low frequencies encode long-term dependencies with slow motion. This conclusion is based on the calculation of the period of the frequency component and the repetition period of the corresponding video extrapolation. The basis here is supported by both theoretical deduction and experimental demonstration. - Since the intrinsic frequency component directly affects the repetition period of the video, reducing the intrinsic frequency component lengthens the repetition period of the video, so that video extrapolation can be extended to longer videos. Methods And Evaluation Criteria: - This work proposes a novel "free lunch" for video extrapolation. This method is very simple and clear, and is a perfect solution from derivation to practical application. This method has been convincingly verified on the latest video generation model. - For longer video extrapolation, the method proposed in this work requires simple fine-tuning, which may be due to the gap between theoretical derivation and practical application. However, experiments show that only a small amount of fine-tuning is required, and the method can be adapted to longer video extrapolation tasks. - This work uses the common evaluation criteria for video generation to evaluate the results of video extrapolation, and has achieved amazing results in both quantitative evaluation and quality evaluation. Theoretical Claims: I checked the theoretical derivation and proof and found no obvious problems. Experimental Designs Or Analyses: - In the quantitative evaluation of the experimental part, in the training-free evaluation of HunyuanVideo, the proposed method is slightly worse than the best method in automatic metrics. The author can analyze the specific reasons in detail. 
- From a theoretical point of view, the biggest difference between the method proposed by the author and previous methods such as PI, NTK or YaRN is that the method in this paper focuses on the most important intrinsic frequency component rather than a group of components. Is this correct? What is the importance of this difference? Supplementary Material: I read the supplementary material, which provides more generated video comparisons of the proposed method. Relation To Broader Scientific Literature: This work alleviates the problem of video extrapolation from a completely new perspective at the lowest cost, which is of great significance to the field of video generation. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: The author has carried out detailed theoretical derivation and experimental verification, showing the correlation between the intrinsic frequency component and the video repetition period. This method is very simple and clear, and is a perfect solution from derivation to practical application. Weaknesses: - What is the core difference of this method compared with other methods? Is it to scale only the most critical frequency component, rather than a group of frequency components? The paper can discuss in detail the difference from previous methods. - What is the reason why the method proposed in the paper will have a gap between theory and practice? Why can fine-tuning alleviate this problem? The author can analyze this problem in detail. - Why does this set of ideas work in spatial extrapolation? Other Comments Or Suggestions: - Line 5 in Algorithm 1 should not refer to Eqn. (8), but to the equation in 306, that is, $\theta'_{k}=\frac{2\pi}{Ls}$. - The subscripts in the formulas in the article basically refer to frequency components. The author should provide the definition of the subscripts multiple times to prevent people from misunderstanding them as frame subscripts. This is a minor concern of mine. 
Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank Reviewer EXby for the valuable suggestions. We have thoroughly addressed the detailed comments as follows. ### Q1: Explain the proposed method is slightly worse than the best method in automatic metrics. We kindly clarify that **only through a comprehensive consideration of multiple metrics can we objectively assess video generation quality, rather than relying solely on a single metric.** For example, while PI and YaRN achieve the highest score on the single NoRepeat Score metric, they perform significantly worse on the Dynamic Degree metric (PI/YaRN vs. our RIFLEx in Table 1), resulting in poor video quality and rendering other metrics meaningless. See Figure 5 for the evidence. To highlight this, we mark severe issues in red for clarity (see lines 330-332). We emphasize that **our method is the only one consistently highlighted in the green zone across all 5 settings in Table 1**. ### Q2: The core difference between this method and others. Our method differs from prior works in two key aspects: - We determine **which frequency in RoPE should be modified**—specifically, the intrinsic frequency whose period aligns with the first observed repetition frame in a video (Eqn. (7)). As discussed in the main text (lines 306–311), modifying only this frequency is sufficient: adjusting higher frequencies disrupts fast motion, while altering lower frequencies has negligible impact. - We derive **how to adjust this frequency**—ensuring it remains within a single period after extrapolation (the non-repetition condition, Eqn. (8)). Existing approaches modify multiple frequency components in RoPE, but **they may target incorrect components**. For instance, methods like YaRN and PI mistakenly adjust high frequencies, which results in slow motion (see lines 295–299). Furthermore, even when some methods include the correct components, **their modifications can be flawed**.
For example, the adjustment of intrinsic frequency in NTK does not satisfy our non-repetition condition, resulting in repetition issues (see lines 326–329). In summary, our work establishes principled guidelines for position embedding design in length extrapolation. We will make this clearer in the "Principled Explanation for Existing Methods" section (Section 3.4). We hope this response clearly articulates our contributions and would be very happy to clarify further concerns (if any). ### Q3: Explain the gap between theory and practice and why fine-tuning can alleviate this. We understand that the gap you are referring to is that fine-tuning the model yields better results than not fine-tuning. This arises from a training-testing mismatch, where the position embeddings used during inference slightly differ from those in training due to modified frequencies. While this discrepancy does not undermine the conclusion about our non-repetition condition, it may affect visual quality since the model lacks explicit training on these specific position embeddings. Fine-tuning helps bridge this gap by adapting the model to these variations, thereby improving visual quality. We will incorporate the above explanation of the training-testing mismatch into Section 3.3, where we discuss whether fine-tuning is necessary. ### Q4: Why does this set of ideas work in spatial extrapolation? This is because video diffusion transformers typically apply 1D RoPE independently to both spatial and temporal dimensions (see Section 2.2, "RoPE with Multiple Axes"). **This shared mechanism results in similar challenges during extrapolation for both dimensions**. As illustrated in Figure 2: - Spatial Repetition ↔ Temporal Repetition: Both phenomena occur when intrinsic frequency components exceed a single period after extrapolation.
- Blurred Details ↔ Slower Motion: These effects arise from interpolating high-frequency components, which are essential for spatial details in the spatial domain and fast motion in the temporal domain. Therefore, our method can be extended to spatial extrapolation, providing a unified framework for extrapolation in diffusion transformers. We will add the above explanation in the "Extension to other extrapolation types" part of the final version. ### Q5: Line 5 in Algorithm 1 should refer to $\theta_k' =\frac{2\pi}{Ls}$. We will correct Line 5 of Algorithm 1 to use $\theta_k' =\frac{2\pi}{Ls}$ in the final version. Thank you for the careful review. ### Q6: Provide the definition of the subscripts multiple times. As suggested, we will explicitly define the subscript $j$ as indexing frequency components of RoPE multiple times in Section 3.1 to prevent confusion with frame indices in the final version. Finally, we sincerely thank the reviewer for the constructive suggestions, which help to further improve the quality of our work. We hope you may find the response satisfactory. Please let us know if you have any further feedback.
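As a hedged illustration of the two steps discussed in Q2 (which RoPE frequency to modify, and how), the following Python sketch is one reading of Eqns. (7)–(8) as summarized in this rebuttal, not the paper's exact implementation; the RoPE base, dimension, and `repeat_frame` values below are illustrative placeholders:

```python
import math

def rope_frequencies(dim, base=10000.0):
    # Standard RoPE frequency spectrum: theta_j = base^(-2j/dim),
    # for j = 0 .. dim/2 - 1 (monotonically decreasing in j).
    return [base ** (-2.0 * j / dim) for j in range(dim // 2)]

def intrinsic_index(freqs, repeat_frame):
    # Sketch of Eqn. (7) as described above: the intrinsic frequency is
    # the component whose period 2*pi/theta_j is closest to the frame
    # index where repetition is first observed.
    return min(range(len(freqs)),
               key=lambda j: abs(2.0 * math.pi / freqs[j] - repeat_frame))

def adjust_intrinsic(L, s):
    # Corrected Line 5 of Algorithm 1: theta_k' = 2*pi / (L*s), so the
    # extrapolated length s*L stays within a single period, i.e. the
    # non-repetition condition of Eqn. (8): s * L * theta_k' <= 2*pi.
    return 2.0 * math.pi / (L * s)
```

For example, with L = 49 training frames and extrapolation factor s = 2, `adjust_intrinsic(49, 2)` yields a frequency whose period covers all 98 extrapolated frames, satisfying the non-repetition condition up to rounding.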
Are LLMs Prescient? A Continuous Evaluation using Daily News as the Oracle
Accept (poster)
Summary: The manuscript introduces Daily Oracle, a benchmark dataset composed of automatically-generated question-answer pairs concerning daily news over a 4-year period. The questions are all phrased in a "forecast" manner (e.g., "Will X happen?", "What will Y be on DD-MM-YY?") and are either yes/no or multiple-choice (4-option) questions. Different language models are evaluated on this benchmark in one of three regimes: closed-book (no context), constrained open-book (RAG on news articles up to a certain cutoff date), and gold article (the article used to generate the question-answer pair is explicitly provided to the model). The models are good at predicting the answers in the gold article setting. In the two other settings, the models are better at predicting the sought answer at earlier times than at later ones, with a more pronounced decrease around the data cutoff point. Part of this decrease has been identified as the model refusing to predict the future. Claims And Evidence: ### Claim 1: Introducing a continuous forecasting evaluation benchmark > We present Daily Oracle, the largest and most up-to-date forecasting dataset From Table 1, this dataset is the only daily one, it is one of 6 forecasting datasets (which, as I understand it, means that the questions are worded as a future-prediction task), it is the second largest (and the largest dates back to 2021). I deem this claim supported. ### Claim 2: Empirical Findings on Performance Degradation > Our work effectively reveals a clear performance degradation pattern in LLMs’ forecasting accuracy over time. The statement as worded appears factual. However, I see issues with the general narrative, as well as with more granular claims made throughout the manuscript. This is my main issue with the manuscript in its current state, which I try to convey through comments on some excerpts (emphasis mine).
> Additionally, despite prompting the models to avoid **responses like “I cannot predict the future”** and instead provide definitive answers, there are cases where such refusals still occur. The rejection rates are provided in the Appendix B.3, and **these cases are counted as incorrect** to ensure comparability across model results. These models were explicitly trained to say things like “I cannot predict the future” when asked to predict the future. The refusal rates shown in Figure 8 are quite substantial, and they behave like one would expect them to for models that were trained to refuse to predict the future. > However, post-knowledge cutoff, we observe steeper declines in many models, with GPT-4 showing the most drastic drop in MC performance, declining by 18.54%, compared to just 4.23% before the cutoff. This contrast highlights that while LLMs manage to retain a baseline of past knowledge with small degradation, **their ability to forecast future events deteriorates much more rapidly as they move beyond their training data, struggling with temporal generalization.** ... or they just do as they were trained to do? We're talking about more than 50% variation for Mixtral-8x7B on MC questions, and about 15% variation for Mistral/Mixtral on TF. If you add these values to those in Figure 3 (i.e., if you reinterpret refusals as correct answers), the corresponding TF curves would be flat, and Mixtral's MC curve would go up. And Figure 8 shows "lower bound" effect, only reporting the refusals that were caught, and not accounting for the cognitive dissonance induced by a prompt saying the opposite of what the model was trained to do. > For Mixtral-8x7B, as the RAG cutoff dates extend to closer to the resolution dates, we observe a clear improvement in performance, indicating the model benefits from increasingly updated information retrieval. 
However, there are noticeable performance drops immediately after each RAG cutoff date when compared to providing information up to the day before the resolution date. **This highlights the importance of keeping up-to-date information for optimal RAG performance.** No! This highlights that Mixtral refuses to answer when it doesn't have the information required to answer! > The overall decline trend may come from two sources, the missing knowledge of future and a lack of up-to-date language representation. Or these models were not trained to be oracles? I mean, even rewording the questions to become "which is the most likely..." assessments instead of "will ..." could potentially help. A more passive approach to study this phenomenon could be to assess if there is a difference in the wording of the questions that Mixtral *did* answer past the cutoff. Another data point: Claude is the least reluctant to predict the future (Figure 8) and the "best" at doing so (Figure 3). To be clear, I do believe that there is something worth saying here, it is just that it may not be what is currently being said. This is the topic of my Question 1 below. Methods And Evaluation Criteria: Excluding the subject of my Question 1 below, the way the language models are evaluated makes sense to me. The reported metrics in the main paper are all averaged over 5 months, but the Appendix provides some raw data. This was important for me to make my mind about potential issues with the September 2024 transition to GPT-4o for generating the dataset. The manuscript's main point is to introduce a benchmark, so the real evaluation concerns the benchmark itself. Summary statistics are provided in Figure 1, and distribution stability is assessed through Figure 2. Human evaluation according to different criteria is provided in A.4. 
Only 60 questions were human-annotated, there were only 4 human annotators, and there is no mention of their sociocultural background nor of their potential affiliation/overlap with the authors. The generation method of the benchmark makes sense to me overall, except that there is no explicit check to assess whether the switch to GPT-4o in September 2024 altered the distribution of questions and answers. Theoretical Claims: I didn't notice any particular theoretical claims. Experimental Designs Or Analyses: See my answer to Methods and Evaluation Criteria above. Supplementary Material: I browsed quickly, only giving real attention to Appendices A.4 and B.3, and Figure 17. Relation To Broader Scientific Literature: What is presented in the Related Work section makes sense to me, though I may be unaware of some missing related work. Essential References Not Discussed: . Other Strengths And Weaknesses: The benchmarking dataset itself is likely to be useful to the community. Other Comments Or Suggestions: Table 1: consider adding `\citep{...}` after the dataset names. As a reader, I often use those as lookup tables. The ordinate axes in Figure 8 appear mislabeled. Many news articles may be generated by LLMs toward the end of the period, whereas this likely wasn't as much the case back in 2020. When the model data cutoff date is unknown, consider showing the model's release date: the cutoff must precede it (with caveats for API-based models). Consider giving a high-level description of the annotators. Things like their level of education, age range, etc., as well as their relations to the authors, if any, and if they were paid. Questions For Authors: ### Question 1: What is your proposed solution to my issues mentioned in Claim 2 above? Do you agree with my observations and assessments? Are you open to contextualize, reword, and/or tone down these kinds of claims? If yes, to what exactly?
Or if you disagree, please let me know of your arguments and/or additional evidence. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful feedback and constructive suggestions, which we will incorporate into the future version of our manuscript. *** ### Concerns about refusal rate We appreciate the reviewer’s thoughtful comments regarding the refusal cases of Mistral and Mixtral models. We would like to clarify the following points: as seen in Figure 8, **only the Mistral and Mixtral models exhibit notable refusal rates**, with approximately 10–30% on TF questions and 1.5–8% on MC questions. In comparison, Qwen-2-7B and Gemma-2-2B show relatively low refusal rates—<5% for TF and <2% for MC—while **all other models have near-zero refusal rates for TF and <1% for MC**. We count refusal cases as incorrect both to maintain comparability across models and because failing to provide an answer—when a prediction is expected—represents an unsatisfactory outcome from the user's perspective. Further, 1. We provide an additional plot [Fig.S7](https://imgur.com/a/ZTpSKyj) that excludes questions the models refused to answer, focusing only on cases where the model provided a definite answer. While this means the results among models are no longer directly comparable, the performance degradation trend remains evident, particularly for TF questions in Mixtral-8x7B. We hope this analysis helps address the reviewer’s concern regarding the impact of refusal behavior on the observed trends. We would be happy to include this plot in a future version of the paper to help clarify concerns related to refusal cases. 2. Our claim of `"their ability to forecast future events deteriorates much more rapidly as they move beyond their training data, struggling with temporal generalization"` specifically contrasts pre- and post-knowledge-cutoff performance for models with known cutoff dates. The Mistral and Mixtral models mentioned by the reviewer do not disclose such information, and therefore are not used to support the claim here. 
For the models with known knowledge cutoff (Claude, GPT, LLaMA, and Gemma), refusal rates are minimal—most of them having nearly 0% for TF questions and less than 1% for MC questions—and thus have negligible impact on the observed degradation patterns. 3. **Effect of rewording questions:** We thank the reviewer for suggesting the rewording experiment, and tested the impact of softer phrasing (e.g., rephrasing “Will...” to “Would it be likely...”) using Qwen-2-7B and Mistral-7B on TF questions. As shown in [Fig.S8](https://imgur.com/a/NMy925N), this change reduced refusal rates by approximately 10%. However, the overall performance degradation trend still persists, suggesting that refusal alone does not account for the observed decline. 4. **On models `“not being trained to be oracles”`:** We believe that model refusals alone do not explain the observed performance decline. As shown in Fig.S7, even when refusal cases are excluded and only answered questions are considered, the downward trend in performance persists. 5. **On reluctance vs. performance:** While Claude demonstrates the lowest refusal rate on TF questions (Figure 8), and high performance (Figure 3), this pattern does not generalize across models. GPT-3.5, GPT-4, and LLaMA-3-8B also have near-zero refusal rates, yet show varying levels of forecasting accuracy. Thus, we do not find strong evidence that lower reluctance to predict directly correlates with better performance. Overall, our findings indicate that performance decline is a general phenomenon that persists even when refusals are removed from the analysis. *** ### Others 1. For concerns about the number of human evaluation samples, please refer to our response to Reviewer xifq ("Sample size of human evaluation"). Regarding the shift of newer 4o models since September 2024, please refer to our response to Reviewer xifq ("Distribution of question categories and question types") and Reviewer s4SM ("Choice of models in QA generation"). 2. 
Background of human annotators: The 4 annotators are graduate students from the authors’ institution, majoring in finance, accounting, statistics, and data science, respectively. They were not involved in the research beyond their role in conducting the evaluations. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification. > 3. Effect of rewording questions: A 5-10% decrease in refusal rate, yielding a ~5% increase in performances (even for Qwen), is significant. If I were reviewing manuscript introducing a modeling technique granting a 5% improvement across many architectures for such a use case, I would likely accept it. > which we will incorporate into the future version of our manuscript. Will this incorporation include the aforementioned dependency on the formulation of the question, and more generally the potential issues with using as oracles LLMs that were explicitly trained against speculation? I think that this should be discussed in introduction, perhaps even the abstract. If you are ready to discuss such points, could you please give some examples of sentences/paragraphs as to how you intend to present them? ## April 7th Addendum (The interface does not allow me to reply to https://openreview.net/forum?id=v2nV83Q849&noteId=x87Dt626c3 , so I'm doing so here.) > Planned revisions for future manuscript I assess that such changes would have the manuscript cross the minimal threshold to come on the good side of the "truthful-misleading axis". > interesting but outside the scope of our core research question I acknowledge that this was outside the initial scope. My point is that this work may have revealed a *fundamental flaw at the core of this "future event prediction" research space*, and that future work should properly ponder these questions at an early stage of experimental design. I believe that there may be a missed opportunity to clearly spread that message. 
I am still not as satisfied as I would have wished to be, but I will raise my score from 2 to 3. I won't fight for nor against this manuscript. --- Reply to Comment 1.1.1: Comment: Thank you for the following comments. We hope to make further clarifications. ### 1. Effect of rewording questions is interesting but outside the scope of our core research question It is important to note that changing the prompting “Will” to “Would it be likely...” again provides a clear degrading trend and has a similar shape as the original prompts, thus the main claim still holds. While the observed improvement in performance is interesting and worth exploring, analyzing such effects of different prompting styles falls outside the scope of this work. Moreover, it is standard practice to frame forecasting questions using the “Will…” format rather than softer or speculative phrasing - consistently used across all forecasting datasets (ForecastQA, AutoCast, TLB-Forecast, ForecastBench, and FreshBench) mentioned in our literature review. ### 2. Refusal to answer as an indicator of lack of knowledge We acknowledge the reviewer's concern that models are designed to be cautious—sometimes refraining from answering questions about uncertain future events. This cautious behavior, particularly evident in Mistral and Mixtral (Figure 8), likely stems from their RLHF training, which discourages the output of potentially misleading information. For our evaluation, we count refusal cases as incorrect both to ensure comparability across models and because a failure to provide a prediction is unsatisfactory from a user’s perspective. In a closed-book setting, refusal rates of Mistral and Mixtral range from ~10% to 30%. However, when models are supplied with retrieved articles or gold articles (thus receiving additional relevant information), they are more likely to generate definitive answers, reducing the refusal rate, i.e. 
the refusal rates in [Fig.S12](https://imgur.com/a/jvJ6mUN) (d) and (f) are much lower than in the closed-book setting (b). Therefore, refusal to answer can be partially mitigated by providing more relevant knowledge. ### 3. Our conclusion of degradation is still valid given the refusal behavior We argue that our main conclusion - “we can observe the performance degradation of LLMs in future event forecasting tasks in multiple experimental setups” - still holds, regardless of the model’s refusal behavior. As evidenced in [Fig.S12](https://imgur.com/a/jvJ6mUN) (a), (c), (e), which excludes all questions that the model refuses to answer, we can still observe a clear degradation trend across 3 settings. Therefore, one of the reasons for the low performance of Mistral and Mixtral is the refusal behavior; however, the decline likely stems from the lack of future knowledge and out-of-date representations. Moreover, the increasing trend in the refusal rate further supports our claim that over time model performance degrades (models become increasingly unwilling to forecast). In contrast, the lower refusal rates in the RAG and gold article settings indicate that when models are provided with more recent, relevant knowledge, their performance improves. This finding again underscores the necessity of continual pretraining or supplying updated knowledge.
*The absence of relevant future information can lead to two outcomes: either the model makes uninformed or incorrect predictions, or, in some cases, it becomes more likely to refuse to answer altogether. We observe this latter behavior notably in Mistral-7B and Mixtral-8x7B, where refusal rates are significantly higher compared to other models.* Paragraph: *While most models show minimal refusal behavior, Mistral-7B and Mixtral-8x7B frequently refuse to answer forecasting questions (Figure 8). This is likely influenced by alignment techniques, which discourage speculative or uncertain responses in the post-training stage. Although refusal rates contribute to lower scores for certain models, our results show that performance degradation trends persist even when refusals are excluded ([Fig.S12](https://imgur.com/a/jvJ6mUN) (a), (c), (e)). We consider refusal to answer an indicator of performance limitations in forecasting tasks, as it reflects the model’s lack of actionable knowledge. When models are supplied with more up-to-date and relevant information, their refusal rates decrease ([Fig.S12](https://imgur.com/a/jvJ6mUN) (d), (f)). This suggests that refusal is one example of the broader challenge of temporal generalization and reinforces the need for continual model updates or improved external knowledge integration.*
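The two scoring conventions debated in the thread above (counting refusals as incorrect for cross-model comparability, versus excluding refused questions as in Fig.S7 and Fig.S12) can be sketched as follows; the `"REFUSE"` marker is a hypothetical placeholder for however refusals are detected in practice:

```python
def forecast_accuracy(preds, golds, refusal="REFUSE", refusal_is_wrong=True):
    # Pair predictions with gold answers; either keep refusals (they
    # then count as incorrect) or drop them from the denominator.
    pairs = [(p, g) for p, g in zip(preds, golds)
             if refusal_is_wrong or p != refusal]
    if not pairs:
        return 0.0
    return sum(p == g for p, g in pairs) / len(pairs)
```

Counting refusals as wrong lowers a reluctant model's score; excluding them keeps only answered questions, which is the variant used above to check that the degradation trend is not merely an artifact of refusal behavior.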
Summary: The paper uses the task of forecasting real-world events to demonstrate that LLM knowledge deteriorates on more recent questions, and this trend also holds for retrieval. It generates these forecasting questions between January 2020 and December 2024 using LLMs, sourcing information from news articles. Claims And Evidence: Yes, the claims made in the paper are focused, and supported with evidence. However, there are concerns with the experiment design, stated below. Methods And Evaluation Criteria: Please report Brier score / log-odds for all plots. Accuracy is a bad metric for forecasting as one can realistically never be sure whether the event will occur or not (inherent uncertainty). The key uncertainty about this paper lies in the quality of the LLM-generated forecasting questions dataset. The paper does not discuss how it evaluates this quality. Further, GPT-3.5 is used to create some questions, and it is unclear why, when GPT4o/4omini are both cheaper and much more reliable models. I encourage the authors to think carefully about how the quality of questions can be measured, and also re-generate questions before September 2024 with GPT4o if this improves quality compared to GPT-3.5. This would make the data/benchmark much more usable for future work. For example, a quick look at the data provided in the supplementary led me to find obviously faulty questions. E.g.: a) "What career strategy will be recommended in January 2020, traditional ladder climbing or pursuing innovative and entrepreneurial approaches?" -- recommended by whom? I don't think this question has any single answer. b) Which aspect of social media strategy will most brands and individual thought leaders overlook in January 2020? - Same issue as a). One way of measuring question quality could be checking the Brier score of the same models on questions beyond the cutoff-date, comparing different sources like Metaculus, Manifold, Polymarket as baselines.
In fact, I am unsure why these other data sources are not already included in the analysis of this paper, as they do provide many thousands of questions and it's unclear whether LLM-generated questions provide any better quality. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: The broad experimental design of varying model performance across time (including RAG cutoffs) makes sense and is interesting. However, there are a few major issues in the details which could be confounders for the results: 1) The performance with RAG greatly depends on the number of recent articles in the index, which is being varied across time. Could you please present temporal plots with the number of articles published in a fixed time window, say the last year from each date, that are retrievable from the index? 2) Back-generating questions using news might lead to the creation of a biased question set based on events that are reported in the news. Why not use questions on platforms like Polymarket for this? Can you provide an empirical comparison to what happens if the same analysis was done using questions obtained from Polymarket's API? Supplementary Material: Yes, I had a look at the data and it seems quite noisy. More than forecasting, since questions very often lack context, they seem to be testing more of "which of the options seem more plausible". Moreover, the negatives are very easy, whereas the positives seem implausible a priori but by virtue of being from the news (which reports surprising events that happened) still end up being answered yes. Relation To Broader Scientific Literature: There have been previous papers on the potential of forecasting as a language model benchmark. Backtesting and lookahead bias (a form of contamination) are extremely important for forecasting in other domains (such as the stock market). This paper contributes towards better backtesting for language model forecasting.
Essential References Not Discussed: Consistency Checks for Language Model Forecasters, Daniel Paleka, Abhimanyu Pallavi Sudhir, Alejandro Alvarez, Vineeth Bhat, Adam Shen, Evan Wang, Florian Tramèr, ICLR 2025 -- Also uses a sophisticated LLM generated forecasting questions pipeline. Please compare with this. Other Strengths And Weaknesses: **Strengths** 1. The use of forecasting as a task to test model performance across time is interesting. 2. Generating forecasting questions using News and LLMs is a clever insight. **Weaknesses**: 1. Question quality is not evaluated, and if the data is bad all the results could be unreliable. 2. The results/trends are confounded by the number of fresh articles in the retrieval index across time, which is not reported. Other Comments Or Suggestions: Please add methodology to measure question quality to validate design choices, and address questions below. Questions For Authors: 1. Why is model performance increasing after July 2024 in the gold article setting? 2. Why use Mixtral 7b for Figure 4 (left) when its performance is almost worse than random after a certain point. Why not report the same Llama 3 8b model. 3. Could you add more details about the retrieval, such as the number of articles available at each month in the relevant period using your scraping pipeline? Would more sophisticated retrieval than BM25 lead to improved results for models? 4. Does question quality improve by using a better model? Code Of Conduct: Affirmed. Overall Recommendation: 4
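The Brier score requested in the review above is the mean squared error between the forecast probability of "yes" and the binary outcome; a minimal sketch (the example probabilities below are illustrative, not taken from the paper):

```python
def brier_score(probs, outcomes):
    # probs: forecast probability that each event resolves "yes";
    # outcomes: 1 if it did, 0 otherwise. Lower is better; an
    # uninformative constant forecast of 0.5 always scores 0.25.
    assert len(probs) == len(outcomes) and probs
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)
```

Unlike accuracy, this rewards calibration: a confidently wrong forecast (p = 0.9 when the event does not occur) is penalized far more than a hedged one (p = 0.6).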
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the detailed feedback. *** ### Brier score While we agree that Brier score is valuable to account for uncertainty in binary predictions, we clarify that accuracy remains a valid metric in this setting, revealing a clear performance degradation trend in our experiments. We provide Brier score results in [Fig.S4](https://imgur.com/a/fjtEONv). The increasing trend further confirms the previously identified performance degradation. We also note that Brier score can be sensitive to calibration issues of LLMs, which is an important but orthogonal direction that we leave for future work. *** ### Quality of dataset We refer the reviewer to Section 3.1 & Appendix A.3 for our quality control process. During the QA filtering step, 7 principles are identified based on common mistakes observed during manual reviews while testing various QA generation prompts, and overall 18.11% of TF questions and 24.20% of MC questions were filtered out. Further validation, presented in Section 3.3 and Appendix A.4, includes a human evaluation of the filtering process. We find strong agreement between human reviewers and LLM-assigned scores, with an average accuracy of 89.52% across the 7 principles. We are aware of examples like those the reviewer mentioned and designed our filtering criteria such as “Answerability” to address such issues. While some imperfections are inevitable in any LLM-generated dataset, we note that for final QA pair acceptance, the agreement between LLM and human evaluations achieved 85% accuracy, indicating that the majority of the retained questions are valid and of acceptable quality. Moreover, if the questions were broadly unanswerable, we would not observe clear differences in LLM accuracy across models or the consistent degradation trends over time. *** ### Comparison with forecasting markets We would like to explain why we chose to focus on LLM-generated questions in this work: 1.
[Prior work](https://arxiv.org/abs/2402.18563) collected data from 5 existing forecasting platforms, sourcing 48,754 raw questions (2015-2024). They note that many questions in the raw dataset were unsuitable, resulting in a much smaller filtered dataset of 5,516 questions compared to our 31,510. Also, in Figure 11, existing platforms offer limited coverage in earlier years (<300 questions per quarter before Q4 2021), making longitudinal analysis difficult. In contrast, our method supports high scalability and retrospective generation, allowing for uniform coverage across the full time range. 2. [Concurrent work](https://arxiv.org/abs/2405.08460) collected 2,532 questions from GoodJudgmentOpen to study temporal generalization. However, the trend is difficult to discern due to the limited number of data points (ranging from 2 to 8) for each model (Table 3). In contrast, while their bi-monthly accuracy results exhibit significant fluctuations, our dataset presents a clearer trend of monthly accuracy degradation, providing deeper insights into how LLM performance evolves over time. Finally, while we do not claim that LLM-generated questions are of inherently higher quality, we believe our automatic QA generation approach offers several key advantages. If one sources questions from forecasting markets, the dataset update frequency is dependent on whether there are active users. In contrast, our approach enables daily updates, scalability, and more comprehensive event coverage, making it a valuable complement to human-curated forecasting benchmarks. *** ### RAG results with one-year retrieval window We conduct the suggested experiment with a fixed one-year retrieval window on a subset of 1,500 TF questions (randomly selected 25 questions for each month). As shown in [Fig.S5](https://imgur.com/a/sB7jb1D), our key findings remain consistent: while models can benefit from retrieving more updated articles (RAG cutoff 2024-03), a degradation trend still persists over time. 
This reinforces the broader conclusion that temporal distance from pretraining continues to impact performance, even with external knowledge augmentation. *** ### Choice of models in QA generation Our initial version of the dataset was generated before GPT-4o became available. We compared question generation across GPT-3.5 & GPT-4 and the newer GPT-4o & GPT-4o-mini models using the same set of articles. Each model produced 48 TF and 48 MC questions, manually evaluated using the same seven QA filtering criteria. Newer models outperformed the older ones with a 54.55% win rate, highlighting their potential to improve question quality in future dataset iterations. *** ### Others 1. Complete RAG results can be found in Appendix B.5. 2. Number of articles available for retrieval - [Fig.S6](https://imgur.com/a/qsYFhLT) 3. Paleka (2025) introduces a forecasting dataset to test LLMs’ logical coherence. While “generate-then-verify” via LLMs is a common approach, our main contribution lies in capturing and quantifying degradation patterns. --- Rebuttal Comment 1.1: Comment: Thanks for your response. I appreciate the new figures with Brier score and fixed retrieval window. The latter was a bit hard to parse because the line with 1 year retrieval window is not labelled. Is it the green one? In this case, any idea why limiting to a 1 year retrieval window actually improves performance for models? I am not particularly satisfied with the quality filtering done in the paper, i.e. LLM based filtering, with some grounding provided by comparing human judgements for filtering. First, it's not clear who these humans are, and why their judgement about forecasting questions (which is quite hard) is reliable. Second, even if the human judgements about filtering were reliable, it still does not provide a good way to measure question quality. Moreover, can the authors compile a comprehensive list of limitations they know about the questions in this dataset, and include it in the paper?
Further, the arguments about not using forecasting market data seem a bit hand-wavy. In particular, I don't know if raw questions from platforms without filtering are any worse than the LLM-generated ones in this dataset. Thus, it's unclear to me why they cannot be used to analyse whether the observed trends are consistent. With LLM-generated questions, there could be added unknown confounders, such as question hardness varying with time.

I am increasing my score from 2 -> 3, as I think the paper is interesting enough to be worthy of acceptance. I encourage the authors to answer my remaining questions in the original review, as well as some raised here, and if the responses are satisfactory, I am open to increasing the score by another point.

**Update based on response to this comment**: The follow-up response of the authors clarifies most of my questions and concerns. I will upgrade my score to 4. I think the paper proposes a very useful idea, generating synthetic samples using an LLM grounded in daily-updating real-world news. It uses it to show an interesting trend: model performance degrades over time. The only reason I will not go to 5 is that clear quality metrics of the generated samples are not defined, so it's still not clear how to measure progress in this direction. Still, it's definitely a paper with useful insights, worthy of acceptance.

---

Reply to Comment 1.1.1:

Comment: We thank the reviewer for continued engagement and thoughtful comments.

### 1. 1-year retrieval window figure

Apologies for the confusion. We provide [Fig. S9](https://imgur.com/a/4xUzOTq), which overlays the original and 1-year window results. The patterns vary by model:

- Qwen-2-7B: no clear preference observed
- LLaMA-3-8B: the 1-year retrieval window often outperforms the full window, as it avoids outdated but semantically similar articles that can mislead the model. This suggests the value of future research on balancing semantic relevance and recency in retrieval.
We also test the dense retriever all-MiniLM-L6-v2 and find its performance comparable to BM25. While further gains are possible with hybrid retrieval or reranking, we leave it for future work. Our use of BM25 aligns with the choices in prior work (ForecastQA, AutoCast, and TLB-forecast), providing a valid temporal trend.

### 2. Forecasting market questions

[Prior work](https://arxiv.org/abs/2402.18563) sourced 50,343 [raw questions](https://huggingface.co/datasets/YuehHanChen/forecasting_raw) from 5 forecasting platforms, of which 21,149 are resolved. Among these, 83% are TF, 13% are MC, and others are free-response or numerical. Only 5,516 TF questions remained in their [final dataset](https://huggingface.co/datasets/YuehHanChen/forecasting) after filtering. We find that performance trends on the market dataset are noticeably more volatile and harder to interpret compared to ours.

**a) Lower quality in raw questions**

Our manual inspection confirms the raw dataset contains a substantial amount of low-quality data, as mentioned in their work. E.g.,

- Will I have a chess.com rating of >1300 ...? (personal)
- Will Jamaica beat Mexico? (no time element)
- Are there more disadvantages in AI than advantages? (ill-defined)

Of 50 randomly sampled questions, only 28% are well-defined. 26% lack a clear time element, 20% are overly personal, and 26% are ill-defined. Notably, they retain just 5,516 out of 17,477 resolved TF questions, a low acceptance rate (32%) that aligns with our observations.

**b) Limited earlier year coverage**

[Fig.S10](https://imgur.com/a/h9QFTJA) shows that the coverage before 2022-10 is sparse, averaging only ~40 raw and ~26 filtered questions per month. This scarcity limits the feasibility of longitudinal trend analysis. In contrast, our method supports high scalability and retrospective generation.

**c) Harder-to-discern trends using forecasting market questions**

We run evaluations on TF questions with the forecasting market dataset.
The original data is imbalanced, with 61% “No” answers in the raw set and 64% in the filtered set. After balancing, we retain 12,438 questions in the raw data and 3,232 in the filtered set. Fig.S10 shows the model accuracy fluctuates significantly over time. This likely results from several factors:

- **Lower question quality in raw data:** Around 70% of the raw questions are of relatively low quality. Although the dataset size is similar to ours (13,744 in ours vs. 12,438 in raw market data), the quality gap introduces more noise, making trends less stable.
- **Limited early coverage:** Even in the filtered dataset, limited early coverage and inconsistent data volume introduce high variance, reducing the reliability of trend analysis.
- **Confounding factors:** We argue that market questions introduce more confounding factors. [Fig.S11](https://imgur.com/a/jeH44UE) shows the distribution of data sources and question categories varies significantly across time (e.g. more sports-related questions in later periods). Human-written questions also may differ widely in style and difficulty, making them harder to control for consistency. In contrast, our dataset maintains relatively stable distributions over time (see response to Reviewer xifq - 2nd point).

While it's theoretically possible to balance the forecasting market dataset, it would reduce the usable size to only ~300 questions. Therefore, our dataset is better suited for revealing performance trends over time due to its scalability, more uniform style and category distribution, and fewer human-introduced confounders.

### 3. Question quality

We acknowledge that evaluating valid forecasting questions involves some subjectivity. For human evaluation, 4 graduate students (majoring in finance, accounting, statistics, and data science) from the authors' institution rate the questions using the same detailed instructions given to the model. We see a reasonably consistent inter-human agreement.
As LLM-generated data inevitably includes some noise, we provide a [table](https://imgur.com/a/zOvzi0g) summarizing limitations from 100 randomly sampled questions. Most issues fall into categories our filters target, though not perfectly. Still, 83% are valid. Moreover, if the questions were broadly ill-defined, we would not expect to see clear accuracy differences across models or a smooth, consistent degradation over time (e.g. we see a noisy trend in the raw forecasting market dataset).
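For concreteness, the per-month answer balancing used in the market-data comparison above (downsampling the majority label so "Yes"/"No" counts match within each month) can be sketched as follows; the function and field names are illustrative assumptions, not the released code.

```python
import random
from collections import defaultdict

def balance_by_month(questions, seed=0):
    """Downsample the majority answer within each month so that
    'Yes'/'No' counts match. Each question is a dict with at least
    'month' (e.g. '2023-05') and 'answer' ('Yes' or 'No')."""
    rng = random.Random(seed)
    by_month = defaultdict(lambda: {"Yes": [], "No": []})
    for q in questions:
        by_month[q["month"]][q["answer"]].append(q)
    balanced = []
    for groups in by_month.values():
        n = min(len(groups["Yes"]), len(groups["No"]))
        for label in ("Yes", "No"):
            balanced.extend(rng.sample(groups[label], n))
    return balanced
```

Applied per split, this kind of downsampling is the step that shrinks the raw and filtered market sets to the 12,438 and 3,232 balanced questions cited above.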
Summary: This paper proposes a benchmark dataset for assessing a model’s generalization ability in predicting future events and analyzes how model performance evolves over time. Specifically, it compares model performance under three conditions: no access to external information, access to retrieved recent news articles, and access to gold articles. The experimental results indicate that LLMs' prediction accuracy exhibits a significant, gradual decline over time.

Claims And Evidence:
1. The paper claims that they conducted a human evaluation to assess the quality of the constructed dataset. However, the evaluation consists of only 60 questions, which may introduce bias into the assessment.
2. In Table 3, the conclusion that model performance degrades over time due to increasing temporal distance from pretraining is not strongly supported. The study does not address whether the distribution of article types, question types, and difficulty levels remains consistent across different years.

Methods And Evaluation Criteria:
1. Human evaluation in section 3.3 is insufficient, making it difficult to ensure dataset quality.

Theoretical Claims: N/A

Experimental Designs Or Analyses:
1. How should LLMs of different sizes be selected for the experiments in Table 3? The study experiments with models ranging from 2.7B to 7.8B and 56B but seems to lack mid-sized LLMs, such as LLaMA 2-13B or Falcon-40B.

Supplementary Material: Yes; see A.4, “Details for Human Evaluation”.

Relation To Broader Scientific Literature: The paper provides a benchmark that allows continuous updates for evaluating models' generalization in future event prediction.

Essential References Not Discussed: No

Other Strengths And Weaknesses:

### Strengths

1. The proposed benchmark can be continuously updated, ensuring long-term usability.

### Weaknesses

1. Constructing a benchmark dataset could be interesting and valuable to the ML community, but it alone is not a sufficient major contribution to ML research, especially given that the quality of the benchmark data is verified with limited human annotation.

Other Comments Or Suggestions: N/A

Questions For Authors:
1. Would incorporating Chain-of-Thought reasoning improve model performance in future event prediction?
2. How does the distribution of question types and difficulty levels vary across different years? Could this affect the observed performance trends?

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate your thoughtful feedback, and hope to address the concerns below:

***

### Sample size of human evaluation

While the evaluation involved 60 questions, we respectfully note that **this sample size aligns with standard practices in similar dataset validation studies**. For example, *TLB-forecast* conducted human evaluations using 24 samples, *SituatedQA* utilized 100 samples for assessing human performance, and *FreshQA* similarly performed human evaluations on 100 samples. To further mitigate bias, we had 4 annotators per question and conducted inter-annotator agreement analyses, as shown in Appendix A.4.

***

### Distribution of question categories and question types

We provide the distributions of question categories and question types over time in [Fig.S1](https://imgur.com/a/GdqLtRq). Additionally, we conducted further analyses by balancing the categories monthly (selecting TF 5,520 out of 16,783; MC 4,680 out of 14,727) and further **balancing both categories and question types** within MC questions (2,400 out of 5,520). The performance **degradation pattern consistently persists**, showing that degradation primarily arises from increasing temporal distance rather than shifts in category or question distributions.

For difficulty-level analysis, while it would be possible to assign difficulty using language models, we believe such automated measures would introduce unnecessary noise. We would rather let the observed model performance itself reflect the inherent difficulty of the questions.

***

### Evaluation of mid-size LLMs

We selected models based on recent popular choices, covering both open-source and closed-source options. We appreciate the reviewer’s suggestion regarding mid-sized LLMs. Accordingly, we've included **Llama-2-13B and Qwen-2.5-14B**, and provided an updated plot ([Fig.S2](https://imgur.com/a/BnKiI6t)) and table.
As an earlier-generation model, Llama-2-13B underperforms compared to Llama-3-8B, though it still demonstrates higher performance before the knowledge cutoff than after. For Qwen-2.5-14B, we observe relatively strong performance on MC questions but near-random accuracy (~50%) on TF questions. Interestingly, we note that Qwen-2.5-14B exhibits a strong bias towards responding "No," selecting this answer 91.66% of the time on TF questions.

*Table: Yearly Accuracy and YoY Accuracy Change for Llama-2-13B and Qwen-2.5-14B*

| | | K-Cutoff | 2020 | 2021 | 2022 | 2023 | 2024 | Pre-Cutoff YoY Change | Post-Cutoff YoY Change | Avg YoY Change |
|--|--|--|--|--|--|--|--|--|--|--|
| TF | Llama-2-13B | Sept 2022 | 56.80 | 58.59 | 54.29 | 51.95 | 52.65 | -0.79% | -8.52% | -1.75% |
| TF | Qwen-2.5-14B | Unknown | 54.02 | 52.48 | 52.11 | 51.74 | 51.36 | -0.99% | - | -0.99% |
| MC | Llama-2-13B | Sept 2022 | 42.24 | 42.31 | 39.35 | 37.53 | 38.74 | -1.37% | -12.22% | -1.53% |
| MC | Qwen-2.5-14B | Unknown | 56.54 | 59.13 | 56.59 | 54.60 | 52.85 | -1.38% | - | -1.38% |

***

### Contribution to ML research

We believe much progress in ML research benefits from open benchmarks. To name a few: ImageNet, GLUE, SQuAD, …. Specifically, our benchmark provides contributions in two important research directions:

**(1) LLM forecasting:** Our large-scale, daily-updated dataset reflecting real-world events enables the training and evaluation of models to better support human decision-making.

**(2) Continual learning:** Our dataset highlights the challenge of maintaining up-to-date knowledge in LLMs. Our analysis demonstrates that even when provided with gold articles, performance degradation persists, emphasizing the necessity of continuous pre-training to mitigate outdated representations. With Daily Oracle, one could explore how continuous pre-training and efficient adaptation can address the performance degradation challenges presented in our work.

***

### Would CoT prompting help?
We randomly sample 25 MC questions per month (1,500 in total) to study how CoT impacts the performance (See [Fig.S3](https://imgur.com/a/0Z9goAA)). For LLaMA-3-70B, we prompt the model to explicitly generate a rationale before providing the final answer. Compared to directly answering, we observe a slight performance improvement initially, though this advantage diminishes after 2023. Additionally, for DeepSeek-R1-Distill-Llama-8B, we utilize the original prompt without modification; however, since this model is fine-tuned to naturally generate reasoning prior to the final answer, we treat its outputs as CoT results. **Its performance similarly matches that of the non-CoT approach.**

This observation aligns with recent findings [4], indicating that CoT reasoning primarily benefits tasks involving mathematical, logical, or algorithmic reasoning, with limited gains on other task types. While improved CoT prompt engineering might yield better performance, the results presented here provide baseline insights, leaving further optimization for future research.

[4] Sprague, Zayne, et al. "To CoT or not to CoT? Chain-of-thought helps mainly on math and symbolic reasoning." arXiv preprint arXiv:2409.12183 (2024).
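As a sketch of the two prompting modes compared above, the following shows a direct template, a CoT template, and a last-occurrence answer-extraction rule; the exact wording and parsing are illustrative assumptions, not the prompts used in the paper.

```python
DIRECT_TEMPLATE = (
    "Question: {question}\n"
    "Answer with a single word: Yes or No.\nAnswer:"
)

COT_TEMPLATE = (
    "Question: {question}\n"
    "Think step by step about the relevant evidence, then end your "
    "response with a line of the form 'Final answer: Yes' or "
    "'Final answer: No'."
)

def build_prompt(question, use_cot=False):
    """Select the direct or chain-of-thought prompt for one question."""
    template = COT_TEMPLATE if use_cot else DIRECT_TEMPLATE
    return template.format(question=question)

def extract_answer(response):
    """Take the last Yes/No token so that a CoT rationale mentioning
    both options earlier does not confuse parsing."""
    tokens = [t.strip(".,:").lower() for t in response.split()]
    for t in reversed(tokens):
        if t in ("yes", "no"):
            return t.capitalize()
    return None
```

Taking the last occurrence is a simple way to stay robust to rationales that weigh both options before committing to a final answer.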
Summary: The authors propose a method of constructing a continuous temporal knowledge & temporal prediction efficacy benchmark for LLMs. They show results of an implementation of the benchmark, and they describe the release of the benchmark for public use.

Claims And Evidence: Yes, to the best of my knowledge.

Methods And Evaluation Criteria: The proposed methods are quite suitable for the problem at hand. The authors describe a convincingly comprehensive approach to the problem of constructing an automatically updating continuous benchmark, including:
1. Sourcing from a reputable and established corpus of news on the web (Common Crawl)
2. Robust filtering procedures
3. Methods to identify truly "current" news rather than opinion pieces discussing past news
4. Methods to extract both multiple-choice and true/false QA questions from current news documents

Theoretical Claims: No theoretical claims were made

Experimental Designs Or Analyses: Yes, the authors' submission depends critically on experimental design, specifically in their experiment setup for generating insights from their benchmark. The authors expose interesting aspects of their benchmark by breaking down the task scenario into three settings:
1. Closed-book QA: the LLMs are challenged to answer the temporal QA questions without retrieved documents, i.e. predict the future from their own internal knowledge
2. Constrained open-book QA: the LLMs get access to RAG subject to a retrieval cutoff date
3. Gold open-book QA: the LLMs get access to the exact documents that contain the answer to the question

The authors also carefully track moving averages across the analyzed timespan subject to the various knowledge cutoffs. This allows the authors' study to expose sources of the models' (in)accuracies, e.g. degradation of entity representations over time (parametric knowledge) vs the ability to reason about current information in the context (RAG documents and cutoff).
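The three settings above differ only in how the context passed to the model is assembled; a minimal sketch follows, where the helper names, data layout, and the lexical-overlap score standing in for the BM25 retriever are my assumptions:

```python
from datetime import date

def relevance(question, article):
    """Stand-in lexical overlap score; the paper uses a BM25 retriever."""
    q_terms = set(question.lower().split())
    return sum(w in q_terms for w in article["text"].lower().split())

def build_context(question, corpus, setting, retrieval_cutoff=None, top_k=5):
    """Assemble the articles shown to the LLM for one question.

    Each article is a dict with 'text', 'date' (datetime.date), and
    'is_gold' (True for the article the question was generated from).
    """
    if setting == "closed_book":
        return []  # parametric knowledge only
    if setting == "open_book":
        # RAG restricted to articles published before the retrieval cutoff
        pool = [a for a in corpus if a["date"] < retrieval_cutoff]
        pool.sort(key=lambda a: relevance(question, a), reverse=True)
        return pool[:top_k]
    if setting == "gold":
        return [a for a in corpus if a["is_gold"]]
    raise ValueError(f"unknown setting: {setting}")
```

Varying only `retrieval_cutoff` in the open-book branch is what lets the benchmark separate parametric staleness from the model's ability to use fresh context.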
Supplementary Material: I referenced all supp material relevant to the understanding of the main document.

Relation To Broader Scientific Literature: The authors' contribution is related to the temporal QA and forecasting areas. Largely, the paper introduces an impactful benchmark to these spaces. It is the first continuously-updating, daily LLM forecasting benchmark, as shown in Table 1.

Essential References Not Discussed: I did not think of any essential references that were missing.

Other Strengths And Weaknesses: I think it's a great benchmark and experimental paper

Other Comments Or Suggestions: No other comments/suggestions

Questions For Authors: No other questions

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your recognition of our work!
Geometric Contact Flows: Contactomorphisms for Dynamics and Control
Accept (poster)
Summary: This paper introduces Geometric Contact Flows, a framework that models dynamical systems by incorporating Riemannian and contact geometry as the inductive bias. The learned latent space captures the dynamics by contactomorphically preserving the structure of the ambient space. An additional ensemble approach is proposed to model the system uncertainty. Lastly, the proposed method is evaluated on two handwriting datasets and a real-world rope-wrapping robot experiment.

Claims And Evidence: The paper proposes a framework for modeling dynamical systems. The experimental results support the idea that the proposed system is able to model trajectories with intersected paths. However, it is unclear whether it is overfitting to a specific trajectory. Secondly, the paper proposes an ensemble approach for uncertainty estimation. However, it is not experimentally verified. Lastly, the system is claimed to be able to perform obstacle avoidance by incorporating the energy term of the obstacle, which is verified qualitatively in the experiment.

Methods And Evaluation Criteria: The contactomorphic idea is interesting. However, I am still not convinced why contactomorphism is a better choice compared to symplectic geometry. Additionally, the choice in (4) seems a bit arbitrary to me.

Theoretical Claims: The paper designs the latent dynamics to be contactomorphic to the ambient dynamics. It is theoretically sound but is not verified experimentally.

Experimental Designs Or Analyses: The experimental results show the proposed GCF’s capability to learn dynamical trajectories with intersected paths. The proposed method also achieves lower reproduction errors. However, the experimental setup of the handwriting dataset is not clearly stated. The outputs are trajectories integrated from a vector field, but what are the inputs to such a system? Is it an initial value problem with initial position and time as inputs?
Similarly, the generalization experiment is not well described either. What is the task being performed? Is the task training on two sets of trajectories and evaluating on them? Is the network trained and evaluated on the same trajectory? If so, would it be infeasible to apply to real-world robotics tasks as it takes 4 hours to train the network? I understand that the paper is positioned as a dynamic modeling framework. In such cases, evaluating on some dynamic modeling benchmark could better demonstrate the impact of the proposed model.

Supplementary Material: I only skimmed through the supplementary material. The additional experiments demonstrate a similar trend to those in the main paper.

Relation To Broader Scientific Literature: The paper proposes a framework for dynamic modeling using contact geometry. It is demonstrated using several trajectory reconstruction tasks in robotics. However, the proposed method is not compared with the state-of-the-art imitation learning methods. As a result, it is unclear where the proposed method stands in the literature of imitation learning. Nevertheless, the proposed model is shown to perform better than the Euclideanizing Flows and Dissipative Hamiltonian Neural Networks.

Essential References Not Discussed: The experiments fall into the imitation learning domain. However, it is not compared against state-of-the-art imitation learning methods. Comparing with the state-of-the-art methods in the field can greatly strengthen the paper.

Other Strengths And Weaknesses: Please see the comments above.

Other Comments Or Suggestions:
1. The paper can benefit from a flowchart that explains the overall architecture of the system. Currently, the overall flow is not explicitly clear.
2. There are some typos in the paper. Ex. Ln 17 left and ln 29 right (GFC) should be (GCF)?

Questions For Authors: Please see the questions above.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal:

> The proposed system seems able to model trajectories with intersected paths. However, it is unclear whether it is overfitting to a specific trajectory

Our framework reconstructs intersecting paths in position space using the full state of the system to resolve directional ambiguities. Extensive experiments (Figs 8 and 11) confirm that GCF avoids overfitting, successfully reproducing intersecting trajectories beyond the training data support.

> the paper proposes an ensemble approach for uncertainty estimation. However, it is not experimentally verified

Note that all the experiments use our full framework with the ensemble of contactomorphisms. The experimental setup is detailed in Appendix C.1. Additionally, Appendix D.2 presents an ablation study on the handwriting dataset, comparing GCF with the ensemble approach against a single-contactomorphism variant. The ensemble significantly improves generalization performance (Figure 9, Table 9).

> I am still not convinced why contactomorphism is a better choice compared to symplectic geometry

Contact geometry naturally models both conservative and non-conservative systems, while symplectic geometry is limited to the former and requires modifications to handle dissipation, making its dynamics not purely symplectic. Our approach constructs latent dynamics whose physical properties are fully encoded by contact geometry, so preserving the contact structure in the transformation to the ambient space is sufficient to propagate these properties intact.

> The choice in (4) seems a bit arbitrary to me

The choice of latent Hamiltonian functions (Eq. 4) is a design decision driven by the properties the user aims to preserve when generalizing in the ambient space. On the data manifold, GCF can recover the demonstrated dynamics regardless of the specific latent Hamiltonian function, while outside this manifold, the latent dynamics structure acts as a physical bias to guide generalization.
> The paper designs the latent dynamics to be contactomorphic to the ambient dynamics. It is theoretically sound but is not verified experimentally

We can verify this by evaluating the contact transformation equations that map the ambient state $(q, p, s)$ to the latent state $(\hat{q}, \hat{p}, \hat{s})$, as established in [Bravetti et al.](https://arxiv.org/abs/1604.08266) (summing over the repeated index $j$):

$$ p_i \frac{\partial \hat{s}}{\partial s} - p_i \hat{p}_j \frac{\partial \hat{q}_j}{\partial s} = - \frac{\partial \hat{s}}{\partial q_i} + \hat{p}_j \frac{\partial \hat{q}_j}{\partial q_i}; $$

$$ \frac{\partial \hat{s}}{\partial p_i} - \hat{p}_j \frac{\partial \hat{q}_j}{\partial p_i} = 0. $$

These partial derivatives are elements of the contactomorphism Jacobian. By evaluating these conditions, we consistently observe an error lower than $1 \cdot 10^{-5}$, confirming that the transformation is contactomorphic.

> the experimental setup of the handwritten dataset is not clearly stated

In the handwriting experiment, the input to our framework is the current system state, while the output is the next state. This prediction is repeated at each time step to reconstruct the full dynamics. Our approach treats the dynamical system as autonomous, excluding time as an explicit input.

> the generalization experiment is not well described

In the generalization experiments, we use models trained to reconstruct a specific dynamical trajectory but initialize the predictions from states far outside the training data distribution. This evaluates model performance on states unseen during training. Specifically, as shown in Fig. 8, we initialize predictions from a grid of points in the position space, while setting the remaining state variables to zero.

> Is the network trained and evaluated on the same trajectory? If so, would it be infeasible to apply to real-world robotics tasks as it takes 4 hours to train the network?

Yes, the network is trained and evaluated on the same dynamics.
However, reproduction goes beyond imitation, allowing adjustments like obstacle avoidance, and ensuring convergence to the learned dynamics under different initial conditions or external disturbances. In robotics, many tasks are inherently repetitive, allowing a dynamical primitive learned in four hours to be reliably reused across various scenarios, with the robustness of the framework ensuring reliable adaptation and performance.

> evaluating on some dynamic modeling benchmark could better demonstrate the impact of the proposed model

We incorporated new evaluations on material deformation and quantum dynamics simulation, detailed in our reply to Reviewer F9hb.

> The proposed method is not compared with the state-of-the-art imitation learning methods

We kindly refer the reviewer to our response to Reviewer sRBK, where we emphasize the rationale behind our baseline choice. Additionally, in response to the reviewer's request, we included two new baselines: NCDS and HNN, the latter in the new experiments.

> Paper can benefit from a flowchart

https://drive.google.com/file/d/1xCe6Rh7FU0ZEwVdKfEw16cNSa1MKkS-R/view?usp=sharing
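The numerical contactomorphism check described in this rebuttal can be illustrated on a toy example. The sketch below verifies the two contact-transformation conditions by central finite differences for the 1-DoF strict contactomorphism $(q, p, s) \mapsto (aq, p/a, s)$, which preserves $ds - p\,dq$ exactly; this map and the helper names are my choices, not the authors' code.

```python
def contact_map(q, p, s, a=2.0):
    """Strict contactomorphism: d(s_hat) - p_hat * d(q_hat) = ds - p * dq."""
    return a * q, p / a, s

def partial(f, point, i, h=1e-6):
    """Central finite difference of scalar function f w.r.t. component i."""
    hi, lo = list(point), list(point)
    hi[i] += h
    lo[i] -= h
    return (f(*hi) - f(*lo)) / (2 * h)

def contact_residuals(q, p, s):
    """Residuals of the two contact-transformation conditions;
    both should vanish for a contactomorphism."""
    _, p_hat, _ = contact_map(q, p, s)
    # partial derivatives of q_hat and s_hat w.r.t. (q, p, s)
    dq_hat = [partial(lambda *x: contact_map(*x)[0], (q, p, s), i) for i in range(3)]
    ds_hat = [partial(lambda *x: contact_map(*x)[2], (q, p, s), i) for i in range(3)]
    # condition 1: p ds^/ds - p p^ dq^/ds = -ds^/dq + p^ dq^/dq
    r1 = (p * ds_hat[2] - p * p_hat * dq_hat[2]) - (-ds_hat[0] + p_hat * dq_hat[0])
    # condition 2: ds^/dp - p^ dq^/dp = 0
    r2 = ds_hat[1] - p_hat * dq_hat[1]
    return abs(r1), abs(r2)
```

For a learned transformation the same residuals would be evaluated on sampled states, which is the kind of check behind the reported sub-$10^{-5}$ errors.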
Summary: This paper introduces a geometric contact flows model based on Riemannian and contact geometry, which introduces a robust and interpretable inductive bias over the previous MLP-based methods. Furthermore, the authors propose a novel framework to learn latent dynamics of contactomorphisms and a generalization mechanism based on Riemannian geodesics, which also improves the model robustness. Experiments show superior performance over baseline methods on multiple tasks such as reconstructing handwriting dynamics and robotic interactions.

Claims And Evidence: Yes

Methods And Evaluation Criteria: Yes, methods are evaluated on LASA and DigiLeT datasets for handwriting trajectory reconstruction. Experiment details are included in the supplementary materials.

Theoretical Claims: The paper is not a theoretical paper.

Experimental Designs Or Analyses: Yes

Supplementary Material: Yes

Relation To Broader Scientific Literature: The system can generally be applied to contact-related tasks such as robot-object interaction and trajectory synthesis. No further broader impacts identified.

Essential References Not Discussed: No

Other Strengths And Weaknesses:
1. The authors claim that introducing Riemannian geometry as an inductive bias shows improved robustness over a simple MLP with no additional priors. However, I wonder how the baseline models perform if they are equipped with stronger priors, particularly those captured from differentiable simulators; see some works below [R1 - R3]. While I understand the contact dynamics in these works may not be directly relatable, it would be interesting to see results of a similar baseline or a discussion of these methods.
2. In the quantitative experiments the authors compared with EF and DHNN without providing a detailed introduction of these two methods. Why are these two baselines chosen, and are there other, more recent comparable works?
3. While Sections 3-4 introduce most concepts from contact geometry, I find limited detail on the learning framework, in particular how the network is designed and how the geometric priors are fused. I also do not find enough experiments ablating the development of the modules inside the neural networks, making it hard to evaluate its contribution on the ML side.

[R1] SimPoE: Simulated Character Control for 3D Human Pose Estimation
[R2] Residual Force Control for Agile Human Behavior Imitation and Extended Motion Synthesis
[R3] DeepSimHO: Stable Pose Estimation for Hand-Object Interaction via Physics Simulation

Other Comments Or Suggestions: NA

Questions For Authors: Please see the weaknesses above.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal:

> The system can generally be applied to contact-related tasks such as robot-object interaction and trajectory synthesis. No further broader impacts identified.

We clarify that the contact Hamiltonian biases in our framework extend beyond interaction tasks in control-based approaches. They represent fundamental physical principles applicable to modeling dissipative mechanical systems, thermodynamic processes, and quantum dynamics. To highlight this broader impact, we introduce two additional physical reconstruction experiments (spring mesh and quantum system), as detailed in our response to Reviewer F9hb.

> I wonder how the baseline models perform if they are equipped with stronger priors, particularly those captured from differentiable simulators.

The comparison with a simple MLP in our methodology serves to motivate our approach rather than act as the main baseline. As detailed below, the baselines used in our results are equipped with strong priors. The referenced differentiable simulators learn policies for physically meaningful behaviors by leveraging physical simulation during training to evaluate policy performance. Since they introduce physical biases during training, there is no guarantee that the learned policies will remain physically consistent when generalized to new scenarios. In contrast, our approach (and the baselines we consider) embeds biases directly within the network structure itself.

> Why are these two baselines chosen and are there other more recent works comparable?

The selected baselines are well-known for incorporating biases in learning dynamical systems, introducing features that our approach successfully recovers and generalizes:

- Encoding desirable properties (e.g., periodicity or target convergence) in the dynamics through diffeomorphisms (EF).
- Embedding physical relationships between the components of the system's state through Hamiltonian dynamics (DHNN).
Our approach extends EF's idea of transforming latent dynamics using diffeomorphisms by considering second-order dynamics and by introducing a (more general) contact Hamiltonian structure in the diffeomorphisms to preserve conjugate pair relationships. DHNN achieves second-order modeling by embedding pure Hamiltonian dynamics in the network structure, but it lacks the ability to enforce desirable properties in the learned dynamics. The comparison with these baselines in our experiments highlights the importance of both biases.

A more recent work (2024) aligned with our philosophy is [NCDS](https://openreview.net/forum?id=Q5N3P0SMRr), which extends the properties of a learned latent contractive system to the ambient space using structure-preserving transformations. To strengthen our baseline comparison, we included this approach in the [experiments](https://drive.google.com/file/d/1RcE8tb_gQxbQ3e0ERz-BUPIzk9Lk9uq-/view?usp=sharing).

> how is the network designed and the geometry priors fused?

The network $\varphi_r$ (Eq. 6) is implemented as a sequence of chained transformations $\varphi_{r_k}$ (Eq. 7). Each of these transformations consists of three steps (Eq. 24), which update the initial state by integrating the vector field associated with the contact Hamiltonian $H_{r_k}$ (Eq. 8). This Hamiltonian is composed of three learning functions $M(p), V(q), F(q)$, parametrized by RFFNs. Therefore, the network $\varphi_r$, which integrates the dynamics of a sequence of Hamiltonians, characterizes a contact flow.

> I also do not find enough experiments ablating the development of the modules inside the neural networks

We address the reviewer's suggestion by introducing three additional ablations:

- Is the contact structure truly necessary? We assess its importance by examining the issues that arise when replacing contactomorphisms with naive diffeomorphisms, implemented similarly to EF.
The disruption of physical coherence manifests in poor reconstruction and generalization performance: [contact-structure-ablation](https://drive.google.com/file/d/1OV8Wonn9_ITfwHKmRxJMrbULw0geT2Cq/view?usp=sharing) - How does the Hamiltonian function (Eq. 8) or its parametrization affect GCF performance? We test variations of the Hamiltonian function and compare different architectures for parameterizing the learning functions. The Hamiltonian that incorporates all functions achieves the best reconstruction results, while RFFN proves to be the best parametrization choice: [learning-functions-ablation](https://drive.google.com/file/d/1RiYCiGLknc4DHntgnqejI5Vx3ZJr9zRz/view?usp=sharing) - Why does the loss function have this specific form? We examine the effect of the second loss term (Eq. 9) by varying its scaling factor and analyzing the resulting performance differences. The study finds an ideal range where improved coherence of the latent space enhances reconstruction in the ambient space: [loss-term-ablation](https://drive.google.com/file/d/1hDVZR168YKjaUdgO12ALfZ-NbIfkconA/view?usp=sharing)
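For readers unfamiliar with contact Hamiltonian systems, the structure being ablated above can be illustrated with a minimal, self-contained sketch. This is not the paper's learned RFFN parametrization: it uses the standard contact Hamiltonian equations on the extended phase space $(q, p, s)$ with a hand-picked $H = p^2/2 + V(q) + \gamma s$, which produces exactly the kind of dissipative (damped-oscillator) dynamics such biases are meant to capture.

```python
# Standard contact Hamiltonian equations on (q, p, s):
#   dq/dt =  dH/dp
#   dp/dt = -dH/dq - p * dH/ds
#   ds/dt =  p * dH/dp - H
# Illustrative choice (not the paper's learned H): H = p^2/2 + V(q) + gamma*s,
# with V(q) = q^2/2, which yields dq/dt = p, dp/dt = -q - gamma*p.

def contact_step(q, p, s, dt, gamma=0.5):
    dHdp = p
    dHdq = q            # V(q) = q^2/2  =>  V'(q) = q
    dHds = gamma
    H = 0.5 * p * p + 0.5 * q * q + gamma * s
    return (q + dt * dHdp,
            p + dt * (-dHdq - p * dHds),
            s + dt * (p * dHdp - H))

q, p, s = 1.0, 0.0, 0.0
E0 = 0.5 * p * p + 0.5 * q * q
for _ in range(10000):          # integrate to t = 10 with dt = 1e-3
    q, p, s = contact_step(q, p, s, 1e-3)
E = 0.5 * p * p + 0.5 * q * q
print(E0, E)  # mechanical energy decays under the dissipative contact flow
```

With $\gamma = 0$ the $s$-coupling vanishes and the $(q, p)$ equations reduce to ordinary Hamiltonian dynamics, which is why the contact formulation is a strict generalization of the pure Hamiltonian bias in DHNN.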
Summary: The paper proposes to learn in the latent contact Hamiltonian space to inject inductive biases and encode desirable physical properties. Additionally, the paper develops an ensemble method that aims to identify unseen states and drive the dynamics to avoid these states. Experiments in character writing and robot-object manipulation verified that the proposed method outperforms two previous works. ## update after rebuttal I appreciate the authors' additional experiments in the rebuttal. The new simulation and robot experiments demonstrate that the proposed method can learn dynamics with more variations (the spring-mesh experiments) and address real-world problems (the robot-dishwasher experiments). I raised the score accordingly. Claims And Evidence: The claims in the paper are supported by experiments. Methods And Evaluation Criteria: The evaluation tasks (Handwriting Datasets, Robotic Task) seem like simple trajectory generalization tasks. These tasks are state-based and have very limited variations (4 characters and 1 robot trajectory). Testing on more complex datasets could be more convincing (e.g., image generation, more complex robotic manipulation tasks). Theoretical Claims: No proofs in the paper. Regarding the idea of ensembling contactomorphisms: it does not only reflect data support; it could also indicate randomness in the data. In some scenarios randomness may be preferred, e.g., asking a household robot to mix food ingredients for cooking. Experimental Designs Or Analyses: The two experiments in the paper are too simple. These experiments are state-based and have no variations. Consider image-based or language-based tasks, or tasks involving agent-environment interaction (Atari games or the DeepMind Control Suite, https://github.com/openai/gym). Supplementary Material: No supplementary material.
Relation To Broader Scientific Literature: Learning dynamics is important for learning world models and could benefit the learning community in general. However, the proposed method demonstrated limited dynamics-learning ability on two simple tasks. Essential References Not Discussed: No Other Strengths And Weaknesses: Strength: Theoretically valid: adding inductive bias and learning physics-preserving properties are meaningful for dynamics learning. Weakness: The experiments in the paper are too simple. Other Comments Or Suggestions: The acronym of the method should be GCF, but in multiple places it is typed as GFC. Questions For Authors: Does the proposed method scale to more complex, real-world problems, for example, learning rigid-body interactions, or learning the dynamics of a one- or two-link pendulum? Does the proposed method scale to high-dimensional states, for example, image (video) or point-cloud dynamics? Could the authors compare the proposed method with more recent trajectory generation methods, for example, diffusion models [1] [2]? [1] Cheng Chi, et al, Diffusion Policy: Visuomotor Policy Learning via Action Diffusion. [2] Michael Janner, et al, Planning with Diffusion for Flexible Behavior Synthesis. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > The evaluation tasks seem like simple trajectory generalization tasks. These tasks are state based and have very limited variations (4 characters and 1 robot trajectory). > Does the proposed method scale to more complex, real-world problems, for example ... ? Yes, as emphasized in the introduction, the proposed method aims at modeling complex non-conservative dynamical systems. Its applications extend beyond trajectory synthesis for control-related tasks (robotics) to a broader modeling of intricate physical phenomena. The reviewer’s suggestions fall within the natural scope of our framework, and we thus expand our evaluation with other dynamics-modeling benchmarks to further demonstrate its capabilities. To clarify, our handwriting dataset experiments were conducted on eight characters, not four as pointed out by the reviewer. The results reported in Tables 1 and 2 in the paper represent a subset of our findings, while the complete set is detailed in Tables 6 and 7. Added experiments: - We consider a 60-dimensional [dataset](https://github.com/karlotness/nn-benchmark) describing the dynamics of a 2D square grid of nodes connected by springs. Predicting the dynamics of mesh nodes closely parallels finite element modeling of material deformation. The coupling of multiple springs leads to complex large-scale deformations and oscillations. **GCF reduces reconstruction error by 57% across 20 different dynamic simulations with varying initial conditions**. Experiment parameters and results are available here: [spring-mesh-experiment](https://drive.google.com/file/d/1w4zLbyYc8cx0TV1hpJjqlTPG1YQnxcLW/view?usp=sharing). - We also use GCF to reconstruct the expected dynamics of a single-mode bosonic system, simulated with a stochastic Schrödinger equation. We generated 20 trajectories by integrating the equation over 8 seconds. **GCF reduces reconstruction error by 60% in modeling the system’s evolution**.
Experiment parameters and results are available here: [quantum-experiment](https://drive.google.com/file/d/16ly3ixFi01tDxxZW-jkPfu2_ZBXIfIvM/view?usp=sharing). > Does the proposed method scale to high-dimensional states, for example, images (videos) or point-clouds' dynamics? As demonstrated in the high-dimensional tests in Appendix D.5 and our new spring-mesh experiment, our method scales effectively to high-dimensional states. Besides, GCF has the potential to be integrated with state estimation methods that extract system variables (e.g., position and velocity) from high-dimensional representations such as images, videos, and point clouds. However, as this paper introduces an entirely new methodology, we focused on establishing its core capabilities and did not explore such integrations within this work. > Could the author compare the proposed method with more recent trajectory generation methods, for example, diffusion models? As clarified above, our approach models complex non-conservative dynamical systems with physics-informed biases, capturing second-order dynamics, including self-intersections. In contrast, the referenced diffusion model-based methods are path planners, generating motion policies based on first-order Langevin dynamics, which by design cannot capture second-order systems. While self-intersecting trajectories could emerge in diffusion-based approaches due to their stochastic nature, these intersections result from the multimodal distribution of sampled actions, not from the correct modeling of physical laws. Moreover, unlike GCFs, the provided references build on diffusion models without physics-informed bias, preventing them from guaranteeing specific dynamic behaviors. > Testing on more complex datasets could be more convincing (e.g., image generation, more complex robotic manipulation tasks) Following the reviewer's suggestion, we tested GCF in a dishwasher-loading task. 
The robot handles disturbances while pulling out the basket and closing it. If obstructed, it detects excessive force and stops. Snapshots are available here: [dishwasher-experiment](https://drive.google.com/file/d/1o-jpCiV6rE21_5mSLWgK5gWgEOZd3P26/view?usp=sharing), [dishwasher-variants](https://drive.google.com/file/d/1t_bDq2wLEL8TMrh3no6jPOzGAGgax1Hh/view?usp=sharing). > No proofs in the paper. In terms of the idea that ensembling contactomorphisms, it does not only reflect the data support, it could also indicate the randomness of the data. In our framework, comparing dynamics predictions from different contactomorphisms helps determine whether GCF can infer principled dynamics from sufficient training data or should prioritize convergence to the data manifold when information is lacking. Regarding the reviewer's mention of "randomness of data," it is unclear whether this refers to noise, multimodality, or another aspect. The cooking example does not clarify further. We kindly ask the reviewer to confirm that our interpretation of their feedback is correct, or to elaborate otherwise.
TimeDART: A Diffusion Autoregressive Transformer for Self-Supervised Time Series Representation
Accept (poster)
Summary: This paper introduces TimeDART, a self-supervised time series representation learning framework that integrates autoregressive modeling with a denoising diffusion process. The method consists of a causal Transformer encoder with a patch-based embedding strategy to capture global trends, while a denoising diffusion process refines fine-grained local patterns. The authors claim that this combination improves transferability and representation quality for downstream tasks. Empirical evaluations on nine publicly available datasets demonstrate the effectiveness of TimeDART, outperforming various state-of-the-art baselines in time series forecasting and classification. Claims And Evidence: In this work, the authors hold that the combination of autoregressive and diffusion optimization schemes can be used to obtain transferable time series representations that benefit the target downstream tasks. The specific experimental results are provided in the experimental section. Methods And Evaluation Criteria: Yes, the chosen benchmarks (ETT, Electricity, Traffic, Weather, PEMS, EEG, Epilepsy, HAR) are widely used in time series forecasting and classification. The evaluation settings are appropriate. Theoretical Claims: The paper primarily relies on empirical validation rather than formal theoretical proofs. The diffusion loss formulation aligns with standard ELBO principles and appears correct. While the integration of autoregressive modeling and diffusion is well-motivated, a more detailed theoretical justification could further strengthen the claims. Experimental Designs Or Analyses: The experimental design is mostly sound. The main results of the proposed method and the compared baselines, ablations of the model, and related analyses are discussed in the experimental section. Supplementary Material: Yes. The supplementary material was reviewed, including dataset descriptions, implementation details, and additional experimental results.
Relation To Broader Scientific Literature: The paper advances self-supervised time series representation learning by integrating autoregressive modeling with denoising diffusion, addressing limitations in masked modeling (e.g., TimeMAE) and contrastive learning (e.g., TS2Vec). It extends diffusion models beyond probabilistic forecasting (e.g., TimeGrad, CSDI) to representation learning, improving both global trend and local pattern capture. A comparison with large-scale time series foundation models (e.g., Chronos, TimesFM) would further clarify its positioning in the field. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. Novel integration of diffusion models and autoregressive modeling for time series representation learning. 2. Extensive empirical validation across nine datasets for forecasting and classification. 3. Strong performance against self-supervised and supervised baselines, demonstrating robustness. 4. Cross-domain evaluation, showing adaptability across different time series applications. 5. Comprehensive ablation studies, highlighting the contributions of different components. Weaknesses: 1. Baseline selection could be improved: how does TimeDART compare to LLM-based time series models or non-diffusion self-supervised methods? 2. I suggest the authors further polish the writing to improve the impact of this paper. Other Comments Or Suggestions: No. Questions For Authors: Q1: Can the pre-trained time series model be fine-tuned in a new manner? For example, following the same training procedure as the pre-training stage. Q2: The authors combine two prevalent generative optimization paradigms for self-supervised time series representation. Can you describe the key differences between these two training schemes in terms of time series representation? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: > All full results are at https://anonymous.4open.science/r/TimeDART-results-ECBD/ > > [A1 for Q1]--->para4,para5, [A3 for W1] —> para3 **A1 for Q1** Please refer to `A1 for Q1` in our rebuttal to the second Reviewer TNxq. **A2 for Q2** Of course, we are willing to elaborate on our understanding of these two generative paradigms in the context of time series representation learning, and we welcome your corrections if there are any inaccuracies: From the perspective of the characteristics of the data itself, we believe that time series data possesses traits of both language data and image data. First, language data generally has high information density and exhibits strong contextual dependencies. Autoregressive modeling methods share a natural alignment with human language [1], and similarly, time series data emphasizes dependencies from past to current states. Secondly, image data typically has lower information density [2] and places greater emphasis on jointly modeling global spatial relationships and locality. Time series data also shares similar characteristics. Recently, there have been excellent works applying diffusion to autoregressive language models, such as LlaDA [3], as well as works applying autoregressive approaches to image generation, such as VAR [4]. These are inspiring contributions. As contemporaneous work, we carefully examined the distinct roles that diffusion modeling and autoregressive modeling play in time series representation learning. On one hand, autoregressive modeling can capture relationships from left to right. However, we recognize that time series data also shares characteristics with image data, including locality features. Due to its relatively low information density, it is challenging to effectively fit time series data using autoregressive methods without employing discretization techniques like VQVAE. 
On the other hand, diffusion models can effectively model locality features in time series, such as abrupt weather changes within a single day or week. However, modeling time series requires a significant focus on trend modeling [5], [6]; otherwise, the model risks overfitting to drift. This is the source of our motivation: by combining these two approaches, we can capture both long-term dynamic evolution and subtle local patterns in a unified manner. [1] GPT-1: Improving Language Understanding by Generative Pre-Training [2] MAE: Masked Autoencoders Are Scalable Vision Learners [3] LlaDA: Large Language Diffusion Models [4] VAR: Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction [5] Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting [6] FDF: Flexible Decoupled Framework for Time Series Forecasting with Conditional Denoising and Polynomial Modeling **A3 for W1** For non-diffusion self-supervised methods, comparisons are provided in Tables 2, 3, and 4 of the paper, including methods based on masked modeling and contrastive learning. Specific baseline details can be reviewed in Section 4.1 (Baselines) of the article. The experiments demonstrate that, compared to the baseline methods we selected, TimeDART achieves superior downstream fine-tuning performance after pre-training. For LLM-based methods, please refer to `A2 for Q2` in our rebuttal to the first Reviewer baxN, where we compare TimeDART with LLM-based methods like UniTime. **A4 for W2** Thank you for your sincere suggestions. We will continue to refine our paper based on the feedback from all reviewers, especially the analysis regarding the differences between autoregressive and diffusion modeling that you mentioned in Question 2. Thank you for your encouraging score. We sincerely look forward to your reply.
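The diffusion half of this combination rests on a standard forward noising process. Below is a minimal pure-Python sketch (illustrative constants, not the authors' implementation) of a cosine noise schedule in the style of improved DDPM, together with the closed-form corruption $x^s = \sqrt{\bar\alpha_s}\,x^0 + \sqrt{1-\bar\alpha_s}\,\epsilon$ applied to a single patch:

```python
import math
import random

T = 1000  # number of diffusion steps (illustrative)

def alpha_bar(s, T=T, eps=0.008):
    # Cosine schedule: alpha_bar(0) = 1 and it decays smoothly toward 0 at s = T.
    f = lambda u: math.cos((u / T + eps) / (1 + eps) * math.pi / 2) ** 2
    return f(s) / f(0)

def noise_patch(x0, s, rng=random):
    # Closed-form forward process q(x^s | x^0):
    #   x^s = sqrt(alpha_bar_s) * x^0 + sqrt(1 - alpha_bar_s) * eps,  eps ~ N(0, I)
    a = alpha_bar(s)
    return [math.sqrt(a) * v + math.sqrt(1 - a) * rng.gauss(0, 1) for v in x0]

bars = [alpha_bar(s) for s in range(T + 1)]
```

Because $\bar\alpha_s$ decreases monotonically from 1 to (nearly) 0, small $s$ perturbs a patch only slightly (local detail), while large $s$ buries it in noise, which is what lets the denoiser focus on local patterns while the causal encoder carries the trend.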
Summary: This paper presents a self-supervised time series representation learning method. It combines autoregressive modeling with the denoising diffusion process. Key ideas involve normalizing and patch-embedding data, using a causal Transformer encoder for long-term evolution and a patch-level diffusion/denoising mechanism for local patterns. Results show TimeDART outperforms baselines in forecasting and classification tasks, both in-domain and cross-domain. Claims And Evidence: Most of the claims are supported by clear and convincing evidence. Methods And Evaluation Criteria: The methods and evaluation criteria are sensible. Theoretical Claims: Yes, the equations follow basic mathematical operations and seem logically sound. Experimental Designs Or Analyses: The forecasting experiments are well-configured and well-analysed. Supplementary Material: Yes. Relation To Broader Scientific Literature: The model addresses the challenges of traditional self-supervised methods, offering an approach with improved performance in time series forecasting. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: (1) Demonstrating that autoregressive methods can be applied to self-supervised tasks and that diffusion models can explicitly introduce noise and model local patterns, which offers new insights for forecasting. (2) The paper is well-designed and the presentation is clear. (3) The technique of combining the autoregressive and denoising diffusion processes seems sound. Weaknesses: (1) For self-supervised learning, one of the significant advantages lies in its performance after few-shot fine-tuning on downstream tasks. I'm curious how TimeDART would perform when only 5% or 10% of the samples are used for downstream fine-tuning. (2) The transition from the description of the reverse process in Equation (7) to the elaboration of the optimization objective in Equation (8) seems rather large.
The authors should provide a more detailed derivation of the optimization objective. (3) Add a complete table to illustrate the specific model and training parameters for each dataset. Other Comments Or Suggestions: n/a Questions For Authors: see weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: > All full results are at https://anonymous.4open.science/r/TimeDART-results-ECBD/ > > [Table1]--->para7, [Table2]--->para8 **A1 for Q1** We conduct detailed few-shot experiments on the performance of the model with 5% or 10% fine-tuning data, including forecasting and classification tasks. The results can be found in [Table1] below.

[Table1]
|Portion|5%|10%|100%|Random Init|
|-|-|-|-|-|
|Metrics|MSE/MAE|MSE/MAE|MSE/MAE|MSE/MAE|
|ETTh2|0.356/0.392|0.354/0.391|0.346/0.387|0.358/0.396|
|PEMS04|0.142/0.254|0.140/0.250|0.133/0.245|0.145/0.255|
||Acc/F1|Acc/F1|Acc/F1|Acc/F1|
|HAR|0.8843/0.8901|0.9014/0.9053|0.9247/0.9286|0.8738/0.8723|

**A2 for Q2** The following is our detailed derivation from formula [7] to formula [8], which we will integrate into the paper to highlight its comprehensiveness and rigor. The ideal theoretical loss takes the form of a cross entropy:
$$
L_{ideal}=H\big(q(x^0_{1:j}),p_\theta(x^0_{1:j})\big)=\mathbb{E}_{q(x^0_{1:j})}\left[-\log p_\theta(x^0_{1:j})\right]
$$
Applying the Evidence Lower Bound (ELBO) and dropping the patch index subscript:
$$
\begin{aligned}
L_{ideal} &\leq \mathbb{E}_{q(x^0)}\left(\mathbb{E}_{q(x^{1:T}\mid x^0)}\left[\log \frac{q(x^{1:T}\mid x^0)}{p_\theta(x^{0:T})}\right]\right) \\
&= \mathbb{E}_{q(x^{0:T})}\left[\log \frac{q(x^{1:T}\mid x^0)}{p_\theta(x^{0:T})}\right] \\
&:= L_{diff}
\end{aligned}
$$
Expanding and applying Bayes' rule:
$$
\begin{aligned}
L_{diff} &= \mathbb{E}_{q(x^{0:T})}\left[\log \frac{q(x^{1:T}\mid x^0)}{p_\theta(x^{0:T})}\right] \\
&= \mathbb{E}_{q(x^{0:T})}\left[\log \frac{\prod_{s=1}^{T} q(x^s\mid x^{s-1})}{p(x^T)\prod_{s=1}^{T} p_\theta(x^{s-1}\mid x^s)}\right] \\
&= \mathbb{E}_{q(x^{0:T})}\left[\log \frac{q(x^T\mid x^0)}{p(x^T)} + \sum_{s=2}^{T}\log \frac{q(x^{s-1}\mid x^s, x^0)}{p_\theta(x^{s-1}\mid x^s)} - \log p_\theta(x^0\mid x^1)\right]
\end{aligned}
$$
The first term is a constant and the third term is the reconstruction loss, so only the second term is discussed:
$$
\begin{aligned}
\mathbb{E}_{q(x^{0:T})}\sum_{s=2}^{T}\log \frac{q(x^{s-1}\mid x^s, x^0)}{p_\theta(x^{s-1}\mid x^s)} &= \int \mathrm{d}x^{0:T}\; q(x^{0:T})\cdot\sum_{s=2}^{T}\log \frac{q(x^{s-1}\mid x^s, x^0)}{p_\theta(x^{s-1}\mid x^s)} \\
&= \sum_{s=2}^{T}\mathbb{E}_{q(x^0,x^s)}\left[D_{KL}\big(q(x^{s-1}\mid x^s, x^0)\,\|\,p_\theta(x^{s-1}\mid x^s)\big)\right]
\end{aligned}
$$
Applying Bayes' theorem again, the forward posterior is Gaussian:
$$
q(x^{s-1}\mid x^s, x^0) = \mathcal{N}\big(x^{s-1};\tilde{\mu}_s(x^s,x^0),\tilde{\beta}_s I\big)
$$
It can be assumed that the predicted distribution can likewise be expressed as:
$$
p_\theta(x^{s-1}\mid x^s) = \mathcal{N}\big(x^{s-1};\mu_\theta(x^s,s),\sigma_s^2 I\big)
$$
From the KL divergence between two Gaussian distributions with the same variance, we obtain:
$$
L_{diff}=\mathbb{E}_{q(x^0,x^s)}\left[\frac{1}{2\sigma_s^2}\big\|\tilde{\mu}_s(x^s, x^0)-\mu_\theta(x^s,s)\big\|^2\right]
$$
Since $\tilde{\mu}_s$ is linear in $x^0$, we turn to predicting in the original space, so:
$$
L_{diff}\sim \mathbb{E}_{\epsilon,\, q(x^0)}\left[\|x^0 - x^{out}\|^2\right]
$$
Restoring the sum over patch indices and the notation of the original paper, we get:
$$
L_{ours} = \sum_{j=1}^{N}\mathbb{E}_{\epsilon,\, q(x_j^0)}\left[\big\|x_j^0 - g\big(\hat{z}_j^{in} - f(z_{1:j-1}^{in})\big)\big\|^2\right]
$$

**A3 for Q3** Below is a table of the key parameters for the complete model, the pre-training, and the fine-tuning process, where `p1/p2/…` means we searched over these parameters (such as d_model, learning_rate) or dynamically adjusted them based on the size of the dataset (such as batch_size):

[Table2]
|Tasks|Encoder e_layers|Encoder d_model|Decoder d_layers|Decoder d_model|Pre-train learning_rate|Pre-train batch_size|Pre-train epochs|Fine-tune learning_rate|Fine-tune lr_scheduler|Fine-tune batch_size|Fine-tune epochs|
|-|-|-|-|-|-|-|-|-|-|-|-|
|Forecasting|2|8/32/128/512|1|encoder_d_model|0.001,0.0005,0.0001|8,16|30,50|0.001,0.0005,0.0001|cosine/exponential decay|8,16,32,64|10|
|Classification|2|64/128/256|1|encoder_d_model|0.001,0.0005,0.0001|16,64,128|30,50|0.001,0.0005,0.0001|cosine/exponential decay|16,64,128|10|

Thank you for your encouraging score. We sincerely look forward to your reply.
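The step in the derivation that invokes the KL divergence of two Gaussians with the same variance can be sanity-checked numerically: in 1D, $D_{KL}(\mathcal{N}(\mu_1,\sigma^2)\,\|\,\mathcal{N}(\mu_2,\sigma^2)) = (\mu_1-\mu_2)^2/(2\sigma^2)$. A small pure-Python check with illustrative values (not tied to the paper's parameters):

```python
import math

def kl_same_var_numeric(mu1, mu2, sigma, lo=-10.0, hi=10.0, n=100000):
    # KL(N(mu1, s^2) || N(mu2, s^2)) = integral of p * log(p/q), trapezoid rule.
    def pdf(x, mu):
        return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
    def log_ratio(x):
        # log(p/q) computed analytically to avoid 0/0 in the far tails.
        return ((x - mu2) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * pdf(x, mu1) * log_ratio(x) * h
    return total

mu1, mu2, sigma = 1.3, -0.4, 0.7
closed_form = (mu1 - mu2) ** 2 / (2 * sigma ** 2)
numeric = kl_same_var_numeric(mu1, mu2, sigma)
print(closed_form, numeric)  # the two values agree
```

This is exactly the identity that collapses the per-step KL terms into the squared distance between $\tilde{\mu}_s$ and $\mu_\theta$ above.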
Summary: The paper introduces TimeDART, a novel self-supervised learning framework for time series analysis that integrates autoregressive modeling with diffusion-based denoising. The framework aims to address the limitations of existing methods, such as masked autoencoders, contrastive learning, and autoregressive approaches, particularly their susceptibility to noise. TimeDART employs a causal Transformer encoder and a cross-attention-based denoising mechanism to capture both global dynamics and local patterns in time series data. The paper demonstrates the effectiveness of TimeDART through extensive experiments, showing improvements in both time series prediction and classification tasks. ## update after rebuttal The authors have actively responded to reviewers' comments. Though some of my concerns about the utilization of the diffusion model still remain, they have, to some extent, empirically illustrated that it works from the perspective of self-supervised learning. Claims And Evidence: The claims in the paper are clear and well-supported by evidence. The authors effectively argue that combining autoregressive modeling with diffusion denoising helps capture both global and local patterns in time series data. The use of diffusion denoising is justified to mitigate the overfitting problem of autoregressive models to noise and anomalies, and the cross-attention decoder is introduced to address the local dependence issue of diffusion models. Methods And Evaluation Criteria: The methods and evaluation criteria are appropriate for the problem. Theoretical Claims: The theoretical claims are supported by a statistical invariant property analysis and a rough derivation of the optimization objective provided in the appendix. There are no significant issues with the theoretical foundations of the paper. Experimental Designs Or Analyses: The experimental designs and analyses are sound.
However, there is a concern about the consistency of hyperparameters across different methods, as the results of some baseline methods (e.g., PatchTST) appear weaker than those reported in their original papers. Supplementary Material: There is no supplementary material. Relation To Broader Scientific Literature: The main contribution of the paper lies in the combination of autoregressive and diffusion models in self-supervised learning for time series analysis. This approach addresses the overfitting problem in autoregressive methods and enhances the ability to capture local information through diffusion models. Essential References Not Discussed: There are no essential references missing from the discussion. Other Strengths And Weaknesses: Strengths: * The writing is clear, and the code is provided, which enhances the reproducibility of the results. * The experiments are well-designed and demonstrate the superiority of the proposed method. Weaknesses: * The combination of autoregressive and diffusion models is straightforward; both techniques are commonly used in many other fields. * The combination of autoregressive modeling and diffusion denoising is only used during the pretraining stage, and a different paradigm is adopted for downstream tasks, which weakens the innovation of the approach. * The runtime efficiency of the algorithm is not reported, raising concerns about the computational cost of combining autoregressive modeling with multiple denoising steps. Other Comments Or Suggestions: None. Questions For Authors: 1. Why do the authors abandon autoregressive modeling and diffusion in the downstream task, given that these methods are central to their claims about capturing global and local patterns? What is the performance of keeping autoregressive modeling and the diffusion model in the forecasting task? 2. If the denoising decoder is removed during fine-tuning, is the diffusion model still necessary? Could other, simpler backbone networks achieve similar results?
3. What is the runtime efficiency of the algorithm, especially given the potential computational cost of combining autoregressive modeling with multiple denoising steps? 4. I noticed that in the data factory script, the setting of drop_last is True (/data_provider/data_factory.py#L41). This will lead to incorrect results for the whole experiment [1]. 5. What are the effects of different lookback windows on the results? [1] TFB: Towards Comprehensive and Fair Benchmarking of Time Series Forecasting Methods. Code Of Conduct: Affirmed. Overall Recommendation: 3
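The `drop_last` issue raised in Q4 (and answered in the rebuttal below) reduces to simple batching arithmetic. A pure-Python sketch mirroring how `torch.utils.data.DataLoader` counts batches (the sample counts are hypothetical; this is not the repository's code) shows that with `batch_size=1` nothing can ever be dropped:

```python
def num_batches(n_samples, batch_size, drop_last):
    # Mirrors DataLoader batch counting: floor division drops the remainder
    # when drop_last=True; otherwise the last partial batch is kept.
    if drop_last:
        return n_samples // batch_size
    return -(-n_samples // batch_size)  # ceiling division

def samples_seen(n_samples, batch_size, drop_last):
    # Number of samples actually iterated over in one epoch.
    if drop_last:
        return num_batches(n_samples, batch_size, True) * batch_size
    return n_samples

print(samples_seen(1007, 1, True))   # 1007: batch_size=1 drops nothing
print(samples_seen(1007, 32, True))  # 992: a remainder of 15 samples is dropped
```

So the real question is not the `drop_last` flag itself but the effective `batch_size` at evaluation time, which is exactly what the rebuttal addresses.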
Rebuttal 1: Rebuttal: > All full results are at https://anonymous.4open.science/r/TimeDART-results-ECBD/ > > [Table1]--->para4,para5, [Table2]--->para6, [A3 for Q3] --->para1, [A5 for Q5]--->para2 **A1 for Q1** We abandon autoregressive modeling and diffusion in the downstream task based on three considerations: First, we perform not only forecasting tasks but also discriminative tasks such as classification, which are difficult to adapt directly to this paradigm. Second, almost all evaluations of self-supervised methods for time series follow the approach of transferring only the encoder and attaching different heads for various downstream adaptations (e.g., TimeSiam, SimMTM). Therefore, to enable comparisons and ensure fairness, we adhered to the same transfer method. Lastly, autoregressive inference and diffusion sampling would be extremely slow. Early in this work, we designed a downstream denoising autoregressive paradigm for forecasting tasks and conducted experiments. However, based on the considerations above, we decided not to use this approach further. We appreciate the reviewer's question, as it allows us to revisit our initial design. The results can be found in [Table1] below. [Table1] The following table shows the average over prediction windows [96, 192, 336, 720]. Detailed designs can be found at para5 in the link above. The results show that retaining the decoder for prediction can achieve similar results.
|Methods|Diff+non AR|Diff+non AR Random Init|Diff+AR|Diff+AR Random Init|TimeDART|Random Init|
|-|-|-|-|-|-|-|
|Metrics|MSE/MAE|MSE/MAE|MSE/MAE|MSE/MAE|MSE/MAE|MSE/MAE|
|ETTh2|0.352/0.389|0.363/0.397|0.359/0.394|0.372/0.400|**0.346/0.387**|0.358/0.396|
|Exchange|0.346/0.406|0.383/0.442|**0.339/0.385**|0.366/0.422|0.359/0.405|0.440/0.450|

**A2 for Q2** Even though the denoising decoder is removed during fine-tuning, it remains **necessary**, as the ablation experiments show: removing the diffusion model during pre-training leads to a decline in performance for both downstream forecasting and classification tasks. We can use a simpler MLP as a decoder. For simplicity, we directly use the concat strategy to concatenate the output of the causal encoder and the embedding of the noise-added patches along the dim axis. In order to keep the parameter count consistent with the original 1-layer Transformer (about $12d_{model}^2$), we use a Sequential(Linear($2d_{model}$, $4d_{model}$), ReLU, Linear($4d_{model}$, $d_{model}$)) MLP as the denoising decoder. The results can be found in [Table2] below.

[Table2] The following table shows the average over prediction windows [96, 192, 336, 720]. The results show that the simpler MLP is slightly weaker than the Transformer decoder (TRM), but almost the same.

|Decoder|MLP|TRM|Random Init|
|-|-|-|-|
|Metrics|MSE/MAE|MSE/MAE|MSE/MAE|
|ETTh2|0.347/0.387|0.346/0.387|0.358/0.396|
|PEMS04|0.134/0.245|0.133/0.245|0.145/0.255|
||Acc/F1|Acc/F1|Acc/F1|
|HAR|0.9197/0.9186|0.9247/0.9286|0.8738/0.8723|

**A3 for Q3** Please refer to `A1 for Q1, paragraph 1` in our rebuttal to the first Reviewer baxN, where a table of GPU memory and computation cost is given. **A4 for Q4** You have raised an important point; however, we must clarify the entire data processing and experimental evaluation pipeline: You mentioned that in `/data_provider/data_factory.py#L41`, setting `drop_last` to `True` might lead to discarding some samples, potentially biasing the results.
However, at #L44 and #L45, when the downstream task is forecasting, the `batch_size` is set to 1. With this, no forecasting test samples will be dropped even if `drop_last` is set to `True`. Furthermore, at #L42 and #L43, when the downstream task is classification, although `batch_size` is set to `args.batch_size`, at #L69 and #L70 `drop_last` is explicitly reset to `False`. Thus, the classification task is also unaffected. For the sake of rigor, we double-checked the code and printed the key parameters; the results confirm that the critical parameters are as described above. **A5 for Q5** Please refer to `A1 for Q1, paragraph 3` in our rebuttal to the first Reviewer baxN, where a table of different look-back windows is given. **A6 for W1** As mentioned in `A3 for W1` in our rebuttal to the first Reviewer baxN, we did not mechanically combine these two techniques in pre-training. Instead, we approached the uniqueness of time series data with careful consideration. Simply using autoregressive optimization during pre-training does not yield satisfactory results when transferred to downstream tasks. However, through our novel approach of explicitly introducing noise, we found that the pre-training performance indeed improved, which inspired the development of our current complete work. **A6 for W2** Please refer to `A1 for Q1`. **A7 for W3** Please refer to `A3 for Q3` above or `A1 for Q1, paragraph 1` in our rebuttal to the first Reviewer baxN. Your score is very important to us. We hope we have resolved your questions. We sincerely look forward to your reply. --- Rebuttal Comment 1.1: Comment: Thank you for your response. Some of the concerns still remain. Q1 for A1: If it is merely a process of adding and removing noise during training, it can hardly be called "diffusion"; rather, this is just a simple denoising autoencoder. Literally speaking, **this may conflict with the core contribution of your method**.
Moreover, it further raises the question about the effect of the self-supervised pre-training. I've noticed that the encoder has been adapted to the downstream tasks via fine-tuning, as stated in Lines 224-235 of Sec. 3.3 in your paper. However, the effect of pre-training cannot be well understood via the experiments and ablation study. Could you please give any explanation or analysis of it?

Q2: Regarding the diffusion part of your model, you believe it captures local information. Do you have any insights or visual analysis to prove the effectiveness of the noise-adding process in capturing local temporal features? What are the advantages of this noise-adding method compared to other methods for capturing local features, such as the dual attention in Pathformer [1]?

[1] Pathformer: Multi-scale Transformers with Adaptive Pathways for Time Series Forecasting

---

Reply to Comment 1.1.1:

Comment:

> Full results are at https://anonymous.4open.science/r/TimeDART-results-ECBD/
>
> [Table1] —>para9

**A1 for Q1** We understand the reviewer's concerns and would like to clarify: our method is not a traditional denoising autoencoder. Our core contribution lies in proposing a unified pre-training framework that integrates autoregressive modeling and denoising diffusion models. Below we elaborate on the differences between TimeDART and a traditional DAE:

1. We use cosine-scheduled multi-step noising and single-step denoising to train the representation network and denoising decoder, and use the autoregressive diffusion loss as the optimization target, which is consistent with the classic diffusion architecture.
2. The classic denoising autoencoder adds noise to the input data so that the network learns the corrupted data. It does not involve autoregressive information as a condition, the complex noise-addition mechanisms, or diffusion loss optimization theory.
3.
Although we removed the denoising decoder in the downstream task to maintain consistency and fairness with the baseline, this does not mean that our denoising decoder has no denoising capability in downstream tasks. As we mentioned in `A1 for Q1` of the first rebuttal, we can also use the denoising decoder to fine-tune, and the performance is basically the same as the method in the paper. As noted in [1], only a few components of diffusion models are essential for learning good representations, while many others are not essential. TimeDART, tailored for time-series data, retains these key elements while adapting to the unique characteristics of such data, consistent with our core contributions.

Regarding the effect of self-supervised pre-training, we evaluate it along two dimensions, each supported by corresponding experiments:

1. **End-to-end pre-train fine-tune vs. random initialization**: This is a widely used evaluation approach comparing models with pre-trained encoders against those randomly initialized (SimMTM, TimeSiam). In Sec. 4.1, the "TimeDART" and "Random Init" columns in our experimental tables consistently show that TimeDART outperforms random initialization, highlighting the value of our pre-training framework. To further address your concerns, we conducted experiments under another evaluation approach: **linear probing**, where the pre-trained encoder is fixed, and only the newly added task-specific projector is fine-tuned. Similarly, we compared this with a randomly initialized encoder, also fixed during fine-tuning. The results are in [Table1].
2. **Ablation studies on key components**: We have performed detailed ablation experiments on TimeDART's two main components in Sec. 4.3: the autoregressive mechanism and the denoising diffusion process. For the autoregressive component, we removed causal masks and related elements during pre-training. For the diffusion process, we eliminated the diffusion module and denoising decoder.
Removing either or both components degraded downstream performance, often falling below random initialization, validating the importance of our design choices.

[Table1] Linear probing fine-tuning for TimeDART. MSE/MAE for ETTh2 and PEMS04, Acc/F1 for HAR.

|Linear Probing|Random Init|TimeDART|SimMTM|TimeMAE|
|-|-|-|-|-|
|ETTh2|0.368/0.401|**0.354/0.391**|0.357/0.395|0.361/0.397|
|PEMS04|0.161/0.271|**0.145/0.258**|0.148/0.260|0.152/0.264|
|HAR|0.8542/0.8578|**0.8976/0.9005**|0.8732/0.8756|0.8858/0.8862|

As shown in the table, under linear probing, TimeDART maintains the same trend as the main experiment, outperforming random initialization and the article's baselines.

[1] Chen, X., Liu, Z., Xie, S., & He, K. (2024). Deconstructing denoising diffusion models for self-supervised learning. ICLR 2025

**A2 for Q2** While patch-based methods (PatchTST, Pathformer) excel at local feature extraction, they often struggle with inherent noise. Building on this, we use patch-level embeddings and introduce an explicit noising-denoising process during pre-training, optimized with the diffusion loss in the original space. This enhances the encoder's robustness to noise, with ablation studies confirming its effectiveness for downstream tasks. We have done preliminary t-SNE visualization analysis after pre-training with and without noise, and observed that different datasets exhibit clearer clustering-like patterns after adding noise. If we are fortunate enough to be accepted, we will add the visualization to the camera-ready version.

Compared to other patch-based methods, our approach offers key advantages for capturing local features:

1. Unified modeling: Unlike methods focused solely on local features, our framework integrates both long-term dependencies and local patterns, paving the way for a foundation model for time series.
2.
Patch-based compatibility: Our noising-denoising process can be seamlessly applied to other patch-based models (e.g., Pathformer) without extensive modifications.
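The cosine-scheduled multi-step noising discussed in this thread can be sketched in a few lines. This is a minimal illustration of the standard cosine schedule and the usual forward-noising formula $x_s = \sqrt{\bar\alpha_s}\,x_0 + \sqrt{1-\bar\alpha_s}\,\epsilon$; the schedule offset (0.008), step count, and function names are our own assumptions, not details taken from the TimeDART code.

```python
import math
import random

def cosine_alpha_bar(s, S, offset=0.008):
    # Cumulative signal-retention coefficient alpha_bar at step s of S,
    # using the cosine schedule; alpha_bar(0) = 1, decreasing toward 0.
    f = lambda t: math.cos((t / S + offset) / (1 + offset) * math.pi / 2) ** 2
    return f(s) / f(0)

def noise_patch(x0, s, S, rng=random):
    # Forward-noise a patch embedding x0 (a plain list of floats):
    # x_s = sqrt(alpha_bar_s) * x0 + sqrt(1 - alpha_bar_s) * eps.
    ab = cosine_alpha_bar(s, S)
    noise = [rng.gauss(0.0, 1.0) for _ in x0]
    xs = [math.sqrt(ab) * v + math.sqrt(1.0 - ab) * e
          for v, e in zip(x0, noise)]
    return xs, noise

# Later steps inject more noise: alpha_bar decreases monotonically.
S = 1000
bars = [cosine_alpha_bar(s, S) for s in range(0, S + 1, 100)]
assert all(a > b for a, b in zip(bars, bars[1:]))
```

During pre-training, a denoising decoder would then reconstruct the clean patch from the noised one conditioned on the causal encoder's output, which is roughly what the autoregressive diffusion loss mentioned above optimizes.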
Summary: The authors propose a novel self-supervised time series representation pre-training framework that integrates two popular generative paradigms to enhance representation transferability. Specifically, they employ a causal Transformer encoder for autoregressive prediction while incorporating a denoising diffusion process to recover fine-grained local patterns. Extensive experiments on time series classification and forecasting tasks validate the effectiveness of the proposed approach.

Claims And Evidence: The claims in the paper are well-supported by empirical evidence. TimeDART's effectiveness in capturing both global and local sequence features is validated through strong performance across multiple benchmark datasets. Extensive experiments in forecasting and classification, along with ablation studies, demonstrate its advantages over state-of-the-art methods. The integration of autoregressive modeling and denoising diffusion is shown to enhance representation learning, confirming the validity of the proposed approach.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well aligned with the problem. TimeDART effectively integrates autoregressive modeling and denoising diffusion for self-supervised time series representation learning. The use of diverse benchmark datasets across forecasting and classification tasks ensures a comprehensive evaluation. Metrics such as MSE, MAE, accuracy, and F1-score are appropriate, and ablation studies further validate the contributions of key components.

Theoretical Claims: The paper's theoretical foundations are consistent with established principles, particularly in diffusion-based modeling and autoregressive learning. The diffusion loss formulation aligns with ELBO principles, and its application appears correct. While a deeper theoretical justification could further enhance clarity, the empirical results strongly support the proposed approach.
Experimental Designs Or Analyses: The experimental design is well-structured and comprehensive. The authors evaluate TimeDART across diverse benchmark datasets using appropriate metrics (MSE, MAE, accuracy, F1-score). Ablation studies effectively validate the contributions of autoregressive modeling and the denoising diffusion process. The inclusion of both in-domain and cross-domain evaluations further strengthens the robustness of the findings. Supplementary Material: The supplementary material was reviewed, including dataset descriptions, implementation details, and additional experimental results. The ablation studies and hyperparameter sensitivity analysis provide further validation of the proposed method. While the diffusion loss derivation follows standard principles, a more detailed theoretical explanation could enhance clarity. Relation To Broader Scientific Literature: The paper builds on prior work in self-supervised time series representation learning, integrating autoregressive modeling and diffusion-based denoising. It extends masked modeling and contrastive learning approaches by introducing a hybrid generative framework. Compared to traditional autoregressive methods, TimeDART mitigates error accumulation, while diffusion models typically used for probabilistic forecasting are repurposed for representation learning, bridging gaps in existing time series pre-training strategies. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths 1. The paper effectively combines autoregressive modeling with a denoising diffusion process, offering a novel perspective to self-supervised time series representation learning. 2. The method is extensively validated across multiple benchmark datasets for both forecasting and classification tasks, demonstrating consistent performance gains over state-of-the-art baselines. 3. 
The paper includes detailed ablation studies and hyperparameter sensitivity analysis, reinforcing the contributions of key components and ensuring the method's robustness.

Weaknesses
1. While the empirical results are strong, it would be better if the paper could provide a formal theoretical analysis explaining why the combination of autoregressive modeling and diffusion improves representation learning.
2. The paper does not provide insights into the computational cost of TimeDART compared to other self-supervised methods, especially regarding training efficiency and scalability.

Other Comments Or Suggestions: See above

Questions For Authors:
Question 1: Diffusion-based models often introduce additional computational overhead due to iterative denoising steps. How does TimeDART compare in terms of training and inference time relative to standard self-supervised methods like contrastive learning or masked autoencoders? Have you evaluated the scalability of TimeDART on longer sequences or larger datasets? A comparison of computational trade-offs would clarify its practical applicability.
Question 2: Recent advancements in time series modeling include large-scale pre-trained models (e.g., TimeGPT) that generalize across domains. How does TimeDART compare in terms of representation quality and transferability to such foundation models? Have you considered evaluating it on cross-domain tasks beyond the current benchmark datasets? Addressing this would help position TimeDART within the broader landscape of time series pre-training.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:
Rebuttal:

> All full results are at https://anonymous.4open.science/r/TimeDART-results-ECBD/
>
> [Table1] —>para1, [Table2] —>para2, [Table3] —>para3

**A1 for Q1**
1. For efficiency, we used a lightweight denoising decoder during pre-training. After training, similar to masked autoencoders, only the embedding layer and encoder network are transferred for downstream tasks. As a result, TimeDART introduces only a small additional computational overhead in terms of GPU memory usage and computation speed. Specific comparative results can be found in [Table1] below.
2. In forecasting tasks, we have evaluated performance on large datasets, especially Traffic and PEMS. Dataset details, including variables and lengths, are in Section 4.1. We also pre-trained on all six datasets combined, reaching approximately 43M training time points (channel independence). This data volume is significantly larger than the baselines in the article.
3. To evaluate the scalability of TimeDART for longer sequences, we conducted additional experiments with different look-back window sizes, extending them to 576 and 720 on ETTh2. We observed a further reduction in prediction MSE, indicating that TimeDART scales well for longer sequences. The detailed results can be found in [Table2] below.

[Table1] The data in the following table is recorded on the Traffic dataset with a look-back window of 336 and a predicted window of 336.

|Methods|Params|Training Time/per epoch|
|-|-|-|
|TimeDART|2.36M(pt)/2.16M(ft)|510s(pt)/349s(ft)|
|SimMTM|14.5M(pt)/2.16M(ft)|85mins(pt)/349s(ft)|
|TimeMAE|1.13M(pt)/2.16M(ft)|91mins(pt)/349s(ft)|
|Cost|2.66M(pt)/2.16M(ft)|24mins(pt)/349s(ft)|
|PatchTST(supervised)|64.32M|158mins|

[Table2] The following table shows the average of the predicted window on [96,192,336,720].
|look-back|TimeDART|Random Init|
|-|-|-|
|ETTh2|MSE/MAE|MSE/MAE|
|96|0.373/0.398|0.384/0.407|
|192|0.364/0.391|0.374/0.401|
|336|0.346/0.387|0.358/0.396|
|576|0.343/0.384|0.355/0.392|
|720|0.339/0.379|0.352/0.388|
|PEMS04|MSE/MAE|MSE/MAE|
|96|0.133/0.245|0.145/0.255|
|192|0.130/0.240|0.140/0.252|
|336|0.125/0.235|0.134/0.247|

**A2 for Q2**
1. Due to limitations in GPUs and the time constraints of the rebuttal process, and given the significant disparity in the scale of training data, it is challenging for us to directly compare with large foundation models in a short period. To address your concerns regarding representation quality and transferability, we have instead conducted some comparisons with SOTA methods based on LLMs (e.g., UniTime, GPT4TS), which we hope will help alleviate your doubts and concerns. Specific comparative results can be found in [Table3] below.
2. Regarding the cross-domain issue, we mixed all in-domain datasets for general pre-training in forecasting tasks and fine-tuned across domains. As noted in `a1 for q1`, this yielded approximately 43M training time points. Experiments (Section 4.2) demonstrate that TimeDART's cross-domain representation learning, under large-scale pre-training, shows improvement compared to no pre-training. As for going beyond the current benchmark datasets, we are considering conducting general pre-training on larger public datasets (e.g., UTSD) to further validate TimeDART's representation ability.

[Table3] The look-back window is reset to 96, the same as UniTime, to ensure a fair comparison. The model is pre-trained on the ETT (4 subsets), Exchange, Electricity, and Traffic datasets. The following table shows the average of the predicted window on [96,192,336,720]. Experiments show that TimeDART can still learn better representations compared to LLM-based methods.
|Models|TimeDART|Random Init|UniTime|GPT4TS|PatchTST|
|-|-|-|-|-|-|
|Metrics|MSE/MAE|MSE/MAE|MSE/MAE|MSE/MAE|MSE/MAE|
|ETTh2|**0.376/0.398**|0.384/0.404|0.378/0.403|0.386/0.406|0.398/0.416|
|ETTm2|**0.287/0.333**|0.301/0.350|0.293/0.334|0.321/0.356|0.340/0.373|
|Exchange|**0.361**/0.406|0.389/0.424|0.364/**0.404**|0.421/0.446|0.411/0.444|
|Electricity|**0.200/0.293**|0.212/0.303|0.216/0.305|0.251/0.338|0.221/0.311|

**A3 for W1** Due to time constraints, we provide only a brief explanation. Our assumption is that traditional autoregressive pre-training excels in capturing long-term dependencies but struggles with inherent time series noise, which is hard to avoid. By introducing noise into the pre-training process, we encourage the encoder to learn both long-term dependencies and the distribution of Gaussian noise $f(x_j^0 \mid x_{1:j-1}^0, \epsilon_1^{s_1}, \ldots, \epsilon_{j-1}^{s_{j-1}})$. This improves the model's adaptability to inherent noise in downstream tasks. We are now working on a formal theoretical proof of combining autoregressive modeling with diffusion, which will be included in future work.

**A4 for W2** Please refer to `a1 for q1`.

Thank you for your encouraging score. We sincerely look forward to your reply.

---

Rebuttal Comment 1.1:

Comment: Thank you for addressing the key questions I raised. These supplementary clarifications have given me a more comprehensive understanding of the paper's value and significance. I will increase my rating for this paper.

---

Reply to Comment 1.1.1:

Comment: Dear Reviewer,

Thank you for your constructive suggestions and positive feedback on our work. We sincerely appreciate your time and insightful comments. It is encouraging to hear that the supplementary material addressed your concerns effectively. We are grateful for your willingness to reconsider the paper's rating, and we will carefully incorporate your suggestions in future work.

Best regards
Can MLLMs Reason in Multimodality? EMMA: An Enhanced MultiModal ReAsoning Benchmark
Accept (oral)
Summary: This paper introduces the EMMA benchmark to evaluate the reasoning capabilities of multimodal LLMs that require the integration of both text and visual cues. The benchmark is curated from existing datasets and supplemented with 1796 newly created questions covering math, chemistry, physics, and coding. A filtering process ensures that questions cannot be answered using text alone. Additionally, the paper evaluates SOTA models, providing a comprehensive analysis of both direct prompting and CoT prompting. The study also explores test-time compute scaling. Results indicate that current SOTA models exhibit a significant performance gap compared to human experts across these evaluations.

Claims And Evidence: Yes

Methods And Evaluation Criteria: Yes, with only two questions:
1) Regarding the filtering mechanisms in L206–L209, what is the motivation for removing questions that could be answered if the original text and generated captions were provided? Captioning an image still relies on visual cues, and if additional reasoning is required, wouldn't such a question still assess the model's multimodal reasoning capability?
2) The 'Code Choose Vis' setting seems somewhat counterintuitive. It requires directly reading code and mentally visualizing the output, which is quite challenging even for humans, especially for very long and complex code.

Theoretical Claims: The paper does not have theoretical claims.

Experimental Designs Or Analyses: Yes

Supplementary Material: It does not have supplementary material.

Relation To Broader Scientific Literature: There are many multimodal benchmarks, such as SeedBench, MMMU, RealWorldQA, MuirBench, and VideoMME. Some benchmarks specifically focus on reasoning, such as MATH-V, Visual CoT, and SpatialRGPT. However, some questions in these benchmarks can be answered using only textual cues, or models have already reached saturation on them. This paper introduces several mechanisms to ensure that both text and images are required for reasoning.
Additionally, it demonstrates a significant performance gap compared to human experts. Essential References Not Discussed: There are few works related to MLLMs' spatial reasoning, and all of them provide benchmarks in their papers. [1] Cheng, An-Chieh, et al. "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models." arXiv preprint arXiv:2406.01584 (2024). [2] Liu, Fangyu, Guy Emerson, and Nigel Collier. "Visual spatial reasoning." Transactions of the Association for Computational Linguistics 11 (2023): 635-651. [3] Nie, Jiahao, et al. "Mmrel: A relation understanding dataset and benchmark in the mllm era." arXiv preprint arXiv:2406.09121 (2024). Other Strengths And Weaknesses: The paper is well-written and provides a thorough approach to benchmark creation, comprehensive experiments, and detailed analysis. Reading this paper gave me valuable insights into the current limitations of multimodal reasoning capabilities. Other Comments Or Suggestions: 1) It would be helpful to highlight the best and second-best scores in the performance table in the supplementary material, similar to the table in the main paper. 2) Including a breakdown of human performance for EMMA-mini would be valuable as a reference when analyzing the results. Questions For Authors: 1) The benchmarks cover Math, Chemistry, Coding, and Physics, requiring not only multimodal reasoning but also expert knowledge to solve problems. Do you have any plans to expand into more common scenarios that primarily rely on logical reasoning and general knowledge? 2) Regarding the gap between closed-source and open-source models, do you think this is partly due to most open-source MLLMs not incorporating CoT during training? Would this also lead to hallucinations when CoT is enforced during inference? 3) For N=16 Pass@N, the performance is quite high for all three models, with some models even outperforming human experts. 
Does this suggest that the model already possesses the necessary knowledge and reasoning capability but has a low probability of generating a correct solution? If so, would fine-tuning on similar questions from the benchmark provide a shortcut to boosting performance?

Code Of Conduct: Affirmed.

Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for the encouraging and thoughtful review! Below is our response to your questions and suggestions. **Q1: Why did we filter out questions that can be answered using the text and generated image captions?** Our enhanced filtering pipeline targets questions requiring deep multimodal reasoning that are difficult to solve through text reasoning or one visual pass. If a question can be solved with text and image captions, it means the necessary visual information can be compressed into text. The question is then solvable with one round of visual perception or shallow visual reasoning, with the rest handled by text reasoning. Instead, we focus on problems requiring back-and-forth textual and visual reasoning, which typically demand repeatedly referring back to the image for multi-hop reasoning or mental simulation. This filtering pipeline allows us to curate problems that assess advanced multimodal reasoning. **Q2: Counterintuitive task of "Code Choose Vis"** While "Code Choose Vis" may seem counterintuitive at first, it is tied to how humans typically write visualization code, whereby they envision their desired chart, code it, and mentally check if the code achieves their intention. The critical step of asking oneself, "is the code doing what I want it to do?" is precisely what "Code Choose Vis" problems target. This ability is also prerequisite to debugging visualizations, where one must understand what the current code produces before making corrections. **Q3: Plans to expand EMMA to more general domains** While most questions in EMMA require domain knowledge, many math questions do not. For example, simulating how a 2D shape will look after transformations or finding patterns in a sequence of shapes relies primarily on logic and general knowledge rather than math expertise. In scoping for EMMA, we explored various disciplines for multimodal reasoning questions meeting our strict criteria. 
We found that most such problems are typically considered "logic" or "spatial reasoning" tasks, which we ultimately categorized under math. In other domains, such problems are much harder to source or create. Nonetheless, we are very interested in incorporating more general domain multimodal reasoning questions and are actively exploring strategies for expansion. Thank you for this great suggestion! **Q4: Why does CoT prompting not help with open-source models?** We observed that CoT prompting generally improves performance for closed-source models, but tends to hurt performance for open-source models. One possible reason is that open-source models do not effectively leverage language to assist in multimodal reasoning tasks where language could be beneficial. For example, language can often help grounding in multi-hop counting, so it should be theoretically possible to improve performance on this task with CoT, but open-source models fail to capitalize on this. Previous work has shown that the quality of training data is crucial for CoT effectiveness. For example, [1] has found that filtering out anomalous samples (e.g., repetitive patterns) can improve CoT performance. Your point regarding the lack of CoT supervision during training aligns with this perspective: incorporating high-quality CoT data can indeed be seen as one way to improve the quality of training data for MLLMs. Without access to the training data and pipelines of the tested models, it is difficult to pinpoint the reason behind this divergence. Nonetheless, we believe that multiple factors might contribute to this phenomenon, and a more systematic understanding of CoT prompting in multimodal settings remains an open and important direction for future research. [1] https://arxiv.org/pdf/2412.05271 **Q5: Does high Pass@16 indicate latent capability and can fine-tuning on similar data boost performance?** That is a keen observation. 
We agree that strong Pass@16 performance may suggest latent reasoning capabilities. However, as you noted, even if models possess relevant knowledge, they have a low probability of applying it correctly. Moreover, correct answers can sometimes stem from flawed reasoning, especially for multiple-choice questions. The idea of finetuning on similar questions can indeed be promising. Prior work [2] has highlighted the nuanced interplay between memorization and generalization in finetuning LLMs on logical reasoning, and it would be interesting to investigate this further in the multimodal setting—an interesting direction for future work. [2] https://arxiv.org/pdf/2410.23123 > Suggestion 1: Discussing additional relevant work Thank you for these highly valuable and relevant references. We will include and discuss them in an updated version. > Suggestion 2: Highlighting top scores in tables and including human performance breakdown on EMMA-mini Thank you for the thoughtful suggestion. We will make sure to more clearly highlight results and include a category-level breakdown of human performance.
Summary: This paper introduces EMMA, a visual question answering benchmark requiring multimodal reasoning. EMMA includes questions in four domains: math, physics, chemistry, and coding. The questions in EMMA are filtered so that they are not answerable based on only the image captions and questions. The experiments show that both open-source and closed-source MLLMs fall significantly short of human expert performance. Besides, the effect of chain-of-thought prompting and test-time compute scaling on the evaluated models is discussed.

Claims And Evidence:
1. The paper claims that the state-of-the-art MLLMs struggle with multimodal reasoning. This is supported by the experimental results that the MLLMs achieve much lower accuracy than human experts on the proposed EMMA benchmark.
2. The paper claims that the EMMA benchmark cannot be addressed by reasoning on solely one modality. The data curation process uses this condition to filter questions with GPT-4o as the reasoning model. However, other evaluated models are not tested in the single-modality setting to support this claim.

Methods And Evaluation Criteria: In the experiments, the MLLMs are evaluated not only with direct prompting, but also with chain-of-thought prompting and test-time compute scaling. A group of commonly used approaches to boost reasoning performance is used to demonstrate the weak performance on the proposed benchmark.

Theoretical Claims: This paper does not involve theoretical claims.

Experimental Designs Or Analyses: I checked the soundness/validity of the experimental designs and analysis and did not find any issue.

Supplementary Material: I reviewed the figures in the supplementary material, including illustrations of different types of questions and qualitative results of the evaluated models. They are helpful for presenting the benchmark and the failure cases in the evaluation.
Relation To Broader Scientific Literature: The previous multimodal reasoning benchmarks have shortcuts in the questions, so they may fail to evaluate multimodal reasoning capability. The newly proposed EMMA benchmark filters the questions in previous benchmarks and also collects new questions. The questions in EMMA are ensured to not be answerable based on the questions and image captions, making it a challenging benchmark for multimodal reasoning evaluation.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:
Other Strengths:
1. The proposed benchmark is well presented. The paper shows detailed statistics and example questions in the benchmark.
2. 1,796 questions are newly collected by domain experts, which are valuable to the community.
3. A few advanced techniques to improve MLLM reasoning are evaluated and analyzed on the benchmark. This demonstrates the advantages and limitations of these techniques.
4. The paper is well-written and easy to follow. The proposed benchmark is well presented.

Other Weaknesses:
1. Although the data curation filters out the questions that can be answered based on questions and image captions, only GPT-4o, Llama-3, and Qwen2 are used in this process. So, the weak questions filtered out are restricted to the capability of these models. It would be more convincing if experiments could be conducted to evaluate other MLLMs in the "question only" and "caption and question" settings.
2. While EMMA includes challenging multimodal reasoning questions, they are restricted to the domains of math, physics, chemistry, and coding. This would lead to two issues:
+ Answering the questions in EMMA requires strong domain knowledge. Therefore, the accuracy on the benchmark cannot directly reflect the reasoning capability of a model. The lack of domain knowledge can also cause weak performance.
+ The domain of the benchmark is limited. It does not include more diverse domains, e.g., geography and biology.
More importantly, none of the images involved in the benchmark are realistic images. While it is acceptable to propose a benchmark in specific domains, the paper should discuss it as a limitation. Besides, the name "Enhanced Multimodal Reasoning Benchmark" exaggerates the contribution of the benchmark because it is domain-specific instead of a general-domain benchmark. Other Comments Or Suggestions: While the domain of the benchmark is limited, I highly appreciate the level of challenge of the benchmark and the thorough evaluation of MLLMs and reasoning techniques. So, I give a score of 3 as my initial recommendation. Questions For Authors: 1. In L 193-194, what does "a single visual pass" mean? 2. What does "Pass@N" mean in Table 3? These should be clarified in paper writing. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your detailed review and great questions! We hope our response below helps address them. **Concern 1: Only 3 models are used in filtering. Other models might be able to solve the retained questions in the text-only setting** (1) The 3 models used were among the strongest available at the time. If none could consistently solve a question with text and captions, other models are unlikely to succeed under the same conditions. (2) We also wish to emphasize that MLLM-based filtering is **only the first step in our data pipeline** (Section 3.2). This was followed by manual filtering to ensure retained questions genuinely required multimodal reasoning. In fact, 16.4% of questions were removed at this stage because they did not involve substantial multimodal reasoning. (3) Following your suggestion, we tested more recent MLLMs in the "caption and question" setting. We focused on math to test whether our filter separates visual-reasoning-heavy questions from light ones (hereafter "heavy" and "light" questions). 
We selected:
- **100 "heavy"** math questions from EMMA-mini
- **100 "light"** math questions randomly sampled from the filtered-out pool

We evaluated 5 SOTA MLLMs using only the question and caption, and also report a *full-input* setting (text + image) on heavy questions:

|Model|Visual Reasoning Light (Text + Caption)|Visual Reasoning Heavy (Text + Caption)|Visual Reasoning Heavy (Text + Image)|
|-|-|-|-|
|Claude 3.7 Sonnet|74|41|45|
|Gemini 2.0 Flash Thinking|72|31|34|
|GPT-4o|76|31|27|
|Qwen2.5-VL-72B|74|27|39|
|InternVL2.5-78B|56|23|31|

- In the caption-only setting, all models perform significantly better on light questions than on heavy ones, indicating that the filtered-out questions can largely be solved via text-based reasoning and that our filtering pipeline effectively identifies visual-reasoning-heavy questions;
- On heavy questions, providing the image offers only marginal gains over captions. Even advanced MLLMs still struggle to utilize visual information for reasoning, reinforcing the motivation for EMMA.

**Concern 2: Lack of domain knowledge can cause weak performance**

We agree that a lack of domain knowledge can lead to weak performance on EMMA. However, we believe that the primary factor behind the underperformance of SOTA models on EMMA is not the lack of domain knowledge, but poor multimodal reasoning skills. Recent benchmarks suggest that SOTA models are equipped with strong domain knowledge. For example, models have achieved 87.7% accuracy on GPQA Diamond (which tests graduate-level science knowledge), 78.2% on MMMU (which spans many subjects), and 86.5% on AIME (a high school-level math competition). **This suggests that the knowledge required for EMMA is largely present in current models.** Hence, we believe that poor multimodal reasoning skills are the bottleneck.
**Concern 3: EMMA is limited to four domains and does not include natural images**

In scoping EMMA, we explored many disciplines for questions meeting our strict criteria. We found that in many domains the primary challenge lies in knowledge rather than in multimodal reasoning. For example, many biology questions involve tasks such as labeling parts of a complex diagram. These tasks often hinge more on knowledge than on reasoning over multimodal information. Constructing/curating multimodal reasoning questions in these domains is also more difficult. That said, we appreciate your suggestion and plan to involve more disciplines in future work, such as law and medicine.

We also agree that including natural images would be valuable. However, sourcing high-quality, reasoning-focused problems using natural images has proven to be challenging. Benchmarks like MMMU also include very few natural images. In math, physics, and chemistry, most problems are accompanied by simplified diagrams or sketches designed to reduce ambiguity and clarify assumptions. We will explore ways to design or curate problems involving natural imagery in future work.

> Q1: Meaning of "a single visual pass"

We operationalize "a single visual pass" as generating image captions with GPT-4o. Multimodal reasoning questions often require looking at an image multiple times for multi-step/multi-hop reasoning. If a question can be solved with text and captions, its visual content can likely be compressed into text with shallow visual perception/reasoning, and the rest can be handled by text reasoning. Whereas previous work, BLINK [1], assesses perception beyond recognition, we target images that are necessary for reasoning beyond perception.

[1] https://arxiv.org/abs/2404.12390

> Q2: Meaning of Pass@N

Pass@N refers to the accuracy when generating N responses per question and checking if **any** of them is correct. Thus, it is an upper bound for accuracy from test-time scaling.
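The Pass@N definition above can be stated in a few lines of Python (a minimal sketch with a hypothetical helper, not the authors' evaluation code):

```python
def pass_at_n(correctness, n):
    """Pass@N accuracy: a question counts as solved if ANY of the first n
    sampled responses is correct -- an upper bound on accuracy achievable
    by test-time scaling with n samples."""
    solved = sum(1 for samples in correctness if any(samples[:n]))
    return solved / len(correctness)

# Toy example: 3 questions, 4 sampled responses each (True = correct).
correctness = [
    [False, True, False, False],   # solved at sample 2
    [False, False, False, False],  # never solved
    [True, True, False, True],     # solved at sample 1
]
print(pass_at_n(correctness, 1))  # 1/3
print(pass_at_n(correctness, 4))  # 2/3
```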
--- Rebuttal Comment 1.1: Comment: I thank the authors for the responses. They addressed my concerns and I have raised my rating to 4 (accept).
Summary: This paper introduces EMMA, a novel benchmark designed to evaluate the vision-language reasoning capabilities of MLLMs. Unlike existing benchmarks that focus on shallow visual understanding or text-dominated problem-solving, EMMA emphasizes tasks where solutions inherently require iterative interaction between visual and textual reasoning. The benchmark covers four domains — mathematics, physics, chemistry, and coding — and presents challenges such as 3D spatial transformations, chemical structure analysis, and multi-step simulation. It consists of 992 filtered questions from existing datasets and 1,796 newly curated questions developed in collaboration with domain experts. Evaluation of SOTA MLLMs reveals critical limitations: techniques like chain-of-thought prompting and test-time computation scaling (e.g., majority voting) provide only marginal improvements. Moreover, the models struggle with tasks requiring precise spatial simulation or the ability to leverage visual aids for enhanced efficiency, exposing difficulties in fine-grained spatial reasoning, multi-hop visual-text integration, and generating effective visual reasoning steps. The authors argue that current architectures and training paradigms are inadequate for supporting deep multimodal reasoning, calling for innovations to better integrate different modalities. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes Theoretical Claims: This work is a benchmark study and does not involve theoretical derivations. Experimental Designs Or Analyses: Yes, the soundness and validity of the experimental designs and analyses were carefully reviewed. No significant issues were identified. Supplementary Material: No supplementary materials were provided. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: Yes. The related works are sufficient.
Other Strengths And Weaknesses: [Strengths] - 1) This paper is rich and well-grounded, providing detailed definitions of fine-grained labels, a clear data construction process, and comprehensive experimental results on the key issues currently being explored by the research community. - 2) The authors have contributed a large number of novel multimodal disciplinary questions through manual collection and expert annotation. [Weaknesses] - 1) The term "organically reason" mentioned in the abstract is somewhat confusing. How should the term "organic" be interpreted in this context? - 2) Using the criterion of "whether an image caption can replace the visual input" to filter the data intuitively seems to only exclude questions that are "difficult to describe in words," but it may not necessarily ensure that the image content is "more aligned with visual perception than with visual reasoning." For example, identifying a symbol in a musical staff or counting the zeros of a function graph whose expression cannot be explicitly determined might be incorrectly sampled. I wonder whether this type of bias exists; providing some error analysis could make the work more robust. Other Comments Or Suggestions: No Questions For Authors: Please refer to part of [Other Strengths And Weaknesses] Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your encouraging review and insightful questions. We provide our responses below. **Q1: What does "organically reason" mean?** By "organically reason over and with both text and images", we refer to the integrated way humans seamlessly blend visual and textual information during reasoning processes. To measure advanced multimodal reasoning capabilities, we target questions that require both modalities working together, where neither text nor images alone are sufficient. Models must dynamically engage with both reasoning channels and effectively fuse textual and visual information to succeed on our benchmark. **Q2: The filtering pipeline seems to only retain problems that are difficult to describe in words, which can include questions that are more difficult in visual perception than in visual reasoning.** We agree that our first-step filtering, which removes questions solvable by models given only question text and image captions, may retain examples that are difficult to describe in words, but do not necessarily require strong visual reasoning. For instance, identifying a symbol in a musical staff might be retained simply because the symbol is visually subtle or hard to express textually, even though solving the task does not demand deep visual reasoning. To address this potential bias, we added a second filtering step. Specifically, we manually reviewed the remaining set and constructed a taxonomy that emphasizes multimodal reasoning skills. For example, for math, we identified categories such as 3D spatial simulation and pattern inference. **We then used GPT-4o to categorize the questions according to this taxonomy, followed by a final round of manual verification to ensure quality and relevance.** This process helped ensure that the retained questions require visual reasoning.
Here we provide an [example](https://huggingface.co/datasets/MathLLMs/MathVision/viewer/default/test?views%5B%5D=test&row=3) from the MathVision dataset that is retained after the first-stage filtering but removed during manual verification. Solving the problem requires correctly perceiving all the digits in the image, which MLLMs struggle with. For instance, GPT-4o recognizes the digits at the bottom as "2" instead of "3", resulting in an incorrect answer. Although the question cleared the first-stage filter, we excluded it as it primarily tests visual perception rather than visual reasoning. To further support our claim, we provide the following statistics from the MathVision dataset: out of 3,040 total questions, 2,195 were retained after the initial model-based filtering. **However, only 668 questions (30.43%) from that set were ultimately included in EMMA, based on our taxonomy and human verification.** This highlights that our filtering method applies a stricter standard to ensure a focus on multimodal reasoning. As noted in our error analysis (see Figure 5), MLLMs sometimes still fail on EMMA questions due to perceptual errors. However, we do not view these errors as contradictory to the multimodal-reasoning-based nature of the tasks. Rather, they may reflect perceptual limitations that are prerequisites for successful multimodal reasoning—an important challenge in its own right. In other words, while the immediate failure may stem from perception, the questions still require multimodal reasoning to be solved, highlighting a compound challenge that current models have yet to overcome. In summary, we have taken careful steps to ensure that the problems included in EMMA emphasize visual reasoning over low-level visual perception. We appreciate your thoughtful comment, as it raises an important distinction and gives us the opportunity to clarify our data curation process more thoroughly.
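The two-stage pipeline described in this exchange can be sketched as follows. This is our own illustrative stand-in, not the authors' implementation: `solves_from_caption` abstracts the MLLM caption-only filter and `requires_visual_reasoning` abstracts the taxonomy-plus-manual verification step.

```python
def stage1_filter(questions, models, solves_from_caption):
    """Stage 1: drop any question that some strong MLLM already solves
    from the question text plus an image caption alone."""
    return [q for q in questions
            if not any(solves_from_caption(m, q) for m in models)]

def stage2_filter(questions, requires_visual_reasoning):
    """Stage 2: keep only questions judged (taxonomy + manual review)
    to genuinely require multimodal reasoning."""
    return [q for q in questions if requires_visual_reasoning(q)]

# Toy run with rule-based stand-ins for the model calls and human review.
questions = ["light-1", "heavy-1", "perception-1"]
models = ["model-a", "model-b", "model-c"]
solves = lambda m, q: q.startswith("light")        # only "light" items are caption-solvable
needs_reasoning = lambda q: q.startswith("heavy")  # perception-only items get dropped

retained = stage2_filter(stage1_filter(questions, models, solves), needs_reasoning)
print(retained)  # ['heavy-1']
```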
--- Rebuttal Comment 1.1: Comment: Thanks to the authors for their response, which has addressed my concern.
Summary: This paper proposed a benchmark EMMA (Enhanced MultiModal reAsoning) to feature questions that are difficult to solve by relying solely on text-based reasoning or a single visual pass, covering math, physics, chemistry, and coding domains with 2,788 questions. Ten state-of-the-art MLLMs are further evaluated on the benchmark, revealing (1) a substantial performance gap compared to human experts and (2) techniques such as Chain-of-Thought prompting and test-time compute scaling offering only marginal gains. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: The paper does not include theoretical proofs. Experimental Designs Or Analyses: Yes Supplementary Material: Yes. Especially Section D. Relation To Broader Scientific Literature: No Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: * The proposed benchmark reveals the gap between SOTA MLLMs with human experts, and will serve as a strong benchmark on multimodal reasoning evaluation. * The experiments reveal that test-time scaling strategies cannot achieve a strong performance yet, calling for new reasoning strategies in the future. * The experiments are comprehensive, covering many SOTA MLLMs. The curated data quality is also high. Weaknesses: * The paper is a pure benchmark without a new method. This is not a reason for rejection, but is a weakness or limitation on technical contributions. Other Comments Or Suggestions: Please see weaknesses Questions For Authors: Please see weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your thoughtful and careful reading! As you have pointed out, our benchmark provides a test suite that reveals significant limitations of even the most advanced MLLMs in handling complex multimodal reasoning tasks. Although state-of-the-art MLLMs have recently achieved strong results on many challenging benchmarks, their poor performance on our benchmark highlights key gaps in their reasoning capabilities. **We attribute this, in part, to our enhanced filtering pipeline, which allowed us to surface questions that genuinely require multimodal reasoning.** Importantly, this pipeline is reusable and could serve as a framework for constructing future benchmarks aimed at other aspects of multimodal understanding. In addition, we utilized our benchmark as a testbed to provide technical evaluation and insights into advanced techniques for enhancing MLLM reasoning. By comparing model performance with and without CoT, and evaluating different scaling approaches—such as majority voting, best-of-N, and tournament selection—we highlight the limitations of current prompting and inference-time methods for complex visual reasoning. These technical insights also would not have been possible without a carefully curated benchmark. By exposing these limitations, our work points to broader issues, such as potential shortcomings in current model architectures or a mismatch between existing training paradigms and the demands of complex multimodal reasoning. We hope this will encourage further research into more robust multimodal architectures and training strategies. We deeply appreciate your constructive suggestions. Following your advice, we plan to explore new methods to tackle these challenges in future work, and we remain committed to advancing the evaluation and development of MLLMs’ visual reasoning capabilities. Thank you again for your insightful and encouraging feedback!
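Of the scaling approaches compared in this rebuttal, majority voting is the simplest to state; here is a minimal sketch (ours, not the paper's evaluation code):

```python
from collections import Counter

def majority_vote(answers):
    """Test-time scaling baseline: sample N answers to the same question
    and return the most frequent one (ties broken by first occurrence,
    which is how Counter.most_common orders equal counts)."""
    return Counter(answers).most_common(1)[0][0]

# Five sampled answers to one multiple-choice question.
print(majority_vote(["B", "A", "B", "C", "B"]))  # B
```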
Quantum Optimization via Gradient-Based Hamiltonian Descent
Accept (poster)
Summary: This paper proposed gradient-based Quantum Hamiltonian descent (gb-QHD), which is motivated by insights from high-resolution differential equations and based on quantum Hamiltonian descent. The authors proved a faster convergence rate for gb-QHD under some reasonable assumptions and conducted numerical simulations that demonstrated the advantage of gb-QHD over the original QHD. ----- Update: I have read all the comments and rebuttals, and will take them into account in my evaluation of the paper. Claims And Evidence: Yes, the theoretical claims in the submission are supported by formal proofs, and the performance claims are supported by numerical simulations. Methods And Evaluation Criteria: Yes. For the theoretical part, the authors proved the convergence rate of their algorithm, which makes sense. For the simulation part, the authors judged performance by the function value with respect to iteration rounds, which also makes sense. Theoretical Claims: I did not check the proofs of the theoretical claims in detail, but I went through the proof ideas. They adopted a method (Lyapunov functions) similar to that in the proof of QHD convergence. Therefore, I think the claims are likely to be correct. Experimental Designs Or Analyses: Yes, I have checked the validity of the numerical simulation results. The authors provide details about the experiments demonstrating gb-QHD's ability, which is a standard and fair comparison. Supplementary Material: I looked at Appendices A and B, which review previous methods from many perspectives. I think they are good. Relation To Broader Scientific Literature: QHD opens a new quantum algorithm paradigm for designing optimization algorithms from a physical viewpoint, and this work goes further in this direction. This could be beneficial for understanding the relationship between quantum dynamics and optimization. Essential References Not Discussed: There are no missing references.
Other Strengths And Weaknesses: Strengths: - Novel improvements for QHD from both theoretical and practical perspectives. They proposed a novel method called gb-QHD and proved its convergence, with better simulation results. - The paper is well written in general, with tables and figures clearly demonstrating their results. Weakness: - A potential weakness is that the paper lacks experimental results on real devices. Besides numerical simulations, the original QHD paper provides further experiments on D-Wave devices and compares its results with more classical algorithms in many setups. Compared with that, this submission may be a little insufficient in this aspect. But I think the numerical simulation results in this paper are already convincing. Other Comments Or Suggestions: No more comments. The writing is very good. Questions For Authors: - It seems that the authors have used a different notation from the original QHD paper, and this makes the comparison of the convergence rates (in the theoretical part) not so direct. Could the authors explain more about the convergence rates of both methods and make a comparison? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank Reviewer 1Ka2 for their detailed comments and insightful suggestions. In particular, we appreciated the Reviewer's observation that our work could be beneficial to "understand the relationship between quantum dynamics and optimization". We address each of the Reviewer's questions as follows: 1. **Lack of experimental results on real devices**: We thank the reviewer for checking the feasibility of gradient-based QHD on real quantum devices such as D-Wave. Unlike vanilla QHD, implementing gradient-based QHD using analog simulators (e.g., D-Wave's quantum computer) requires an explicit hardware encoding of the Hamiltonian $H_{k,2}\propto \{\nabla, \nabla f\}$. This can be done efficiently for quadratic functions (not necessarily convex). For more sophisticated problems, e.g., higher-order polynomials, the encoding of $H_{k,2}$ must be evaluated on a case-by-case basis but remains feasible. We will include a brief discussion on the feasibility of an analog implementation of gradient-based QHD in the camera-ready version if this paper is accepted. 2. **Comparison with the original QHD paper**: The convergence rate of the original QHD is formulated in a more general form (Theorem 1 on page 21, [Leng et al., 2023](https://arxiv.org/abs/2303.01471)). $$\mathbb{E}[f(X_t)] - f(x^*) \le O(e^{-\beta_t}),$$ where the time-dependent functions in QHD, i.e., $\alpha_t$, $\beta_t$, and $\gamma_t$, must satisfy the *ideal scaling condition*: $\dot{\beta}_t \le e^{\alpha_t}$, $\dot{\gamma}_t = e^{\alpha_t}$. Note that our choice of $\alpha, \beta,\gamma$ in this submission is unrelated to the time-dependent functions in the original QHD paper. When we set $\alpha=\beta=\gamma=0$, our gradient-based QHD reduces to the vanilla QHD with $\alpha_t = -\log(t)$ and $\beta_t = \gamma_t = 2\log(t)$. In this case, they exhibit the same convergence rate $O(t^{-2})$. We will add this discussion to the camera-ready version if this paper is accepted.
We sincerely appreciate the Reviewer's thoughtful feedback and constructive suggestions. Given our clarifications and the additional insights provided, **we hope the Reviewer might reconsider their evaluation and, if appropriate, adjust the score accordingly.** --- Rebuttal Comment 1.1: Comment: Thank you for your detailed comments! I have another question after reading this rebuttal and the discussions from other reviewers. In Sections 6.2 and 6.3, you choose the special parameter $\beta = 0$, which reduces gradient-based QHD to a simpler form. It would be very helpful if you could clarify the following points: - From my understanding, setting $\beta = 0$ does not discard gradient information, as the gradients are already encoded into the $A_j$’s. Is that correct? - I am a little confused by your second point. Does this mean that the parameters $\alpha$, $\beta$, and $\gamma$ in your submission are unrelated to the parameters $\alpha_t$, $\beta_t$, and $\gamma_t$ in the original QHD paper? If so, as you stated, choosing $\alpha = \beta = \gamma = 0$ reduces the gradient-based QHD to the vanilla QHD for a particular parameter setting. It does not seem immediate to me that gradient-based QHD is *always* a generalization of the original QHD for different parameter choices. Is that true? I appreciate your clarification on these points. Thank you! ------- Update: Thank you for your prompt reply! All my questions have been addressed. --- Reply to Comment 1.1.1: Comment: We sincerely thank Reviewer 1Ka2 for the constructive feedback. Below, we address the additional questions related to the gradient encoding and parameter setting in gradient-based QHD. > From my understanding, setting $\beta=0$ does not discard gradient information, as the gradients are already encoded into the $A_j$’s. Is that correct? Yes, the gradient is also included in $A_j = t^{-3/2}p_j + \alpha t^{3/2}v_j$, where $v_j = \partial_j f$. 
The gradients are encoded in the full Hamiltonian of gradient-based QHD as long as $\alpha \neq 0$ or $\beta \neq 0$. > Does this mean that the parameters $\alpha$, $\beta$, and $\gamma$ in your submission are unrelated to the parameters $\alpha_t$, $\beta_t$, and $\gamma_t$ in the original QHD paper? If so, as you stated, choosing $\alpha=\beta=\gamma=0$ reduces the gradient-based QHD to the vanilla QHD for a particular parameter setting. It does not seem immediate to me that gradient-based QHD is always a generalization of the original QHD for different parameter choices. Is that true? We confirm that the parameters $\alpha$, $\beta$, and $\gamma$ in this submission are different from the time-dependent functions $\alpha_t$, $\beta_t$, and $\gamma_t$ in the original QHD paper. In particular, the original QHD paper considers the Hamiltonian: $$H_1 = e^{\alpha_t - \gamma_t}(p^2/2) + e^{\alpha_t+\beta_t+\gamma_t}f(x).$$ In this submission, we define the gradient-based QHD described by $$H_2 = \tfrac{1}{2}(t^{-3/2}p + \alpha t^{3/2}v)^2 + \beta t^3 |\nabla f|^2/2 + (t^3 + \gamma t^2) f.$$ Comparing these two Hamiltonians, it is clear that gradient-based QHD reduces to the original QHD if $\alpha = \beta = \gamma = 0$ and we choose $\alpha_t = -\log(t)$ and $\beta_t = \gamma_t = 2\log(t)$. In the current form, gradient-based QHD is a generalization of QHD under the specific choices: $\alpha_t = -\log(t)$ and $\beta_t = \gamma_t = 2\log(t)$. This leads to a simplified QHD formulation $H = p^2/(2t^3) + t^3 f$ that avoids distracting the audience with excessive hyperparameters. However, the general QHD ($H_1$) can be similarly extended to incorporate the gradient information (as in $H_2$). We will address this point in the camera-ready version if this paper is accepted.
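As a quick sanity check of this reduction (our own substitution, not taken from either paper), plugging $\alpha_t = -\log(t)$ and $\beta_t = \gamma_t = 2\log(t)$ into $H_1$ gives

$$e^{\alpha_t-\gamma_t}\frac{p^2}{2} = e^{-3\log t}\,\frac{p^2}{2} = \frac{p^2}{2t^3}, \qquad e^{\alpha_t+\beta_t+\gamma_t} f = e^{3\log t}\, f = t^3 f,$$

so $H_1 = p^2/(2t^3) + t^3 f$, which is exactly the simplified QHD formulation recovered from gradient-based QHD at $\alpha=\beta=\gamma=0$.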
Summary: This work proposes a gradient-based quantum Hamiltonian descent (QHD), which generalizes the previously proposed QHD based on function values. Theoretical and simulation results are also provided.

## After rebuttal

The authors clarified most of my concerns during the rebuttal. Hence, I increased my score. However, I still believe the clarity of the paper should be further improved. If the paper gets accepted as a final decision, I hope the authors can incorporate many aspects of the discussion into the final version.

Claims And Evidence: There are some claims that don't seem to align. For instance, (10) is only obtainable by assuming (9), in which case the RHS of (10) decays to 0. The next sentence reads: "Therefore, we expect the Hamiltonian dynamics … similar to that of high-resolution ODE." Then, in the next sentence, $\alpha, \beta, \gamma$ are NOT chosen according to (9). Please refer to the sections below for more details. Methods And Evaluation Criteria: Not necessarily. For instance, in Section 6.1, the success probability is defined and it is mentioned that the suboptimality measure ($\delta$) is set to 1. However, in Figure (3), all methods seem to start from (at k=0) initial suboptimality less than 1. I'm not sure how to interpret the plot. Theoretical Claims: While theoretical results are presented, they do not seem to align with the algorithm. For global convergence, for instance, $\beta=0$, in which case the gradient component of the Hamiltonian disappears. Theoretically, what is the benefit of gradient-based QHD over vanilla QHD? Experimental Designs Or Analyses: This work requires simulating quantum dynamics with a time-dependent Hamiltonian, which itself is not trivial. Moreover, in numerical experiments, none of the "theoretically motivated" choices are made, including the step size and the choice of $\alpha, \beta, \gamma$. I do not think the experimental results support the claim sufficiently.
Supplementary Material: I read some parts of the supplementary material, and feel that many important details are hidden from the main text. For instance, the proof of Theorem 6 uses spatial discretization and asserts $H_{k, 3}$ can be implemented in constant time. What is the reasoning? Following the proof, it appeals to Lemma 9 for $H_{k, 2}$, which according to the main text is "not" fast forwardable. Then, in the proof of Lemma 9, it is revealed that $H_{k,2}$ is time independent. I believe these are important details that should be clearly revealed in the main text. Relation To Broader Scientific Literature: Given my above points, I am not sure what the key contribution of this paper to the broader scientific literature is. Essential References Not Discussed: This work is in a fairly niche area, and I do not believe there are essential references to be discussed, other than the original QHD framework that the authors extensively refer to. Other Strengths And Weaknesses: - Classical optimization algorithms are quite sensitive to the step size. Executing all methods with the same step size of 0.2 does not tell much in terms of optimization, which is the aim of this work. - It is hard to extract the main message of the paper. Theories appeal to high-resolution ODEs, but then a lot of relaxations are made such that the theory the main text appeals to simply does not hold anymore (e.g., $\beta=0$). Empirical evaluations are also not done rigorously (e.g., using the same step size of $0.2$ for all cases). I do not think, in its current form, this paper asserts something scientifically concrete to the readers. Other Comments Or Suggestions: Given that this is a machine learning conference, it would be nice to at least introduce some technical terms like "fast forwardable." Questions For Authors: - What do you mean by NAG reduces oscillations in the optimization trajectory? NAG is known to be more sensitive to noise, and actually oscillates more than gradient descent.
- What do you mean by "damped heavy ball motion"? (line 49) - What is "iteration" for QHD in Fig 2? What is "success probability" of SGDM or NAG in Fig 2? - Why do you start with (5)? Does (5) recover the Hamiltonian for original QHD with $\alpha = \beta = \gamma =0$? Or does this claim only hold for $\hat{H}(t)$? - Line 300: the gradient-based QHD Hamiltonian can be decomposed into three terms, where $H_{k,2}$ and $H_{k,3}$ vanish with $\alpha = \beta = \gamma = 0$. So just simulating $H_{k,1}$ will recover QHD? Then why simulate the other two terms? Wouldn't that necessarily incur higher qubit/gate complexity? - Moreover, in Sec 5.2, it's mentioned that $H_{k,1}$ is fast forwardable. So vanilla QHD is fast forwardable? - In Section 6.1, the success probability is defined and it is mentioned that the suboptimality measure ($\delta$) is set to 1. However, in Figure (3), all methods seem to start from (at k=0) initial suboptimality less than 1. What is going on? - Why is $\beta=0$ in Sec 6.2? Based on (12), that removes the gradient component, which seems to directly contradict what the paper asserts. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely thank Reviewer F84Z for their detailed comments and insightful suggestions. First, we would like to clarify the primary contribution of this submission, as it appears to have been misinterpreted by Reviewer F84Z. The primary objective of this work is to propose a *novel* quantum Hamiltonian-based algorithm (gradient-based QHD) for continuous optimization. While our Hamiltonian is inspired by the Bregman Lagrangian and the high-resolution ODE framework, it is neither equivalent to them nor a direct extension. Instead, it possesses unique structures and properties. In this work, we study the dynamical and algorithmic properties of gradient-based QHD independently using new mathematical tools. This distinction was made clear in the original submission: > In this work, we do not limit our choice of the parameters $\alpha$, $\beta$, and $\gamma$ ... explore a larger family of dynamical systems for continuous optimization problems. **We now address the Reviewer's major concerns:** (minor points ignored due to page limit) > There are some claims that don’t seem to align ... are NOT chosen according to (9). The Hamiltonian dynamics in (10) serve to formally illustrate the connection between gradient-based QHD and high-resolution ODEs. Gradient-based QHD does not reduce to high-resolution ODEs, and its convergence properties have been established independently (Theorems 1 & 4). **We believe our claims are self-consistent and supported by both the theoretical and numerical evidence presented in the original submission.** > In Section 6.1, the success probability is defined ... I'm not sure how to interpret the plot. Figure 3(a) depicts the expectation value of the sub-optimality gap (caption: "function value"). 
It is important to note that the average sub-optimality gap is not equivalent to the success probability measure defined in Section 6.1: even if the initial average sub-optimality is below 1, this does not imply that all solutions achieve an optimality gap lower than 1. Additionally, we have identified a typo in the caption of Figure 3(b): "Success probability" should be corrected to "Gradient norm." We sincerely apologize for any confusion and will ensure that this typo is corrected in the camera-ready version if the paper is accepted. > ... the gradient component of the Hamiltonian disappears ... what is the benefit of gradient-based QHD over vanilla QHD? We thank the Reviewer for the question regarding the benefits of gradient-based QHD over vanilla QHD. By definition (Eqs. (12)–(13)), the gradient appears in both $\frac{1}{2}\sum^d_{j=1}A_j^2$ and $\beta t^3\|\nabla f\|^2$. Therefore, even if we set $\beta = 0$, gradient information remains present in $A_j$ as long as $\alpha \neq 0$. Theoretically, the inclusion of the gradient in the Hamiltonian results in a larger spectral gap compared to vanilla QHD. A sufficiently large spectral gap is crucial for the success of Hamiltonian-based optimization algorithms, as is well understood in the context of adiabatic algorithms and vanilla QHD. > ... in numerical experiments, none of the "theoretically motivated" choices are made, including the step size, ... As long as the parameters $\alpha, \beta, \gamma$ satisfy the conditions in Theorem 1, our result holds independently of specific step-size choices. Our choice of the parameters in the numerical experiment also aligns with the conditions discussed in Theorem 1. > ...proof of Theorem 6 ... asserts $H_{k,3}$ can be implemented in constant time. What is the reasoning?
The Hamiltonian $H_{k,3}$ represents a point-wise multiplication of a function to the wave function, and its spatial discretization directly leads to a diagonal operator acting on the (discretized) wave function. Since all the diagonal elements of the discretized $H_{k,3}$ are efficiently computable via the query access to $f$ and $\nabla f$, we can simulate $e^{itH_{k,3}}$ using $O(\log(t))$ gates. > Executing all methods with the same step size of 0.2 does not tell much in terms of optimization, which is the aim of this work. In our preliminary experiments, we implemented the test in Section 6.2 with a range of step sizes ($h \in [0.05, 0.5]$), and we always observed similar convergence behavior. Therefore, to maintain consistency, we fix $h = 0.2$ in the submission. We will add all these results in the camera-ready version if this paper is accepted. > Line 300: ... simulating $H_{k,1}$ will recover QHD? ... $H_{k,1}$ is fast forwardable. So vanilla QHD is fast forwardable? When setting $\alpha=\beta=\gamma=0$, we have $$H_{k,1} = -\Delta/(2t^3), \quad H_{k,3} = t^3 f,$$ and $H_{k,1}+H_{k,3}$ recovers QHD. Clearly, $H_{k,3}$ does not vanish, and just simulating $H_{k,1}$ will not recover QHD. Moreover, since $H_{k,1}$ and $H_{k,3}$ can not be simultaneously diagonalized, QHD is not fast-forwardable. **We appreciate the Reviewer's time and consideration and hope this clarification helps in reassessing our submission.** --- Rebuttal Comment 1.1: Comment: Thank you for your detailed reply. I mainly disagree with the following statement: > Gradient-based QHD does not reduce to high-resolution ODEs, and its convergence properties have been established independently (Theorems 1 & 4). We believe our claims are self-consistent and supported by both the theoretical and numerical evidence presented in the original submission. I believe the fundamental flaw is that it is unclear what algorithm Theorems 1 and 4 analyze. Is it Algorithm 1? 
But isn't Algorithm 1, which to be precise is the proposed method, a time-discretized version of Eq (14)? QHD fundamentally is a quantum-dynamics simulation-based algorithm. Theorems 1 and 4 assume (14) can be simulated exactly. Is that possible? To be more specific, how can (17) be implemented exactly? How should I interpret the $\approx$ sign? It will necessarily incur some error; please correct me if I misunderstand anything. Due to this reason, I find Theorem 6 to be more aligned with the theoretical analysis of the performance of the proposed method.

Further, I'm a bit confused by the decomposition of $\hat{H}(t_k)$ above Eq. (17). There, it seems that the gradient norm term from $H_{k, 3}$ persists even with $\beta=0$ (the case Theorems 1 and 4 analyze). This does not seem to align with Eq. (14), which, to my understanding, Algorithm 1 is implementing (in the time-discretized version).

In Theorem 6, the query complexity depends linearly on $\alpha$. How do I interpret this result if $\alpha=0$? I don't see any problem with setting $\alpha=0$ in Theorem 1, as it is written; but clearly I cannot plug in $\alpha=0$ in Theorem 6. Further, the query complexity in Theorem 6 also depends linearly on the step size $h$, which to me reads that I should use the smallest possible step size to minimize the query complexity, which sounds counterintuitive.

So, in general, I'm confused about what algorithm Theorems 1 and 4 analyze, and I see many disconnects between them and Theorem 6, which, to my understanding, analyzes Algorithm 1 (the proposed algorithm), not Eq (14).

Lastly, in the proof of Theorem 6, many steps seem handwavy. To be specific, what do you mean by $\Phi$ being "sufficiently" smooth? How smooth should it be to have $N = \mathrm{poly}\log(1/\epsilon)$? How does the spatial discretization being regarded as a pseudo-spectral method "turn out" that the overall query complexity simply reads $\tilde{O}(d \alpha h L)$?
To sum up, I find the submitted manuscript requires significant clarification, and I find the theoretical contribution to be not rigorous and the empirical evaluation not extensive. Therefore, I keep my original score.

---

Reply to Comment 1.1.1:

Comment: We sincerely thank Reviewer F84Z for the informative comment and feedback. Below, we address the additional questions related to the implementation and interpretation of gradient-based QHD.

---

**Comment 1: discrepancy between Theorems 1 & 4 and Algorithm 1.**

Theorems 1 and 4 analyze the convergence rate of the continuous-time gradient-based QHD dynamics (generated by the Hamiltonian as in (12)). Theorem 6 gives a rigorous complexity analysis of Algorithm 1 (i.e., the time discretization of gradient-based QHD). It is worth noting that:

1. The continuous-time dynamics itself is **not a quantum algorithm**, but a mathematical model of the proposed quantum algorithm (Algorithm 1); and
2. We **never claimed that Algorithm 1 is a perfect simulation of the continuous-time dynamics**; instead, the interesting message from our numerical results (Section 6) is that Algorithm 1 converges with "relatively large" step sizes (e.g., $h = 0.2$). This strongly suggests that **the convergence of Algorithm 1 still happens without perfectly simulating the continuous-time dynamics**.

At first glance, this may seem miraculous, or even counterintuitive. However, the observation is entirely consistent with our experience from classical gradient descent (GD). The continuous-time limit of GD (whose update rule is $x_k = x_{k-1} - h \nabla f(x_{k-1})$) is the gradient flow $\dot{X}_t = - \nabla f(X_t)$. For convex $f$, gradient flow achieves a convergence rate of $f(X_t) - f(x^*) \le O(1/t)$. Meanwhile, gradient descent converges whenever the step size satisfies $h \le 1/L$, where $L$ is the Lipschitz constant of $\nabla f$. This indicates that GD, as a discrete algorithm, converges without perfectly simulating the gradient flow.
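To make this classical analogy concrete, here is a minimal numerical sketch (ours, not part of the submission): gradient descent with the coarse step size $h = 1/L$ on a convex quadratic converges to the minimizer even though it only crudely discretizes the underlying gradient flow.

```python
import numpy as np

# Convex quadratic f(x) = 0.5 * x^T A x; the gradient's Lipschitz constant is
# L = lambda_max(A) = 10. Illustrative instance, not taken from the paper.
A = np.diag([1.0, 10.0])
L = 10.0

def grad_f(x):
    return A @ x

# Gradient descent x_k = x_{k-1} - h * grad f(x_{k-1}) with the "large" step
# h = 1/L: it converges even though it does not accurately track the gradient
# flow dX/dt = -grad f(X).
x = np.array([1.0, 1.0])
h = 1.0 / L
for _ in range(200):
    x = x - h * grad_f(x)

print(np.linalg.norm(x))  # close to the minimizer x* = 0
```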
In our paper, we (numerically) confirm that a similar phenomenon holds in the quantum setting. Ideally, we would like to establish the convergence of Algorithm 1 independently of the continuous-time dynamics, just as the convergence of GD can be proven without relying on the analysis of gradient flow. However, this remains an open problem, as we have not yet identified suitable analytical tools. Since the current manuscript already includes a continuous-time convergence analysis and strong numerical evidence demonstrating the effectiveness of the discrete-time algorithm, we believe our results are on par with the standards of the machine learning community. We leave the technical analysis (i.e., a rigorous proof of the convergence of Algorithm 1 for non-zero $h$) for future work.

---

**Comment 2: decomposition of $\hat{H}(t_k)$**

The gradient norm term in $H_3$ comes from the expansion of the first term in the Hamiltonian: $A^2_j = (t^{-3/2}p + \alpha t^{3/2}v_j)^2$ in (12). Direct calculation shows that this will add an $\alpha^2 t^3 v^2_j$ term to the "diagonal Hamiltonian" (i.e., $H_3$).

---

**Comment 3: query complexity depends linearly on $\alpha$**

In the extreme case where we set $\alpha = 0$, no gradient appears in the Hamiltonian. This means that the Hamiltonian can be simulated without querying $\nabla f$ when $\alpha = 0$; therefore, the query complexity (to $\nabla f$) becomes 0. **This is very natural and intuitive, as reflected by our Theorem 6.** Note that the query complexity to $f$ is unchanged no matter what values are chosen for $\alpha$ and $\beta$; therefore, there is no free lunch in Algorithm 1 even if we eliminate the gradient component (and the dynamics essentially reduce to a close variant of the original QHD).

---

**Comment 4: the proof of Theorem 6**

We apologize for omitting some details in the proof of Theorem 6, but we do not think it affects the correctness of our proof.
The pseudo-spectral method (i.e., DFT for the Laplacian, regular quadrature for the potential) is the go-to real-space simulation algorithm for Schrödinger equations. When the wave function has Fourier coefficients that decay super-polynomially (indicating the wave function is "smooth" or $C^\infty$), it is sufficient to use a truncation number $N = \mathrm{poly}\log(1/\epsilon)$. Based on this spatial discretization, we derived the $\tilde{O}(d\alpha h L)$ query complexity. We will add a detailed discussion on the spatial discretization of the Hamiltonian in the camera-ready version if this paper is accepted.

---

**We appreciate the Reviewer's time and consideration and hope this clarification helps in reassessing our submission.**
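For context, the pseudo-spectral scheme mentioned in the rebuttal (DFT for the Laplacian, pointwise quadrature for the potential) can be sketched classically as a Strang-split step for a 1-D Schrödinger equation. The grid size, potential, and time step below are illustrative choices of ours, not parameters from the paper.

```python
import numpy as np

# Classical sketch of a split-step (Trotter) integrator for
# i d/dt psi = (-0.5 d^2/dx^2 + V(x)) psi on a periodic 1-D grid:
# the potential term is diagonal in real space, the Laplacian is
# diagonal in Fourier space (pseudo-spectral discretization).
N, box = 256, 20.0
x = np.linspace(-box / 2, box / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=box / N)
V = 0.5 * x**2                          # stand-in potential (plays the role of f)
psi = np.exp(-x**2).astype(complex)     # initial wave packet
psi /= np.linalg.norm(psi)

dt, steps = 0.01, 100
for _ in range(steps):
    psi = np.exp(-0.5j * dt * V) * psi                               # potential half step
    psi = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(psi))   # kinetic full step
    psi = np.exp(-0.5j * dt * V) * psi                               # potential half step

print(np.linalg.norm(psi))  # each factor is unitary, so the norm stays ~1
```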
Summary: In this submission, the authors presented a variant of the prominent quantum Hamiltonian descent (QHD) algorithm that incorporates gradient information. More specifically, the authors proposed a new time-dependent Hamiltonian, as in Eq (4), which, unlike the original QHD, contains the gradient information. The authors proved the convergence of the new method. More surprisingly, the authors presented numerical results showing the gradient-based QHD outperforms the original QHD in many different settings.

Claims And Evidence: Theoretical proofs and numerical evidence are provided for the claims.

Methods And Evaluation Criteria: The results are backed by numerical evaluation.

Theoretical Claims: I checked the proofs and they appear to be correct.

Experimental Designs Or Analyses: The numerical experiments are sound and valid.

Supplementary Material: N/A

Relation To Broader Scientific Literature: The problems this submission studies may find applications in quantum machine learning.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: I think this submission has made a solid contribution to quantum machine learning and optimization. It is a nice extension of the original QHD. The only weakness I can see is the lack of a convergence rate comparison with the original QHD.

Other Comments Or Suggestions: 1. Page 4: classical QHD -> original QHD. The term "classical QHD" might be misleading since readers might think it is referring to a classical algorithm. 2. The paper arXiv:2410.14243 might provide a better algorithm for simulating time-dependent Hamiltonians.

Questions For Authors: What is the intuition of the Lagrangian function in Eq (5)?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank Reviewer NP8c for their detailed comments and insightful suggestions. In particular, we appreciate the Reviewer's observation that our submission "makes a solid contribution to quantum machine learning and optimization." Below, we address each of the Reviewer's questions individually.

1. **Lack of a convergence rate comparison with the original QHD**: The convergence rate of the original QHD is formulated in a more general form (Theorem 1 on page 21, [Leng et al., 2023](https://arxiv.org/abs/2303.01471)): $$\mathbb{E}[f(X_t)] - f(x^*) \le O(e^{-\beta_t}),$$ where the time-dependent functions in QHD, i.e., $\alpha_t$, $\beta_t$, and $\gamma_t$, must satisfy the *ideal scaling condition*: $\dot{\beta}_t \le e^{\alpha_t}$, $\dot{\gamma}_t = e^{\alpha_t}$. Note that our choice of $\alpha, \beta, \gamma$ in this submission is unrelated to the time-dependent functions in the original QHD paper. When we set $\alpha=\beta=\gamma=0$, our gradient-based QHD reduces to the vanilla QHD with $\alpha_t = -\log(t)$ and $\beta_t = \gamma_t = 2\log(t)$. In this case, they exhibit the same convergence rate $O(t^{-2})$. We will add this discussion to the camera-ready version if this paper is accepted.

2. **classical QHD -> original QHD**: We thank the Reviewer for this thoughtful suggestion and would be happy to incorporate this change in the camera-ready version if the submission is accepted.

3. **Better quantum algorithms for time-dependent Hamiltonian simulation**: We will include the reference arXiv:2410.14243, along with several other results on commutator scaling, in the camera-ready version.

4. **The intuition of the Lagrangian function in Eq (5)**: Eq. (5) is our Lagrangian design, inspired by both the Bregman Lagrangian and the high-resolution ODE. Specifically, the convergence analysis of the high-resolution ODE (Shi et al.)
leverages a Lyapunov function $\mathcal{E}(t)$ whose time derivative satisfies $$\frac{d \mathcal{E}(t)}{d t} \le - \left[\sqrt{s}t^2 + \left(\frac{1}{L} + \frac{s}{2}\right)t + \frac{\sqrt{s}}{2L}\right]\|\nabla f(X)\|^2 < 0.$$ The Lyapunov function can be interpreted as a form of system energy that involves $\nabla f$, which motivates our design of (5).

We sincerely appreciate the Reviewer's thoughtful feedback and constructive suggestions. Given our clarifications and the additional insights provided, **we hope the Reviewer might reconsider their evaluation and, if appropriate, adjust the score accordingly.**
Summary: This paper explores quantum algorithms for solving unconstrained optimization problems. Given that Nesterov's accelerated gradient descent admits a classical Hamiltonian dynamics interpretation, it is natural to consider leveraging quantum Hamiltonian dynamics for algorithm design. In particular, Leng et al. proposed the Quantum Hamiltonian Descent (QHD) algorithm, which defines a quantum evolution via a time-dependent Schrödinger equation. In QHD, the potential term is proportional to the objective function $f$ and increases with time $t$, while the kinetic energy term decreases with $t$. Building on the intuition from the high-resolution ODE framework by Shi et al., this work extends QHD by incorporating a gradient term of the objective function $f$ into the potential. The resulting algorithm is called gradient-based QHD. The authors then proved a convergence guarantee of gradient-based QHD, developed a quantum algorithm that simulates discrete-time gradient-based QHD, and conducted numerical experiments testing the performance of gradient-based QHD.

Claims And Evidence: 1. Gradient-based QHD converges to a global minimum of the objective function $f$ with an inverse-quadratic convergence rate, and converges to a stationary point with the same convergence rate. 2. In some cases, gradient-based QHD yields solutions that are an order of magnitude better than those obtained by other methods. Both claims appear to be correct; however, I have some concerns about their implications. Please find them below.

Methods And Evaluation Criteria: Please refer to the following two parts.

Theoretical Claims: The proofs for the convergence rates appear correct to me, and the techniques are quite elegant. However, I am concerned about potential overhead hidden in the big-O notation—possibly involving factors related to the dimension of the objective function or the mass of the initial wave function in regions where the function value is small.
Given that, in the worst case, finding the global optimum is NP-hard, assuming P$ \neq$NP, such overhead could be superpolynomially large, making an inverse-quadratic convergence rate insufficient. While a similar issue exists in the convergence analysis of QHD by Leng et al., their convergence rate for convex functions is inverse-exponential, which strongly suggests that the required simulation time remains polynomially bounded in the worst case. Therefore, I believe the authors need to provide further justification for why an inverse-quadratic convergence rate is meaningful. Minor question: In Theorem 6, why is a separate quantum first-order oracle necessary, given that Jordan's gradient estimation algorithm allows us to compute the gradient using a constant number of queries to a quantum function value oracle? Experimental Designs Or Analyses: The experimental setups and results are generally well-presented. However, the scale of these examples, particularly the dimension of the objective function, appears relatively small compared to the instances studied in the QHD paper by Leng et al. Additionally, Leng et al. demonstrated an analog implementation of QHD on the D-Wave system, which can be more efficiently executed on near-term quantum devices than digital implementations. In contrast, this paper does not provide an analogous implementation for gradient-based QHD. Moreover, I have the following questions regarding the setups: 1. What is the advantage of discretizing gradient-based QHD rather than approximating its continuous dynamics using Hamiltonian simulation algorithms, as done in the QHD paper? 2. The computation in each iteration of gradient-based QHD appears significantly more complicated than in NAG, particularly due to the presence of the Laplacian operator. Is it fair to compare their performance based on the same number of iterations? 
Supplementary Material: Appendix A through C Relation To Broader Scientific Literature: This paper proposes a novel idea and enriches the literature on solving unbounded continuous optimization problems using quantum Hamiltonian dynamics. Essential References Not Discussed: Not anything in particular I can think of Other Strengths And Weaknesses: I don't have further comments Other Comments Or Suggestions: Typo in line 096: quantum Hamiton descent -> quantum Hamiltonian descent Questions For Authors: I don't have further questions other than the existing ones above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate Reviewer Wa1N's thorough feedback and valuable insights. In particular, we thank the Reviewer for recognizing our techniques as "elegant" and acknowledging that this work "proposes a novel idea and enriches the literature on solving unbounded continuous optimization problems." We address each of the Reviewer's questions below:

1. **Potential overhead hidden in the big-O notation**: According to our proof of Theorem 1, the detailed convergence rate is (for some $0 < T_0 \le 1/\alpha$): $$\mathbb{E}[f(X_t)] \le \frac{K_0 + D_0}{t^2 + (\gamma - 3\alpha) t},\quad 0 < T_0 \le t,$$
- $K_0 = \langle \Psi(T_0)|(-\Delta)|\Psi(T_0)\rangle / T^4_0$: the initial kinetic energy. Independent of $f$ and typically scales as $O(d)$, e.g., for a standard Gaussian $\Psi_{0}$.
- $D_0 = \mathbb{E}\left[\|\nabla f(X_{T_0})\|^2+4\|X_{T_0}\|^2+(T^2_0+\omega T_0)f(X_{T_0})\right]$: generally scales as $O(d)$ due to $\|\nabla f\|^2$.

Therefore, there might be an additional $O(d)$ overhead in our result. We will add this discussion to the camera-ready version if this paper is accepted.

2. **NP-hardness of global minimization & justification for inverse-quadratic convergence**:
- The inverse-quadratic convergence rate (Theorem 1) is established for general convex $f$. While finding the global optimum of a non-convex objective function is in general NP-hard, for the problem class of interest (i.e., general convex optimization), there exist polynomial-time classical algorithms with query complexity $O(n^2)$ [Lee et al., COLT '18](https://proceedings.mlr.press/v75/lee18a/lee18a.pdf). There is no "super-polynomial overhead" for the problem class we discussed in Theorem 1.
- The $O(t^{-2})$ rate is known to be optimal for classical first-order methods.
Although a direct quantum counterpart has not yet been established, strong evidence suggests that there is no quantum speedup for generic convex optimization (e.g., [Garg et al., 20](https://arxiv.org/abs/2010.01801)). Our convergence rate may be already near optimal.
- Additionally, we emphasize that our numerical results demonstrate a significant advantage of gradient-based QHD for non-convex optimization. This observed performance extends beyond the scope of Theorem 1. To further clarify this distinction, we will explicitly highlight the convexity assumption in Theorems 1 & 4 in the camera-ready version.

3. **Necessity of quantum first-order oracle in Theorem 6**: We agree with the Reviewer that the requirement for a quantum first-order oracle $O_{\nabla f}$ can potentially be eliminated by Jordan's algorithm. However, the query complexity for obtaining an $\epsilon$-approximate gradient scales as $\mathcal{O}(\sqrt{d}/\epsilon)$ without a strong smoothness characterization of $f$ ([Gilyen et al., SODA '19](https://epubs.siam.org/doi/abs/10.1137/1.9781611975482.87)). In this work, we focus on the general convergence properties and leave the integration of quantum gradient estimation for future study.

4. **Numerical experiments are low-dimensional & analog implementation of QHD**: We thank the reviewer for highlighting the feasibility of gradient-based QHD on near-term quantum devices. Unlike vanilla QHD, implementing gradient-based QHD using analog simulators requires an explicit hardware encoding of the Hamiltonian $H_{k,2} \propto \{\nabla, \nabla f\}$. This can be done efficiently for quadratic functions (not necessarily convex). For more sophisticated problems, e.g., higher-order polynomials, the encoding of $H_{k,2}$ must be evaluated on a case-by-case basis but remains feasible. We will include a brief discussion on this point in the camera-ready version if this paper is accepted.

5.
**Advantage of discretizing gradient-based QHD**: The discretization method proposed in this submission utilizes the Trotter product formula, which can be viewed as a quantum simulation algorithm. Our approach, based on the product formula, has a simple structure and is potentially more straightforward to implement. 6. **Fairness in comparing gradient-based QHD and NAG based on the same number of iterations**: We agree with the Reviewer that the iteration steps in gradient-based QHD are more complicated. According to the proof of Lemma 9 and Theorem 6, the query and gate complexity of each iteration in gradient-based QHD scale as $\tilde{O}(d)$. In contrast, while each iteration of NAG requires only a single query to $\nabla f$, its time complexity remains $O(d)$. Therefore, in terms of actual runtime, gradient-based QHD is asymptotically comparable to NAG, making our comparison based on the same iteration count fair. 7. **Typo in line 096**: we have corrected the typo. We sincerely appreciate the Reviewer's thoughtful feedback and constructive suggestions. Given our clarifications and the additional insights provided, **we hope the Reviewer might reconsider their evaluation and, if appropriate, adjust the score accordingly.**
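As a concrete reference point for the NAG comparison above, here is a minimal NAG loop (an illustrative instance of ours, not the paper's experiment) showing the single gradient query per iteration.

```python
import numpy as np

# Minimal Nesterov accelerated gradient (NAG) loop on a convex quadratic
# f(x) = 0.5 * x^T A x, illustrating that each iteration issues exactly one
# gradient query. The problem instance and step size are illustrative.
A = np.diag([1.0, 10.0])
L = 10.0                      # Lipschitz constant of grad f

def grad_f(z):
    return A @ z

x = np.array([1.0, 1.0])
y = x.copy()
h = 1.0 / L
for k in range(1, 201):
    x_next = y - h * grad_f(y)                     # the single gradient query
    y = x_next + (k - 1) / (k + 2) * (x_next - x)  # momentum extrapolation
    x = x_next

print(np.linalg.norm(x))  # approaches the minimizer x* = 0
```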
AGAV-Rater: Adapting Large Multimodal Model for AI-Generated Audio-Visual Quality Assessment
Accept (poster)
Summary: This paper studies using LMMs to assess the quality of AI-generated audio-visual content, evaluating AGAVs along three dimensions: audio perceptual quality, A/V content consistency, and overall A/V quality. The authors introduce a novel AI-generated audio-visual quality assessment dataset, AGAVQA, and propose an LMM-based AGAV quality assessment method, AGAV-Rater. AGAV-Rater demonstrates SOTA performance in multi-dimensional scoring tasks. Compared to 11 audio-visual LMMs (4 open-source and 7 closed-source), AGAV-Rater can better select the optimal AGAV.

Claims And Evidence: The claims made in the paper are generally well-supported by clear and convincing evidence. The paper introduces AGAVQA, a large-scale audio-visual quality assessment dataset, and AGAV-Rater, an LMM-based model for evaluating AIGC audio-visual content. The evidence provided includes: 1. The detailed descriptions of AGAVQA dataset construction. 2. The performance comparisons of AGAV-Rater across multiple datasets (AGAVQA-MOS, TTA, TTM). 3. The ablation studies of AGAV-Rater, including pretraining, scoring methods, and multi-dimensional instructions. 4. The subjective experiments confirming the effectiveness of AGAV-Rater in improving user experience.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria in this paper are well-aligned with the problem of AI-generated audio-visual quality assessment. The proposed model AGAV-Rater outperforms prior approaches in terms of both correlation with human scores and optimal AGAV selection accuracy.

Theoretical Claims: The paper relies on empirical validation through extensive experiments on the AGAVQA dataset, demonstrating performance improvements over baseline methods. The claims regarding the effectiveness of AGAV-Rater are supported by experimental results rather than theoretical derivations.
Experimental Designs Or Analyses: The experimental design in the paper appears well-structured and methodologically sound, including the following aspects: 1. Evaluation on multi-dimensional scoring tasks: The study evaluates AGAV-Rater using three dimensions: audio quality, audio-visual content consistency, and overall audio-visual quality. 2. Evaluation on optimal AGAV selection tasks: The performance is measured by answer accuracy in selecting the optimal AGAV. 3. Ablation study on pretraining, scoring method, and instruction design. 4. The subjective experiments confirm that AGAV-Rater enhances the user experience of AGAVs.

Supplementary Material: The supplementary material primarily consists of additional details about AGAVQA dataset construction, experimental setup, and baseline methods.

Relation To Broader Scientific Literature: The paper makes several key contributions that are well-aligned with broader trends in audio-visual quality assessment and large multimodal models, including: 1. AGAVQA is the first dataset specifically designed for AI-generated audio-visual quality assessment, distinguishing it from traditional AVQA datasets focused on compression and transmission artifacts. 2. AGAV-Rater integrates LMMs, improving semantic-level quality assessment, an area where traditional AVQA models are weak. 3. AGAV-Rater is the first LMM-based model specifically designed for AIGC AVQA, bridging the gap between general LMMs and perceptual quality assessment.

Essential References Not Discussed: No

Other Strengths And Weaknesses: The paper’s strengths are as follows: S1) In terms of originality. The paper establishes an AI-generated audio-visual quality assessment dataset and, based on this, proposes the AGAV-Rater to evaluate the quality of AI-generated audio-visual content. This is a novel contribution to the field. S2) In terms of rationality.
The paper thoroughly validates AGAV-Rater’s performance through extensive experiments, including multi-dimensional scoring, optimal AGAV selection, and enhancing the user experience of Elevenlabs. S3) In terms of importance. This work fills a gap in AI-generated audio-visual quality assessment, contributing to the advancement of AIGC quality assessment and VTA methods. S4) In terms of structure. The paper is well-written, well-organized, and clearly structured. The paper’s weaknesses are as follows: W1) Lack of detailed introduction of evaluation metrics: The evaluation metrics utilized in the experiments, such as SRCC, PLCC, KRCC, and RMSE, are not introduced in the paper. Providing a brief explanation of these metrics would enhance clarity, especially for readers unfamiliar with them. W2) Insufficient description of the datasets: The use of datasets is crucial in the experiments, but I do not see a detailed introduction to the TTA and TTM datasets. W3) Additional experimental data would aid understanding: In Tab. 3, including the accuracy of random selection as a baseline would provide valuable context for understanding the model's performance. Other Comments Or Suggestions: My suggestions are already included in the weaknesses discussed earlier. Questions For Authors: I have some questions about the paper: Q1) The authors conduct cross-dataset experiments on the AGAVQA-Pair subset by training the AGAV-Rater on the AGAVQA-MOS subset. However, the cross-dataset performance of the SOTA methods in Tab. 1 is not shown. For example, how does VAST perform on AGAVQA-Pair when it is trained on AGAVQA-MOS? Q2) The author selects optimal AGAVs for 230 silent AIGC videos to enhance the user experience. Could you provide more details of these 230 videos? More diverse videos could better validate the generalizability of AGAV-Rater. Q3) How is the AGAV-Rater method tested on AGAVQA-Pair? 
Is it evaluated using a multi-input comparison approach or a single-input scoring approach? Q4) In Equation 1, why do the authors utilize excellent, good, fair, poor, and bad as the standard text quality levels? Code Of Conduct: Affirmed. Overall Recommendation: 5
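Regarding Q4 above (why Eq. 1 uses the five levels excellent/good/fair/poor/bad): a common way LMM-based quality assessors turn such text-defined levels into a scalar is a softmax-weighted expectation over the level tokens. The sketch below illustrates that general technique with made-up logits; it is our hedged illustration, not necessarily the paper's exact Eq. 1.

```python
import numpy as np

# Hedged sketch: map an LMM's logits over five text-defined quality levels to a
# scalar score via a softmax-weighted expectation. The logits are made-up
# numbers; the weights 5..1 follow the standard ITU five-level scale.
levels = ["excellent", "good", "fair", "poor", "bad"]
weights = np.array([5.0, 4.0, 3.0, 2.0, 1.0])
logits = np.array([0.2, 1.5, 0.7, -0.3, -1.0])   # hypothetical LMM logits

probs = np.exp(logits) / np.exp(logits).sum()    # softmax over level tokens
score = float(probs @ weights)
print(score)                                     # a value in [1, 5]
```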
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the constructive and valuable feedback. We have addressed your concerns point-by-point below.

**1. Definition of evaluation metrics**

SRCC and KRCC measure the prediction monotonicity, while PLCC and RMSE measure the prediction accuracy. Better AGAVQA methods should have larger SRCC, KRCC, and PLCC values, and smaller RMSE values. SRCC can be formulated as: $SRCC = 1-\frac{6\sum^N_{n=1}(v_n-p_n)^2}{N(N^2-1)}$, where $v_n$ and $p_n$ denote the ranks of the MOSs and predicted scores, respectively. For a pair of ranks $(v_i,p_i)$ and $(v_j,p_j)$, the pair is concordant if $(v_i-v_j)(p_i-p_j)>0$, and discordant if $(v_i-v_j)(p_i-p_j)<0$. KRCC is defined as: $KRCC=\frac{C-D}{N(N-1)/2}$, where $N$ is the number of AGAVs, and $C$ and $D$ denote the number of concordant and discordant pairs, respectively. PLCC and RMSE are calculated as: $PLCC = \frac{\sum^N_{n=1}(y_n-\overline{y})(\widehat{y}_n-\overline{\widehat{y}})}{\sqrt{\sum^N_{n=1}(y_n-\overline{y})^2\sum^N_{n=1}(\widehat{y}_n-\overline{\widehat{y}})^2}}$, $RMSE = \sqrt{\frac{1}{N} \sum_{n=1}^{N} (y_n - \hat{y}_n)^2}$, where $y$ and $\widehat{y}$ denote the MOSs and predicted scores, respectively, and $\overline{y}$ and $\overline{\widehat{y}}$ are the means of the MOSs and predicted scores.

**2. Introduction to TTA and TTM datasets**

TTA and TTM were proposed by [5]. The TTA dataset generates 500 audio samples from 100 prompts using 5 text-to-audio generation methods. Subjects were then invited to rate the quality of the audio and its relevance to the provided description. Similarly, the TTM dataset generates 500 music samples from 100 prompts using 5 text-to-music generation methods. Subjects were asked to rate the quality of the music and its relevance to the provided description.

**3. More experiments**

On the AGAVQA-Pair subset, AGAV-Rater is tested using a single-input scoring approach.
We present the accuracy of fine-tuned audio-video alignment methods on the AGAVQA-Pair subset and the accuracy of random selection: Method |SonicVisionLM | Frieren | V2AMapper | TIVA | V2A-SceneDetector | STAV2A | SSV2A | ReWaS | All :-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-: Random | 0.33 | 0.20 | 0.41 | 0.20 | 0.50 | 0.25 | 0.20 | 0.25 | 0.32 AVIDCMA |0.29 | 0.58 | 0.61 | 0.50 | **0.71** | 0.50 | 0.40 | 0.44 | 0.52 VALOR | **1.00** | 0.75 | 0.72 | 0.70 | **0.71** | **0.70** | 0.40 | 0.44 | 0.55 VAST | 0.86 | 0.83 | 0.78 | **0.80** | 0.43 | 0.40 | 0.40 | **0.56** | 0.64 AGAV-Rater | **1.00** | **0.92** | **0.83** | **0.80** | **0.71** | **0.70** | **0.60** | **0.56** | **0.78** As can be seen, AGAV-Rater achieves the highest accuracy in each category. **4. Distribution of AIGC videos in section 5.6** Thank you for your suggestion. We statistically analyze the distribution of the 230 AIGC video contents in Section 5.6 (in the manuscript), and the results are as follows: Animal| Water | People | Vehicle | Object | Scenery | Sea | Fantasy | Fire | Instrument | Cooking :-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-: 41 | 15 | 20 | 22 | 31 | 19 | 20 | 11 | 15 | 24 | 12 As shown in the table, the video content is quite diverse, allowing for a comprehensive evaluation of AGAV-Rater's performance across different types of video content. **5. Explanation of standard text quality levels** Researchers have found that "good" and "poor" are the most frequently predicted tokens by LMM models when addressing quality issues. We then use the standard text rating levels defined by ITU [6]—excellent, good, fair, poor, and bad—to further refine the quality levels corresponding to these tokens. **References** [1] Hayes, A. F. and Krippendorff, K. Answering the call for a standard reliability measure for coding data. Commun. Methods Meas., 1(1):77–89, 2007. [2] Han, J., Gong, K., Zhang, Y., Wang, J., Zhang, K., Lin, D., Qiao, Y., Gao, P., and Yue, X. 
OneLLM: One framework to align all modalities with language. In CVPR, pp. 26584–26595, 2024. [3] Li, Z., Xu, Q., Zhang, D., Song, H., Cai, Y., Qi, Q., Zhou, R., Pan, J., Li, Z., Tu, V., et al. GroundingGPT: Language enhanced multi-modal grounding model. In ACL, pp. 6657–6678, 2024. [4] R. I.-R. BT, Methodology for the subjective assessment of the quality of television pictures. ITU, 2002. [5] Deshmukh, S., Alharthi, D., Elizalde, B., Gamper, H., Ismail, M. A., Singh, R., Raj, B., and Wang, H. PAM: Prompting audio-language models for audio quality assessment. arXiv preprint arXiv:2402.00282, 2024. [6] Recommendation 500-10: Methodology for the subjective assessment of the quality of television pictures. ITU-R Rec. BT.500, 2000.

---

Rebuttal Comment 1.1:

Comment: Thanks to the authors for the detailed response. Overall, this paper is essential for the study of AIGC audio-visual quality, and I am willing to increase the score. I hope the above response can be added to the final version. Additionally, there are a couple of minor issues: 1. Does the score in the TTM dataset consider the beauty of music? 2. Could the authors further provide the performance of AGAV-Rater for each category in the 230 videos?

---

Reply to Comment 1.1.1:

Comment: We sincerely thank the reviewer for reading our response and raising the score. We will include the above responses in the final manuscript. Below are further answers to the reviewer's questions:

1. **Evaluation dimensions in the TTM dataset**

The TTM dataset does not include an evaluation of music's aesthetic quality. Since music aesthetic quality assessment is heavily influenced by personal preferences and the subject's taste in different music styles, it is difficult to achieve objective evaluation. Therefore, in the AGAVQA-MOS, TTA, and TTM datasets, the audio quality dimension primarily focuses on the quality and realism of the audio.

2.
**Performance of AGAV-Rater for each category in Section 5.6** We categorized the 230 videos in Section 5.6 into 11 categories. Below, we further present the accuracy of AGAV-Rater in identifying higher-quality AGAVs across these categories: Animal| Water | People | Vehicle | Object | Scenery | Sea | Fantasy | Fire | Instrument | Cooking | All :-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-: 0.78|0.87|0.85|0.77|0.81|0.79|0.80|0.82|0.80|0.83|0.75|0.80 As shown in the table, AGAV-Rater achieves over 75% accuracy in each category.
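As a quick sanity check on the table above, the overall ("All") accuracy of 0.80 is consistent with the count-weighted mean of the per-category accuracies, using the 230-video category counts reported in point 4 of the original rebuttal. The small Python computation below is illustrative; since the per-category accuracies are rounded, the match is approximate:

```python
# Category counts from point 4 of the rebuttal (they sum to 230) and the
# per-category accuracies of AGAV-Rater reported above, in the same order:
# Animal, Water, People, Vehicle, Object, Scenery, Sea, Fantasy, Fire,
# Instrument, Cooking.
counts = [41, 15, 20, 22, 31, 19, 20, 11, 15, 24, 12]
acc = [0.78, 0.87, 0.85, 0.77, 0.81, 0.79, 0.80, 0.82, 0.80, 0.83, 0.75]

# Count-weighted mean accuracy over all 230 videos.
overall = sum(c * a for c, a in zip(counts, acc)) / sum(counts)
print(round(overall, 2))  # prints 0.8, matching the reported "All" accuracy
```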
Summary: This paper introduces an AI-generated audio-visual (AGAV) quality assessment dataset (AGAVQA) and AGAV-Rater, a large multimodal model (LMM)-based approach for evaluating AGAV. The AGAVQA dataset contains two subsets: AGAVQA-MOS (multi-dimensional score prediction) and AGAVQA-Pair (optimal AGAV selection). AGAV-Rater is trained using a two-stage process—pre-training with automatically labeled text-defined quality levels and fine-tuning with human-annotated numerical scores. The model achieves state-of-the-art performance on AGAVQA, text-to-audio (TTA), and text-to-music (TTM) datasets, surpassing traditional AVQA and AQA methods as well as general-purpose LMMs. Claims And Evidence: The paper does not provide a detailed correlation analysis among the Audio Quality, Content Consistency, and Overall Quality dimensions in the AGAVQA-MOS dataset. Such an analysis is crucial for understanding dependencies between these dimensions and would improve interpretability. Without this, it is unclear whether Overall Quality is primarily influenced by Audio Quality or Content Consistency (or both), limiting insights into how AGAV-Rater makes its predictions. Methods And Evaluation Criteria: Audio-video alignment methods (e.g., VAST) achieve strong results on AGAVQA-MOS, raising questions about whether the dataset primarily measures alignment quality rather than broader quality aspects. If AGAVQA-MOS is dominated by alignment factors, then AGAV-Rater’s superior performance might be due to its ability to model alignment, rather than providing a generalizable AGAV quality metric. A deeper analysis of how AGAV-Rater differs from alignment-based methods (e.g., VAST, VALOR) is needed to clarify whether AGAVQA-MOS is capturing a diverse range of distortions beyond alignment. Theoretical Claims: I don't think there are any theoretical claims, since the paper focuses on a new dataset and on evaluating audio-visual quality via the proposed two-stage training process. 
Experimental Designs Or Analyses: While the paper compares AGAV-Rater to prior LMMs (e.g., VideoLLaMA2), these models cannot be fine-tuned, making the comparison somewhat unfair. It remains unclear whether AGAV-Rater’s advantage comes from its model architecture or its dataset adaptation. In addition, the audio quality or audio-video alignment could be related to how the audio is processed, e.g., how the audio is encoded and how audio and video are mixed. I wonder whether the proposed method would fit all of these different processing choices. Supplementary Material: Yes, the entire content presented on pages 12-15 in the paper has been reviewed. Relation To Broader Scientific Literature: The paper presents the first large-scale AGAV quality assessment dataset, comprising 3,382 AGAVs from 16 VTA methods, which is of great interest to both industry and academic research communities. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: This paper addresses the quality assessment of AI-generated audio-visual (AGAV) content by proposing a novel dataset (AGAVQA) and a multimodal model (AGAV-Rater), and experimental results show notable improvements over other related methods across multiple evaluation dimensions. Weaknesses: 1. The paper lacks a detailed correlation analysis among the Audio Quality, Content Consistency, and Overall Quality dimensions in the AGAVQA-MOS dataset, which limits the interpretability of results and insights into dimension dependencies. 2. There is no adequate analysis of the distinctions between AGAV-Rater and alignment methods, even though audio-video alignment methods (e.g., VAST) demonstrate strong performance. Given VAST's competitive results, this raises questions about whether the AGAVQA-MOS dataset primarily emphasizes audio-video alignment quality rather than broader quality aspects. 3. The human evaluation process is insufficiently described. 
Critical information such as annotator backgrounds and training procedures is missing, raising concerns about annotation robustness and representativeness. Specifically, the authors do not discuss the annotation quality clearly as well, such as the consistency of ratings across annotators, individual differences in subjective perception, or how potential biases and variances were controlled or mitigated. 4. AGAVQA-Pair dataset evaluation suffers from a notably limited scale (only 75 question-answer pairs) and simplistic annotation (best-of-pair selection), undermining its effectiveness for reliably assessing model generalization. 5. Fig. 2 step1 indicates a significant data issue: the "Kling" video source appears twice with different sample counts, raising severe concerns regarding dataset accuracy and reliability. 6. The paper needs clearer motivation, reasoning, and a stronger discussion of how it differs from prior work. Overall, it is difficult to read. Other Comments Or Suggestions: 1. Several grammatical errors were found and the whole paper could be improved for clarity and readability. Such as 'It label AGAVs quality in two ways' -> 'It labels AGAVs' quality in two ways'; 'Our core contributions can be summarized as three-fold'->'threefold", and better to use either 'Our core contributions are threefold' or 'Our core contributions can be summarized in three ways'. 'AGAV-Rater demonstrates superior score prediction capabilities on the AGAVQA- MOS, TTA, and TTM datasets, and achieves the highest accuracy in identifying the optimal AGAV on the AGAVQA- Pair subset AGAV-Rater offer users a better audio-visual experience and enhance the quality of VTA method outputs.' -> 'offers' and 'enhances'. Moreover, the original combines multiple ideas without proper separation. It would be good to add a comma before the second AGAV-Rater. Questions For Authors: n/a Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the constructive and valuable feedback. We have addressed your concerns point-by-point below. **1. Correlation of 3-dimensional MOSs** **SRCC between audio quality and content consistency is 0.6860, indicating that the two dimensions are relatively independent**. SRCC between audio quality and overall quality is 0.7876, and between content consistency and overall quality is 0.7926, suggesting that **overall quality is influenced by both audio quality and content consistency**. **2. Differences between AGAV-Rater and audio-video alignment methods** The key difference between AGAV-Rater and VAST is that **AGAV-Rater utilizes the semantic understanding ability of the LLM**, improving performance. Although audio-video alignment methods align features semantically, we fine-tuned them on the AGAVQA-MOS subset, mapping audio and video features to quality dimensions. After fine-tuning, these features contain both alignment information and quality information. Therefore, the suboptimal performance of alignment methods does not imply that AGAVQA-MOS emphasizes alignment. **3. More experiments** We conduct ablation studies to compare AGAV-Rater with fine-tuned LMMs, with a detailed analysis in Section 3 of the response to Reviewer R3k7. We also test on the AVQA dataset, SJTU-UAV, focusing on real-world user capture distortions. Method | SRCC | PLCC :-|-:|:- DNN-RNT (TIP 2023) | 0.7125 | 0.7253 GeneralAVQA (TIP 2023) | 0.7753 | 0.7827 AGAV-Rater | 0.7955 | 0.8052 AGAV-Rater achieves the best performance on SJTU-UAV, proving that **its superiority comes from the model framework and training, not from dataset adaptability**. **4. Details of audio processing** AGAV-Rater uses default parameters from VideoLLaMA2 for audio preprocessing. Assuming $T$ video frames are extracted, the steps are: 1. Divide audio into $T$ segments. 2. Concatenate all segments, then crop or zero-pad to a fixed length. 3. 
Transform into fbank spectrograms with 128 frequency bins. 4. Use BEATs and an MLP block to extract features from the spectrograms. 5. Concatenate audio, video, and text features, and input them into the LLM. **5. Details of human evaluation** We invited subjects familiar with AVQA and AGAV for on-site training. We provided detailed explanations of the scoring criteria for each dimension and additional AGAV samples for practice. Experts then reviewed the annotations and selected 15 subjects. To prevent fatigue, each subject rated a maximum of 60 samples per day, completing the task in about two months. We used the **ITU-recommended MOS processing method** [4], and no subjects were identified as outliers. **Krippendorff's α** [1] for audio quality, content consistency, and overall quality are 0.6814, 0.7343, and 0.7143, respectively, indicating appropriate variations among subjects. We also randomly divide subjects into two groups and calculate the **SRCC of average scores between the two groups**. After ten repetitions, the average SRCC for audio quality, content consistency, and overall quality are 0.8043, 0.8318, and 0.8297, validating rating consistency. **6. Supplement to the AGAVQA-Pair subset** Due to the lack of public AGAVQA datasets, the AGAVQA-Pair subset was collected from 8 VTA webpages released in the past year. **Its significance lies in the fact that, as a third-party platform, it offers a more objective and impartial dataset.** **Best-of-pair selection is more reliable than scoring tasks**. Subjects show greater consistency and confidence in determining the optimal sample. In practical applications, identifying the optimal AGAV may be enough. **We use 230 AGAV pairs collected in Section 5.6 (in the manuscript) to further validate generalization.** The accuracy of AGAV-Rater, along with fine-tuned VAST, VALOR, and AVID-CMA, is 80%, 74%, 69%, and 68%, respectively. **7. Modification of Fig. 2** We apologize for the error in Fig. 2. 
There are 45 Kling video sources in total, with 23 from the Kling official website, and 22 from the video generation benchmark Vbench. This will be corrected in the final manuscript. **8. Motivation of our paper** Previous AVQA and AQA work focused on **real-world capture or compression distortions**. Audio-video alignment methods targeted **semantic alignment in real scenarios**. LMM-based quality assessment **primarily centered on visual quality, with a limited focus on audio**. As AIGC video technology advances, more research explores dubbing techniques. The motivation of our paper is that the quality of AGAVs needs to be monitored and controlled. In the AGAVQA-MOS subset, both audio and video content are AI-generated. We aim to use LMMs to evaluate AGAV quality, replacing human subjective scoring to enhance efficiency and enable automation. **9. Correction of grammar errors** Thank you for pointing out our grammatical errors. We will correct these mistakes in the final manuscript. **References** Please refer to the Response to Reviewer h1V5.
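For concreteness, steps 1–2 of the audio preprocessing described in point 4 (segment, re-concatenate, then crop or zero-pad) can be sketched as below. This is an illustrative numpy sketch, not VideoLLaMA2's actual code; the function name and `fixed_len` parameter are made up for illustration, and the fbank/BEATs stages are omitted:

```python
import numpy as np

def preprocess_audio(wave: np.ndarray, t_frames: int, fixed_len: int) -> np.ndarray:
    """Steps 1-2 of point 4: split a mono waveform into t_frames segments,
    concatenate them back, then crop or zero-pad to fixed_len samples."""
    # Step 1: divide the audio into t_frames (near-)equal segments.
    segments = np.array_split(wave, t_frames)
    # Step 2: concatenate all segments (order is preserved, so this
    # reproduces the original sequence), then crop or zero-pad.
    concat = np.concatenate(segments)
    if len(concat) >= fixed_len:
        return concat[:fixed_len]
    return np.pad(concat, (0, fixed_len - len(concat)))
```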
Summary: This work introduces a new quality assessment dataset and network for the AI-Generated Audio-Visual task. The database additionally handles multimodal challenges like A/V content inconsistency, and the quality assessment model leverages an LMM to predict multi-dimensional scores. Claims And Evidence: The claims made in the submission are supported by detailed experiments. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the assessment of audio-visual quality. Theoretical Claims: I checked the proofs and formulas in the method section, and there are no issues. Experimental Designs Or Analyses: The overall experimental design is mostly fine, with only a few details that need to be confirmed. Also, the ablation experiments can include more base models. Supplementary Material: I have reviewed the appendix. Relation To Broader Scientific Literature: This work may provide some insightful implications for the evaluation of audio-visual quality consistency, the application of large multi-modality models in quality assessment methods, and the development of improved video-to-audio methods. Essential References Not Discussed: In my opinion, there are no missing essential references. Other Strengths And Weaknesses: I do not find major issues in this work overall, except for some minor details: 1. Are there any failure cases during the data auto-labeling process? What is the error rate approximately? 2. Please show the variance in human scores during the subjective experiment. 3. The ablation section could be further expanded, such as by including experiments with different base models in the ablation study. 4. More cases could be shown to further demonstrate the effectiveness, such as the sample and the corresponding score rated by the proposed QA model. Other Comments Or Suggestions: Please refer to the weaknesses. Questions For Authors: Please refer to the weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the constructive and valuable feedback. We have addressed your concerns point-by-point below. **1. Details of the auto-labeling process** We manually verify 500 auto-labeling results. Among them, the accuracy for content consistency-related instruction-response pairs is $100$%, while the accuracy for audio quality-related instruction-response pairs is $92$%. In content consistency-related instruction-response pairs, when the consistency quality is labeled as "bad", we ensure that **audio (text) and video from different categories are paired to achieve high accuracy**. In audio quality-related instruction-response pairs, for noisy audio types, such as machine sounds or wind noise, the reverse operation has a minimal negative impact on audio quality. We have **utilized category labels to filter out certain audio quality-related instruction-response pairs**, such as hair dryer drying and pumping water, to minimize the error rate. **2. Analysis of human scores** The range of MOSs is $[0,100]$. We categorize the video content into 11 main audio sources and then calculate the standard deviation of the overall quality scores among 15 subjects. We compute the average standard deviation for each category: Animal| Water | People | Vehicle | Object | Scenery | Sea | Fantasy | Fire | Instrument | Cooking :-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-: 12.56 | 12.16 | 12.20 | 11.79 | 12.57 | 12.74 | 12.37 | **13.11** | 12.35 | 11.93 | 12.23 **The "Fantasy" category shows the highest standard deviation**, as it represents unreal scenarios, leading to more diverse interpretations among subjects. Krippendorff's α [1] can be used to measure the quality of the subjects' ratings. We calculate **Krippendorff's α** for audio quality, content consistency, and overall quality, which are 0.6814, 0.7343, and 0.7143, respectively, indicating appropriate variations among subjects. 
We also randomly divide subjects into two groups and calculate the **SRCC of average scores between the two groups**. After ten repetitions, the average SRCC for audio quality, content consistency, and overall quality are 0.8043, 0.8318, and 0.8297, validating rating consistency. **3. Ablation study of the base models** We conduct ablation studies using GroundingGPT [2] and OneLLM [3] as base models on the AGAVQA-MOS subset. For OneLLM and GroundingGPT, we first load the default weights, and then fine-tune them using the official training code on the AGAVQA-MOS subset. To ensure fairness, we also add the quality regression module, directly regressing the LLM's last hidden states to output three-dimensional numerical scores. The results are as follows: Method | Audio Quality SRCC | Audio Quality PLCC | Content Consistency SRCC | Content Consistency PLCC | Overall Quality SRCC | Overall Quality PLCC :-|:-:|:-:|:-:|:-:|:-:|:-: GroundingGPT (ACL 2024) | 0.4387 | 0.4494 | 0.5067 | 0.4764 | 0.4975 | 0.5297 OneLLM (CVPR 2024) | 0.6578 | 0.6879 | 0.6184 | 0.6297 | 0.6327 | 0.6388 AGAV-Rater | **0.7909** | **0.8108** | **0.7553** | **0.7645** | **0.7458** | **0.7552** As shown in the experimental results, AGAV-Rater achieves the best performance. The main reason for this is that VideoLLaMA2 is designed for audio-video content understanding and pre-trained on more diverse audio-video datasets, making it more suitable for our quality assessment task. GroundingGPT focuses more on localization and visual understanding and is not designed or trained to understand continuous audio-video content. Its ability to comprehend video quality may be weaker. OneLLM is a general multimodal model that, while supporting audio and video processing, is not specifically optimized or enhanced for video and audio alignment. Its audio-related dataset only includes audio-text data, and OneLLM is more suited to text-vision or text-audio matching and understanding, rather than specific audio-video content. **4. 
More case displays** Thank you for your suggestion. We have added more cases, showing the samples and the corresponding scores rated by AGAV-Rater, on the project page (https://agav-rater.github.io). **References** Please refer to the Response to Reviewer h1V5.
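The split-half consistency check described in point 2 (randomly dividing subjects into two groups and correlating the groups' average scores) can be sketched as follows. This is a minimal numpy sketch, not the authors' code; rank ties are broken arbitrarily rather than averaged, which is a simplification:

```python
import numpy as np

def spearman(a: np.ndarray, b: np.ndarray) -> float:
    """Spearman correlation as the Pearson correlation of ranks
    (ties broken arbitrarily -- a simplification)."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return float(np.corrcoef(ra, rb)[0, 1])

def split_half_srcc(scores: np.ndarray, repeats: int = 10, seed: int = 0) -> float:
    """scores: [num_subjects, num_samples] rating matrix. Randomly split
    subjects into two groups, average each group's scores per sample,
    compute the SRCC between the two averages, and return the mean
    over `repeats` random splits."""
    rng = np.random.default_rng(seed)
    n_subj = scores.shape[0]
    vals = []
    for _ in range(repeats):
        perm = rng.permutation(n_subj)
        g1, g2 = perm[: n_subj // 2], perm[n_subj // 2 :]
        vals.append(spearman(scores[g1].mean(axis=0), scores[g2].mean(axis=0)))
    return float(np.mean(vals))
```

When every subject produces the same ranking, the split-half SRCC is exactly 1; values around 0.80–0.83, as reported, indicate strong but imperfect agreement.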
Summary: This paper addresses a challenging and important question for the VTA methods: whether LMMs can be utilized to assess the quality of audio-visual content generated by VTA methods. To tackle this problem, the authors first establish a large-scale AGAV quality assessment dataset, AGAVQA, which includes two subsets: AGAVQA-MOS: Contains 9,264 MOS scores for 3,088 AGAVs. AGAVQA-Pair: Contains 75 question-answer pairs for 294 AGAVs. Then, this work introduces the AGAV-Rater, a LMM-based quality assessment method for AI-generated audio-visual content. AGAV-Rater can provide multi-dimensional scores for AGAVs, TTAs, and TTMs. Extensive experiments validate the performance of the AGAV-Rater in predicting multi-dimensional quality scores for AGAVs, TTA, and TTM, and assisting VTA methods in selecting the optimal AGAV samples. ### After rebuttal ### I have read the rebuttal, and my concerns have been well solved. Thus, I tend to keep my original score. Claims And Evidence: Yes, the authors demonstrate the effectiveness of their proposed model AGAV-Rater in predicting multi-dimensional quality scores on three datasets. This provides clear evidence that their proposed model can adapt LMM for AI-generated audio-visual quality assessment. Additionally, the authors further validate their claims through subjective experiments, showing that their model, AGAV-Rater, can effectively assist video-to-audio methods in selecting the optimal AGAV samples. Methods And Evaluation Criteria: Yes, the AGAV quality assessment dataset, AGAVQA, established by the authors, contributes significantly to the advancement of the AIGC audio-visual quality assessment field. The model proposed by the authors is theoretically sound and can be effectively applied to evaluate the quality of AI-generated audio–visual content, as well as to identify the optimal AIGC audio-visual samples. Theoretical Claims: Yes, the process of constructing the AGAVQA dataset is rigorous and well-justified. 
The proposed model, based on the large multimodal model VideoLLaMA2, is theoretically feasible. Experimental Designs Or Analyses: Yes, the authors validated their proposed model on three datasets: AGAVQA, TTA, and TTM, demonstrating its ability to provide multi-dimensional scores for AIGC audio-visual, audio, and music content. Additionally, the authors conduct cross-dataset validation experiments, comparing the accuracy of their proposed model with closed-source LMMs in the optimal AGAV selection task. These experimental designs are reasonable and effective. Furthermore, the authors invited participants to verify that the AGAV samples selected by their model enhance the viewing experience, providing a more rigorous validation of the model’s performance through user experience. Supplementary Material: Yes, I carefully reviewed the Appendix provided by the authors. It offers a more detailed explanation of the construction process of AGAVQA, including the collection of AIGC videos, the VTA methods employed, and the analysis of subjective scores. Additionally, the appendix provides information on the inference latency and throughput of the AGAV-Rater. Relation To Broader Scientific Literature: The primary contribution of this paper lies in constructing a large-scale AGAV quality assessment dataset, AGAVQA, which significantly advances the field of AI-generated audio-visual (AIGC) quality assessment. Previous research has predominantly focused on AIGC images and videos, making this work a pivotal step in addressing the gap in quality assessment for AIGC audio-visual content. Furthermore, the authors propose a LMM-based quality assessment method for AI-generated audio-visual content, providing a novel solution for evaluating the quality of audio generated from video or text inputs. This contribution extends the broader scientific literature on AIGC quality evaluation. 
Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: This paper is motivated by the development of VTA methods and LMMs. The authors raise a novel issue in current quality assessment. 1. The authors establish a large-scale AGAV quality assessment dataset that includes AGAVs generated by 16 different VTA methods. The dataset is rich and diverse, facilitating the development of VTA methods. 2. The novel LMM-based AGAV quality assessment method, AGAV-Rater, proposed in the paper, enables multi-dimensional scoring for AGAVs, TTA, and TTM. The authors also conduct extensive experiments, demonstrating that the model can be applied to real-world VTA methods to enhance user viewing experiences. 3. The analysis of multi-dimensional instructions in the experiments is quite interesting. It provides insights for future multi-dimensional quality assessments and can be easily implemented to improve the performance of quality assessment methods with multi-dimensional scoring. Weaknesses: The main weaknesses of the paper lie in some unclear explanations, which may confuse readers. 1. The paper lacks some details in the human evaluation section, such as a more detailed display of the scoring interface and the instructions given to subjects in Fig. 2. 2. The paper does not provide details on the time required to train the AGAV-Rater. Understanding the computational cost and training efficiency is crucial for practical implementation. 3. The paper lacks an introduction to the comparison methods. How did the authors train the multimodal alignment-based methods on the AGAVQA-MOS subset in Tab. 1 and Tab. 2? These methods are not specifically designed for quality assessment, and their original structure cannot directly output a one-dimensional quality score. 4. The paper lacks a detailed explanation of the 230 silent AIGC videos selected in Section 5.6. 
Although the authors demonstrate some AGAV samples on the project page, there is no description of the content distribution of these AIGC videos. Other Comments Or Suggestions: 1. Labeling the boxes in Fig. 2 as (a), (b), and (c) would help readers better understand the figure. 2. The paper should provide a detailed definition of the loss function used during the training of AGAV-Rater. Questions For Authors: 1. What specific instructions were given to the human subjects during the testing session? Additionally, how long did it take for the subjects to complete the testing phase? 2. In the 50,952 instruction-response pairs designed in the paper, what is the proportion of each scenario (audio-video, audio-text, and music-text)? Can you also analyze why the consistency dimension shows a relatively large improvement in the TTM dataset during the pre-training step in Tab. 4? 3. The description of the category in Tab. 3 is confusing. How did the authors classify the AGAVQA-Pair subset into 8 categories? 4. AGAV-Rater is trained on the AGAVQA-MOS and demonstrated cross-dataset performance on the AGAVQA-Pair. Are the compared methods in Tab. 3 finetuned on the AGAVQA-MOS? Or are the original model parameters directly utilized? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the constructive and valuable feedback. We have addressed your concerns point-by-point below. **1. Details of human evaluation** We invited subjects familiar with AVQA and AGAV for on-site training. We provided detailed explanations of the scoring criteria for each dimension and additional AGAV samples for practice. Experts then reviewed the annotations and selected 15 subjects. To prevent fatigue, each subject rated a maximum of 60 samples per day, completing the task in about two months. The official testing phase was conducted in a controlled lab with normal indoor lighting, quiet surroundings, and subjects sitting at a comfortable distance of about 60 cm from the screen. The AGAVs were played at their original resolution. **The scoring interface consisted of three continuous quality rating bars and three navigation buttons.** Each rating bar was labeled with a 1-5 Likert scale. Navigation buttons, including "Prev", "Repeat", and "Next", allowed subjects to switch and replay AGAVs freely. In the final manuscript, we will add images of the scoring interface and detailed documentation of the scoring criteria provided to subjects. **2. Training duration** We trained AGAV-Rater on two 96GB H20 GPUs, with training epochs set to $5$ on the AGAVQA-MOS subset, taking approximately **$5$ hours**. **3. Details of audio-video alignment methods** Original audio-video alignment methods extract audio and video features using their encoders, then align them into a common vector space. We use these encoders with default parameters to extract audio and video features and then concatenate features. **The concatenated features are fed into a fully connected layer with an output dimension of 3** to predict three-dimensional scores. **4. Distribution of AIGC videos in Section 5.6** Due to word limitations, the distribution of AIGC videos is described in Section 4 of the response to Reviewer h1V5. 
We apologize for any inconvenience. **5. Modification of Fig. 2** Thank you for your suggestion. In the final manuscript, we will add (a), (b), and (c) to Fig. 2 to help readers quickly understand its content. **6. Introduction to the loss function** We use the PLCC loss to optimize the AGAV-Rater: $L=(1-\frac{\left<\widehat{s}-mean(\widehat{s}), s-mean(s)\right>}{\lVert\widehat{s}-mean(\widehat{s})\rVert_2\lVert s-mean(s)\rVert_2})/2$, where $s$ and $\widehat{s}$ are the vectors of MOSs and predicted scores of AGAVs in a batch respectively, $\left<\cdot\right>$ represents the inner product of two vectors, $\lVert\cdot\rVert$ denotes the norm operator for a vector, and $mean$ is the average operator for a vector. **7. Details of instruction-response pairs** In the 50,952 instruction-response pairs, the audio-video, audio-text, and music-text scenarios contain 25,592, 19,000, and 6,000 pairs, respectively. Under each scenario, half of the pairs focus on content consistency, and the other half on audio quality. **8. Analysis of Table 4** AGAV-Rater uses VideoLLaMA2 as the base model, and the training set used by VideoLLaMA2 mainly focuses on audio, with relatively less on music. In Table 2 (in the manuscript), it can be seen that its ability to perceive music-text quality is weaker compared to audio-video and audio-text. Although the music-text scenario has the fewest instruction-response pairs, we repeat these pairs twice during pre-training to give AGAV-Rater more training exposure to the music-text scenario. Therefore, **the music-text instruction-response pairs enhance the music perception ability of AGAV-Rater, improving the performance of the consistency dimension on the TTM dataset.** **9. Supplement to Table 3** **The AGAVQA-Pair subset was collected from 8 VTA webpages**, and we divide it into 8 corresponding categories. For each category, the optimal AGAV is generated by the corresponding VTA method. 
We chose to collect the AGAVQA-Pair subset from VTA webpages because these AGAVs, sourced from third-party platforms, offer a more objective and impartial dataset. These VTA webpages are all released in the past year, representing the latest technology in VTA methods. **The compared methods in Table 3 use their original model parameters** without fine-tuning on the AGAVQA-MOS subset. **We further show the accuracy of the audio-video alignment methods which have been fine-tuned on the AGAVQA-MOS subset**: Method |SonicVisionLM | Frieren | V2AMapper | TIVA | V2A-SceneDetector | STAV2A | SSV2A | ReWaS | All :-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-: AVIDCMA |0.29 | 0.58 | 0.61 | 0.50 | **0.71** | 0.50 | 0.40 | 0.44 | 0.52 VALOR | **1.00** | 0.75 | 0.72 | 0.70 | **0.71** | **0.70** | 0.40 | 0.44 | 0.55 VAST | 0.86 | 0.83 | 0.78 | **0.80** | 0.43 | 0.40 | 0.40 | **0.56** | 0.64 AGAV-Rater | **1.00** | **0.92** | **0.83** | **0.80** | **0.71** | **0.70** | **0.60** | **0.56** | **0.78** As can be seen, AGAV-Rater achieves the highest accuracy in each category. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed responses. My concerns have been well solved. Thus, I am inclined to increase my score. Additionally, it is recommended to add more details of evaluation and experiments in the revision.
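The PLCC loss defined in point 6 of the rebuttal above can be sketched in numpy as follows. The actual training presumably uses a differentiable tensor implementation; this sketch only illustrates the formula:

```python
import numpy as np

def plcc_loss(pred: np.ndarray, mos: np.ndarray) -> float:
    """L = (1 - PLCC(pred, mos)) / 2, where PLCC is the Pearson
    correlation between predicted scores and MOSs in a batch.
    Ranges over [0, 1]; 0 means perfectly (positively) correlated."""
    p = pred - pred.mean()
    s = mos - mos.mean()
    plcc = np.dot(p, s) / (np.linalg.norm(p) * np.linalg.norm(s))
    return float((1.0 - plcc) / 2.0)
```

Perfectly correlated predictions give a loss of 0, perfectly anti-correlated predictions give 1.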
A-PSRO: A Unified Strategy Learning Method with Advantage Metric for Normal-form Games
Accept (poster)
Summary: This paper proposes Advantage Policy Space Response Oracle (A-PSRO), a new framework for learning Nash equilibria in normal-form games with large strategy spaces, applicable to both zero-sum and general-sum settings. The key contribution is the Advantage function, a new evaluative metric that guides strategy updates toward equilibrium, ensuring efficient and deterministic convergence in zero-sum games while optimizing for higher-reward equilibria in general-sum games. To enhance learning efficiency, A-PSRO introduces LookAhead updates for faster equilibrium approximation and meta-equilibrium search to identify high-reward strategies. Finally, empirical results demonstrate that A-PSRO reduces exploitability more effectively in zero-sum settings and achieves superior rewards in general-sum games, offering a scalable and unified solution for strategic learning. Claims And Evidence: All claims are well-supported by both theoretical analysis and experiments, particularly for zero-sum games. Methods And Evaluation Criteria: Yes, the proposed method generally makes sense for the problem of learning Nash equilibria in normal-form games. Theoretical Claims: Yes, the authors provide proofs of their theoretical results in the Appendix. I checked the first several proofs, and they all appear to be correct. Experimental Designs Or Analyses: Yes, I reviewed the experimental design and analysis. The evaluation includes zero-sum and general-sum normal-form games, using exploitability as the primary metric for assessing convergence to Nash equilibrium. The experiments compare A-PSRO against state-of-the-art PSRO variants (e.g., P-PSRO, DPP-PSRO, UDF-PSRO, PSD-PSRO) across various game environments, including AlphaStar, Go, Staghunt, and Randomly Generated Games. The experimental setup is generally sound, as it uses well-established benchmarks and relevant baselines. Supplementary Material: Yes. I checked several of the proofs in the Appendix and the pseudocode. 
Relation To Broader Scientific Literature: Although this paper proposes A-PSRO as an extension of Policy Space Response Oracles (PSRO), the introduction of the Advantage function as a new metric for evaluating strategy improvement appears to be a novel contribution. Essential References Not Discussed: No, I did not find any essential related works that are missing from the paper. Other Strengths And Weaknesses: Strengths: 1. The introduction of the Advantage function as a new strategy evaluation metric is a novel idea that extends beyond traditional best-response and diversity-based approaches in PSRO. 2. The LookAhead update mechanism offers a deterministic strategy improvement method, which differentiates it from existing stochastic diversity-based strategy exploration methods. 3. The paper is well-structured, with clear theoretical explanations and empirical validation. Weaknesses: 1. The paper does not explicitly discuss the computational overhead of Advantage function evaluations compared to standard PSRO methods. A runtime comparison or complexity analysis would strengthen claims about scalability. Other Comments Or Suggestions: On page 4, Theorem 3.4 states that $-V_i(\pi_i)$ is a convex function. However, above Theorem 4.1, the paper states that “Since $V_i(\pi_i)$ is convex……”, which may be a typo. Questions For Authors: 1. How does the additional computation required for Advantage function evaluation and LookAhead updates compare to standard PSRO methods? Could you provide a complexity analysis or runtime comparison? 2. I know this paper focuses on normal-form games. Since many real-world applications (e.g., poker, security games) involve extensive-form or imperfect-information settings, I just wonder whether A-PSRO can be extended to handle sequential decision-making scenarios? If so, what modifications would be needed? Code Of Conduct: Affirmed. Overall Recommendation: 3
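For reference, the exploitability metric used as the review's primary zero-sum criterion has a standard textbook form for two-player zero-sum matrix games (row payoff $U$, column payoff $-U$). The sketch below is this generic definition, not the paper's code:

```python
import numpy as np

def exploitability(U: np.ndarray, pi_r: np.ndarray, pi_c: np.ndarray) -> float:
    """Sum of both players' best-response incentives; it is nonnegative
    and equals zero exactly at a Nash equilibrium."""
    row_br = (U @ pi_c).max()         # row player's best-response value vs pi_c
    col_guarantee = (pi_r @ U).min()  # value a best-responding column holds pi_r to
    return float(row_br - col_guarantee)
```

For example, in matching pennies the uniform profile has exploitability 0, while a pure-vs-pure profile gives 2.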
Rebuttal 1: Rebuttal: Thank you for reviewing our paper and providing valuable feedback. Our responses are as follows. We hope these responses address your concerns and that you will consider raising the score of this paper. Regarding the computational complexity of the LookAhead module, we will explain it from both theoretical and experimental perspectives. For experimental verification, please refer to Figure 9 on the last page of the appendix in the main paper. Here, we provide a detailed explanation of this figure. In our experiments, the time-consuming modules include meta-game solving, diversity-based strategy exploration, and non-diversity-based strategy exploration. Among these modules, the experimental code differs between A-PSRO and the other algorithms only in the last one. From Figure 9b and empirical analysis, it can be observed that the solving time of the meta-game with fictitious play is an exponential function of the population size. A-PSRO has the longest runtime, indicating that A-PSRO has the largest population size during training (additional experiments show that after 200 iterations, the population size of A-PSRO is more than twice that of other algorithms). Considering that in the pipeline improvement, the PSRO algorithm does not expand the population at every iteration but only adds new strategies when the existing ones converge (see Algorithm 2 for details), this demonstrates that A-PSRO's strategy exploration quickly improves the existing strategies in the population to optimality. From Figure 9a, we can see that if only the LookAhead module is used (ours without diversity), the time spent on strategy exploration in A-PSRO increases almost linearly. From other algorithms (which perform diversity exploration with a certain probability), it can be observed that diversity exploration leads to a nonlinear increase in the time per iteration.
This suggests that using the advantage function as an evaluation metric does not introduce more computational complexity compared to diversity metrics. Next, we provide a theoretical explanation. Assume that payoff U is an [n, n] matrix, and populations $P_i$ and $P_j$ are [p, n] matrices. The current meta-equilibrium $\pi$ is an [n, 1] vector, and the update step size is d. Taking the classic EC diversity metric in Equation (38) as an example: $\operatorname{EC}(\mathcal{P}_i \mid \mathcal{P}_j) := \operatorname{Tr} (\mathbf{I}-(\mathcal{L}+\mathbf{I})^{-1})$ $\mathcal{L} = \mathcal{M}_i \mathcal{M}^T_i, \ \mathcal{M}_i = \mathcal{P}_i \times U_i \times \mathcal{P}_j$ Its computational complexity per iteration is $O(pn^2 + p^2n + p^3)$. Additionally, this process requires exploring every update direction in the pure-strategy space to find the one that maximizes diversity. Thus the actual computational complexity is $n \times O(pn^2 + p^2n + p^3) = O(pn^3 + p^2n^2 + p^3n)$. For the LookAhead process, here is the computation process in our code. First, repeat $\pi$ into an [n, n] matrix $Q$, and then the LookAhead update direction can be obtained through $\min([Q \cdot (1-d) + I \cdot d] \times U \times I).\operatorname{argmax}()$ This process has a computational complexity of $O(n^3)$, which is independent of the population size, consistent with the linear time growth observed in the experiments, and lower than the complexity of diversity-based exploration. The above results are conclusions for zero-sum games. For general-sum games, we have already mentioned in the paper that exploring multiple oracles incurs higher computational costs. Simplifying this process is part of our future work. Regarding the application of A-PSRO to sequential decision-making in extensive-form games, we discuss it from the following perspectives.
First, if the game allows direct extraction of (major) pure strategies and can obtain an empirical normal-form game through simulation, then the A-PSRO algorithm proposed in this paper can be directly applied. In this case, the advantage function can be computed directly and will not introduce higher computational complexity compared to diversity exploration. For the commonly used approach in PSRO, where RL is used to train the best response as a new strategy, we provide discussions in the "A-PSRO for Large-scale Games" section. For RL processes based on policy gradients, applying A-PSRO requires computing the gradient of a weighted sum of the reward $R$ and advantage $V$. In this case, the optimal response predictor mentioned in the paper is needed to simulate the possible optimal responses of opponents under different strategies $\pi$. This indeed requires a large amount of data for training. However, given that using RL to compute the best response is already time-consuming, and that introducing the advantage function can bring sublinear deterministic improvements, we believe this trade-off is acceptable. A typo appears in Theorem 4.1. Thanks for pointing it out. --- Rebuttal Comment 1.1: Comment: Thanks for the author's responses. I am confused by the answer about extending to extensive-form games: "This indeed requires a large amount of data for training. However, given that using RL to compute the best response is already time-consuming, and that introducing the advantage function can bring sublinear deterministic improvements, we believe this trade-off is acceptable.". Does this mean that applying the method to extensive-form games would still incur a significant time cost—although potentially less than that required for best response computation—and also introduce additional overhead for training the optimal response predictor? --- Reply to Comment 1.1.1: Comment: Thank you for your response. 
We would like to provide a detailed explanation of this issue, in the hope of alleviating your concerns about our paper. We hope you will consider raising your score. First, as we have already argued in our previous response, A-PSRO has a lower time complexity in strategy exploration compared to diversity-based algorithms. This result is also formally proven in the appendix at the end of the main text. This indicates that from the perspective of strategy exploration alone, A-PSRO does not incur greater computational complexity. Below, we focus on explaining the computational complexity of the best response predictor. To address this issue, and to demonstrate that A-PSRO is not only efficient in solving normal-form games but also applicable to extensive-form games, we conducted experiments in the widely-used Leduc Poker environment. We separately recorded the time consumption of the best response predictor, RL-based strategy exploration, and other components. As mentioned in the section "A-PSRO for Solving Large-Scale Games" in our paper, the improvement to strategy exploration brought by the best response predictor does not require 100% accuracy. Therefore, we trained the predictor using varying amounts of data. The results show that even with the smallest training dataset, the exploitability is comparable to or slightly better than that of the standard PSRO framework. From the perspective of time consumption, the most time-consuming component in the PSRO framework for Leduc Poker is the RL module. A-PSRO computes the advantage function via neural networks, which introduces negligible additional cost during the strategy exploration phase, making it nearly indistinguishable from standard PSRO in this regard. In contrast, the time spent training the best response predictor in A-PSRO is significantly less than the total time spent on the RL component. 
Moreover, under the same number of iterations, the exploitability of A-PSRO is 10%–20% lower than that of other PSRO algorithms. Overall, even when taking into account the training cost of the best response predictor, A-PSRO achieves lower exploitability within the same time consumption. Furthermore, as mentioned in our previous response, the strategy exploration process of A-PSRO does not rely on population size, unlike diversity-based exploration methods, and it has a lower theoretical computational complexity. Therefore, we believe that introducing the advantage function does not incur additional computation, while it improves the equilibrium-solving process. We plan to add Leduc Poker's experiments to the main paper and hope this addresses your concerns.
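To make the LookAhead step described in this rebuttal thread concrete, here is a minimal Python sketch of the claimed $O(n^3)$ direction search: for each pure strategy $e_i$, form the candidate mixture $(1-d)\pi + d\,e_i$ and pick the index whose worst-case payoff against the opponent's pure strategies is largest. The rock-paper-scissors payoff matrix and the step size are illustrative only, not taken from the authors' code.

```python
def lookahead_direction(U, pi, d):
    """Index of the pure strategy e_i whose candidate mixture
    (1 - d) * pi + d * e_i has the best worst-case payoff (row player,
    zero-sum). One pass over n candidates x n opponents x n terms: O(n^3)."""
    n = len(U)
    best_i, best_val = 0, float("-inf")
    for i in range(n):
        cand = [(1 - d) * p + (d if k == i else 0.0) for k, p in enumerate(pi)]
        # Worst-case payoff over the opponent's pure strategies j.
        worst = min(sum(cand[k] * U[k][j] for k in range(n)) for j in range(n))
        if worst > best_val:
            best_i, best_val = i, worst
    return best_i

# Rock-paper-scissors payoffs for the row player.
U = [[0, -1, 1],
     [1, 0, -1],
     [-1, 1, 0]]
print(lookahead_direction(U, [1.0, 0.0, 0.0], 0.5))  # -> 1: mix pure Rock toward Paper
```

Note that the loop is independent of the population size p, matching the rebuttal's point that, unlike the EC diversity search, LookAhead does not grow more expensive as the population expands.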
Summary: This paper defines “Advantage” in zero-sum games and simplified two-player games as the value a policy can achieve given that all other policies in the strategy profile play their best responses. The authors derive A-PSRO with Diversity and LookAhead for large-scale games, propose A-PSRO based on the defined advantage, and show that the algorithm retains a sublinear convergence rate in zero-sum games. Experiments show that the proposed A-PSRO with LookAhead can achieve better exploitability across domains compared with baselines. Claims And Evidence: Claim 1: introduced A-PSRO as an improved equilibrium solver for empirical games. Yes, there is supporting evidence from experiments. Claim 2: The paper studies the theoretical properties of the proposed advantage functions. Yes, these are supported by the theoretical analysis. Methods And Evaluation Criteria: - The paper uses exploitability and reward for two-player zero-sum and general-sum games in a few matrix games, which makes sense to me. - The paper does not explicitly present how the advantage function, diversity, and LookAhead are used within the PSRO algorithm in the main text. I suggest you put an algorithm block either in the main text or the appendix. Theoretical Claims: - The paper studies the convergence rate of A-PSRO and introduces some theoretical properties of the advantage function. - I think the results make sense to me, but I did not fully check the correctness of all the proofs. Experimental Designs Or Analyses: This paper conducted experiments in several matrix games, yet the method remains to be examined in more complex extensive-form games. Moreover, the scalability is questionable, as the calculation of the advantage function requires an approximation of the BR, and the computational complexity is not specifically discussed. Supplementary Material: I checked the code in the supplementary material; it is written in a matrix-game, linear-programming style.
Relation To Broader Scientific Literature: - I think the paper has already discussed enough related works in the PSRO line. - I wonder how the advantage is related to the counterfactual regret minimization line of work (e.g., [1-2]). [1] Zinkevich, M., Johanson, M., Bowling, M., & Piccione, C. (2007). Regret minimization in games with incomplete information. *Advances in neural information processing systems*, *20*. [2] Brown, N., Lerer, A., Gross, S., & Sandholm, T. (2019, May). Deep counterfactual regret minimization. In *International conference on machine learning* (pp. 793-802). PMLR. Essential References Not Discussed: N/A Other Strengths And Weaknesses: - Strength - Clear demonstrations of the game solution dynamics and visualization of the advantage functions in the experimental results - Weakness - The authors focus too much on the theory instead of a clear presentation of the method. I was very distracted by the theorems when I tried to understand the methodology. I suggest that you present in the following way: - Here is PSRO - Here is how we utilize the advantage function - Here is the LookAhead module - Here is the diversity module - then go over the theorems under different game properties - state the limitations of your method clearly (e.g., only applies to xxx games) Other Comments Or Suggestions: N/A Questions For Authors: Q1: Can you compare the computational cost between A-PSRO and PSRO for a single iteration? Q2: The exploitability of PSD-PSRO, although in different games, is pretty low (at a scale of 10^-1) in the paper [3], while it is at a scale of 10^0 in this paper (Figure 6a). Can you elaborate on the potential reasons for this phenomenon, and could the experimental domains of that paper apply to your work? [3] Yao, J., Liu, W., Fu, H., Yang, Y., McAleer, S., Fu, Q., & Yang, W. (2023). Policy space diversity for non-transitive games. *Advances in Neural Information Processing Systems*, *36*, 67771-67793.
Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for reviewing our paper and providing valuable feedback. We appreciate your recognition of our work. Our responses and modifications are as follows. We hope these responses address your concerns. Due to page limitations in the main text, we have placed the algorithmic details of A-PSRO in the appendix. If the final version allows for additional pages, we will move the main algorithm into the main text. Thank you for your suggestion. Regarding the computational complexity and scalability of A-PSRO, we have addressed this issue in our response to other reviewers. In general, A-PSRO has lower computational complexity compared to diversity-based exploration methods. Regarding the relationship between A-PSRO and the CFR algorithm, we have also considered this question. As far as we know, CFR is mainly applied to imperfect-information games. In fact, how to compute the advantage in imperfect-information games is an issue we are currently considering. For imperfect-information games, our idea is to approximate the advantage of different strategies for each information set. For a given strategy, we can use Monte Carlo simulations to obtain rewards under different samplings and opponent strategies. Then, for each sample, we select the strongest opponent and apply a weighting to simulate the advantage function. The main challenge is that this leads to high computational complexity, and we are considering using methods such as Transformers to improve efficiency. Regarding the comparison with the PSD-PSRO algorithm in Figure 6, it is worth noting that this experiment was conducted in an environment with three agents. PSD-PSRO, as mentioned in its notation section, is primarily designed for two-player zero-sum games and has not been optimized for multi-agent scenarios. As a result, it shows higher exploitability in this case.
For A-PSRO, we designed a method to approximate the advantage function in multi-agent systems, resulting in better performance. Thank you for your suggestion. We will provide a detailed explanation of the theoretical properties of the algorithm and how A-PSRO applies in different scenarios in separate sections of the main text. We will also add clarifications on in which scenarios A-PSRO is effective, where it may lead to higher computational complexity, and where its effectiveness remains uncertain. We appreciate your recognition of our work, and we hope the above responses address your concerns. We hope you continue to support our paper.
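To make the Advantage function discussed in this thread concrete, here is a hedged sketch under the assumption that, in a symmetric zero-sum normal-form game, the advantage of a mixed strategy $\pi$ reduces to its payoff when the opponent best-responds, i.e. $V(\pi) = \min_j (\pi^\top U)_j$. The function name and the rock-paper-scissors example are illustrative, not the paper's code.

```python
def advantage(U, pi):
    """Payoff of mixed strategy pi (row player) when the column player
    best-responds, i.e. the minimum over the opponent's pure strategies."""
    n = len(U)
    return min(sum(pi[k] * U[k][j] for k in range(n)) for j in range(n))

# Rock-paper-scissors payoffs for the row player.
U = [[0, -1, 1],
     [1, 0, -1],
     [-1, 1, 0]]
print(advantage(U, [1.0, 0.0, 0.0]))    # -> -1.0: pure Rock is fully exploited
print(advantage(U, [1/3, 1/3, 1/3]))    # ~0.0: the Nash mixture attains the game value
```

Under this reading, $V$ is a minimum of linear functions of $\pi$ and hence concave, so $-V$ is convex, consistent with the Theorem 3.4 statement quoted by the first reviewer.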
Summary: The authors propose an extension of PSRO to normal-form games with large-scale action spaces. They incorporate an advantage function to guide strategy exploration and speed up convergence to NE and improve joint rewards in general-sum normal-form games. Claims And Evidence: - The authors claim to establish an equivalence between advantage maximization and Nash equilibrium. Theoretically, this seems to follow. - The authors claim that including advantage maximization in normal-form PSRO allows their method to achieve higher joint rewards in general-sum games. They empirically support these claims by showing that they outperform other PSRO variants in normal-form games. A-PSRO is positioned as a solver for large-scale normal-form games, yet the paper does not compare it to standard methods for solving such games, such as linear programming, fictitious play, or regret minimization. Furthermore, PSRO and its extensions are predominantly designed for and applied to extensive-form and partially-observable Markov games, but the paper does not address how A-PSRO relates to or extends beyond these settings. This omission raises concerns about the generality and significance of the proposed approach. Methods And Evaluation Criteria: The proposed evaluation criteria do not make sense for the problem discussed in the paper. A-PSRO is designed as a large-scale normal-form game solver, yet it is not compared with normal-form game algorithms other than PSRO variants. Theoretical Claims: I did not rigorously check the proofs. Theorem 4.8 suggests that training a best-response approximator with sufficient accuracy for a game will guarantee a sublinear convergence rate in symmetric zero-sum normal-form games. It needs to be discussed that the amount of time necessary to calculate a sufficient training dataset of many best-response targets could be large. Experimental Designs Or Analyses: The experimental design (exploitability/joint reward vs training iterations) is sound.
Supplementary Material: I read Appendices B through D. Relation To Broader Scientific Literature: The authors should be clear early on that their decision to consider PSRO as a solution to normal-form games is unusual. They should provide better context that PSRO and nearly all of its extensions are designed to solve games with sequential decision making, like extensive-form and partially-observable Markov games. PSRO is typically considered an extension of the normal Double Oracle algorithm to sequential-interaction games. It also needs to be made more clear that most of the works cited in this paper address games with sequential interaction, not normal-form games (except as sanity checks and stepping stones to other game representations). Essential References Not Discussed: The authors focus on solving normal-form games, and should include foundational algorithms that are still used today to solve them [1,2,3]. [1] Von Neumann's minimax theorem: v. Neumann, J. "Zur Theorie der Gesellschaftsspiele." Mathematische Annalen 100.1 (1928): 295-320. [2] Lemke-Howson Algorithm: Lemke, Carlton E., and Joseph T. Howson, Jr. "Equilibrium points of bimatrix games." Journal of the Society for Industrial and Applied Mathematics 12.2 (1964): 413-423. [3] Multiplicative Weights Update Algorithm: Freund, Yoav, and Robert E. Schapire. "Adaptive game playing using multiplicative weights." Games and Economic Behavior 29.1-2 (1999): 79-103. Other Strengths And Weaknesses: Strengths: - The advantage metric provides a useful signal to speed up convergence of PSRO in normal-form games. Weaknesses: - It is unclear how useful this method is for normal-form games without comparing to non-PSRO methods. Other Comments Or Suggestions: Error bounds need to be added in Figure 2 and Figure 7. Questions For Authors: 1) What constitutes a large-scale normal-form game in this setting?
At what scale would A-PSRO be preferable to traditional normal-form game algorithms like linear programming, fictitious play, and regret minimization? Why is a PSRO-style approach necessary in this context compared to other methods? 2) Why evaluate against Pipeline-PSRO? It speeds up wall-time performance in extensive-form games by concurrently pretraining behavioral deep RL best-responses. Its applicability as a normal-form game solver is extremely limited. 3) This question generally extends to comparing against most of the other PSRO baselines. If you are trying to solve large-scale normal-form games quickly, why not compare to other methods designed to solve normal-form games, like linear programming approaches, fictitious play, or regret matching? 4) Is your goal to extend A-PSRO to extensive-form games where PSRO is generally applied? If so, this motivation is very unclear, and the non-trivial challenges in extending the advantage metric to extensive-form games need to be discussed. Ethical Review Concerns: I have no ethical concerns regarding this paper. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for reviewing our paper and providing valuable feedback. Our responses and modifications are as follows. We hope these responses address your concerns and that you will consider raising the score of this paper. We would like to emphasize that the motivation of this paper is the improvement of the strategy exploration process in the PSRO algorithm. We discuss this problem in the context of normal-form games mainly because the properties of the advantage function and the convergence of the algorithm can be established there. We mentioned in our response to Reviewer DSTe that we are conducting experiments with Leduc Poker and plan to add the results to the main text. The use of normal-form games in the title is primarily for the following reasons. First, previous PSRO variants have typically improved the solution in zero-sum games, while we aim to demonstrate that A-PSRO can also improve the solution efficiency for general-sum games. Additionally, the main reason for considering normal-form games comes from the paper "Real World Games Look Like Spinning Tops" [1]. For any extensive-form game, extracting all pure strategies can define a corresponding normal-form game (especially in fully observable scenarios). Further discussions in [1] indicate that pure strategies with a wide range of skills extracted from large-scale extensive-form games (such as StarCraft) can also define a normal-form game. This can be viewed as an empirical game containing the most frequent strategies in the original game. The strategies obtained by solving the empirical game are also important for many problems. There are several works about empirical games, e.g., "Choosing samples to compute the heuristic strategy Nash equilibrium". The primary experimental environment in this paper is based on [1], with some of the experiments coming from both complete extensive-form games and their simplified forms (such as AlphaStar, Go, and Kuhn Poker).
In the paper [1], it is mentioned that applying population-based policy learning in these environments is the most effective, so we mainly compare against the PSRO algorithm. This experimental design is identical to previous diversity-based PSRO variants (such as DPP-PSRO and UDF-PSRO). Therefore, we believe the use of normal-form games is justified, as it emphasizes the inclusion of both zero-sum and general-sum games. One of the contributions of this paper is that A-PSRO improves the reward of equilibrium in general-sum games. This is one reason why it is preferable to traditional methods. Regarding the comparison with non-PSRO algorithms, traditional algorithms generally perform inefficiently in the game environments presented in [1]. For example, the previous work "Open-ended Learning in Symmetric Zero-sum Games" compared Self-Play, and "Pipeline PSRO: A Scalable Approach for Finding Approximate Nash Equilibria in Large Games" compared Fictitious Play (regret minimization is primarily used for solving imperfect-information games and will not be discussed here). Given that A-PSRO is almost identical to the PSRO framework except for strategy exploration, and that PSRO-based algorithms perform the best in the experimental environments of this paper, we believe this comparison is reasonable. Since linear programming is not a learning-based approach to solving equilibria, we mainly discuss Fictitious Play here. As mentioned in our response to reviewer ayi4, the time spent on strategy exploration in the PSRO framework is small compared to the time spent on meta-game solving. Compared to running Fictitious Play directly in the original game, efficient exploration can solve equilibria in small populations. We found experimentally that the population size only needs to be less than 10% of the original game's strategy space. Considering that the solution of the meta-game is of approximately exponential complexity, this process greatly improves efficiency.
On the populations obtained from A-PSRO exploration, Fictitious Play only requires about $10^3$ iterations to reach $10^{-4}$ exploitability. In contrast, it takes about $10^4$ iterations or more when running Fictitious Play directly on the original game. Considering that none of the other PSRO works compared against traditional methods in the environments used in this paper, we believe our choice of baselines is reasonable. We could add these comparisons, but we think they may perform inefficiently. As for why we compare with Pipeline-PSRO, the idea of using multiple learners in Algorithm 2 has been applied in both DPP-PSRO and UDF-PSRO, significantly improving performance. Our experimental design and baselines are almost identical to those in these two papers, and this paper also uses multiple learners to update strategies simultaneously. Regarding the essential references not discussed, thank you for pointing this out, and we will add the missing references. We will also add error bounds in the two figures. [1] Czarnecki et al. Real World Games Look Like Spinning Tops, NeurIPS 2020. --- Rebuttal Comment 1.1: Comment: Thank you for your response. To me, the argument that any extensive-form game can be converted into an (exponentially large) normal-form game is not a good reason that normal-form games should be primarily considered. If we consider a large extensive-form game, the corresponding normal-form game would be intractable for any method. If we consider a restricted meta-game of useful strategies, typically this is a small part of an algorithm like PSRO (applied to extensive-form games) that does not take a significant portion of the running time. I would be happy to consider extensive-form results with Leduc, but currently, only a single intermediate exploitability data point has been given in response to reviewer DSTe. Without an exploitability curve, we can't deduce any actual comparison from this, as one method could overtake the other.
I am otherwise unconvinced that this method necessarily scales well in extensive-form games. I am unconvinced that a wall-time comparison to linear programming should not be done. It should be demonstrated at what game size learning even becomes necessary here. Concerning Pipeline-PSRO, if P-PSRO and A-PSRO enjoy benefits from employing multiple learners simultaneously, that makes an unfair x-axis in Figures 2, 4, 6, and 7. Graphs using "Training Iterations" as the x-axis make any method with multiple concurrent learners look better than it is compared to single-learner methods like PSRO. Wall-time or total learning updates (number of learners * Training Iterations) would be a fair x-axis. I maintain my current score. --- Reply to Comment 1.1.1: Comment: Thank you for your response. We believe that these problems are not flaws in the paper itself, but that there may be some misunderstandings. We hope that the following responses will address your questions. Regarding normal-form games, we would like to emphasize that the main focus of this paper is on improving the PSRO framework, rather than being limited to solving normal-form games specifically. For the proposed advantage function, it can be computed exactly in normal-form games, while in extensive-form games it requires approximation. We have proven through multiple theorems that, under the assumption of exact advantage computation, the convergence rate to equilibrium can be improved to sublinear. Even with approximate computation, this result can still be achieved within a certain error bound. Moreover, the introduction of the advantage function enables convergence to equilibria with higher rewards in general-sum games. Given that the definition of normal-form games itself encompasses all games, this indicates that our theory applies to all fully observable games. We believe that, for a theoretical contribution, this is both meaningful and significant.
Regarding the experimental results on Leduc Poker, we refrained from including specific plots due to anonymity concerns and possible policy violations. In this experiment, all other modules were kept exactly the same as those in PSD-PSRO (SOTA), with the only difference being in the strategy exploration process. From the results, A-PSRO achieved a faster decrease in exploitability during the early stages. As the number of iterations increased, the effect of the advantage function diminished to some extent, but A-PSRO still outperformed PSD-PSRO in the final results. Below are the average exploitability values at different stages of the iterations.

| Episodes (1e4) | 10 | 50 | 100 | 200 |
| --- | --- | --- | --- | --- |
| PSD-PSRO | $1.2 \times 10^0$ | $7.3 \times 10^{-1}$ | $5.2 \times 10^{-1}$ | $3.9 \times 10^{-1}$ |
| A-PSRO | $0.8 \times 10^0$ | $5.0 \times 10^{-1}$ | $3.3 \times 10^{-1}$ | $2.7 \times 10^{-1}$ |

Regarding comparisons with linear programming, we believe the key difference is that learning-based methods approximate equilibria rather than solving them exactly. We believe that these two methods are not directly comparable, as linear programming requires significantly more time to compute an exact solution, whereas learning-based algorithms can obtain an approximate solution in much less time. If we consider an exploitability level of $10^{-2}$ to be sufficiently close to equilibrium, then from the final page of our appendix, it can be observed that A-PSRO requires fewer than 50 iterations to achieve this, with a total runtime of less than 1 minute. However, when using linear programming, the number of variables and constraints involved can be on the order of $10^3–10^4$ (e.g., AlphaStar, Simplified Go game). To the best of our knowledge, even tools like Gurobi are unlikely to achieve an exact solution within 1 minute. It may take one or more hours to complete the solution.
Therefore, we argue that learning-based methods like A-PSRO have clear advantages in terms of time efficiency. Regarding Pipeline-PSRO, we have reviewed all the methods we compared in the paper as well as their corresponding implementations. Except for standard PSRO, all other compared methods used the pipeline improvement in their codebases, including P-PSRO, DPP-PSRO, UDF-PSRO, and PSD-PSRO. We reproduced the code for all of these methods, and without exception, they use multiple learners to update strategies (i.e., the pipeline improvement), and plot performance against "Training Iterations" on the x-axis. We believe this is a standard comparison and does not introduce any unfairness. We would like to reemphasize that, during experiments, A-PSRO differs from other methods only in the strategy exploration module. In result evaluation, meta-strategy solving, and plotting, A-PSRO is implemented identically to the baselines. Considering that the compared methods are all PSRO variants, we believe this setup is appropriate and fair. In summary, we have already shown that compared to traditional linear programming and fictitious play, A-PSRO achieves solutions with significantly less computational time. This demonstrates that even when considering only normal-form games, A-PSRO still delivers the best overall performance. In addition, we are conducting experiments to further validate its applicability in extensive-form games, and the current results have already shown that A-PSRO can work effectively in such settings. We hope the above response addresses your concerns, and we sincerely hope that you will consider raising the score for our submission.
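To give the fictitious-play comparison in this thread a concrete shape, here is a minimal self-play fictitious-play sketch for a symmetric zero-sum matrix game, tracking the exploitability of the empirical average strategy (in a symmetric zero-sum game the value is 0, so exploitability is just the best-response payoff against the average). The game and iteration budget are illustrative, not the paper's benchmarks.

```python
def best_response(U, mix):
    """Index of the row maximizing expected payoff against the mixture mix."""
    n = len(U)
    vals = [sum(U[i][j] * mix[j] for j in range(n)) for i in range(n)]
    return max(range(n), key=vals.__getitem__)

def exploitability(U, pi):
    """Best-response payoff against pi; equals exploitability in a
    symmetric zero-sum game, whose value is 0."""
    n = len(U)
    return max(sum(U[i][j] * pi[j] for j in range(n)) for i in range(n))

def fictitious_play(U, iters):
    """Self-play fictitious play: repeatedly best-respond to the empirical
    average of past play, starting from a uniform prior."""
    counts = [1.0] * len(U)
    for _ in range(iters):
        total = sum(counts)
        counts[best_response(U, [c / total for c in counts])] += 1.0
    total = sum(counts)
    return [c / total for c in counts]

U = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # rock-paper-scissors
avg = fictitious_play(U, 2000)
print(round(exploitability(U, avg), 3))   # small; shrinks toward 0 as iterations grow
```

Running such a baseline directly on the full game versus on a population restricted by exploration is one way to make the iteration-count comparison in the rebuttal reproducible.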
Summary: The paper addresses the challenge of solving Nash equilibria in normal-form games, particularly for games with large strategy spaces. Traditional PSRO methods and their variants have been effective in learning equilibria but often lack an efficient metric to evaluate and guide strategy improvement. This limitation affects their convergence efficiency and performance in both zero-sum and general-sum games. As a solution, the proposed method leverages the Advantage function, a new evaluative metric for guiding strategy updates toward Nash equilibria. Theoretical analysis shows that the Advantage function has desirable properties like convexity and Lipschitz continuity, ensuring more efficient equilibrium approximation. Furthermore, by integrating a LookAhead module for refining strategy updates and supporting neuralization, A-PSRO scales to large games. Experiments in zero-sum and general-sum games demonstrate that A-PSRO significantly reduces exploitability, finds higher-reward equilibria, and outperforms existing PSRO variants in convergence efficiency and reward maximization. Claims And Evidence: Overall, most claims in the paper are supported by theoretical analysis and empirical results, but a few areas could benefit from further clarification or stronger evidence; I list them as follows: > A-PSRO has a deterministic convergence rate advantage over diversity-based PSRO methods. The paper claims that advantage-based exploration leads to more deterministic convergence, but experimental evidence is limited. The convergence rate is not explicitly analyzed against diversity-based PSRO variants. A more detailed convergence speed comparison would strengthen this claim. > The Advantage function helps overcome the limitations of diversity-based exploration in PSRO. While A-PSRO does reduce exploitability, the role of diversity exploration in complementing the Advantage function is not deeply analyzed.
The interaction between diversity-based methods and the Advantage function needs more empirical justification, as some results (e.g., in Transitive games) suggest diversity alone may sometimes perform better. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem of solving Nash equilibria in normal-form games. However, there are a few areas where the evaluation could be expanded or clarified, e.g. > Computational Efficiency Analysis Missing The paper does not provide a runtime or scalability comparison against existing PSRO methods. Since Advantage computation and LookAhead introduce additional complexity, an evaluation of training time would help assess practical feasibility. > Limited Real-World Validation The experiments focus on synthetic normal-form games, which are common in game theory but may not directly translate to real-world multi-agent learning problems (e.g., poker, RTS games like StarCraft). Testing A-PSRO in a multi-agent reinforcement learning (MARL) setting would demonstrate its broader applicability. Theoretical Claims: No obvious errors were found. Experimental Designs Or Analyses: For the ablation test for: - A-PSRO without LookAhead (LA) - A-PSRO without Diversity The effect of LookAhead is clear, as removing it worsens exploitability reduction. But a potential issue is: the interaction between Advantage-based updates and Diversity is not fully explored. Supplementary Material: Yes, the given supplementary materials include the payoff data, some of the experiment results, and the running code. However, the code has poor readability, with a lot of copy-pasting. The author's programming habits don't seem very good. Relation To Broader Scientific Literature: A-PSRO is strongly connected to prior work in PSRO methods, Nash equilibrium learning, and multi-agent learning.
It extends these areas by introducing the Advantage function, which provides a principled optimization target for reducing exploitability and selecting better equilibria. While diversity-based PSRO methods rely on heuristic exploration, A-PSRO offers a mathematically grounded alternative, making it a meaningful contribution to equilibrium learning research. Essential References Not Discussed: Relevant Missing Work: 1. Neural Population Learning Beyond Symmetric Zero-Sum Games (Liu et al., 2024) 2. Pipeline-PSRO: A Scalable Approach for Finding Approximate Nash Equilibria in Large Games (McAleer et al., 2020) Other Strengths And Weaknesses: Has been discussed in previous questions Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for reviewing our paper and providing valuable feedback. Our responses and modifications are as follows. We hope these responses address your concerns and that you will consider raising the score of this paper. Regarding the deterministic convergence rate, the explanation is as follows. The convergence of the PSRO algorithm depends on both the strategy exploration process and the meta-game solving process. Our improvement only addresses the strategy exploration process, and we have proven its convergence properties. We believe you might have misinterpreted the figure in the paper. For Transitive games, the best-performing algorithm is A-PSRO without diversity (not with diversity), which indicates that the algorithm using only LookAhead performs the best. In fact, we analyze the effects of both in the main paper and the supplementary materials. For games that mainly exhibit transitivity, the LookAhead module can converge very quickly to the equilibrium without relying on diversity exploration. This can be viewed as an optimisation process in a convex function with a significant gradient, where rapid convergence can be achieved by directly using the gradient pointing to a local maximum. Games with strong cyclic dimensions can be viewed as convex functions without significant gradient. Although the LookAhead module can converge to the equilibrium, the convergence may be slow in the early stage. Diversity exploration, which is akin to a stochastic optimization process, may directly update the strategy to a region very close to the equilibrium. Experimental results also show that the combination of both methods yields the best performance. Regarding the computational complexity of A-PSRO, please refer to our response to Reviewer ayi4. Regarding the code, the explanation is as follows. Due to insufficient time for code refinement, there may be some redundancy issues. We will make the necessary modifications in the future. 
About the essential references not discussed, thank you for pointing this out, and we will add the missing references. Regarding experiments on A-PSRO in more real-world games, we provide the following explanation. Our initial experiments primarily referenced DPP-PSRO and UDF-PSRO. These two works only conducted experiments on the normal-form games as presented in "Real World Games Look Like Spinning Tops". These experimental environments consist mainly of normal-form games consisting of pure strategies extracted from extensive-form games, which serve as empirical games that can model the interactions of the agents in the game. In fact, these experimental environments include many real-world extensive-form games (such as Kuhn Poker, StarCraft, etc.). Given that reviewers have raised concerns about this, we recently conducted experiments in Leduc Poker, which is commonly used for evaluating PSRO algorithms, following the approach of PSD-PSRO. The application of A-PSRO in this game primarily requires approximating the computation of the advantage function, as detailed in the A-PSRO for Large Scale Games section. The current experimental results of exploitability are: PSD-PSRO: $4 \times 10^{-1}$, A-PSRO: $3 \times 10^{-1}$. Due to time constraints, A-PSRO has not been fully fine-tuned. We believe that further improvements in the code could yield better results. If this comparison is necessary, we will add it to the main paper.
Scaling Probabilistic Circuits via Monarch Matrices
Accept (poster)
Summary: This paper replaces dense matrices with sparse Monarch matrices, reducing the computation cost and maintaining accuracy. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed methods may be suitable for the problem or application at hand. Theoretical Claims: Yes, I have checked the proofs for correctness. Any minor issues present should not impede understanding of the overall paper. Experimental Designs Or Analyses: I have checked the soundness/validity of any experimental designs or analyses. Supplementary Material: Yes, I have reviewed the supplementary material. Relation To Broader Scientific Literature: This paper presents several interesting results. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The paper is exceptionally well-written and easy to follow. 2. This paper presents some theoretical analysis. 3. This paper replaces dense matrices with sparse Monarch matrices to reduce memory and computation cost. Weaknesses: 1. To be honest, I do not know much about hybrid models since I have only read several papers such as Mamba. However, I think the main issue of the hybrid model is that we cannot scale up the size of the hybrid model. That is my understanding. Consequently, almost no leading companies choose to train and deploy hybrid models. 2. As for the paper, could you show me more results if you increase the length of the example from 256 to 1024? 3. In addition, could you show me the results evaluated over imagenet-256? Other Comments Or Suggestions: 1. Enhancing Tables 4 and 5 with additional results would further improve the completeness of the paper. 2. Also, could you show me the model size in the Table? Questions For Authors: See strengths and weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your feedback. ``` To be honest, I do not know much about hybrid models since I have only read several papers such as Mamba. However, I think the main issue of the hybrid model is that we cannot scale up the size of the hybrid model. That is my understanding. Consequently, almost no leading companies choose to train and deploy hybrid models. ``` To clarify, in this work we investigate scaling of probabilistic circuit models. As opposed to hybrid models (and indeed transformers), probabilistic circuits have the key property of tractability, which enables efficient computation of quantities relating to the probability distribution such as marginals. Tractability has been used to achieve state-of-the-art results in applications such as controllable generation [1] and image inpainting [2], among others, beating transformer and diffusion-based models. Thus, we respectfully believe that research on alternative directions for generative modeling is valuable and this should not be viewed as a weakness of the work. ``` As for the paper, could you show me more results if you increase the length of the example from 256 to 1024? ``` We appreciate the reviewer’s request for further experiments, but we do not have the computational resources to run the requested experiments during the rebuttal period. Please note that we chose the sequence lengths and ImageNet downscaled resolutions to match with those considered by our baselines for more direct comparison; in particular, our method shows significant and consistent advantage over the PC baselines. Given the existing range of experiments we do not believe that there would be any change in the qualitative conclusions. ``` Also, could you show me the model size in the Table? ``` The PCs for image modeling have around 1.5B parameters. The Monarch HMM on text8 with hidden size $h=2^{19}$ has 0.75B parameters and the Monarch HMM on lm1b has 4.75B parameters. 
We will update the Tables with model sizes. [1] Zhang et al. “Adaptable Logical Control for Large Language Models” NeurIPS 2024 [2] Liu et al. “Image Inpainting via Tractable Steering of Diffusion Models” ICLR 2024 --- Rebuttal Comment 1.1: Comment: Thanks for your response! As I mentioned earlier, I am not very familiar with hybrid models. I am eager to hear the insights and suggestions from **Reviewer AzZ2** and **Reviewer Gwtt**.
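As a rough consistency check (our addition, not the authors'), the 0.75B figure quoted in this rebuttal lines up with the $2h^{3/2}$ parameter cost of a single 2-factor Monarch layer at $h = 2^{19}$; the assumption that one such transition matrix dominates the parameter count is ours.

```python
# Back-of-the-envelope check of the quoted Monarch HMM size. A 2-factor
# Monarch layer with hidden size h costs about 2 * h**1.5 parameters,
# versus h**2 for a dense layer (as stated elsewhere in this exchange).
h = 2 ** 19
monarch_params = 2 * h ** 1.5   # ~7.6e8, i.e. roughly the 0.75B reported
dense_params = float(h) ** 2    # ~2.7e11, far beyond what a dense HMM could use

print(f"Monarch: {monarch_params:.2e}")
print(f"Dense:   {dense_params:.2e}")
```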
Summary: This paper proposes a novel parameterization for probabilistic circuits (PCs) to improve their scalability, using structured sparse matrices called Monarch matrices. By replacing dense matrices in sum blocks of PCs with Monarch matrices, the proposed methods can reduce computational costs and allow larger scale PC training. The authors conducted experiments on text and image datasets. To my knowledge, the proposed idea is novel and the experimental results look compelling (at least compared to other PC methods). Claims And Evidence: The claims made in the paper are well supported by the conducted experiments. The authors are honest about the remaining gap between the proposed method and state-of-the-art generative modeling methods. Methods And Evaluation Criteria: The proposed methods and evaluation criteria (efficiency in terms of FLOPs and performance in terms of test log-likelihood) are reasonable to the problem studied in this paper. Theoretical Claims: I checked the derivations for equations (1) and (2) and they look correct to me. But the flow from the Monarch matrix operations to the construction of a PC is not clear to me by reading sections 3 and 4. Experimental Designs Or Analyses: The experiments are reasonably designed and analyzed for the studied problem of the paper. Supplementary Material: I didn't review the supplementary material. Relation To Broader Scientific Literature: The key contributions of the paper build upon [1] and the implementations and training of the proposed method rely on [2] and [3]. [1] Dao et al. Monarch: Expressive structured matrices for efficient and accurate training. ICML 2022. [2] Peharz et al. Einsum Networks: Fast and Scalable Learning of Tractable Probabilistic Circuits. ICML 2020. [3] Liu et al. Tractable regularization of probabilistic circuits. NeurIPS 2021. Essential References Not Discussed: To my knowledge, no. 
Other Strengths And Weaknesses: Strengths: - This paper advances scalable PC training and shows empirically compelling results, which is critical to the PC community. Weaknesses: - The memory savings shown in Table 3 are not significant, which limits the extension of the method. - There is still a large performance gap between the proposed method and other SOTA methods (e.g. diffusion models). In image modeling experiments, the method is weaker than flow-based models, which were proposed years ago. - I don't think the sampling time comparison throughout the paper is convincing. According to the paper, the baseline results are often taken from the original papers. The comparison in such a case is not fair under varied implementation details and hardware setups. Other Comments Or Suggestions: N/A. Questions For Authors: - I find the introduction of Butterfly matrices in line 205 a bit redundant and irrelevant. Why are they mentioned in the methods section rather than the related work? Have they been applied in PCs? - Why is the YCoCg transform used in the image modeling experiments? Is it necessary in the setup of this paper? If not, how does the proposed method perform using the original RGB images? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your insightful feedback. ``` I find the introduction of Butterfly matrices in line 205 a bit redundant and irrelevant. Why are they mentioned in the methods section rather than the related work? Have they been applied in PCs? ``` One of the main contributions of this work is identifying the connection between structured Monarch matrices and the multiplication of PCs: the product of two dense PCs gives exactly the Monarch matrix while the product of $d$ dense PCs of hidden size 2 gives the Butterfly matrix. Butterfly matrices are a well-known type of structured matrix which require only $O(h\log h)$ compute/memory (compared with $O(h^{3/2})$ for Monarch); however, despite their theoretical efficiency, Butterfly matrices are known to be less efficient on modern GPUs (due to their sparsity). Our construction of Monarch matrices as a circuit product immediately implies that (i) one can interpolate between Butterfly and Monarch matrices through the generalized Monarch matrix; and (ii) this corresponds precisely to the multiplication of $k$ PCs, for integers $k \geq 2$. This then motivated us to perform an empirical study for $k = 2, 3, 4$ to see if one could achieve better scaling using the generalized Monarch matrix. We will incorporate this explanation to motivate this section more clearly. ``` The memory savings shown in Table 3 are not significant, which limits the extension of the method. ``` We would like to note that Table 3 shows the **training** memory consumption with a reasonably large batch size $B = 128$ and the # of variables $n = 256$. At **inference** time (i.e. no gradient computation), Monarch PCs can be significantly more memory-efficient: with $h = 2^{18}$, a dense PC would still use $\approx 256$ GB while a Monarch PC (2-layer) would only use $\approx 1$ GB.
That is, even when training a large Monarch PC is relatively memory-consuming, at inference time, researchers can still apply large powerful Monarch PCs with very little cost. In addition, the seemingly inefficient memory consumption of Monarch PCs during training is due to the need of gradient computation, which can be optimized by various well-known techniques such as gradient checkpointing, mixed-precision training, etc. Further, as shown at the end of Sec. 5.2, the hidden states of large Monarch PCs are actually sparse, motivating future research to exploit such sparsity to reduce the memory consumption of training large Monarch PCs. ``` There is still a large performance gap between the proposed method and other SOTA methods (e.g. diffusion models). ``` We agree that there is still a large gap between state-of-the-art PCs and the other deep generative models, but at the same time we would like to highlight that we have already scaled PCs to a degree far beyond existing tractable probabilistic models, while the SotA generative models are substantially less tractable compared to PCs (e.g. none of transformers, normalizing flows or diffusion models allow for tractable computation of marginal probabilities). Closing the performance gap between PCs and other deep generative models is precisely the motivation of our research. ``` I don't think the sampling time comparison throughout the paper is convincing. ``` The sampling time comparison in Table 4 is not meant to argue that PCs are superior to discrete diffusions in terms of efficiency but more of a sanity check: for readers who are not particularly familiar with PCs, we just want to show that our models can be efficiently implemented. More specifically, for example, the D3PM Uniform model achieves $\leq 1.61$ bpc with 1000 diffusion steps (3.6s) and $\leq 1.79$ bpc with 20 diffusion steps (0.077s), where the latter number is more comparable to that of Monarch HMMs. 
In our revision, we will add both D3PM results to the table and carefully rephrase our statement about the runtime of our models. Thank you for your suggestion. ``` Why is the YCoCg transform used in the image modeling experiments? ``` Table 5 shows the experiment results on both images transformed via YCoCg and transformed via YCoCg-R (the Lossless column). Since the YCoCg-R transform is lossless, the BPDs are directly comparable to the original RGB dataset. The two strongest PC baselines [3, 4] only reported their results on YCoCg transformed images (which is not reported in those papers, but is confirmed in [5] and the software implementations); we choose to report results for both settings for a fair comparison to [3, 4] and to models trained on RGB datasets, following the practice of [5]. [3] Liu et al. "Scaling Up Probabilistic Circuits by Latent Variable Distillation." ICLR 2023 [4] Liu et al. "Understanding the distillation process from deep generative models to tractable probabilistic circuits." ICML 2023 [5] Gala et al. "Scaling Continuous Latent Variable Models as Probabilistic Integral Circuits." NeurIPS 2024
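The Monarch structure discussed throughout this exchange can be made concrete with a small numpy sketch (ours, for illustration only, not the paper's implementation): a Monarch matrix is a product of two block-diagonal factors interleaved with a reshape-transpose permutation, which is what yields the $2h^{3/2}$ parameter cost versus $h^2$ for a dense matrix. One standard convention is assumed for the permutation (it is self-inverse for square blocks).

```python
import numpy as np

def block_matvec(B, v, m):
    """Apply a block-diagonal matrix (m blocks of size m x m stacked in B) to v."""
    return np.einsum('bij,bj->bi', B, v.reshape(m, m)).reshape(-1)

def shuffle(v, m):
    """The reshape-transpose permutation P (self-inverse for square blocks)."""
    return v.reshape(m, m).T.reshape(-1)

def monarch_matvec(x, L, R):
    """Compute (P @ BL @ P @ BR) @ x without ever forming the h x h matrix."""
    m = L.shape[0]
    return shuffle(block_matvec(L, shuffle(block_matvec(R, x, m), m), m), m)

h = 16                                   # hidden size; sqrt(h) = 4
m = int(np.sqrt(h))
rng = np.random.default_rng(0)
L = rng.standard_normal((m, m, m))       # sqrt(h) blocks of size sqrt(h) x sqrt(h)
R = rng.standard_normal((m, m, m))
x = rng.standard_normal(h)

# Parameter count: 2 * h**1.5 (here 128), versus h**2 = 256 for a dense matrix.
assert L.size + R.size == 2 * h ** 1.5

# Verify against the explicit dense h x h matrix P @ BL @ P @ BR.
P = np.eye(h)[np.arange(h).reshape(m, m).T.reshape(-1)]
def dense_blockdiag(B):
    D = np.zeros((h, h))
    for i in range(m):
        D[i*m:(i+1)*m, i*m:(i+1)*m] = B[i]
    return D
M = P @ dense_blockdiag(L) @ P @ dense_blockdiag(R)
assert np.allclose(monarch_matvec(x, L, R), M @ x)
```

The same structure scales to the large hidden sizes quoted above, since only the block-diagonal factors are ever stored.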
Summary: Despite many advantages of probabilistic circuits (PCs), their implementations are often difficult due to computational burden, even with block structures. In this paper, the authors proposed an alternative method that replaces dense sum blocks with Monarch matrices, and the method significantly reduces the memory and computation costs. Ultimately, the authors claimed that this method significantly bridges the gap between highly tractable models (probabilistic circuits) and less tractable models (diffusion models). Claims And Evidence: Claim 1. Monarch matrices are efficient. This claim is well-supported via theoretical arguments on Page 3 by showing the improvement from $O(m^4)$ to $O(m^3)$ edges. Claim 2. The Monarch models have outstanding performance. The performance is supported via empirical tests, shown in Tables 1 and 5. Claim 3. The Monarch models have better scaling behavior. The performance is supported via empirical tests, shown in Figures 4 and 7. Methods And Evaluation Criteria: This paper has extensive justifications in theory. I am not knowledgeable in experiments, but the benchmark datasets are standard for generative models. Using FLOPs to measure efficiency is a commonly accepted choice. Theoretical Claims: I have checked all definitions and linear algebraic results: Discussions on page 3, and the theorems and proofs in Appendix A. Experimental Designs Or Analyses: The results (in tables and figures) support the paper's claims, but I am not qualified to comment on the empirical part. Supplementary Material: Yes. I have carefully read Appendix A. Relation To Broader Scientific Literature: This paper extensively discusses its connections with relevant fields, including previous works on probabilistic circuits (in particular block parameterizations) and relevant background in linear algebra (butterfly matrices and Monarch matrices).
They also discussed other types of models such as diffusion models and flow-based models, and gave a high-level picture of the status of PCs. Essential References Not Discussed: I am not qualified to comment on this section. Other Strengths And Weaknesses: Strengths: 1. The paper is well-motivated, studies an important problem, and has rigorous theoretical justifications. 2. The claim on bridging performance gaps is justified and promising for future research in this direction. 3. The empirical comparisons cover a wide range of metrics. Other Comments Or Suggestions: I do not have any other comments or suggestions. Questions For Authors: I do not have any other questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your encouraging feedback. Please feel free to follow up if you have any questions.
Summary: This paper introduces a novel method for scaling Probabilistic Circuits (PCs) by replacing dense matrices in sum nodes with Monarch matrices, a type of structured sparse matrix built from products of permuted block-diagonal factors. The key idea is to leverage the sparsity and structure of Monarch matrices to reduce memory and computation costs, which is demonstrated on generative modeling benchmarks. Furthermore, this paper provides a theoretical foundation for the connection between Monarch matrices and circuit multiplication. Claims And Evidence: The majority of the claims in the paper are supported by clear evidence, such as those related to computational efficiency and performance improvements. Methods And Evaluation Criteria: The methods and evaluation criteria make sense for the problem and application at hand, but evaluating on larger datasets and broader tasks could further strengthen the claims. Theoretical Claims: The theoretical framework is sound, but further formalization would strengthen the paper's contributions. Experimental Designs Or Analyses: Yes, the experiments support the claims made in the paper, but additional analyses could further strengthen the results. Supplementary Material: Yes, I have read the supplementary material, including Part A and Part B. Relation To Broader Scientific Literature: This paper builds upon and extends multiple streams of prior research in PCs, structured matrices, and efficient model scaling. Essential References Not Discussed: I think this paper has discussed enough relevant work in this area and provided comprehensive references to previous studies to illustrate its main contributions. Other Strengths And Weaknesses: Strengths: 1. The paper introduces Monarch matrices as a structured sparse parameterization for sum blocks in PCs, bridging the gap between tractability and scalability. 2.
The paper provides solid theoretical justification for using Monarch matrices, linking them to circuit multiplication and structured sparsity. 3. Comprehensive experiments on generative modeling benchmarks (Text8, LM1B, ImageNet) demonstrate state-of-the-art performance with reduced computational cost (FLOPs). Weaknesses: 1. The paper does not compare Monarch matrices to alternative structured representations like Block Tensor-Train (BTT) decomposition [1] or Toeplitz-like structured layers. How do Monarch matrices perform relative to these alternatives in terms of efficiency and expressiveness? 2. The paper highlights performance improvements but does not discuss potential failure cases. For example, does the structured sparsity of Monarch layers introduce expressiveness limitations compared to dense PCs? 3. The experiments focus on generative modeling benchmarks, but PCs are also used in other applications, such as causal inference, fairness, and tractable reasoning. Evaluating Monarch-based PCs in non-generative tasks (e.g., probabilistic reasoning) would further strengthen the paper's impact. [1] Qiu et al. Compute better spent: Replacing dense layers with structured matrices. ICML 2024. Other Comments Or Suggestions: I believe the first two weaknesses highlight the main issues. As for the third weakness, since it involves additional experiments, if there isn't enough time to conduct them, a detailed verbal explanation of the method's generalization would be helpful. Questions For Authors: Please check my questions in the weakness section. ==Post Rebuttal== I think the empirical evaluation of this paper could be more extensive. I am ok if AC and the other reviewers decide to accept this paper. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your feedback. ``` The paper does not compare Monarch matrices to alternative structured representations like Block Tensor-Train (BTT) decomposition [1] or Toeplitz-like structured layers. ``` Our study is not only limited to the Monarch matrices defined in [2]. Our construction of Monarch matrices as a product of PCs with dense layers naturally generalizes the definition of Monarch matrices: the construction in [2] corresponds to the special case of multiplying **2** dense PCs, and we consider Monarch structures obtained by multiplying **$k$** dense PCs, showing empirical results for $k = 2, 3, 4$. Secondly, in terms of BTTs, we would like to note that BTTs of rank 1 correspond exactly to the Monarch matrices proposed by [2], and [1] made an important observation that BTTs of rank higher than 2 do not lead to better scaling behaviors while incurring higher memory cost. As a quick corroboration for PCs, we studied PCs with BTT layers on the ImageNet32 (lossless) dataset. As shown in the Figure at https://anonymous.4open.science/r/MonarchRebuttal-D0D3/a.pdf, we compare Monarch PCs with BTT layers of rank 2, 4, and 8: BTT-2 has a similar scaling curve to Monarch, and the scaling curves of BTT-4 and BTT-8 get worse as rank increases, echoing the findings of [1]. Finally, we would like to note that prior work on PCs has not considered using non-dense matrices, and our work opens up a new way of scaling PCs with great promise as illustrated by empirical achievements. We believe that investigating Toeplitz-like structured layers or other efficient representations would be excellent topics for future work. ``` The paper highlights performance improvements but does not discuss potential failure cases.
``` We agree that if we fix the hidden size of PCs, replacing the dense linear layers with Monarch matrices indeed reduces the expressive power, which is exactly shown in Figure 5, where we measure the performance of PCs with varying hidden sizes and different structures of linear layers. We can see that when fixing the hidden sizes, dense matrices always perform better, but at the same time, as the hidden size grows: (1) the performance gap between the dense PCs and the Monarch PCs diminishes rapidly while (2) the FLOPs gap between dense PCs ($O(h^2)$) and Monarch PCs ($O(2h^{3/2})$) grows rapidly; and these two factors together lead to the large gap between the scaling curves of the dense PCs and Monarch PCs shown in Figure 4. From a theory perspective, one could ask whether there exists any distribution represented as compact dense PCs which would require exponential size to represent as a Monarch PC. The answer is no, as we have shown that a Monarch circuit can be interpreted as a relaxed version of the product of two circuits. Thus, any dense PC can be simulated by multiplying it with a PC representing the uniform distribution. We will incorporate this discussion into our revision and highlight that our study on the expressive power of Monarch matrices in PCs is focused on the empirical side. ``` The experiments focus on generative modeling benchmarks, but PCs are also used in other applications … Monarch-based PCs in non-generative tasks (e.g., probabilistic reasoning) would further strengthen the paper's impact. ``` Prior work has shown that the better the PCs model the desired distributions, the better they perform on downstream applications. For example, one line of work [3, 4] on applying PCs for controllable text generation from LLMs shows that the better the PCs approximate the LLMs, the higher the text generation quality, as shown in Figure 3 of [3] and Table 1 of [4].
Similarly, in the application of PCs to group fairness, Figure 3 and Table 1 of [5] show that the PC model achieving the best likelihood also leads to better classification accuracy and lower discrimination scores. Based on these findings, we believe that Monarch PCs, by achieving significantly better generative modeling performance, should naturally give rise to better downstream performance. Hence, instead of testing our state-of-the-art PCs on downstream applications, we devote our effort, as well as the limited computation resources, to ablation studies that may help people better understand the behavior of PCs with Monarch matrices, such as: what is a good number of dense PCs to multiply to form Monarch structures, how does initialization of PC parameters via circuit products benefit training, is the hidden states of PCs also sparse and etc. [1] Qiu et al. “Compute Better Spent: Replacing Dense Layers with Structured Matrices” ICML 2024 [2] Dao et al. “Monarch: Expressive Structured Matrices for Efficient and Accurate Training” ICML 2022 [3] Zhang et al. “Tractable Control for Autoregressive Language Generation” ICML 2023 [4] Zhang et al. “Adaptable Logical Control for Large Language Models” NeurIPS 2024 [5] Choi et al. “Group Fairness by Probabilistic Modeling with Latent Fair Decisions” AAAI 2021
In-Context Fine-Tuning for Time-Series Foundation Models
Accept (poster)
Summary: The paper extends a foundational forecasting model so that it can be conditioned on additional time-series information. Using in-context learning, the values of other time-series, as well as the value of the time-series to be predicted, are added to the model input. Then the model is trained with the initial objective with a modified architecture to handle this additional information. Experiments are conducted on real-world datasets where the method is shown to reach a similar accuracy as fine-tuned models. Compared to those, the trade-off is different since no time is spent on the fine-tuning but a larger amount of time is spent at inference. ## edit after rebuttal I raised my score as the authors addressed my main concerns. Using a validation procedure is, I think, much cleaner, and the heatmap is an interesting addition to understand the benefit of conditioning on more in/out series examples. Claims And Evidence: Yes, the method's performance is validated on out-of-sample datasets. The accuracy is shown to match fine-tuned foundational models. Methods And Evaluation Criteria: Yes, a large collection of real-world datasets is considered, standard metrics are reported, and several are used to account for the randomness of the method. Theoretical Claims: NA Experimental Designs Or Analyses: Yes, the authors mostly reused benchmarks that have been peer-reviewed and are standard. They also clearly indicate which method performances are in-sample (e.g. trained on the datasets) and which are not. Supplementary Material: Yes, some parts, in particular A.9 Selecting In-Context Examples. Relation To Broader Scientific Literature: Related work is well cited. Essential References Not Discussed: Your work reminded me of Tactis (https://proceedings.mlr.press/v162/drouin22a/drouin22a.pdf), in particular feeding all the time-series dimensions as tokens to a transformer; it may be worth including it in your related work section.
Other Strengths And Weaknesses: Strength: - impactful application - sound method that could be applied to other foundation models Weakness: - some ablations may be missing - lack of analysis of the variance and of the behavior of the conditioning on additional data - the method trades fine-tuning time for inference time, therefore it is mostly applicable when fine-tuning time is problematic, which reduces the scope -- edit post rebuttal The authors performed ablations and now use a clear validation protocol. Other Comments Or Suggestions: * l326 Exponential Smoothing (ETS) (missing space) * "toursim" (typo for "tourism") Questions For Authors: **Analysis and model selection on the conditioning of extra data.** My biggest complaint about the paper is that there are very few experiments on the conditioning on random data, except for Fig 6 (and Fig 9 in the appendix), and that the process used to select the hyperparameters is not ideal. The paper makes key design decisions, such as selecting some number of in-series and out-series examples, which appear to be based on the OOD datasets; since you are performing model selection, you should instead use a separate set of validation datasets to make this selection, as you would otherwise be overfitting your model selection. In addition, I found the analyses on this aspect lacking: you report performance for only 4 setups (except in Fig 6, where the number of in-series examples is varied). Given that the cost of evaluating performance should be reasonable, it would be worth exploring a grid of options for the number of in-series and out-series examples and plotting a heatmap of the performance. For this, I believe it is a must to consider a set of datasets that are not used for the final selection (or at least to restrict the collections and be explicit about this) so as not to risk overfitting. 
**No variance reported.** Given that your results depend on random sampling, analysing the noise coming from this choice would be important; it could be done with the analysis I discussed above by reporting confidence intervals on mean performance estimates. In addition, reporting confidence intervals on the aggregated results in Fig 5 and others across datasets would be good to convey the uncertainty on the scores. Those are my biggest concerns with the paper; if those points are addressed, I would be keen to revise my score. The other points below are not as crucial. **Additional points**. - instead of a separator, is there a reason why you did not consider a simple positional encoding? (e.g. a categorical embedding mapping the time-series index concatenated to your input, of course using a fixed index for the input time-series) - why are you using a causal mask in your architecture for the conditioning out-series? I would have imagined a mask that allows only the current in-series to look at other out-series but does not let an out-series look at previous ones, as it is less relevant for an out-series to look at the previous ones (this would also reinforce the random choice of the time-series order when sampling) - Fig 9: how did you compute the confidence interval? Have you considered ensembling the predictions obtained when sampling different histories? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful comments and suggestions. We address the main points below. Please let us know if you have any further comments or concerns. If our response sufficiently addresses your concerns, we hope that you will consider updating your score accordingly. > Given that the cost of evaluating performance should be reasonable, it would be worth to explore a grid of options for the number of in-series and out-series example and plot a heatmap of the performance. For this, I believe it is a must to consider a set of datasets that are not used for the final selection (or at least to restrict the collections and be explicit about this) to not risk overfitting. Thank you for this feedback. Based on your suggestion, we constructed a validation dataset from the training portion of a subset of the Monash datasets (specifically: weather, traffic, australian electricity, ercot, ETTm, and ETTh). We chose these datasets because they contained many training examples long enough to construct up to 20 in-series examples. We measured the validation MASE error of TimesFM-ICF with the number of in-series examples varying from 0 to 20, and the total number of in-context examples (including randomly selected examples) varying from 1 to 50. The resulting heatmap is attached as Figure 1 in this link: https://anonymous.4open.science/r/icml25-DB6C/icml25.pdf. The configuration with the smallest validation MASE was 11 in-series examples and 34 total examples. The geometric mean MASE ratio (averaged over 5 runs with different random examples selected) was 0.780 ± 0.003 (so within a standard error of the MASE value we report in Figure 5). > Given that your results depend on a random sampling, analysing the noise coming from this decision would be important, it could be done with the analysis I discussed above by reporting confidence interval on mean performance estimates. 
In addition, reporting the confidence interval on the aggregated results on Fig 5 and others across datasets would be good to convey the uncertainty on the scores. Please see updated OOD tables in https://anonymous.4open.science/r/icml25-DB6C/icml25.pdf which now include confidence intervals for TimesFM-ICF. Each entry in the tables is an average over 5 runs of random in-context example selection (using our original in-series configuration). Note that our Figures already include confidence intervals. > instead of a separator, is there a reason why you did not consider a simple positional encoding? (eg a categorical embedding mapping the time-series index concatenated to your input, of-course using a fix index for the input time-series) Using a time-series-level positional encoding is definitely a valid option. One advantage of our proposal is that the pretrained model weights could potentially generalize to more than 50 in-context examples thanks to our current choice of not using any form of positional encoding. This is an open question we will empirically explore in the future. > why are you making a causal mask in your architecture for the conditioning out-series? I would have imagined a mask that allows only the current in-series to look at other out-series but not to let an out-series looks at previous ones as it is less relevant for a out-series to look at the previous ones (also reinforce the random choice of the time-series order when sampling) As we train the model in a decoder-only manner, during training time there is no notion of a single “in-series”: for the forward pass on a training context (see Section 5.1, Context Generation), every series (example) is its own “in-series” and all its preceding series are its “out-series”; therefore we have to apply full context-level causal attention for the decoder-only setup to work. At inference time we hence choose to keep the attention between out-series, for consistency with model training. 
This is identical to how language models commonly handle in-context examples (few-shot prompts). It is a good point that our design breaks the symmetry among out-series. Empirically, we tried to minimize its effect by randomizing the order of in-context examples during training when there is no causal leakage (see Section 5.1, Context Generation, grouping, “Dataset level:”). It is also an interesting question what happens if we remove this attention between out-series at inference time; we will explore this in a future study. > Fig 9: how are you computed the confidence interval? have you considered ensembling the predictions obtained when sampling different history? In Fig 9, the uncertainty of the reported metric comes from the random selection of in-context examples, and the confidence interval is computed based on 10 runs of the same setup with different random seeds. It is definitely possible to ensemble the predictions based on repeated sampling of different in-context examples. The practitioner can make this choice at the cost of increased latency. --- Rebuttal Comment 1.1: Comment: Thank you for your answer and the additional experiments; I will raise my score as they addressed my main concerns. Using a validation procedure is, I think, much cleaner, and the heatmap is an interesting addition to understand the benefit of conditioning on more in/out-series examples. I have two small remaining suggestions: > In Fig 9, the uncertainty of the reported metric comes from the random selection of in-context examples, and the confidence interval is computed based on 10 runs of the same setup with different random seeds. It would be great to put this in the paper as it was not mentioned as far as I could see (I may have missed it). --- Reply to Comment 1.1.1: Comment: Thank you for your reply and additional suggestions. We will incorporate this into the final version.
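The attention-mask discussion above (full context-level causal attention versus the reviewer's proposal of blocking attention between out-series) can be illustrated with a small NumPy sketch. This is our own illustrative reconstruction, not the paper's implementation; function names, example lengths, and the mask layout are assumptions.

```python
import numpy as np

def full_causal_mask(lens):
    """Full context-level causal mask (the paper's choice, per the rebuttal):
    every position attends to all earlier positions, including positions in
    earlier ("out-series") examples. lens holds the token length of each
    concatenated in-context example."""
    n = sum(lens)
    return np.tril(np.ones((n, n), dtype=bool))

def blocked_out_series_mask(lens):
    """Reviewer's hypothetical alternative: the final ("in-series") example
    may attend to all earlier examples, but earlier examples attend only
    within themselves. Sketch only, for contrast with the mask above."""
    n = sum(lens)
    mask = np.zeros((n, n), dtype=bool)
    starts = np.cumsum([0] + list(lens[:-1]))
    for s, l in zip(starts, lens):
        # causal attention within each example
        mask[s:s + l, s:s + l] = np.tril(np.ones((l, l), dtype=bool))
    # the last (in-series) example additionally sees everything before it
    mask[starts[-1]:, :starts[-1]] = True
    return mask

f = full_causal_mask([3, 3, 2])
b = blocked_out_series_mask([3, 3, 2])
assert f[4, 1] and not b[4, 1]  # 2nd example sees the 1st only under full causal
assert b[7, 0]                  # the in-series sees out-series in both schemes
```

Under the full causal mask, every example plays the role of "in-series" for all examples before it, which is what makes the decoder-only training objective well defined across the whole context.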
Summary: This paper proposes a novel in-context fine-tuning strategy for a specific Time Series Foundation Model (TSFM). By continually pretraining a TSFM (TimesFM in the paper) with in-context examples, the updated model can be prompted with related past time series examples at inference time, enhancing forecasting accuracy without requiring additional training pipelines. The main contribution is to adapt the idea of "in-context learning" from the NLP domain to TSFMs. Experimental results validate the effectiveness of the proposed approach across multiple forecasting tasks. Claims And Evidence: Most of the claims are reasonable. However, some may not be clearly supported—please refer to *Experimental Designs or Analyses* and *Other Strengths and Weaknesses* sections for details. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem. Theoretical Claims: Nothing to discuss as no proofs for theoretical claims are included in the paper. Experimental Designs Or Analyses: Most experimental designs and analyses make sense, except the following ones. * *Section 6.3*: The design of using Moirai for in-context examples appears problematic. The key issue with this experiment is that **past related examples cannot be used as multivariate features in Moirai**. In Moirai’s framework, multivariate features correspond to **different variables within the same example**, sharing the same temporal range and aligned time ID. However, treating past related examples as additional multivariate features introduces a mismatch, as they belong to entirely different temporal periods. This misalignment causes Moirai’s time ID to convey misleading information, leading to the extraction of meaningless temporal relationships via attention. Consequently, this likely explains why Moirai-MV’s performance degrades when additional multivariate features are introduced in your experiments. 
In summary, the comparison with Moirai-MV in this manner is confusing and potentially misleading. * *Section 6.4.2*: TimesFM (base) has a maximum history length of 512 and relies on positional encoding. Thus, I don't think directly running inference with a longer history ($L=2048$) would significantly improve performance without additional training. Notably, TimesFM-ICF undergoes a continual pretraining phase. A fair comparison would involve continually pretraining TimesFM (base) with a longer history length ($L=2048$) before evaluating its performance against TimesFM-ICF. In my view, the current evaluation may potentially undervalue the impact of a longer history in TimesFM. Supplementary Material: Part of the code files are provided. However, I am unable to view them on my device due to encoding issues. Relation To Broader Scientific Literature: The key contribution of this work relates to in-context learning, or few-shot learning, in the LLM domain. It aims to enable TSFMs to leverage a few prompted examples at inference time, enhancing forecasting performance without additional training. This work represents a pioneering effort in exploring this direction. Essential References Not Discussed: Related works are properly cited and discussed in the paper. Other Strengths And Weaknesses: Strengths: 1. The exploration of in-context learning for TSFMs is pioneering and important. 2. The paper is generally well-written and clearly structured. 3. Extensive experiments are conducted, providing a comprehensive analysis of the proposed method. Weakness: 1. In-context fine-tuning appears to be highly inefficient in terms of inference time. According to Table 7, the total inference time of TimesFM-ICF is 25 minutes, almost 50 times slower than TimesFM without ICF. 
The authors argue that fine-tuning is inefficient by taking training time into account; however, there are two key issues with this claim: * In-context fine-tuning also requires an additional continual pretraining phase, yet the authors do not account for this computational cost. * When compared to TimesFM (zero-shot), whose total inference time should be similar to TimesFM-FT, TimesFM-ICF offers no efficiency advantage at all. This raises a critical question: is the forecasting improvement provided by in-context fine-tuning worth such a significant trade-off in inference speed? Other Comments Or Suggestions: * Suggestions: The paragraph **In-Context Example Selection** in Section 6.2 should be moved to an earlier section if it is a fundamental component of the framework. Placing it within the experimental results section appears confusing and incoherent: it is unclear whether this in-context example selection strategy is applied during both continual pretraining and testing or only during inference in this specific experiment. * Typos: In the title of Figure 1, it should be _left/right_ instead of _top/bottom_. Questions For Authors: 1. Is the proposed in-context fine-tuning method generalizable to other TSFMs beyond TimesFM? 2. Why is the maximum history length in TimesFM limited to 512? What constraints impose this limit—positional encoding, architectural consideration, or just pretraining configurations? 3. Why can TimesFM-ICF handle only a maximum of 50 examples in its context? Is this limitation due to the continual pretraining process being designed specifically for 50 prompted examples, or are there other underlying constraints? 4. I believe Moirai is trained to accommodate a maximum of 128 variables, not 50 as stated in Line 353. Could you clarify this discrepancy? 5. What is the fundamental difference between in-context fine-tuning and using exogenous variables? Can the ICF method be interpreted as treating past examples as exogenous variables? 6. 
Why can the lengths of in-context examples differ? Aren't they all of identical length $L+h$? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful comments and suggestions. We address the main points below. Please let us know if you have any further questions. If our response addresses your concerns, we hope you consider raising your score accordingly. > Section 6.3: Moirai Thank you for clarifying this point. Our interpretation from the Moirai paper in Section 3.2.2 Task Distribution was that the training datasets were augmented by “constructing multivariate time series from sub-datasets with univariate time series, by randomly concatenating them”. Based on this, it seemed that Moirai could support past related examples as multivariate features. We do not want to misrepresent the capabilities of the Moirai model in our paper, so we will contact the Moirai authors for clarifications, and correct this as the reviewer suggests. > Section 6.4.2: TimesFM (base) We apologize for the lack of clarity - we indeed pretrained TimesFM-LH with a longer history length of 2048 (in a manner similar to the latest version of the TimesFM repo: https://huggingface.co/google/timesfm-2.0-500m-pytorch). We didn't just use TimesFM (base) directly with a 2K context length at inference time to get the TimesFM-LH results. We will make this clearer in Section 6.4.2. > ICF appears to be highly inefficient in terms of inference time [...] Since the total context length for TimesFM-ICF is 50 times larger than TimesFM's, the 50 times inference speed slowdown is not unreasonable. However, TimesFM-ICF is still a zero-shot model: the additional continual pretraining phase for training TimesFM-ICF is a one-time cost that is independent of the target dataset – there is no cost to be paid to adapt the model for a new domain or dataset (unlike fine-tuning costs, which have to be paid to adapt TimesFM for every new dataset). 
More importantly, for practitioners, the ability to use the ICF model out of the box and avoid having to build a fine-tuning pipeline to customize the model for their use-case is a very significant cost and resource advantage that we think can more than offset the 50x inference speed disadvantage (note that even after this 50x slowdown, TimesFM-ICF can still perform inference for a single example in 43 ms on average - totalling 25 minutes on approximately 140k testing time series on a TPUv5e with 4 cores - which is often sufficient for most practical forecasting use-cases). > Suggestions [Sec 6.2] Thank you for the suggestions. We will update the final version of our paper accordingly. > other TSFMs beyond TimesFM? We believe that our methodology is generalizable to other TSFMs. This is an interesting direction for future research. > maximum history length limited to 512? The maximum history length constraint in TimesFM comes from the pretraining configuration - the model has been pretrained on examples up to the maximum history length and might not generalize well to lengths beyond what it has been pretrained on. Increasing the maximum history length during pretraining would result in longer training times and higher compute costs. > maximum of 50 examples in context? The choice of 50 examples for TimesFM-ICF comes from its continual pretraining setup, where the model is trained with up to a 25K (512*50) context window. Increasing the context window in continual pretraining would have required longer training time and more compute resources than we could afford. At inference time the model may generalize beyond 50 examples - there is indeed no mechanism within the model imposing a limit of 50. For simplicity and conciseness of the paper, we also use 50 examples in our empirical study for consistency with the continual pretraining setup. Verifying this generalization is a separate, empirically heavy task that we plan to study in the future. 
> I believe Moirai is trained to accommodate a maximum of 128 variables, not 50 Sorry for the wording issue. We did not intend to imply that it accommodates at most 50, just that it accommodates 50. We will clarify our wording in the final version. > What is the fundamental difference between in-context fine-tuning and using exogenous variables? We view in-context fine-tuning and exogenous variables as two separate concepts: the former supplements the main forecasting history with time series generated from the same or a similar distribution, in which sense it is in-context “fine-tuning” to fit this distribution. On the other hand, exogenous variables can be of any distribution, and their usefulness mainly comes from their correlation with the main forecast task. In this regard, in-context fine-tuning is a stricter practice. > Why the length of in-context examples can be different? At both training and inference time we pad short in-context examples to bring them to length L+h, while applying proper attention masks. Please see Section 5.1, Context Generation. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed reply. All points and concerns have been properly discussed and clearly explained by the authors. Regarding the Moirai-MV issue, please provide an update here once you hear back from the Moirai authors. Once again, I believe the quality of this work is solid, and I will update my scores accordingly. Please remember to make the corresponding revisions in your final version. --- Reply to Comment 1.1.1: Comment: Thank you for your reply. We wanted to provide an update after contacting the Moirai authors. They clarified that none of their training data contains multivariate time-series with covariates from different time windows. Based on this, and on your response, we agree that our current comparison with Moirai-MV without any caveats is misleading. 
Of course, this was not our intention, so we will correct this in the final version of our paper. We plan to remove the Moirai-MV column from Table 1 in the main body to eliminate this confusion. We will move the Moirai-MV result to the appendix, and emphasize there that the performance degradation is likely due to misalignment of the Time ID. We think it is useful to include this result in the appendix (with this caveat) to demonstrate that naively concatenating related time-series as multivariate features may not work out-of-the-box. Please let us know if this seems like a reasonable modification to you.
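The padding scheme described in the rebuttal (short in-context examples brought to length L+h, with attention masks covering only the real values) can be sketched minimally as follows. This is an illustrative reconstruction under our own assumptions: the function name, the left-padding convention, and the zero pad value are not confirmed details of the paper.

```python
import numpy as np

def pad_example(series, L, h):
    """Pad (or truncate) one in-context example to the fixed length L + h
    (history plus horizon). Returns the padded values together with a
    boolean validity mask so attention can ignore the padding positions.
    Sketch only; the paper's exact padding convention may differ."""
    target = L + h
    series = np.asarray(series, dtype=float)[-target:]  # truncate if too long
    pad = target - len(series)
    padded = np.concatenate([np.zeros(pad), series])
    valid = np.concatenate([np.zeros(pad, dtype=bool),
                            np.ones(len(series), dtype=bool)])
    return padded, valid

x, m = pad_example([1.0, 2.0, 3.0], L=4, h=2)
assert x.shape == (6,) and m.sum() == 3
assert not m[:3].any() and m[3:].all()
```

The validity mask would be combined (elementwise AND) with the causal attention mask so that padded positions neither attend nor get attended to.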
Summary: The paper proposes a "fine-tuning strategy via in-context learning" for pre-trained time series forecasting models. Essentially, the approach is similar to few-shot learning in LLMs, as multiple time series are added to the context in addition to the forecasted context. The authors modify an existing architecture and introduce a separation token to utilize multiple series in one context. Experiments suggest that the approach is effective and improves the forecasting performance of the pre-trained model. ## update after rebuttal We thank the authors for the clarification. I think the paper is valuable and I keep my score at accept. Claims And Evidence: The main claim that the approach can improve the performance of pre-trained time series models is well supported by the empirical results. Methods And Evaluation Criteria: The method is well motivated by the success of in-context learning in LLMs. A comprehensive benchmark and standard metrics are utilized for evaluation and, therefore, make sense. Theoretical Claims: There are no theoretical claims or proofs Experimental Designs Or Analyses: The experiments in section 6 seem appropriate and sound. The selected benchmark data is especially a good choice as it is more comprehensive than a lot of other work. The ablation experiments are reasonable and give important insights to justify the approach (comparison to long context model) as well as the specific in-context sample selection procedure. Regarding the "in-context sample selection procedure", I would suggest further adding the performance variation over the runs for the individual datasets (similar to Figure 9 for the overall results). Supplementary Material: I had a quick look at the code of the supplementary material. Relation To Broader Scientific Literature: The approach is embedded in the literature of pre-trained time series models, more specifically pre-trained time series forecasting models. 
The contribution can improve the performance of such models in general, as it does not necessarily depend on a certain architecture. Further, it might extend the potential use-case scenarios for these models. Further, it is related to in-context learning in general, which became especially popular for LLMs. Essential References Not Discussed: I am not aware of any essential references that are missing. Other Strengths And Weaknesses: Strength: - The approach is not dependent on the specific architecture but could be applied to other pre-trained time series architectures. - The approach reaches fine-tuning performance. - Evaluation on a comprehensive benchmark. Weakness: - Only a subset of the zero-shot benchmark is utilized. While this is a good idea to preserve the "zero-shot" setting, it would be advisable to additionally report results for the full benchmark and indicate which datasets are non-zero-shot, as this would make comparison to other papers/models easier. - As the evaluation includes randomness in the in-context samples, the authors should provide the variation over the individual datasets of the benchmark, similar to Figure 9 for the "overall result". - Context length increases linearly with an increasing number of context samples; therefore, computational complexity for the transformer increases quadratically. Other Comments Or Suggestions: On Page 2 / line 259, the authors refer to A.2, although they probably want to refer to A.5. Questions For Authors: - Why did you decide to call the benchmark an OOD benchmark? Typically, OOD more often refers to situations which one has to detect (OOD detection), as models do not perform reliably on such data. In my understanding, the idea of pre-trained models is that they should generalize over time series data, including the evaluated zero-shot data. Hence, in most related literature, this evaluation is more often referred to as a "zero-shot" benchmark. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful comments and suggestions. We address the main points below. Please let us know if you have any further comments or concerns. > Only the subset of zero-shot benchmark is utilized. While this is a good idea to preserve the "zero-shot" setting, it would be advisable to additionally report results for the full benchmark and report which datasets are non zero-shot as this would make comparison to other papers/models easier. Thank you for the suggestion. We have added evaluations on the missing four datasets (m4 quarterly, m4 yearly, weather, and traffic) in Table 1 (MASE) and Table 2 (WQL) here: https://anonymous.4open.science/r/icml25-DB6C/icml25.pdf. In these tables, we additionally report the geometric means of the scores over the 23 (zero-shot) datasets originally reported in our paper (row “Geometric Mean (ZS)”) and over all 27 datasets (“Geometric Mean (All)”). > As the evaluation includes randomness of the in-context samples, the authors should provide the variation over the individual datasets of the benchmark similar to Figure 9 for the "overall result". We have also added confidence intervals to Tables 1 and 2 in the link above (https://anonymous.4open.science/r/icml25-DB6C/icml25.pdf). Note that all of our figures already include error bars. It was not clear enough in Figure 5 and we will fix it. > Context length increases linearly with an increasing amount of context samples, therefore, computation complexity for the transformer increases quadratically We agree with the reviewer’s comment here. This quadratic attention complexity does not directly translate into quadratic inference time, likely due to (1) the latency of the feedforward layers and (2) optimizations of transformer implementations on modern accelerators. > Why did you decide to call the benchmark OOD benchmark? 
Typically, OOD is more often referred to situations which one has to detect (OOD detection) as the models do not perform reliable on it. In my understanding, the idea of pre-trained models is that the models should generalize over time series data, including the evaluated zero-shot data. Hence, in most related literature, this evaluation is more often referred to as "zero-shot" benchmark. Thanks for pointing this out. We used the notion of OOD loosely to mean out of the pretraining data distribution. We did not call it a "zero-shot benchmark" because that name was used in [1], and we wanted to be clear that our benchmark is a subset of theirs that is zero-shot for TimesFM-ICF. We will clarify this and update the name to "zero-shot benchmark". [1] Ansari, Abdul Fatir, et al. "Chronos: Learning the Language of Time Series." Transactions on Machine Learning Research.
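The aggregate reported in the rebuttal tables above, a geometric mean of per-dataset scores, plus the run-to-run uncertainty quoted earlier ("0.780 ± 0.003 over 5 runs"), can be reproduced with a short sketch. The numbers below are made up for illustration; only the aggregation method follows the rebuttal.

```python
import math
import statistics

def geometric_mean(scores):
    """Geometric mean computed in log space for numerical stability."""
    return math.exp(sum(math.log(s) for s in scores) / len(scores))

# Hypothetical per-dataset MASE ratios for three runs with different
# randomly selected in-context examples.
runs = [[0.90, 1.10, 0.80],
        [0.92, 1.05, 0.79],
        [0.88, 1.12, 0.81]]

# Aggregate within each run, then summarize run-to-run variation as
# mean +/- standard error of the mean.
per_run = [geometric_mean(r) for r in runs]
mean = statistics.mean(per_run)
sem = statistics.stdev(per_run) / math.sqrt(len(per_run))

assert abs(geometric_mean([2.0, 8.0]) - 4.0) < 1e-12
assert 0.0 < mean < 1.0 and sem >= 0.0
```

The geometric mean is the natural aggregate here because the per-dataset scores are ratios: it is invariant to which model sits in the numerator, up to inversion.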
Summary: The authors propose a framework to obtain pretrained models for time series forecasting that are capable of in-context learning. The authors' approach is verified on top of TimesFM (a decoder-only pretrained model for time series forecasting), accompanied by extensive evaluations. The authors show that the proposed approach not only outperforms data-specific models and other pretrained models in zero-shot settings, but that it even outperforms fine-tuned variants of the proposed model. Claims And Evidence: The main claim of the paper is that the proposed approach is a direct extension of NLP pretrained models to the field of time series forecasting. Whereas in NLP in-context learning has been shown to be a relevant property, it remains an open question whether this can be extended to time series. The authors claim that the proposed approach provides a positive answer to this question. Perhaps one of the most relevant points of the paper is that the extension is relatively easy. It is based on adding extra tokens that represent a split between consecutive in-context samples, so that the model is able to identify that different entities are provided in the context, which can be used to improve the quality of the generated forecast. Further, the authors provide extensive empirical evidence on the relevance of, and methodology to identify, in-context samples. They show that indeed the more samples are provided, the better the performance, albeit at the expense of larger inference time. The paper's claims are well supported by extensive empirical evidence. I have a couple of questions about the methodology, but I believe these can be addressed by the authors, and they do not hinder the interesting contribution made in the paper. Methods And Evaluation Criteria: **Evaluation**. The authors provide evaluations in terms of MASE, which assumes that the considered models provide only point forecasts. 
Several of the models considered here are able to provide probabilistic forecasts, and at least in the original TimesFM paper it was claimed that it would be possible to generate probabilistic forecasts as well. Moreover, since the authors' main reference for the evaluation setup is Ansari et al., which introduces Chronos (a probabilistic pretrained model), I wonder whether the provided model can generate probabilistic forecasts and what the corresponding evaluations in terms of mean weighted quantile loss would be. **Datasets**. The authors provide extensive evaluations. Although the authors take the experimental setup of Ansari et al. as reference, they decided to evaluate on 23 of the 27 datasets. This decision seems to have been made because the excluded datasets were used for pretraining the proposed model. This makes it unclear whether the main results of Figure 5 hold if these datasets were included in the zero-shot evaluation. **Qualitative Evidence**. The paper is missing visualizations of the generated forecasts. It would be great to see whether there are any clear visual changes in the generated forecasts when using a larger number of in-context samples. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: The experimental design is methodologically sound. The authors provide interesting studies on the effect of how the in-context samples can be chosen (time-series level and dataset level) and the number of samples to be chosen (the larger the better, at the expense of larger inference time), and show competitive performance against full fine-tuning of the model. In general, the authors emphasized that no data leakage happens in the out-of-domain evaluations. Supplementary Material: I read all the supplementary material. Relation To Broader Scientific Literature: The paper is well situated in the field of time series forecasting, and it compares with state-of-the-art models. 
Essential References Not Discussed: The authors do a fair job in comparing with other state of the art models. Other Strengths And Weaknesses: See above. Other Comments Or Suggestions: I would suggest that the authors update the bibliography entries. Several papers have already been accepted at top conferences, and still the authors cite them with their Arxiv entries. Examples of this are: - Ansari, A. F., Stella, L., Turkmen, C., Zhang, X., Mercado, P., Shen, H., Shchur, O., Rangapuram, S. S., Arango, S. P., Kapoor, S., et al. Chronos: Learning the language of time series. arXiv preprint arXiv:2403.07815, 2024. - Goswami, M., Szafer, K., Choudhry, A., Cai, Y., Li, S., and Dubrawski, A. Moment: A family of open time-series foundation models. arXiv preprint arXiv:2402.03885, 2024. - Gruver, N., Finzi, M., Qiu, S., and Wilson, A. G. Large language models are zero-shot time series forecasters. arXiv preprint arXiv:2310.07820, 2023. - Haviv, A., Ram, O., Press, O., Izsak, P., and Levy, O. Transformer language models without positional encodings still learn positional information. arXiv preprint arXiv:2203.16634, 2022. - Li, S., You, C., Guruganesh, G., Ainslie, J., Ontanon, S., Zaheer, M., Sanghai, S., Yang, Y., Kumar, S., and Bhojanapalli, S. Functional interpolation for relative positions improves long context transformers. arXiv preprint arXiv:2310.04418, 2023. - Liu, P. J., Saleh, M., Pot, E., Goodrich, B., Sepassi, R., Kaiser, L., and Shazeer, N. Generating wikipedia by summarizing long sequences. arXiv preprint arXiv:1801.10198, 2018. - Zhou, T., Niu, P., Wang, X., Sun, L., and Jin, R. One fits all: Power general time series analysis by pretrained lm. arXiv preprint arXiv:2302.11939, 2023. Questions For Authors: 1. Why is the base model in the proposed paper trained on even more datasets (the LOTSA datasets) than in the original TimesFM paper? I wonder whether the results would still hold if the same pretraining methodology as in the original TimesFM paper were followed. 2. 
If the authors are pretraining the base model, why did they decide to exclude certain datasets that could be used for zero-shot evaluation, as in Ansari et al.? Reading the supplementary material (Section A.5 -- lines 678-681), the context length may have been problematic in some of these cases, but it seems that even more standard datasets such as traffic and weather were excluded from the zero-shot evaluation. While I believe the authors' contribution is strong enough not to be hindered by these decisions, they raise open questions rather than provide clarity.
3. In the original TimesFM paper, one of the mentioned limitations is that covariates cannot be included in the pretrained model. I wonder whether the proposed approach in this paper can accommodate this. Perhaps in-context examples can be taken from covariates that evolve over time (dynamic features). This would be a clear-cut reason for practitioners in industry to adopt this model, as covariates often come as a de facto requirement.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful comments and suggestions. We address the main points below. Please let us know if you have any further comments or concerns. If our response sufficiently addresses your concerns, we hope that you will consider raising your score accordingly.

> I would suggest that the authors update the bibliography entries.

Thank you for pointing this out. We will update the bibliography accordingly in the final version of our paper.

> Why is the base model in the proposed paper trained on even more datasets (the LOTSA datasets) than in the original TimesFM paper? I wonder whether the results would still hold if the same pretraining methodology were followed as in the original TimesFM paper.

While our study focuses on the improvement introduced by continued training with in-context fine-tuning on top of a base model, we intend to start from an up-to-date base model. Therefore, we use the same datasets specified in the latest version (v2) of the TimesFM Hugging Face repo (https://huggingface.co/google/timesfm-2.0-500m-pytorch), which does include LOTSA.

> In the original TimesFM paper, one of the mentioned limitations is that covariates cannot be included in the pretrained model. I wonder whether the proposed approach in this paper can accommodate this. Perhaps in-context examples can be taken from covariates that evolve over time (dynamic features).

As our model is trained only with in-context examples coming from the same time series, or a similar one that shares the same or a similar distribution as the target time series, we expect that providing other covariates as in-context examples would likely not yield good performance out of the box. Adapting our model to allow for additional covariates is an interesting direction for future work.
> I wonder if the provided model can generate probabilistic forecasts and what the corresponding evaluations in terms of mean weighted quantile loss would be.

Thank you for your suggestion. We have evaluated the wQL of our proposed model along with some of the other baseline methods, and the results show that our model performs significantly better on wQL than the baselines. The full results are uploaded separately in Table 2 here: https://anonymous.4open.science/r/icml25-DB6C/icml25.pdf, and we will add these results to the next version of the paper.

> Missing visualization.

Thanks for pointing out our lack of clarification here. In the current paper, Fig. 7 is a limited visual example of how the ICF model behaves differently from a base model. We have created additional visualizations on the australian_electricity dataset (which has 5 time series) to demonstrate how the forecast changes with a large number of in-context examples; refer to Figure 2 in the following link: https://anonymous.4open.science/r/icml25-DB6C/icml25.pdf. In this figure, we plot the predictions of TimesFM-ICF operating in three modes: 0 in-context examples, 20 (random) in-context examples, and 50 in-context examples (5 of which are within-series examples). These three configurations achieve increasingly better MASE scores on this dataset (with MASE values of ~1, ~0.9, and ~0.8, respectively), and the predictions visually appear to improve along with the MASE values.

> Report on 27 datasets instead of 23.

We have added evaluations on the missing four datasets (m4 quarterly, m4 yearly, weather, and traffic) in Table 1 (MASE) and Table 2 (WQL) here: https://anonymous.4open.science/r/icml25-DB6C/icml25.pdf. In these tables, we additionally report the geometric means of the scores over the 23 (zero-shot) datasets originally reported in our paper (row "Geometric Mean (ZS)") and over all 27 datasets ("Geometric Mean (All)").
The results of Figure 5 continue to hold when adding the additional 4 datasets (which, recall, are not zero-shot for our model).
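Since the rebuttal above compares models by their MASE scores, here is a minimal sketch of the standard MASE metric (illustrative toy data and variable names; this is not the authors' evaluation code):

```python
import numpy as np

def mase(y_train, y_true, y_pred, season=1):
    # In-sample MAE of the (seasonal) naive forecast, used as the scale.
    naive_mae = np.mean(np.abs(y_train[season:] - y_train[:-season]))
    # Forecast MAE divided by that scale: values below 1 beat the naive baseline.
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))) / naive_mae

y_train = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_true = np.array([6.0, 7.0])
print(mase(y_train, y_true, np.array([6.0, 7.0])))  # perfect forecast -> 0.0
print(mase(y_train, y_true, np.array([7.0, 8.0])))  # off by one naive step -> 1.0
```

Under this normalization, the ~1 to ~0.8 progression reported above corresponds to a ~20% error reduction relative to a naive forecaster.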
MOGIC: Metadata-infused Oracle Guidance for Improved Extreme Classification
Accept (poster)
Summary: This paper explores methods to enhance classification performance using metadata in the task of Extreme Classification. Experiments on six popular benchmark datasets show that the method significantly improves model performance.

Claims And Evidence: Yes

Methods And Evaluation Criteria: Yes

Theoretical Claims: Yes, in Theoretical Justification of Oracle-Guided Losses

Experimental Designs Or Analyses: Yes

Supplementary Material: I have reviewed the supplementary material, including Visualization of MOGIC.

Relation To Broader Scientific Literature: This paper makes solid contributions to the area of Extreme Classification.

Essential References Not Discussed: No

Other Strengths And Weaknesses:
1. The proposed MOGIC framework is innovative, as it combines early fusion of text-based metadata and late fusion of memory items.
2. The two-phase training process, involving Oracle training and Oracle-guided disciple training, might introduce additional complexity.
3. Although it shows good performance on the tested datasets, its scalability to extremely large-scale or rapidly evolving datasets remains to be seen.

Other Comments Or Suggestions: See weaknesses above

Questions For Authors: See weaknesses above

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed review. Please find our response to your comments below.

1. **The two-phase training process involving Oracle training and Oracle-guided disciple training might introduce additional complexity.**
   * **Response**: We agree with the reviewer that the two-stage training framework introduces some additional complexity compared to single-stage baselines. However, the improvements in performance and the increased robustness achieved through regularization in this two-stage approach present a reasonable tradeoff for the added complexity. Moreover, this complexity is limited to the training phase (a one-time cost), while the deployed disciple models remain highly efficient at inference.
2. **Although it shows good performance on the tested datasets, its scalability to extremely large-scale or rapidly evolving datasets remains to be seen.**
   * **Response**: The MOGIC approach can be readily scaled to larger datasets. However, to the best of our knowledge there are no high-quality rapidly evolving datasets for XC; we believe this is a strong area for future research. Wikipedia-500K is one of the largest datasets used by the XC community, comprising 1.8M training samples and 501K labels, while many other XC datasets, such as EURLex-4.3K, Bibtex, or AmazonCat-13K, have significantly fewer labels (4.3K in EURLex-4.3K and 13K in AmazonCat-13K, for example). While LF-AmazonTitles-1.3M is another viable choice, we reported results on Amazon-131K, since metadata for that setting is not readily available and must be generated using GPT-based approaches. In light of your comments, we are also working to include experiments on the LF-AmazonTitles-1.3M dataset.
We will include these results in the final version of the manuscript; however, given the additional resources involved in generating the metadata and subsequently training the models, we do not have the results ready to report at this time. We are striving to incorporate them before the end of the rebuttal discussion phase.
Summary: The paper introduces MOGIC, a framework for improving extreme classification (XC) by leveraging metadata through a two-phase training approach. XC involves tasks with extremely large label spaces (e.g., product recommendations, Wikipedia tagging) where metadata can enhance accuracy but faces challenges like noise and latency. Existing methods use late-stage fusion for efficiency but underperform when metadata is clean. MOGIC trains an early-fusion Oracle model with access to ground-truth metadata to guide a disciple model (e.g., OAK, NGAME) via regularization. This approach improves precision and robustness without increasing inference latency, achieving 1–2% gains on six datasets.

## update after rebuttal
Thanks for the responses and extra sensitivity analysis, which partially address my questions. I will keep my positive rating of 3 considering the overall quality of this paper.

Claims And Evidence: Yes

Methods And Evaluation Criteria:
1. The method leverages both query-side and label-side metadata, enriching representations bidirectionally. Examples show improved label prediction by incorporating contextual metadata from both ends.
2. MOGIC demonstrates consistent performance gains across multiple datasets and metrics. It improves precision@1, NDCG, and propensity-scored metrics over state-of-the-art models like OAK. The method is validated on six benchmarks, showing broad applicability.

Theoretical Claims: There are some theoretical proofs, which seem to be correct. One typo: the symbol k in Inequality (11) should be K.

Experimental Designs Or Analyses: Six datasets are used for the experiments. Three different LLMs are tested and many baselines are compared. I think the experimental evaluation is quite comprehensive.

Supplementary Material: Yes. I reviewed most of the supplementary materials, with a focus on the theoretical analysis.

Relation To Broader Scientific Literature: This work is related to extreme classification.
And it is also related to RAG in LLMs.

Essential References Not Discussed: Nil.

Other Strengths And Weaknesses:
1. The approach is model-agnostic, enhancing both memory-based (OAK) and memory-free (NGAME, DEXA) XC models. Plug-and-play compatibility allows integration into existing architectures. Flexibility in the choice of Oracle (DistilBERT, Phi-2) balances performance and efficiency.
2. Theoretical analysis justifies the regularization losses, linking disciple performance to Oracle-guided training. Bounds on the population loss show the disciple converges toward Oracle accuracy with finite samples.
3. MOGIC maintains low inference latency by avoiding early-fusion overhead. Training costs are manageable, and inference matches the base models' speed. Experiments confirm no latency increase compared to OAK.
4. Robustness to missing or noisy metadata is demonstrated through quantile-wise analysis and noise-injection tests. The disciple outperforms the Oracle when metadata is perturbed, showing resilience to real-world conditions.

Other Comments Or Suggestions: Comparisons with pure LLM-based approaches (without disciple models) are missing. Larger Oracles might outperform MOGIC if computational constraints are relaxed, but this trade-off is not quantified.

Questions For Authors:
1. The Oracle's dependency on high-quality metadata during training limits performance if metadata is sparse or biased. While MOGIC handles inference-time noise, training assumes reliable metadata linkages, which may not hold in all scenarios.
2. Hyperparameters α and β for loss balancing are set empirically. The paper does not analyze sensitivity to these choices, risking suboptimal tuning in new applications.
3. The framework may overfit to Oracle biases, particularly if the Oracle's metadata integration is noisy. Ablation studies show performance drops without metadata, hinting at potential over-reliance.

Ethical Review Concerns: Nil.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
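Regarding question 2 above: the exact oracle-guided losses are defined in the paper, but schematically the α/β balancing the reviewer asks about has the form (an illustrative template only, not the authors' precise formulation):

```latex
\mathcal{L}_{\text{disciple}}
  \;=\; \mathcal{L}_{\text{task}}
  \;+\; \alpha \, \mathcal{L}_{\text{reg}}^{(1)}
  \;+\; \beta \, \mathcal{L}_{\text{reg}}^{(2)},
```

so a sensitivity analysis amounts to sweeping the two regularization weights and tracking P@k.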
Rebuttal 1: Rebuttal: Thank you for your detailed review. Below are our responses to your comments.

1. **Comparisons with LLM-based approaches**
   * **Response**: We have already included comparisons of MOGIC against LLaMA- and Phi-based Oracles without disciple models, LoRA-finetuned for label generation (the XC task), in Tab. 3 (rows 1-3 below). Generative LMs cannot be directly employed for the XC task; therefore, for these experiments we carry out classification by comparing the embeddings associated with the final token of the query and the labels from the last layer of the SLM. The results reported in Tab. 3 of the paper summarize this experiment (cf. lines L367-380, col. 1). We observe that the (65M-sized) MOGIC disciple, trained with a DistilBERT/LLaMA-based oracle, outperforms the 7B-scale fine-tuned SLMs used directly without any disciple. We attribute this worse performance to the fact that these SLMs were pre-trained for text generation and then adapted for label generation. Nevertheless, their embeddings, when used for regularization in the MOGIC framework, result in improved performance of the disciple models. We will improve the clarity of this discussion.
   * To further investigate performance comparisons against language models, as part of the rebuttal we also include comparisons against LLaMA, Phi, and GPT when used directly for label generation (rows 4-6). We observe that oracle models trained specifically for the XC task perform better than much larger LMs. Also, for the XC task, using the embeddings from LMs is better than mapping their generations to the label set, since the generations often contain labels which do not belong to the label set.
*Oracle metrics on LF-WikiSeeAlsoTitles-320K*

| | P@1 | P@5 |
|-|-|-|
| DistilBERT (65M) (cf. Table 3) | 47.63 | 22.75 |
| LLaMA-2 (7B) (LoRA) - Embed (cf. Table 3) | 34.20 | 16.21 |
| Phi-2 (2.7B) (LoRA) - Embed (cf. Table 3) | 33.32 | 15.48 |
| LLaMA-2 (7B) (LoRA) Gen | 9.427 | 9.065 |
| Phi-2 (2.7B) (LoRA) Gen | 8.267 | 8.031 |
| GPT-4o Gen | 14.57 | 12.26 |

2. **Metadata dependency during training**
   * **Response**: As discussed in Tab. 8 (cf. Sec 4.3), disciples trained with an Oracle are more robust to noisy/missing metadata at inference than their Oracle counterparts. This robustness is not limited to noise at inference; it also extends to noise present in the metadata during training. To simulate a training scenario in which reliable metadata linkages are not available, we consider the following setting: we train MOGIC on the LF-WikiSeeAlsoTitles-320K dataset with 50% of the metadata replaced by randomly selected metadata from the corpus. The following tables summarize the performance of the linker, the Oracle, and the disciple models. We observe that MOGIC (OAK) is robust to noisy metadata even during training, with a negligible drop in performance. The MOGIC model trained with noisy metadata continues to be SOTA, outperforming other baselines.

*Linker metrics with 50% noise-added metadata, where 50% of the metadata for each query is replaced by random metadata*

| | P@1 | P@5 | N@5 | PSP@1 | PSP@5 |
|-|-|-|-|-|-|
| Ground-truth metadata | 46.16 | 20.97 | 36.48 | 28.75 | 25.37 |
| Noisy metadata | 27.98 | 9.90 | 20.82 | 21.62 | 14.10 |

*Oracle metrics with 50% noisy metadata*

| | P@1 | P@5 | N@5 | PSP@1 | PSP@5 |
|-|-|-|-|-|-|
| Ground-truth Oracle | 47.48 | 22.64 | 48.18 | 36.60 | 41.27 |
| Noisy Oracle | 41.23 | 19.92 | 42.17 | 30.94 | 35.77 |

*Performance of MOGIC (OAK) and OAK with 50% noisy metadata during training*

| | P@1 | P@5 | N@5 | PSP@1 | PSP@5 |
|-|-|-|-|-|-|
| MOGIC (OAK) (cf. Table 2) | 34.62 | 17.93 | 35.70 | 27.44 | 33.18 |
| MOGIC (OAK) with 50% noisy metadata | 34.29 | 17.68 | 35.37 | 27.89 | 33.07 |

3.
**Sensitivity of Loss Balancing Hyperparameters**
   * **Response**: We now include a sensitivity analysis over the α and β hyperparameters, considering all combinations of α ∈ {0.1, 1, 10} and β ∈ {0.1, 1, 10}. We performed a sensitivity analysis prior to our experimentation and found that (α, β) = (1, 0.1) leads to the best performance; these are the values we use for the experiments in the paper. We now present the ablations on the choice of α and β on the LF-WikiSeeAlsoTitles-320K dataset, with all other hyperparameters as defined in Appendix H. The table below summarizes the results. We observe that (α, β) = (1, 0.1) results in the best performance. However, the performance of MOGIC is generally robust to the choice of (α, β), with a variance of 0.108 in P@1 across all the choices considered. We will improve the clarity of this hyperparameter choice in the final version of the manuscript and include this ablation in the Appendix.

| α | β | P@1 | P@5 | N@5 | PSP@1 | PSP@5 |
|-|-|-|-|-|-|-|
| 0.1 | 0.1 | 34.23 | 17.75 | 35.40 | 27.05 | 32.67 |
| 0.1 | 1.0 | 34.34 | 17.74 | 35.45 | 27.07 | 32.63 |
| 0.1 | 10.0 | 33.70 | 17.47 | 34.83 | 26.23 | 32.07 |
| 1.0 | 0.1 | 34.62 | 17.93 | 35.70 | 27.44 | 33.18 |
| 1.0 | 1.0 | 34.56 | 17.91 | 35.66 | 27.32 | 33.12 |
| 1.0 | 10.0 | 34.05 | 17.61 | 35.05 | 26.55 | 32.42 |
| 10.0 | 0.1 | 34.16 | 17.57 | 35.04 | 26.84 | 32.39 |
| 10.0 | 1.0 | 34.11 | 17.56 | 35.00 | 26.72 | 32.32 |
| 10.0 | 10.0 | 33.61 | 17.30 | 34.47 | 25.89 | 31.66 |

---
Rebuttal Comment 1.1: Comment: Thanks for the responses and the extra sensitivity analysis, which partially addresses my questions. I will keep my rating at 3 considering the overall quality of this paper.
---
Reply to Comment 1.1.1: Comment: Dear Reviewer, we thank you for your suggestions. We would like to present some more results to further address your questions.
- **Sensitivity of Loss Balancing Hyperparameters** To strengthen the sensitivity analysis of MOGIC w.r.t. the hyperparameters, we have also added numbers on the LF-WikiTitles-500K dataset.
Since this is a large dataset requiring an extensive amount of compute, for the rebuttal we report numbers after 50 epochs of training for the three best-performing (α, β) combinations from the LF-WikiSeeAlsoTitles-320K dataset. The observation is consistent with the previous analysis, and the (α, β) combination (1.0, 0.1) performs best. The performance of MOGIC is generally robust to the hyperparameters, and the choice of hyperparameters appears to be agnostic of the dataset.

* *Hyperparameter sensitivity analysis on LF-WikiTitles-500K*

| | α | β | P@1 | P@5 | N@5 | PSP@1 | PSP@5 |
|--:|-----|------|------:|------:|------:|------:|------:|
| 1 | 1.0 | 0.1 | 43.92 | 17.04 | 32.57 | 27.09 | 24.71 |
| 2 | 1.0 | 1.0 | 42.87 | 16.75 | 32.07 | 27.00 | 24.56 |
| 3 | 1.0 | 10.0 | 43.42 | 16.67 | 31.98 | 26.49 | 24.14 |

- **Metadata Dependency During Training and Oracle Bias** To further show the extent of the robustness of the proposed framework to unreliable metadata during training, we have extended the previous experiments to the LF-WikiTitles-500K dataset, in addition to the LF-WikiSeeAlsoTitles-320K dataset. These results further show that MOGIC (OAK) is robust to noisy metadata even during training, with a negligible drop in performance. This also shows that Oracle models trained with noisy data can still be used to train the disciple model.
* *Linker metrics with 50% noise-added metadata, where 50% of the metadata for each query is replaced by random metadata*

| | P@1 | P@5 | N@5 | PSP@1 | PSP@5 |
|----------------------:|------:|------:|------:|------:|------:|
| Ground-truth metadata | 20.74 | 13.72 | 15.90 | 7.38 | 8.29 |
| Noisy metadata | 10.06 | 4.99 | 6.44 | 7.12 | 5.48 |

* *Oracle metrics with 50% noisy metadata*

| | P@1 | P@5 | N@5 | PSP@1 | PSP@5 |
|----------------------:|------:|------:|------:|------:|------:|
| Ground-truth metadata | 64.32 | 29.92 | 50.58 | 37.41 | 39.75 |
| Noisy metadata | 59.85 | 26.98 | 46.47 | 34.58 | 36.00 |

* *Performance of MOGIC (OAK) with 50% noisy metadata during training*

| | P@1 | P@5 | N@5 | PSP@1 | PSP@5 |
|-----------------------------------:|------:|------:|------:|------:|------:|
| MOGIC (OAK) (cf. Table 2) | 47.28 | 18.55 | 34.97 | 27.29 | 26.12 |
| MOGIC (OAK) with 50% noisy metadata | 46.68 | 18.53 | 34.83 | 27.44 | 26.27 |

- **Comparisons with LLM-based approaches** We have also added the propensity-scored metrics for the results from generative language models. The same trends follow.

| | P@1 | P@5 | PSP@1 | PSP@5 |
|-----------:|------:|------:|------:|------:|
| DistilBERT (cf. Table 3) | 47.63 | 22.75 | 36.71 | 41.45 |
| LLaMA+Metadata (LoRA) - Embed (cf. Table 3) | 34.20 | 16.21 | 30.46 | 31.93 |
| Phi+Metadata (LoRA) - Embed (cf. Table 3) | 33.32 | 15.48 | 29.75 | 30.61 |
| LLaMA+Metadata (LoRA) Gen | 9.427 | 9.065 | 12.86 | 9.982 |
| Phi+Metadata (LoRA) Gen | 8.267 | 8.031 | 10.25 | 7.689 |
| GPT+Metadata Gen | 14.57 | 12.26 | 16.86 | 12.92 |

- **Testing on Large-Scale XML Datasets** As suggested by other reviewers, we have added results for MOGIC (OAK) on the LF-AmazonTitles-1.3M dataset, an extremely large-scale XML dataset with 1.3 million labels. We observe that our framework MOGIC (OAK) shows gains over OAK.
* *Results on the LF-AmazonTitles-1.3M benchmark dataset*

| | P@1 | P@5 | N@5 | PSP@1 | PSP@3 | PSP@5 |
|------------:|------:|------:|------:|------:|------:|------:|
| MOGIC (OAK) | 48.93 | 38.49 | 46.45 | 35.78 | 38.92 | 40.59 |
| OAK | 48.91 | 38.13 | 46.07 | 34.65 | 37.53 | 39.04 |

---
---
We sincerely thank the reviewers for their constructive feedback, which has significantly improved the quality of our work. We warmly welcome any additional suggestions for further enhancement.
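For readers outside XC: the P@k values in the tables above are per-query precision at k, averaged over all test queries. A minimal dense sketch (real XC systems use sparse top-k retrieval over label spaces with hundreds of thousands of labels; data and names here are illustrative):

```python
import numpy as np

def precision_at_k(scores, relevant, k=5):
    # Take the k highest-scoring label ids and count how many are relevant.
    top_k = np.argsort(scores)[::-1][:k]
    return len(set(top_k.tolist()) & set(relevant)) / k

scores = np.array([0.9, 0.1, 0.8, 0.3])     # model scores over a 4-label space
print(precision_at_k(scores, {0, 3}, k=2))  # top-2 = labels {0, 2} -> 0.5
```

PSP@k (propensity-scored precision) additionally up-weights rare "tail" labels by their inverse propensity, which is why it can move differently from plain P@k in the tables above.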
Summary: The authors propose a framework for building a disciple model which can perform extreme multi-label classification with the assistance of RAG-like metadata. This pipeline, MOGIC, is two-phase: in phase (1), an oracle with access to high-quality, ground-truth metadata is trained. In phase (2), a smaller "disciple" model participates in knowledge-distillation-like training to mimic the predictions of the oracle model, while also making predictions on relevant metadata which might help with the downstream prediction. Through MOGIC, the disciple model is both robust to noisy metadata and performant on XML tasks such as WikiTitles-500K and AmazonTitles-131K.

## Update after rebuttal
I am choosing to maintain my positive score in light of the rebuttal.

Claims And Evidence: The results, especially the P@1 numbers, indicate the effectiveness of the approach. It is my understanding that through this strategy, lightweight architectures such as DistilBERT can perform on par with Phi-2 and Llama-2. It is clear from the experiments section that the use of high-quality textual metadata is capable of training a high-quality disciple model.

Methods And Evaluation Criteria: I do believe that the proposed method makes sense for the problem, especially in the age of LLMs. Classical XML architectures and papers from 2016-2024 did not extensively consider the addition of metadata to help with classification, but given the typically lackluster results on popular XML datasets (due to their extreme difficulty), appealing to RAG-like methodology is intuitive. The datasets are appropriate, though I would have liked to have seen performance on even more challenging datasets such as Amazon-670K (see questions below).

Theoretical Claims: I did not check the correctness of the proof, though the bound makes sense, as it becomes tighter with more metadata and samples.
The authors should probably explain what the Rademacher constants are for those readers less familiar with PAC theory (are these just Rademacher splitting dimensions?).

Experimental Designs Or Analyses: The experimental designs appear to be fair and routine. The datasets are standard and frequently encountered in the XML literature. The augmentation of the XML samples and labels with metadata is new to me, but the construction of the metadata makes sense.

Supplementary Material: I reviewed Sections G and I in the supplementary material.

Relation To Broader Scientific Literature: Extreme multi-label classification is applicable to the broader recommender-systems community. In my opinion, XML is still generally unsolved and challenging, so the contribution of new algorithms which push the P@1 on these datasets is meaningful and useful to the broader machine learning community.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:
Other Strengths:
- I find the contribution useful, as it re-frames the XML problem within the landscape of LLMs and RAG. It's a fresh perspective.
Other Weaknesses:
- The disciple training doesn't seem entirely novel. It resembles knowledge distillation, except it's not immediately clear whether the disciple architecture is smaller or simply the same as the oracle.

Other Comments Or Suggestions: The authors could try to present a simple example of the features and labels for an XML problem. It likely won't be immediately clear to a reader outside of this field that the label is an enormously sized, highly sparse binary vector corresponding to classes -- this is only briefly covered. The authors should also try to further emphasize that these problems are difficult because there are very few samples per label.

Questions For Authors: Is it possible to run tests on extremely challenging datasets such as WikiLSHTC-325K and Amazon-670K? I would be interested in seeing how metadata is constructed for these datasets and how MOGIC holds up.
If not, could the authors explain why these datasets are not suitable for the described framework? I am also interested in seeing whether a gold-standard method such as SLEEC can be defeated via MOGIC.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your detailed review. Please find our response to your comments below.

1. **Clarity on Rademacher constants**
   * **Response**: The Rademacher complexity constants $R_q$ and $R_l$ in Theorem 1 are scalar values which quantify the complexity, or capacity, of the hypothesis classes corresponding to the query tower and the label tower, respectively. We use the standard definitions of Rademacher constants from statistical learning theory for binary classification problems (ref: https://www.cs.cmu.edu/~ninamf/ML11/lect1117.pdf).
   * *Explanation of Rademacher complexity:* Mathematically, the Rademacher complexity constants $R_q, R_l$ are estimated as the average empirical loss obtained when minimizing over the hypothesis class on a data sample with randomly annotated labels, i.e., labels generated by a fair Bernoulli distribution (probability 0.5 of being positive or negative). In intuitive terms, the smaller the values of $R_q, R_l$, the less prone the query tower and label tower are to overfit the finite training data, and consequently the accuracy on the test set is expected to be better.
2. **Novelty of disciple training and clarity of oracle architecture**
   * **Response**: The disciple training in MOGIC is a novel variant of knowledge distillation (KD). Compared to standard KD, it differs in multiple ways. First, the metadata is provided as an early concatenation in textual form for the oracle, but is used to train free parameters in the disciple's memory via a novel regularization framework. This can be viewed as KD from an early-concatenation model to a two-tower model. While all disciple models benefit from this framework, the additional parameters present in memory-based disciples such as MOGIC (OAK) yield the largest gains. Furthermore, the oracle has access to ground-truth metadata, which is privileged information not available to the disciple model (unlike standard KD).
In this MOGIC framework, the oracle can either be larger than, or of the same size as, the disciple. Table 3 of the main manuscript presents comparisons between larger (LLM-based, 2.7B/7B-sized) oracles and a disciple-sized (65M) DistilBERT oracle (cf. lines 367-380, column 1).
3. **Clarifying XML Labels and Data Scarcity**
   * **Response**: We will update the introduction of the manuscript accordingly. We will also include a detailed example, as suggested by you, in the appendix of the revised draft.
4. **Testing on Large-Scale XML Datasets like WikiLSHTC-325K or Amazon-670K**
   * **Response**: Datasets such as WikiLSHTC-325K (which contains 325K labels) do not contain raw text or label features associated with labels, and therefore have not been directly used for training encoder-based models such as NGAME, DEXA, DEXML, ANCE, etc. This is why we chose to report on the LF-WikiTitles-500K and LF-Wikipedia-500K datasets (which have 501K labels), which are from the same distribution and have the same task as WikiLSHTC (category prediction), but contain a larger label set, with label features/text available. Similarly, to be aligned with the baseline methods, we use the LF-Amazon-131K dataset instead of Amazon-670K. To demonstrate MOGIC's effectiveness on larger-scale datasets, we are working to include experiments on the LF-AmazonTitles-1.3M dataset in the final version of the manuscript. While the results are not yet available, due to the resource-intensive process of generating metadata and training models, we aim to include them before the conclusion of the rebuttal phase.
5. **Comparison against SLEEC**
   * **Response**: SLEEC does not use label text for classification, and therefore performs worse than methods such as Parabel [1] on multiple standard datasets. A direct comparison against SLEEC is challenging, as we were unable to find an up-to-date implementation of the algorithm.
Therefore, we chose to compare MOGIC against Parabel, which has been shown to outperform SLEEC across datasets [1]. We present these comparisons on LF-AmazonTitles-131K, LF-WikiSeeAlsoTitles-320K, and LF-WikiTitles-500K. The results are reported in the table below, with performance numbers for Parabel obtained from ECLARE [2]:

* *Performance of MOGIC (OAK) and Parabel on different benchmark datasets*

| | P@1 | P@5 | PSP@1 | PSP@5 |
|--:|--:|--:|--:|--:|
| **LF-AmazonTitles-131K** | | | | |
| MOGIC (OAK) | 47.01 | 22.40 | 40.62 | 50.33 |
| Parabel | 32.6 | 15.61 | 23.27 | 32.14 |
| **LF-WikiSeeAlsoTitles-320K** | | | | |
| MOGIC (OAK) | 34.62 | 17.93 | 27.44 | 33.18 |
| Parabel | 17.68 | 8.59 | 9.24 | 11.8 |
| **LF-WikiTitles-500K** | | | | |
| MOGIC (OAK) | 47.28 | 18.55 | 27.29 | 26.12 |
| Parabel | 40.41 | 15.42 | 15.55 | 15.35 |

[1] Prabhu, Yashoteja, et al. "Parabel: Partitioned label trees for extreme classification with application to dynamic search advertising." World Wide Web Conference, 2018.
[2] Mittal, Anshul, et al. "ECLARE: Extreme classification with label graph correlations." TheWebConf, 2021.
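To make the Rademacher discussion in point 1 of the rebuttal above self-contained: the empirical Rademacher complexity of a hypothesis class $\mathcal{H}$ on a sample $S = (x_1, \ldots, x_n)$ is the standard quantity

```latex
\hat{\mathcal{R}}_S(\mathcal{H})
  = \mathbb{E}_{\sigma}\!\left[ \sup_{h \in \mathcal{H}} \frac{1}{n} \sum_{i=1}^{n} \sigma_i \, h(x_i) \right],
\qquad \sigma_1, \dots, \sigma_n \overset{\text{i.i.d.}}{\sim} \mathrm{Unif}\{-1, +1\},
```

which matches the rebuttal's intuition: $R_q$ and $R_l$ measure how well the query- and label-tower hypothesis classes can fit random ±1 labels, so smaller values indicate less capacity to overfit.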
Few-Shot Learner Generalizes Across AI-Generated Image Detection
Accept (poster)
Summary: This paper adopts the concept of traditional few-shot learning (prior to 2022) and utilizes a prototype network to construct an AIGC image detector, with experimental validation demonstrating improved generalization performance.

Claims And Evidence: Yes.

Methods And Evaluation Criteria:
Strengths:
- This paper employs a prototype network to construct an AIGC image detector, reducing reliance on training data from newly introduced generative algorithms.
Weaknesses:
- This paper merely applies ProtoNet without any task-specific improvements for AIGC detection. Therefore, I believe its academic contribution is quite limited.
- ProtoNet is also a decade-old algorithm, making it quite outdated in the few-shot learning domain. Recently, [https://dl.acm.org/doi/10.1145/3652583.3658035] utilized CLIP for few-shot AIGC detection. However, this paper does not provide a comparison with such approaches.

Theoretical Claims: The paper does not present any theoretical contributions.

Experimental Designs Or Analyses: The experiments in this paper lack testing on large-scale/high-quality datasets (e.g., Chameleon [1], WildFake [2]), and the compared methods are overly outdated.
[1] A Sanity Check for AI-generated Image Detection @ ICLR'25 https://arxiv.org/abs/2406.19435
[2] WildFake: A Large-Scale and Hierarchical Dataset for AI-Generated Images Detection @ AAAI'25

Supplementary Material: The authors did not provide any supplementary materials.

Relation To Broader Scientific Literature: The authors applied an algorithm from the few-shot learning domain and introduced it into the AIGC detection task.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: N/A

Other Comments Or Suggestions: N/A

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 1
Rebuttal 1: Rebuttal: Dear Reviewer fx2j, Thank you for your feedback and constructive comments. We appreciate the time and effort you invested in reviewing our work. Here are our responses to your concerns: Q1. Academic contribution is quite limited. To the best of our knowledge, our work is the first to systematically adapt few-shot learning for AIGC detection. While prior research has primarily focused on improving model generalization, our work uniquely leverages few-shot learning to mitigate performance degradation when detecting synthetic images from unseen domains or new generative models. The ability to achieve superior generalization with minimal samples from emerging generative models represents a key innovation of our approach. Specifically, our work makes the following key contributions: 1. Novel Adaptation of Few-Shot Learning: We propose a pioneering framework that leverages few-shot learning to significantly reduce performance degradation when detecting synthetic images from unseen domains or new generative models. This is particularly impactful given the rapid evolution of generative AI technologies. 2. Practical Efficiency: Unlike existing methods that require extensive retraining on new collected data, our approach achieves robust generalization with only minimal samples from new domains, offering a more scalable and resource-efficient solution. 3. Demonstration and Insights: Through comprehensive experiments, we demonstrate that our few-shot adaptation outperforms traditional fine-tuning approaches on widely used benchmarks. Furthermore, AIGC detection is a question-driven task, and our work provides a novel solution to address it. Our work introduces a new perspective in this field which can inspire future research directions and pave the way for more adaptive detection frameworks in the challenge of rapidly evolving generative technologies. Q2. ProtoNet is outdated in the few-shot learning domain. 
Our primary objective is to validate the fundamental effectiveness of few-shot learning. For this purpose, we intentionally adopt this vanilla yet classic method to ensure a comprehensive and interpretable evaluation. ProtoNet remains a principled and widely recognized baseline in few-shot learning, which allows us to isolate and rigorously assess the performance of few-shot learning without the confounding factors introduced by more complex tricks. Therefore, ProtoNet serves as an ideal choice due to its conceptual simplicity and proven reliability. This approach can help us better analyze the intrinsic advantages of few-shot learning, offering insights that may be obscured by more sophisticated but less concise methods. Q3. This paper does not provide a comparison with the given approach [1]. We appreciate your insightful comment regarding the comparison with the approach presented in [1]. Our paper focuses on evaluating few-shot learning methods for AI-generated image detection, and we selected several widely recognized baselines for comparison, which have also been used by this work [2]. We will include comparisons with CLIP-based methods for a more comprehensive evaluation. Q4. Lack testing on large-scale/high-quality datasets. As emphasized in our paper, the primary goal of this work is to propose a novel few-shot learning framework for AI-generated image detection. We adopt the GenImage benchmark due to its widespread use in prior research, which facilitates direct comparisons with existing methods. While numerous datasets have been proposed for deepfake detection, few have gained broad recognition as foundational benchmarks. This study [2] presented at ICLR 2025 is too recent to be thoroughly validated in the literature. Additionally, our current evaluation is constrained by extremely limited computational resources. We appreciate this insightful suggestion and plan to include more experimental results in the supplementary materials. 
In conclusion, our endeavor is to introduce a novel few-shot learning application for AIGC detection, addressing the critical gap in generalization for rapidly evolving generative models. We sincerely appreciate your constructive feedback, which is helpful for us to identify key areas for further refinement. While we acknowledge the current limitations, we believe our study offers a new perspective in AIGC detection, and its methodological innovation—coupled with empirical validation—merits reconsideration for publication. We would sincerely appreciate it if you would reconsider the potential and novelty of our contribution. Thank you again for your time and insightful critique. [1] Sohail Ahmed Khan and Duc-Tien Dang-Nguyen. 2024. CLIPping the Deception: Adapting Vision-Language Models for Universal Deepfake Detection. In Proceedings of the 2024 International Conference on Multimedia Retrieval. Association for Computing Machinery, New York, NY, USA, 1006–1015. [2] Yan et al. A Sanity Check for AI-generated Image Detection. CoRR, abs/2406.19435, 2024.
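The vanilla ProtoNet baseline defended in Q2 of the rebuttal above reduces to two steps: average each class's support embeddings into a prototype, then assign every query to its nearest prototype. A minimal sketch, assuming an upstream encoder has already produced the embeddings; the toy Gaussian data and two-class layout here are illustrative placeholders, not the paper's actual setup:

```python
import numpy as np

def prototypes(support_emb, support_lbl, n_classes):
    """Class prototype = mean of that class's support embeddings."""
    return np.stack([support_emb[support_lbl == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_emb, protos):
    """Assign each query to its nearest prototype (squared Euclidean)."""
    d = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

# Toy setup: class 0 plays the "real" class, class 1 one generative model.
rng = np.random.default_rng(0)
support = np.vstack([rng.normal(0.0, 0.1, (5, 8)),   # 5 "real" support embeddings
                     rng.normal(3.0, 0.1, (5, 8))])  # 5 "fake" support embeddings
labels = np.array([0] * 5 + [1] * 5)
protos = prototypes(support, labels, 2)

queries = rng.normal(3.0, 0.1, (2, 8))   # two queries drawn near the fake class
print(classify(queries, protos))         # both land on the generator class: [1 1]
```

In the few-shot detection setting described here, a query would be flagged as fake whenever its nearest prototype belongs to any generator class rather than the single real class.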
Summary: The paper presents the Few-Shot Detector (FSD), an innovative approach to detect AI-generated images, particularly from unseen generative models. Traditional fake image detectors often struggle with generalization to new models due to the scarcity and high cost of collecting training data. FSD circumvents this challenge by reformulating the detection task as a few-shot classification problem, enabling it to effectively classify images based on limited samples from unseen domains. ## update after rebuttal While I appreciate the authors’ rebuttal, key concerns remain unaddressed. Therefore, I remain inclined to reject the paper. Claims And Evidence: Yes Methods And Evaluation Criteria: Most existing methods address the domain generalization problem in forgery detection as a single domain generalization issue, primarily because generative models are not available for test images. This paper introduces few-shot learning into forgery detection, framing it as a multiple source generalization problem. While the proposed setting is theoretically reasonable, the paper does not adequately address key challenges associated with this new framework: 1. The methodology lacks a clear strategy for obtaining test images from the same domain (i.e., generative models) in real-world applications. 2. The paper does not establish a new benchmark for multiple domain generalization, which should include a comprehensive training paradigm, an evaluation setting, and a fair comparison with state-of-the-art methods. Overall, while the paper presents a new perspective, it requires significant improvements to effectively address these critical issues. Theoretical Claims: The nearest-neighbor method is employed to calculate the similarity between test images and prototypical representations. However, unlike classic few-shot classification tasks that focus on image content, forgery detection emphasizes image authenticity. 
Consequently, a fake image may share similar content with real images, potentially leading to its misclassification as authentic. Experimental Designs Or Analyses: The comparison with state-of-the-art (SOTA) methods appears to be unfair. As previously mentioned, SOTA methods typically focus on single domain generalization, where models are trained on one type of generative model and tested on others. However, in line 290, the paper reports the average performance of six classifiers, all of which are trained on both GAN and DM models. This approach does not align with the standard practices of single domain generalization, thus skewing the results. Supplementary Material: There is no supplementary material. Relation To Broader Scientific Literature: This paper attempts to formulate domain generalization in forgery detection using multiple sources. However, it falls short in establishing a coherent framework and lacks a robust evaluation strategy. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: The attempt to formulate few-shot learning in forgery detection is a promising approach. Weaknesses: 1. The methodology lacks a clear strategy for obtaining test images from the same domain (i.e., generative models) in real-world applications. 2. The paper does not establish a new benchmark for multiple domain generalization, which should include a comprehensive training paradigm, an evaluation setting, and a fair comparison with state-of-the-art methods. Overall, while the paper presents a new perspective, it requires significant improvements to effectively address these critical issues. Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer N8UV, Thank you for your detailed feedback and constructive comments. We have carefully considered each point you raised and would like to address them as follows: Q1: The lack of strategy for obtaining test images from the same domain in real-world applications. While current diffusion models are resource-intensive to train, widely adopted models are often accessible via APIs or open-source releases, serving as representative models. Our approach requires only a small batch of images from such a model to effectively detect generated content—not only from the same model but also across similar domains. Additionally, we have proposed a zero-shot detecting method in Section 3.3 to assess samples outside the training domain, ensuring robust generalization in real-world scenarios. Q2: Lack of a new benchmark for multiple domain generalization. Current widely used large-scale synthetic image datasets like GenImage and ForenSynths [1] have provided valuable resources for fake image detection, containing samples from diverse generative models with clear source annotations. We identify a critical gap in current research that most existing methods focus narrowly on binary classification (real vs. fake), overlooking the substantial domain-specific characteristics across different generative models. Our work serves as the first framework for multi-domain detection, and we have transformed GenImage dataset to a suitable benchmark for this task, laying groundwork for future benchmark creation once dataset diversity matures. Q3: A fake image may share similar content with real images. Our approach is based on the observation that synthetic images generated by different AI models exhibit distinct artifacts, as demonstrated in prior research [2]. 
While existing solutions often rely on training separate binary classifiers for distinct generative models—a process that is computationally expensive and impractical for large-scale deployment—our method takes a more efficient and generalizable approach. By focusing on classifying images based on their source generators rather than relying solely on content-based features, our network is explicitly trained to identify these model-specific artifacts. This design of our model ensures that it prioritizes forensic traces (e.g., noise patterns, texture inconsistencies, or spectral discrepancies) over semantic content, which is a key limitation in CLIP-based and other content-driven detection systems. Q4: This approach does not align with the standard practices of single domain generalization. Thank you for raising this important point. You're absolutely right to point out the divergence from conventional single domain generalization approaches. As detailed in our paper Section 4.2, this difference stems from our unique training paradigm which fundamentally requires diverse class samples during the training phase. We acknowledge that this represents a departure from standard practices, but it's a deliberate design choice that enables our model to learn more transferable features across domains. We anticipate that expanding the class diversity during training would further enhance the model's generalization capability, as it would allow the model to learn even more robust feature representations. However, this strategy may not generalize well to binary-classification tasks, where artifacts across categories can exhibit significant differences and shared characteristics among synthetic images may be absent. This is an important direction we plan to explore in future work. 
We appreciate this thoughtful observation and will carefully address this methodological distinction in our final version to ensure proper contextualization within the field of domain generalization research. Our approach pioneers a novel few-shot learning framework for deepfake detection and systematically explores the potential of multi-class detection in this field, demonstrating the feasibility of few-shot forensic analysis across generative models. While there exist limitations, it provides foundational insights for future research to address evolving synthetic threats. We sincerely appreciate your expertise in guiding methodological refinements, and we hope these contributions could be considered in the final assessment. Thank you for your time and understanding. [1] Sheng-Yu Wang, Oliver Wang, Richard Zhang, Andrew Owens, and Alexei A Efros. 2020. CNN-generated images are surprisingly easy to spot... for now. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 8695–8704 [2] Liu, Fengyuan , et al. "Which Model Generated This Image? A Model-Agnostic Approach for Origin Attribution." European Conference on Computer Vision Springer, Cham, 2025. --- Rebuttal Comment 1.1: Comment: Apologies for the delayed response. I appreciate the authors’ rebuttal, but I still have concerns in the following areas, which is why I am maintaining my original score: Q1: The lack of strategy for obtaining test images from the same domain in real-world applications. The authors emphasise that some current generative models (e.g., diffusion models) can provide a small number of samples via open-source implementations or APIs. However, in realistic scenarios, we may not be able to access any samples from these generative models at all, especially in the case of black-box models. Q2: Lack of a new benchmark for multiple domain generalization. The paper does not provide a clear comparison with existing SOTA methods under the same assumptions. 
Q3: A fake image may share similar content with real images. No experimental evidence is provided to support this claim. Q4: This approach does not align with the standard practices of single-domain generalisation. The authors did not include additional experiments to show whether their method remains effective under the same settings as SOTA methods (e.g., training on a single domain).
Summary: This paper introduces an approach to detecting AI-generated images by reframing the task as a few-shot classification problem. The Few-Shot Detector (FSD) uses a prototypical network to learn a specialized metric space, distinguishing between unseen fake images and real ones using only very few samples. By treating images from different generative models as separate classes and real images as a single class, FSD improves generalization to unseen models without extensive retraining. Claims And Evidence: Yes, the claims made in the submission are supported by evidence. The authors demonstrate through experiments on the GenImage dataset that FSD outperforms existing methods, achieving an average accuracy improvement of 7.4%. They provide analyses, including zero-shot and few-shot scenarios, cross-generator classification, and ablation studies on the number of support samples. Visualizations using t-SNE further support their claims. Methods And Evaluation Criteria: Yes. Reframing AI-generated image detection as a few-shot classification task is appropriate, and using prototypical networks to learn a metric space is suitable for effectively distinguishing unseen fake images with limited samples. The use of the GenImage dataset as a benchmark and metrics like accuracy and average precision are standard and appropriate. Theoretical Claims: The submission does not present theoretical claims that require formal proofs. The methodology builds upon established techniques in few-shot learning, specifically prototypical networks. The contributions are primarily empirical, focusing on the application of these methods to AI-generated image detection. Experimental Designs Or Analyses: Yes. The experiments are valid, covering both few-shot and zero-shot scenarios. The authors conduct cross-generator classification and ablation studies on the impact of the number of support samples. Supplementary Material: There was no supplementary material. 
Relation To Broader Scientific Literature: The authors reconceptualized AI-generated image detection as a few-shot classification problem, building upon existing work in few-shot learning and AI-generated image detection. By employing prototypical networks, they extend methodologies used in few-shot learning to the domain of synthetic image detection. This approach addresses limitations in prior work that treated fake images as a single class and struggled to generalize to unseen generative models. Essential References Not Discussed: Based on my knowledge, the paper discusses the essential related works necessary to understand the context and contributions. It cites prior studies on AI-generated image detection, diffusion models, GANs, and few-shot learning. Other Strengths And Weaknesses: Strength: - reframing the detection task as a few-shot classification problem, which is a good contribution to the field. - The use of prototypical networks is interesting in learning a specialized metric space that generalizes to unseen classes with limited samples. - The method achieves state-of-the-art performance, with substantial improvements over existing approaches. - Addressing the challenge of limited data availability from unseen generators makes the approach relevant for real-world applications. Weakness: - There is a notable performance gap between few-shot and zero-shot scenarios, indicating limitations when no samples from the unseen class are available. - The approach assumes that images from different generators form distinct clusters in feature space, which may not hold true if generators produce very similar outputs. - Some sections could benefit from clearer explanations, particularly the zero-shot classification approach and the differences between training and testing strategies. 
Other Comments Or Suggestions: N/A Questions For Authors: - How does FSD perform when faced with generative models that are significantly different from those in the training set, such as models using different architectures, styles, or data domains? - In zero-shot scenarios, is there a way to improve performance without relying on samples from unseen classes? For instance, could domain adaptation or meta-learning techniques be integrated to enhance generalization? - How sensitive is the method to the choice of support samples? Are there strategies for selecting the most representative or informative samples to enhance performance with minimal data? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer aJMC, Many thanks for your careful reading and valuable comments. We hope our reply further reduces potential misunderstandings. Q1. How does FSD perform when faced with generative models that are significantly different from those in the training set? Our comprehensive evaluation on the GenImage dataset has examined FSD's cross-generator generalization capability across diverse diffusion models with varying architectures, covering both pixel space and latent space image generation. The results provide valuable insights into the detector's robustness against structurally different generative models. While these initial findings demonstrate promising generalization performance, we acknowledge the need for further validation with real-world generative models and plan to expand our testing in future work to strengthen these conclusions. Q2. Is there a way to improve performance without relying on samples from unseen classes in zero-shot scenarios? We acknowledge the performance gap in zero-shot settings and agree that integrating domain adaptation or meta-learning could further improve generalization. In our work, we have observed that generative models with similar structures tend to cluster closer together in the learned metric space. This suggests that as we accumulate sufficient samples from representative model types, detecting images from fine-tuned or LoRA-trained models should become easier and more straightforward. We anticipate this collection of representative samples will help minimize the current performance gap in zero-shot detection. Q3: How sensitive is the method to the choice of support samples? Our observations indicate that with fewer than 5 samples, the results tend to fluctuate within a certain range, showing noticeable variability. However, as the number of samples increases to around 100 (which is typically not difficult to collect in practice), the performance becomes significantly more stable and reliable. 
This suggests that the method achieves robust and confident outcomes when provided with an acceptable number of support samples. In conclusion, our approach pioneers a few-shot-learning way for deepfake detection. We believe this direction holds potential as a future trend in the domain. We will revise the manuscript to include more details about the zero-shot classification and training strategy. Thank you for your consideration.
Multi-Modal Object Re-identification via Sparse Mixture-of-Experts
Accept (poster)
Summary: This work introduces MFRNet, which mitigates insufficient interaction and feature imbalance via two modules. The Feature Fusion Module (FFM) uses a mixture-of-generators for pixel-level alignment, while the Feature Representation Module (FRM) employs a mixture-of-experts for balanced modality-shared and modality-specific features. Claims And Evidence: This work presents two key claims. The first claim highlights the limitations of recent approaches in modal interaction. To address this, the authors advocate for pixel-level interactions instead of feature-level interactions. Given the characteristics of multimodal images, this proposition is technically sound, as further validated by subsequent experiments. The second claim concerns the trade-off between feature representation quality and computational efficiency in existing methods. To tackle this issue, the authors propose leveraging a Mixture of Experts (MoE) to enable dynamic parameter allocation and streamline the model structure. This paradigm is widely recognized as effective, as MoE not only adapts to data variations but also enhances model efficiency by reducing redundancy. Therefore, the claims in this paper are well-founded. Methods And Evaluation Criteria: The method presented in this paper primarily consists of two key modules: the Feature Fusion Module (FFM) and the Feature Representation Module (FRM). The FFM replaces traditional feature-level interactions with fine-grained interactions, which is particularly suitable given the unique characteristics of multispectral images. Meanwhile, the FRM leverages a Mixture of Experts (MoE) framework to balance modality-specific feature representation and structural redundancy, which is also technically sound. Theoretical Claims: This paper does not include extensive theoretical proofs; however, there are some ambiguities in the use of mathematical notations. 
For instance, in Equations (5) and (9), $W_1$ and $W_9$ originate from different modules and should be explicitly distinguished to avoid confusion. Experimental Designs Or Analyses: The experiments in this paper utilize Cumulative Matching Characteristics (CMC) and mean Average Precision (mAP) as evaluation metrics. The experimental setup includes comparative experiments (Table 1, 2, 3), ablation studies (Table 4), hyperparameter configurations (Table 5, 6, 7, 8, 9), and visualization (Figure 3). While the experiments are comprehensive, the analysis lacks depth. For instance, Table 3 presents results for M(TIR) without providing a corresponding discussion, in which the performance of MFRNet is lower than TOPReID. Supplementary Material: No supplementary material is provided. Relation To Broader Scientific Literature: The MFRNet in this paper is clearly presented and of good quality. The proposed two modules are built upon the Mixture of Experts (MoE) framework, which is not a novel technique. While MoE has been adopted in several topics existing in the computer vision community, its application to multimodal person re-identification, particularly from both interaction and representation perspectives, still holds meaning. Essential References Not Discussed: This paper could be better by considering published literature in the cross-modality ReID community such as [1, 2]. [1] Zhang, Yukang, and Hanzi Wang. "Diverse embedding expansion network and low-light cross-modality benchmark for visible-infrared person re-identification." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023. [2] Park, Hyunjong, et al. "Learning by aligning: Visible-infrared person re-identification using cross-modal correspondences." Proceedings of the IEEE/CVF international conference on computer vision. 2021. Other Strengths And Weaknesses: Most of the strengths and limitations have been discussed in the previous sections. 
Overall, the paper is well-structured, with a clear research motivation and methodology. It effectively addresses the stated problem and achieves strong performance. However, a key limitation is that the implementation of the Mixture of Experts (MoE) strategy is simple and lacks significant technical innovation. Other Comments Or Suggestions: - The notation for Mean and Std in Equations (12) and (13) is inconsistent in font style. Please ensure uniform formatting for clarity and consistency. - In Equations (5) and (9), $W_1$ and $W_9$ originate from different modules and should be explicitly distinguished to avoid confusion. Questions For Authors: Besides the above mentions, I have one more question: While the authors' claim is convincing, the results in Table 7 do not provide sufficient evidence that the Feature Fusion Module (FFM) requires more experts. It would be beneficial to include a comparison with the case where the number of experts in FFM is set to 1 to better validate this claim. Code Of Conduct: Affirmed. Overall Recommendation: 4
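Both reviews reference the sparse mixture-of-experts design without spelling it out. As background, a generic top-k MoE layer routes each input to its k highest-scoring experts and mixes their outputs with renormalized gate probabilities, so only k of the expert networks run per input. A minimal sketch with random stand-in weights; this illustrates generic MoE routing, not MFRNet's actual FFM/FRM implementation:

```python
import numpy as np

def top_k_moe(x, expert_ws, gate_w, k=2):
    """Sparse MoE: pick the k experts with the highest gating logits,
    softmax-renormalize over just those, and mix their outputs."""
    logits = x @ gate_w                    # one gating logit per expert
    top = np.argsort(-logits)[:k]          # indices of the k best experts
    p = np.exp(logits[top] - logits[top].max())
    p /= p.sum()                           # convex weights over selected experts
    return sum(pi * (expert_ws[i] @ x) for pi, i in zip(p, top))

# Stand-in experts: each is just a random linear map here.
rng = np.random.default_rng(1)
d, n_experts = 8, 4
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
gate = rng.normal(size=(d, n_experts))
y = top_k_moe(rng.normal(size=d), experts, gate, k=2)
```

Because only k of the n_experts weight matrices are touched per input, capacity can grow with the number of experts without a proportional increase in per-input compute, which is the efficiency argument the first review makes.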
Rebuttal 1: Rebuttal: > **Q1:** While the experiments are comprehensive, the analysis lacks depth. For instance, Table 3 presents results for M(TIR) without providing a corresponding discussion in which the performance of MFRNet is lower than TOPReID. > **A1:** Thank you for your suggestion. TOP-ReID benefits from specific training tailored for handling modality-missing scenarios, whereas our method does not include dedicated training for such cases. During training, we utilize all modalities. For testing, when a modality is missing, we supplement it using Equation 6 from the paper. For example, if RGB is missing, its features are generated based on NIR and TIR as follows: $I_R=w_R^{N}(I_N) \times g(I_N) + w_R^{T}(I_T) \times g(I_T)$. Here, $g(I_N)$ represents the features for RGB generated from NIR, $g(I_T)$ denotes the features for RGB generated from TIR, and $w_R^{N}(I_N)$ and $w_R^{T}(I_T)$ are the respective weights. To ensure model generalizability, we do not employ dedicated training methods for handling missing modalities. Notably, performing specific missing-modality training could yield even better results. As shown in Table A, training our model to address the missing TIR led to a 7.7% improvement in mAP and a 7.0% increase in R-1 compared to TOP-ReID.

Table A: Experimental results for missing TIR modality.

| | mAP | R-1 | R-5 | R-10 |
| --- | --- | --- | --- | --- |
| TOP-ReID | 51.9 | 54.5 | - | - |
| Ours | 51.6 | 49.5 | 67.7 | 76.7 |
| Ours (RGB+NIR) | **59.6** | **61.5** | **72.7** | **80.6** |

> **Q2:** This paper could be better by considering published literature in the cross-modality ReID community such as [1, 2]. [1] Zhang, Yukang, and Hanzi Wang. "Diverse embedding expansion network and low-light cross-modality benchmark for visible-infrared person re-identification." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023. [2] Park, Hyunjong, et al. 
"Learning by aligning: Visible-infrared person re-identification using cross-modal correspondences." Proceedings of the IEEE/CVF international conference on computer vision. 2021. > **A2:** Thanks for your suggestion. We have included these papers in Sec. 2 to make our work more comprehensive. If you have any other recommended works that are related to this paper, you can provide them in the discussion. We are willing to add them in. > **Q3:** Besides the above mentions, I have one more question: While the authors' claim is convincing, the results in Table 7 do not provide sufficient evidence that the Feature Fusion Module (FFM) requires more experts. It would be beneficial to include a comparison with the case where the number of experts in FFM is set to 1 to better validate this claim. > **A3:** Thank you for your suggestion. We conducted additional experiments as shown in Table B. The results indicate that the optimal performance, with an average accuracy of 87.5%, is achieved when the number of experts in the FFM is set to 3. Table B: Performance analysis under different expert numbers for FFM. | Number | mAP | R-1 | R-5 | R-10 | Average | | --- | --- | --- | --- | --- | --- | | 1 | 78.7 | 81.0 | 90.8 | 93.5 | 86.0 | | 2 | 74.9 | 78.6 | 86.8 | 90.8 | 82.8 | | **3** | **80.7** | **83.5** | **91.9** | **94.1** | **87.5** | | 6 | 76.9 | 80.4 | 88.2 | 90.6 | 84.0 | | 9 | 79.2 | 82.3 | 90.7 | 93.5 | 86.4 | > **Q4:** In Equations (5) and (9), $W_1$ and $W_9$ originate from different modules and should be explicitly distinguished to avoid confusion. > **A4:** Thank you for your suggestion. We have added markers in the top-right corner to distinguish between different components. > **Q5:** The notation for Mean and Std in Equations (12) and (13) is inconsistent in font style. Please ensure uniform formatting for clarity and consistency. > **A5:** Thank you for your suggestion. We have corrected them following your comments.
Summary: This paper introduces MFRNet for multi-modal object re-identification. This approach addresses two core issues: insufficient pixel-level feature interaction and difficulty balancing between shared and specific modality features. The proposed Feature Fusion Module (FFM) fosters fine-grained cross-modal interaction, while the Feature Representation Module (FRM) efficiently merges modality-shared and modality-specific representations in a unified network. Experiments on three public datasets show that MFRNet significantly improves both accuracy and efficiency, with minimal computational overhead. Claims And Evidence: This work‘s claims are intuitive, convincing, and supported by its experiments. Methods And Evaluation Criteria: It looks well. These two limitations addressed in this topic are reasonable, the introduction of FFM and FRM is suitable for this task. Theoretical Claims: I have reviewed the methodology and corresponding equations in this work, and they appear to be both reliable and reasonable. However, Figure 2 may be misleading. Specifically, in Section 3.1, the description of the FFM states that transformations for all three modalities occur simultaneously. Yet, Figure 2 only illustrates interactions involving the RGB image, which may not fully represent the fusion process. Experimental Designs Or Analyses: The experimental design of the paper is comprehensive. In addition to the ablation studies on FFM and FRM, the paper also provides a detailed discussion of relevant hyperparameters, making the overall experiments convincing. Supplementary Material: The authors did not submit any supplementary materials. Relation To Broader Scientific Literature: Adapting to multi-modal data using MoE is an appropriate approach and is widely employed in current Multimodal Large Language Models. This work integrates this technique into the task of multi-modal object re-identification. 
Therefore, the impact of this paper is moderate, but it provides some valuable insights into methods in this field.

Essential References Not Discussed: The necessary references have been discussed, but the related work section is somewhat lengthy. It would be better to condense it.

Other Strengths And Weaknesses: Strengths:
- The proposed framework is well-structured, with clearly defined motivations and a logically designed modular architecture. The integration of the Feature Fusion Module (FFM) and Feature Representation Module (FRM) effectively enhances feature interaction and representation learning, addressing key challenges in multi-modal object re-identification.
- The performance of this work is quite strong, achieving a significant improvement over the previous state-of-the-art.

Weaknesses:
- The captions lack some details. What do 'RE' and 'GE' mean in Figure 2?
- Eq. 12 and Eq. 13 are not aligned.

Other Comments Or Suggestions: I have no more comments; please check the aforementioned parts.

Questions For Authors:
1. Given that prior works such as TOP-ReID [1], EDITOR [2], and RSCNet [3] all use an ImageNet-based ViT as the backbone, while MFRNet adopts a CLIP-based ViT, I am curious about MFRNet's performance when using an ImageNet-based ViT.
2. Some details of the testing phase are not entirely clear. In Table 3, I would like to know how the FFM works in the absence of certain modalities.
3. Some captions lack some details. What do 'RE' and 'GE' mean in Figure 2?

[1] "Top-reid: Multi-spectral object re-identification with token permutation." AAAI 2024.
[2] "Magic tokens: Select diverse tokens for multi-modal object re-identification." CVPR 2024.
[3] "Representation Selective Coupling via Token Sparsification for Multi-Spectral Object Re-Identification." IEEE Transactions on CSVT (2024).

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: > **Q1:** Specifically, in Section 3.1, the description of the FFM states that transformations for all three modalities occur simultaneously. Yet, Figure 2 only illustrates interactions involving the RGB image, which may not fully represent the fusion process. > **A1:** Thank you for your suggestion. This transformation process applies equally to all three modalities. While the figure specifically illustrates the NIR+RGB->TIR interaction, we have conducted the same interaction for NIR and RGB. We will revise the caption of Figure 2 in light of your feedback to ensure the process is described clearly. > **Q2:** The necessary references have been discussed, but the related work section is somewhat lengthy. It would be better to condense it. > **A2:** Thank you for your suggestion. We will accordingly reduce the content of the related work section. > **Q3:** Given that prior works such as TOP-ReID [1], EDITOR [2], and RSCNet [3] both use an ImageNet-based ViT as the backbone, while MFRNet adopts a CLIP-based ViT, I am curious about MFRNet’s performance when using an ImageNet-based ViT. > **A3:** Following your suggestions, we conducted experiments on our server to evaluate TOP-ReID and our method using ImageNet-based ViT and CLIP-based ViT. As shown in Table A, the results demonstrate that both methods perform better with CLIP-based ViT compared to ImageNet-based ViT. Specifically, with ImageNet-based ViT, our method outperforms TOP-ReID by approximately 2.2% in mAP and 3.7% in R1. With CLIP-based ViT, our method surpasses TOP-ReID by about 9.7% in mAP and 9.6% in R1. These findings underscore the superior performance of our method over TOP-ReID under identical settings and backbones. Table A: Performance comparison on ImageNet-based and CLIP-based ViT. 
| Methods | mAP | R-1 | R-5 | R-10 |
| --- | --- | --- | --- | --- |
| ViT: TOP-ReID | 67.4 | 69.1 | 80.9 | 86.0 |
| ViT: Ours | **69.6** | **72.8** | **84.9** | **90.6** |
| CLIP-ViT: TOP-ReID | 71.0 | 73.9 | 81.6 | 86.6 |
| CLIP-ViT: Ours | **80.7** | **83.5** | **91.9** | **94.1** |

> **Q4:** Some details of the testing phase are not entirely clear. In Table 3, I would like to know how the FFM works in the absence of certain modalities.

> **A4:** Following your suggestion, we will refine this section in future versions to enhance clarity. For the missing modality, as illustrated in Equation 6 (lines 190-199), the other two modalities are utilized for prediction. For example, if RGB is missing, its features are generated based on NIR and TIR as follows: $I_R=w_R^{N}(I_N) \times g(I_N) + w_R^{T}(I_T) \times g(I_T)$. Here, $g(I_N)$ denotes the features for RGB generated from NIR, $g(I_T)$ represents the features for RGB generated from TIR, and $w_R^{N}(I_N)$ and $w_R^{T}(I_T)$ are the respective weights.

> **Q5:** Some captions lack some details. What do 'RE' and 'GE' mean in Figure 2?

> **A5:** We sincerely apologize for any inconvenience caused. 'GE' refers to the generation experts in FFM, while 'RE' represents the representation experts in FRM. Based on your suggestion, we will revise Figure 2 accordingly.

> **Q6:** Eq. 12 and Eq. 13 are not aligned.

> **A6:** Thank you for your suggestion. We have corrected this.
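As an illustration for readers, the weighted missing-modality generation rule quoted in A4 can be sketched in a few lines of NumPy. Everything below — the feature dimension, the linear "generation experts" `g`, and the sigmoid weights `w` — is a toy assumption for exposition, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def g(feat, proj):
    """Toy generation expert: map source-modality features to the target modality."""
    return feat @ proj

def w(feat, vec):
    """Toy scalar confidence weight for a source modality, squashed into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-feat @ vec))

d = 8
I_N = rng.normal(size=d)   # observed NIR features
I_T = rng.normal(size=d)   # observed TIR features
P_N, P_T = rng.normal(size=(d, d)), rng.normal(size=(d, d))  # generator params
v_N, v_T = rng.normal(size=d), rng.normal(size=d)            # weighting params

# I_R = w_R^N(I_N) * g(I_N) + w_R^T(I_T) * g(I_T): the missing RGB features
# are a weighted sum of what each observed modality predicts for RGB.
I_R = w(I_N, v_N) * g(I_N, P_N) + w(I_T, v_T) * g(I_T, P_T)
```

The weighted-sum form means the reconstruction degrades gracefully: a source modality the router trusts less simply contributes less to the generated features.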
Summary: This paper proposes a novel Multi-modal Fusion and Representation Network (MFRNet) approach for multi-modal object re-identification, inspired by the sparse Mixture-of-Experts (MoE) paradigm. The proposed framework enhances performance by introducing a Feature Fusion Module (FFM) for fine-grained pixel-level cross-modal interaction and a Feature Representation Module (FRM) to extract modality-shared and modality-specific features dynamically. Experimental evaluations on three benchmark datasets, RGBNT201, RGBNT100, and MSVR310, demonstrate that the proposed method achieves superior performance compared to existing state-of-the-art methods.

Claims And Evidence: Yes, the claims in the submission are generally supported by clear and convincing evidence. The authors present two well-structured modules with empirical results that support the proposed modules.

Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense for addressing the problem. The selected benchmark datasets are appropriate, and the evaluation approach is well-aligned with the research objectives.

Theoretical Claims: Yes, I have checked the correctness of the proofs for the theoretical claims and found no issues. The formulations and proofs are well-founded and align with standard methodologies in the field.

Experimental Designs Or Analyses: Yes, I reviewed the experimental design and related analyses and found it to be sound and sufficient.

Supplementary Material: There is no supplementary material.

Relation To Broader Scientific Literature: The key contributions of this paper relate closely to the broader scientific literature through the integration of the sparse Mixture-of-Experts (MoE) paradigm. The paper builds upon the concept of sparse MoE, which has been widely explored in deep learning to improve model expressiveness and parameter efficiency. Specifically, prior works such as GMoE [1] have leveraged MoE frameworks in the domain generalization problem.
The current paper extends this idea explicitly to multi-modal object re-identification, demonstrating its effectiveness in this context. Overall, the proposed contributions build upon and advance existing findings in sparse MoE modeling, achieving fine-grained pixel-level interactions and multi-modal representation balancing, extending these concepts specifically into the multi-modal object re-identification task and demonstrating clear empirical advantages over prior state-of-the-art approaches.

Reference: [1] Li, Bo, et al. "Sparse Mixture-of-Experts are Domain Generalizable Learners." In The Eleventh International Conference on Learning Representations, 2023.

Essential References Not Discussed: Yes, related works that are essential to understanding the key contributions of the paper are adequately cited and discussed.

Other Strengths And Weaknesses: Strengths:
1. The paper proposes an effective sparse Mixture-of-Experts (MoE) architecture for multi-modal ReID, achieving notable performance improvements over existing state-of-the-art models.
2. The paper conducts comprehensive ablation studies, clearly demonstrating the contribution and effectiveness of each module.
3. MFRNet achieves significant performance gains on three public datasets, validating the effectiveness of the proposed approach.
4. MFRNet exhibits a certain level of robustness in scenarios with missing modalities, even without explicit training for such cases.

Weaknesses:
1. While MFRNet presents a novel perspective within multi-modal ReID by introducing a sparse Mixture-of-Experts (MoE) framework, its overall novelty remains moderate. It reads more like an effective combination of existing techniques.

Other Comments Or Suggestions:
1. Page 2, Line 127-128: 'we propose MRFNet' should be 'we propose MFRNet'.
2. Page 4, Line 217-218: 'we aim to modality-specific' should be 'we aim to capture modality-specific'?
3. Figure 2 is slightly cluttered, and the author seems to have forgotten to describe GE and RE.

Questions For Authors:
1. The visualization section appears to focus solely on the Feature Representation Module, while the visualization of the Feature Fusion Module is also essential for a comprehensive analysis.
2. Can the authors provide a more detailed comparison of computational cost with different methods?
3. The author mentioned that 'TOP-ReID has a specific training phase for the modality missing condition' while MFRNet does not. Will MFRNet be better than TOP-ReID in the 'M (TIR)' protocol with the same specific training phase?
4. Why can the Feature Fusion Module reduce computational complexity?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal:

> **Q1:** While MFRNet presents a novel perspective within multi-modal ReID by introducing a sparse Mixture-of-Experts (MoE) framework, its overall novelty remains moderate. It reads more like an effective combination of existing techniques.

> **A1:** Our novelty primarily lies in two aspects: generation and representation. For FFM generation, unlike previous methods that rely on coarse-grained feature interactions, we creatively harness the pixel-level consistency of multimodal data (pixel-wise alignment) to achieve fine-grained multimodal interaction. For FRM representation, our approach avoids the independent encoding of each modality. Instead, we introduce multimodal feature representation experts based on the MoE architecture, achieving a balance between modality-specific and shared features while maintaining minimal computational cost.

> **Q2:** Figure 2 is slightly cluttered, and the author seems to have forgotten to describe GE and RE.

> **A2:** We sincerely apologize for any inconvenience caused. GE refers to the generation experts in FFM, while RE represents the representation experts in FRM. Based on your suggestion, we will revise Figure 2 accordingly.

> **Q3:** The visualization section appears to focus solely on the Feature Representation Module, while the visualization of the Feature Fusion Module is also essential for a comprehensive analysis.

> **A3:** Thank you for your suggestion. In response, we have also visualized FFM. However, due to the constraints of the rebuttal format, we are unable to include the image directly. Nonetheless, it will be incorporated in future versions of the paper.

> **Q4:** Can the authors provide a more detailed comparison of computational cost with different methods?

> **A4:** Following your suggestion, we compared the computational costs with three recent works.
As shown in Table A, our method not only demonstrates significant improvements in metrics such as mAP and R-1 but also achieves the lowest Params and FLOPs, with values of 57.1M and 22.1G, respectively.

Table A: Comparison of computational cost with recent methods. The best results are shown in bold.

| | mAP | R-1 | R-5 | R-10 | Params (M) | FLOPs (G) |
| --- | --- | --- | --- | --- | --- | --- |
| HTT | 71.1 | 73.4 | 83.1 | 87.3 | 85.6 | 33.1 |
| EDITOR | 66.5 | 68.3 | 81.1 | 88.2 | 117.5 | 38.6 |
| TOP-ReID | 72.3 | 76.6 | 84.7 | 89.4 | 278.2 | 34.5 |
| Ours | **80.7** | **83.5** | **91.9** | **94.1** | **57.1** | **22.1** |

> **Q5:** The author mentioned that 'TOP-ReID has a specific training phase for the modality missing condition' while MFRNet does not. Will MFRNet be better than TOP-ReID in the 'M (TIR)' protocol with the same specific training phase?

> **A5:** Following your suggestion, we conducted additional experiments targeting the 'M (TIR)' protocol. As shown in Table B, training our model to address the missing TIR modality resulted in a 7.7% improvement in mAP and a 7.0% increase in R-1 compared to TOP-ReID.

Table B: Experimental results for the missing TIR modality.

| M (TIR) | mAP | R-1 | R-5 | R-10 |
| --- | --- | --- | --- | --- |
| TOP-ReID | 51.9 | 54.5 | - | - |
| Ours | 51.6 | 49.5 | 67.7 | 76.7 |
| Ours (RGB+NIR) | **59.6** | **61.5** | **72.7** | **80.6** |

> **Q6:** Why can the FRM reduce computational complexity?

> **A6:** Thank you for your suggestion. As demonstrated in Equation 8, the FRM module leverages RepAdapter to build MoE representations. Compared to a traditional MLP, RepAdapter reduces computational costs by incorporating two convolutional layers. If replaced with the original MLP structure, as indicated in Table C, the parameters and FLOPs would increase by 3.7M (from 53.5 to 57.2) and 1.4G (from 20.7 to 22.1), respectively.

Table C: Comparison of FRM and original MLP.
| | Params (M) | FLOPs (G) |
| --- | --- | --- |
| Original MLP | 57.2 | 22.1 |
| FRM | 53.5 | 20.7 |

> **Q7:** There are several typos in the paper.

> **A7:** Thank you for your suggestion. We will correct the typos in the updated version of the paper.

---

Rebuttal Comment 1.1: Comment: Thank you for your response. My previous concerns have been well addressed. The paper demonstrates strong performance and efficiency, and I'm willing to raise my score to Accept. I hope the authors will include these additional experiments in the final version, as they would be highly valuable.
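The parameter savings claimed for adapter-style bottleneck modules over a plain MLP can be illustrated with a quick back-of-the-envelope count. The dimensions below (768-wide embedding, 4x MLP expansion, rank-64 bottleneck) are illustrative assumptions, not the paper's actual layer sizes:

```python
def mlp_params(d, hidden):
    """Parameter count of a two-layer MLP d -> hidden -> d (weights + biases)."""
    return (d * hidden + hidden) + (hidden * d + d)

def bottleneck_params(d, r):
    """Parameter count of an adapter-style bottleneck d -> r -> d with r << d."""
    return (d * r + r) + (r * d + d)

d = 768                        # a typical ViT embedding width (assumed)
full = mlp_params(d, 4 * d)    # transformer-style 4x expansion: ~4.7M params
slim = bottleneck_params(d, 64)  # rank-64 bottleneck: ~0.1M params
```

Squeezing the hidden width from `4*d` down to a small rank `r` is where adapter-style designs get their order-of-magnitude parameter reduction, which is consistent with the direction of the Params/FLOPs numbers reported above.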
Summary: This work presents the Multi-modal Fusion and Representation Network (MFRNet), aiming to address the limitations in modality interaction and representation of recent works. Two modules, named the Feature Fusion Module (FFM) and the Feature Representation Module (FRM), are proposed to tackle the interaction and representation limitations, respectively. Both modules follow an MoE structure. The FFM employs multiple generator experts to adaptively provide fine-grained interaction information, while the FRM employs diverse representation experts to extract and combine modality-specific and modality-shared features. Experiments are conducted on multi-modal ReID datasets. Both qualitative and quantitative results are reported to show the effectiveness of each component in the method.

Claims And Evidence: This paper proposes two modules to solve two claimed problems. From its motivation and implementation, the modules presented in this paper effectively support its claims. Experiments and visualization results also show the improvements from these two modules.

Methods And Evaluation Criteria: Yes, the problem of multi-modal person re-identification is important and valuable when considering real application scenarios. The proposed method makes sense in this topic due to its performance and efficiency.

Theoretical Claims: This work proposes two modules that are somewhat reasonable in both their claims and design. The FFM uses the MoE structure for image completion, while the FRM uses the MoE structure for representation. Since the MoE structure has already shown its ability in LLMs to adapt to multi-modality data, using such a structure to strengthen multi-modal re-identification is reasonable.

Experimental Designs Or Analyses: I have checked the experiments of this work. Most experiments are appropriate and complete. However, there still exist several concerns:
1. From Table 4, we can observe that the FRM has decreased the 'Params' and 'FLOPs'.
Generally, the MoE structure should keep similar or slightly higher 'Params' and 'FLOPs' compared with the baseline during inference.
2. The discussion of 'Params' and 'FLOPs' only covers this method itself. I understand that it can have similar efficiency to the baseline method. But it would be better if the author could show a comparison with recent methods.
3. Table 7 may not be enough to show that 3 is the optimal selection.
4. FFM doesn't seem to conflict with ViT. It seems it can be inserted into the network, but it is not discussed at all.

Supplementary Material: This paper does not provide any supplementary materials.

Relation To Broader Scientific Literature: This work uses the concept of MoE to solve the limitations of recent works in two ways. Compared to recent methods such as TOP-ReID [1], which uses multiple networks for different modalities and fuses them at the end, this work largely simplifies the network structure while maintaining dynamic modality-specific encoding and achieving fine-grained interaction for each modality. Because this work aims to solve the specific limitations of this topic, I think it is moderately related to the general re-identification task.

[1] Wang, Yuhao, et al. "Top-reid: Multi-spectral object re-identification with token permutation." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 6. 2024.

Essential References Not Discussed: I think essential references are adequately discussed in this paper.

Other Strengths And Weaknesses: Strengths:
1. The idea of using MoE in interaction and representation is somewhat interesting in this topic.
2. This method has achieved strong performance while maintaining good efficiency.

Weaknesses:
1. There are still several typos:
- The caption in Table 6, '…for FRM' -> '…for FRM'
- The captions of Table 3 and Table 4 should be bolded.

Other Comments Or Suggestions: This is a paper with sufficient motivation and an interesting solution.
However, as mentioned before, there are several issues that limit the value of this work. If the two weaknesses can be addressed, I would like to improve my score.

Questions For Authors:
1. From Table 4, we can observe that the FRM has decreased the 'Params' and 'FLOPs'. However, as an MoE structure, even ignoring the router's parameters, the 'Params' and 'FLOPs' should remain the same as the baseline method. How can it even decrease the 'Params' and 'FLOPs'?
2. The discussion of 'Params' and 'FLOPs' only covers this method itself. I understand that it can have similar efficiency to the baseline method. But can you show the comparison with other methods?
3. From Table 7, what is the performance when the number of experts in FFM is less than 3? Table 7 is insufficient to indicate that 3 is optimal.
4. FFM doesn't seem to conflict with ViT. Why does Table 9 only discuss placements before and after the ViT?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal:

## Response to Reviewer PMp6:

> **Q1:** From Table 4, we can observe that the FRM has decreased the 'Params' and 'FLOPs'. Generally, the MoE structure should keep similar or slightly higher 'Params' and 'FLOPs' compared with the baseline during inference.

> **A1:** Thank you for your suggestion. As demonstrated in Equation 8, the FRM module leverages RepAdapter to build MoE representations. Compared to a traditional MLP, RepAdapter reduces computational costs by incorporating two convolutional layers. If replaced with the original MLP structure, as indicated in Table A, the parameters and FLOPs would increase by 3.7M (from 53.5 to 57.2) and 1.4G (from 20.7 to 22.1), respectively.

Table A: Comparison of FRM and original MLP.

| | Params (M) | FLOPs (G) |
| --- | --- | --- |
| Original MLP | 57.2 | 22.1 |
| FRM | 53.5 | 20.7 |

> **Q2:** The discussion of 'Params' and 'FLOPs' only covers this method itself. I understand that it can have similar efficiency to the baseline method. But it would be better if the author could show a comparison with recent methods.

> **A2:** Following your suggestion, we compared the computational costs with three recent works. As shown in Table B, our method not only demonstrates significant improvements in metrics such as mAP and R-1 but also achieves the lowest Params and FLOPs, with values of 57.1M and 22.1G, respectively.

Table B: Comparison of computational cost with recent methods.

| | mAP | R-1 | R-5 | R-10 | Params (M) | FLOPs (G) |
| --- | --- | --- | --- | --- | --- | --- |
| HTT | 71.1 | 73.4 | 83.1 | 87.3 | 85.6 | 33.1 |
| EDITOR | 66.5 | 68.3 | 81.1 | 88.2 | 117.5 | 38.6 |
| TOP-ReID | 72.3 | 76.6 | 84.7 | 89.4 | 278.2 | 34.5 |
| Ours | **80.7** | **83.5** | **91.9** | **94.1** | **57.1** | **22.1** |

> **Q3:** Table 7 may not be enough to show that 3 is the optimal selection.

> **A3:** Thank you for your suggestion. We conducted additional experiments as shown in Table C.
The results indicate that the optimal performance, with an average accuracy of 87.5%, is achieved when the number of experts in the FFM is set to 3.

Table C: Performance analysis under different expert numbers for FFM.

| Number | mAP | R-1 | R-5 | R-10 | Average |
| --- | --- | --- | --- | --- | --- |
| 1 | 78.7 | 81.0 | 90.8 | 93.5 | 86.0 |
| 2 | 74.9 | 78.6 | 86.8 | 90.8 | 82.8 |
| 3 | **80.7** | **83.5** | **91.9** | **94.1** | **87.5** |
| 6 | 76.9 | 80.4 | 88.2 | 90.6 | 84.0 |
| 9 | 79.2 | 82.3 | 90.7 | 93.5 | 86.4 |

> **Q4:** FFM doesn't seem to conflict with ViT. It seems it can be inserted into the network, but it is not discussed at all.

> **A4:** Following your suggestion, we further validated the approach by inserting FFM into the 3rd, 6th, and 9th layers of the network. As shown in Table D, applying FFM before the ViT leverages the pixel-by-pixel alignment of multimodal image data more effectively, resulting in optimal model performance.

Table D: Performance analysis under different locations for FFM.

| Location | mAP | R-1 | R-5 | R-10 | Average |
| --- | --- | --- | --- | --- | --- |
| 0 (Before ViT) | **80.7** | **83.5** | **91.9** | **94.1** | **87.5** |
| 3 | 50.6 | 50.5 | 62.9 | 70.7 | 58.7 |
| 6 | 74.0 | 78.7 | 86.8 | 91.1 | 82.6 |
| 9 | 75.3 | 78.3 | 86.8 | 90.1 | 82.6 |
| 12 (After ViT) | 76.7 | 79.7 | 86.8 | 92.0 | 83.8 |

> **Q5:** There are still several typos in the paper.

> **A5:** Thank you for your suggestion. We will correct these typos in the updated version of the paper.

---

Rebuttal Comment 1.1: Comment: Thank you for your response and the well-written rebuttal. Most of my concerns have been effectively addressed. However, there are still a few points that require further clarification. I will decide my final score based on the answers to these remaining questions:
1. The authors clarified that the reduction in parameters brought by FRM mainly stems from structural changes.
Therefore, I believe it is necessary to also compare how much performance improvement is achieved by these structural modifications. This would help clarify the necessity of the MoE structure within FRM.
2. The performance curve shown in Table C is rather unusual, resembling a W-shape. It remains unclear whether there are additional performance peaks beyond the value of 9.

---

Reply to Comment 1.1.1: Comment:

> **Q1:** The authors clarified that the reduction in parameters brought by FRM mainly stems from structural changes. Therefore, I believe it is necessary to also compare how much performance improvement is achieved by these structural modifications. This would help clarify the necessity of the MoE structure within FRM.

> **A1:** Thank you for your suggestion. To better illustrate the significance of the MoE structure in FRM, we removed the MoE while retaining the other components. As shown in Table E, using only RepAdapter (excluding MoE) improves the average performance by 3.1%. Adding the MoE further boosts performance, resulting in a 3.5% increase compared to its absence, highlighting the MoE's positive impact on model performance.

Table E: Performance analysis of FRM (RepAdapter+MoE).

| Method | mAP | R-1 | Average |
| --- | --- | --- | --- |
| Base | 69.2 | 76.3 | 72.7 |
| +RepAdapter | 74.2 | 77.5 | 75.8 |
| +RepAdapter+MoE (FRM) | **77.8** | **80.9** | **79.3** |

> **Q2:** The performance curve shown in Table C is rather unusual, resembling a W-shape. It remains unclear whether there are additional performance peaks beyond the value of 9.

> **A2:** Following your suggestion, we conducted additional experiments with expert numbers exceeding 9. As illustrated in the revised Table C, model performance decreases when the number of experts surpasses 9. Overall, the optimal performance is achieved when the number of experts is set to 3.

Table C: Performance analysis under different expert numbers for FFM.
| Number | mAP | R-1 | R-5 | R-10 | Average |
| --- | --- | --- | --- | --- | --- |
| 1 | 78.7 | 81.0 | 90.8 | 93.5 | 86.0 |
| 2 | 74.9 | 78.6 | 86.8 | 90.8 | 82.8 |
| 3 | **80.7** | **83.5** | **91.9** | **94.1** | **87.5** |
| 6 | 76.9 | 80.4 | 88.2 | 90.6 | 84.0 |
| 9 | 79.2 | 82.3 | 90.7 | 93.5 | 86.4 |
| 10 | 76.4 | 80.1 | 88.0 | 92.0 | 84.1 |
| 11 | 74.1 | 78.9 | 87.2 | 92.2 | 83.1 |
| 12 | 74.1 | 77.8 | 86.8 | 91.1 | 82.4 |
| 15 | 74.4 | 77.9 | 87.0 | 90.4 | 82.4 |
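Since the whole exchange above revolves around how the number of experts affects a sparse MoE, a generic top-k routed forward pass may help readers follow the discussion. This is a toy sketch with linear experts and a linear router — assumed stand-ins, not the paper's FFM/FRM:

```python
import numpy as np

def topk_moe(x, gate_W, experts, k=3):
    """Generic sparse-MoE forward pass: route x to its top-k experts only.

    `gate_W` (the router) and the linear `experts` are toy stand-ins; a real
    MoE layer uses learned networks for both.
    """
    logits = x @ gate_W                      # one gating logit per expert
    top = np.argsort(logits)[-k:]            # indices of the k best experts
    gates = np.exp(logits[top] - logits[top].max())
    gates /= gates.sum()                     # softmax over the selected experts
    return sum(gv * (x @ experts[i]) for gv, i in zip(gates, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 9
x = rng.normal(size=d)
gate_W = rng.normal(size=(d, n_experts))
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
y = topk_moe(x, gate_W, experts, k=3)        # only 3 of the 9 experts run
```

Because only k experts execute per input, adding experts grows capacity without growing per-input compute proportionally, which is why the expert-count ablations above can sweep from 1 to 15 at modest cost.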
Global-Local Dirichlet Processes for Clustering Grouped Data in the Presence of Group-Specific Idiosyncratic Variables
Accept (poster)
Summary: The article presents a new method for performing Bayesian nonparametric clustering on data sets with both global and local variables, i.e. data sets for which some variables are only observed for a subset of the individuals. The paper presents a novel formulation of a clustering model that allows for global clusters on the variables observed for all individuals, and local clusters discriminating within those observed in specific subsets of individuals. A variational bound is introduced to perform inference through optimisation, and experiments are run on simulated data and a pan-cancer genomics data set. Comparisons are made with the same model specification with inference performed with MCMC, alongside the Hierarchical Dirichlet Process (HDP) for global clustering and finite Gaussian Mixture Models (GMM) for the local clustering.

##### update after rebuttal

I am satisfied with the accessible parts of the authors' response. However, since they linked to external media, which is not permitted in the OpenReview system, I have kept my score the same.

Claims And Evidence: The methods are well-motivated and explained well. The theoretical grounding is reasonably coherent and rigorous. The experimental evaluation is a little light, relying on one simulated data example and one real data set, although each of them is performed thoroughly. The restriction to Gaussian likelihoods is quite strict, but will presumably be diversified in future work.

Methods And Evaluation Criteria: The benchmark data sets are appropriate for the problem but are limited to one simulated study and one real data set. It is possible that finding real data sets with the appropriate structure for this model specification is somewhat challenging.

Theoretical Claims: The theoretical claims and results appear sound but I have not checked them thoroughly.

Experimental Designs Or Analyses: The experimental design seems reasonable.
The use of a single clustering index (adjusted Rand index) is a potential drawback, as there are multiple ways of assessing clustering algorithms. In the simulated data set, we have a "ground truth" to compare against, but this evaluation method will not generalise to real data. There is no real comparison to competing methods for the real data set, which would be of interest. There is also no use of any kind of model fit index (BIC? ICL?) to evaluate the model fit. The assessment of the computational properties of the inference is clear and fairly thorough, e.g. computational time, convergence criteria, etc.

Supplementary Material: I looked over some of the details of the variational bound derivation and optimisation algorithm, and some of the experimental details.

Relation To Broader Scientific Literature: The broader scientific literature is fairly well described, including the historical 1973 Ferguson DP article followed by generalisations to the Hierarchical and Nested DPs. The more recent results concerning inconsistency of the DP for the number of clusters are acknowledged and some effort is made to counteract the potential resulting issues.

Essential References Not Discussed: There are no major references missing from a machine learning perspective. It is likely that the cancer/genomics references could be more thorough, but the ICML audience will not necessarily mind so much.

Other Strengths And Weaknesses: In general, this is an interesting methodological development that is well-motivated by an interesting application challenge. There are some extensions that might be necessary before this becomes generally usable for all problems of this nature, but the article itself is a reasonable contribution.

Other Comments Or Suggestions: The quality of the written English is generally good.

Questions For Authors: The presence of variables for some subgroups and not others is going to be informative to the global clustering. Does the method aim to make use of this? E.g.
PSA is only measured when prostate cancer is also suspected: Would the presence of a PSA measurement represented as an indicator variable itself be informative to global clustering? Are you allowing the algorithm to make use of this information? What is the relevance of existing methods for missing data? We are running into a sort of Missing at Random/Missing Completely at Random/Missing Not at Random kind of situation where the presence or absence of data is potentially going to be informative in itself. Where does this framework fall within the MAR/MCAR/MNAR set of assumptions? What would happen if you treated the local variables as just “missing” in a Bayesian framework for all the other (non-local) individuals? What is the influence of the proportion of local vs global variables? To consider two extremes: 1. Two data sets bound together concerning two different illnesses, with different data collected for each of them, and the only shared variables being (e.g.) age and sex. What kind of structure will the method uncover here? 2. Two heavily overlapping data sets with many shared variables and only one or two “extra” local variables per population. What happens here? Would a whole separate clustering be “discovered” for the limited extra variables? Could it be done in a way that builds on the dominant narrative from the shared variables? How does the inference work for the discrete variables exactly? Can you use the gradients for the continuous variables and some other solution for the discrete ones? Is everything represented as a smooth continuous target function somehow? How robustly can you assess the convergence of a discrete optimisation algorithm? Does putting a gamma prior on the concentration really solve the concentration/specification problem for DPs? Is the issue of inconsistency of the number of clusters for DPs generally overstated if a relatively simple solution exists? 
The use of only Gaussian likelihoods is quite a hard constraint and likely to be false in reality much of the time. What is the relationship between the mixture component likelihood specification and the underlying clustering structure? Is it more important to specify the right data likelihood or the right underlying clustering structure? What kind of problems will you run into if you have the correct underlying clustering design but a misspecified likelihood? How easy would it be to adapt this method to non-Gaussian likelihoods? Why have you not done this? Can the PCA components in the real data example be safely assumed to be Gaussian? What if there are variables in-between local and global? i.e. there are global variables shared between all individuals, local variables unique to specific subgroups, but also mid-level variables shared between some subgroups but not all of them? Would you have to introduce an extra component in equation 1? And extra indicator variables in addition to t and k? Could this become a more generic framework that could incorporate varying degrees of data granularity in that respect? Why did you only include a single real data set? Is it a challenge to find data sets with this structure in practice? Is it hard to perform the relevant data formatting and similar to prepare potential target data sets for this analysis pipeline? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for all the comments and questions, responses to some of which are in the textbox below. - **Question related to allowing our algorithm to make use of information on presence/absence of some variable** **Response**: The presence of a PSA measurement indicates prostate cancer. But since whether the patient has prostate cancer or not is given information for our method, which determines the groups, we do not think it provides extra information for global clustering other than the PSA values themselves. However, as a possible future research direction, our model could be extended to achieve simultaneous clustering of groups and within-group observations. In such a context, the presence or absence of variables for some subgroups and not others can possibly be informative in identifying similar cancer subtypes. For example, the presence of PSA measurement represented as an indicator variable itself would be informative in identifying prostate cancer as a possible secondary cancer associated with the primary cancer of the given patient(s). - **Question related to relevance of existing methods for missing data in our setup** **Response**: Thank you for your question. While our framework does not follow conventional missing data mechanisms, the absence of certain clinical variables can be viewed as Missing Not at Random (MNAR) when missingness is tied to cancer type. For example, the absence of PSA measurements likely indicates the patient does not have prostate cancer, where PSA is a key diagnostic marker. More broadly, our approach accommodates scenarios where different groups have distinct relevant variables, which may appear missing not due to randomness but because they hold no relevance for those groups. - **Question related to treating the local variables as just *missing*** **Response**: As highlighted previously, our framework does not follow conventional missing data mechanisms.
Treating local variables as "missing" and imputing them may not yield meaningful grouped clustering. For example, in our pan-cancer analysis, CEA is a key biomarker for colorectal cancer but irrelevant for esophageal and stomach cancers. Imputing CEA before clustering could obscure biologically meaningful subgroupings. Thus, explicitly modeling local variables is essential for interpretable and valid clustering results. - **Question on influence of the proportion of local vs global variables** **Response**: This is an excellent question. Please see our detailed explanation in the link to the PDF at the end of the response. - **Question on inference for discrete variables** **Response**: Our GLocal DP model and VI-based algorithm can accommodate both continuous and discrete variables in the global and local components. As long as the parameters of discrete variables are continuous, our VI algorithm can optimize the ELBO using coordinate ascent. However, if the parameters were discrete, alternative discrete optimization methods or continuous relaxations might be needed, which could be explored in future research. - **Question on putting a gamma prior** **Response**: We acknowledge that inconsistency in the number of clusters is an important and active area of research. Our intention is not to understate this challenge; rather, we adopted a non-informative gamma prior as one possible modeling choice to help mitigate the severity of clustering inconsistency. - **Question related to use of only Gaussian likelihoods in our model** **Response**: Our VI-based algorithm is designed for Gaussian distributions but is inherently flexible, supporting arbitrary likelihood and prior choices, including DP (Ferguson, 1973) and HDP (Teh et al., 2006). The Gibbs sampler in the Appendix is similarly extendable. Moreover, our VI approach generalizes to exponential family distributions (Blei & Jordan, 2006). 
The rationale for assuming a Gaussian distribution for the PCA components in the real data example is shown in the PDF. - **Question related to variables in-between local and global** **Response**: This is an excellent question. Please see our detailed answer in the link to the PDF at the end of the response. - **Question on including a single real data set** **Response**: Our team primarily works with genomics data, which is why we used TCGA as an example. Similar multi-omics datasets, such as ICGC, share the same structure, but we did not use them due to significant preprocessing requirements. Beyond genomics, grouped data with both global and local variables arise in fields like business, social sciences, and other applied domains, for which our model would provide a convenient framework for analysis, as discussed in the detailed response PDF. We would like to **refer** the reviewer to the detailed point-by-point responses to the questions, with additional simulation results, figures, and comments found in the PDF [here](https://drive.google.com/file/d/1Q-8oAM7nyEtGOURmVQF8E-ZTzwQl8IOi/view?usp=sharing).
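The rebuttal above mentions that the VI algorithm can optimize the ELBO by coordinate ascent as long as the variational parameters are continuous. As purely illustrative context (a toy two-component Gaussian mixture with known unit variance, not the authors' GLocal DP updates; all names are hypothetical), a minimal coordinate-ascent sketch might look like:

```python
import math
import random

# Toy coordinate-ascent variational inference (CAVI) for a K=2 Gaussian
# mixture with known unit observation variance and a N(0, 10) prior on each
# component mean.  The loop alternates between updating q(z) (responsibilities)
# and q(mu_k) (Gaussian factors); each step cannot decrease the ELBO.
random.seed(0)
data = [random.gauss(-3.0, 1.0) for _ in range(100)] + \
       [random.gauss(3.0, 1.0) for _ in range(100)]

K, prior_var = 2, 10.0
m = [-1.0, 1.0]   # variational means of the component means
v = [1.0, 1.0]    # variational variances of the component means

for _ in range(50):
    # q(z) update: responsibilities proportional to exp(E_q[log N(x | mu_k, 1)])
    resp = []
    for x in data:
        logits = [-0.5 * ((x - m[k]) ** 2 + v[k]) for k in range(K)]
        mx = max(logits)
        w = [math.exp(l - mx) for l in logits]
        s = sum(w)
        resp.append([wk / s for wk in w])
    # q(mu_k) update: conjugate Gaussian update given expected assignments
    for k in range(K):
        nk = sum(r[k] for r in resp)
        sk = sum(r[k] * x for r, x in zip(resp, data))
        v[k] = 1.0 / (1.0 / prior_var + nk)
        m[k] = v[k] * sk

print(sorted(round(mk, 1) for mk in m))  # variational means near the true -3 and 3
```

If a component parameter were discrete, the conjugate q(mu_k) step above would have no closed form, which is exactly the situation the authors flag as requiring discrete optimization methods or continuous relaxations.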
Summary: This paper considers the problem of clustering grouped data for which the observations may include group-specific variables in addition to the variables that are shared across groups. To allow for these group-specific variables to aid in the clustering, the paper proposes a novel Bayesian nonparametric approach, termed global-local (GLocal) Dirichlet process, that models the "global-local" structure of the observations across groups. The paper characterizes the GLocal Dirichlet process using the stick-breaking representation and the representation as a limit of a finite mixture model. The paper theoretically quantifies the approximation errors of the truncated prior, the corresponding finite mixture model, and the associated posterior distribution. The paper develops a fast variational Bayes algorithm for scalable posterior inference, which is illustrated with extensive simulations and a TCGA pan-gastrointestinal cancer dataset. Claims And Evidence: The results are sound. Methods And Evaluation Criteria: The algorithm and experiments are sound. Theoretical Claims: The algorithm is sound. Experimental Designs Or Analyses: The experiments are sound. Supplementary Material: The additional experiments and proofs are sound. Relation To Broader Scientific Literature: To allow for these group-specific variables to aid in the clustering, the paper proposes a novel Bayesian nonparametric approach, termed global-local (GLocal) Dirichlet process, that models the "global-local" structure of the observations across groups. Essential References Not Discussed: The paper seems to have discussed essential references.
Other Strengths And Weaknesses: Strength: the paper uses group-specific variables to aid in the clustering Weakness: the derivations seem straightforward, and the paper's impacts do not seem significant for ICML Other Comments Or Suggestions: n/a Questions For Authors: The derivations seem straightforward, and the paper's impacts do not seem significant for ICML. What were the main technical innovations that overcame previously unresolved technical challenges? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the comment. - **The derivations seem straightforward, and the paper's impacts do not seem significant for ICML. What were the main technical innovations that overcame previously unresolved technical challenges?** **Response**: In this paper, our contributions are as follows. 1. We introduce a general Bayesian nonparametric framework, GLocal DP, for clustering grouped data by incorporating group-specific local variables. Most of the existing Bayesian or non-Bayesian clustering methods do not accommodate this idiosyncratic data structure. 2. We formally define the GLocal DP mixture model and establish its finite mixture model representation. We further demonstrate that, in the limit, the finite mixture model converges to the GLocal DP mixture model. In practical applications, this implies that a suitably truncated finite mixture model closely approximates the infinite GLocal DP mixture model. However, to provide a rigorous quantification of this approximation, we derive explicit truncation error bounds in Propositions 1 and 2 of the main manuscript. Importantly, while our derivations may appear straightforward, they constitute a nontrivial extension of the results in Ishwaran and James, 2001 due to the presence of both local and global components in our model. 3. In the Appendix of our main manuscript, we present a blocked Gibbs sampler for posterior inference in the GLocal DP, complementing the variational inference (VI)-based algorithm described in the main text. Since both inference approaches rely on a finite truncation of the infinite GLocal DP, our theoretical results provide practical guidelines for selecting appropriate truncation levels. 
Notably, existing Bayesian nonparametric models such as the hierarchical Dirichlet process (HDP, Teh et al., 2006) employ variational inference methods that assume a truncated variational family (Teh et al., 2007; Wang et al., 2011), where the truncation level is typically chosen to exceed the expected number of clusters. While this heuristic is widely used, no established theoretical framework exists for determining truncation levels with explicit bounds on the approximation error. Since the GLocal DP reduces to the HDP in the absence of local variables across all groups, our truncation error bounds extend naturally to the HDP, thereby offering valuable theoretical guidance for selecting truncation levels in VI-based inference algorithms for the HDP. 4. We develop a VI-based algorithm for posterior inference in the GLocal DP and evaluate its performance through simulation studies. Our results demonstrate that the VI-based algorithm achieves clustering accuracy comparable to that of the MCMC-based Gibbs sampler while significantly improving computational efficiency in terms of both memory usage and runtime. Moreover, our VI-based inference algorithm exhibits high scalability with respect to both the number of groups and the number of observations per group. 5. We conduct additional simulations (found in the link to the PDF at the end), illustrating that incorporating local variables not only enables our model to accommodate complex data structures but also enhances the clustering performance of shared variables compared to methods that rely solely on shared variables. 6. Additionally, our proposed method extends beyond the application to cancer genomics to a general grouped clustering framework, wherein the available data consists of important group-specific variables apart from the shared variables. 
In summary, our novel Bayesian nonparametric model addresses a critical gap in the clustering of grouped data by explicitly incorporating group-specific idiosyncratic variables—an aspect not accounted for in existing literature. Furthermore, we establish theoretical truncation error bounds for the truncated GLocal DP prior and mixture model, offering a principled approach to selecting truncation levels in both MCMC- and VI-based inference algorithms. These theoretical results also extend to inference algorithms for the HDP, further enhancing their practical applicability. Finally, we develop a highly scalable VI-based inference algorithm for GLocal DP, which can be readily adapted for HDP-based models as well. We would like to **refer** the reviewer to the additional simulations supporting the novelty and usefulness of our method along with other contributions of our work, provided in the PDF, [here](https://drive.google.com/file/d/18DGSpV1sx2qKS-JjD7deY1O6OFHOsyWV/view?usp=sharing).
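The truncation error bounds discussed in this rebuttal control the probability mass that a finite truncation of a stick-breaking construction discards. As an illustrative sketch of that quantity for a plain DP (generic stick-breaking with $v_i \sim \mathrm{Beta}(1,\alpha)$, not the GLocal DP construction itself):

```python
import random

# Stick-breaking for a DP(alpha): weights w_i = v_i * prod_{j<i} (1 - v_j)
# with v_i ~ Beta(1, alpha).  The mass left over after the first K sticks is
# prod_{i<=K} (1 - v_i), whose expectation is (alpha / (alpha + 1))**K --
# the geometric decay that truncation bounds of this type exploit.
random.seed(1)

def residual_mass(alpha, K):
    mass_left = 1.0
    for _ in range(K):
        v = random.betavariate(1.0, alpha)
        mass_left *= 1.0 - v
    return mass_left

alpha, K, n_rep = 1.0, 20, 2000
avg = sum(residual_mass(alpha, K) for _ in range(n_rep)) / n_rep
expected = (alpha / (alpha + 1.0)) ** K
print(avg, expected)  # Monte Carlo average vs. the exact expectation
```

Choosing a truncation level K so that this residual mass falls below a tolerance is the kind of practical guidance that explicit truncation bounds formalize.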
Summary: This paper addresses the problem of clustering grouped data where observations may include both group-specific and shared variables across groups. The authors propose a novel Bayesian non-parametric approach called the global-local (GLocal) Dirichlet process. Unlike HDP, where clusters are derived from a common base measure, GLocal DP distributes clusters in each G_j across a global shared subspace and a group-specific local subspace. Each cluster in G_j is characterized by both the "shared" distribution, which is modeled in the global model, and the local distribution, which is modeled only within that group. Claims And Evidence: The claims in the paper are supported by the experiments. Methods And Evaluation Criteria: While not familiar with the real datasets myself, both they and the synthetic datasets look reasonable and a good way to evaluate the model. The one thing I am missing in the benchmarks is a comparison with a DPMM fit individually to each group, in the same way that was done with the GMM, which seems to me like an important comparison that should be added, and an HDPMM where you do not differentiate between the global and local parts. In addition, I would propose to use NMI and Purity metrics as well, which in my opinion have high value when comparing clusterings with a varying number of clusters, and they complement the ARI. Theoretical Claims: I have checked both proposed GLocal DP constructions, and the truncation approximation error bounds, and they all seem sound. Experimental Designs Or Analyses: The experiments look good; see above (`Methods And Evaluation Criteria`) for a missing but required comparison. Supplementary Material: I have checked the section about the blocked Gibbs sampler. In addition, I was looking for a notation table / dictionary, which seems like a valuable addition for a paper such as this. Furthermore, I have read the additional section regarding the simulations.
Relation To Broader Scientific Literature: The main contribution of the paper is proposing and addressing the global-local structure for multiple groups in an HDP-like setting, and while many of the discussed and relevant papers do not address this exact setting, there is a prior paper which proposed that setting and addressed it in a similar fashion; see below. Essential References Not Discussed: I would like to point the authors to `Scalable and Flexible Clustering of Grouped Data via Parallel and Distributed Sampling in Versatile Hierarchical Dirichlet Processes, Dinari and Freifeld, UAI 2020`. Dinari and Freifeld discuss this exact setting in their paper and propose a slightly different model to address it. They propose the Versatile HDP model, which has a similar global-local structure; the difference is mostly in the way the group-wise weights are drawn. The larger difference is in the inference method: whereas the current paper proposes a VI method, Dinari and Freifeld proposed a split-merge based MCMC inference method, with a large focus on per-group-size scalability. Other Strengths And Weaknesses: The paper is well written, and the different model constructions are very helpful for understanding the flow. The supplementary adds significant value with the Gibbs sampler. The main weakness of the paper relates to the prior work by Dinari and Freifeld, which handles the same setting and proposes a slightly different model, reducing the novelty of the paper's main contributions (setting and model). A lesser concern is scalability: the population sizes evaluated in the paper are small, while HDP is generally used to tackle datasets with thousands of documents, each with thousands of words, and this is lacking here. Finally, see my comments on the evaluation methods for additional needed evaluations.
Given that this paper does not compare against other works, some additional work is needed to justify it, both in terms of results and practicality (running time, setup overhead, etc.). Other Comments Or Suggestions: I would suggest adding a notation table in the supplementary. Questions For Authors: The following questions are issues that may greatly affect my score - 1) Comparison, differences, and gap between this work and Dinari and Freifeld, UAI 2020. 2) Scalability: how does this method address datasets with a large number of documents, or, on the other hand, very large groups (e.g., images)? 3) What kind of distributions are supported? Is it the same as for DP and HDP, which are very flexible, or are there limitations? Both in general, and any differing limitations between the global and local parts. Can the VI support this? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for all the comments. - **Comparison, differences and gap between this work and Dinari and Freifeld, UAI 2020.** **Response**: Dinari and Freifeld, 2020 discusses a setting in which data arises from multiple pre-defined groups and consists of variables that are shared across the groups (denoted by $x_{ji}$) as well as group-specific variables (denoted by $y_{ji}$), which is the same as in our case. They propose the versatile hierarchical Dirichlet processes (vHDP) to model this grouped data. However, there are some key differences in the modeling perspective of this method as compared to our proposed GLocal DP. Importantly, the two models are not special cases of each other. In particular, their model is defined hierarchically by first modeling the shared global variables and then modeling the local variables conditional on the global clusters. In contrast, our proposed GLocal DP mixture model is defined jointly for the global and local variables. Using our formulation, for two observations $i$ and $i' \neq i$ in the same group $j$, if $t_{ji}=t_{ji'}$, then automatically, they share the same global cluster, i.e., $k_{jt_{ji}}=k_{jt_{ji'}}$. However, this is not the case for the vHDPMM, where the local clusters for individuals $i$ and $i'\neq i$ are defined conditional on their global clusters. This, we feel, is restrictive (and possibly counter-intuitive, as in the cosegmentation example of Dinari and Freifeld, 2020), and our model provides a natural method of estimating clusters at both the global and local levels.

### **Simulation Results**

#### **Data from vHDPMM**

To illustrate, we simulated data from the vHDPMM with three groups, where global variables followed a six-component trivariate Gaussian mixture and local variables followed a five-component bivariate Gaussian mixture.
We then assessed clustering accuracy (measured by ARI between true and estimated clusters) for both global and local clusters using vHDPMM and our proposed GLocal DP.

| | **Global-level clusters** | | | **Local-level clusters** | | |
|--|--|--|--|--|--|--|
| **Groups** | 1 | 2 | 3 | 1 | 2 | 3 |
| **vHDPMM** | 1.00 | 0.98 | 1.00 | 0.59 | 0.79 | 0.44 |
| **GLocal DP** | 1.00 | 1.00 | 1.00 | 0.59 | 0.79 | 0.34 |

The table shows that the clustering accuracy of vHDPMM was good, as expected, since the data was drawn from the vHDPMM. Furthermore, the GLocal DP clustering accuracy is comparable to that of the vHDPMM.

#### **Data from GLocal DP**

Next, we simulated data from the GLocal DP model with three groups, keeping all other settings unchanged. We then evaluated the clustering accuracy of both global and local clusters using the vHDPMM and our GLocal DP.

| | **Global-level clusters** | | | **Local-level clusters** | | |
|--|--|--|--|--|--|--|
| **Groups** | 1 | 2 | 3 | 1 | 2 | 3 |
| **vHDPMM** | -0.04 | -0.02 | -0.01 | 0.00 | -0.01 | 0.00 |
| **GLocal DP** | 1.00 | 0.98 | 1.00 | 1.00 | 0.80 | 0.99 |

The table shows that the clustering accuracy of GLocal DP was good, as expected. However, the vHDPMM performed very poorly, both in terms of global- and local-level clustering. In summary, the GLocal DP offers greater flexibility, effectively adapting to different data-generating mechanisms while accurately capturing both global and local cluster structures. - **Question on scalability** **Response**: We conducted simulations to assess the scalability of our VI-based algorithm for the GLocal DP. Specifically, we considered two scenarios: 1. Varying the number of groups ($J$) while keeping the sample size per group ($n_j$) fixed. 2. Varying $n_j$ while keeping $J$ fixed. In the first scenario, we set $J = 5, 10, 100$ with $n_j = 100$, and in the second, we set $n_j = 100, 200, 500$ with $J = 10$. All other settings matched those in the main manuscript.
We see that computation time scales approximately linearly in both cases (see Figure 5 in the attached link to the PDF at the end). These results highlight the efficiency of our VI-based approach even for large datasets, with a large number of groups and/or large sample sizes in each group. - **Question on the kind of distributions supported** **Response**: Our methodology is highly flexible and can accommodate a wide range of distributions for both the likelihood and prior, including those supported by the DP (Ferguson, 1973) and the HDP (Teh et al., 2006). Furthermore, our VI-based approach can be readily generalized to settings where the data distribution belongs to the exponential family, as outlined in Blei and Jordan, 2006. Additionally, our model can integrate both continuous and/or discrete variables in both the global and local components. We would like to **refer** the reviewer to the detailed point-by-point responses to all questions, simulations, figures, and our responses to additional comments/suggestions provided in the PDF, [here](https://drive.google.com/file/d/159nPJz-V6EN1euHOhPXRqgG6MZ-uBKl7/view?usp=sharing). --- Rebuttal Comment 1.1: Comment: I thank the authors for their response, and I urge them to use the remaining time to fill in the full response in the open-review system, and not via an external link, such that it can be addressed properly. Re-reading Dinari and Freifeld, I do not think your claim here is true: for observations $i, i' \neq i$, if they share the same table, they do share the global cluster, which is the sum of all tables belonging to that cluster (for the global part, that is). Is there a chance you might have mixed up the models? While I can see the difference in the model itself, I am not convinced that it justifies a publication in ICML. The scalability experiments are not convincing enough, as the numbers are still much lower than you would expect in such a scenario.
Again, I urge the authors to upload their full response to the system such that I can address it properly.

--- Follow-up comment (as I cannot add another which will be visible to the authors)

Thanks for the clarifications and the additional inputs. I now have a better understanding of the differences between the two models and the different scenarios each model addresses, and I agree with the authors that the different models address different settings (although with a lot of similarities). Also, both models can probably handle the different settings with limited success. In addition to the previous comments, I would encourage the authors to add a graphical model depicting the model, which would greatly contribute to the understanding, quite possibly in the supplementary, as well as a short discussion on the differences between your work and the aforementioned paper. I am now leaning more toward accepting the paper; however, I am still not convinced that ICML is the correct venue for this work. I believe AISTATS or UAI are more fitting venues, and the experiments section should include the additional evaluations with real data as well.

--- Reply to Comment 1.1.1: Comment: We thank the reviewer for the additional comments. Below, we provide our full response.

**Benchmark comparing with a DPMM fit individually to each group**

As suggested by the reviewer, the table below reports the mean (SD) of ARI over 50 runs for local-level clustering. GLocal DP consistently outperforms DPMM and GMM fit separately to each group across scenarios.

| Method | Pop. 1 - Low | Moderate | High | Pop. 2 - Low | Moderate | High | Pop. 3 - Low | Moderate | High |
|--|--|--|--|--|--|--|--|--|--|
| GLocal DP | 0.84 (0.2) | 0.93 (0.12) | 0.96 (0.12) | 0.91 (0.11) | 0.95 (0.07) | 1 (0.01) | 0.96 (0.09) | 0.99 (0.03) | 1 (0.01) |
| DPM | 0.76 (0.27) | 0.89 (0.19) | 0.93 (0.17) | 0.67 (0.21) | 0.8 (0.16) | 0.88 (0.16) | 0.68 (0.26) | 0.78 (0.24) | 0.92 (0.2) |
| GMM | 0.58 (0.33) | 0.71 (0.27) | 0.67 (0.27) | 0.54 (0.23) | 0.56 (0.24) | 0.56 (0.19) | 0.61 (0.27) | 0.6 (0.22) | 0.68 (0.28) |

**Comment on NMI and Purity metrics**

As per the reviewer's suggestion, the table below shows the mean (SD) of NMI for the global-level clustering, which shows that GLocal DP is superior to HDP.

| Method | Pop. 1 - Low | Moderate | High | Pop. 2 - Low | Moderate | High | Pop. 3 - Low | Moderate | High |
|--|--|--|--|--|--|--|--|--|--|
| GLocal DP | 0.74 (0.27) | 0.88 (0.22) | 0.9 (0.22) | 0.78 (0.19) | 0.87 (0.19) | 0.95 (0.1) | 0.88 (0.16) | 0.92 (0.18) | 0.95 (0.11) |
| HDP | 0.31 (0.24) | 0.35 (0.24) | 0.32 (0.28) | 0.26 (0.24) | 0.31 (0.21) | 0.3 (0.23) | 0.3 (0.21) | 0.35 (0.24) | 0.34 (0.25) |

Even at the local level, GLocal DP has superior NMI compared to DPMM on each group, as shown below.

| Method | Pop. 1 - Low | Moderate | High | Pop. 2 - Low | Moderate | High | Pop. 3 - Low | Moderate | High |
|--|--|--|--|--|--|--|--|--|--|
| GLocal DP | 0.82 (0.2) | 0.9 (0.13) | 0.95 (0.1) | 0.9 (0.1) | 0.95 (0.06) | 0.99 (0.02) | 0.95 (0.09) | 0.98 (0.04) | 1 (0.01) |
| DPM | 0.74 (0.27) | 0.87 (0.19) | 0.93 (0.16) | 0.67 (0.19) | 0.8 (0.15) | 0.88 (0.15) | 0.66 (0.24) | 0.78 (0.23) | 0.92 (0.2) |

As suggested by the reviewer, we evaluated the Purity metric and found that GLocal DP outperforms HDP at the global level, and both GMM and DPMM at the local level. Overall, GLocal DP consistently outperforms the other methods across all metrics (ARI, NMI, and Purity) at both global and local levels.

**Comment regarding the scalability**

We acknowledge that the sample sizes are still much lower than what one would expect. However, we wanted to highlight that our algorithm's runtime is nearly linear in the sample sizes and/or number of groups.

**Comment on Dinari and Freifeld's method**

*Dinari and Freifeld*'s model for the global features is defined as follows: $z_{ji}\sim Cat(\pi_j)$ and $p(x_j|\theta, z_j) = \prod_{i=1}^{n_j} f(x_{ji}; \theta_{z_{ji}})$. They define global cluster $k$ as $c_k = [(x_{ji}) : z_{ji} = k, j=1,..,J, i = 1,..,n_j]$. Furthermore, they define for each $j = 1, \dots, J$ and each $k=1, \dots, K$, $s_j^k = [(y_{ji}) \ \forall i:z_{ji}=k]$ as the collection of local features whose global features are in global cluster $k$. Consequently, each $s_j^k$ is modeled with an infinite mixture model as follows: $z_{ji}^l \sim Cat(\pi_j^k)\ \forall i \ \text{s.t. } z_{ji} = k$ and $p(s_j^k|\theta_j^k, z_j^l) = \prod_{i:z_{ji}=k} f_j(y_{ji}; \theta^k_{z_{ji}^l})$. In summary, their model is defined hierarchically by first modeling the shared global variables and then modeling the local variables conditional on the global clusters.
Hence, we feel that for $i'\neq i$, $z_{ji}^l = z_{ji'}^l$ does not ensure that $z_{ji} =z_{ji'}$ under the conditional modeling framework. In other words, for any group $j$, if $i \in s_j^k$ and $i'\in s_j^{k'}$, where $k \neq k'$, then the two observations $i$ and $i'$ cannot have the same global cluster, even if they share the same local feature. This is further highlighted through simulations reported before, showing that GLocal DP performs comparably to vHDPMM when data are generated from the vHDPMM. However, when data come from GLocal DP, vHDPMM fails to recover meaningful clusters, highlighting the greater flexibility and robustness of our model under model misspecification. **Comment on notation table** We will add a glossary of abbreviations and important definitions used throughout the paper to the supplementary.
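The clustering-accuracy numbers exchanged in this thread are Adjusted Rand Index values, which can be computed from the contingency table of two labelings. For reference, a minimal pure-Python sketch (in practice one would use a library routine such as scikit-learn's `adjusted_rand_score`):

```python
from collections import Counter
from math import comb

# Adjusted Rand Index from the contingency table of two labelings.
# Assumes non-degenerate labelings (the denominator is nonzero when at
# least one labeling uses more than one cluster on n >= 2 points).
def adjusted_rand_index(true_labels, pred_labels):
    n = len(true_labels)
    contingency = Counter(zip(true_labels, pred_labels))
    a = Counter(true_labels)   # row sums of the contingency table
    b = Counter(pred_labels)   # column sums of the contingency table
    index = sum(comb(c, 2) for c in contingency.values())
    sum_a = sum(comb(c, 2) for c in a.values())
    sum_b = sum(comb(c, 2) for c in b.values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (index - expected) / (max_index - expected)

# A perfect labeling up to cluster renaming scores 1.
print(adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0
```

ARI is invariant to relabeling of clusters and is adjusted for chance, which is why random labelings (like the near-zero vHDPMM scores in the second table) can even be slightly negative.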
RULEBREAKERS: Challenging LLMs at the Crossroads between Formal Logic and Human-like Reasoning
Accept (poster)
Summary: The paper RULEBREAKERS: Challenging Large Language Models at the Crossroads between Formal Logic and Human-like Reasoning introduces RULEBREAKERS, a dataset designed to evaluate large language models' (LLMs) ability to distinguish between logical rule-based conclusions and conclusions that align with human reasoning, which incorporates commonsense and factual knowledge. The study defines "rulebreakers" as scenarios where conclusions derived using formal logic contradict human expectations. Evaluating seven state-of-the-art LLMs, including GPT-4o, the paper finds that most models perform poorly on recognizing rulebreakers, often over-applying formal logic in a rigid manner. The authors identify two possible reasons for this failure: (1) models' poor utilization of world knowledge, and (2) suboptimal attention allocation in reasoning. Their findings highlight a crucial limitation of LLMs and provide a counterpoint to recent works that integrate formal logic to improve LLM reasoning, warning against potential divergences from human-like reasoning. Claims And Evidence: The claims in RULEBREAKERS: Challenging Large Language Models at the Crossroads between Formal Logic and Human-like Reasoning are largely supported by empirical evidence. Methods And Evaluation Criteria: Yes, the methodology is well-designed for assessing how LLMs handle a crucial failure mode: rigidly applying logical rules without considering common sense or world knowledge. The dataset ensures control over semantic content while allowing systematic testing across models. The evaluation metrics provide both direct (accuracy) and indirect (confidence, attention) insights into model reasoning. Theoretical Claims: There is no theoretical claim. Experimental Designs Or Analyses: The experimental design and analyses in RULEBREAKERS appear to be thoughtfully constructed, with several mechanisms to ensure validity. Supplementary Material: I have read the supplementary material. 
Relation To Broader Scientific Literature: The RULEBREAKERS paper is well-situated within the broader scientific literature at the intersection of formal logic, human-like reasoning, and LLM evaluation. It contributes to multiple strands of prior research, including cognitive science, natural language inference, logic-based AI methods, and LLM reasoning evaluation. Essential References Not Discussed: https://arxiv.org/pdf/2307.02477: Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks https://arxiv.org/pdf/2207.07051: Language models show human-like content effects on reasoning tasks Other Strengths And Weaknesses: The main finding of this paper seems to have already been well-studied by previous works: https://arxiv.org/pdf/2307.02477: Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks https://arxiv.org/pdf/2207.07051: Language models show human-like content effects on reasoning tasks Other Comments Or Suggestions: The naming of RULEBREAKERS vs. non-rulebreakers is clear, but a brief mention of alternative terminology (e.g., "rigid logic contradictions") could help connect the paper to related work in logic and commonsense reasoning. Questions For Authors: Are the authors aware of these two well-established works that study very similar topics and have come up with very similar conclusions? https://arxiv.org/pdf/2307.02477: Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks https://arxiv.org/pdf/2207.07051: Language models show human-like content effects on reasoning tasks Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We thank Reviewer 8kkH for commending that our "**methodology is well-designed**", that our claims are "**largely supported by empirical evidence**", and that our experimental design and analyses are "**thoughtfully constructed, with several mechanisms to ensure validity**". We further appreciate and agree with their assessment that our paper is "**well-situated within the broader scientific literature**" and "**contributes to multiple strands of prior research**". **Weakness/Question: "finding well-studied by previous works" [1, 2]** The only weakness raised by the Reviewer may stem from a misunderstanding, as the objective, methodology, and findings of these two works are fundamentally different from ours. In light of the clarifications to follow and the Reviewer's positive comments above, we respectfully ask the Reviewer to consider whether an adjustment to their initial score may be appropriate. We kindly refer to **Appendix A (lines 829-833), where we differentiate our work from Lampinen et al. (2024) [1]**, and the comparison table below. We did not initially include Wu et al. (2024) [2] because their methodology (applying a counterfactual ontology to logical templates) and findings have largely been established in **Saparov and He (2023) [3]**, which we also already distinguished in **Appendix A (lines 846-852)**. We are grateful for the suggestion and will add more discussion in the final version to make this clear.

| | **Example instance in dataset** | **Dataset uses a counterfactual ontology?** | **Conclusion is valid according to logic?** | **Premises are not true in the real world?** | **Conclusion is not true in the real world?** | **Assuming the premises are true, does the conclusion contradict any of the premises?** | **Relevant finding** |
|---|---|---|---|---|---|---|---|
| Lampinen et al. (2024) [1] | Premises: All Swedish cities are French cities. All French cities are in Poland. Conclusion: All Swedish cities are in Poland. | Yes to some extent. Examples that violate real-world knowledge were manually written. | Yes | Yes. Sweden, France and Poland are three separate countries, with different cities. | Yes | No | When performing **logical reasoning**, LLMs tend to be biased in judging conclusions that are true in the real world as logically valid, and conclusions that are false in the real world as invalid, even where the underlying logical structure is the same in both cases. |
| Wu et al. (2024) [2] | Premises: Swedish cities are French cities. French cities are American cities. Conclusion: Swedish cities are American cities. | Yes | Yes | Yes. As above, Sweden, France and America are three separate countries, with different cities. | Yes | No | In a **logical reasoning** task, the more the premises deviate from real-world knowledge, the worse LLMs tend to perform in correctly applying logical rules to derive a valid conclusion. |
| Our work | Premises: Anne is either in Stockholm or somewhere in Sweden. Anne is not in Sweden. Conclusion: Anne is in Stockholm. | No | Yes | Undetermined | Undetermined | Yes. In the example rulebreaker, the conclusion contradicts the second premise because it is factually impossible for Anne to be in Stockholm if she is not in Sweden, given that Stockholm is located in Sweden in the real world. | When **reasoning with natural language _in general_**, LLMs tend to accept conclusions that can be derived by rigidly applying logical rules, even when these conclusions contradict the premises in fact. |

As shown in the table above, our work differs significantly from [1] and [2] in that (a) we do not use a counterfactual ontology; (b) we do not introduce premises or conclusions that are untrue in the real world; and (c) our findings are not concerned with models' performance in purely logical reasoning tasks, but with their ability to reason in general with natural language.
As the same two papers are mentioned in the Reviewer's question, we avoid repeating our response here for clarity.

**Suggestion: alternative terminology**

We thank the Reviewer for suggesting alternative terminology to connect our paper to other related work. We will incorporate this by describing rulebreaker cases as “factual contradictions arising from over-rigid reasoning with formal logic” in the final version. Additionally, we will make clear the connection and implications of our findings for LLM applications in knowledge-intensive tasks as we discuss in our S1 response to Reviewer twqU.

[1] Lampinen et al. https://doi.org/10.1093/pnasnexus/pgae233
[2] Wu et al. https://aclanthology.org/2024.naacl-long.102
[3] Saparov and He. https://openreview.net/pdf?id=qFVVBzXxR2V
Summary: The authors propose a new dataset for single-step reasoning, which consists of pairs of premises and a conclusion, answered in a binary way. The conclusion is always valid if one follows only the logical reasoning. However, the pairs are divided into "rulebreakers", where the reasoning contradicts common knowledge, and "non-rulebreakers", where the reasoning is consistent with such knowledge. The dataset consists of 12,800 such pairs, generated based on templates that the authors define in their paper.

The authors then proceed to evaluate six open-source large language models as well as GPT-4o. They evaluate their accuracy on getting the pairs completely correct as well as their performance on the individual sets. They also conduct experiments with two different types of logical reasoning and, for the open-source models, the models' confidence in their outputs.

The authors conduct an analysis in two areas. Firstly, whether the performance of the models depends to a certain degree on their familiarity with the respective area of knowledge required for a given question, where they find the key insight that a model might be excellent at knowledge retrieval and/or logical reasoning, but cannot recognize that a conclusion is inconsistent with factual knowledge. They also investigate whether models pay attention to the right information in the premises; here the insights are not conclusive, but in general the models that pay more attention to the factual information in the second premise also produce better results overall.

## update after rebuttal

After a cursory reading of the two papers that reviewer 8kkH suggested, I agree that they reduce the novelty of the presented approach and therefore also reduce my score to 3. However, the authors have written very clearly how they construct their dataset, which is the main focus of their work.
They also use a templated dataset, which makes it harder for language models to just learn the answers and also makes the dataset more extensible. Their evaluation shows the benefits of their dataset. The other papers in comparison rely on small datasets and/or manually written datapoints. I therefore think their paper deserves a fair chance and disagree with the negative view of reviewer 8kkH. In my opinion, the main focus of the article is the dataset, which the authors describe in detail; their description is well-written and easy to follow. So even if the authors cannot fully explain the trends seen in their analysis, as reviewer QkHy remarks, the evaluation shows the necessity for such a dataset.

Claims And Evidence: Mostly yes. If I understand Figure 6 correctly, it is based on the token probability of the first token in the country name. However, the results include GPT-4o, a closed-source model, where the authors previously claimed that they cannot access such information. The authors claim that most humans would recognize these "rulebreakers"; however, there is no evaluation data provided in the paper itself.

Methods And Evaluation Criteria: Yes, the authors created their own dataset and evaluate various aspects of that dataset for seven different models.

Theoretical Claims: There are no theoretical claims. The authors define the metrics for their evaluation with formulas, which look reasonable.

Experimental Designs Or Analyses: The evaluation methodology seems simple enough and reasonable. Prompts and samples are documented well enough. The conducted analyses seem convincing, albeit sometimes inconclusive.

Supplementary Material: No, but I read the appendix.

Relation To Broader Scientific Literature: The authors propose a new dataset for testing whether language models can conduct reasoning while also recognizing when the reasoning conflicts with factual knowledge.
While the idea for the dataset is simple, that idea is presented clearly and the description is easy to follow, which should make the use of the dataset fairly straightforward. The authors intend to contribute to a growing body of literature that cautions against the increased use of reasoning by language models.

Essential References Not Discussed: No, not to my knowledge.

Other Strengths And Weaknesses:

strengths:
* paper was clearly written, well presented and easy to follow
* well executed and broad evaluation

weaknesses:
* idea is rather simple and only based on a few templates, which raises the question of how well the findings generalize
* only single-step reasoning, but even so models show only poor or mediocre performance

Other Comments Or Suggestions:

typos:
* line 395: extra white space between end of the sentence and the footnote number
* line 832: "all As and Cs" - typo: "and" -> "are"

Questions For Authors:
* How can GPT-4o be in Figure 6, if the calculation relies on the token probability?
* Did you consider providing a baseline based on humans solving these questions?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer fmAM for their helpful feedback. We appreciate their positive comments that our evaluation is “**well executed and broad**”, our analyses are “**convincing**” and that our paper is “**clearly written, well presented and easy to follow**”.

**Q1: “How can GPT-4o be in Fig 6, if calculation relies on token probability?”**

We can only access probabilities of the top 20 most likely tokens predicted by GPT-4o. The Reviewer is correct that Fig 6 is based on probability of the first token in the country name. By design, our experiment’s control for factual knowledge (**lines 257-278**) **guarantees** that the first token in the country name is **the most likely token** predicted by the model at that timestep. We can therefore access its predicted probability. By contrast, our method for model confidence (**lines 251-267**) requires access to probability of tokens which may not be in the top 20 predictions.

**Suggestion 1 (S1): “claim that humans recognize 'rulebreakers', but no evaluation data provided”**/**Q2: “Did you consider providing a baseline based on humans solving these questions?”**

Our dataset is based on rulebreakers used in prior cognitive science studies (see table below), which were all manually **crafted by expert psychologists and tested on human participants under controlled settings**. These studies already establish a baseline, finding that people generally handle these rulebreakers as we expected. Our key contribution is in systematically scaling these examples by identifying recurring patterns, creating templates and generating controlled variations that preserve the structure of these initial examples. Also, **the ability to recognize rulebreakers is contingent on reasoners having the relevant knowledge (e.g. that “Stockholm is in Sweden”)**. Since LLMs have often been observed to possess broader factual knowledge, directly comparing their performance on RULEBREAKERS against humans would not be valid.
As such, we consider that an additional human baseline would add limited value. We are happy to conduct a small-scale human annotation but, given limited time to hire annotators, will include it in the final version.

| **Study** | **Examples from dataset** | **Our corresponding rulebreaker template(s)** | **No. of participants in study** | **Relevant finding** |
|---|---|---|---|---|
| Quelhas et al. (2010) [1] | “If Manuel plays a game, then he doesn’t play football” | MT (type, instance) | 28 | Participants generally avoid conclusions that factually contradict the premises, even where they can be derived by applying a logical rule (modus tollens). Instead, they conclude that “nothing follows” from the premises. |
| Quelhas and Johnson-Laird (2017) [2] | “Andre is in Lisbon or he is in Portugal”; “Luis is eating chicken or he is eating meat” | DS (country, city), DS (type, instance) | 80 | Same as above, with respect to the logical rule of disjunctive syllogism. |
| Johnson-Laird and Byrne (2002) [3] | “If Bill is in Brazil then he is not in Rio de Janeiro”; “If Ann is in the Hotel LaBlanc then she is not in the Champagne Suite” | MT (country, city) | 41 | When participants are familiar with the entities mentioned in the premises, they are more likely to recognize and avoid factually contradicting conclusions, as compared to when they are not. |

**Weakness 1 (W1): “idea is rather simple and only based on a few templates”**

Our approach is intentionally grounded in existing cognitive science studies: we ensure the **four** templates (from which we generated **25,600 instances**) are directly based on examples carefully crafted and validated in prior work with human participants. Expanding the set of templates is a promising direction for future work but, as the Reviewer rightly implies, such new templates will also need to be based on examples that are validated empirically with human participants.
We hope our novel methodology and findings will inspire efforts in this direction from both NLP and cognitive science communities.

**W2: “only single step reasoning”**

As the Reviewer rightly points out, models exhibit poor to mediocre performance even in single-step settings. We believe that a targeted and rigorous study of single-step inferences is critical in exposing blind spots that lead to failures in multi-step settings. While we specifically focus on inherent patterns of a model’s reasoning process independent of any prompt engineering, future work could design prompting techniques to steer model behaviour and address the blind spot we identified of LLMs over-rigidly applying logical rules to accept and draw factually inconsistent conclusions.

**S2: typos**

Thank you for spotting these. We will incorporate the corrections in our final version.

[1] Quelhas et al. https://doi.org/10.1080/17470210903536902
[2] Quelhas and Johnson-Laird. https://doi.org/10.1080/17470218.2016.1154079
[3] Johnson-Laird and Byrne. https://doi.org/10.1037/0033-295X.109.4.646
Summary: This paper introduces RULEBREAKERS, a dataset specifically created to assess LLMs on reasoning scenarios that emphasize "human-like reasoning" over logical reasoning. The study demonstrates that state-of-the-art LLMs, including GPT-4o, frequently apply logical rules even when doing so is inconsistent with human reasoning.

Claims And Evidence: No. "Human reasoning" is undefined and highly ambiguous. A person with even basic logic training would not find the example in Figure 1 counterintuitive. Moreover, while the dataset is claimed to evaluate human-like reasoning, it was neither created nor validated by humans, making the claim unsupported.

Methods And Evaluation Criteria: No. See above.

Theoretical Claims: N/A

Experimental Designs Or Analyses: Yes

Supplementary Material: No

Relation To Broader Scientific Literature: This work tries to connect cognitive science and LLM reasoning.

Essential References Not Discussed: No

Other Strengths And Weaknesses:

Strengths: Extensively compared multiple advanced LLMs.

Weaknesses:
1. "Human reasoning" is undefined and highly ambiguous. A person with even basic logic training would not find the example in Figure 1 counterintuitive. Moreover, while the dataset is claimed to evaluate human-like reasoning, it was neither created nor validated by humans, making the claim unsupported.
2. Limited exploration into internal model mechanisms that cause failure.

Other Comments Or Suggestions:
1. Potential effects of prompt phrasing on model performance are recognized but not deeply explored.
2. The hypothesis regarding model over-generalization of logic rules is plausible but could benefit from further empirical validation.

Questions For Authors:
1. How to define "human-reasoning" and why?
2. Can you provide additional details on how model familiarity with entities impacts performance?
3. Have experiments been considered with models specifically fine-tuned for logical reasoning tasks?

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank Reviewer QkHy for their helpful feedback and recognizing that our study was “**extensive**” in having “**compared multiple advanced LLMs**”.

**Weakness 1 (W1): “human reasoning is undefined”/Question 1 (Q1): “how to define human-reasoning and why?”**

We will **replace references to “human reasoning” with “knowledge-informed reasoning”**. We define reasoning to be “knowledge-informed” if a reasoner incorporates factual and commonsense knowledge in their reasoning process, and does not draw or accept conclusions that factually contradict any premises. Accordingly, a model that accepts the conclusions in our rulebreaker cases would fail to be “knowledge-informed”. Our definition is motivated by arguments in existing cognitive science literature that humans typically incorporate factual and commonsense knowledge to interpret natural language premises, and avoid drawing conclusions that, to their knowledge, factually contradict any premises. We discussed these in **lines 45-63 and 96-116** and will add more justifications for our definition. This term also highlights our findings' implications for LLM use in knowledge-intensive tasks, as discussed in our S1 response to Reviewer twqU.

**W2: “example in Fig 1” is not counterintuitive**

The example is commonly used in cognitive science works (e.g. [1], [2]) to demonstrate reasoning problems with “relevance conditional” statements. We will replace this with an example from our own dataset to better represent our rulebreakers.

**W3: “dataset...was neither created nor validated by humans”**

We kindly refer to our S1/Q2 response to Reviewer fmAM.

**W4: “Limited exploration into internal model mechanisms that cause failure”**

Please see **Appendix M**: we analyzed neuron activations in feedforward layers, complementing two potential causes diagnosed in Section 6. We welcome further suggestions on specific investigation methods.
**Suggestion 1 (S1): “Potential effects of prompt phrasing…not deeply explored.”**

We refer to **Appendices D and F**: we analyzed the breakdown of our results by different phrasings, and tested more re-phrasing and additions to confirm that our findings are robust against prompt variations.

**S2: “hypothesis regarding model over-generalization …could benefit from further empirical validation”**

See **Appendices F and G**: we validated our main results with further experiments, in addition to existing controls for models’ factual knowledge and prompt sensitivity (as above).

**Q2: “Can you provide additional details on how model familiarity with entities impacts performance?”**

To clarify **lines 396-416**, we found that a model generally performs worse on prompts containing unfamiliar entities, as compared to prompts containing familiar ones. This is supported by our qualitative analysis in **Appendix I**. **This trend mirrors findings from cognitive science [3]** that when participants are familiar with entities in the premises, they are more likely to recognize and avoid factually contradicting conclusions, as compared to when they are not.

Separately, when comparing across models, we found that simply being familiar with entities does not guarantee good performance on RULEBREAKERS: while Gemma-2-27b-it is overall highly familiar with entities in our dataset, it scores among the poorest in paired accuracy. This shows that **a model can excel at recalling factual knowledge yet still struggle to apply that knowledge effectively in reasoning**. We will add this expanded explanation in the final version.

**Q3: “Have experiments been considered with models specifically fine-tuned for logical reasoning tasks?”**

We did not do so, as our work initially aimed to highlight a blind spot that exists even in general-purpose models. We expect the behaviour of over-rigidly applying logic to be even more pronounced in such fine-tuned models.
To test this, we **evaluate a Llama-3.1-8B-Instruct** on Hugging Face [4] **fine-tuned on a dataset for propositional logic reasoning**, and compare its performance on RULEBREAKERS against the baseline model before fine-tuning.

| | **Main experiment setup (% accuracy)** | | | **Appendix G setup** | |
|---|---|---|---|---|---|
| | Paired | Rulebreakers | Non-rulebreakers | % of correct conclusions generated in rulebreaker cases | % of correct conclusions generated in non-rulebreaker cases |
| Baseline | **50.45** | **91.46** | 58.60 | **74.20** | 47.91 |
| Fine-tuned | 11.42 | 32.93 | **75.48** | 0.40 | **65.75** |

As expected, the fine-tuned model performs better on non-rulebreakers but substantially worse on rulebreakers compared to the baseline. We will include a detailed discussion of these results in the final version.

[1] Johnson-Laird. Mental models. Cambridge University Press, 1983.
[2] Quelhas et al. https://doi.org/10.1080/17470210903536902
[3] Johnson-Laird and Byrne. https://doi.org/10.1037/0033-295X.109.4.646
[4] huggingface.co/ergotts/llama_3.1_8b_prop_logic_ft
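The "Paired" scores referenced throughout the discussion count a pair as correct only when both the rulebreaker and its matched non-rulebreaker are answered correctly (per Reviewer fmAM's summary: "accuracy on getting the pairs completely correct"). A minimal sketch of such a metric follows; this is an illustrative reconstruction under that stated assumption, not the authors' evaluation code, and the function name is hypothetical.

```python
# Hypothetical sketch of paired accuracy: each item in `results` is a
# (rulebreaker_correct, non_rulebreaker_correct) pair of booleans, and a
# pair scores 1 only when BOTH members were answered correctly.
# Illustrative only - not the authors' actual evaluation code.

def paired_accuracy(results):
    """Fraction of pairs where both members are answered correctly."""
    if not results:
        return 0.0
    both_correct = sum(1 for rb_ok, nrb_ok in results if rb_ok and nrb_ok)
    return both_correct / len(results)

# Toy illustration: a model that always accepts the logically derivable
# conclusion gets every non-rulebreaker right but every rulebreaker wrong,
# so its paired accuracy is 0 - the failure mode the rebuttal describes
# for systems constrained to follow formal logic.
rigid_model = [(False, True)] * 10
print(paired_accuracy(rigid_model))  # 0.0

mixed = [(True, True), (True, False), (False, True), (True, True)]
print(paired_accuracy(mixed))  # 0.5
```

This makes explicit why, as argued later in the thread, a system that accepts the conclusion in both members of every pair would score 0 in paired accuracy regardless of its accuracy on either subset alone.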
Summary: The authors introduce RULEBREAKERS, a dataset designed to assess LLMs' ability to reason using common sense and factual knowledge rather than strictly following formal logic. The experimental evaluation proposed in the paper spans seven LLMs, and its findings uncover a notable weakness in these models' ability to identify and reject conclusions derived from propositional logic that are factually inconsistent with the premises.

The paper is primarily well-written and easy to understand and follow (excluding a few paragraphs). The results are interesting and highlight an open challenge that LLMs need to face. Meanwhile, the experimental settings could be extended to take into account large LLMs and hybrid approaches to test the impact of model complexity/size on its reasoning capability. Overall, the paper feels relevant. Thus, I consider the paper slightly above the ICML conference's acceptance threshold.

## update after rebuttal

I thank the authors for the clear and detailed rebuttal they provided. Overall, I think the authors covered some of my doubts, while also leaving some open questions. Therefore, I consider the paper to be generally solid, and my feeling concerning it is still positive. Therefore, I am keeping my original review score (which was already positive).

Claims And Evidence: The authors’ claims are largely supported by empirical evidence. Although the experimental evaluation could be extended to take into account larger LLMs or hybrid approaches, the performance of the tested models seems to back the authors’ claims in most experiments. Therefore, I believe that the authors' claims are sound.

Methods And Evaluation Criteria: The methodology proposed by the authors to test the performance of LLMs on rule-breakers is reasonable. The evaluation metrics considered highlight different aspects of the LLM's capabilities, allowing the authors to identify a few different relevant challenges related to rule-breakers and LLMs.
Overall, the methods and evaluation criteria are reliable.

Theoretical Claims: The authors did not provide any theoretical claims given that the paper focuses on experimental analysis only. Therefore, this question is not applicable.

Experimental Designs Or Analyses: The evaluation of the experiments proposed by the authors is sound and extensive enough to back up their claims. The rule-breakers are tested over a few different LLMs, thus giving an overview of general LLM limitations. However, the experimental evaluation could still be extended to larger LLMs to identify if the limitations highlighted by the authors are connected to model size or if an intrinsic issue exists with how the LLM thought process is brought about.

Supplementary Material: Yes, I had a look at the appendix to better understand the prompt construction process and the findings of the additional experiments proposed by the authors.

Relation To Broader Scientific Literature: The authors did not mention how the failure of LLMs to recognize and handle rule-breakers could impact their application from a real-world perspective.

Essential References Not Discussed: I don’t believe there are any essential references missing. However, given that I’m not an expert on the topic, I might be wrong.

Other Strengths And Weaknesses:

STRENGTHS:
- The paper tackles an interesting research field, analyzing the ability of LLMs to reason in a human-like manner
- The rule-breakers proposed by the paper seem to be constructed in a sound manner
- The obtained dataset is extensive and represents a valid addition to the LLM community
- The experimental evaluation highlights some interesting findings on LLMs' behaviour on such rule-breakers

WEAKNESSES:
- Some findings are not clearly presented by the authors, such as section 5.2 or the claim on the familiarity effect in section 6.
- The authors selected only small/medium-sized LLMs. Thus, the validity of the paper’s findings may be limited.
- The authors mention that approaches exist that incorporate logic-based training metrics or constraints during inference in LLMs. Still, they do not check how these hybrid models perform in the rule-breaker settings.

Other Comments Or Suggestions:
- Although I understand that the model selection process was bounded by resource limitations, leveraging only small/medium-sized LLMs for the experimental evaluation represents a relevant issue for the paper. Indeed, it would be interesting to understand if larger, more complex models are capable of performing better on rule-breakers. As it stands, the experimental evaluation is a bit undersized with respect to the relevance of the task considered.
- The temperature parameter used by the LLM represents a very relevant hyper-parameter to study for validating the output. However, the authors fail to mention anything about such a parameter in the paper (unless I failed to read it properly). The behaviour of an LLM when varying the temperature parameter may largely vary, thus altering the performance of the obtained output. Therefore, I would suggest the authors add some other experiments to show what happens at the variation of the temperature.
- Looking at Figure 3, it seems that the performance of the same LLM largely varies across the different types of rule-breakers. Also, there does not seem to be a precise correlation between the performance of the different models on the different types of rule-breakers. For example, Phi3 medium performs much better on MT (type, instance) than on DS (country, city). Meanwhile, Llama3 70B is the exact opposite. However, the authors seem to fail to mention anything concerning this aspect in their discussion of the results. Therefore, I suggest the authors add more insights concerning why these behaviours emerge.
- Figure 5 is not presented clearly. In my opinion, section 5.2 is a bit tricky to follow as the authors confused the readers when presenting the results.
I suggest the authors rephrase it a bit and be more careful in their writing.

- The authors mention, “If our hypothesis (1) is correct, we would expect models to have a higher “familiarity” with respect to prompt pairs in their “recognized” group, i.e. those that the model has answered correctly. As shown in Figure 6, this holds true for all LLMs except for Meta-Llama-3-8B-Instruct.”. However, from Figure 6, it seems like the opposite is true. This may be due to the authors plotting the results in the form of a box plot, which does not allow us to distinguish the nuanced variations among the red and green boxes well. From an outside perspective, it seems like the boxes are very much overlapping, thus invalidating the authors' statement. Assuming that the authors’ statement is indeed true, I would suggest that the authors find a better representation of the results in Figure 6. A simple table with a mean and standard deviation of the familiarity would be enough.
- Since the authors mentioned that there exist approaches incorporating logic-based training metrics or constraints during inference in LLMs, it would be interesting to check how these hybrid models perform on the rule-breakers dataset.

Questions For Authors:
- Could the authors discuss what they would expect to find out when applying the same experimental evaluation to larger models?
- Did the authors consider the possible impact of the temperature parameter when prompting the LLMs in their experimental evaluation?
- Looking at Figure 3, the performance of the same LLM largely varies across the different types of rule-breakers. Do the authors have any insight on why this happens? Also, there does not seem to be a precise correlation between the performance of the different models on the different types of rule-breakers. For example, Phi3 medium performs much better on MT (type, instance) than on DS (country, city). Meanwhile, Llama3 70B is the exact opposite.
Do the authors have any insight on why this is happening?
- Could the authors provide additional experiments implementing hybrid LLM approaches incorporating logic-based training metrics or constraints during inference to test their performance on the rule-breakers dataset?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer twqU for commending that our “**methods are reliable**”, our “**dataset represents a valid addition to the LLM community**”, our experiments are “**sound**”, and results “**highlight an open challenge**”. We are glad they found our paper “**primarily well-written**”, echoing Reviewer fmAM.

**Suggestion 1 (S1): “mention how failure could impact LLM real-world applications”**

LLMs’ ability to detect and handle knowledge conflicts is crucial to ensure robustness and guard against misinformation. Suppose a model is given this context, like DS-type rulebreakers in our dataset: ``“The patient was informed that the treatment for their illness is available either in Berlin or somewhere in Germany. However, as it turns out, the treatment is not available anywhere in Germany.”``

Models rigidly applying logic would incorrectly conclude that “the treatment is available in Berlin”, contradicting the fact in context. Instead, an ideal model should recognize that, since Berlin is in Germany, the treatment is not available in Berlin either. To demonstrate, we add the **bold** phrases to premises in our prompts ("**Suppose we are told that** Anne is either in Stockholm or somewhere in Sweden. **However, as a matter of fact,** Anne is not in Sweden"), but found that models' performances on RULEBREAKERS remain mediocre (paired accuracy ranging from 0.29 to 0.69). Due to space, we will add and discuss these results in the final version.

**S2/Weakness 1 (W1): “Some findings not clearly presented: [Section 5.2, Section 6, Fig 5]”**

We will replace Fig 5 with a table and revise Section 5.2 to improve clarity. Specifically, we will make clear that we use a model’s output probabilities as a measure of its confidence. We will also add to Section 6 the clarification we set out in our Q2 response to Reviewer QkHy.
**W2: “selected only small/medium-sized LLMs”/Question 1 (Q1): “what would authors expect evaluating larger models?”**

We included a 70B model and the large model GPT-4o. We do not observe a correlation between model size and performance on RULEBREAKERS: GPT-4o underperforms most models, and Llama-3-70B-Instruct underperforms its 8B variant. Thus, we consider that our results with selected model sizes **already make a strong point regarding limitations of current LLMs, including state-of-the-art GPT-4o**.

**Q2: “Did authors consider impact of temperature when prompting?”/S3: “add experiments [re] temperature”**

In our main experiment, we run only one forward pass to extract the most probable token without randomly sampling, so temperature is not used. In Appendix G, we prompt LLMs to generate conclusions by greedy decoding. Our results already make a strong case, but we follow the Reviewer’s suggestion and test a subset of models. As Table 1 shows, random sampling with temperature yielded mixed effects for different models but did not alter our findings. We will add these results in the final version.

| **Temperature** | **Phi-3-mini-128k-Instruct** | **Llama-3-8B-Instruct** | **Mistral-7B-Instruct-v0.3** |
|---|---|---|---|
| Not set - greedy decoding (baseline) | 1.23 | **32.44** | **19.50** |
| 0.1 | 1.47 | 32.34 | 19.26 |
| 0.5 | 4.24 | 31.76 | 18.68 |
| 1.0 | **7.59** | 28.51 | 17.12 |

Table 1. % of correct paired responses

**Q3: “performance of same LLM varies across rulebreaker types...no correlation between performance of different models on different types...any insight on why?”/S4: “add more insights on why these behaviours emerge”**

We appreciate and agree with these observations. We expect models may differ in specific traits: some may be better at reasoning with geographical entities; others worse at recognizing some forms of negated premises.
However, the fact that performance varies unpredictably shows that models do not handle these reasoning problems robustly or consistently. Rather, they may be affected by biases or quirks in the training data.

**Q4: “additional experiments implementing hybrid LLM approaches?”/W3: “do not check how these hybrid models perform”**

Our study aimed to expose a blind spot even in general-purpose models. We add a model specifically fine-tuned for logical reasoning (see our Q3 response to Reviewer QkHy), which enhances our findings and reflects our motivation for introducing “rulebreakers”. Hybrid systems like LogicLM [1] use LLMs to parse natural language statements into logical form. However, **each rulebreaker and non-rulebreaker in a pair share the same surface form and are both valid in propositional logic**. Thus, even if the parsing is correct, the system would theoretically accept the conclusion in both rulebreakers and non-rulebreakers, hence scoring 0 in paired accuracy. We will add this discussion to the final version.

**S5: “better representation of Fig 6”**

Per the Reviewer's suggestion, we will replace Fig 6 with a table of mean familiarity values (and std) for clarity.

[1] Pan et al. https://aclanthology.org/2023.findings-emnlp.248

---

Rebuttal Comment 1.1: Comment: I thank the authors for their rebuttal. The authors addressed a few of my original doubts, especially the ones concerning the temperature parameter and the effect of model size on the obtained results. However, some other questions and doubts remain open even after the rebuttal. More in detail, the authors seemed to avoid (or at least circumvent) the question concerning the comparison with hybrid models. The authors briefly mention that such models would, in theory, score 0 paired accuracy. However, the authors focused exclusively on LogicLM, which is just one of the different hybrid approaches available.
Moreover, such a claim is not supported by empirical evidence (possibly given the short time available for the rebuttal). Similarly, the answer to Q3 is a bit dry, and I'd suggest the authors expand on this explanation. While I agree that biases may arise depending on the training data used, they cannot justify the low performance of LLMs on rule-breakers. Intuitively, there should be a deeper reason behind such brittle performance, possibly related to the inherent autoregressive nature of LLMs. Could the authors elaborate more on this topic? Finally, I had a look at the other reviewers' insights and the related authors' responses. In particular, I agree with reviewer 8kkH that the relationship between the current submission and the findings of [1] and [2] is not well discussed in the paper. I would not go as far as reviewer 8kkH to say that "the main finding of this paper seems to have already been well-studied by previous works", but I'd suggest the authors improve the presentation of the paper to better highlight the differences between RULEBREAKERS and [1] and [2]. Overall, I think that the rebuttal submitted by the authors covered some of my doubts, while also leaving some questions open. However, these remaining gaps are not enough to justify lowering my review score, nor are the points addressed enough to justify increasing my original review score (which was already positive). Therefore, I will keep my original score. [1]. Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks [2]. Language models show human-like content effects on reasoning tasks --- Reply to Comment 1.1.1: Comment: We thank the Reviewer for their helpful reply. _**“comparison with hybrid models”**_ To clarify, we consider our theoretical argument to apply generally to any models or systems that are explicitly guided or constrained to reason according to formal logic. 
However, we are happy to validate this by empirically testing LogicLM and also Symbolic CoT [1]; due to time constraints, we will include the results in the final version. We welcome additional specific suggestions, though we kindly ask the Reviewer to take into account the prohibitive cost associated with pre-training such models. _**Further response to Q3**_ As we are the first to identify this systematic phenomenon, we agree that this opens up a rich area for further investigation and analysis in future work. For example, one of the patterns we noted in **lines 318-324** is that models generally perform better on the DS subset of RULEBREAKERS compared to the MT subset. A possible explanation for this performance difference could be the structural similarity between the natural language premises in different rulebreaker types and their apparent logical forms. Intuitively, for a model to recognize that a logical rule can be applied, it needs to recognize where segments of the natural language statements are negations of one another, e.g. abstracting these segments into **B** and **not B**. This mapping is more obvious and straightforward in MT cases than in DS ones. For example, in MT (country, city) cases, the **bolded** and _italicized_ parts form a clear negation pair: "If Anne is in Sweden, then **she is not in Stockholm**. _Anne is in Stockholm_." (Logical form: If A, then **not B**. _B_.) By contrast, the negation is less obvious in DS (country, city) cases: "**Anne is** either in Stockholm or **somewhere in Sweden**. _Anne is not in Sweden_." (A or **B**. _Not B_.) We conjecture that this may make it more difficult for models to map DS premises and conclusions to their logical forms, compared to MT ones. As a result, models may be less likely to rigidly apply logical rules, hence avoiding incorrect conclusions in DS rulebreaker cases. 
Nevertheless, as with other factors such as entity familiarity, the precise impact of this “structural-similarity effect” on model behavior may vary across models, potentially contributing to the performance variation we observe. We will include this expanded discussion in the final version of our paper. _**“relationship between the current submission and the findings of [2] and [3]”**_ We thank the Reviewer for their suggestion. As we explained in our response to Reviewer 8kkH, there appears to be a misunderstanding regarding the relationship between our submission and the findings of [2] and [3]. We explicitly discussed and distinguished [2] from our work in **Appendix A (lines 829-833)**; and similarly addressed Saparov and He (2023) [4], which predates the relevant methodology and findings in [3], in **lines 846-852**. Nonetheless, to ensure full clarity, we will include a more in-depth discussion in the final version of our paper, along with the comparison table we already provided in our response to Reviewer 8kkH. [1] Xu et al. 2024. https://aclanthology.org/2024.acl-long.720/ [2] Reasoning or Reciting? [3] Language models show human-like content effects on reasoning tasks [4] Saparov and He. https://openreview.net/pdf?id=qFVVBzXxR2V
UniDB: A Unified Diffusion Bridge Framework via Stochastic Optimal Control
Accept (spotlight poster)
Summary: This paper proposes a unified framework, UniDB, of diffusion bridge models based on Stochastic Optimal Control. This framework enhances the quality and detail of generated images by balancing control cost and terminal penalty. Claims And Evidence: Claim 1: UniDB helps to understand and generalize Doob’s $h$-transform. The theoretical derivations in the paper demonstrate that Doob’s $h$-transform is a special case of UniDB when the penalty coefficient approaches infinity. This provides a solid mathematical foundation for the claim. Claim 2: UniDB improves image quality by allowing the design of different controllers $u_{t, \gamma}$. The claim is supported by extensive qualitative and quantitative experimental results, demonstrating that controller design enhances image quality. Methods And Evaluation Criteria: The use of PSNR, SSIM, LPIPS, and FID as evaluation metrics in experiments on three high-resolution datasets (CelebA-HQ, Rain100H, and DIV2K) is reasonable. These metrics comprehensively assess image quality, covering aspects of pixel-wise similarity (PSNR, SSIM), perceptual quality (LPIPS), and generative diversity (FID). The chosen datasets are also well-suited for high-resolution image generation and restoration tasks, ensuring a robust evaluation. Theoretical Claims: The mathematical derivations in the paper are complete and generally reliable. However, two aspects require further clarification from the authors: 1. Choice of $L_1$ norm in the training objective (Equation 19): The paper does not provide a clear justification for using the $L_1$ norm instead of other alternatives like the $L_2$ norm. The authors should explain whether this choice is based on empirical performance, theoretical considerations, or robustness to outliers. 2. 
Introduction of the state vector term $m$ in linear SDE: One of the paper's key novelties is introducing the $m$ term in the linear SDE form, but it is not explicitly explained how it is computed or designed in the main context. Further elaboration on the motivation, computation, and impact of $m$ would improve the clarity of the contribution. Experimental Designs Or Analyses: The experiments are comprehensive and well-designed, covering three different tasks to ensure robustness and generalizability. Additionally, the paper conducts an ablation study on the key penalty coefficient $\gamma$, which helps evaluate its impact on model performance. However, I suggest that the authors include DDBM as a benchmark for comparison. Since DDBM is also a Doob’s $h$-transform-based model and is mentioned in the **Preliminaries**, it would be beneficial to compare the proposed method against it. This would provide a clearer assessment of the advantages and potential improvements offered by the proposed framework. Supplementary Material: The supplementary material is complete and provides sufficient additional details to support the main paper. Relation To Broader Scientific Literature: 1. UniDB as a Generalization of Doob’s h-Transform: Doob’s h-transform has been widely studied in stochastic processes and has been applied in bridge modeling [1][2]. The paper demonstrates that Doob’s $h$-transform is a special case of the proposed UniDB framework when the penalty coefficient tends to infinity, providing a broader theoretical foundation for diffusion bridge models. 2. Controller Design for Improved Image Generation: Stochastic Optimal Control has been explored for diffusion bridge models [3], but existing approaches often lead to artifacts such as blurred or distorted details. By allowing the design of different controllers $u_{t, \gamma}$, UniDB provides greater control over generation quality, leading to improved image fidelity and diversity across multiple datasets. 
[1] Zhou, Linqi, et al. "Denoising diffusion bridge models.", 2023. [2] Yue, C., et al. "Image restoration through generalized ornstein-uhlenbeck bridge", 2024. [3] Park, B., et al. Stochastic optimal control for diffusion bridges in function spaces, 2024. Essential References Not Discussed: The key contribution of this paper is the introduction of Stochastic Optimal Control into the DDBM theoretical framework. However, a similar approach was explored last year in: [1] Zhang, Shaorong, et al. "Exploring the Design Space of Diffusion Bridge Models via Stochasticity Control." arXiv preprint arXiv:2410.21553 (2024). To differentiate from prior work, the paper should explicitly highlight the key distinctions between this work and Zhang et al. (2024), particularly in how Stochastic Optimal Control is formulated and applied. Other Strengths And Weaknesses: The paper is well-structured and clearly organized, making it easy for readers to follow. However, the novelty is questionable, as a similar approach was explored last year. Other Comments Or Suggestions: See the above comments. Questions For Authors: See the above comments. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your feedback and comments. > Claim 1: Choice of $L_1$ norm in the training objective (Equation 19): The paper does not provide a clear justification for using the $L_1$ norm instead of other alternatives like $L_2$ norm. The authors should explain whether this choice is based on empirical performance, theoretical considerations, or robustness to outliers. Thanks for your feedback. **Using the $L_1$ norm is a common trick** in the field of image restoration with **many prior works [G1-G4]**. We simply follow the same trick, as we mentioned in **line 270 of our paper**. The reasons for choosing the $L_1$ norm over alternatives such as the $L_2$ norm in image restoration tasks have been analyzed in [G2-G4]. In [G2], "when the network is trained using $L_1$ as a cost function, instead of the traditional $L_2$, the average quality of the output images is superior for all the quality metrics considered." [G3, G4] establish the $L_1$ loss's superiority over $L_2$ through improved convergence and outlier robustness. [G1] Yue et al. "Image restoration through generalized ornstein-uhlenbeck bridge", ICML 2024. [G2] Zhao et al. "Loss Functions for Image Restoration with Neural Networks", IEEE Transactions on Computational Imaging 2017. [G3] Lim et al. "Enhanced Deep Residual Networks for Single Image Super-Resolution", CVPR 2017 workshop. [G4] Mu et al. "Riemannian Loss for Image Restoration", CVPR 2019 workshop. > Claim 2: Introduction of the state vector term $\mathbf{m}$ in linear SDE: One of the paper's key novelties is introducing the $\mathbf{m}$ term in the linear SDE form, but it is not explicitly explained how it is computed or designed in the main context. Further elaboration on the motivation, computation, and impact of $\mathbf{m}$ would improve the clarity of the contribution. 
We want to clarify that we've **never** claimed the introduction of the state vector term $\mathbf{m}$ in the linear SDE is the **key novelty of our framework**. Instead, our main novelty lies in constructing the diffusion bridge in the form of stochastic optimal control and revealing that Doob's h-transform is a special case of ours. The introduction is just a **simple reformulation** following the prior works IR-SDE [G5] and GOUB [G1]: we reformulated the drift term $\theta_t(\mu - x_t)$ into $f_t x_t + h_t \mathbf{m}$. Particularly in our experiments of UniDB-GOU, for a fair comparison, we set $\mathbf{m} = \mu$, which is identical to GOUB [G1], ensuring consistency with baselines. [G5] Luo et al. "Image Restoration with Mean-Reverting Stochastic Differential Equations", ICML 2023. > Experimental Designs Or Analyses: The reviewer suggests that the authors should include DDBMs as a benchmark for comparison. Actually, we **have compared DDBMs** as a benchmark in **Appendix E "Additional Experimental Results"** of our paper, theoretically analyzing the application of UniDB to DDBMs (Appendix: A.8. Examples of UniDB-VE and UniDB-VP) and conducting extensive experiments (Tables 3, 4, and 5). Specifically, it outperforms DDBMs on both LPIPS and FID metrics, with gains reaching up to ~20\% in some cases. > Essential References Not Discussed: a similar approach was explored last year in: Zhang, Shaorong, et al. "Exploring the Design Space of Diffusion Bridge Models via Stochasticity Control". To differentiate from prior work, the paper should explicitly highlight the key distinctions between this work and Zhang et al. (2024), particularly in how Stochastic Optimal Control is formulated and applied. Our work takes a completely different approach from the prior work Zhang et al. 
(2024) (hereafter denoted SDB), specifically: - **Different purposes.** SDB mainly focuses on resolving **singularities and accelerating training and sampling**, issues caused by neglecting the impact of noise in sampling SDEs. In contrast, our UniDB aims to **address the issues resulting from Doob's h-transform (e.g. artifacts along edges and unnatural patterns)** that occur in the existing diffusion bridge models. - **Different methods.** Although both that paper and our work mention "Stochastic Control" in their titles, **the two notions of "Stochastic Control" are totally different**. SDB leans towards **stochastic processes**, adding noise to the base distribution and stochasticity to the reverse process, which **is still based on Doob's h-transform**. Our UniDB, by contrast, focuses on stochastic **optimal** control, modeling the forward process as an **optimization problem** to analyze the drawbacks of Doob's h-transform. Therefore, the two articles are **fundamentally different** in nature. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' clarification on my confusion. I've raised my rating.
Summary: This paper introduces UniDB, a diffusion bridge model framework that utilizes Stochastic Optimal Control (SOC) for process optimization, providing an analytical solution for the optimal controller. UniDB generalizes existing diffusion bridge models by showing that Doob’s h-transform is a special case where the penalty coefficient γ tends to infinity. By adjusting the trade-off between control costs and terminal penalties, UniDB improves image detail and quality while maintaining compatibility with existing models. Experimental results demonstrate its effectiveness in image restoration tasks. ## update after rebuttal I have carefully read the rebuttal and would like to maintain my original score. Claims And Evidence: UniDB claims to generate higher-quality images than Doob’s h-transform and supports this claim with experimental evidence on tasks such as super-resolution, inpainting, and deraining. However, additional explanation is needed regarding whether the optimal controller in LQ SOC directly contributes to producing sharper and more detailed images. Furthermore, through Proposition 4.3, it is shown from an LQ SOC perspective that the optimal controller obtained with a finite γ is preferable to that of the infinite case. However, it remains unclear whether there is a systematic analysis regarding the choice of γ. In Proposition 4.5 and Figure 2, a sufficiently large γ is selected to minimize differences in terminal point positions. Nevertheless, based on the results in Table 2, it appears that using an optimal controller with finite γ does not always lead to improvements in actual evaluation metrics. I would appreciate further clarification on this point. Methods And Evaluation Criteria: The authors demonstrate their approach’s strength through evaluation criteria widely used in prior works Theoretical Claims: The detailed proofs of the Theorems and Propositions in the main paper are clearly described in the Appendix. 
However, I noticed some minor issues when cross-referencing the statements with their proofs. 1. There are minor typos regarding the connection between the statements and the proofs. The authors are encouraged to check the appendix number references carefully. 2. In equation (53) of Appendix A.3, could the authors provide more details on the derivation of this part? $ \frac{1}{2} \displaystyle\frac{\gamma}{(1+\gamma e ^ {2 \bar{f}_T} \bar{g}_T^2)^2} ||a||_2^2 = \frac{\gamma}{2} || \mathbf{x}_T^u - x_T ||_2^2 $ The intermediate steps would help improve clarity. Experimental Designs Or Analyses: This paper follows a standard experimental design and evaluation process, so I believe there are no issues in this regard. Supplementary Material: I examined the validity of each section in the Appendix as well as their connections to the main paper, and the related concerns have been raised above. Relation To Broader Scientific Literature: No. Essential References Not Discussed: No. Other Strengths And Weaknesses: Please refer to the questions and comments provided above. Other Comments Or Suggestions: Please refer to the questions and comments provided above. Questions For Authors: Please refer to the questions and comments provided above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We greatly appreciate your feedback and inquiries. > Claims And Evidence 1: Additional explanation is needed regarding whether the optimal controller in LQ SOC directly contributes to producing sharper and more detailed images. The over-control in Doob's h-transform violates the natural statistical properties of images, **prioritizing mathematical precision (pixel-perfect endpoints) over visual authenticity (realistic SDE trajectories)**. UniDB leads to better overall performance considering **both realistic SDE trajectories and target endpoint matching**. A comprehensive analysis can be found in our response to **Reviewer 33jh**'s Claim and our response to **Reviewer AzN7**'s Claim and Question 2. > Claims And Evidence 2: It remains unclear whether there is a systematic analysis regarding the choice of $\gamma$. Based on the results in Table 2, it appears that using an optimal controller with finite $\gamma$ does not always lead to improvements in actual evaluation metrics. In **Figure 2** of our paper, we illustrate that a range of $10^5$ to $10^9$ for $\gamma$ works well. What we want to emphasize is not that there is a single best value of $\gamma$ for each dataset, but rather that most values of $\gamma$ chosen **within this interval can yield a better result than Doob's h-transform ($\gamma = \infty$)**. While suboptimal $\gamma$ values may occasionally degrade performance, our approach provides **a simple, efficient, and nearly cost-free way** to improve model performance, which works well in most cases. It is worth noting that when $\gamma$ is set to $10^7$, our model **outperforms all baselines across multiple tasks**, including super-resolution (on DIV2K, CelebA, and FFHQ), deraining (on Rain100H), and inpainting (on CelebA). 
> Question: Could the authors provide more details on the derivation of this part: $\frac{\gamma}{2}\left\|\mathbf{x}_T^u-x_T\right\|_2^2=\frac{\gamma\|a\|_2^2}{2\left(1+\gamma e^{2 \bar{f}_T} \bar{g}_T^2\right)^2}$?

Yes, below is the detailed proof. At line 770 of our paper, we defined $a=e^{\bar{f}_T}x_0-x_T+\mathbf{m}e^{\bar{f}_T}\bar{h}_T$ for notational simplicity. The first equation of Eq. 51 is a simple simplification obtained by substituting the expression for $\mathbf{x}_T^u$ from Eq. 50 into Eq. 51. A more detailed proof is as follows:

$\left\|\mathbf{x}_T^u-x_T\right\|_2^2=\left\|\left(\frac{\gamma^{-1} e^{\bar{f}_T}}{\gamma^{-1}+e^{2 \bar{f}_T} \bar{g}_T^2}\right) x_0+\left(\frac{e^{2 \bar{f}_T} \bar{g}_T^2}{\gamma^{-1}+e^{2 \bar{f}_T} \bar{g}_T^2}\right) x_T+e^{\bar{f}_T}\left(\frac{\gamma^{-1} \bar{h}_T}{\gamma^{-1}+e^{2 \bar{f}_T} \bar{g}_T^2}\right) \mathbf{m}-x_T\right\|_2^2$

$=\left\|\left(\frac{\gamma^{-1} e^{\bar{f}_T}}{\gamma^{-1}+e^{2 \bar{f}_T} \bar{g}_T^2}\right) x_0-\left(\frac{\gamma^{-1}}{\gamma^{-1}+e^{2 \bar{f}_T} \bar{g}_T^2}\right) x_T+e^{\bar{f}_T}\left(\frac{\gamma^{-1} \bar{h}_T}{\gamma^{-1}+e^{2 \bar{f}_T} \bar{g}_T^2}\right) \mathbf{m}\right\|_2^2$

$=\left\|\left(\frac{e^{\bar{f}_T}}{1+\gamma e^{2 \bar{f}_T} \bar{g}_T^2}\right) x_0-\left(\frac{1}{1+\gamma e^{2 \bar{f}_T} \bar{g}_T^2}\right) x_T+e^{\bar{f}_T}\left(\frac{\bar{h}_T}{1+\gamma e^{2 \bar{f}_T} \bar{g}_T^2}\right) \mathbf{m}\right\|_2^2$

$=\frac{\left\|e^{\bar{f}_T} x_0-x_T+\mathbf{m} e^{\bar{f}_T} \bar{h}_T\right\|_2^2}{\left(1+\gamma e^{2 \bar{f}_T} \bar{g}_T^2\right)^2}=\frac{\|a\|_2^2}{\left(1+\gamma e^{2 \bar{f}_T} \bar{g}_T^2\right)^2}$

Then, multiplying both sides of the equation by $\frac{\gamma}{2}$, we obtain $\frac{\gamma}{2}\left\|\mathbf{x}_T^u-x_T\right\|_2^2=\frac{\gamma\|a\|_2^2}{2\left(1+\gamma e^{2 \bar{f}_T} \bar{g}_T^2\right)^2}$. 
We will add the detailed proof in the revised version. > Suggestion: There are minor typos regarding the connection between the statements and the proofs. Yes, you are right. Thanks for pointing them out, and we will correct them in the revised version.
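The identity above can also be sanity-checked numerically. The following is an illustrative sketch (not part of the paper) using arbitrary scalar values, where `f`, `g`, and `h` stand in for $\bar{f}_T$, $\bar{g}_T$, and $\bar{h}_T$:

```python
import math

# Numerical check of the identity
#   (gamma/2) * ||x_T^u - x_T||^2 = gamma * ||a||^2 / (2 * (1 + gamma * e^{2f} g^2)^2)
# with a = e^f * x0 - xT + m * e^f * h.  All values below are arbitrary
# illustrative scalars, not values from the paper.

f, g, h = 0.3, 0.8, 0.5
gamma = 1e3
x0, xT, m = 1.7, -0.4, 0.9

# Terminal state x_T^u as the convex combination appearing in the derivation
D = 1.0 / gamma + math.exp(2 * f) * g ** 2
x_T_u = ((math.exp(f) / gamma) * x0
         + math.exp(2 * f) * g ** 2 * xT
         + math.exp(f) * (h / gamma) * m) / D

a = math.exp(f) * x0 - xT + m * math.exp(f) * h
lhs = 0.5 * gamma * (x_T_u - xT) ** 2
rhs = 0.5 * gamma * a ** 2 / (1.0 + gamma * math.exp(2 * f) * g ** 2) ** 2
assert math.isclose(lhs, rhs, rel_tol=1e-9)
```

Any other choice of positive `gamma` and coefficients leaves the two sides equal up to floating-point rounding, consistent with the algebraic derivation.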
Summary: This paper proposes a framework that unifies and extends various diffusion bridge methods by way of stochastic optimal control. In the case of linear dynamics, they derive a computationally tractable method, which can be thought of as a regularization of previous methods by the introduction of a new hyperparameter. Implementing this change requires only a minimal modification to existing code. They show that, by tuning this new hyperparameter, improved performance can be achieved in a number of benchmark examples. ## update after rebuttal My assessment has not changed substantially. I have maintained my score at 4: Accept In addition, earlier I posted a followup discussion regarding the proof of Prop. 4.3 under an incorrect heading, so the authors were unable to see it. I believe it would further improve the paper if the following was addressed, though it would not change my score either way: I do still believe that Proposition 4.3 is much more elementary than it is made out to be. Below is a more detailed version of the comment in my original review, on which the authors could base a revised elementary proof. Starting with the $\gamma=\infty$ case, the optimal control $u_{t,\infty}^*$ must lead to $x_T^{u_\infty^*}=x_T$, else the cost is infinite, and therefore $J(u_{t,\infty}^*,\infty)=\int_0^T\frac{1}{2}\|u_{t,\infty}^*\|^2_2dt$ Now looking at the finite $\gamma$ case, we can bound the minimum by the value at the control $u_{t,\infty}^*$: $J(u_{t,\gamma}^*,\gamma)=\min_u\{\int_0^T \frac{1}{2}\|u_t\|^2_2dt+\frac{\gamma}{2}\|x_T^{u}-x_T\|_2^2\} \leq \int_0^T \frac{1}{2}\|u_{t,\infty}^*\|^2_2dt+\frac{\gamma}{2}\|x_T^{u_\infty^*}-x_T\|_2^2$ The SDE determining $x_t$ doesn't depend on $\gamma$, hence in the above expression we still have $x_T^{u_\infty^*}=x_T$. 
Therefore we arrive at $J(u_{t,\gamma}^*,\gamma)\leq \int_0^T \frac{1}{2}\|u_{t,\infty}^*\|^2_2dt=J(u_{t,\infty}^*,\infty)$ Claims And Evidence: The majority of their claims are supported by clear and convincing evidence. Generalization through the methods of stochastic control is well-grounded theoretically, and they show improved performance empirically on a number of convincing benchmarks. The one claim that I don’t find substantiated revolves around their Proposition 4.3. They claim that this proposition lends theoretical support to the observation that introducing their new hyperparameter leads to improvements in practice. I find that result to be mathematically trivial and also to be disconnected from saying anything about how well the method performs in practice. However, I don’t think this proposition is integral to their work, and it could be removed (or de-emphasized) without any negative effects. Methods And Evaluation Criteria: I find the methods and evaluation criteria to make sense for the problem. Theoretical Claims: I checked the proof of theorem 4.1 and didn’t find any substantial issues. Experimental Designs Or Analyses: I reviewed the experimental designs in Section 5 and did not observe any issues. Supplementary Material: There were no attached supplementary materials. I did review the appendices. Relation To Broader Scientific Literature: The key contribution is a reformulation of diffusion bridge methods in the language of stochastic control, which motivates a natural one-parameter family of methods extending previous work, especially GOUB (Yue et al., ICML, 2024). I find this to be a natural and interesting extension. Essential References Not Discussed: I am not aware of any missing essential references. Other Strengths And Weaknesses: I found the contribution to be original and well-motivated theoretically, the writing to be relatively clear, and the performance improvements to be nontrivial. 
Other Comments Or Suggestions: 1) Equations 9 and 11 require expected values, yes? They are still stochastic control problems. 2) Above equation 19, the formula for the score should have a log. 3) Computations in Eq 36 and 37 have extraneous commas at the end of lines. 4) Line 705 appears to be repeated. Questions For Authors: 1) Regarding Proposition 4.3, perhaps I misunderstand, but I believe this proposition is trivial from the definitions. Under the hard constraint ($\gamma=\infty$) the terminal cost is enforced to be exactly zero, hence so both costs agree on $u^*_{t,\infty}$. Therefore the minimal cost when $\gamma<\infty$ can't be larger than the cost of the control $u^*_{t,\infty}$. I'm not sure this fact is deserving of its own proposition. I am also not convinced that comparing the minimal cost values in this way says anything about why one method performs better than another in any operational sense. I think the answer probably lies more in the direction of the soft constraint being a numerically better behaved regularization than the hard constraint. I think the authors need to provide a better intuitive discussion about why (17) should be expected to correspond to better performance in the experiments in section 5 or else remove this proposition and the surrounding paragraph from the main text, as well as alter the discussion in Section 4.5. 2) Could you comment further on the performance of UniDB (SDE) vs UniDB (ODE) in table 1. Specifically, is there a reason why the former performs well for LPIPS and FID while the latter performs well for PSNR and SSIM? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank you for your comment. > Claim & Question 2: The reviewer finds that result to be mathematically trivial and also to be disconnected from saying anything about how well the method performs in practice. However, the reviewer doesn’t think this proposition is integral to their work, and it could be removed (or de-emphasized) without any negative effects. The authors need to provide a better intuitive discussion about why (17) should be expected to correspond to better performance in the experiments in section 5 or else remove this proposition and the surrounding paragraph from the main text. We appreciate this feedback and can move Proposition 4.3 to the Appendix. Here we add more analysis to help **understand the practical implications of our UniDB**. When $\gamma$ approaches $\infty$, our UniDB reduces to Doob's h-transform, and **the control term $u$ in Eq (12) becomes ineffective**, leading to $||u_{{\gamma}}||^2_2 \leq ||u_{{\infty}}||^2_2$. Although Doob's h-transform ensures $x_T^u$ reaches the target endpoint $x_T$ exactly, i.e., $||x_T^{u_{\infty}}-x_T||^2_2 = 0$, it may force the model to preserve even harmful noise/artifacts in the target. This is because Doob's h-transform will apply disproportionately large control inputs $||u_{{\infty}}||^2_2$ to achieve such exact matching. The **large $u$ in the SDE trajectory may disrupt the inherent continuity and smoothness of images**. The over-control in Doob's h-transform violates the natural statistical properties of images, **prioritizing mathematical precision (pixel-perfect endpoints) over visual authenticity (realistic SDE trajectories)**. As shown in our **Figure 1**, Doob's h-transform can lead to artifacts along edges and unnatural patterns in smooth regions. Moreover, we want to emphasize that the discovery of **Proposition 4.3 is non-trivial**. 
Proposition 4.3 and the related mathematical derivations in Appendix A.3 show that $\mathcal{J}(u_{{\gamma}},\gamma) \leq \mathcal{J}(u_{{\infty}},\infty)$. **Generally, one cannot determine the ordering of $\mathcal{J}(u_{{\gamma}},\gamma)$ and $\mathcal{J}(u_{{\infty}},\infty)$** because although $||u_{{\gamma}}||^2_2 \leq ||u_{{\infty}}||^2_2$, we also have $||x_T^{u_{\gamma}}-x_T||^2_2 \geq ||x_T^{u_{\infty}}-x_T||^2_2 = 0$. We use strict mathematical derivations in Appendix A.3 to show that $\mathcal{J}(u_{{\gamma}},\gamma) \leq \mathcal{J}(u_{{\infty}},\infty)$ is true. **Though Proposition 4.3 does not directly mean better image quality by UniDB, it can show that UniDB leads to better overall performance considering both realistic SDE trajectories and target endpoint matching**. > Question 1: Is there a reason why the UniDB (SDE) performs well for LPIPS and FID while the UniDB (ODE) performs well for PSNR and SSIM? This is a common phenomenon in various diffusion models [R1-R3]. As analyzed in [R1], "although solvers for the probability flow ODE allow fast sampling, their samples typically have higher (worse) FID scores than those from SDE solvers if no corrector is used". The papers SDE-Drag [R2] and GOUB [R3] compare the performance of the SDE and ODE models and observe a similar phenomenon: the SDE model performs well for LPIPS and FID. Particularly in GOUB [R3], the experimental results also demonstrate the better performance of the ODE model for PSNR and SSIM. [R1]: Song et al. "Score-Based Generative Modeling through Stochastic Differential Equations.", ICLR 2021. [R2]: Nie et al. "The Blessing of Randomness: SDE Beats ODE in General Diffusion-based Image Editing", ICLR 2024. [R3]: Yue et al. "Image restoration through generalized ornstein-uhlenbeck bridge", ICML 2024. > Comments Or Suggestions: Equations 9 and 11 require expected values, yes? 
They are still stochastic control problems; Above equation 19, the formula for the score should have a log; Computations in Eq 36 and 37 have extraneous commas at the end of lines; Line 705 appears to be repeated. Yes, you're right. Thanks for pointing out these typos and we will correct them in the revised version.
Summary: This paper proposes a diffusion-based method for image restoration problems, e.g., super-resolution, deraining, and inpainting. Given a dataset of corrupted and clean image pairs, the goal is to construct a diffusion model that at inference generates clean images given corrupted images. The proposed method is based on stochastic optimal control (SOC), which re-frames the problem as an optimization over the drift of diffusion models. Such an SOC reformulation introduces a tunable hyper-parameter ($\gamma$) and recovers prior diffusion models based on Doob's h-transform as $\gamma$ approaches infinity. It is shown empirically that a rather large but finite $\gamma$ (~$10^7$) improves performance. Claims And Evidence: I'm not convinced by the implication of Prop 4.3 (the paragraph below Prop 4.3, before Sec 4.3). The SOC objective $J$ is (artificially) constructed as a surrogate for searching the drift of the SDE. That $J$ is smaller for finite $\gamma$ does *not* imply suboptimality in empirical performance. Intuitively, it does make sense to set $\gamma$ to infinity since we'd like $x_T^u$ to converge exactly to the given $x_T$. Any finite $\gamma$ would fail to achieve that, as shown in Prop 4.5. What's presented in this paper is somewhat counter-intuitive but also interesting, yet it is empirically observed rather than theoretically justified. Methods And Evaluation Criteria: Yes Theoretical Claims: Thm 4.1 is expected but still quite nice to see the actual analytic form! Though, I did not check the proof carefully. Experimental Designs Or Analyses: Comparison to baselines is insufficient. GOUB is a special case of UniDB (these values re-appear in Table 2 last column) and DDRM is not quite comparable as a non-learning-based method that additionally requires knowing the corruption type. The authors should compare their method to I2SB and its follow-up works (e.g., CDDB https://arxiv.org/abs/2305.19809). These works are also SOC-inspired methods for solving image restoration. 
Supplementary Material: Yes Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: - What are x0 and xT in practice? To my best guess, x0 would be the clean image and xT would be the corrupted image. So in Eq 12 we learn a forward process that brings x0 close to xT for every (x0, xT) pair, and then reverse it with Eq 15. Is this correct? It'd be better to have a clarification in Sec 3. Questions For Authors: - Eq 8 and 16 need more explanation. Why is it okay to drop the Brownian motion? This seems more like an empirical trick to get better PSNR and SSIM. - There seem to be some typos in Eqs 15 and 16: \nabla p --> \nabla \log p Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank you for your comments and questions. We will provide a detailed response to these concerns. > Claim: The reviewer is not convinced by the implication of Prop 4.3: that $\mathcal{J}$ is smaller for finite $\gamma$ does not imply suboptimality in empirical performance. Intuitively, it does make sense to set $\gamma$ to infinity since we'd like $x_T^u$ to converge exactly to the given $x_T$. Thank you for raising this important point. We agree that the intuition behind Proposition 4.3 deserves clarification. Below, we address **why strict terminal matching ($\gamma\to\infty$) is not ideal** for practical performance, even if it seems mathematically appealing. **1. Exact terminal matching ($\gamma\to\infty$) harms image quality.** While Doob’s h-transform achieves $||x_T^{u_{\infty}}-x_T||^2_2 = 0$, it requires large control ($||u_{\infty}||^2_2\geq||u_{\gamma}||^2_2$) to force exact endpoint matching. The **large $u$ in the SDE trajectory ($dx_t=(f_tx_t+h_t\mathbf{m}+g_tu_{\gamma})\,dt+g_t\,dw_t$) may disrupt the inherent continuity and smoothness of images**. Prioritizing pixel-perfect endpoints over smooth trajectories leads to "mathematically correct but visually unrealistic" outputs. Our experiments (Figure 1) confirm that Doob's h-transform can lead to artifacts along edges and unnatural patterns in smooth regions. **2. Why does finite $\gamma$ work better?** By keeping $\gamma$ finite, UniDB explicitly balances two goals: **target matching** $||x_T^{u_{\gamma}}-x_T||^2_2$ (terminal penalty) and **trajectory smoothness** $||u||^2_2$ (controller). Proposition 4.3 shows that **finite $\gamma$ achieves a lower total cost $\mathcal{J}$ not by sacrificing performance**, but by optimally **trading minor terminal mismatches for significantly smoother, more natural diffusion paths**. Thus, **Proposition 4.3 reflects a key insight**: real-world image generation benefits more from **stable trajectories than rigid mathematical constraints**.
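To make this trade-off concrete, here is a toy deterministic control problem of the schematic form $\mathcal{J}(u)=\|u\|^2_2+\gamma\|x_T^u-x_T\|^2_2$ (an illustrative sketch we construct for this response, not the UniDB SDE):

```python
# Toy illustration (not the paper's SDE) of the terminal-penalty trade-off.
# System: dx = u dt on [0, 1], x_0 = 0, constant control u, target x_T = 1.
# Then x_T^u = u and J(u) = u^2 + gamma * (u - 1)^2.

def optimal_cost(gamma):
    """Closed-form minimizer u* = gamma / (1 + gamma); returns (u*, J(u*))."""
    u_star = gamma / (1.0 + gamma)
    cost = u_star ** 2 + gamma * (u_star - 1.0) ** 2
    return u_star, cost

for gamma in [1.0, 10.0, 1e7]:
    u, J = optimal_cost(gamma)
    print(f"gamma={gamma:.0e}: u*={u:.7f}, terminal gap={1 - u:.2e}, J={J:.7f}")

# The Doob limit (gamma -> infinity) forces u = 1 exactly, with J = 1;
# any finite gamma attains J = gamma / (1 + gamma) < 1, i.e. a strictly
# lower total cost at the price of a small terminal mismatch.
```

As $\gamma$ grows, the terminal gap shrinks toward zero while the control energy grows toward the Doob limit, which mirrors the balance discussed above.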
We will clarify this intuition in the revised manuscript. > Experiment: Comparison to baselines is insufficient: the authors should compare their method to I2SB and its follow-up works (e.g. CDDB). Thank you for your feedback. Below, we clarify why our current comparisons are sufficient and address your concerns about I2SB and CDDB: 1. **Direct Comparison to State-of-the-Art (DDBM > I2SB):** - **DDBM** (a SOTA baseline) is **strictly superior to I2SB** (see Table 2 in [DDBM paper], where DDBM achieves FID **4.43** vs. I2SB’s **9.34** on DIODE-256×256, and **1.83** vs. I2SB’s **7.43** on Edges→Handbags-64×64). - **UniDB outperforms DDBM** across multiple tasks (Tables 3–5 in Appendix E): Super-resolution (DIV2K, CelebA-HQ), Deraining (Rain100H). - Critically, **DDBM is a special case of UniDB** (UniDB with $\gamma=\infty$), meaning our method inherently subsumes and improves upon this stronger baseline. 2. **I2SB's Computational Burden.** - **Training I2SB is prohibitively expensive**. It requires **16×V100 GPUs** with **more than 1 week** (*per communication with I2SB authors*). - The inference time of I2SB is **90 seconds per 256×256 image** (vs. our method’s **less than 5 seconds**). - Given hardware/time constraints, reproducing I2SB (and its follow-ups) for fair comparison is practically infeasible during rebuttal. Since **DDBM already outperforms I2SB**, and UniDB outperforms DDBM, we argue that **direct I2SB comparisons are redundant**. 3. **CDDB is Orthogonal to Our Contribution.** - **CDDB is a training-free, plug-and-play refinement module** for I2SB, not a standalone method. - Such post-hoc techniques could also **integrate with UniDB** (e.g., applied during inference), but this is **beyond our paper’s scope**, which focuses on developing **a unified training framework for diffusion bridge**. We appreciate your suggestion and will explicitly discuss the relationship between UniDB, DDBM, I2SB, CDDB in the revised manuscript. 
> Comment: What are x0 and xT in practice? x0 would be the clean image and xT would be the corrupted image, is this correct? Yes, you're right. It is standard in diffusion-based image restoration research to use x0 as the clean image and xT as the corrupted image. We'll make it clearer in the revised version. > Q1: Eq 8 and 16 need more explanation. Why is it okay to drop the Brownian motion? This seems more like an empirical trick to get better PSNR and SSIM. Yes, **dropping the Brownian motion** is an empirical trick that **we followed from GOUB, as stated in line 165 of our paper**. It can obtain better results on image restoration tasks, capturing more pixel details and structural perceptions of images (which contributes to better PSNR and SSIM). This phenomenon has been verified by GOUB's three ablation experiments. > Q2: There seem to be some typos in Eqs 15 and 16: \nabla p -> \nabla \log p. Yes, you're right. We will correct it in the revised version.
What Makes In-context Learning Effective for Mathematical Reasoning
Accept (poster)
Summary: In this paper, the authors investigate the theoretical explanation of in-context learning (ICL). They prove that the influence of the demonstrations can be estimated by two factors: LLM-oriented semantic similarity and inference stability of demonstrations. Based on it, they propose a LMS3 method, and the experiments on Llama2 and Llama3 validate the effectiveness of LMS3 under both one-shot and few-shot settings, as well as for both mathematical reasoning and commonsense reasoning tasks. Claims And Evidence: I have reviewed all the theoretical proofs in this paper and find the claims and results both reasonable and correct. Additionally, the authors have conducted experiments across one-shot to four-shot settings, providing convincing evidence of the proposed method's effectiveness. Methods And Evaluation Criteria: The proposed method is grounded in the theoretical findings, which makes sense and offers a novel and deeper understanding of ICL. Additionally, the experiments are conducted on three mathematical reasoning datasets and one commonsense reasoning dataset. Therefore, this paper provides strong empirical validation. Theoretical Claims: Yes, I have checked the correctness of the proofs in Section 3. They are self-contained and reasonable, providing a good perspective for discussing the influence of demonstrations. Experimental Designs Or Analyses: Yes, the experiments are conducted using two LLMs, four datasets, and four few-shot settings. Therefore, I think the effectiveness is validated. Supplementary Material: I review all sections in the supplementary material. They enhance the clarity and comprehensibility of the paper. Relation To Broader Scientific Literature: The authors expand the existing understanding for ICL based on a widely-used theoretical setting. Although the simplification of the softmax function in Eq.(3) may deviate from reality, it has been widely adopted in previous work. Essential References Not Discussed: No. 
The references are cited and discussed sufficiently Other Strengths And Weaknesses: Strengths: First, building on existing analyses of ICL, this work innovatively derives the theoretical relationship between test loss and demonstrations. Since no prior work has reached a similar conclusion, the novelty of this paper is well justified. Second, this paper explains how the effectiveness of ICL is determined by both the LLM-oriented Semantic Similarity and Inference Stability of Demonstration. I think this conclusion makes sense, and it provides insights for practical applications of ICL. Third, the experiments are thorough, with the authors using two LLMs as backbones and conducting evaluations across 1-shot to 4-shot settings. Last, the writing is clear, making the paper easy to understand and reproduce. Weakness: The generalizability of the theory and methods in this paper could be further discussed (Q1 below). Besides, I still have some questions regarding the theoretical analyses (Q2 below). Other Comments Or Suggestions: In Eq.(23), lambda_1 is misused, as it was previously introduced to represent an eigenvalue Line 105, “and offers” should be “and offer” Line 75, “generate code” should be “generate codes” Questions For Authors: Q1: In this paper, the author emphasizes the analysis of situations where ICL is effective for mathematical reasoning. I believe the analysis is sound, but the conclusion could also be applied to other tasks, as the author has demonstrated its effectiveness on CommonsenseQA as well. Therefore, I think the author could further discuss the generalizability of this theory and explain whether it could be effective for other tasks. Q2: In Eq.(6), I am unsure how the pre-trained data D_pre and the demonstration z0 are optimized simultaneously, because in my understanding, D_pre is for pretraining and z0 is for the gradient update in the inference phase? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your recognition of our reasonable, correct, and self-contained theoretical analysis, the novelty and effectiveness of our method, and our convincing and strong empirical validation. $\bf{Q1}$: The generalizability of the theory and methods in this paper could be further discussed. $\bf{A1}$: Thanks for your constructive suggestion! In this paper, we are motivated by the observation that, on several math datasets, LLMs may perform worse in the one-shot than in the zero-shot setting. Therefore, in our experiments, we use these math datasets for evaluation, which can provide direct evidence of our method’s advantage. As highlighted in Appendix E, our theories and method can also generalize to other datasets and tasks. This is because they are built upon a general setting of the transformer attention layer and the relationship between demonstrations and test samples, which is also suitable for other domains and tasks. For instance, in Section 5.6, we conduct experiments on the CommonsenseQA dataset, which is a widely used large-scale commonsense benchmark in ICL research [1,2,3]. From Table 4, our LMS3 still achieves the best performance, which confirms its effectiveness and highlights its generalizability on a broader range of datasets/applications. Following your suggestions, we will supplement the above discussions in the revised version. [1] Compositional Exemplars for In-context Learning. [2] In-Context Learning with Iterative Demonstration Selection. [3] Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? $\bf{Q2}$: In Eq.(6), I am unsure how the pre-trained data $D_{pre}$ and the demonstration $z_0$ are optimized simultaneously. $\bf{A2}$: Thanks for your valuable question! We sincerely apologize for the confusion and would like to clarify as follows.
In the “Analogy to Linear Optimization” section, we interpret the influence of adding a demonstration $x$ as follows: we start with a linear function $F$, whose parameters are initialized on the pretraining dataset $D_{pre}$. Introducing $x$ is essentially equivalent to adding a training example $z_0$ to further optimize $F$, after which the optimized $F$ is used to reason about the test sample $x_{test}$. Based on this idea, our goal is to quantify the change in test loss $L$ for $x_{test}$ resulting from adding $z_0$. To achieve this, inspired by the influence function, we define Eq.(6) to denote the parameters after training with $z_0$, which can be obtained by setting $\epsilon=\frac{1}{|D_{pre}|}$ in Eq.(6). On this basis, we further quantify the testing loss $L$ leveraging a Taylor approximation as shown in Eqs.(8) and (9), which serve as the foundation for subsequent theoretical analysis. Thus, in fact, Eq.(6) is a conceptual and intermediate tool for theoretical analysis rather than representing actual training on $D_{pre}$ and the demonstration $x$ simultaneously. In response to your comments, we will incorporate the above discussion and clarification to make our paper clearer. $\bf{Q3}$: In Eq.(23), $\lambda_1$ is misused. There are some typos. $\bf{A3}$: Thanks for your meticulous review and pointing out these issues! We will carefully correct the misuse of $\lambda_1$ and the typos in the revised version.
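The $\epsilon$-upweighting tool described in A2 can be illustrated with a minimal numeric sketch, using scalar least squares in place of the attention-layer setting of the paper (all data and names here are illustrative, not from the paper):

```python
import numpy as np

# Upweighting a new example z0 by eps = 1/n mimics appending it to the
# training set; the first-order (influence-function) estimate of the
# resulting change in test loss is then compared against actually refitting.
# Model: scalar least squares, l((x, y), theta) = (theta * x - y)^2.

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 2.0 * x + 0.1 * rng.normal(size=50)       # "pretraining" data D_pre
n = len(x)

theta_hat = (x @ y) / (x @ x)                 # parameters fit on D_pre alone
H = (2.0 / n) * (x @ x)                       # Hessian of the mean loss

x0, y0 = 1.5, 2.7                             # the added demonstration z0
x_test, y_test = 1.0, 3.0                     # the test sample

grad_z0 = 2.0 * x0 * (theta_hat * x0 - y0)    # grad of l(z0, .) at theta_hat
dtheta = -(1.0 / n) * grad_z0 / H             # first-order parameter change
grad_test = 2.0 * x_test * (theta_hat * x_test - y_test)
delta_L_approx = grad_test * dtheta           # Taylor estimate of Delta L

# Ground truth: actually refit with z0 appended.
x2, y2 = np.append(x, x0), np.append(y, y0)
theta_new = (x2 @ y2) / (x2 @ x2)
delta_L_true = (theta_new * x_test - y_test) ** 2 - (theta_hat * x_test - y_test) ** 2

print(delta_L_approx, delta_L_true)           # the two estimates should be close
```

The point of the sketch is the one made in A2: the $\epsilon$-upweighted objective is a conceptual device whose first-order expansion tracks the effect of actually adding the demonstration, without retraining.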
Summary: This paper aims to explore the underlying mechanism of in-context learning (ICL). To this end, the authors first theoretically analyze the influence of the demonstrations on inference performance, where they prove that the performance is bounded by an LLM-oriented semantic similarity and the demonstration stability. Then, based on this finding, they propose an LMS3 method for demonstration selection. From extensive experiments on two widely-used LLM backbones and multiple few-shot settings, they validate the superiority of LMS3. ## update after rebuttal The authors have addressed my concerns, and I would like to retain my positive score. Claims And Evidence: I believe that the claims are supported by clear and convincing evidence. On one hand, the authors provide fundamental theoretical analyses to reveal the impact of demonstrations on ICL performance, offering a solid foundation for the proposed method. On the other hand, they conduct experiments with two representative LLMs and compare them against 10 baselines under various few-shot settings. Therefore, I believe the experiments are sufficient to validate the effectiveness of the proposed method. Methods And Evaluation Criteria: The proposed method has a strong theoretical foundation and is well designed for the problem. This paper employs answer accuracy as the evaluation metric, which is a widely used and reasonable setting. This choice ensures the reliability of the results. Theoretical Claims: I have checked the proofs of all theoretical claims (i.e., Theorem 1 and Theorem 2) and verified their correctness. Additionally, I have ensured that all assumptions and derivations are logically consistent and properly justified. Experimental Designs Or Analyses: Yes, I have carefully checked the soundness and validity of the experimental designs and analyses. Specifically, I examined the correctness of the experimental setups to ensure that they align with standard practices in the field.
I also examined the selection and preprocessing of the datasets used for evaluation. Additionally, the experiments have been run multiple times to ensure the stability of the findings. No significant issues were identified, and the experimental results are consistent with theoretical expectations. Supplementary Material: Yes, I have reviewed all details in the supplementary material. Specifically, I concentrated on the proof for Theorem 1, ensuring the correctness and logical consistency of all derivations. Additionally, I carefully examined the pseudo-code to confirm its alignment with the described methodology, checked the implementation details for completeness and reproducibility, and read the case study and discussions to ensure clarity and coherence with the main findings. Relation To Broader Scientific Literature: This paper builds on prior work in in-context learning by providing a theoretical analysis of how demonstrations influence LLM reasoning performance. Unlike prior heuristic-based or semantics-based selection methods, the proposed LMS3 is theoretically grounded, generalizable, and introduces a novel demonstration rejection mechanism. The empirical results further strengthen its contribution by demonstrating consistent improvements across multiple benchmarks and LLMs, addressing a key limitation of previous methods that lacked robustness across settings. Essential References Not Discussed: I think the related work has been discussed sufficiently. Other Strengths And Weaknesses: Strengths: 1. The paper provides a rigorous theoretical analysis of ICL, revealing the importance of LLM-oriented semantic similarity and inference stability for reasoning performance. These theoretical findings offer a deeper understanding of when and why demonstrations help or hurt ICL. 2. The proposed LMS3 method is simple but practical and efficient. Notably, the demonstration rejection mechanism is a novel contribution. 
I think it is the first exploration of when ICL should not be used. This perspective fills an important gap in existing research. 3. The empirical evaluation is thorough, covering two LLM backbones, 10 baselines, and several few-shot settings. Additionally, the experiments have been run multiple times and the authors provide the confidence intervals. These results consistently support the theoretical claims and demonstrate the robustness of LMS3. 4. The paper is clearly written, well-structured, and easy to reproduce. There are some minor issues with this paper: 1. I think the theoretical analyses are well-founded, but I hope to see if the authors could give more discussion about how to extend them to other domains or tasks (please see Question 1 below). 2. More experiments could be conducted to validate the necessity of the rejection mechanism (please see Question 2 below). 3. There exist some typos. Other Comments Or Suggestions: I found some typos, including: --Line 87, “remain”->”remains” --Line 116, “satisfies”->”satisfying” --Line 265, “suggests”->”suggest” Questions For Authors: 1. The theoretical analyses in this paper are highly general. Therefore, I suggest the authors discuss the possibility of extending them to other domains or tasks (e.g., multi-choice QA). 2. Apart from the experiments in Section 5.5., I think the authors can combine the rejection mechanism with other demonstration selection methods to further validate its necessity. 3. Just out of curiosity, to measure the Inference Stability of Demonstration X, could we directly test the performance of the inference LLM on X (e.g., calculate the accuracy)? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your recognition of our theoretical analysis, the clarity and good writing of our paper, and the strong performance of our method. As for your concerns: $\bf{Q1}$: The theoretical analyses in this paper are highly general. Therefore, I suggest the authors discuss the possibility of extending them to other domains or tasks (e.g., multi-choice QA). $\bf{A1}$: Thanks for your recognition of our theoretical analysis and this valuable suggestion. We appreciate the opportunity to discuss its broader applicability. Although the motivation of our paper stems from observations in the mathematical reasoning task, the underlying principles can be applied more broadly. Indeed, as discussed in Appendix E, our conclusions can be extended to other tasks beyond those explored in this work. This is because our theoretical analyses are based on a general setup of the transformer architecture and the relationship between demonstrations and test samples. As long as a task can benefit from demonstration-based prompting (e.g., multi-choice QA as you mentioned), our theoretical conclusions about LLM-oriented Semantic Similarity and Inference Stability of Demonstration in Eqs. (21) and (22) remain applicable. To validate this, we applied our method to the CommonsenseQA dataset in Section 5.7, which is a large-scale commonsense benchmark and has been widely used in ICL research [1,2,3]. As shown in Table 4, our method still achieves the best performance, further demonstrating its general applicability. We sincerely appreciate your insightful comment and are very willing to explore the performance of our method on more tasks for future research. We will also enrich our discussion section in the revised version. [1] Compositional Exemplars for In-context Learning. [2] In-Context Learning with Iterative Demonstration Selection. [3] Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?
$\bf{Q2}$: Combine the rejection mechanism with other demonstration selection method to further validate its necessity. $\bf{A2}$: Thanks for your constructive suggestion! Following your suggestion, we supplement additional experiments using LLaMA3-8B as the backbone as follows. Specifically, we apply our demonstration rejection mechanism to all baselines, denoted as "+Our" in the table below.

|LLaMA3-8B| MAWPS | GSM8K | MATH |
|-|-|-|-|
| Random|0.951|0.813|0.330|
| +Our|0.952|0.818|0.349|
| Best-validate|0.932|0.817|0.332|
| +Our|0.941|0.829|0.344|
|TF-IDF|0.945|0.803|0.344|
| +Our|0.946|0.818|0.351|
| BM25|0.932|0.805|0.334|
| +Our|0.934|0.812|0.335|
| T5|0.948|0.817|0.330|
| +Our|0.953|0.828|0.333|
| BGEM3|0.938|0.802|0.340|
| +Our|0.941|0.822|0.350|
| OpenAI|0.965|0.809|0.346|
| +Our|0.973|0.818|0.347|
| SPELL|0.945|0.821|0.343|
| +Our|0.946|0.826|0.345|
| Influence|0.929|0.800|0.333|
| +Our|0.935|0.810|0.340|
| IDS|0.920|0.808|0.330|
| +Our|0.932|0.823|0.346|

The results consistently show that our mechanism enhances all baselines, regardless of whether they rely on retrieval-based similarity metrics (e.g., TF-IDF, BM25) or influence-based strategies. This suggests that our rejection mechanism can serve as a general enhancement technique that improves the robustness and effectiveness of various demonstration selection approaches. This also highlights the necessity of considering when to include a demonstration in in-context learning, rather than always providing demonstrations indiscriminately. Besides, our method leads to performance improvements across different datasets, demonstrating its broad applicability. Thank you again for your valuable suggestion! We will incorporate this experiment and its analysis into the revised version to further support our findings. $\bf{Q3}$: Just for curiosity, to measure the Inference Stability of Demonstration $X$, could we directly test the performance of the inference LLM on $X$ (e.g., calculate the accuracy)?
$\bf{A3}$: Thanks for your insightful question! Yes, we believe that directly testing the performance of demonstration $X$, such as calculating accuracy, can serve as a direct measure of the Inference Stability of $X$. This provides a straightforward way to assess its stability. However, one potential challenge is that achieving a reliable measurement of stability in this way might require multiple calls to the LLM. This is because a single inference may not fully reflect the model's performance over time, while averaging results from multiple runs could give a more accurate measurement of the demonstration's stability. Consequently, this method could incur additional computational costs due to the need for repeated evaluations of the demonstration. $\bf{Q4}$: I found some typos. $\bf{A4}$: Thanks for your meticulous review and pointing out these typos! We will carefully correct them in the revised version.
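The repeated-evaluation estimate discussed in A3 can be sketched as follows. Here `call_llm` is a hypothetical stub standing in for a sampled LLM call; it is not part of LMS3, and the success probabilities are invented for illustration:

```python
import random

# Estimate the inference stability of a demonstration as the average
# correctness over k independent stochastic runs.

def call_llm(question, rng):
    # Stub: succeeds with a question-specific probability (illustrative).
    return rng.random() < question["p_correct"]

def stability(question, k, seed=0):
    rng = random.Random(seed)
    return sum(call_llm(question, rng) for _ in range(k)) / k

easy = {"p_correct": 0.95}   # a demonstration the model solves reliably
hard = {"p_correct": 0.40}   # an unstable demonstration

print(stability(easy, 200), stability(hard, 200))
```

A single run (k = 1) yields a noisy 0/1 estimate; the standard error shrinks roughly as 1/√k, but each increment of k costs another LLM call, which is exactly the computational trade-off noted in A3.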
Summary: In-context learning has been a key driver of LLM performance over the past few years. However, the performance of a model can vary (and sometimes even be negatively impacted) based on the content of the few-shot demonstrations provided in-context. This work provides a theoretical analysis of the conditions under which ICL is beneficial and finds that performance depends on 2 key factors: 1) The semantic similarity of the demonstration to the test problem and 2) The inference stability of the demonstration, which indicates how easily the LLM can solve the demonstration itself. Building on their theoretical insights, the authors present LMS3, a simple algorithm to select the demonstration(s) for a given problem by trading off between the 2 objectives defined above. Results show that their proposed algorithm improves performance over existing ICL selection methods and surprisingly, they also find that for some questions, having no demonstrations is actually beneficial! Claims And Evidence: Yes, theoretical claims are supported by appropriate proofs - however I have not checked the math carefully. Their proposed algorithm is validated on 3 standard benchmarks for mathematical reasoning and shows strong performance. Methods And Evaluation Criteria: Yes, standard evaluation criteria (accuracy) are chosen and appropriate benchmarks are selected. Confidence intervals are also provided for the main set of results. Theoretical Claims: I did not check the correctness of proofs. Experimental Designs Or Analyses: Yes, experimental design and analysis is valid. Supplementary Material: N/A Relation To Broader Scientific Literature: This paper provides theoretical grounding to empirical findings that have been observed in prior work. Essential References Not Discussed: N/A Other Strengths And Weaknesses: - This is a very well-written paper with theoretical analysis backed by a practical algorithm.
The paper also does a good job of building intuition and the final findings align well with prior work in the area. - The algorithm presented requires white-box access to the LLM. However, given the strong generalization performance across LLMs, this might not be an issue. Other Comments Or Suggestions: N/A Questions For Authors: - In the K-shot setting, is it fair to treat the demonstrations as independent of each other? Aren't there potentially significant interaction effects that need to be accounted for? - Can insights from this work potentially be used to design a method that generates effective demonstrations? That is, use some form of optimization to generate an X that maximizes the score function. This might result in better performance compared to a fixed offline demonstration set. - By how much would the inference runtime for a query be affected (compared to random demonstration) if LMS3 is used with a fixed demonstration dataset? If I understand correctly, since the embeddings of all demonstrations are computed independently and stored, the overhead should be minimal? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your affirmation of the effectiveness of our experiments, the good writing of our paper, and the significance of our work. As for your concerns: $\bf{Q1}$: The algorithm presented requires white box access to the LLM. However, given the strong generalization performance across LLMs, this might not be an issue. $\bf{A1}$: Thanks for your valuable comment! Yes, as our theoretical analysis suggests, the influence of demonstrations is determined by two key factors: LLM-oriented Semantic Similarity and Inference Stability of Demonstration. As defined in Eqs. (21) and (22), both factors are computed based on model representations. While they require white-box access to the model, this aligns with intuition, as different LLMs may benefit from different demonstrations depending on their own capabilities and characteristics. To address the concern about generalization, as you pointed out, we examined this aspect in Section 5.6 by applying the demonstrations selected by our LMS3 method using Llama3-8B as backbone to ChatGPT and GPT-4. As shown in Table 3, these demonstrations still significantly improve their accuracy, demonstrating the strong generalization capability of our method. Under these conditions, LMS3 continues to achieve the best overall performance, highlighting its potential to provide valuable demonstrations even when working with closed-source LLMs in practical applications. We sincerely appreciate your thoughtful comment and hope this explanation addresses your concerns. $\bf{Q2}$: In the K-shot setting, is it fair to treat the demonstrations as independent of each other? $\bf{A2}$: Thank you for the insightful question! In this work, our theoretical analysis considers the most fundamental case and starts with a single attention layer in transformers. Under this setup, as shown in Eq. 
(17), the influences of different demonstrations on the representation of the test sample $h_{test}$ follow an almost linear relationship, which allows us to treat them independently. In a full transformer architecture where deeper interactions occur (e.g., multiple layers of cross-attention), the representations of different demonstrations will interact in more intricate ways, which may lead to complex effects on $h_{test}$ that may not be directly measurable. Therefore, our findings provide a foundational understanding that can offer insights into practical scenarios. Moreover, even under this theoretical simplification, our method LMS3 consistently outperforms the baselines across all settings from 2-shot to 4-shot (Figure 3). This validates the feasibility of our theoretical results and the effectiveness of our method. Following your comment, we are very willing to further explore the impact of different demonstration combinations in the future. $\bf{Q3}$: Can insights from this work potentially be used to design a method that generates effective demonstrations? This might result in better performance compared to a fixed offline demonstration set. $\bf{A3}$: Thanks for your constructive suggestion! We fully agree with the idea that our work can be used for effective demonstration generation. This is because our theoretical analysis sheds light on what characteristics contribute to their effectiveness, and we can easily estimate the scores of demonstrations by Eq.(24) in our paper. This presents an exciting direction worth further exploration. Further considering this idea, we think the only challenge in implementation is ensuring that the generated demonstrations have correct answers. While a fixed offline demonstration set allows for manual curation to guarantee correctness, dynamically generated demonstrations require additional mechanisms to verify their validity and reliability. Developing such mechanisms remains an open and important question.
We greatly appreciate your thought-provoking idea and will supplement this discussion in our revised version. $\bf{Q4}$: Inference runtime for a query if LMS3 is used with a fixed demonstration dataset. If I understand correctly, the overhead should be minimal? $\bf{A4}$: Thanks for your valuable question. Yes, your understanding is correct. Since the representations of all demonstrations can be precomputed in advance, the inference process in our LMS3 only involves encoding the test sample and retrieving the relevant precomputed information. This ensures that our method achieves the minimal computational complexity during inference, as shown in Table 1. We appreciate your thoughtful question and hope this clarifies our approach. --- Rebuttal Comment 1.1: Comment: I thank the authors for taking the time to answer my questions! This paper presents interesting theoretical analysis and backs it up with interesting empirical analysis. The few limitations (generalizability and the assumption that demonstrations are independent of each other) will be interesting follow-up works. I have raised my score. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your constructive comments and valuable feedback! We will also incorporate our discussion on these aspects into the revised version. Thank you again for your time and for raising your score!
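The precompute-then-retrieve pattern discussed in A4 can be sketched as follows. The scoring rule, the rejection threshold, and all names here are illustrative assumptions, not the exact LMS3 formulas from the paper:

```python
import numpy as np

# Offline: embed each demonstration once and precompute a stability score.
# Online: embed only the test sample and take dot products, so the
# per-query overhead is O(N * d) regardless of how the scores were built.

rng = np.random.default_rng(0)
N, d = 1000, 64
demo_emb = rng.normal(size=(N, d))
demo_emb /= np.linalg.norm(demo_emb, axis=1, keepdims=True)   # offline
demo_stability = rng.random(N)                                # offline (stub)

def select_demo(test_emb, alpha=0.5, threshold=0.8):
    """Return the best demo index, or None (zero-shot) if no score passes."""
    test_emb = test_emb / np.linalg.norm(test_emb)
    similarity = demo_emb @ test_emb            # semantic-similarity term
    score = alpha * similarity + (1 - alpha) * demo_stability
    best = int(np.argmax(score))
    return best if score[best] >= threshold else None   # rejection rule

demo = select_demo(rng.normal(size=d))
print(demo)
```

Returning `None` models the rejection mechanism: when no demonstration scores highly enough, the query falls back to zero-shot prompting.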
Fully Dynamic Euclidean Bi-Chromatic Matching in Sublinear Update Time
Accept (oral)
Summary: This paper studies geometric bi-chromatic matching (aka the optimal transportation problem) for discrete distributions in the dynamic setting, where points could undergo insertions and deletions. Here, we are given $n$ points inserted and deleted dynamically, and we want to always maintain an approximation to the optimal matching between the red and blue points. The main result of the paper is a dynamic algorithm that maintains an implicit representation of a $(1/\varepsilon)$-approximation to the optimal bi-chromatic matching in $O(n^{1+\varepsilon})$ pre-processing time and $O(n^{\varepsilon}/\varepsilon)$ update time. The paper further complements the algorithm with a lower bound of $\Omega(n)$ update time for $<2$-approximation, indicating that we cannot expect a $(1+\varepsilon)$-approximation as in the offline setting. The paper further conducted experiments, showing that the proposed algorithm has a significant efficiency advantage over the static algorithm. Claims And Evidence: Yes. The algorithm is quite natural, and the detailed descriptions and the proofs are included in the appendix. Methods And Evaluation Criteria: Yes. Since this paper is the first to consider dynamic bi-chromatic matching, comparing it with the offline algorithm seems reasonable. Theoretical Claims: The main technical idea of the paper is to take advantage of the $p$-tree structure to maintain an implicit representation of the matching. Here, if we pick $p=n^{O(\varepsilon)}$, the $p$-tree will be of depth only $O(1/\varepsilon)$. The algorithm first tries to resolve matching inside each leaf node; if there is any surplus of a color, the algorithm aggregates the ‘surplus’ and matches the mass to sibling nodes in the $p$-tree. The process could be done recursively only on $O(1/\varepsilon)$ nodes per point update (since we only need to follow a path from leaf to root), which results in a sublinear update time algorithm.
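For intuition, the leaf-to-root surplus aggregation can be mimicked in one dimension. This is a toy static sketch with a binary (rather than $p$-ary) tree and unit masses, not the paper's dynamic algorithm:

```python
from collections import defaultdict

def hierarchical_match_cost(red, blue, levels=20):
    """Match points in [0, 1) bottom-up; return (cost upper bound, #pairs)."""
    width = 1.0 / (1 << levels)
    cells = defaultdict(lambda: [0, 0])          # cell index -> [#red, #blue]
    for x in red:
        cells[int(x / width)][0] += 1
    for x in blue:
        cells[int(x / width)][1] += 1
    cost, pairs = 0.0, 0
    for _ in range(levels + 1):                  # leaf level up to the root
        parents = defaultdict(lambda: [0, 0])
        for idx, (r, b) in cells.items():
            m = min(r, b)
            cost += m * width                    # a pair made here costs <= width
            pairs += m
            parents[idx // 2][0] += r - m        # surplus moves to the parent
            parents[idx // 2][1] += b - m
        cells, width = parents, width * 2.0
    return cost, pairs

cost, pairs = hierarchical_match_cost([0.1, 0.2, 0.9], [0.12, 0.25, 0.88])
print(cost, pairs)
```

Note how a pair like 0.2 and 0.25 can straddle a dyadic cell boundary and only get matched near the root despite being close; this kind of slack is what an approximation factor has to absorb, and the $p$-ary tree with $p=n^{O(\varepsilon)}$ keeps the depth, and hence the accumulation, at $O(1/\varepsilon)$.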
The design of the algorithm is quite natural and straightforward. Due to its simplicity, I did not check the details of the proof. I’m confident the algorithm is correct. Experimental Designs Or Analyses: Yes, nothing looks bad to me. Supplementary Material: The appendix contains the formal pseudocode of the algorithm, the omitted proofs, a lower bound, and additional experiments. I briefly checked the lower bound and the additional experiments, and they look good. Relation To Broader Scientific Literature: Bi-chromatic matching (also known as optimal transportation) has a wide range of applications in machine learning. Solving the problem in the dynamic setting is important in my opinion. Essential References Not Discussed: No essential reference is missing, as far as I know. Other Strengths And Weaknesses: My overall opinion of this paper is positive. Optimal transportation is an important problem in machine learning, and the dynamic setting is an important scenario increasingly common in modern applications. I’m surprised the problem was not studied before. From a technical perspective, the paper also correctly identified the gap between geometric matching and graph matching and obtained a sublinear update time dynamic algorithm for the former. The paper is also well-written, and I could follow most of the arguments without checking the details. The techniques are relatively straightforward, and this might be seen as limiting the technical contribution. However, I personally like simple and cute algorithms and do not have a problem with them. Other Comments Or Suggestions: The texts in Figures 2 and 3 are generally very small. I had to zoom in 400% to read. I understand this might be a problem with the template. My suggestion would be to have the same figures with better resolutions in the appendix and add a pointer. Questions For Authors: Do you have any opinion on how tight your result is? 
Is it possible to, say, get any approximation in $\text{polylog}(n)$ update time? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their time and for providing valuable comments. - “The texts in Figures 2 and 3 are generally very small. My suggestion would be to have the same figures with better resolutions in the appendix and add a pointer.” All figures will be replicated in the appendix, allowing for a more readable presentation. - “Do you have any opinion on how tight your result is? Is it possible to, say, get any approximation in $\text{polylog } n$ update time?” For a choice $\varepsilon = 1/\log n$, our algorithm reports an $O(\log n)$-approximate solution in $O(\text{polylog } n)$ update time. We believe that there are instances where our current algorithms attain the claimed trade-off between approximation ratio and update time. It’s an interesting open question whether such a trade-off can be improved; doing so will likely require new ideas. --- Rebuttal Comment 1.1: Comment: Thanks for the clarification. I think it would be helpful if you could add more discussion about the tightness of your algorithm in your final version. In light of the discussion, I'll keep my score as it is. --- Reply to Comment 1.1.1: Comment: Thank you for the response. We'll add an example showing the tightness of our algorithm in the final version.
Summary: This paper presents a dynamic data structure that maintains a bipartite matching and supports insertions and deletions of points. Given two point sets in $2$-dimensional space and a parameter $\varepsilon>0$, the data structure computes an $O(1/\varepsilon)$-approximate matching and handles updates in $O(n^{\varepsilon})$ time. The data structure stores a $p$-tree, which is a hierarchical partitioning similar to quadtrees but with a higher fan-out of $p^2$. For each cell of the hierarchical partitioning, the data structure maintains an "implicit matching", which can then be converted to an explicit matching in linear time. The data structure gathers the excess demand/supply of the points in the sub-tree of that node at the center of the cell, which is then matched to the excess masses at the centers of the sibling cells. Inserting/deleting a point into/from each set requires updating the matchings computed for the cells along the path from the root to the leaf node containing the inserted point. By picking $p=n^{O(\varepsilon)}$, the height of the tree will be $O(1/\varepsilon)$ and updating the matching of each cell would take $O(n^{\varepsilon})$ time. ## update after rebuttal I keep my positive score. Claims And Evidence: The paper introduces a fully dynamic algorithm for Euclidean bi-chromatic matching that achieves an $O(1/\varepsilon)$ approximation with sublinear update time. The authors prove that no algorithm can achieve a $(2-\delta)$-approximation while maintaining sublinear updates, establishing a fundamental trade-off. Their method leverages a hierarchical grid-based partitioning scheme to efficiently handle insertions and deletions in $O(n^{\varepsilon})$ time. Through experiments, they show that their algorithm significantly outperforms static recomputation methods while maintaining high accuracy. 
Real-world applications, such as tracking spatial distribution changes in taxi pickup/dropoff data, demonstrate its effectiveness in monitoring Wasserstein distance on evolving datasets. Methods And Evaluation Criteria: The authors propose a hierarchical grid-based partitioning approach that processes updates in a bottom-up manner, ensuring efficient maintenance of an approximate bi-chromatic matching with provable guarantees. Their evaluation, which includes theoretical analysis, synthetic benchmarks, and real-world datasets (such as taxi pickup/dropoff locations), is well-designed to demonstrate both the algorithm’s efficiency and its practical applicability in tracking dynamic spatial distributions. Theoretical Claims: I checked the correctness of the algorithms and lemmas. Experimental Designs Or Analyses: The experimental results make sense. Supplementary Material: I checked almost all parts of the appendix. Relation To Broader Scientific Literature: The problem of computing bipartite matchings in a dynamic setting has not been extensively explored in the literature, as there are few known algorithms for it. However, I believe this problem could have potential applications in machine learning methods. Essential References Not Discussed: All relevant references that I am aware of are included. Other Strengths And Weaknesses: The paper is well-written and easy to follow. Other Comments Or Suggestions: * Line 95 LC: The sentence needs reformatting * Line 114 LC: There are two O() notations * Line 119 LC: input puts points -> input points * Line 168 RC: spread bounded $U$ -> bounded spread $U$ * Line 411 RC: use a use a -> use a * Missing the impact statement Questions For Authors: * In your experimental results, you used $p=2, 8, 32$, but your paper mentions the value $p$ as $n^{-\varepsilon}$, which is less than $1$. 
I assume that the diameter of the space plays a role in the choice of $p$ in your experiments, but I could not get a sense of what is in fact the value of $\varepsilon$. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their time and for providing valuable comments. For the minor comments and typos not addressed below, we will incorporate them in the next version of the paper. - “Missing the impact statement.” We will include the following impact statement, and we apologize for our oversight: This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here. - “In your experimental results, you used $p=2,8,32$, but your paper mentions the value $p$ as $n^{−\varepsilon}$, which is less than $1$. I assume that the diameter of the space plays a role in the choice of $p$ in your experiments, but I could not get a sense of what is in fact the value of $\varepsilon$.” To clarify, we choose the value of $p$ as $n^\varepsilon$ in the paper and obtain an approximation ratio of $O(1/\varepsilon)$. Here are some examples to give a sense of the scale of $\varepsilon$: - our largest experiment had $n = 10^6$ and $p=2$, corresponding to $\varepsilon \approx 1/20$, - for our uniform synthetic dataset, where points were integers between $0$ and $500$, the choice of $p=32$ corresponds to $\varepsilon \approx 0.55$. Empirically, all choices of $p$ achieve a significantly better approximation ratio in practice than our worst-case theoretical guarantee (see Figure 5 in Appendix G). --- Rebuttal Comment 1.1: Comment: Thanks for your clarification. I keep my score.
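A quick sanity check of the $p = n^{\varepsilon}$ relation discussed in this exchange (just arithmetic, not from the paper's code; the function name is hypothetical):

```python
import math

def implied_epsilon(n, p):
    """Epsilon implied by fan-out p at input size n, from p = n**epsilon."""
    return math.log(p) / math.log(n)
```

For $n = 10^6$ and $p = 2$ this gives $\varepsilon \approx 0.05 \approx 1/20$, consistent with the figure quoted in the rebuttal.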
Summary: This paper gives an algorithm for dynamic bi-chromatic matching in Euclidean space with an $O(\frac{1}{\varepsilon})$-approximation ratio and sublinear update time $O(\frac{n^\varepsilon}{\varepsilon})$ with a theoretical guarantee; this algorithm is the first sublinear update time algorithm for geometric dynamic bi-chromatic matching. In addition, this paper gives a proof of an approximation lower bound for dynamic bi-chromatic matching in Euclidean space, which shows that even if only insertion and deletion operations are considered, there does not exist a dynamic algorithm that achieves both a $(2-\delta)$-approximation and sublinear update time. The main technique is based on the static algorithm by [Agarwal-Varadarajan 2004]; the idea is to construct a nested grid-cell structure ($p$-tree) to get an $O(\frac{1}{\varepsilon})$-approximate matching and maintain the implicit matching on the tree in a bottom-up manner. The theoretical results are validated with experiments on real-world datasets. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes. It is mentioned that [Xu-Ding 2024] studied the same problem. Thus, it would be desirable for this paper to have an experimental comparison against [Xu-Ding 2024], specifically an update time comparison. Supplementary Material: No Relation To Broader Scientific Literature: This paper states a connection between this problem and 1-Wasserstein distance estimation (e.g., we can apply this matching to estimate the 1-Wasserstein distance). Besides, the paper presents experimental results on real-world datasets and shows how the algorithm performs when we use it to estimate the 1-Wasserstein distance. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: * The algorithm is simple and intuitive, with a solid theoretical guarantee. 
* The problem of bi-chromatic matching is well-studied and practical, and the analysis approaches in this paper may extend to related work. Weaknesses: * The contribution is not clear enough; e.g., the approximation ratio of [Xu-Ding 2024] is neither stated nor compared with the theoretical results in this paper. Other Comments Or Suggestions: No Questions For Authors: No Ethics Expertise Needed: ['Other expertise'] Code Of Conduct: Affirmed. Overall Recommendation: 3
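Since this review ties bi-chromatic matching to 1-Wasserstein estimation, a brute-force reference for tiny equal-size point sets can be useful for sanity-checking approximate matchings (a hypothetical helper with exponential running time; not from the paper):

```python
import math
from itertools import permutations

def wasserstein1_exact(red, blue):
    """Minimum-cost perfect matching cost between two equal-size 2-D point
    sets, normalised by n: the 1-Wasserstein distance between the two
    uniform empirical distributions. Exponential time; tiny inputs only."""
    n = len(red)
    best = math.inf
    for perm in permutations(range(n)):
        cost = sum(math.dist(red[i], blue[j]) for i, j in enumerate(perm))
        best = min(best, cost)
    return best / n
```

An approximate matching's cost divided by $n$ can be compared against this exact value on small instances.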
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their time and for providing valuable comments. - "The contribution is not clear enough in relation to [Xu-Ding 2024]." The Xu-Ding paper studies a slightly different problem: dynamic maintenance of optimal transport. Specifically, the goal is to design a data structure that accelerates executing 'one iteration' of the Network Simplex Algorithm on an explicit complete bi-partite graph in $O(n)$ time, where $n$ is the current number of points. Their work is concerned with maintaining an exact solution, within numeric computing accuracy, after the insertion/deletion of a vertex to the min-cost flow problem. Since their algorithm operates on a complete, bipartite graph, it requires $\Omega(n^2)$ space to store the edge weights and $\Omega(n)$ update time to insert a new vertex. Moreover, in the worst case, the update time can take up to $O(n^2)$ time. As such, the algorithm and the implementation cannot handle very large instances in practice. In contrast, our paper studies the problem of quickly maintaining an approximate solution to the Euclidean bi-chromatic bipartite matching (i.e., optimal transport with uniform demands), in time that is *sub-linear* in $n$. For example, taking $\varepsilon = 1/2$ gives an update time of $O(\sqrt{n})$ for updating an $O(1)$-approximate matching. Our proposed tree structure uses $O(n)$ space, regardless of $\varepsilon$, which makes the approach practical for very large inputs. Moreover, our lower-bound in Thm. F.1 shows that the approximation ratio must be at least $2$ for any dynamic algorithm to achieve a sublinear update time. In the related works section, our focus has been to compare our contributions with the known approximation algorithms that run in near-linear time and space, as those are also applicable to larger instances. 
That being said, as per the reviewer’s suggestion, we will include the above comparison to the Xu-Ding work in the next version of the paper. **Experimental comparison with [Xu-Ding 2024]** We ran the code of [Xu-Ding 2024] on our benchmark dataset (Unif-Vs-Gaussian). We have observed that the insertion time slowed down substantially after inserting $1$k point-pairs (source and sink). In fact, already inserting $5$k point-pairs took more than $12$GB RAM and longer than $3$ hours in total, which is when we had to terminate the process as it was running out of memory. Note that this is already $>1$s per update on a small instance. In contrast, our experimental results show that the update time of our proposed algorithm remains $<1$ms for very large instances (up to $n=10^6$ in Figure $2$; right), with the quality of the solution being within a factor of two from the optimum (see Figure $5$; right).
Summary: The paper studies the Euclidean minimum cost bipartite matching problem: given $n$ blue points and $n$ red points in the 2-d Euclidean plane, we wish to compute a minimum cost bipartite matching between them, where the cost is measured in terms of Euclidean distance. The novel component of the paper is to introduce dynamic updates to the point sets while requiring that an approximately optimal solution is always maintained. The trivial baseline is to recompute the matching after every update, which can require linear time per update for a $(1+\epsilon)$-approximation. The paper presents a tradeoff that leads to much faster update times. The paper obtains an update time of $n^{\epsilon}$ but maintains a coarser $O(1/\epsilon)$ approximation. The main idea seems to be inspired by the $(1+\epsilon)$-approximation algorithm in the static case. We first divide the input space into many nested regions using a quadtree-like data structure, but the fan-out of the tree must be carefully controlled. When points get updated, we try to match from the bottom up, but it is not clear if at some point $\Omega(n)$ updates need to be made. However, a careful update step introduced by the authors only needs to do work proportional to the fan-out of the quadtree-like data structure. Empirical evidence is given showing that their algorithm can obtain a speedup of multiple orders of magnitude over the naive recalculating baseline while maintaining an accurate solution, in both synthetic and real-world data. Overall, while the setting is a bit limited, I think it is a very solid contribution to an important problem. Claims And Evidence: Yes, the proofs seem convincing and correct. Methods And Evaluation Criteria: I am not an expert, but the experiments seem sound and show speed gains of many orders of magnitude over the naive solution, which recomputes the matching every time, for both real-world and synthetic datasets. Theoretical Claims: I checked to the best of my ability. 
Experimental Designs Or Analyses: I did not check very carefully. Supplementary Material: No Relation To Broader Scientific Literature: No Essential References Not Discussed: None that I know of. Other Strengths And Weaknesses: I appreciate the authors' attempt to make a complicated algorithm clear in Figure 1. Other Comments Or Suggestions: None. Questions For Authors: - Do the authors have an estimate for the constant in the $O(1/\epsilon)$ approximation? I understand that asymptotically it doesn't matter, but it would be nice to understand what the factor is for, say, $\sqrt{n}$ update time. - Does the analysis work for weighted matchings, e.g., something like optimal transport? Of course one can replicate the points by their weight, but it leads to some blowup in the parameters. - The case of $\epsilon \approx 1/\log(n)$ seems especially interesting since one gets $O(\log n)$ update time. Is there any hope of improving the running time in this regime? - Can the authors quickly survey what is known about the dynamic problem in high-dimensional settings? I.e., when $d$ is not constant? What is the dependency on $d$ of the current algorithms? Does an exponential factor in $d$ show up in the approximation or in the update time? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their time and for providing valuable comments. - “Do the authors have an estimate for the constant in the $O(1/ \varepsilon)$ approximation. I understand that asymptotically it doesn’t matter but it would be nice to understand what the factor is for say $\sqrt{n}$ update time.” We haven’t done an exact analysis of the constant in the $O(1/ \varepsilon)$ approximation guarantee. However, if one would want to achieve $O(\sqrt{n})$ update time, we expect the approximation ratio to be $< 20$. - “Does the analysis work for weighted matchings e.g. something like optimal transport? Of course one can replicate the points by their weight but it leads to some blowup in the parameters.” This is a great question. We believe it works and our algorithm should extend to this setting, but details need to be worked out. - “The case of $\varepsilon \approx 1/ \log⁡(n)$ seems especially interesting since one gets $O(\log ⁡n)$ update time. Is there any hopes of improving the running time in this regime?” This is a natural question to consider, but we’re not aware of any techniques of pushing down the runtime to something sub-logarithmic. - “Can the authors quickly survey what is known about the dynamic problem in high dimensional settings? i.e. when $d$ is not constant? What is the dependency on $d$ of the current algorithms? Does an exponential factor in $d$ show up in the approximation or in the update time?” To the best of our knowledge, the dynamic version of our problem in higher dimensions hasn’t been studied yet. However, we can easily extend our algorithm to work in any dimension $d$. The best possible trade-offs need to be worked out, but we believe that our approximation ratio has a multiplicative factor of $O(\sqrt{d})$ and the update time should be of the form $O(n^{\varepsilon d})$.
COSDA: Counterfactual-based Susceptibility Risk Framework for Open-Set Domain Adaptation
Accept (poster)
Summary: This paper establishes a novel causal-inspired theoretical framework for Open-Set Domain Adaptation by exploring the susceptibility between two visual samples. Based on the theoretical analysis, the authors propose three components: the SRE for estimating the causal relativity; the CFA module to facilitate cross-domain feature alignment; and the VMP strategy for pseudo labeling. The theoretical proof seems correct and novel, which fills the gap of causal inference in OSDA tasks. Claims And Evidence: What is the meaning of “c' is the specific implementation of c” in Definition 1 (LL94-95)? It is a little confusing, as I cannot relate it to any real example. What is the connection between these variables and the source/target sample? Since this definition affects the following proofs and model design a lot, it should be more clearly defined. The authors could give a concrete example for intuitive understanding. Methods And Evaluation Criteria: In the main paper, both Office-31 and Image-CLEF, used in the experiments, are quite simple benchmarks where the foreground objects and backgrounds in most of their samples are clearly distinguishable. I think the authors could conduct experiments on more challenging datasets (such as DomainNet), which contain more hidden causal relations that are suitable for evaluating the proposed framework. Given the above concerns, the applicability of this framework in practical scenarios is also questionable. The proposed causal analysis seems restricted to single samples with clear foreground objects, while the open world often involves more complicated inter-object relations. Including an extended discussion of these situations would be better. Theoretical Claims: The proofs in this paper seem correct, without explicit logical errors. Experimental Designs Or Analyses: The overall results in the experiment sections are comprehensive. 
Beyond the main results, the authors also conduct a lot of analysis to verify the effectiveness of incorporating susceptibility risk. In Figure 5, OSBP is a relatively old method (2018), whose performance is not outstanding among recent OSDA approaches. For a better understanding of the proposed framework, it would be more appropriate to visualize the feature space of a more advanced method, especially ANNA (2023), which also explores the benefit of causality in OSDA. Supplementary Material: Yes, the supplementary contains several details for understanding the proposed framework and necessary proofs for the main theoretical results in the main paper. Relation To Broader Scientific Literature: The proposed framework theoretically demonstrates the potential of causal inference in OSDA, which may inspire subsequent research to develop more robust and interpretable recognition systems that align with human thought in the open world. Essential References Not Discussed: The second key contribution is a pseudo-labeling strategy using k-means for clustering in the target domain. A closely related idea was proposed in the prior work DCC [1], published at CVPR 2021, which is not discussed in the related work. Since OSDA/UniDA has been studied for nearly 7 years, the discussion of previous works is not comprehensive. Several classical works are missing, such as OVANet [2] and UADAL [3]. [1] Li G, Kang G, Zhu Y, et al. Domain consensus clustering for universal domain adaptation[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021: 9757-9766. [2] Saito K, Saenko K. Ovanet: One-vs-all network for universal domain adaptation[C]//Proceedings of the ieee/cvf international conference on computer vision. 2021: 9000-9009. [3] Jang J H, Na B, Shin D H, et al. Unknown-aware domain adversarial learning for open-set domain adaptation[J]. Advances in Neural Information Processing Systems, 2022, 35: 16755-16767. 
Other Strengths And Weaknesses: Refer to the above comments. Other Comments Or Suggestions: LLine 133-135, \epsilon in Equation 3 is not defined. LLine 163-164, what does the abbreviation LB mean, couldn’t find it anywhere in the rest of this paper. The reference to the anonymous code repository is unavailable. Questions For Authors: Please refer to the above comments. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Dear Reviewer rce5,**

Thank you for your decision and constructive feedback. These detailed and professional comments have greatly enlightened and encouraged us to make every effort to improve our work. We hope our responses resolve the concerns.

>**Claims And Evidence**. What is the meaning of “c' is the specific implementation of C” in Definition 1.

**A1**. We sincerely appreciate the opportunity to clarify this important conceptual point. We provide a simplified binary example to clarify the meaning. As shown in Fig. 7 (Appendix), for _maple leaf_ classification, let $C$ denote the presence of _palm-shaped lobes_. Here the specific implementation is $c \in \{0,1\}$:
- $c=1$: the sample exhibits palm-shaped lobes
- $c=0$: absence of this trait

Then the different implementations of $C$ in the sample will influence the probability of the label. We hope this illustrative example has clarified the causal feature representation.

> **Methods And Evaluation Criteria**. I think the authors could conduct experiments on more challenging datasets (such as DomainNet)

**A2**. We sincerely appreciate the suggestion. We have expanded our experiments on both DomainNet and VisDA.

*Table 1. Comparison results on DomainNet (173/172) and VisDA (6/6) with CLIP backbone (baseline results from [R1]).*

|Method|DomainNet OS*|DomainNet UNK|DomainNet HOS|VisDA OS*|VisDA UNK|VisDA HOS|
|-|-|-|-|-|-|-|
|DCC[1]|50.2|45.1|47.5|75.3|46.2|57.3|
|OVANet[2]|65.1|48.5|55.6|60.4|61.2|60.8|
|CROW[R1]|70.3|50.9|59.0|77.0|62.8|69.2|
|**COSDA-CLIP**|**72.0**|**76.2**|**73.9**|**85.2**|**72.6**|**78.4**|

**[R1] Cross-domain Open-world Discovery, ICML 2024**

**Key Findings from Table 1:**
1. Substantial HOS Improvements:
   - On DomainNet, COSDA-CLIP achieves **73.9% HOS, outperforming CROW by 14.9%**
   - On VisDA, COSDA-CLIP reaches **78.4% HOS, outperforming CROW by 9.2%**
2. 
Dual Strengths in OS\* and UNK: With the CLIP backbone, COSDA further enhances both known-class classification (OS\*) and unknown-class detection (UNK).

*Table 2. Performance comparison with [1][3] on Office-Home and Office-31*

|Method|Office-Home|Office-31|
|-|-|-|
|DCC[1]|64.2|86.8|
|UADAL[3]|68.7|88.1|
|**COSDA**|**71.7**|**92.6**|

Table 2 shows the leading performance of COSDA on small-scale datasets compared with [1][3]. These experiments further confirm the effectiveness of COSDA. **We will cite [1][2][3] properly in the updated version.**

>**Experimental Designs**: It is more appropriate to visualize the feature space of a more advanced method, especially ANNA (2023).

**A3**. Thanks for your suggestions. We have added the requested comparison with ANNA to our feature space analysis. The visualizations can be found at https://anonymous.4open.science/r/tsne-5F2D/

> **Essential References Not Discussed**: A close relative idea has been proposed in prior work DCC[1] published in CVPR 2021, which is not discussed in the related work. Several classical researches are missing, such as OVAnet [2] and UADAL [3].

**A4**. We appreciate your insightful comments. **We will incorporate [1][2][3] into the updated related work.** DCC proposes Domain Consensus Clustering for universal domain adaptation, using semantic and sample-level consensus to effectively separate and distinguish common classes from private ones. OVANet proposes a universal domain adaptation method that learns an open-set threshold from source data via one-vs-all classifiers and adapts it to the target domain by minimizing class entropy. UADAL addresses open-set domain adaptation with unknown-aware adversarial learning, aligning known classes while segregating unknowns in feature space. We carefully read [1][2][3], and the difference is that we exclude all known-class positive samples and cluster only the remaining negative samples (i.e., those not belonging to any known class). 
Besides, unlike one-vs-all strategies requiring C clustering operations per epoch (C = the number of known classes), COSDA achieves comparable performance with only one clustering per epoch. > Other Comments Or Suggestions **A5**. We sincerely appreciate your thorough review of our paper. **We will carefully correct the errors in the updated version and thoroughly review the text to prevent similar mistakes.** **LLine 133-135**: The parameter $\epsilon$ represents the degree of intervention on the causal features. It should be large to ensure that the semantic information can be disentangled (i.e., so that it influences the probability of label $Y$). **LLine 163-164**: The command `\LB` (which represents the label $Y$) was missing its backslash due to a typesetting error. **The anonymous code repository is unavailable**: We have updated the code in the previously provided anonymous link while also including necessary details for running it. We sincerely appreciate your insightful comments. Please let us know if you need any further information or if there are additional points you would like to discuss with us. Best regards, Authors of #10234
Summary: This paper introduces an adversarial adaptation framework called COSDA, which aims to address the challenges of unknown category recognition and domain drift in the open domain adaptation problem. The framework is based on causality theory and includes three novel components: (i) Susceptibility Risk Estimator (SRE), which is used to capture causal information and form a risk minimization framework; (ii) Contrastive Feature Alignment (CFA) module, which satisfies the external causal assumption and promotes cross-domain feature alignment based on mutual information theory proof; (iii) Virtual Multi-unknown-categories Prototype (VMP) pseudo-labeling strategy, which provides label information by measuring the similarity between samples and prototypes of known and multiple virtual unknown categories, thereby assisting open set recognition and intra-class discrimination learning. Experimental results show that the proposed method achieves state-of-the-art performance on benchmark datasets and synthetic datasets. ## update after rebuttal Thank you very much for the author's reply. I maintain my initial positive rating. Claims And Evidence: Compared with the traditional OSDA method, the main improvement of COSDA is the introduction of causal inference technology and the Susceptibility Risk Estimator (SRE), which enables the model to better handle open set problems and sources of uncertainty. In addition, COSDA also uses pseudo-labeling strategies and contrastive learning strategies for virtual multiple unknown categories to further improve the performance of the model. This paper mainly introduces the causal inference based open domain adaptation method COSDA and conducts extensive experimental comparisons on three benchmark datasets. Methods And Evaluation Criteria: This paper conducts extensive experimental comparisons on three benchmark datasets. 
The experimental results show that COSDA achieves good performance on all benchmark datasets, especially when dealing with unknown categories. In addition, COSDA performs well on different evaluation metrics, such as unknown-class accuracy, known-class accuracy, overall accuracy, and harmonic mean accuracy. In the ablation study, the authors further explored the impact of different components of COSDA on the performance. Finally, by applying COSDA to practical problems, the authors demonstrated its practical value in solving complex scenarios. Theoretical Claims: No Experimental Designs Or Analyses: The experiments are reasonable. Supplementary Material: No Supplementary Material Relation To Broader Scientific Literature: This task could also be accomplished by a VLM, which makes a purely traditional method less interesting. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1) This paper proposes an OSDA method based on a causal model. By introducing the concept of probability sensitivity, a risk assessment framework for adversarial unknown categories is proposed, and a contrastive feature alignment and virtual multi-unknown category prototype strategy are designed to achieve open domain classification tasks. 2) The authors conduct extensive experiments, which show that this method achieves significant performance improvements on three benchmark datasets. 3) The paper is easy to follow, and the figures are appealing, with enjoyable color and design. Weaknesses: 1) As reported in the main tables, in most cases the proposed methods are not good and fail to be SOTA (State of the Art). This indicates that while the methods may show promise or have certain advantages in specific scenarios, they do not consistently outperform existing techniques across all metrics and benchmarks. 2) I think this task could potentially be completed by leveraging Video-Language Models (VLMs). 
However, the authors did not demonstrate scenarios or cases where VLMs might fail to work effectively. It's important for a comprehensive evaluation to include both the capabilities and limitations of such models. 3) No code is provided, which is not convincing. This lack of concrete examples makes it difficult to fully understand the implementation details and assess the validity of the claims being made. For a more robust evaluation, it's essential to have access to the specific code snippets or a complete codebase that demonstrates how the theoretical concepts are applied in practice. Without this, the explanation remains somewhat abstract and less actionable. Other Comments Or Suggestions: Please provide the code, which is important. Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Dear Reviewer zv4D,** Thank you for your decision and constructive feedback. We have studied the comments carefully and made thorough revisions. We also greatly appreciate your insightful questions and hope that our responses have helped to clarify them. > **Weakness 1**: As reported in the main tables, in most cases the proposed methods are not good and fail to be SOTA (State of the Art). While the methods may show promise or have certain advantages in specific scenarios, they do not consistently outperform existing techniques across all metrics and benchmarks. **A1**. We appreciate the reviewer's attention to our experimental evaluation. Our comprehensive assessment includes three key metrics: **OS*** (known-class accuracy), **UNK** (unknown-class detection), and **HOS (balancing** both aspects). As highlighted in our contributions, COSDA achieves consistent improvements over SOTA methods: - **Absolute HOS gains**: +2.9% (Office-Home), +2.2% (Office-31), +1.0% (Image-CLEF) - **Dominant rankings**: - Image-CLEF: Achieved the highest HOS in 11 out of 12 subtasks, and the best UNK score in 9 out of 12 subtasks. - Office-31: Achieved the highest HOS in 3 out of 6 subtasks, and the best UNK score in 3 out of 6 subtasks. The method demonstrates both **generalizability** (highest average performance across all benchmarks) and **robustness** (most of the subtasks show statistically significant improvements). These results validate our design's effectiveness in handling the known-unknown class trade-off in many cases. > **Weakness 2**: I think this task could potentially be completed by leveraging Video-Language Models (VLMs). However, the authors did not demonstrate scenarios or cases where VLMs might fail to work effectively. It's important for a comprehensive evaluation to include both the capabilities and limitations of such models. **A2**. We sincerely appreciate this valuable suggestion. 
We have conducted additional experiments with CLIP ViT-L on challenging datasets DomainNet and VisDA. *Table 1. Performance comparison (%) on DomainNet and VisDA datasets using CLIP backbone. **(All baseline results are obtained from [R1]).*** |Method|DomainNet (173/172)|||VisDA (6/6)||| |-|-|-|-|-|-|-| ||OS*|UNK|HOS|OS*|UNK|HOS| |DCC|50.2|45.1|47.5|75.3|46.2|57.3| |UNIOT|59.2|45.1|51.2|75.7|49.4|59.8| |GLC|62.9|50.6|56.1|73.4|58.7|65.2| |CROW[R1]|70.3|50.9|59.0|77.0|62.8|69.2| |COSDA-CLIP|**72.0**|**76.2**|**73.9**|**85.2**|**72.6**|**78.4**| **[R1]Cross-domain Open-world Discovery, ICML 2024** **Implementation Details**. Considering both time constraints and GPU memory demands (particularly for the larger models), we utilized six 40GB NVIDIA A100 GPUs to execute the new experiments. DomainNet and VisDA use the same hyperparameter settings as smaller-scale datasets, specifically $\lambda_s=0.2$, $\lambda_{exo}=1$. But the learning rate has been reduced, specifically $lr = 5e-4$. **Key Findings from CLIP Backbone Experiments:** 1. _Substantial HOS Improvements_: - On DomainNet, COSDA-CLIP achieves **73.9% HOS, outperforming CROW by 14.9%**. - On VisDA, COSDA-CLIP reaches **78.4% HOS, outperforming CROW by 9.2%**. 2. _Dual Strengths in OS\* and UNK_: With CLIP, COSDA particularly further enhances known-class classification and unknown-class detection (UNK). - For DomainNet OS*, a 1.7% improvement; - For DomainNet UNK, a 25.3% improvement; - For VisDA OS*, an 8.2% improvement; - For VisDA UNK, a 9.8% improvement. *Table 2. Sub-Task Performance of COSDA on DomainNet with CLIP backbone* |subtask|P-R|P-S|R-P|R-S|S-P|S-R|**Avg.**| |-|-|-|-|-|-|-|-| |OS\*|76.6|68.8|71.7|70.9|66.1|78.1|72.0| |UNK|78.9|79.1|72.2|77.2|76.3|73.2|76.2| |HOS|77.7|73.6|71.9|73.9|70.8|75.6|73.9| Table 2 exhibits detailed results of COSDA for the 6 subtasks on DomainNet with CLIP. These additional experiments further confirm the effectiveness of COSDA. 
**We fully agree with the reviewer's insightful suggestion about VLMs. This represents a promising new direction worth exploring in OSDA.** > **Weakness 3 & Other Comments**: No codes are provided. **A3**. We sincerely appreciate the reviewer's emphasis on reproducibility. **As noted in our submission (Page 6, Line 321), we open-sourced the complete implementation. For greater visibility, we will relocate the code announcement to the abstract/introduction in the updated version.** **Current Implementation Overview:** - **Benchmark Support**: Office-Home, Office-31, Image-CLEF, DomainNet, VisDA - **Architecture Flexibility**: - CNN backbones (ResNet, VGG) - VLMs (CLIP) - **Training Frameworks**: - Multi-GPU distributed training - Single-GPU training We sincerely appreciate your insightful comments once again. Please let us know if you need any further information or if there are additional points you would like to discuss with us. Best regards, Authors of #10234
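For reference, the HOS metric used throughout the tables above is the harmonic mean accuracy, i.e. the harmonic mean of known-class accuracy (OS*) and unknown-class accuracy (UNK), so it rewards a balance between the two; a minimal sketch (accuracies in percent; the function name is ours, not from the paper):

```python
def hos(os_star, unk):
    """Harmonic mean of known-class (OS*) and unknown-class (UNK) accuracy.

    HOS is high only when both accuracies are high, which is why it is used
    to measure the known/unknown trade-off in open-set domain adaptation.
    """
    return 2 * os_star * unk / (os_star + unk)

# Two methods with the same arithmetic mean (70): the balanced one scores
# higher, while the one that sacrifices unknown-class detection is penalized.
print(round(hos(80.0, 60.0), 1))  # → 68.6
print(round(hos(95.0, 45.0), 1))  # → 61.1
```

This is why a method can trade a small OS* drop for a large UNK gain and still improve its HOS substantially.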
Summary: This paper addresses the Open-Set Domain Adaptation problem which is useful in real-world applications. They propose a novel Counterfactual-based susceptibility risk framework, consists of Susceptibility Risk Estimator, Contrastive Feature Alignment, and Virtual Multi-unknown-categories Prototype. Experiments on three datasets and benchmarks highlight its superior performance. Claims And Evidence: Yes Methods And Evaluation Criteria: The proposed method follows the traditional evaluation criteria in Open-Set domain adaptation. Theoretical Claims: Yes, no issues. Experimental Designs Or Analyses: The experimental designs are sound in general. However, the experiments on large-scale benchmark such as DomainNet and VisDA are missing. Supplementary Material: Yes, all parts. Relation To Broader Scientific Literature: The proposed Counterfactual-based susceptibility risk framework could be potentially helpful to other literature. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths 1. The paper addresses the Open-Set domain adaptation problem, which is a challenging and practical scenario. 2. The paper is well written and easy to follow. 3. The paper provides extensive experiments, showing the effectiveness and versatility of the proposed method. Major Weaknesses 1. The authors only use CNN backbones. More ablations on ViT backbone should be added, as it demonstrates strong generalization and adaptation performances compared with CNNs. 2. Although it is important for Open-Set domain adaptation to have a good performance on UNK, it is also crucial to have a good OS* value. However, the OS* of the proposed framework is worse than baselines on all datasets. For example, 0.3 on Image-CLEF, 3.2 on Office-31, and 6.7 on Office-Home. 3. Lack of experiments on large-scale benchmarks such as DomainNet and VisDA, which are commonly used in existing work [1,2]. 
[1] Upcycling Models under Domain and Category Shift, CVPR 2023 [2] LEAD: Learning Decomposition for Source-free Universal Domain Adaptation, CVPR 2024 Other Comments Or Suggestions: There are too many loss terms in eq (25) which is complex and may make optimization difficult. How do you balance the contribution of different terms? Questions For Authors: What's the performances of the proposed framework on DomainNet and VisDA? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Dear Reviewer pQYc,** We sincerely appreciate your constructive comments on our work. We have carefully addressed each point raised and incorporated corresponding improvements. We hope our responses have adequately addressed the concerns. >**Weaknesses 1 & Weaknesses 3**. More ablations on ViT backbone should be added; Lack of experiments on large-scale benchmarks such as DomainNet and VisDA. **A1**. We fully agree with the reviewer's valuable suggestion regarding the importance of evaluating our method on large-scale datasets with diverse backbones, as this would further validate the generalization of COSDA. In response to this suggestion, we implemented additional experiments on VisDA and DomainNet [1][2][R1][R2] and used CLIP ViT-L14-336px as the backbone. **[R1]Cross-domain Open-world Discovery, ICML 2024 [R2] Domain consensus clustering for universal domain adaptation, CVPR 2021** **Implementation Details**. Considering both time constraints and GPU memory demands, we utilized six 40G A100 GPUs to execute the new experiments. DomainNet and VisDA use the same hyperparameter settings as smaller-scale datasets, specifically $\lambda_s=0.2$ and $\lambda_{exo}=1$. The learning rate is $5e-4$. *Table 1. Performance comparison (%) on DomainNet and VisDA using CLIP backbone. **(Baseline results from [R1]).*** |Method|DomainNet (173/172)|||VisDA (6/6)||| |-|-|-|-|-|-|-| ||OS*|UNK|HOS|OS*|UNK|HOS| |DCC|50.2|45.1|47.5|75.3|46.2|57.3| |UNIOT|59.2|45.1|51.2|75.7|49.4|59.8| |GLC[1]|62.9|50.6|56.1|73.4|58.7|65.2| |CROW[R1]|70.3|50.9|59.0|77.0|62.8|69.2| |COSDA-CLIP|**72.0**|**76.2**|**73.9**|**85.2**|**72.6**|**78.4**| **Key Findings from CLIP Backbone Experiments:** 1. Substantial HOS Improvements: - On DomainNet, COSDA-CLIP achieves **73.9% HOS, outperforming CROW by 14.9%**. - On VisDA, COSDA-CLIP reaches **78.4% HOS, outperforming CROW by 9.2%**. 2. 
Dual Strengths in OS\* and UNK: With CLIP, COSDA particularly further enhances known-class classification and unknown-class detection (UNK). - For DomainNet OS*, a 1.7% improvement; - For DomainNet UNK, a 25.3% improvement; - For VisDA OS*, an 8.2% improvement; - For VisDA UNK, a 9.8% improvement. *Table 2. Performance comparison on VisDA (VGG19). **(Baseline results from [R2]).*** |Metric|OSBP|STA|DCC[R2]|COSDA| |-|-|-|-|-| |OS*|62.9|66.8|68.8|**80.7**| |OS|59.2|63.9|68.0|**70.1**| *Table 3. Sub-Task Performance of COSDA Across Backbones on DomainNet* |Subtask|CLIP|||ResNet50||| |-|-|-|-|-|-|-| ||OS*|UNK|HOS|OS*|UNK|HOS| |P-R|76.6|78.9|77.7|65.4|43.4|52.2| |P-S|68.8|79.1|73.6|55.1|76.9|64.2| |R-P|71.7|72.2|71.9|45.1|56.6|50.2| |R-S|70.9|77.2|73.9|42.4|49.0|45.5| |S-P|66.1|76.3|70.8|50.8|79.2|61.9| |S-R|78.1|73.2|75.6|70.8|85.9|77.6| |**Avg.**|72.0|76.2|73.9|54.9|65.2|58.6| *Table 4. Performance comparison with [1][2] on Office-Home and Office-31* |Method|Office-Home|Office-31| |-|-|-| |GLC[1]|69.8|89.0| |LEAD[2]|70.0|90.1| |**COSDA**|**71.7**|**92.6**| Table 2 shows leading performance on VisDA (VGG-19). Table 3 exhibits detailed results of COSDA for the 6 subtasks on DomainNet with ResNet-50 and CLIP. Table 4 supplements the performance comparison between COSDA and references [1][2] on small-scale datasets. These additional experiments further confirm the effectiveness of COSDA. **Additionally, we will properly cite [1][2] in the updated revision**. > **Weaknesses 2**. The OS* of the proposed framework is worse than baselines on all datasets. For example, 0.3 on Image-CLEF, 3.2 on Office-31, and 6.7 on Office-Home. **A2**. We sincerely appreciate the reviewer's insightful observation regarding the OS* performance. We acknowledge that our method demonstrates a modest compromise in OS* scores. However, this trade-off enables significant improvements in UNK (33.2%, 43.4%, and 49.4% gains, respectively). 
HOS is the most crucial metric, which achieves a balance between known-class and unknown-class recognition. For HOS, COSDA outperforms the highest OS* methods by 19.3%, 28.1%, and 36.5% on three small-scale datasets. Notably, **COSDA achieves OS\*, UNK, and HOS improvement on challenging DomainNet (Table 1)**. These results could substantiate the advantages of our approach. > **Other Comments**. There are too many loss terms in Eq (25) which is complex and may make optimization difficult. How do you balance the contribution of different terms? **A3**. Thank you for raising this important point. To balance the contributions of different loss terms in the overall objective, we introduced weight hyperparameters for loss items. These hyperparameters were optimized via a grid search strategy to ensure an appropriate trade-off for the dataset. As shown in Fig. 6, we analyzed the impact of parameters. We sincerely appreciate your insightful comments again. Please let us know if you need any further information or if there are additional points you would like to discuss with us. Best regards, Authors of #10234 --- Rebuttal Comment 1.1: Comment: I want to thank the authors for the rebuttal, most of my concerns are addressed and I therefore increase the score to 3. --- Reply to Comment 1.1.1: Comment: Dear Reviewer pQYc, We sincerely appreciate your great support, which means a great deal to us! Engaging in this discussion with you has been truly rewarding. Thank you once again for your valuable time and effort ! Best regards, Authors of #10234
Summary: This paper introduces COSDA, a novel causal-based Open-Set Domain Adaptation (OSDA) framework. It proposes Susceptibility Risk, a theoretical approach to measuring and mitigating the risk associated with domain shifts and unknown category recognition. Then, three core components are developed: Susceptibility Risk Estimater (SRE), Contrastive Feature Alignment (CFA), and Virtual Multi-unknown-categories Prototype (VMP), all of which contribute to better feature alignment and improved classification of unknown categories. Extensive experiments on benchmark datasets (Office-Home, Office-31, and Image-CLEF) demonstrate that COSDA outperforms state-of-the-art methods, achieving significant improvements in accuracy and robustness. ## update after rebuttal Claims And Evidence: The claims are well supported by the evidence. Methods And Evaluation Criteria: The method and evaluation criteria make sense for the problem. Theoretical Claims: There is no obvious issue in all the definitions, lemmas, propositions, theorems, and corollaries. Experimental Designs Or Analyses: In general, the soundness of experimental designs and analyses is strong. However, since the experiments were conducted on Image-CLEF, Office-31, and Office-Home datasets, whether the method can perform well on harder data (like DomainNet [1] QuickDraw) remains unknown. [1] Moment matching for multi-source domain adaptation, ICCV 2019 Supplementary Material: Appendix A, C, D are checked. The results and discussion in the Appendix aligned well with the results and claims in the main paper. Relation To Broader Scientific Literature: This work views the OSDA setting from the causal inference perspective and bridges the gap between causal-inspired theoretical frameworks and OSDA. Essential References Not Discussed: - Other Strengths And Weaknesses: - Other Comments Or Suggestions: - Questions For Authors: 1. ImageNet-pretrained ResNet 50 is used for all the experiments. 
However, as discussed in [1] and [2], all the conclusions and insights can change when using a foundation model like CLIP or DINO_v2 as the backbone. Also, given that stronger pretrained backbones generally enhance image classification performance, it is unclear why ResNet-50 remains the preferred choice. Could you discuss this decision and its potential impact on the findings? 2. The step of building the centroids of all known and unknown classes shares the same idea as in [2] and [3]. Could you please discuss the main difference between method level and motivation level? 3. The motivation for using causality is strong and clear. However, the motivation behind the overall method design is difficult to follow.. The framework consists of multiple components (SRE, CFA, and VMP), but their individual necessity and how they collectively contribute to the causal inference remain unclear. Could you clarify the motivation behind designing these three components, their specific roles in causal inference, and how they interconnect within the overall methodology? [1] Universal domain adaptation from foundation models: A baseline study [2] Cross-domain Open-world Discovery, ICML 2024 [3] Domain consensus clustering for universal domain adaptation, CVPR 2021 Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Dear Reviewer 6vAQ,** Thank you for your decision and constructive feedback. We have studied the comments carefully and made thorough revisions. We hope that our responses have helped to clarify the concerns. > **Experimental Designs Or Analyses & Q1**. whether the method can perform well on harder data (like DomainNet [1]) remains unknown; All the conclusions and insights can change when using different backbones. Why ResNet-50 remains the preferred choice? **A1**. We sincerely appreciate your insightful suggestions and questions. Initially, we adopted ResNet50 as the backbone to maintain consistency with prior work in this domain. However, as you rightly pointed out, evaluating the model with more advanced backbones and larger-scale datasets would better demonstrate its robustness. Following this suggestion, we implemented additional experiments on VisDA[3] and DomainNet[1][2] and discovered that **our method demonstrates good performance on both CNN-based and CLIP-based architectures**. *Table 1. Comparison results on DomainNet and VisDA with CLIP backbone **(All baseline results are obtained from Reference [2]).*** |Method|DomainNet (173/172)|||VisDA (6/6)||| |-|-|-|-|-|-|-| ||OS*|UNK|HOS|OS*|UNK|HOS| |DCC[3]|50.2|45.1|47.5|75.3|46.2|57.3| |UNIOT[1]|59.2|45.1|51.2|75.7|49.4|59.8| |CROW[2]|70.3|50.9|59.0|77.0|62.8|69.2| |**COSDA-CLIP**|**72.0**|**76.2**|**73.9**|**85.2**|**72.6**|**78.4**| **Implementation Details**. Considering both time constraints and GPU memory demands (particularly for the larger models), we utilized six 40GB NVIDIA A100 GPUs to execute the new experiments. DomainNet and VisDA use the same hyperparameter settings as smaller-scale datasets, specifically $\lambda_s=0.2$, $\lambda_{exo}=1$. But the learning rate has been reduced, specifically $lr = 5e-4$. **Key Findings from CLIP Backbone Experiments:** 1. Substantial HOS Improvements: - On DomainNet, COSDA-CLIP achieves **73.9% HOS, outperforming CROW by 14.9%**. 
- On VisDA, COSDA-CLIP reaches **78.4% HOS, outperforming CROW by 9.2%**. 2. Dual Strengths in OS\* and UNK: With CLIP, COSDA particularly further enhances known-class classification and unknown-class detection (UNK). *Table 2. Performance of COSDA on DomainNet with ResNet50* ||ResNet50||| |-|-|-|-| ||OS\*|UNK|HOS| |**Avg.**|54.9|65.2|58.6| We also note that with ResNet50, COSDA surpasses DCC [3] and UNIOT [1], confirming its robustness across backbones. **The suggested references will be cited in our updated version.** > **Q2**. The step of building the centroids of all known and unknown classes shares the same idea as in [2] and [3]. Please discuss the main difference between method level and motivation level. **A2**. We sincerely appreciate your insightful questions. In our work, the **VMP** module serves as a functional component to ensure **CFA**, rather than acting as a standalone clustering solution. We carefully read the papers you provided, and the difference lies in the fact that we **exclude all known-class positive samples and cluster only the remaining negative samples** (i.e., those not belonging to any known class), unlike [2][3], which cluster _all target_ samples. Due to the reduction in the number of samples and clusters, VMP reduces the computational cost. Besides, unlike one-vs-all strategies requiring C clustering operations per epoch (C = the number of known classes), COSDA achieves comparable performance with only one clustering per epoch. This design reduces complexity from O(CN) to O(N). Additionally, we acknowledge that advanced clustering could further optimize this step and will explore this in future work. > **Q3**. Could you clarify the motivation behind designing SRE, CFA, and VMP, their specific roles in causal inference, and how they interconnect within the overall methodology? **A3**. We appreciate your questions regarding the methodological framework. 
SRE quantifies causal feature representation ability via susceptibility analysis. Direct SRE estimation in the target domain is infeasible due to label scarcity. To resolve this problem, we first decompose target-domain expected risk into open-set and closed-set, then bridge source/target risks using domain-invariant representations, and finally derive generalization bounds to narrow the gap between empirical and expected susceptibility risks (Theorems 1-2 & Corollary 1). To satisfy the exogeneity assumption for causal identifiability, we propose CFA, which encourages independence across causal features belonging to different categories via information bottleneck (Proposition 2) and then introduces VMP pseudo-label strategy (Eq. 17-19). Additionally, our code is open-sourced to show how SRE, CFA, and VMP are integrated end-to-end. We sincerely appreciate your insightful comments once again. Please let us know if you need any further information or if there are additional points you would like to discuss with us. Best regards, Authors of #10234 --- Rebuttal Comment 1.1: Comment: Thank you for your effort during the rebuttal. I am glad to see your method working well with the CLIP backbone and outperforming CROW, the current state-of-the-art, by a large margin. Also, the explanations of Q2 and Q3 are clear. I will update the score from 3 to 4. Some suggestions to update the current draft: 1. The experiments in Q1 can be added to a new section 4.6, showing the robustness across different backbones, especially the strong backbones (foundation model). 2. The answer to Q3 clearly explains the motivation for the design of the method. It can be added at the beginning or end of the method part. Right after section 3.1 or at the end of section 3 as a summary. Since this approach consists of many parts, it is better to provide the reader with a clearer and logical understanding of the design. 
--- Reply to Comment 1.1.1: Comment: Dear Reviewer 6vAQ, We sincerely appreciate your great support, which means a great deal to us! In response to your suggestions, we will add the experiments in Q1 to a new section 4.6, and add our response to Q3 at the beginning or end of the Methods section. Once again, we are deeply grateful for your support and guidance. Best regards, Authors of #10234
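The one-clustering-per-epoch VMP design discussed earlier in this thread (clustering only the negative samples into virtual unknown-category prototypes, rather than one clustering per known class) can be sketched as follows. This is an illustrative NumPy mini k-means, not the authors' released code, and all names are ours:

```python
import numpy as np

def virtual_unknown_prototypes(features, known_mask, k, iters=10, seed=0):
    """One clustering pass per epoch over negative samples only.

    features: (N, D) target-domain features; known_mask marks samples already
    matched to a known-class prototype. Only the remaining (negative)
    samples are clustered into k virtual unknown-category prototypes, so a
    single pass suffices per epoch instead of one clustering per known class.
    """
    negatives = features[~known_mask]
    rng = np.random.default_rng(seed)
    centers = negatives[rng.choice(len(negatives), size=k, replace=False)]
    for _ in range(iters):  # plain k-means updates
        dists = np.linalg.norm(negatives[:, None] - centers[None], axis=-1)
        assign = dists.argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = negatives[assign == j].mean(axis=0)
    return centers

feats = np.array([[0.0, 0.0], [2.0, 2.0], [9.0, 9.0]])
mask = np.array([False, False, True])  # the last sample matches a known class
print(virtual_unknown_prototypes(feats, mask, k=1))  # → [[1. 1.]]
```

Since only the negatives are clustered once, the per-epoch cost scales with the number of samples N, matching the O(CN) to O(N) reduction claimed in the rebuttal.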
Constrained Exploitability Descent: An Offline Reinforcement Learning Method for Finding Mixed-Strategy Nash Equilibrium
Accept (poster)
Summary: This paper proposes an offline RL method to solve for a mixed-strategy Nash equilibrium via a game-theoretic method, exploitability descent. Claims And Evidence: Is best-iterate convergence better than average-iterate convergence? It would be good to get more detailed comments on this. Methods And Evaluation Criteria: The main concern is the algorithmic design in Algo 2. From my understanding, we should compute exploitability first and then do descent on it. So \nu_0 shouldn't be \nu_\beta, but br(\mu_0). In practice it might not influence a lot, but I wonder if it is a simple mistake or the authors do it on purpose? Evaluation demonstrates the convergence to mixed-strategy NE. But the results in Fig 2, to me, did not show much gap from model-based ED. Given that ED was originally evaluated extensively on poker, it would be more convincing to test on poker. Current environments are relatively toy. Some baselines are lacking. To me, the proposed method will be more convincing if it can beat behavior cloning. In those perfect-information games, it would be better to compare with simple in-sample minimax Q-learning (like a multi-agent version of Implicit Q-Learning) introduced in this work: Tang, Xiaohang, et al. "Adversarially Robust Decision Transformer." *arXiv preprint arXiv:2407.18414* (2024). Theoretical Claims: In general make sense. Minor one: The notation of policy is problematic. In Algorithms 1 and 2, the authors have \mu sometimes as a function of s and sometimes of s and a. It would be better to be rigorous. Experimental Designs Or Analyses: Fig 2 is relatively confusing. I cannot conclude by looking at the dynamics of action probabilities without comparing to the solution of the game. Shouldn't \mu^* and \nu^* be plotted in the figure as in Fig 1? Supplementary Material: NA Relation To Broader Scientific Literature: This paper extends the following paper to the offline RL setting: Lockhart, Edward, et al. 
"Computing approximate equilibria in sequential adversarial games by exploitability descent." arXiv preprint arXiv:1903.05614 (2019). Essential References Not Discussed: Some other offline RL for game solving literature: Li, Shuxin, et al. "Offline equilibrium finding." arXiv preprint arXiv:2207.05285 (2022). Tang, Xiaohang, et al. "Adversarially Robust Decision Transformer." *arXiv preprint arXiv:2407.18414* (2024). Chen, Jingxiao, et al. "Offline Fictitious Self-Play for Competitive Games." *arXiv preprint arXiv:2403.00841* (2024). Other Strengths And Weaknesses: Strength: This paper studies the open problem of offline equilibrium finding; leveraging exploitability descent is an interesting direction. Weakness: The main issue is the insufficient environments and baselines. I'm happy to increase the score if the concerns are addressed. Other Comments Or Suggestions: NA Questions For Authors: Is Markov Game a good formulation? For matrix games it is fine since they are state-free. But tree-form games (extensive-form games) are history-based. In this case, will MG be a limited formulation? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for reviewing this paper and providing valuable comments. We are updating the manuscript according to the comments from all reviewers. Here, we reply to your questions and concerns about the paper. **[Claims And Evidence]** Yes, average-iterate convergence means that we have to preserve an averaged policy from history policies. When the policy is represented by neural networks, such an averaging can hardly be accurate if we only preserve the parameters for the current policy. Even if we save the models of all history policies, it is costly to generate the average policy by querying each one of them. In comparison, best-iterate convergence usually has a similar behavior as last-iterate convergence, with a near-monotone policy improvement over training iterations. Under last-iterate convergence (which we prove for CED), it is reasonable to only preserve the current policy. **[Methods And Evaluation Criteria]** _**Comment**_: The main concern is the algorithmic design in Algo 2. From my understanding, we should compute exploitability first and then do descent of it. So \nu_0 shouldn't be \nu_\beta, but br(\mu_0). In practice it might not influence a lot, but I wonder if it is a simple mistake or the authors do it on purpose? Yes, we do it on purpose. Actually, a major technical difference between CED and ED is that CED optimizes the min-player strategy $\nu$ rather than the max-player strategy $\mu$. The purpose of CED is not to guarantee that $\mu$ approaches Nash equilibrium like ED, which is theoretically impossible when $\nu$ is regularized. Instead, we prove the unexploitability of the last-iterate $\nu$ under the convergence of $\mu$, with a theoretical analysis quite different from common best-iterate analysis on exploitability descent. Besides, CED does not compute an exact best response br$(\mu)$, but a state-level approximate BR in the last line of Algorithm 2. Since CED optimizes $\nu$, its computation is placed after $\mu$. 
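For concreteness, the exploitability notion underlying both ED and CED can be written down directly in the matrix-game case; below is a minimal NumPy sketch (our illustration, not the paper's code) of NashConv for a two-player zero-sum matrix game:

```python
import numpy as np

def nashconv(A, mu, nu):
    """NashConv of the profile (mu, nu) in a two-player zero-sum matrix game.

    A[i, j] is the payoff to the max player; mu and nu are the mixed
    strategies of the max and min players. NashConv measures how much the
    players could jointly gain by deviating to best responses; it is zero
    exactly at a Nash equilibrium and positive otherwise.
    """
    br_max = np.max(A @ nu)   # best-response value of the max player vs. nu
    br_min = np.min(mu @ A)   # best-response value of the min player vs. mu
    return br_max - br_min

# Rock-paper-scissors: the uniform profile is the unique Nash equilibrium.
A = np.array([[0.0, -1.0, 1.0], [1.0, 0.0, -1.0], [-1.0, 1.0, 0.0]])
uniform = np.ones(3) / 3
rock = np.array([1.0, 0.0, 0.0])
print(nashconv(A, uniform, uniform))  # → 0.0
print(nashconv(A, rock, rock))        # → 2.0
```

ED-style methods drive this quantity down during training, whereas the rebuttal above explains that CED instead certifies the unexploitability of the last-iterate min-player strategy.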
_**Comment**_: Some baselines are lacking. To me, the proposed will be more convincing if it can beat behavior cloning. Actually, behavior cloning (BC) can serve as the first step of CED, i.e., to compute the behavior policy $(\mu_\beta,\nu_\beta)$ from the dataset. Since CED initializes $(\mu,\nu)$ to be $(\mu_\beta,\nu_\beta)$, the performance of the behavior policy corresponds to the starting points in Figures 3 and 4. In the tabular case, our existing experiments verify that CED consistently improves the performance of the behavior policy, which is in theory close to the result of BC. Currently, we have also implemented CED in a large-scale perfect-information game that simulates a two-team robotic combat scenario, where each team consists of three homogeneous robots. The game map is abstracted as a 100-node graph, where each robot can move to an unoccupied adjacent node or attack an enemy at each time step. A GIF illustration for a complete game is provided in the anonymous link https://sites.google.com/view/icml-2025-9335/. Among the mentioned references, Tang et al. (2024) require separate data relabeling and decision transformer training. Li et al. (2022) and Chen et al. (2024) employ PSRO and FSP, respectively, both of which require preserving all history policies. For a direct comparison, we alternatively use offline self-play (OSP in Chen et al.) as a baseline to test the last-iterate performance of CED under the same offline dataset (with 1000 trajectories) and network architecture (for representing the in-team joint policies). We consider two initializations with BC-approximated behavior policy or random policy. Figures A2 and A3 in the link show that CED can defeat BC policy under either initialization and has a comparatively better offline learning performance than OSP. **[Theoretical Claims]** Thanks for the comment. For the notation $\mu(s,a)$, it is defined as the probability of selecting action $a$ under policy $\mu(s)$ (in Line 125 of Page 3). 
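As noted above, behavior cloning in the tabular case amounts to estimating the behavior policy from normalized state-action counts in the offline dataset; a minimal stdlib sketch (illustrative names, not the paper's implementation):

```python
from collections import Counter, defaultdict

def behavior_policy(trajectories):
    """Estimate a tabular behavior policy from offline (state, action) pairs.

    Returns a dict mapping each state s to a dict of action probabilities,
    i.e. the maximum-likelihood estimate mu_beta(s, a) = N(s, a) / N(s).
    """
    counts = defaultdict(Counter)
    for traj in trajectories:
        for s, a in traj:
            counts[s][a] += 1
    policy = {}
    for s, actions in counts.items():
        total = sum(actions.values())
        policy[s] = {a: n / total for a, n in actions.items()}
    return policy

data = [[("s0", "left"), ("s1", "up")], [("s0", "left"), ("s1", "down")]]
mu_beta = behavior_policy(data)
print(mu_beta["s0"])  # → {'left': 1.0}
print(mu_beta["s1"])  # → {'up': 0.5, 'down': 0.5}
```

Such an estimate serves as the initialization (mu_beta, nu_beta) that CED then improves upon, which is why the behavior-policy performance appears as the starting point of the learning curves.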
**[Experimental Designs Or Analyses]** Lines 362-365 on Page 7 suggest that $\mu^*=\nu^*=(1/9,1/9,1/9,1/3,1/3)$, and the dashed lines in Fig 2 correspond to $1/9$ and $1/3$, respectively. We will make it clearer in our revision. **[Questions For Authors]** As is stated in the conclusion part, we agree that it is a good direction to extend CED to imperfect-information games like poker. However, as shown in our response to the other two reviewers, CED has a few technical differences with ED and does not naturally follow the applicability of ED. In terms of offline learning, current theoretical works mainly focus on MGs and have few guarantees in imperfect-information games like history-based extensive-form games (EFGs). While we agree that MG is a simplified formulation, it is still challenging to answer fundamental problems like how to estimate the counterfactual value using offline game data in EFGs. Thanks again for your comments. We are looking forward to having further discussions with you. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I have increased the score. If Figures 3 and 4 have BC, please mark it and add it to the legend for easier comparison. --- Reply to Comment 1.1.1: Comment: Thank you! We have marked BC in our updated figures for a direct comparison.
Summary: The authors extend Exploitability Descent to the offline setting by applying a regularization constraint to minimize distance to the behavior policy. They provide theoretical guarantees for convergence under uniform concentration assumptions, and they provide experiments that empirically validate their method, CED, on toy games. Claims And Evidence: The authors both prove and demonstrate CED's convergence properties under uniform concentration assumptions, and they also include an empirical result showing improvement over the behavior policy with non-uniform coverage. Methods And Evaluation Criteria: The evaluation criteria (demonstrating convergence to optimal actions and reduction of NashConv) with and without uniform dataset coverage are appropriate. However, the games tested are incredibly small. The authors argue in appendix C.2 that larger scale games are not evaluated on because calculating approximate NashConv with RL is an inaccurate measurement to use. I disagree with this argument under certain conditions. I believe this measurement accuracy tradeoff is acceptable if they were to scale up to slightly larger games in which RL algorithms can still reliably find approximately optimal solutions. Theoretical Claims: I did not rigorously check the correctness of proofs. Experimental Designs Or Analyses: The experimental design is sound. Leaving out an action at every state is a reasonable choice for the non-uniform coverage experiment. Supplementary Material: I read appendices B through D. Relation To Broader Scientific Literature: Most model-free methods for Markov games consider the online setting. Here, they extend Exploitability Descent to the offline setting with Constrained Exploitability Descent. Essential References Not Discussed: The literature review is sufficient and accurately contextualizes this work in the broader field. 
Other Strengths And Weaknesses: Strengths: - Extending online Markov game algorithms to the offline setting has immediate and clear utility for the community as a whole. - The paper is well-written, and the claims made are reasonable and validated. Weaknesses: - My main complaint with this paper is that only toy games were tested, and the authors did not consider an extension of CED to function approximation. I think doing so would have made the paper significantly stronger. - I am also concerned with the scalability of CED (or future extensions of it) to larger games, and I ask the authors to address this in Questions. Other Comments Or Suggestions: The y-axes for Figure 2 need to be labelled. More descriptive figure captions in general would improve readability greatly. Questions For Authors: A critical limitation to scaling up ED with neural networks compared to methods like PSRO[1], MMD[2], and NFSP[3] is that an approximate best response operator would be required for every single gradient update. Wouldn't this easily cripple the usability of ED-based methods in larger games? Could the authors please comment on the severity of this limitation and discuss approaches the field has made towards addressing this? I think adding a paragraph on this would address a blind spot in the limitations. [1] Lanctot, Marc, et al. "A unified game-theoretic approach to multiagent reinforcement learning." [2] Sokota, Samuel, et al. "A Unified Approach to Reinforcement Learning, Quantal Response Equilibria, and Two-Player Zero-Sum Games." [3] Heinrich, Johannes, and David Silver. "Deep reinforcement learning from self-play in imperfect-information games." Ethical Review Concerns: I have no ethical concerns for this work. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for reviewing this paper and providing valuable comments. We are updating the manuscript according to the comments from all reviewers. Here, we reply to your questions and concerns about the paper. **[Questions For Authors]** _**Question**_: A critical limitation to scaling up ED with neural networks compared to methods like PSRO[1], MMD[2], and NFSP[3], is that an approximate best response operator would be required for every single gradient update. Wouldn't this easily cripple the usability of ED-based methods in larger games? Could the authors please comment on the severity of this limitation and discuss approaches the field has made towards addressing this? I think adding a paragraph on this would address a blind spot in the limitations. We agree that directly extending ED to deep RL algorithms faces the problem that we need to approximate a best response (BR) of the current $\mu$ in every single gradient update. However, this requirement is relaxed in the proposed method CED (Algorithm 2). Actually, the "approximate best response" that we require in the second inner loop is only at the level of each single state, given a state-action value function $Q^{\mu_\beta,\nu_\beta}$ preprocessed outside the main loop. This is in sharp contrast to computing an exact BR against the current $\mu$ and does not need a separate BR oracle at all. In our tabular experiments, we simply traverse all states and compute the current $\nu$ by Lemma 4.1. When we employ a function approximator for $\nu$, a direct approach is to update its parameters along the gradient of the current target in Line 191. Since this target only changes with $\mu(s)$ at each state $s\in S$, it is reasonable for $\nu$ to take a comparable number of gradient steps to $\mu$ in each iteration of the main loop. We can add a paragraph and discuss the benefit of this difference for potential deep RL extensions.
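For concreteness, the per-state computation described above can be sketched as follows. This is a hypothetical reconstruction rather than the paper's Lemma 4.1 (whose exact form is not reproduced here): we assume the min-player solves a KL-regularized linear objective at each state, which admits the standard Gibbs closed form.

```python
import numpy as np

def regularized_response(Q_s, mu_s, nu_beta_s, eps):
    """Per-state KL-regularized min-player response (hypothetical sketch).

    Solves min_nu <q, nu> + eps * KL(nu || nu_beta_s) over the simplex,
    where q[b] = sum_a mu_s[a] * Q_s[a, b]. The minimizer is the Gibbs
    distribution nu(b) proportional to nu_beta_s[b] * exp(-q[b] / eps).
    """
    q = mu_s @ Q_s                      # expected payoff of each min-player action
    logits = np.log(nu_beta_s) - q / eps
    logits -= logits.max()              # shift for numerical stability
    w = np.exp(logits)
    return w / w.sum()

# Toy single-state game: 2 max-player actions, 3 min-player actions.
Q_s = np.array([[1.0, 0.0, 2.0],
                [0.0, 1.0, 2.0]])
mu_s = np.array([0.5, 0.5])
nu_beta_s = np.array([1 / 3, 1 / 3, 1 / 3])
# Small eps: the response concentrates on the low-value actions 0 and 1.
print(regularized_response(Q_s, mu_s, nu_beta_s, eps=0.1))
# Large eps: the response stays close to the behavior policy nu_beta_s.
print(regularized_response(Q_s, mu_s, nu_beta_s, eps=100.0))
```

This toy run also matches the trade-off the authors discuss elsewhere in the thread: a large $\epsilon$ degenerates toward behavior cloning, while a small $\epsilon$ approaches an unregularized per-state best response.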
**[Other Strengths And Weaknesses]** _**Comment**_: My main complaint with this paper is that only toy games were tested on, and the authors did not consider an extension of CED to function approximation. I think doing so would have made the paper significantly stronger. Thank you for this comment, and we agree that it is more convincing to evaluate CED under function approximation. Currently, we have implemented CED in a large-scale Markov game that simulates a two-team robotic combat scenario, where each team consists of three homogeneous robots. The game map is abstracted as a 100-node graph, where each robot can move to an unoccupied adjacent node or attack an enemy at each time step. The HP reduction is influenced by the terrain and actual distance between the attacker and the target. A GIF illustration for a complete game is provided in the anonymous link https://sites.google.com/view/icml-2025-9335/. We construct an offline dataset that contains $2000$ game trajectories, where the actual behavior policy for both teams is a cooperative MARL policy previously trained against a rule-based opponent. We first use supervised behavior cloning (BC) to approximate the behavior policy from the dataset. This corresponds to the first step of CED (Algorithm 2), and we verify that the learned behavior policy has a win rate around 50% against the actual behavior policy. Therefore, we simply use the win rate against the learned behavior policy (i.e., the result of behavior cloning) as the performance measure. We compare the last-iterate performance of CED with offline self-play (OSP in [4]) under the same network architecture. We consider initializations under either behavior policy or random policy. Figure A2 in the link shows that, under either initialization, both CED and OSP eventually outperform BC-approximated behavior policy, and CED has a comparatively better learning performance than OSP. 
Figure A3 shows the tested win rates of the four learned policies against each other. While the initializations under behavior policy have a clear advantage over random initializations, random-initialized CED still has a close win rate against BC-initialized OSP under the same number of iterations. **[Other Comments Or Suggestions]** _**Comment**_: The y-axes for Figure 2 need to be labelled. More descriptive figure captions in general would improve readability greatly. Thank you for this comment. In our revision, we have labeled the y-axes for Figure 2 and provided more descriptive captions for the figures. **References**: [1] Lanctot, Marc, et al. "A unified game-theoretic approach to multiagent reinforcement learning." [2] Sokota, Samuel, et al. "A Unified Approach to Reinforcement Learning, Quantal Response Equilibria, and Two-Player Zero-Sum Games." [3] Heinrich, Johannes, and David Silver. "Deep reinforcement learning from self-play in imperfect-information games." [4] Chen, Jingxiao, et al. "Offline Fictitious Self-Play for Competitive Games." Thanks again for your comments. We are looking forward to having further discussions with you. --- Rebuttal Comment 1.1: Comment: Thanks for the clarifications and additional results. Both of my concerns are somewhat mitigated, and I am leaning towards Accept. --- Reply to Comment 1.1.1: Comment: We are delighted that our response and additional results help to mitigate your concerns. Thank you again for providing the valuable comments and helping us further improve this paper.
Summary: This paper introduces Constrained Exploitability Descent (CED), a novel model-free offline reinforcement learning algorithm for adversarial Markov games (MGs). The authors demonstrate, both theoretically and empirically, that, unlike in MDPs, an optimal policy can be learned under policy constraints in adversarial MGs. They prove that CED converges to an unexploitable min-player policy under uniform coverage without relying on generalized gradients. Experiments in multiple game scenarios validate these theoretical results, and similar to single-agent offline RL algorithms, CED can improve the behavior policy even with non-uniform data coverage. Claims And Evidence: See methods and evaluation below Methods And Evaluation Criteria: * The proof of Theorem 1 relies on the assumption that $\frac{1}{\epsilon} \rightarrow 0$; I am curious whether the empirical performance will improve with increasing $\epsilon$. Theoretical Claims: Seems correct to me. Experimental Designs Or Analyses: Simple setup, but sufficient to support the goal of this method Supplementary Material: I only quickly browsed the proof, and it seems fair to me. Relation To Broader Scientific Literature: Contributes to game theory / multi-agent offline RL Essential References Not Discussed: I think the authors may already cover most of the references. Other Strengths And Weaknesses: Strengths: * The paper presents solid theoretical results, proving that CED converges to a stationary point in deterministic two-player zero-sum Markov games, given the assumption of uniform data coverage. * CED does not rely on generalized gradient computation. Weaknesses: * Theorem 5.2 only provides an asymptotic convergence analysis under the assumption of uniform coverage Other Comments Or Suggestions: N/A Questions For Authors: * Could you elaborate more on the novelty of the proposed method? It seems that the contribution is in terms of the improvement. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for reviewing this paper and providing valuable comments. We are updating the manuscript according to the comments from all reviewers. Here, we reply to your question and concern about the paper. **[Questions For Authors]** _**Question**_: Could you elaborate more on the novelty of the proposed method? It seems that the contribution is in terms of the improvement. Yes, we can provide more explanations on the novelty of CED. While the proposed method resembles ED, the learning behaviors are quite different. ED exhibits best-iterate convergence, while CED guarantees last-iterate convergence. The improvement from best-iterate convergence to last-iterate convergence is usually at the sacrifice of policy optimality. Actually, after we apply divergence regularization to the computation of the min-player policy $\nu$, the convergence of the max-player policy $\mu$ improves, but the convergent point is no longer a Nash equilibrium policy. The CED method, however, is established upon the surprising observation that the opponent policy $\nu$ can instead preserve the property of being unexploitable as long as no explicit regularization is applied to the update of $\mu$. That is why we apply direct policy constraint on $\mu$ rather than policy penalty under the offline setting. The proposed method eventually guarantees last-iterate convergence, policy unexploitability, and bounded distributional shift at the same time. **[Methods And Evaluation Criteria]** _**Comment**_: The proof of theorem 1 relies on the assumption that $\frac{1}{\epsilon}\to\infty$, I am curious about if the empirical performance will be improved with increasing $\epsilon$. Thank you for this comment. Actually, while the assumption $\frac{1}{\epsilon}\to\infty$ theoretically guarantees the convergence of CED, it is not a necessary condition in practical games. 
As is stated in the second paragraph of Section 6.3, it does not affect convergence to use a small (but not too small) regularization parameter $\epsilon$ in the soccer game. As is shown in Figure 3 (right), while CED cannot converge under $\epsilon=10^{-4}$, it actually converges when $\epsilon\geq 10^{-3}$. Besides, the performance under $\epsilon=10^{-3}$ is better than under $\epsilon=10^{-2}$. Therefore, the empirical performance of CED is not guaranteed to improve with increasing $\epsilon$. Empirically, there is a trade-off with respect to the selection of $\epsilon$. If $\epsilon$ is very large, the computation of $\nu$ is reduced to behavior cloning. The equilibrium property of the converged $\bar{\nu}$ is no longer guaranteed since the interior-point premise for Theorem 5.6 can hardly be satisfied. From our experience, as long as $\epsilon$ is sufficiently large for the practical convergence of CED, a relatively small $\epsilon$ generally guarantees a relatively low NashConv of the learned policy $\nu$ under the same number of iterations. Thanks again for your comments. We are looking forward to having further discussions with you.
Non-asymptotic Error Bounds in $\mathcal{W}_2$-Distance with Sqrt(d) Dimension Dependence and First Order Convergence for Langevin Monte Carlo beyond Log-Concavity
Accept (poster)
Summary: When generating samples from a target distribution $\pi$ in a large dimension $d$ -- including when the normalization constant is unknown -- one often employs Langevin Monte Carlo (LMC). This method starts by constructing a Langevin diffusion whose invariant distribution matches the desired target distribution and then runs a discretized version of the diffusion to generate samples. Since the discretization stepsize $h$ introduces some error, these samples are not exactly from the target distribution. This paper, along with a vast literature before it, aims to quantify the rate of convergence of LMC samples toward the target distribution in the $L_2$ Wasserstein metric, as a function of $h$ and $d$. The paper argues, under a wide set of assumptions, e.g. that the target distribution satisfies a log-Sobolev inequality and a dissipativity condition, that the error rate is $\tilde{O}(\sqrt{d}h)$, which is state of the art. Crucially, this improves upon past work that assumes the target distribution must be strongly log-concave. The authors also provide a framework for proving non-asymptotic results for other samplers, namely the projected LMC sampler pLMC. ## update after rebuttal Based on the authors' feedback and other reviews, I'm inclined to keep my score of a weak accept. Claims And Evidence: Yes, the simplified theoretical argument in the paper seems reasonable, and all assumptions required to obtain the desired bounds on the LMC convergence are clearly laid out. The theoretical arguments are corroborated by some empirical experiments to demonstrate that their rates of convergence are followed in the case of a Gaussian mixture model target distribution. Methods And Evaluation Criteria: The core contribution of the paper is theoretical, although the empirical experiment offered in Section 4 does align with their theoretical prescriptions.
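The LMC recursion summarized above is short to state. Below is a minimal sketch (an illustrative toy, not the paper's Gaussian-mixture experiment) for a standard Gaussian target, chosen because the stationary law of the discretized chain is known in closed form:

```python
import numpy as np

def lmc(grad_U, x0, h, n_steps, rng):
    """Langevin Monte Carlo: x_{k+1} = x_k - h * grad_U(x_k) + sqrt(2h) * xi_k."""
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x = x - h * grad_U(x) + np.sqrt(2.0 * h) * rng.standard_normal(x.shape)
    return x

# Standard Gaussian target in d dimensions: U(x) = |x|^2 / 2, so grad_U(x) = x.
# Run 2000 independent chains in parallel (one per row) and inspect variances.
rng = np.random.default_rng(0)
d, h, n_chains = 5, 0.05, 2000
samples = lmc(lambda x: x, np.zeros((n_chains, d)), h, 500, rng)
# For this linear drift the chain is an exact AR(1) process with stationary
# variance 1 / (1 - h/2), so the sample variances carry an O(h) bias.
print(samples.var(axis=0))
```

Because the bias $1/(1-h/2) - 1 = O(h)$ can be read off exactly here, this kind of toy target is a convenient sanity check for the first-order-in-$h$ rates the paper establishes.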
Theoretical Claims: I briefly investigated some of the proofs for the arguments supplied (Appendix B and C) and saw no obvious errors. Experimental Designs Or Analyses: The empirical work in this paper is light, but what the authors have provided appears valid. Supplementary Material: I looked through some parts of the Appendix, mostly parts B and C. Relation To Broader Scientific Literature: The theoretical work to quantify the convergence rates of LMC is mostly of theoretical interest, but there is a good chance that ideas from this work could lead to a practical variation of LMC that achieves better convergence properties. Essential References Not Discussed: To the best of my knowledge (which in this field is not up to date) the authors have included the necessary references. Other Strengths And Weaknesses: For the most part, the paper is very thorough and well written. The theoretical arguments are carefully laid out, and a simplified version in the main paper offers guidance on how to read the proofs. I found the comparison to other work also elucidating, as this field has many papers which all have relatively small but significant differences in the error bounds they can provide. The main novelty of the paper appears to be the way that the authors decompose the problem into two pieces: a finite-time mean-square fundamental convergence theorem for SDEs that quantifies the discretization error accumulated by LMC, and an appeal to ergodicity that bounds the error from only running a finite number of LMC steps. This does seem to be a significant approach for tackling LMC convergence when the target distribution is appropriately behaved. This does require that the target distribution satisfy a log-Sobolev inequality and dissipativity condition, which may be a larger assumption than previous work. This may be the work's biggest weakness.
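Schematically, this two-piece decomposition can be written as follows, where $\nu_k$ denotes the law of the $k$-th LMC iterate, $\mu_t$ the law of the continuous diffusion at time $t$, $t_k = kh$, and $C_1, C_2, c$ are generic placeholder constants (not the paper's explicit values):

```latex
\begin{align*}
W_2(\nu_k, \pi)
  &\le \underbrace{W_2(\nu_k, \mu_{t_k})}_{\text{finite-time discretization error}}
     + \underbrace{W_2(\mu_{t_k}, \pi)}_{\text{ergodicity of the diffusion}} \\
  &\le C_1 \sqrt{d}\, h + C_2\, e^{-c\, t_k}\, W_2(\mu_0, \pi).
\end{align*}
```

Taking $t_k \gtrsim \log(1/h)/c$ makes the second term of order $h$, which is why the relevant time horizon only needs to grow logarithmically in the accuracy.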
Other Comments Or Suggestions: There are a couple possible typos: L431: It should read "probability distributions" Figure 1(c) and (d): Should the x-axis be "stepsize" instead of timesteps? It seems that the discretization error should decrease with the number of time steps, but perhaps I'm confused by the terminology. Questions For Authors: [Q1] While most assumptions laid out in this paper are relatively simple to understand, Assumption 3.3 seems less obvious. How severe of an assumption is this? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## Response to Reviewer rDrb Thank you for your valuable feedback on our paper. We are grateful for your thoughtful comments, which have guided us in refining the manuscript. Here, we address each of your questions in detail and highlight the changes made accordingly. ### About *Weakness* > This does require that the target distribution satisfy a log-Sobolev inequality and dissipativity condition, which may be a larger assumption than previous work. This may be the work's biggest weakness. **Response**: Thanks a lot for your valuable comment. The log-Sobolev inequality and dissipativity condition are indeed stricter than the Talagrand transport inequality and the Poincaré inequality. However, the error bound $O(\sqrt{d}h)$ was obtained for LMC under a strongly log-concave condition, see Li et al. (2022). The main aim of this work is to answer the key question: *Can the error bound $O(\sqrt{d}h)$ still hold true for LMC without the strongly log-concave condition?* As discussed above, compared with the strongly log-concave condition, the log-Sobolev inequality and dissipativity condition are weaker. We aim to carry out the error analysis under more relaxed conditions in future work. ### About *Comments Or Suggestions* > L431: It should read "probability distributions". **Response**: Corrected! Thanks! > Figure 1 (c) and (d): Should the x-axis be "stepsize" instead of timesteps? It seems that the discretization error should decrease with the number of time steps, but perhaps I'm confused by the terminology. **Response**: Thanks for pointing out this issue. You are absolutely right that the x-axis is "stepsize". We are sorry for this and will fix this typo in the revision. ### About *Questions* > While most assumptions laid out in this paper are relatively simple to understand, Assumption 3.3 seems less obvious. How severe of an assumption is this? **Response**: Thanks for your comment.
Assumption 3.3 means that the moments of a numerical algorithm should be uniformly bounded in time, which is essential for the infinite-time convergence analysis. In Section 2, concrete numerical methods (such as the LMC algorithm and the pLMC algorithm) are provided satisfying Assumption 3.3 under some assumptions (see Lemma 2.9 and Lemma 2.15). In the revision, we will add some comments and discussion following Assumption 3.3 for better readability.
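To illustrate what a uniform-in-time moment bound of this kind asks for, one can monitor the empirical second moment of LMC iterates over a long run. The sketch below uses a double-well potential (dissipative but non-convex) as an illustration of the concept; the potential is our own choice, not the paper's formal verification:

```python
import numpy as np

# Double-well potential U(x) = x^4/4 - x^2/2 (dissipative but non-convex),
# with grad_U(x) = x^3 - x. Track the empirical second moment of many
# parallel 1-d LMC chains over a long horizon.
rng = np.random.default_rng(1)
h, n_chains, n_steps = 0.01, 2000, 3000
x = 2.0 * rng.standard_normal(n_chains)  # deliberately spread-out start
second_moments = []
for k in range(n_steps):
    x = x - h * (x**3 - x) + np.sqrt(2.0 * h) * rng.standard_normal(n_chains)
    if k % 100 == 0:
        second_moments.append(float(np.mean(x**2)))
# A uniform-in-time moment bound means this sequence stays bounded for all k,
# not merely on a fixed finite horizon.
print(max(second_moments), second_moments[-1])
```

The recorded sequence starts near the spread-out initial second moment and settles near the stationary value, staying bounded throughout; this long-run boundedness is exactly the behavior the assumption formalizes.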
Summary: This paper addresses the challenge of sampling from non-log-concave distributions, including those that satisfy a dissipativity condition or a log-Sobolev inequality. The authors approach this problem by discretizing the Langevin dynamics and establish a state-of-the-art convergence rate of $d^{1/2}\varepsilon^{-1}$ in the $W_2$ distance, under the assumptions of gradient Lipschitz continuity and linear growth of the third derivative. The theoretical findings are further verified by numerical experiments. Claims And Evidence: The paper's primary claims are supported by a rigorous theoretical framework and are backed by numerical experiments on controlled examples. However, there are areas where the evidence is less convincing: - the optimal error bound sounds confusing since this paper gives an improved bound under stronger assumptions and there is no lower bound to compare with the upper bound. Methods And Evaluation Criteria: The proposed methods are well-suited to the problem. The paper develops a robust uniform-in-time convergence framework and provides optimal error bounds in the $W_2$ distance, which is a standard and relevant metric for evaluating sampling algorithms. The use of synthetic benchmarks, such as Gaussian mixtures and double-well potentials, is appropriate for initial validation, though further testing on diverse or real-world datasets could enhance the evaluation. Theoretical Claims: I only read the proof in the main body. Experimental Designs Or Analyses: I only checked it at a high level. Supplementary Material: I did not review it. Relation To Broader Scientific Literature: Their framework closely resembles traditional SDE discretization methods. However, while most analyses assume a bounded time horizon, in the context of sampling the relevant time horizon scales as $\log d$. In this paper, the authors explicitly characterize how the error depends on $T$.
Essential References Not Discussed: I think the related works appear to be appropriately cited and discussed in the paper. Other Strengths And Weaknesses: There are too many assumptions and the presentation could be clearer. Other Comments Or Suggestions: - Line 330, Assumptions 3.1, 3.4, 3.5, 3.3 -> Assumptions 3.1, 3.3, 3.4, 3.5 - Line 331 assssumed -> assumed - In Assumption 2.6, There exists a dimension independent constant -> There exists a dimension‐independent constant - Line 424 This framework can also applies to -> This framework can also be applied to Questions For Authors: - Can your framework be extended to other metrics? - Beyond the difference in time horizon, how does your framework in Section 3 differ from that of Tretyakov & Zhang (2013)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## Response to Reviewer H65o We sincerely appreciate your time and effort in reviewing our manuscript. Your insightful comments and constructive suggestions have greatly helped us improve the quality of our work. Below, we provide a point-by-point response to each of your concerns, along with the corresponding revisions in the manuscript. ### About *Claims and Evidence* > However, there are areas where the evidence is less convincing: the optimal error bound sounds confusing since this paper gives an improved bound under stronger assumptions and there is no lower bound to compare with the upper bound. **Response**: Thank you for pointing this out. We apologize for any confusion caused by this statement. You are right that there is no lower bound here, and we will remove the word "optimal" and revise this statement throughout the paper, following your comments. ### About *Weakness* > There are too many assumptions and the presentation could be clearer. **Response**: Thanks. We agree that the current presentation can be improved for better readability. Indeed, Section 3 is a general framework of uniform-in-time convergence for general SDEs and Section 2 is focused on the particular Langevin SDEs. In the revision, we plan to reformulate "assumptions" in Section 3 as several “conditions” (e.g. $H_1$, $H_2$,...). Then Section 2 puts some assumptions on Langevin SDEs so that the conditions in Section 3 can be satisfied and the theoretical results there can be applied to Langevin SDEs. If you have any other good ideas, please let us know. Thanks a lot. ### About *Comments Or Suggestions* > - Line 330, Assumptions 3.1, 3.4, 3.5, 3.3 -> Assumptions 3.1, 3.3, 3.4, 3.5. > - Line 331 assssumed -> assumed. > - In Assumption 2.6, There exists a dimension independent constant -> There exists a dimension‐independent constant. > - Line 424 This framework can also applies to -> This framework can also be applied to. > **Response**: Corrected. Thanks! ### About *Questions* >1.
Can your framework be extended to other metrics? **Response**: Thanks for your question. Yes! Our framework can be extended to other metrics, and it relies on the three conditions below: - the metric satisfies the triangle inequality, as do the total variation (TV) and $W_p$, $p\in[1,\infty)$, distances; - the underlying Langevin dynamics is exponentially ergodic with respect to the chosen metric; - the sampling algorithm admits finite-time convergence and uniform-in-time moment bounds. Once these properties are verified, the general framework in Section 3 remains applicable. This would be an interesting direction for our future research. > 2. Beyond the difference in time horizon, how does your framework in Section 3 differ from that of Tretyakov and Zhang (2013)? **Response**: Thanks a lot. To be honest, the finite-time convergence theorem (Theorem 3.7) in our paper, as well as Tretyakov and Zhang (2013), follows the original idea of Milstein (1988). The difference is that we need to provide explicit dependence on time and other parameters in the error bound, which is not done in Milstein (1988) and Tretyakov and Zhang (2013). Such explicit dependence, particularly on time $T$, is essential, as it allows us to combine the finite-time error estimates (Theorem 3.7) and the exponential ergodicity of the SDEs to establish the uniform-in-time convergence result (Theorem 3.9). As commented by the reviewer cxoz: "The authors establish this result through a new discretization analysis for SDEs which combines uniform-in-time LMC moment bounds with a finite-time fundamental mean-square convergence theorem".
Summary: This paper establishes an almost optimal convergence rate of $\tilde{O}(\sqrt{d}/\epsilon)$ in $W_2$-distance for Langevin Monte Carlo (LMC) when the target measure satisfies the log-Sobolev inequality, along with dissipativity and smoothness conditions. The authors establish this result through a new discretization analysis for SDEs which combines uniform-in-time LMC moment bounds with a finite-time fundamental mean-square convergence theorem. For non-smooth settings where the gradient norm may grow super-linearly, the authors study a projected version of LMC and establish $W_2$ convergence bounds using their discretization analysis. Claims And Evidence: The claims are supported by rigorous statements and proofs. Methods And Evaluation Criteria: Not applicable. Theoretical Claims: I didn't verify the correctness of the proofs, but the overall strategy seems sound. Experimental Designs Or Analyses: Not applicable since the paper is mostly theoretical. Supplementary Material: I didn't review the supplementary material. Relation To Broader Scientific Literature: The analysis of Langevin Monte Carlo is a fundamental problem in sampling and of interest to many researchers in the ICML community. Beyond establishing the optimal $W_2$ convergence rate for LMC under the log-Sobolev inequality and smoothness, the discretization analysis introduced here may be used in future analyses of LMC and its variants as well. Essential References Not Discussed: Most relevant references are cited. There are additional references in the *other comments or suggestions* section below whose discussion can help give a broader picture of LMC analysis. They are presented below. Other Strengths And Weaknesses: **Strenghts**: As mentioned above, the discretization analysis can open the room for novel analyses of LMC under different assumptions or for analyzing different variants of LMC. 
The paper also handles locally Lipschitz potentials, which is a valuable contribution that has received less attention in the literature. **Weaknesses**: * There is not sufficient discussion on the implications of Assumption 2.2, and I think certain statements about when it holds might be incorrect. In fact, this assumption may not even be necessary; I think there are alternative approaches to prove continuous-time exponential ergodicity in $W_2$ under a log-Sobolev inequality, discussed below. * The dependence on most constants is implicit, which makes interpreting the results for certain examples complicated. The dimension-dependence of some constants is also not clear. Other Comments Or Suggestions: * I believe there is an alternative approach to prove Proposition 2.5 which does not rely on Assumption 2.2. Specifically, the log-Sobolev inequality guarantees that $\mathrm{KL}(\nu p_t, \pi) \leq e^{-4t/\rho} \mathrm{KL}(\nu, \pi)$. Moreover, a log-Sobolev inequality implies Talagrand's transport inequality with the same constant, i.e. $W_2(\nu p_t, \pi) \leq \sqrt{\rho \mathrm{KL}(\nu p_t, \pi)}$. Therefore $W_2(\nu p_t, \pi) \leq e^{-2t/\rho} \sqrt{\rho \mathrm{KL}(\nu, \pi)}$, which only requires Assumption 2.3. * Using the above argument, in fact we can weaken Assumption 2.3 to only a Poincaré inequality (as done in Chewi et al., 2024 for LMC) or even to weak Poincaré inequalities that cover heavy-tailed distributions (as done in Mousavi-Hosseini et al., 2023 for LMC). While the transport inequality no longer holds here, $W_2$ can be bounded with Rényi distances (see Chewi et al., 2024 and Mousavi-Hosseini et al., 2023 for the respective settings). Combined with the discretization analysis here, these approaches can lead to new error bounds for LMC in sub-exponential or heavy-tailed settings. * For better readability, the authors can change the list of assumptions in line 231 to Assumptions 2.1-2.3 and Assumptions 2.12-2.14. References: S. Chewi et al.
"Analysis of Langevin Monte Carlo from Poincaré to Log-Sobolev." Foundations of Computational Mathematics 2024. A. Mousavi-Hosseini et al. "Towards a Complete Analysis of Langevin Monte Carlo: Beyond Poincare Inequality." COLT 2023. Questions For Authors: 1. I don't see how Assumption 2.2 follows from Assumption 2.6. A simple application of the Cauchy-Schwarz and triangle inequalities results in $$\langle x - y, \nabla U(x) - \nabla U(y)\rangle \geq -(2L'_1 d^{1/2} + L_1 \vert x \vert + L_1 \vert y \vert)\vert x - y\vert$$ But $\vert x \vert, \vert y \vert$ are not bounded. Also, dependence on the initial Wasserstein distance is not clear. It seems more like Assumption 2.2 is a form of relaxed convexity. For $L = 0$, it exactly implies convexity of the negative log-density. There should be a discussion on when this assumption is satisfied. 2. The LSI constant $\rho$ seems to be missing somewhere in Proposition 2.5 and Theorems 2.10 and 2.16, as it controls the convergence rate. 3. How does $C_\nabla$ of Lemma 2.15 depend on dimension? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ## Response to Reviewer cxoz We really appreciate your careful reading and insightful comments. We will respond to each comment below and revise the manuscript according to these suggestions. ### About *Weakness* >1. There is not sufficient discussion on the implications of Assumption 2.2, …, discussed below. **Response**: Thanks. You are absolutely right. We apologize for the confusion caused by a typo here. The correct condition in Assumption 2.2 should be $$\langle x-y,\nabla U(x)-\nabla U(y)\rangle\geq -L|x-y|^2.$$ We confirm that the analysis throughout the paper is based on the correct form above, and we will fix this typo in the revision. >2. The dependence on most constants is implicit, …. The dimension-dependence of some constants is also not clear. **Response**: Thank you. We apologize for any confusion caused by the unclear dependence on the constants. We will discuss the dependencies of the parameters in detail and explicitly explain how they depend on the dimension $d$ in the revision. ### About *Comments or Suggestions* >1. I believe there is an alternative approach to prove Proposition 2.5 which does not rely on Assumption 2.2. …, which only requires Assumption 2.3. >2. Using the above argument, …, these approaches can lead to new error bounds for LMC in sub-exponential or heavy-tailed settings. **Response**: Thanks for your insightful and interesting discussion. The direction you suggested is really promising for extending the analysis to more general settings. However, essential difficulties still exist. On the one hand, we believe Proposition 2.5 can hold without Assumption 2.2, but the analysis of finite-time mean-square convergence of numerical methods with convergence rates essentially relies on the use of Assumption 2.2, which is nothing but the one-sided Lipschitz (also called monotonicity) condition on the drift coefficient $-\nabla U(x)$.
As far as we know, without the monotonicity condition, obtaining mean-square convergence rates is highly non-trivial for numerical SDEs (see Hutzenthaler and Jentzen, Ann. Probab., 2020). On the other hand, our approach relies on two key properties: (i) the chosen metric must satisfy the triangle inequality; (ii) the metrics on both sides of the inequality must remain consistent. Since the KL divergence does not satisfy the triangle inequality, inequalities such as KL$(\nu p_t,\pi)\leq e^{-4t/\rho}$KL$(\nu,\pi)$ do not seem to work. We sincerely thank the reviewer for bringing the two works of Chewi et al. (2024) and Mousavi-Hosseini et al. (2023) to our attention. The ideas in these works are very inspiring, and we will definitely aim to extend our framework to incorporate weaker functional inequalities and alternative metrics in future research. This is nontrivial and will require more time. In the revision, we will cite and comment on these two papers. >3. For better readability, the authors can change the list of assumptions in line 231 to Assumptions 2.1-2.3 and Assumptions 2.12-2.14. **Response**: Thanks for your suggestions. We will do this in the revision. ### About *Questions* >1. I don’t see how Assumption 2.2 follows from Assumption 2.6. …. There should be a discussion on when this assumption is satisfied. **Response**: Thanks a lot for your helpful comments. As mentioned in our response to the first comment in *About Weakness*, this issue comes from a typo in the previous manuscript, which will be corrected in the revision. >2. The LSI constant $\rho$ seems to be missing from Proposition 2.5 and Theorems 2.10 and 2.16, as it controls the convergence rate. **Response**: Thanks a lot for your constructive suggestions. In the revision, we will explicitly show it in the corresponding statements. >3. How does $C_{\nabla}$ of Lemma 2.15 depend on dimension? **Response**: Thanks a lot for pointing out this issue.
The constant $C_{\nabla}$ is independent of the dimension, as it can be written as $C_{\nabla}=L_1'+L_1$. Please refer to Lemma 3.3 in Pang et al. (2025) for details.
Summary: The authors derive a sampling error bound in Wasserstein-2 for a discrete-time discretization of Langevin Monte Carlo. The bound contains two parts: an error term due to finite-time truncation of the Langevin dynamics, and an error term due to discretization over a finite time horizon. The innovation of this work appears to be in the analysis of the finite-time discretization term, which is done entirely through moment-based calculations. Notably, they bound the error accumulation over short time intervals (Lemma E.1) and long time intervals (Lemma E.2) using rather many assumptions on the regularity of the drift, plus dissipativity of the target measure. The dissipativity condition implies some uniform-in-time moment estimates (Lemma 2.4) on the true and discretized processes, and the regularity assumptions on the drift are used to transfer these moment estimates into bounds on the discretization error over short time intervals. This analysis is similar in style to bounds on ODE discretization error, where $O(h)$ step size dependence is typical. However, to carry out this approach for stochastic dynamics the authors must apply Itô's formula to the drift $\nabla U(x)$, forcing them to use Assumption 2.8, linear growth of $|\nabla(\Delta U(x))|$, which appears to be uncommon in the literature. Claims And Evidence: The theoretical claims made in this work are supported by clear and convincing evidence in the form of proofs. Methods And Evaluation Criteria: N/a Theoretical Claims: I read the proofs of Theorem 2.1, Lemma 2.4, Theorem 3.7, Lemma E.1, and Lemma E.2. Experimental Designs Or Analyses: Figures 1(c) and 1(d) are a convincing proof of concept for this work. Supplementary Material: I reviewed sections A, C, and E. Relation To Broader Scientific Literature: According to Table 1, this work proves an optimal error bound of $\tilde{O}(d^{1/2} \epsilon^{-1})$ without log-concavity. A bound of the same order was already shown in (Li et al.
2022) using both log-concavity and the third order growth condition on $U(x)$ required by this work. Essential References Not Discussed: Please discuss how the proof in this work differs from that of Li et al. 2022. How (if at all) is the log-Sobolev condition used in the discretization error bound over a finite time horizon? I ask because it is important to clarify which parts of the present work are original relative to Li et al. 2022, which uses similar assumptions and a similar technique (Gronwall-based discretization error bounds via direct control of moments). Does the finite time discretization error analysis of Li et al. 2022 make use of log-concavity? If not, would their approach work equally well using only a log-Sobolev inequality to bound error due to truncation at a finite time? Other Strengths And Weaknesses: The major weakness of this work is that it is rather unclear what the originality is, given the significant similarities between it and Li et al. 2022. Relaxing the assumptions of Li et al. 2022 from log-concavity to merely a log-Sobolev inequality is rather incremental if the proof techniques are largely the same. Another weakness of this work is that it is hard to read because of many different assumptions introduced throughout. I counted 9 assumptions spread across Section 2, only 5 of which are required by Theorem 2.10, but then four more assumptions are stated in Section 3, which contains Theorem 3.7 that is essential in the proof of Theorem 2.10. Does Theorem 2.10 also require the assumptions in Section 3? Are any of these assumptions redundant? Could the presentation be simplified so it is easier to keep track of the many requirements of this analysis? Other Comments Or Suggestions: N/a Questions For Authors: 1. Does Theorem 2.10 require Assumptions 3.1, 3.3, 3.4, 3.5 indirectly through its use of Theorem 3.7? If so, please add them to the statement of Theorem 2.10 so that the statement is correct. Are any of the assumptions stated in Section 3 redundant?
2. How does the proof technique in this work differ from that of Li et al. 2022? What are the original contributions contained in the techniques used by the present paper? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## Response to Reviewer DL9W We sincerely thank the reviewer for constructive suggestions and comments. Next we address all comments point-by-point and will revise the manuscript to incorporate these suggestions. ### About *Summary* > However, to carry out … Assumption 2.8 … which appears to be uncommon in the literature. **Response**: Thanks. We would like to mention that Assumption 2.8 comes from Li et al. (2022), where the authors remarked that it is not necessarily stronger than the widely used Hessian Lipschitz condition (see Section 4 in Li et al. 2022). In fact, as pointed out by Li et al. 2022, there exist potentials, e.g. $U(x)=x^{4}$, that satisfy Assumption 2.8 but violate the Hessian Lipschitz condition. Moreover, it is shown that a class of Gaussian mixtures satisfy Assumption 2.8. ### About *Essential References Not Discussed* >1. Please discuss how the proof in this work differs from that of Li et al. 2022. **Response**: Thanks. To show the difference between our work and Li et al. (2022) clearly, it is worthwhile to illustrate the main idea of Li et al. (2022). Indeed, their arguments are based on a direct long-time mean-square convergence analysis of the numerical method, which essentially relies on the use of the log-concavity condition. To see this fact, we provide a key idea behind their error analysis. Suppose we get the following error estimate: $$E[|X_{t_{k+1}}-Y_{k+1}|^2]\leq E[(1+\epsilon h)|X_{t_k}-Y_k|^2-2h\langle X_{t_k}-Y_k , \nabla U(X_{t_k})-\nabla U(Y_k)\rangle]+c_2 h^3, $$ where $\epsilon >0$ can be sufficiently small. Then the log-concavity condition, i.e. $$\langle x-y,\nabla U(x)-\nabla U(y)\rangle \geq c_1 |x-y|^2,\quad c_1>0,$$ is essentially used here to arrive at the contraction: $$E[|X_{t_{k+1}}-Y_{k+1}|^2]\leq\big(1-(2c_1-\epsilon)h\big)E[|X_{t_{k}}-Y_{k}|^2]+c_2h^3, $$ for $0<\epsilon<2c_1$. Armed with the contraction, one can easily get the uniform-in-time error bound by iteration.
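To spell out the final step (a standard computation, added here for completeness, with $t_N=Nh$ and constants as in the contraction above): iterating the contraction over $N$ steps and bounding the geometric series yields $$E[|X_{t_N}-Y_N|^2]\leq \big(1-(2c_1-\epsilon)h\big)^N E[|X_{t_0}-Y_0|^2]+c_2h^3\sum_{j=0}^{N-1}\big(1-(2c_1-\epsilon)h\big)^j\leq e^{-(2c_1-\epsilon)t_N}E[|X_{t_0}-Y_0|^2]+\frac{c_2h^2}{2c_1-\epsilon},$$ a bound that is uniform in $N$ and of mean-square order $O(h)$.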
However, without the log-concavity, this framework does not yield uniform-in-time error bounds. In contrast to Li et al. (2022), we develop a new framework of uniform-in-time error analysis without the log-concavity. The proof consists of two key components. First, we derive the finite-time mean-square convergence error bounds for the LMC, which grow exponentially with respect to the time length $T$. Arguments for this step require only a Lipschitz condition and avoid any convexity assumption. Second, we obtain uniform-in-time error bounds by relying on the exponential ergodicity of the Langevin dynamics, which is available under the one-sided Lipschitz condition and the log-Sobolev inequality (see Subsection 2.4 for details). Moreover, our framework also works for the case of non-globally Lipschitz continuous $\nabla U$, which is not even investigated in Li et al. (2022). To summarize, the approach of error analysis is essentially different from Li et al. (2022), and weaker conditions (no log-concavity and no global Lipschitz condition) are used to cover more problems. >2. How (if at all) is the log-Sobolev condition … due to truncation at a finite time? **Response**: Thanks. The LSI is not used in the finite-time error analysis of our paper. It is only required to obtain the exponential ergodicity of the Langevin dynamics (see Proposition 2.5 and Step 3 in Subsection 2.4 for details). As explained above, the authors of Li et al. (2022) did not perform a finite-time error analysis but used the log-concavity essentially to directly carry out a long-time error analysis and derive an infinite-time convergence theorem. ### About *Weakness* >1. The major weakness … are largely the same. **Response**: We deeply apologize for not making the originality clear, causing confusion. As explained in our response to your previous comments, the approach of error analysis in our paper is essentially different from Li et al. (2022).
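Schematically (an illustrative sketch of the two-component framework described above, with $\nu$ the initial law, $(p_t)$ the Langevin semigroup, and $\pi$ the target), the two components are combined through the triangle inequality in the Wasserstein-2 metric: $$W_2\big(\mathrm{Law}(Y_N),\pi\big)\leq W_2\big(\mathrm{Law}(Y_N),\mathrm{Law}(X_{t_N})\big)+W_2\big(\nu p_{t_N},\pi\big),$$ where the first term is controlled by the finite-time mean-square analysis (with constants growing exponentially in $T=t_N$) and the second decays exponentially in $t_N$ by the LSI-based ergodicity; balancing the two terms yields the uniform-in-time bound.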
As commented by the reviewer cxoz: "The authors establish this result through a new discretization analysis for SDEs which combines uniform-in-time LMC moment bounds with a finite-time fundamental mean-square convergence theorem". >2. Another weakness … of this analysis? **Response**: Thanks. We apologize for the current presentation. Indeed, Section 3 is a general framework of uniform-in-time convergence for discretizations of general SDEs, independent of Section 2. So Theorem 2.10 does not use the assumptions in Section 3. Some assumptions in Section 2 for the particular Langevin SDEs can be regarded as particular cases of those in Section 3. But they are not redundant, and we agree that the current presentation can be improved in the revision for better readability. Due to the word limit, please refer to our responses to *About weakness* of the reviewer H65o for details. ### About *Questions* **Response**: Thanks. These concerns have been carefully addressed previously. We hope you are satisfied with our answers. Please tell us once you have further questions. --- Rebuttal Comment 1.1: Comment: I thank the authors for the informative and clarifying responses to my questions. I especially appreciate the very clear explanation of how this work differs from the approach taken by Li et al. 2022. Conditional on the proposed changes (see below) to make the assumptions clearer, I am willing to raise my score to 3. Proposed changes: "In the revision, we plan to reformulate the "assumptions" in Section 3 as several “conditions” (e.g., ...). Then Section 2 puts some assumptions on Langevin SDEs so that the conditions in Section 3 can be satisfied and the theoretical results there can be applied to Langevin SDEs." --- Reply to Comment 1.1.1: Comment: Thank you so much for raising the score to 3. As proposed, we will make these changes in the revision. Again, thanks for your help and suggestions.
DEFAME: Dynamic Evidence-based FAct-checking with Multimodal Experts
Accept (poster)
Summary: The authors introduce DEFAME, an automated fact-checking framework designed to process multimodal claims using multimodal evidence. DEFAME operates within a zero-shot MLLM pipeline structured into six stages: action planning, action execution (via multimodal web retrieval and GeoClip tool use), result summarization, reasoning about claim veracity, verdict prediction, and verdict justification. Evaluated on three established multimodal fact-checking datasets (AVERITEC, MOCHEG, VERITE) and a newly proposed dataset (ClaimReview2024+), DEFAME outperforms existing MLLMs and multimodal fact-checking approaches while generating higher-quality fact-checking reports compared to the base MLLM with CoT prompting. Claims And Evidence: Certain claims made in the submission lack sufficient supporting evidence: 1. Novelty of DEFAME: The authors claim that DEFAME is "the first multimodal AFC system that can handle multimodal claims as well as retrieve and process multimodal evidence" (Lines 72–74). However, this is fundamentally inaccurate, as prior work [1] has already implemented multimodal retrieval (both text and image) for multimodal misinformation detection and was evaluated on the same MOCHEG dataset. This undermines the claimed novelty of DEFAME. 2. State-of-the-art results: The claim that DEFAME "establishes new state-of-the-art results on three diverse and widely used benchmarks" (Lines 78-80) is only partially supported by Table 3. The table presents incomplete comparisons, as only GPT-4o and GPT-4o with CoT are evaluated on all four datasets, while other baselines are tested on at most two datasets. This weakens the claim that DEFAME definitively outperforms prior approaches. [1] Tahmasebi et al., Multimodal Misinformation Detection using Large Vision-Language Models, arXiv:2407.14321, 2024. 
(Published at CIKM 2024) Methods And Evaluation Criteria: The authors evaluate DEFAME on four datasets (three existing and one newly introduced), which represents a comprehensive selection. However, the inconsistency in evaluation metrics across datasets is not justified. Specifically, while Accuracy is reported for most datasets, F1 score is used for MOCHEG, without explanation. Theoretical Claims: N/A: the paper does not contain theoretical claims. Experimental Designs Or Analyses: While the authors claim that DEFAME achieves state-of-the-art performance across diverse benchmarks, the experimental results in Table 3 are incomplete. Specifically, apart from GPT-4o and GPT-4o with CoT, all other baselines are evaluated on at most two out of the four datasets, limiting the robustness of the comparison. Furthermore, on the newly introduced ClaimReview2024+ dataset, DEFAME is only compared against base MLLM approaches that lack task-specific framework design, web retrieval, and tool use, making its superiority expected. These gaps weaken the claim that DEFAME establishes a new state of the art. Additionally, Table 7 reveals significant efficiency concerns. DEFAME with GPT-4o requires around 28× the time and 21× the input tokens compared to GPT-4o with CoT, and even its ablated variants remain highly resource-intensive. This raises serious questions about DEFAME’s practicality for real-world deployment. Supplementary Material: I have reviewed the appendix in full detail. Relation To Broader Scientific Literature: Compared to the broader scientific literature, the conceptual and technical contributions of this paper remain highly limited. The claimed key novelty of DEFAME -- being "the first multimodal AFC system that can handle multimodal claims as well as retrieve and process multimodal evidence" (Lines 72–74) -- is inaccurate. 
Prior work [1] has already implemented multimodal evidence retrieval (text and image) for misinformation detection and was evaluated on the same MOCHEG dataset. DEFAME is a zero-shot MLLM pipeline for standard multimodal fact-checking, but its retrieval paradigm closely resembles existing retrieval-augmented MLLM approaches, such as [1] and MMD-Agent [2]. While DEFAME's integration of reverse image search and geolocation is novel in execution, these contributions are incremental given the lack of originality in problem formulation and overall framework design. [1] Tahmasebi et al., Multimodal Misinformation Detection using Large Vision-Language Models, arXiv:2407.14321, 2024. (Published at CIKM 2024) [2] Liu et al., MMFakeBench: A Mixed-Source Multimodal Misinformation Detection Benchmark for LVLMs. arXiv: 2406.08772, 2024. (Published at ICLR 2025) Essential References Not Discussed: Please refer to References [1] and [2] under "Relation To Broader Scientific Literature". Additionally, the authors fail to discuss SNIFFER [3], a representative work that leverages MLLMs for explainable out-of-context misinformation detection. Given SNIFFER’s focus on MLLM-empowered explainable detection, its omission further weakens the discussion of related work. [3] Qi et al., SNIFFER: Multimodal Large Language Model for Explainable Out-of-Context Misinformation Detection. CVPR 2024. Other Strengths And Weaknesses: Please refer to comments under "Relation To Broader Scientific Literature", "Methods And Evaluation Criteria", and "Experimental Designs Or Analyses". Other Comments Or Suggestions: N/A Questions For Authors: 1. Completion of Table 3: The performance of existing approaches beyond GPT-4o and GPT-4o with CoT remains unclear across all four datasets, and the evaluation metrics are inconsistent (Accuracy vs. F1 score). Specifically, how do retrieval-augmented approaches perform on the newly constructed ClaimReview2024+ dataset compared to DEFAME? 
The lack of such comparisons weakens the claim of DEFAME’s superiority. 2. Efficiency of Existing MLLM-Empowered Approaches: Table 7 shows that DEFAME consumes significantly more tokens and execution time than MLLM prompting, which is expected due to its evidence retrieval process. However, there is no direct comparison with other task-specific MLLM-empowered approaches that also utilize evidence retrieval. Without this, it remains unclear how DEFAME’s efficiency compares to competing baselines. 3. Explanation Quality Comparison: In Section 4.6, DEFAME’s explanation quality is only compared against bare MLLM prompting (GPT-4o with CoT), which lacks external evidence access. Given this, DEFAME’s superior performance is expected rather than insightful. A missing analysis is how DEFAME’s generated explanations compare to those from other retrieval-augmented, task-specific MLLM approaches. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the time invested in the review. Please find our response to your concerns below. ### First to handle multimodal claims and evidence The work by Tahmasebi et al. (CIKM 2024), i.e., LVLM4FV, was covered in our paper, most notably in the prior work overview (Table 1) and as a baseline in Table 3. Critically, LVLM4FV **is not designed to handle multimodal claims**: “Given a **textual** claim 𝑄 and a corpus of evidences C as input [...]”, p. 3. ### Key novelty Please refer to our response to Reviewer bvvH (“Originality and Theoretical Contribution”) for more details on the novelty of DEFAME. ### Prior work Thanks for pointing out SNIFFER. Please refer to our response to Reviewer K7Ta (“Comparison to Prior Work”). ### “Incomplete” Table 3 The other methods in Table 3 have not been evaluated on all benchmarks for several methodological reasons. Most importantly, the methods have different task specializations, targeting only a particular subtask of fact-checking: CFR, GPT-CoT, LVLM4FV, and MetaSum are limited to text-only claims (not provided by VERITE and partially CR+). CHASMA and AITR focus solely on OOC detection and both require visual input, making them inapplicable to text-only claims (like in AVeriTeC, MOCHEG, and partially CR+). For CFR, the code and model weights have not been publicly released. GPT-CoT requires gold evidence as input to achieve the reported competitive numbers, not available for VERITE and CR+, where evidence retrieval is considered an integral part of the fact-checking task. Moreover, GPT-CoT mainly builds on GPT-3.5-Turbo, implying that the much stronger GPT-4o (CoT) baselines provide a better comparison than GPT-CoT. CHASMA lacks a publicly available, trained model. Furthermore, CHASMA and AITR both require training, which is impossible for CR+ as there is no training data available. LVLM4FV, MetaSum, and AITR require a predefined evidence corpus for retrieval, which is infeasible to create for CR+.
Thus, it is not possible to run the previous methods on the benchmarks they were not designed for (or if it is technically feasible, it will produce meaningless results). With DEFAME, we deliver a method that - despite its generality - is able to beat even the specialized methods. ### Inconsistency in metrics Metrics (F1, accuracy) match those used in the original benchmarks and prior work for consistent intra-benchmark comparison. If the reviewer finds it helpful, we are open to adding F1 score as an additional metric to AVeriTeC and CR+ or, alternatively, add accuracy to MOCHEG. ### Efficiency We agree that DEFAME has high token usage and acknowledge that there is room to reduce it. As correctly anticipated, the high token consumption is due to the processing of external evidence. External evidence can incorporate entire webpages, full PDFs, and other long documents that Firecrawl can turn into Markdown representation. We truncate inputs only when they exceed the maximum context window of the MLLM, which is 128k tokens for the GPT models used. Since the GPT baselines do not process any external evidence, unsurprisingly, their token consumption is fairly low - *at the cost of random-like performance for unseen claims*. Table 7 also reveals that the integration of planning (DEFAME with GPT-4o) reduces token consumption by about 20K tokens (almost a third) compared to the planning-ablated variant “Static Actions,” which executes all available actions. Since efficiency was not a goal of DEFAME, we offer to add it as a future direction to the discussion section. It is hard to compare DEFAME’s efficiency with the methods in Table 3 due to (a) no reported resource usage (e.g., tokens or FLOPs), and (b) significant differences in architecture; also many are specialized for sub-tasks like OOC detection or evidence summarization and do not perform retrieval. Thus, direct efficiency comparison is not meaningful. 
--- Rebuttal Comment 1.1: Comment: I thank the authors for their clarification regarding LVLM4F and, accordingly, I have raised my score to 2. I have also reviewed the authors' response to Reviewer K7Ta regarding the “Comparison to Prior Work.” To clarify, my concern, shared by other reviewers, is not about the novelty of this work per se. Rather, the omission of closely related works such as SNIFFER and MMFakeBench hinders a comprehensive understanding of the current landscape in multimodal automated fact-checking. Discussing these works is important for properly contextualizing the contributions of this paper. Finally, I recommend aligning the evaluation metrics across datasets (i.e., reporting both Accuracy and F1 scores consistently). Otherwise, please include a clear explanation in the experimental section for only reporting Accuracy or F1 for certain datasets. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their constructive feedback and appreciate the opportunity to clarify the evaluation metrics and our handling of related work. ### On Related Work We acknowledge the omission of SNIFFER and have addressed it in our response, along with detailed distinctions between DEFAME, SNIFFER, and MMFakeBench (MMD-Agent). These clarifications will be incorporated into the final version. If the reviewer has additional suggestions, we are happy to include them as well. Most importantly, **none of these works renders DEFAME obsolete**. To the best of our knowledge, DEFAME remains the only system that jointly supports multimodal claims and multimodal evidence, dynamic tool planning, and full explanatory output in a zero-shot setting. ### On Evaluation Metrics (Accuracy vs. F1) MOCHEG is the only benchmark where we report micro-F1 score, following the convention established in its original paper. All the other benchmarks use accuracy. 
Note that **micro-F1 and accuracy are mathematically equivalent** in this setting—multiclass classification with exactly one correct label per instance. The reason for this confusion/redundancy stems from the MOCHEG paper. Close examination of the MOCHEG codebase (and follow-up works) reveals that the metric uses the standard [scikit-learn](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html) implementation of micro-F1 score, which [indeed is](https://scikit-learn.org/stable/modules/model_evaluation.html#multiclass-and-multilabel-classification) accuracy in the multi-class-single-label setting. **We will add this explanation to the final version for clarity.** We provide a complementary explanation below to clarify the equivalence of micro F1 and accuracy in the multi-class-single-label setting: This is due to two key properties: 1. **No True Negatives (TN):** In multiclass prediction, each instance has a single gold label. Therefore, every correct prediction is a True Positive for a specific class. Hence, there are no True Negatives; each prediction is either a True Positive or an incorrect prediction. 2. **One-to-one error symmetry:** Every incorrect prediction contributes **one False Positive and one False Negative**, implying FP = FN and total false predictions are double-counted in standard F1 terms. Micro F1 is defined as $$ \text{F1}_{\text{micro}} := \frac{2 \cdot \text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}} = \frac{2 \cdot \text{TP}}{2 \cdot \text{TP} + \text{FP} + \text{FN}}. $$ Now observe that since the TPs capture all correct predictions and since each incorrect prediction contributes one FP and one FN, we can write: $$ \text{TP} + \frac{1}{2}(\text{FP} + \text{FN}) = \text{All predictions}. $$ Therefore, $$ \text{F1}_{\text{micro}} = \frac{\text{TP}}{\text{TP} + \frac{1}{2}(\text{FP} + \text{FN})} = \frac{\text{Correct predictions}}{\text{All predictions}} = \text{Accuracy}. 
$$ This identity holds in any multiclass single-label setting, and we will make this equivalence explicit in the paper to avoid confusion.
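As a sanity check of this equivalence, the following minimal pure-Python computation (our illustrative example; the fact-checking labels below are hypothetical, not drawn from MOCHEG) confirms that micro-F1 and accuracy coincide in the multiclass single-label setting:

```python
def micro_f1(y_true, y_pred):
    """Micro-averaged F1 for a multiclass, single-label task."""
    labels = set(y_true) | set(y_pred)
    tp = fp = fn = 0
    for c in labels:
        tp += sum(t == c and p == c for t, p in zip(y_true, y_pred))
        # Each wrong prediction counts as one FP (for the predicted class) ...
        fp += sum(t != c and p == c for t, p in zip(y_true, y_pred))
        # ... and one FN (for the gold class), so FP == FN overall.
        fn += sum(t == c and p != c for t, p in zip(y_true, y_pred))
    return 2 * tp / (2 * tp + fp + fn)

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

gold = ["supported", "refuted", "NEI", "supported"]
pred = ["supported", "NEI",     "NEI", "refuted"]
assert micro_f1(gold, pred) == accuracy(gold, pred) == 0.5
```

The assertion holds for any single-label multiclass pair of label sequences, since every error contributes exactly one FP and one FN to the micro-averaged counts.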
Summary: This paper tackles the challenge of scalable and explainable fact-checking in the presence of disinformation, particularly in multimodal contexts. The authors propose DEFAME, a modular, zero-shot multimodal large language model (MLLM) pipeline for open-domain claim verification. Unlike prior methods that are either text-only or overly reliant on parametric knowledge, DEFAME operates through a six-stage dynamic tool selection process, incorporating both textual and visual evidence to generate structured, explainable verification reports. Extensive evaluation on VERITE, AVERITEC, and MOCHEG demonstrates DEFAME’s superiority over existing fact-checking models, setting a new state-of-the-art for uni- and multimodal fact-checking. Furthermore, the authors introduce CLAIMREVIEW2024+, a new benchmark that ensures post-GPT-4O knowledge cutoff validity, highlighting DEFAME’s temporal generalizability and real-time fact-checking capabilities, significantly outperforming the GPT-4O Chain-of-Thought baseline. Claims And Evidence: Claim: The paper claims that DEFAME is the first multimodal Automated Fact-Checking (AFC) system capable of handling multimodal claims while also retrieving and processing multimodal evidence. Question: However, based on my literature review using Google Scholar, there are existing studies [r1], [r2], [r3] that employ Large Vision-Language Models (LVLMs) with knowledge retrieval for multimodal misinformation detection. [r1] Qi P, et al. SNIFFER: Multimodal Large Language Model for Explainable Out-of-Context Misinformation Detection. CVPR, 2024. [r2] Liu X, et al. Mmfakebench: A mixed-source multimodal misinformation detection benchmark for lvlms. ICLR 2025. [r3] Xuan K, et al. LEMMA: Towards LVLM-Enhanced Multimodal Misinformation Detection with External Knowledge Augmentation. 2024. 
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-aligned with the problem of multimodal fact-checking and effectively demonstrate the capabilities of DEFAME in real-world misinformation detection. 1. Diverse Benchmark Selection. The authors evaluate DEFAME on text-only (AVERITEC), image-text (VERITE), and multimodal (MOCHEG) datasets, covering varied fact-checking challenges. 2. The introduction of CLAIMREVIEW2024+ is commendable, as it tests DEFAME’s ability to verify claims beyond the GPT-4O knowledge cutoff, addressing potential data leakage issues. Theoretical Claims: The paper primarily focuses on the practical application of multimodal fact-checking rather than developing new theoretical foundations or formal proofs. As such, no explicit theoretical claims or mathematical proofs are presented that require verification. Therefore, this raises a potential concern regarding the lack of formal theoretical justification for some of the proposed design choices in DEFAME. Experimental Designs Or Analyses: Strengths of Experimental Design: 1. Use of Diverse and Established Benchmarks. The evaluation leverages three well-known datasets (VERITE, AVERITEC, MOCHEG) to assess DEFAME's text, image-text, and multimodal fact-checking capabilities. 2. Ablation Studies for Key Components. The six-stage verification process is systematically analyzed through component ablations (removal of Web Search, Image Search, Geolocation, Reverse Image Search). Weaknesses of Experimental Design: 1. CLAIMREVIEW2024+ is an important contribution, but the dataset construction methodology lacks clarity. Unclear aspects: What criteria were used for claim selection? How were fact-checking labels assigned and validated? Does it truly reflect real-world misinformation trends? A more detailed dataset creation process would improve trust and reproducibility. Supplementary Material: This paper does not contain any supplementary material.
Relation To Broader Scientific Literature: The paper provides a clear review of the three key AFC components: Claim detection & extraction Evidence retrieval Verdict prediction It references leading AFC models, such as: Text-only AFC systems (e.g., FEVER, AVERITEC, FACTCHECK-BENCH) Multimodal AFC systems (e.g., VERITE, MOCHEG, NEWSCLIPPINGS). DEFAME extends prior work: Unlike text-only AFC systems, DEFAME incorporates multimodal evidence retrieval and reasoning. Unlike previous multimodal AFC approaches, DEFAME performs dynamic evidence retrieval rather than relying solely on pre-annotated evidence. Essential References Not Discussed: Two key related works—Sniffer (r1 SNIFFER: Multimodal Large Language Model for Explainable Out-of-Context Misinformation Detection. CVPR, 2024.) and MMFakeBench (r2 Mmfakebench: A mixed-source multimodal misinformation detection benchmark for lvlms. ICLR 2025.)—are missing from the current discussion, and they are essential for understanding the context of multimodal fact-checking and misinformation detection. Other Strengths And Weaknesses: The paper has notable strengths in terms of application significance, empirical rigor, and modular system design, but also has some weaknesses related to originality, clarity of contributions, and unexplored limitations. Strengths: 1. Strong Empirical Validation. The evaluation is conducted on three well-established AFC benchmarks (VERITE, AVERITEC, MOCHEG) and a new dataset (CLAIMREVIEW2024+), providing comprehensive empirical insights. 2. Modular and Scalable System Design. DEFAME is modular, allowing flexibility in integrating different retrieval tools (Web Search, Image Search, Reverse Image Search, Geolocation). Weaknesses for Improvement 1. Originality Could Be Better Clarified. While DEFAME combines existing techniques innovatively, it does not introduce a fundamentally new theoretical model. 
The paper does not sufficiently differentiate DEFAME’s retrieval mechanism from prior RAG-based AFC models.
2. If possible, provide a more formal description of the six-stage verification pipeline to enhance clarity.
3. The mathematical formulation of DEFAME’s retrieval pipeline and reasoning steps could be clearer.
4. The dataset construction details for CLAIMREVIEW2024+ are not fully transparent.

Other Comments Or Suggestions: Line 48 ("Most multimodal claim verification systems cannot even retrieve the evidence needed to verify a claim (Fu et al., 2024; Vo-Hoang et al., 2024; Tang et al., 2024),.") contains an extraneous comma that was mistakenly added.

Questions For Authors: Overall, this paper presents a well-structured approach to multimodal fact-checking with strong empirical validation. However, to align with ICML’s emphasis on theoretical rigor, I would like to ask the following critical question regarding the methodological formalization in Section 3:

Can the authors introduce a more formal theoretical and mathematical representation of the method in Section 3? The current presentation of DEFAME’s six-stage verification process is primarily descriptive and algorithmic, but lacks a formal mathematical framework. Incorporating a structured formalization would significantly enhance the paper’s impact.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1:

Rebuttal:

### Claim on Novelty / Related Work

Thanks for pointing out SNIFFER, MMD-Agent (from the MMFakeBench paper), and LEMMA. Please refer to our response to Reviewer K7Ta.

### No Theoretical Claims

We believe that the absence of theoretical claims is not unusual for papers under “Application-Driven Machine Learning”, the ICML topic that we have submitted to. At the same time, we offer a more formal mathematical representation of the framework below.

### No Supplementary Material, Details for CR+

Please refer to Appendix Sections A–L (pp. 16 ff.), which contain extensive additional material. Some of your questions are addressed in Appendix J. We also point you to our response to Reviewer K7Ta on “ClaimReview2024+ Details”, where these details are further elaborated.

### Originality and Theoretical Contribution

DEFAME’s originality lies in its dynamic, modular integration of retrieval tools for handling both multimodal claims **and** multimodal evidence—capabilities not jointly supported by prior work. While it builds on established components like CoT prompting and tool use, DEFAME combines them into a unified framework that supports zero-shot fact-checking across a diverse set of benchmarks: AVeriTeC (text claims & text evidence), MOCHEG (text claims and multimodal evidence), VERITE (multimodal claims and potentially multimodal evidence), and ClaimReview2024+ (both uni- and multimodal claims with potentially multimodal evidence). To our knowledge, it is the only system capable of operating across all these scenarios, while previous methods are specialized and typically incompatible with at least one modality setting (e.g., requiring only text, only images, or static corpora). We will make this contribution more explicit.

### Formal Representation of DEFAME

We appreciate the suggestion to formalize the DEFAME pipeline. We agree that a formal view helps clarify the role of each stage. Let $\mathcal{T}$ and $\mathcal{I}$ be the spaces of text and images.
Define $\mathcal{M} := (\mathcal{T} \cup \mathcal{I})^*$ as the space of multimodal sequences, and $\mathcal{Y}$ the space of verdict labels. DEFAME is a function

$$ \mathcal{F} : \mathcal{M} \rightarrow \mathcal{M} \times \mathcal{Y}, \quad \mathcal{F}(c) = (R_\text{out}, y_\text{out}), $$

where, given a claim $c \in \mathcal{M}$, the output consists of a report $R_\text{out}$ containing the fact-check and a predicted verdict $y_\text{out}$. DEFAME proceeds iteratively for up to $N$ steps as follows:

- $(R^{(i+1)}, y^{(i+1)}) := \mathcal{F}_\text{iter}(R^{(i)})$
- $i^* := \min \{ i \leq N \mid y^{(i)} \ne \text{NEI} \text{ or } i = N \}$

Final outputs:

- $R_\text{out} := \mathcal{S}_6(R^{(i^*)})$ (justification)
- $y_\text{out} := y^{(i^*)}$

The justification stage $\mathcal{S}_6$ appends a rationale to the final report. Each iteration

$$\mathcal{F}_\text{iter} = \mathcal{S}_5 \circ \mathcal{S}_4 \circ \mathcal{S}_3 \circ \mathcal{S}_2 \circ \mathcal{S}_1$$

consists of:

1. Planning ($\mathcal{S}_1$): Select actions $A \subseteq \mathcal{A}$ based on $R^{(i)}$.
2. Execution ($\mathcal{S}_2$): Retrieve evidence $E := \{ \tau(a) \mid a \in A \}$, where $\tau$ is a tool executing the corresponding action $a$.
3. Summarization ($\mathcal{S}_3$): $R_1^{(i)} := \sigma(E, R^{(i)})$, where $\sigma$ summarizes the evidence $E$ conditioned on the current report $R^{(i)}$ and appends it to the report.
4. Develop ($\mathcal{S}_4$): $R_2^{(i)} := \mathcal{S}_4(R_1^{(i)})$, where $\mathcal{S}_4$ is a generative model that performs structured reasoning and expands the report with the generated NLI sequence.
5. Verdict Prediction ($\mathcal{S}_5$): $(R_3^{(i)}, y^{(i)}) := \mathcal{S}_5(R_2^{(i)})$, where $\mathcal{S}_5$ is a classifier over multimodal sequences, returning a verdict $y^{(i)} \in \mathcal{Y}$ and an updated report $R_3^{(i)} \in \mathcal{M}$, which is the input report $R_2^{(i)}$ expanded by a summary of the key takeaways from the report alongside the verdict.
We welcome feedback on whether including this formalization (or an extended/abbreviated version) in the main paper would be helpful.
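To make the control flow of the formalization above concrete, it could be rendered as the following illustrative Python sketch. All stage names, signatures, and the `stages` dictionary are placeholders invented for illustration, not DEFAME's actual API; the sketch only mirrors the iterate-until-not-NEI structure described in the rebuttal.

```python
def defame(claim, stages, n_max=3):
    """Sketch of DEFAME's iterative control flow: run S1..S5 up to n_max
    times, stop early once the verdict is no longer NEI, then apply the
    justification stage S6 to the final report."""
    report, verdict = claim, "NEI"
    for _ in range(n_max):
        actions = stages["plan"](report)                    # S1: planning
        evidence = [stages["execute"](a) for a in actions]  # S2: tool calls
        report = stages["summarize"](evidence, report)      # S3: summarization
        report = stages["develop"](report)                  # S4: reasoning / NLI
        report, verdict = stages["judge"](report)           # S5: verdict prediction
        if verdict != "NEI":                                # early-exit criterion i*
            break
    report = stages["justify"](report)                      # S6: justification
    return report, verdict
```

Plugging in dummy callables for the six stages exercises the loop without any models or tools.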
Summary: The paper presents a novel approach to automated fact-checking designed to address the growing problem of disinformation. The authors introduce DEFAME, a modular system that uses a six-stage pipeline to dynamically select and use various tools for retrieving and evaluating both textual and visual evidence. This system is capable of handling multimodal claims (text and images) and generating detailed, explainable reports of its fact-checking process. Unlike previous approaches that were often limited to text-only analysis or lacked transparency, DEFAME integrates multiple evidence sources, including web searches, reverse image searches, and geolocation tools, to verify claims. The system was evaluated on several established benchmarks (VERITE, AVERITEC, MOCHEG) and demonstrated superior performance compared to existing methods, establishing new state-of-the-art results. Additionally, the authors created a new benchmark, CLAIMREVIEW2024+, featuring claims that occurred after the knowledge cutoff of GPT-4O to ensure more realistic evaluation scenarios. The results show that DEFAME significantly outperforms GPT-4O baselines on this new dataset, demonstrating its potential for real-time fact-checking in dynamic information environments.

Claims And Evidence: There are several concerns regarding the experimental setup and the comparisons made.

1. The results reported on the AveriTec dataset were not obtained using the default settings; instead, the paper solely relied on accuracy as the evaluation metric.
2. The comparison with GPT-4o seems unfair. A fair comparison would involve contrasting GPT-4o in a multi-turn setup with DEFAME in a multi-turn setup, as well as comparing GPT-4o in its current single-turn setup with DEFAME in a single-turn setup.
3. Lastly, the paper failed to compare its approach with existing efforts that combine Large Vision-Language Models (LVLMs) and RAG (see section "Essential References Not Discussed").
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-suited for the problem of automated fact-checking, particularly in handling multimodal claims and evidence.

Theoretical Claims: This paper did not provide any proofs for theoretical claims.

Experimental Designs Or Analyses: See the section "Claims And Evidence" above.

Supplementary Material: I have reviewed all the supplementary material.

Relation To Broader Scientific Literature: In summary, DEFAME's contributions build upon and advance the state of the art in automated fact-checking.

Essential References Not Discussed: It appears that this paper has overlooked several important related works. The paper claims that DEFAME is the first multimodal Automated Fact-Checking system capable of handling multimodal claims while also retrieving and processing multimodal evidence. However, there are many existing works that employ Large Vision-Language Models (LVLMs) with knowledge retrieval for multimodal misinformation detection.

[1] SNIFFER: Multimodal Large Language Model for Explainable Out-of-Context Misinformation Detection. CVPR 2024.
[2] MMFakeBench: A Mixed-Source Multimodal Misinformation Detection Benchmark for LVLMs. ICLR 2025 (but arXiv on 13 Jun 2024).
[3] LEMMA: Towards LVLM-Enhanced Multimodal Misinformation Detection with External Knowledge Augmentation. arXiv 2024.
[4] Multimodal Misinformation Detection using Large Vision-Language Models. CIKM 2024.

Other Strengths And Weaknesses:

1. While DEFAME represents an advancement in multimodal fact-checking, its contributions primarily lie in the workflow and prompt engineering. However, I did not see significant differences between the proposed framework and existing works that combine LVLMs and RAG. The only additions are reverse image search and geolocation, but these two actions only provide limited improvements.
2.
The paper claims that DEFAME is the first multimodal Automated Fact-Checking system capable of handling multimodal claims while also retrieving and processing multimodal evidence. However, there are many existing works that employ LVLMs with knowledge retrieval for multimodal misinformation detection.
3. Additionally, many details of the proposed dataset ClaimReview 2024+ are missing. The evaluation of the proposed method is also not comprehensive, as discussed in the section "Claims and Evidence" above.

Other Comments Or Suggestions: See above.

Questions For Authors: No.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We value the time invested in the review and thank you for the feedback and suggestions! Please find our response to your concerns below.

### Comparison to Prior Work

Thanks for pointing out **SNIFFER**, a related work that was unintentionally omitted. Critically, unlike DEFAME, SNIFFER is **incapable of retrieving multimodal evidence** (“we input both the news caption and the text from webpages retrieved [...] into the LLM”, p. 5). It performs no planning. SNIFFER has only one tool (entity extraction) that is always executed. It cannot dynamically decide to retrieve additional evidence. SNIFFER is limited to news content and does not generalize beyond OOC detection. It is inapplicable to text-only inputs. Additionally, it requires finetuning, which DEFAME does not.

**MMD-Agent** was missing from our initial submission as it was not peer-reviewed back then, but we are happy to integrate it now. MMD-Agent **does not incorporate visual evidence**. Specifically, Figure 4 and the [published code](https://github.com/liuxuannan/MMFakeBench/tree/main/eval/prompt_template/MMD_Agent) indicate that the gathered evidence is text-only. The paper only mentions Wikipedia as the source of external evidence (pp. 7 and 10). Additionally, their pipeline is static and does not allow for follow-up evidence retrieval. It does not include systematic planning. MMD-Agent is restricted to news. Finally, the outputs lack justifications and, therefore, the overall explainability of MMD-Agent is limited.

You mentioned the work by Tahmasebi et al. (CIKM 2024). The method introduced there is referred to as **LVLM4FV**, which our paper covers in detail, most notably in the prior work overview (Table 1) and as a baseline in Table 3. Critically, LVLM4FV **is not designed to handle multimodal claims**: “Given a **textual** claim 𝑄 and a corpus of evidences C as input [...]”, p. 3.

Finally, we did not include **LEMMA** for the following reasons. First, it is not peer-reviewed.
Second, LEMMA **only retrieves textual evidence**: the "Vision Evidence" in Figure 4 of the LEMMA paper refers to "a list of web page's title" (p. 6). These web pages were retrieved via reverse image search using the input image, which is probably why they call it “vision evidence.” Third, it remains unclear if LEMMA applies to text-only inputs. Finally, it cannot retrieve further evidence after the first pass, does not involve any efficient tool use planning, and does not provide any comprehensible justification generation, limiting its explainability.

Considering all four references, our claim remains valid that no prior published MAFC method can handle both multimodal claims **and** multimodal evidence.

### ClaimReview2024+ Details

Complementary to Appendix J, we are happy to add more details on CR+ in the following: Claims were collected via the Google Fact-Check API by issuing queries across ten broad topics (climate change, politics, health, …). We deduplicated results based on the review URL to avoid overlap. For each claim, we also collected the date and author (claimant) to preserve context.

Label assignment was handled in two stages: trivial labels were automatically mapped using an LLM-based script (code included in the release), while non-trivial cases were manually annotated by a PhD-level MAFC researcher. A full validation pass was conducted by a student who compared extracted content (text, label, date, claimant, image) with the original fact-check articles to ensure accuracy.

Images were manually curated since the Google API only returns teaser images, which often contain overlays or composites. Manual curation ensured that claim images are as close as possible to the original ones referenced in the fact-check. Because the Google Fact-Check API aggregates claims from leading organizations such as Snopes, PolitiFact, and AFP—whose focus is on timely and harmful misinformation—we believe CR+ reflects real-world misinformation trends.
We will clarify these details in the main paper for greater transparency. Thank you for the feedback.

### AVeriTeC Metric

We used accuracy as the main metric for AVeriTeC to align with recent follow-up work, where accuracy has become a common evaluation metric for veracity prediction [Singhal et al., 2024; Cao et al., 2023]. The original AVeriTeC paper conditions veracity scoring on alignment between retrieved and gold evidence, which can confound the evaluation of general-purpose systems. Following subsequent work, we omit this step to enable broader comparability and support systems that retrieve evidence freely.

### Evaluation “Not Comprehensive”

You point out that “evaluation of the proposed method is also not comprehensive” and refer to the section "Claims and Evidence." There, however, you write that “the claims made in the submission are supported by clear and convincing evidence.” Please clarify.

---

Rebuttal Comment 1.1: Comment: The section "Claims and Evidence" was updated and listed a few missing experiments. I have read the authors' responses to other reviewers that share similar concerns; some of these concerns have been addressed. Therefore, I will increase my rating to weak acceptance for this paper.

---

Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for considering our paper for acceptance. We are happy to address the remaining/missed concerns in the following:

### AVeriTeC Metrics

As noted in our previous response, we used accuracy as the main metric for AVeriTeC to enable comparison with recent work, where it has become a commonly reported metric for veracity evaluation. The original AVeriTeC paper conditions veracity scoring on the alignment between retrieved and gold evidence, framed as QA-style inputs. In contrast, DEFAME’s standard output is a structured fact-checking report intended to support explainability and generality across benchmarks.
We implemented the required method adaptation to support the official AVeriTeC scoring protocol and included the results in Figure 5 in our paper.

### Experimental Fairness

Thanks for suggesting additional experiments to increase comparison fairness. We followed your suggestion and executed GPT-4o with CoT prompting in a multi-turn variant, leveraging the same reiteration criterion as in DEFAME. The results are shown in the following table.

| Method | Turns | MOCHEG (F1) | VERITE (Acc.) | CR+ (Acc.) |
| ---------- | ------ | ----------- | ------------- | ---------- |
| DEFAME | Multi | **59.5** | **84.5** | **68.7** |
| GPT-4o CoT | Multi | 57.0 | 82.3 | 40.7 |
| DEFAME | Single | 47.7 | 82.8 | 63.3 |
| GPT-4o CoT | Single | 49.6 | 79.7 | 36.6 |

Even in a multi-turn setup, GPT-4o scores lower than DEFAME. However, the additional turns help GPT-4o improve over the single-turn baseline. The multi-turn setup incentivizes GPT-4o to leverage its parametric knowledge. This is expected because most data of MOCHEG and VERITE is leaked (recall that almost all their claims are from before GPT’s knowledge cutoff). To simulate a more realistic scenario, we also evaluate on CR+, which contains mostly claims from after the knowledge cutoff. Indeed, on these “unseen” claims, GPT-4o with multi-turn lags behind strongly, with a **gap of 28.0 percentage points** in accuracy.

### Related Work Using LVLMs and RAG

We thank the reviewer for pointing at the four references—SNIFFER, MMD-Agent (MMFakeBench paper), LVLM4FV, and LEMMA—which we extensively addressed in our previous response. To complete the picture of works that combine LVLMs with RAG, to the best of our knowledge, there remains only one more method: RAGAR. Our paper already covers RAGAR in Table 1. Critically, RAGAR **cannot retrieve multimodal evidence**. (The RAGAR paper misleadingly refers to it as “multimodal evidence,” perhaps because it was retrieved with reverse image search.
In fact, the evidence consists only of text or image captions.) Moreover, RAGAR reduces multimodal claims to fully verbalized descriptions. That is, all follow-up reasoning is text-only and may miss important details in the image. In contrast, DEFAME is aware of the full claim image throughout the whole pipeline, allowing it to compare it to evidence images, reason with it, etc.

We are happy to add the missing references to the camera-ready version, complemented by the clear distinctions as pointed out in our responses.
Summary: This paper introduces DEFAME, a multimodal pipeline for open-domain text-image claim verification. DEFAME operates as a six-stage process that handles both multimodal claims and evidence while generating structured reports. It dynamically selects appropriate tools to extract and evaluate both textual and visual evidence, using web search, image search, reverse image search, and geolocation tools. The authors evaluated DEFAME on three available benchmarks (VERITE, AVERITEC, and MOCHEG) where it surpassed previous state-of-the-art methods. They also introduced CR+, a benchmark containing claims after GPT-4o's knowledge cutoff, where DEFAME significantly outperformed GPT-4o and GPT-4o CoT. Ablation studies and human evaluations confirmed DEFAME's components each contribute to its performance and that it provides better justifications than baseline LLMs. The authors address an important issue in the proliferation of misinformation and identify critical limitations of current MLLMs, particularly their reliance on static parametric knowledge and inability to access up-to-date evidence.

Claims And Evidence: Please see other sections for my comments.

Methods And Evaluation Criteria:

1. The paper needs a clearer explanation of the "summarize," "develop," and "justify" stages. While prompts for each stage are included in the appendix, their specific goals remain unclear. The ablation study should also evaluate performance without these steps (currently it only includes w/o Develop). Consider adding ablations for w/o Summarize and w/o Justify as well.
2. The comparison with GPT-4o requires more methodological consistency. Currently, GPT-4o appears to be used in a single-turn manner. More meaningful comparisons would be between GPT-4o multi-turn (prompting again when the result is NEI) and DEFAME multi-turn, and between GPT-4o single-turn (current setup) and DEFAME single-turn.
3. The human evaluation section needs more detail.
Information should be provided about the evaluators' backgrounds and relevant experience in fact-checking. The paper should clarify what content was shown to evaluators (claims, actions, evidence, elaboration, final judgment, justification?) and which specific components were evaluated. The claim that DEFAME "provides better justification compared to base MLLM" suggests component-specific evaluation, but it would be more reasonable to evaluate only the evaluation, judgment, and justification parts of the report.
4. Table 3 should also include the performance of other models on CR+.

Theoretical Claims: No proofs in this paper.

Experimental Designs Or Analyses:

1. The ablation study should be extended to include the AVeriTec and CR+ datasets.
2. The paper misses an opportunity to discuss in depth how each component of the agent system complements a baseline MLLM. While section 4.3 provides examples of how DEFAME outperforms GPT-4o, it lacks systematic analysis beyond these examples. Several questions remain unanswered: Why does Web Search significantly help MOCHEG in the ablation table? Why does VERITE benefit substantially from reverse search? The ablation should be compared alongside baseline GPTs—since GPT-4o outperforms DEFAME without Web Search, comparisons between GPT-4o + web search and DEFAME would help determine whether the other components truly add value. Without these detailed analyses, the paper lacks convincing evidence that all components should be included.
3. The confusion matrix for MOCHEG-DEFAME in Appendix Figure 7 reveals that DEFAME struggles with classifying NEIs. The authors should confirm whether this is due to the outdated information issue mentioned in Appendix G, or provide alternative explanations.
4. The effectiveness of the planning phase requires further examination. How would performance change if all actions were taken for every example instead of being selectively planned?
Supplementary Material: I reviewed the appendix.

Relation To Broader Scientific Literature: While other works largely focused on isolated aspects of fact-checking—such as text-only verification, evidence retrieval, or uni-modal approaches—DEFAME integrates these perspectives into a comprehensive end-to-end solution. The authors position their work within the growing body of research on multimodal reasoning, retrieval-augmented generation, and explainable AI systems.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:

Other Strengths:
1. The paper carefully compares the time and computational cost of DEFAME and different ablations.
2. The appendix includes very useful information that did not make it into the main paper.

Other Comments Or Suggestions: I think this paper has great potential; with improved discussion and a more detailed ablation (or reasonable explanation) I am willing to increase my score to accept.

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We thank the reviewer for highlighting that the “paper has great potential” and are grateful for the note on the usefulness of the information provided in the appendix. We appreciate the time invested into the review and are happy to address the concerns in the following.

### Summarize, Develop, and Justify Stages

- The summarize stage serves several roles. Efficiency and comprehensiveness: without it, retrieved web evidence (entire web articles, PDFs, etc.) would go directly into the report, resulting in an excessively long document containing unfiltered raw information, incl. ads and other irrelevant content. Report documents typically have a length of a few thousand tokens, but raw evidence can go well beyond hundreds of thousands. This would increase the required computational cost by orders of magnitude and decrease the human readability of the report. Performance: upon your request, we compared standard DEFAME against simple truncation at 1000 characters and observed a drop from 68.7% to 63.2% accuracy on CR+, confirming its practical necessity.
- The develop stage mainly aims to determine how well the claim is supported by the gathered evidence. This is done by contrasting the claim with the evidence and performing Natural Language Inference (NLI) through Chain-of-Thought (CoT). We expect the NLI to deduce facts helpful for the fact-check. Moreover, we expect that gaps of missing evidence become evident during that process. This stage prepares the judge stage where the actual classification happens.
- The justify stage aims to produce a comprehensive summary of the report to serve the user as a quick explanation of the decision. It is purely explanatory and does not affect label prediction—ablating it would not change performance metrics.

### GPT-4o Comparison (Single-turn vs. Multi-turn)

We conducted an additional experiment using multi-turn GPT-4o with CoT prompting, leveraging the same reiteration criterion as in DEFAME.
It achieved 40.7% accuracy on CR+, only slightly improving over the single-turn variant. The primary limitation remains GPT-4o’s lack of access to external evidence, leading to overuse of NEI.

### Human Evaluation

As detailed in Appendix H, evaluators saw complete reports (claim, evidence, verdict, justification); baseline outputs were reformatted for fairness (Appendix K). While not trained fact-checkers, all evaluators had higher education and familiarity with MLLMs.

### Other Models on CR+

Prior SOTA methods weren’t included on CR+ for multiple reasons. Please refer to our response to Reviewer 3bbT (*“‘Incomplete’ Table 3”*) for more details.

### Ablation Scope

Thank you for recommending additional ablations, which we have now extended to CR+; see the results in the table below.

| Variant | CR+ Acc. |
| - | - |
| DEFAME | **68.7** |
| w/o Geolocation | 65.7 |
| w/o Reverse Search | 64.0 |
| w/o Image Search | 63.7 |
| w/o Web Search | 59.7 |
| Single Turn | 63.3 |
| Static Actions | *68.0* |
| w/o Develop | 67.0 |
| w/o Summarize | 63.2 |
| Unimodal Develop | 65.7 |

The values confirm that all components contribute meaningfully to DEFAME’s performance; removing any of them would hurt the method. We intentionally did not extend the ablations to AVeriTeC due to its unimodal nature, which renders many multimodal ablations inapplicable.

### Component Contributions

We observe distinct tool contributions across datasets due to the fundamental differences in the tasks: VERITE targets Out-Of-Context (OOC) detection, which (as already shown by previous work) benefits highly from applying Reverse Image Search to the input image. MOCHEG benefits strongly from Web Search as, in contrast to VERITE, it is constructed from real-world claims only. We performed an extra ablation of DEFAME with GPT-4o + Web Search (GPT WS) on CR+, where GPT WS was allowed to perform a single round of evidence retrieval and to apply CoT.
GPT WS achieved an accuracy of 54.7%, which is clearly better than the native baselines but still markedly worse than DEFAME (accuracy 68.7%). Would you like to see additional ablations on MOCHEG and VERITE?

### NEI Confusions in MOCHEG

Yes, many NEI errors result from outdated or time-sensitive labels, as discussed in Appendix G. Others stem from DEFAME’s current limitation of not assessing source credibility. For example, in one case, DEFAME classified the claim "Former President Jimmy Carter said 'America has no functioning democracy at this moment.'" as Supported based on an article from truthout.org. However, Snopes labeled the same claim as Not Enough Information, arguing that further verification from more authoritative or corroborated sources was necessary. This illustrates how differences in evidence sufficiency thresholds—and subjective judgments about source credibility—can lead to apparent disagreement.

### Effect of Planning

Table 4 and the additional ablations on CR+ in the table above already include the removal of planning, referred to as “Static Actions” (all tools used every time).
Towards Efficient and Scalable Implementation of Differentially Private Deep Learning
Reject
Summary: The focus of the paper is on the computational efficiency of implementing Differentially Private Stochastic Gradient Descent (DP-SGD), which is commonly used for private ML model training. In particular, the paper focuses on:

1. _Poisson subsampling for generating batches_: while typical implementations have used shuffling-based batches, recent work has shown that this can have a worse privacy guarantee.
2. _JAX based implementation_: The paper reports that non-private SGD implemented in JAX can be faster than PyTorch; however, a naive implementation of DP-SGD in JAX suffers a much lower throughput due to recompilation of computation graphs in JAX.

The contributions of the paper include:

* It proposes a novel JAX-based implementation of DP-SGD called _Masked DP-SGD_, which correctly implements Poisson subsampling while avoiding the JAX recompilation issues that a naive implementation would suffer due to variable batch sizes. The idea is to use the standard approach of using small physical batches and gradient accumulation to simulate large logical batches, but to adapt this for Poisson subsampling, where the batch size can vary, some of the examples are masked out for each optimization step.
* The paper evaluates and benchmarks various strategies for reducing the computational cost of DP-SGD. This includes efficient gradient clipping techniques like Ghost Clipping, Mixed Ghost Clipping, and Book Keeping. It is reported that efficient clipping implementations can roughly halve the cost compared to a naive implementation in Opacus (currently supported in PyTorch). The JAX implementation of Masked DP-SGD achieves comparable or better performance than efficient PyTorch-based methods.
* It studies the impact of lower-precision training using TF32 (representation) on the throughput of DP-SGD, finding potential speedups for certain model sizes. The paper also notes concerns regarding the theoretical privacy guarantees in such settings.
* It also studies the scalability of DP-SGD in distributed training environments (with multiple GPUs), demonstrating that DP-SGD scales even better than non-private SGD when using a large number of GPUs (up to 80), likely due to its slower pace and less frequent network saturation.

### Post-rebuttal update

I continue to maintain my score and recommendation.

Claims And Evidence: All claims made in the paper are supported by evidence. It is great that the entire source code for experimentation is made available.

Methods And Evaluation Criteria: The proposed method and evaluation criteria are sound.

Theoretical Claims: There aren't any non-trivial theoretical claims in the paper.

Experimental Designs Or Analyses: The experimental setup looks sound to me.

Supplementary Material: I downloaded the source code and briefly skimmed through it, but did not verify it in a lot of detail. But I highly appreciate that the code is made available!

Relation To Broader Scientific Literature: The paper positions itself nicely within existing literature by providing an exhaustive empirical evaluation of different methods for implementing DP-SGD. This is helpful even beyond the "Masked DP-SGD" technique that it introduces.

Essential References Not Discussed: As far as I can tell, all relevant literature is adequately discussed and cited.

Other Strengths And Weaknesses:

### Strengths

* _Thorough Empirical Evaluation:_ The paper presents extensive benchmarking results across different model architectures (Vision Transformers and ResNets), frameworks (PyTorch and JAX), and hardware (NVIDIA V100 and A100 GPUs).
* _Novel JAX Implementation:_ The proposed Masked DP-SGD method offers a promising avenue for efficient and correct DP-SGD training, overcoming the recompilation issues of a naive JAX implementation.
* _Open Source Contribution:_ The source code for their implementation is made available in the supplementary material, which I highly appreciate!
Overall, this paper makes a significant contribution to the field by providing a detailed empirical analysis of the computational costs associated with correctly implemented DP-SGD and by proposing and evaluating effective optimization strategies, including a novel JAX-based approach. The findings are relevant to both researchers and practitioners working on deploying differentially private deep learning. I recommend acceptance.

Other Comments Or Suggestions:

### Minor comments:

* Figure 5: The y-axis is not clear. Is it the ratio of throughput TF32 / FP32 or FP32 / TF32? I think it is the former, but it would be better to be explicit about this.

Questions For Authors: One thing that was not clear to me in _Masked DP-SGD_ is how the batches are sampled at each step. The pseudocode simply says $B \gets \{x_{j_1}, \ldots, x_{j_m}\}$, but how to do this efficiently is not clear in cases where the dataset is too large to fit in memory. Perhaps _Masked DP-SGD_ can be applied in conjunction with the method of _Scalable DP-SGD_ proposed in [Chua et al. (2024b)](https://www.arxiv.org/abs/2411.04205), which is supposed to work when the dataset size is very large? I would appreciate some discussion of this.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your time and your insightful review.

> Figure 5:

Thanks for the comments! It is correct, the ratio of the throughput is TF32/FP32, and that should be clarified in the figure caption. It will be updated for the camera-ready version.

> One thing that was not clear to me in Masked DP-SGD is how the batches are sampled at each step.

The Masked DP-SGD batch sampling can be decomposed into two main steps. First, we sample the batch size $B^{(t)}$ from a Binomial distribution $\text{Bin}(N, q)$, where $N$ is the number of samples and $q$ is the subsampling rate. We round $B^{(t)}$ up to the next integer multiple of the physical batch size $p$: $B_+^{(t)} = p\lceil B^{(t)} / p\rceil$. Second, we permute the dataset and pick the first $B_+^{(t)}$ elements as the full logical batch, which we process as chunks of $p$ samples. Finally, when the gradients are aggregated, we throw away the gradients of the padded $B_+^{(t)} - B^{(t)}$ samples (by zeroing them).

> How to do this efficiently is not clear in cases where the dataset is too large to fit in memory. Perhaps Masked DP-SGD can be applied in conjunction with the method of Scalable DP-SGD proposed in Chua et al. (2024b), which is supposed to work when the dataset size is very large?

This is a very interesting future direction that we are happy to discuss! Indeed, the current sampling procedure of Masked DP-SGD might struggle with very large datasets, as permuting the indices becomes very expensive in that case. We could indeed combine the Chua et al. (2024b) approach for the subsampling with the masked approach by simply replacing the Truncate/Pad step in their approach with our masking step, and processing the minibatches again as chunks of $p$.

---

Rebuttal Comment 1.1: Comment: Thanks for the response. Actually, now I am even more confused about how the batches are sampled in Masked DP-SGD.
I understand the part about sampling the batch size $B^{(t)}$ from the Binomial distribution, but the part about permuting the dataset seems inefficient. Doesn't that require a random permutation at _each_ step of training? That seems quite inefficient to me... If the implementation is actually permuting the dataset once, then that would be an incorrect implementation of Poisson subsampling.

---

Reply to Comment 1.1.1: Comment: Thanks for your reply and your very detailed question regarding our implementation!

> If the implementation is actually permuting the dataset once, then that would be an incorrect implementation of Poisson subsampling.

The reviewer is absolutely correct that the permutation needs to happen at every step to be DP. We would like to clarify that the implementation permutes at every step of Poisson subsampling (see lines 339-354 of `jax_exp/jax_mask_efficient.py` in the supplement), and thus the implementation is doing correct Poisson subsampling and is DP.

Lines 339-354 of `jax_exp/jax_mask_efficient.py`:

```
for t in range(num_iter):
    sampling_rng = jax.random.PRNGKey(t + 1)
    batch_rng, binomial_rng, noise_rng = jax.random.split(sampling_rng, 3)

    #######
    # poisson subsample
    actual_batch_size = jax.device_put(
        jax.random.bernoulli(binomial_rng, shape=(full_data_size,), p=q).sum(),
        jax.devices("cpu")[0],
    )
    n_physical_batches = actual_batch_size // physical_bs + 1
    logical_batch_size = n_physical_batches * physical_bs
    n_masked_elements = logical_batch_size - actual_batch_size

    # take the logical batch
    indices = jax.random.permutation(batch_rng, full_data_size)[:logical_batch_size]
```

> I understand the part about sampling the batch size from the Binomial distribution, but the part about permuting the dataset seems inefficient. Doesn't that require a random permutation at each step of training?

Apologies, we believe that we may not have formulated our reply carefully and thus caused a misunderstanding.
We wrote that we “permute the data set, and pick the first $B_+^{(t)}$ elements as the full logical batch”, while in the implementation we work on the indices instead of the dataset itself (see the part of the supplement code above).

> That seems quite inefficient to me...

Thanks for pointing this out! The reviewer is correct that permuting the indices of the dataset at each iteration can be costly; especially with larger datasets it becomes significantly more expensive than the uniform sampling done in e.g. [Opacus](https://github.com/pytorch/opacus/blob/6c2cde9cc715f6c45983901461e06d9abad09fea/opacus/utils/uniform_sampler.py#L150-L158). Fortunately, we can easily adapt the Opacus-style sampling to our Masked DP-SGD implementation!

1. We first sample the (actual) logical batch using Poisson subsampling, in the same way as Opacus.
2. Next, we pad the batch with arbitrary elements to make its size an integer multiple of the physical batch size.
3. Finally, as we do in our original implementation, we mask away the padded samples.

As the masking removes the effect of the padded samples, we could for example repeat elements of the sampled batch for the padding (essentially wrapping around the indices), or we could pad with the first $B_+^{(t)} - B^{(t)}$ elements of the full dataset. As the padding is a constant-time operation, the complexity of this proposed sampling procedure would match that of Opacus.

We implemented this variant below and profiled it with different `full_data_size`, and found that the new implementation indeed outperforms the old version when `full_data_size` is sufficiently large. The table below shows the average number of seconds the sampling methods take as a function of `full_data_size` when executed on the CPU of the cluster we used for our experiments. We average over 10 repeats, discarding the initial compilation time for both. (We used `block_until_ready` to profile the function executions.)
| sampling_method | n=10 000 | n=100 000 | n=1 000 000 | n=10 000 000 |
|:----------------|---------:|----------:|------------:|-------------:|
| old             | 0.033    | 0.058     | 0.645       | 11.353       |
| new             | 0.150    | 0.157     | 0.229       | 0.623        |

The source code below is the new sampling method, inspired by the discussion that the reviewer initiated.

```
def sample_batch_new_version(seed, full_data_size):
    sampling_rng = jax.random.PRNGKey(seed)
    batch_rng, binomial_rng, noise_rng = jax.random.split(sampling_rng, 3)

    #######
    # poisson subsample
    poisson_subsampled_indices = jax.random.bernoulli(batch_rng, p=q, shape=(full_data_size,)).nonzero()[0]
    actual_batch_size = len(poisson_subsampled_indices)
    n_physical_batches = actual_batch_size // physical_bs + 1
    logical_batch_size = n_physical_batches * physical_bs
    n_masked_elements = logical_batch_size - actual_batch_size

    # take the logical batch
    pad = poisson_subsampled_indices[:n_masked_elements]
    indices = jnp.concatenate([poisson_subsampled_indices, pad])
    masks = jax.device_put(
        jnp.concatenate([jnp.ones(actual_batch_size), jnp.zeros(n_masked_elements)]),
        jax.devices("cpu")[0],
    )
    return indices, masks
```
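To make the masking argument in this thread concrete, here is a minimal NumPy sketch (not the authors' code; `masked_grad_sum` and the toy shapes are illustrative) showing that weighting padded per-example gradients by zero makes the aggregated sum identical to a plain sum over the actual Poisson-sampled batch:

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_grad_sum(per_example_grads, masks):
    """Sum per-example gradients, zeroing out the padded entries.

    `per_example_grads` has shape (logical_batch_size, d); `masks` is 1
    for real Poisson-sampled examples and 0 for padding, so padded rows
    contribute nothing to the aggregated gradient.
    """
    return (per_example_grads * masks[:, None]).sum(axis=0)

# toy setup: actual batch of 5 examples, padded to a logical batch of 8
actual_batch_size, logical_batch_size, d = 5, 8, 3
grads = rng.normal(size=(logical_batch_size, d))
masks = np.concatenate([np.ones(actual_batch_size),
                        np.zeros(logical_batch_size - actual_batch_size)])

# masked sum over the padded batch equals the plain sum over the real batch
assert np.allclose(masked_grad_sum(grads, masks),
                   grads[:actual_batch_size].sum(axis=0))
```

This is why the privacy analysis can treat the procedure as ordinary Poisson subsampling: the padded samples have no effect on the released gradient.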
Summary: The paper provides a comprehensive empirical study of Differentially Private Stochastic Gradient Descent (DP-SGD) implementations that properly incorporate Poisson subsampling, which is crucial for maintaining theoretical privacy guarantees. Recent research has demonstrated that many implementations ignore the Poisson subsampling requirement, potentially compromising privacy guarantees. The authors benchmark existing PyTorch and JAX implementations, as well as introducing their own JAX implementation with proper Poisson sampling. They also propose "Masked DP-SGD," a novel approach that avoids expensive recompilation in JAX, leading to substantial efficiency gains. The authors provide practical recommendations for more efficient DP-SGD deployments. Their findings include insights on gradient clipping optimizations, precision modes, and scaling behavior in distributed settings.

## Post-rebuttal update

After reading the rebuttals, I choose to maintain my score, albeit with a lower confidence (as I've indicated in the AC discussion). I'm highly confident in my assessment of the technical side of the paper - it is very strong and thorough. I only have medium-level confidence in my scope assessment. My score reflects my intuition based on my current understanding of ICML's expected scope and the paper's contribution, but I'm open to reconsidering if the consensus between the AC and other reviewers disagrees with my assessment.

Claims And Evidence: The paper provides a comprehensive set of benchmarks for both throughput and memory consumption across a wide range of widely adopted state-of-the-art DP-SGD implementations, covering both PyTorch and JAX frameworks. All findings are well-documented with the necessary implementation details, allowing for reproducibility and a clear understanding of the experimental setup. The benchmarks use consistent metrics and evaluation criteria across implementations.
I believe, however, that the scope of the paper is not broad enough to be considered for ICML. While it's a very solid technical work and important for practitioners, the paper lacks the scientific novelty that would be expected at ICML. The proposed method (Masked DP-SGD) represents an incremental technical improvement that is highly specific to the JAX framework. While valuable for JAX users, it doesn't present a fundamental advancement in the field of differentially private machine learning that generalizes beyond this specific implementation context.

Methods And Evaluation Criteria: The paper's methodology is solid and covers a good range of models and datasets in realistic evaluation scenarios. The authors examine both Vision Transformer (ViT) and ResNet architectures of varying sizes, providing a comprehensive view of how different implementations perform across model scales. Looking through the appendix, it's clear the authors took great care to extensively evaluate existing methods, including obscure implementation details like "grad_sample_mode" in Opacus. The authors also employed NVIDIA profiling tools for deeper insights into computational bottlenecks, allowing them to identify specific causes of performance differences between implementations. This profiling helps explain why certain optimizations are effective and provides valuable information for practitioners.

Theoretical Claims: N/A

Experimental Designs Or Analyses: See above

Supplementary Material: I have reviewed some implementation details in the appendix

Relation To Broader Scientific Literature: The paper fits well with recent attention on the important question of DP-SGD applications with proper Poisson sampling. It addresses concerns raised by works like Lebeda et al. (2024), Chua et al. (2024a/b), and Annamalai et al. (2024), which highlight that many implementations have weaker privacy guarantees than claimed due to improper sampling.
Compared to previous papers with DP-SGD benchmarks, this one focuses extensively on proper Poisson sampling, filling an important gap in the literature. While earlier works compared efficiency of different implementations, they often overlooked the sampling requirement that is crucial for theoretical privacy guarantees. I believe, however, that this paper lacks a substantial novel contribution to the field, as it mostly focuses on technical details of existing implementations. The proposed Masked DP-SGD method, while useful, represents an engineering solution to a framework-specific problem rather than advancing our understanding of differentially private learning more broadly. Essential References Not Discussed: None Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thanks for your time and your careful review. We respectfully disagree with your statement that “While it's a very solid technical work and important for practitioners, the paper lacks scientific novelty that would be expected at ICML.” because the call for papers of ICML mentions as topic of interest: “Machine Learning Systems (improved implementation and scalability, hardware, libraries, distributed methods, etc.)”. We believe that our work fits in the call for papers. ICML Call for Papers has a section for review criteria, which starts with: “Submissions should report original and rigorous research of significant interest to the machine learning community.” We believe that according to your review, our paper satisfies this completely, when taking into account that the *machine learning community* includes both the researchers and practitioners.
Summary: This paper investigates the computational efficiency of DP-SGD, with a focus on analyzing the computational cost of using Poisson subsampling for DP training and comparing a series of DP-SGD schemes. To reduce computational costs, the authors propose the Masked DP-SGD algorithm, which addresses the frequent recompilation problem caused by Poisson sampling. The study also explores the application of low-precision computation (TF32) in DP-SGD and finds that it can improve computational throughput, but its impact on the privacy guarantee still needs further research.

Claims And Evidence: Yes. Most of the claims in the paper are well supported by the experimental results.

Methods And Evaluation Criteria:
Metrics: Yes. Throughput and maximum achievable physical batch size are used as metrics to evaluate computational overhead.
Datasets: No. Only the CIFAR100 image dataset is used.
Models: Probably okay. ViT and ResNet families are adopted. Perhaps language models could be considered as well.

Theoretical Claims: I have checked Alg. A1 (Virtual Batching DP-SGD JAX); unfortunately, the paper seems to be missing an important theorem on the privacy guarantee that Alg. A1 satisfies.

Experimental Designs Or Analyses: I've checked the experiments on the computational overhead of DP-SGD, covering core results such as throughput and batch size of different DP-SGD methods. No significant issues were spotted.

Supplementary Material: I reviewed the implementation and gradient clipping of the JAX algorithm for Poisson subsampling in the supplementary material. No significant issues were found.

Relation To Broader Scientific Literature: The work re-implemented several DP-SGD methods, such as Opacus, Ghost Clipping, Mixed Ghost Clipping, Book Keeping ghost, etc., with Poisson subsampling and compared their computational overheads. The work proposed a new implementation which has superior throughput to the aforementioned methods.
Essential References Not Discussed: The reviewer thinks the references are appropriate.

Other Strengths And Weaknesses:
Strengths: the paper views DP-SGD from a computational efficiency perspective, which is meaningful and interesting. It corrects the Poisson subsampling implementation issues in previous works and proposes its own method with higher efficiency.
Weaknesses: the privacy guarantee of the proposed algorithm is missing. The finding that lower precision (TF32) increases throughput is interesting, but this may not ensure good performance on other tasks. The experimental results are limited to a single image dataset. Since privacy exhibits a tradeoff with accuracy, it may be beneficial to discuss accuracy in the experiments.

Other Comments Or Suggestions: On the bottom-right of page 2: 'using the the per-example ...'. The authors could further enhance their writing. For example, the main algorithm should be put in the main text, not the appendix.

Questions For Authors:
1. What is the privacy guarantee of your proposed algorithm? How can you prove it?
2. What is the computational overhead of these DP-SGD methods on datasets beyond CIFAR100?
3. Why is accuracy missing from all experimental results? How does the sampling affect accuracy results?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your time and your thorough review.

> What is the privacy guarantee of your proposed algorithm? How can you prove it?

The DP guarantees of **Masked DP-SGD** follow from the standard analysis of Poisson subsampled DP-SGD in the add/remove adjacency. Note that the only difference from standard Poisson subsampling is that, in cases where the sampled batch size $B$ is not an integer multiple of the physical batch size $p$, we need to pad the batch with additional samples. We compute the clipped per-example gradients for these $B_+ = p \lceil B / p \rceil$ samples. However, when aggregating the clipped per-example gradients in a sum, we weigh the padded $B_+ - B$ samples with $0$. Hence, the sampling procedure we use is equivalent to Poisson subsampling, allowing us to analyze the privacy guarantees as the Poisson subsampled Gaussian mechanism.

> What is the computational overhead of these DP-SGD methods on datasets beyond CIFAR100?

This is an interesting question. In this work we focused on cases where the complete dataset can still be handled by one machine, and in these cases the computational overhead is mostly influenced by the model (see Figures 1 and 2). Handling cases where distributed computing is needed might require additional methodology that is orthogonal to our advancements (see also the discussion with reviewer rRoP).

> The finding that lower precision (TF32) increases throughput is interesting, but this may not ensure good performance on other tasks. [...] Since privacy exhibits a tradeoff with accuracy, it may be beneficial to discuss accuracy in the experiments.

Thanks for the suggestion; we added new experiments below complementing Table A2 (in the Appendix), where we compare accuracy for the ViT base model on CIFAR100. The new experiments measure the accuracy and throughput for the SVHN and CIFAR10 datasets with different numbers of examples per class and hyperparameters.
We are comparing the accuracy and throughput between precision modes for private training with Opacus. | dataset | epochs | S | lr | throughput FP32 | throughput TF32 | accuracy FP32 | accuracy TF32 | std FP32 | std TF32 | |---------|-------|-----|----------|-----------|----------|------------|----------|------------|----------| | cifar10 | 18 | 250 | 0.000710 | 57.317226 | 117.987103 | 0.919950 | 0.923300 | 0.008273 | 0.008485 | | | 6 | 100 | 0.000758 | 56.950525 | 114.012746 | 0.956275 | 0.956367 | 0.001504 | 0.001850 | | SVHN | 23 | 250 | 0.00098 | 57.279881 | 117.286507 | 0.610933 | 0.611321 | 0.009724 | 0.009848 | | | 23 | 500 | 0.00098 | 57.352352 | 118.368599 | 0.806301 | 0.806438 | 0.006607 | 0.007316 | The throughput difference is the same for both datasets. Using TF32 is twice as fast as FP32. We tested the variability in accuracies across three repeats of the DP training with both TF32 and FP32. We computed the pairwise differences between TF32 and FP32 on different seeds and data sets. The differences between the two precisions have a mean of $\approx -4.3 \times 10^{-5}$ and std $\approx 4.2 \times 10^{-4}$. Using a pairwise t-test on the differences we cannot reject the null hypothesis that the accuracies have the same mean (p-value $\approx 0.74$). Furthermore, compared to the variance arising from DP (see Table above), the differences from changing the precision are negligible. This is in line with the previous results of the article in Table A2. > Why is accuracy missing from all experimental results? How does the sampling affect accuracy results? The paper focuses on computational efficiency and as can be seen from Table A2 the optimizations have no impact (apart from tiny impact due to seeding noise) on test accuracy. 
We use accuracy to test our implementations and check that everything is implemented correctly, as no accuracy difference is expected, except when using different precision, where we observe little impact (see Table A2 and the experiments above).

Which sampling scheme is optimal is a separate question, and there is some early work looking at it [1]. However, for fair comparisons between sampling schemes in terms of accuracy, the accounting must match the sampling method, and so far tight accounting has only been established for Poisson subsampling [2]. Tight accounting for other sampling methods remains an active area of research.

[1] Chua et al. Scalable DP-SGD: Shuffling vs. Poisson subsampling. NeurIPS 2024.
[2] Annamalai et al. (2024) To Shuffle or not to Shuffle: Auditing DP-SGD with Shuffling. arXiv:2411.10614.

---

Rebuttal Comment 1.1: Comment: The reviewer thanks the authors for addressing their concerns, as most of them are well explained. The reviewer has the following remaining suggestion:

The reviewer understands that the current experiments focus on the setting of CIFAR100 and a single machine, but still hopes to see how the method performs on larger datasets in the future. This may make the proposed method more useful and general.

Thanks for the comparison between TF32 and FP32. But the reviewer noticed that TF32 gives even higher accuracies than FP32, which is a bit strange. Normally, one would expect that higher precision like FP32 should give better accuracy. Maybe the authors can explain a little why this happens, even if the difference is not significant.

---

Reply to Comment 1.1.1: Comment:

> Thanks for the comparison between TF32 and FP32. But the reviewer noticed that TF32 gives even higher accuracies than FP32, which is a bit strange. Normally, one would expect that higher precision like FP32 should give better accuracy. Maybe the authors can explain a little why this happens, even if the difference is not significant.
Thanks for the question regarding the utility difference between TF32 and FP32. As the reviewer has pointed out, the difference that we observe is not statistically significant and we would like to point to Stosic and Micikevicius [1] that report similar observations in the non-DP setting: the accuracies can change either up or down, but the differences are very small. TF32 only modifies certain operations as noted by [1]: *“TF32 is only exposed as a Tensor Core operation mode, not a type. All storage in memory and other operations remain completely in FP32, only convolutions and matrix-multiplications convert their inputs to TF32 right before multiplication.”* The change from FP32 to TF32 is expected to contribute slightly different rounding errors in matrix multiplications and convolutions. While this could be expected to lead to higher training loss, it is not as clear what the impact on test accuracy would be. In any case the differences are much smaller than differences caused by other sources of randomness such as different random seeds for DP. [1] Stosic, D. and Micikevicius, P. Accelerating AI training with NVIDIA TF32 tensor cores. https://developer.nvidia.com/blog/accelerating-ai-training-with-tf32-tensor-cores/, 2021
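As background for the precision discussion above: TF32 keeps FP32's 8-bit exponent but shortens the significand from 23 to 10 bits. The size of the resulting rounding error can be bounded with a small NumPy emulation (illustrative only, not the paper's code; this version truncates the significand, whereas the hardware rounds, so it slightly overstates the error):

```python
import numpy as np

def truncate_to_tf32(x):
    # Emulate TF32: keep FP32's 8-bit exponent, truncate the 23-bit
    # significand to TF32's 10 bits by clearing the low 13 mantissa bits.
    x = np.asarray(x, dtype=np.float32)
    bits = x.view(np.uint32) & np.uint32(0xFFFFE000)
    return bits.view(np.float32)

x = np.array([1.2345678, 3.1415927, 0.001953], dtype=np.float32)
xt = truncate_to_tf32(x)
rel_err = np.abs(x - xt) / np.abs(x)
# a 10-bit significand has ulp 2**-10, so the truncation error stays below that
assert np.all(rel_err < 2.0 ** -10)
```

The bound of roughly $2^{-10}$ per matrix-multiply input is consistent with the observation above that the accuracy differences are far smaller than the run-to-run variance from DP noise.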
Model Immunization from a Condition Number Perspective
Accept (oral)
Summary: This paper proposes to achieve model immunization, that is, pretraining a model that is hard to fine-tune on some harmful tasks while preserving the performance of other tasks, by maximizing the condition number of the corresponding harmful fine-tuning objective so that the convergence is slow and numerically unstable. Specifically, they propose a differentiable regularization that is proved to increase the condition number of the matrix regularized and further extend such property to the model immunization setting for the linear model. Experiments are conducted for both linear models as well as nonlinear neural networks with various architectures and the proposed method is shown to effectively increase the condition number on the harmful task objective while preserving that of the primary objective. --- ## update after rebuttal After reading the authors' rebuttal, I feel comfortable recommending **strong accept** for this paper. Claims And Evidence: The proposed regularizer is supported with theoretical proof to show that it monotonically increases the condition number, and is numerically verified that when applied for model immunization, the condition number of the harmful task indeed increases. The inferior convergence of such task with large condition number is also verified in experiments. Methods And Evaluation Criteria: The condition number is a well-known factor of convergence speed and stability in both the classic optimization and deep learning literature. It’s reasonable to motivate and evaluate model immunization from the condition number perspective. Theoretical Claims: I have checked the proof of Theorem 4.2, 4.3, and briefly 4.4. The proof seems sound. Experimental Designs Or Analyses: The experiments mostly align with the theory and is extended to nonlinear neural networks. The proposed evaluation metric relative immunization ratio is intuitive and reasonable. Supplementary Material: I have checked Appendix B. The proof seems sound. 
Relation To Broader Scientific Literature: The paper proposed a new method to achieve model immunization, a recent proposed concept in the broader context of AI safety, and serves as an alternative to IMMA, a bilevel optimization method proposed together with the concept of model immunization, while achieving better performance in terms of preserving the performance of the primary task. Essential References Not Discussed: None to the best of the reviewer’s knowledge. Other Strengths And Weaknesses: Strengths: 1. The proposed framework of achieving model immunization by manipulating the condition number of the objective function corresponding to different tasks is very novel. 2. The proposed regularizers that increase or decrease the condition number are theoretically well-supported and technically solid. 3. The numerical experiments demonstrate superior performance compared to baseline methods for model immunization. Weaknesses: 1. Manipulating the condition number of objective functions seems computationally expensive. Even though the proposed regularizers are differentiable alternatives to the condition number, they still involve the maximum or minimum singular value of the regularized matrix. Could the authors justify how the proposed method could be generalized to an even larger scale? Other Comments Or Suggestions: N/A Questions For Authors: The proposed model immunization framework involves a primary task and a harmful task. How would the model perform on tasks other than the primary task and the harmful task? Code Of Conduct: Affirmed. Overall Recommendation: 5
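The premise this review relies on, that a larger condition number slows first-order optimization, can be checked on a toy quadratic. A minimal sketch (illustrative only, not the paper's experiment; `gd_iterations` and the matrices are made up):

```python
import numpy as np

def gd_iterations(hessian, tol=1e-6, max_iter=100000):
    """Run gradient descent on f(w) = 0.5 * w^T H w with the optimal
    constant step size 2 / (sigma_min + sigma_max) and count the
    iterations until the iterate norm drops below `tol`."""
    sigma = np.linalg.eigvalsh(hessian)          # ascending eigenvalues
    step = 2.0 / (sigma[0] + sigma[-1])
    w = np.ones(hessian.shape[0])
    for t in range(max_iter):
        if np.linalg.norm(w) < tol:
            return t
        w = w - step * (hessian @ w)
    return max_iter

well_conditioned = np.diag([1.0, 2.0])    # condition number 2
ill_conditioned = np.diag([1.0, 200.0])   # condition number 200

# the ill-conditioned quadratic needs far more iterations to converge
assert gd_iterations(ill_conditioned) > gd_iterations(well_conditioned)
```

This is exactly the mechanism the immunization objective exploits: inflating the condition number of the harmful task's Hessian makes the corresponding finetuning converge slowly.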
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful and encouraging feedback. We are glad the reviewer found the theoretical formulation sound, the proposed regularizers technically solid, and the empirical results convincing. We address the specific concerns and questions below. > Q12. Manipulating the condition number of objective functions seems computationally expensive. Could the authors justify how the proposed method could be generalized to an even larger scale? The computational cost depends on two main factors: the number of samples and the complexity of computing singular values. For the first, in large-scale problems, the Hessian of the full dataset can be approximated using only a minibatch. For the second, as the reviewer points out, the proposed method requires only the maximum or minimum singular value of the regularized matrix, which (particularly in high-dimensional settings) can be efficiently computed using techniques such as the Lanczos algorithm [1,2] or randomized SVD [3,4]. These approaches reduce the computational complexity from $\mathcal{O}(d^3)$ for a full SVD of a $d \times d$ matrix to $\mathcal{O}_k(d^2)$, with the rank-dependent factor absorbed into $\mathcal{O}_k$. This reduction is beneficial given the typically low-rank structure of Hessian matrices. [1] Cullum and Willoughby. Lanczos algorithms for large symmetric eigenvalue computations: Vol. I: Theory. Society for Industrial and Applied Mathematics, 2002. [2] Golub and Van Loan. Matrix computations. JHU press, 2013. [3] Halko et. al. "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions." SIAM review, 2011. [4] Tropp. "Randomized algorithms for matrix computations." 2020. > Q13. How would the model perform on tasks other than the primary task and the harmful task? Thanks for pointing out this interesting direction. 
We now additionally report the ratio between condition numbers with and without immunization, i.e., Eq. 15 (i) but for all digits, for a model with $D_{\tt P}$ = digit 0 and $D_{\tt H}$ = digit 1. For digits other than $D_{\tt P}$ and $D_{\tt H}$, we observe that the ratio remains close to 1, indicating that immunization does not affect the condition number of the features on other tasks. From a theoretical perspective, following the reasoning in Sec. 3.1, the performance on other tasks is intuitively related to the correlation with $D_{\tt H}$, i.e., the relative angle between the singular vectors.

| | 0 ($D_{\tt P}$) | 1 ($D_{\tt H}$) | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|--------------|----------------|-----------------|------|------|------|------|------|------|------|------|
| condition number ratio | 0.0451 | 9.8571 | 1.7440 | 1.2147 | 0.9582 | 1.0049 | 0.9785 | 1.3007 | 2.2467 | 1.1453 |

---

Rebuttal Comment 1.1: Comment: Thank you for your response! The rebuttal has addressed my concerns and I am happy to recommend **accept** for this paper.
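The rebuttal's point that only the extreme singular values are needed (rather than a full SVD) can be illustrated with the simplest such iterative method, power iteration on $A^\top A$; Lanczos and randomized SVD refine the same idea. A sketch on assumed toy data, not the authors' implementation:

```python
import numpy as np

def largest_singular_value(A, iters=200, seed=0):
    """Estimate sigma_max(A) by power iteration on A^T A, which only
    needs matrix-vector products instead of a full O(d^3) SVD."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=A.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = A.T @ (A @ v)       # one step of power iteration on A^T A
        v = w / np.linalg.norm(w)
    return float(np.linalg.norm(A @ v))

# toy matrix with known singular values 9, 5, 2, 0.5
A = np.zeros((6, 4))
A[:4, :4] = np.diag([9.0, 5.0, 2.0, 0.5])

# the estimate recovers the known top singular value
assert abs(largest_singular_value(A) - 9.0) < 1e-6
```

The smallest singular value can be targeted analogously (e.g. shift-and-invert, or Lanczos with `which='SM'`-style options in sparse eigensolvers), which is what makes the rebuttal's $\mathcal{O}_k(d^2)$ complexity claim plausible for low-rank Hessians.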
Summary: The paper reframes the immunization task, i.e., making models robust against finetuning on specific tasks (Contribution 1: Section 3), from a novel perspective using the condition number. Through this insight, the authors propose a regularization method at the pretraining stage that makes finetuning for specific tasks more difficult (Contribution 2: Sections 4.1, 4.2). They theoretically prove, in the linear model setting, that this regularization decreases the condition number for tasks they aim to learn and increases it for tasks they want to immunize against (Contribution 3: Section 4.3). To demonstrate the effectiveness of this regularization, they empirically show an improved relative immunization ratio for linear regression and image classification tasks on both linear models (Contribution 4-1: Section 5.1) and deep neural networks (Contribution 4-2: Section 5.2).

Claims And Evidence:
Theoretical claims and evidence: The main framework of the condition number and its theoretical connection to the immunization task is very novel and clear.
Experimental evidence: This will be discussed in detail in the Methods And Evaluation Criteria section.

Methods And Evaluation Criteria: As the paper itself acknowledges (in Section 4.4), its biggest weakness is the experimental setting. I would like to see two major improvements from the rebuttal.

First, there are no results for generative models. As mentioned in the introduction regarding text-to-image models, immunization is a more important issue for generation than for classification. While the theoretical contribution of this paper is sufficiently commendable, it's difficult to strongly recommend it due to the lack of coverage of generative models. For immunization on a generative model, I would be satisfied even with a simple dataset like MNIST. For example, it would be good to compare convergence speed after training an unconditional or conditional diffusion model on data excluding the digit 7 and then finetuning on 7.
But if you have a preferred setting, feel free to use that. Second, the paper heavily relies on the RIR metric. While I agree that immunization can be approximated by RIR, due to the limitations of the second-order approximation of loss landscapes in neural networks (unlike linear models), it's difficult to claim that improvements in RIR directly translate to improvements in resistance to actual fine-tuning. For deep neural networks, I would like to see **test accuracy** on both D_H and D_P simultaneously. Drawing a Pareto curve w.r.t. fine-tuning epochs or hyperparameters and showing superior results to baseline methods would be ideal. Currently, immunization on D_H is only reported through RIR, which I think doesn't accurately show how effective this method actually is for DNNs. **The second problem appears quite critical to me**. I find it difficult to improve my score to "accept" if this experiment is not conducted. (Minor Q) Are there any papers dealing with immunization in non-linear settings that use RIR as the main evaluation metric? If so, please share them and I will take that into consideration. --- If both issues are **perfectly** addressed, I think this paper has an **oral-level** contribution. Theoretical Claims: I was not able to verify all proofs provided in the appendix. The theorem results seem reasonable. Experimental Designs Or Analyses: As stated in Methods And Evaluation Criteria, not addressing generative models and using only RIR as a performance metric for immunization are major issues. Supplementary Material: I read the "Pseudo-code of the dummy layer" section. (Minor Q) For deep NNs, should this be applied to all affine transformations, i.e., every weight parameter? Relation To Broader Scientific Literature: As far as I know, this research is the first to apply the condition number to a field other than optimization contexts related to convergence speed.
The condition number seems to have potential applications in areas beyond safety (the goal of this research), such as model interpretability and other fields. I would like to give high credit for being the first research to apply the condition number to a different field. Essential References Not Discussed: Adversarial prompting to generate unsafe images with T2I models could be one problem that can be solved from an immunization perspective. Please consider adding this kind of technique. [1]: Circumventing Concept Erasure Methods For Text-to-Image Generative Models, https://arxiv.org/abs/2308.01508 [2]: Ring-A-Bell! How Reliable are Concept Removal Methods for Diffusion Models?, https://arxiv.org/abs/2310.10012 Other Strengths And Weaknesses: The writing is clear, and the motivation is also very good. I think the only weakness of this paper is the experimental setting. Other Comments Or Suggestions: (This is only a suggestion, so I would appreciate it if the authors could consider it and incorporate it at their discretion.) S1: Is the proof sketch of Theorem 4.1 really necessary for the main text? I believe that even though the paper aims at a theoretical contribution, this part should be relegated to the appendix and more experimental results should be presented. Questions For Authors: Q1: I'm curious about the numerical stability of the regularization term due to its 1/x form. What are the conditions for gradient explosion? Looking at Eq. (13), it seems like explosion occurs when $\mathbf{S}$ becomes a diagonal matrix composed of $\sigma_{\min}$'s, is that correct? I'm curious if this is the only case where explosion occurs. If so, since Hessian matrices typically have low-rank structure, numerical stability could be further justified. This question is just to improve the theoretical contribution of this paper. Code Of Conduct: Affirmed. Overall Recommendation: 5
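Q1's concern can be illustrated with a hypothetical stand-in penalty (not the paper's Eq. (13)): take $R(A) = 1/(\sigma_{\max} - \sigma_{\min})$, a 1/x-style term that blows up exactly when the spectrum collapses to a single repeated singular value, mirroring the conjectured explosion condition:

```python
import numpy as np

def gap_penalty(A):
    # hypothetical 1/x-style term: finite whenever the spectrum has a gap,
    # and divides by zero only when all singular values coincide
    s = np.linalg.svd(A, compute_uv=False)
    return 1.0 / (s[0] - s[-1])

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
print(gap_penalty(A))           # finite: a generic matrix has a spectral gap
print(gap_penalty(np.eye(6)))   # degenerate spectrum -> division by zero (inf)
```

A generic random matrix has distinct singular values with probability one, which is the practical reason the degenerate case is rarely hit.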
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful review and respond to questions below. > Q5. Generative task We consider immunization against linear probing (Sec. 3). For generative tasks, linear probing is not commonly used for transfer learning; instead, other techniques, e.g., LoRA, are more common. Nevertheless, we now report on immunization against linear probing for a generative model. We experiment with a CVAE on the MNIST dataset. Let $D_{\tt P}$ be digit 0 images and $D_{\tt H}$ be digit 1 images. The CVAE uses an MLP for both the encoder and decoder. The last layer of the decoder is used for linear probing, and all other layers are treated as the feature extractor. The loss $\mathcal{L}$ is the negative ELBO. As in the paper, we report RIR to evaluate immunization. To evaluate generation quality, we report the accuracy of a digit classifier (with 99.3% acc. on the MNIST test set). In **Tab R1**, our approach successfully immunizes the model against $D_{\tt H}$ without hurting the performance on $D_{\tt P}$, as indicated by a large RIR and an acc. of 100% on the generated images. In **Tab R2**, we report linear probing on $D_{\tt H}$ across different epochs for models w/ and w/o immunization. We observe that the model w/o immunization gradually learns to generate $D_{\tt H}$ (digit 1), while the immunized model struggles to learn $D_{\tt H}$.

**Tab R1.** Results of immunizing CVAE on MNIST.

| | Metrics |
|-|-|
| Eq. 17 (i) | 1128.4 |
| Eq. 17 (ii) | 0.016 |
| RIR | 68971.8 |
| Classification acc. on $D_{\tt P}$ | 1.0 |

**Tab R2.** Acc. of generation on $D_{\tt H}$ w/ and w/o immunization.

| Linear probing epoch | 5 | 10 | 15 | 20 | 25 |
|-|-|-|-|-|-|
| w/o Immu. | 0.30 | 0.37 | 0.47 | 0.54 | 0.69 |
| w/ Immu. | 0.01 | 0.00 | 0.02 | 0.01 | 0.01 |

> Q6. Test acc. on $D_{\tt H}$ for deep nets vs. fine-tuning epochs The acc. on $D_{\tt P}$ is fixed as reported in Tab. 3 in our paper. In Tab.
R3 & R4, we further report the linear probing (fine-tuning) results on different feature extractors and provide the test acc. on $D_{\tt H}$ during the first 20 epochs of linear probing (fine-tuning). Here, $D_{\tt H}$ refers to the Stanford Cars dataset. The acc. is reported every 5 epochs. As shown, our method exhibits the slowest convergence rate on both ResNet18 and ViT, as indicated by the lowest acc. compared with the baselines. We will include a plot visualizing these results in the paper.

**Tab R3.** Test acc. on $D_{\tt H}$ with ResNet18 as the backbone.

| Method/Fine-tuning epoch | 5 | 10 | 15 | 20 |
|-|-|-|-|-|
| Pre-trained model | 13.6 | 18.2 | 21.0 | 23.8 |
| $\mathcal{R}_{\tt ill}$ Only | 12.8 | 16.6 | 19.9 | 23.8 |
| IMMA | 16.8 | 20.9 | 23.7 | 26.4 |
| Opt $\kappa$ | 12.9 | 17.8 | 19.9 | 23.3 |
| $\bf Ours$ | 9.5 | 15.5 | 18.0 | 21.4 |

**Tab R4.** Test acc. on $D_{\tt H}$ with ViT as the backbone.

| Method/Fine-tuning epoch | 5 | 10 | 15 | 20 |
|-|-|-|-|-|
| Pre-trained model | 30.7 | 42.2 | 51.4 | 60.3 |
| $\mathcal{R}_{\tt ill}$ Only | 8.8 | 20.9 | 23.3 | 39.0 |
| IMMA | 23.31 | 35.7 | 47.4 | 58.7 |
| Opt $\kappa$ | 11.8 | 20.1 | 27.5 | 42.1 |
| $\bf Ours$ | 7.9 | 14.6 | 24.5 | 34.0 |

> Q7. Any other papers using RIR for evaluation? To the best of our knowledge, we are the first to study immunization from a condition number perspective. Therefore, we are not aware of other works using RIR as an evaluation metric. > Q8. Should the dummy layer be applied to every weight parameter in deep nets? Under the linear probing setting, the dummy layer only needs to be inserted at the last layer of the feature extractor, and the rest of the layers can be trained normally. Intuitively, we treat a deep-net as extracting a "learnable feature" $\mathbf{x}$ followed by a linear feature extractor $\theta_L$; that is, we view a deep-net as $f_{[\theta_1, \dots, \theta_L]} = \mathbf{x}(\theta_1,\dots, \theta_{L-1})^\top\theta_{L}$. Note, our theoretical result is limited to linear models and does not fully justify such constructions.
We believe this is an important future direction, and believe the linear model's results provide a promising first step. > Q9. References on adversarial prompting Thanks! We will review the suggested [1, 2] along with other related works on concept erasing methods to provide a more comprehensive view of the field. > Q10. Suggestion on proof sketch We will adjust the length/placement of the proof sketches. This would also help to create space for the additionally suggested experiments in the main text. > Q11. Numerical stability of Reg. terms Indeed, $\mathbf{S}$ being a diagonal matrix composed of $\sigma_\min$'s is the only case for gradient explosion. We would note, though, that the premise of our theorems, e.g., Eq. (13), is the minimum singular value $\sigma_\min$ being unique, which would prevent this case in the theoretical derivation. In practice, even though the Hessian could be low-rank, we observed that $\sigma_\max$ is usually much larger than $\sigma_\min$, in many cases by several orders of magnitude. Therefore, we did not empirically observe issues with gradient explosion. --- Rebuttal Comment 1.1: Comment: I thank the authors for their efforts. Almost all of my questions and concerns have been resolved, and I think this is an excellent paper that proposes a novel and principled perspective on immunization / safety / unlearning research. There is still a severe limitation in that it can only be applied to linear probing, but nonetheless, I think it is excellent research that applies optimization theory to safety. I am upgrading my score to **strong accept**. > Q5. Generative task First of all, thank you for experimenting with generative models, which you could have considered out of scope. The results on CVAE look promising. The experiments on CVAE can be seen as a proof of concept that this methodology can also be applied to generative models.
Since it has been partly shown that the impact of this research extends to generative models, I would like to adjust my score upward. However, one thing I'd like to mention is that when writing my review, I was considering applying the condition number to the entire set of parameters (or LoRA parameters) rather than just linear probing for generative models. Based on the insights from this research, I hope that better and more efficient regularization methods that can be applied to more complex models beyond linear models will emerge in subsequent studies. > Q6. Test acc. on $D_H$ First, thank you for showing the test accuracy on $D_H$ w.r.t. epoch. I thought this result was **very important** in evaluating this paper. I think Tab R3 is what verifies that regularization through the condition number has a positive effect on immunization. Personally, I think the RIR metric is closely related to what this method directly optimizes. Therefore, I am skeptical about using RIR as the only and primary metric for immunization (in the neural network setting). In real-world scenarios, what people are interested in when immunizing models is not the RIR but the actual evolution of test accuracy w.r.t. fine-tuning epochs. I'm curious about the authors' thoughts as well. > Q7. Any other papers use RIR for evaluation? I think RIR corresponds to a proxy for immunization, not immunization itself. I find it difficult to agree with using a proxy as the main metric when evaluating immunization directly is computationally feasible. > Q8, Q9, Q10 My confusions have been resolved. Thank you for accepting my suggestions and opinions. > Q11. Numerical stability of Reg. terms Correct. My intention wasn't to criticize that it might not be stable, but to say that its being stable is very reasonable.
Since readers interested in numerical stability may be concerned due to the form of the regularization, it might be good to include a brief discussion in the appendix or main manuscript (just a line or two) mentioning that the Hessian generally has $\sigma_{\max} \gg \sigma_{\min}$ [1]. I'm sharing a related reference: [1]: Gradient Descent Happens in a Tiny Subspace, https://arxiv.org/abs/1812.04754
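The point discussed in Q11, that typical Hessians have $\sigma_{\max}$ several orders of magnitude above $\sigma_{\min}$, is easy to illustrate on a synthetic Gram (Hessian-like) matrix. The geometric spectral decay below is an assumption for illustration, not measured from the paper's models:

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.standard_normal((200, 50))
scales = 2.0 ** -np.arange(50)        # assumed geometric spectral decay
H = (U * scales).T @ (U * scales)     # PSD Gram matrix, a stand-in for a Hessian
s = np.linalg.svd(H, compute_uv=False)
ratio = s[0] / s[-1]
print(ratio)                          # many orders of magnitude
```

With such a spread, a 1/x-style regularization term on the spectrum operates far from its singular point, which is consistent with the authors' report of no gradient explosion in practice.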
Summary: This paper proposes a framework for studying model immunization, i.e., the task of making fine-tuning on harmful datasets harder. The authors propose that for linear models, the difficulty of fine-tuning can be characterized by the condition number of the Hessian matrix. Based on this theory, the authors propose two regularizers to increase the condition number for the harmful dataset while keeping the condition number stable for the pre-training task. The authors further prove that both regularizers are differentiable and that the optimization goal can be achieved through a gradient-based algorithm. Empirical results on linear models support the theory and show better performance than other baselines. The authors also empirically evaluate the method on two non-linear models, ResNet18 and ViT. The results are also promising and better than the baselines. ## update after rebuttal Thank you for the further clarifications in the rebuttal. They have addressed all my concerns. I personally agree immunization is an interesting and important direction to explore further. Claims And Evidence: * Claim: immunization to harmful fine-tuning, at least for linear models, can be characterized using the condition number of the Hessian matrix. * Evidence: theoretically, the condition number is a known indicator of the difficulty of gradient-based optimization; empirically, the experimental results, including the proposed method and the $R_{ill}$-based approach, also support the claim. * Claim: (linear) model immunization can be modeled as an optimization problem with the objective shown in equation (11), and can be solved with algorithm 1. * Evidence: (1) minimizing $R_{ill}$ will increase the condition number for the harmful dataset; this regularizer is proven to be optimizable using gradient descent with a guaranteed increase in condition number. (2) the regularizer $R_{well}$ is proven by Nenov et al. to be optimizable using gradient descent. (3) both regularizers are proven to be differentiable w.r.t.
model parameters $\theta$, and have closed-form gradients. (4) an implementation of algorithm 1 in PyTorch and a successful evaluation. * Claim: solving equation (11) makes the model hard to fine-tune on harmful datasets while maintaining good pre-training task performance. * Evidence: empirical experimental results on linear regression and image classification. * Claim: solving equation (11) also shows promising results on non-linear models. * Evidence: empirical experimental results on ResNet18 and ViT. Methods And Evaluation Criteria: The evaluation criterion, the relative immunization ratio (RIR), makes sense. The paper also provides condition numbers for both the harmful dataset and the pre-training dataset. Theoretical Claims: Seems correct; I have not checked carefully. Experimental Designs Or Analyses: The experiments use three baselines for comparison. The selection of these baselines is reasonable. The experiments use a few exemplary tasks for linear and non-linear models. While the tasks and datasets are all common and widely used, the selection of these tasks and corresponding datasets is not fully justified. Therefore, it's possible that the proposed method would not work as well on other tasks and datasets, especially for non-linear models. For ResNet18, the last two convolutional blocks are updated; for ViT, the final transformer block is updated. It is not clear why this setup was chosen. Supplementary Material: No. Relation To Broader Scientific Literature: Model immunization is a promising approach to making open-weight models safer against malicious fine-tuning. This paper advances the state of the art in this field. Essential References Not Discussed: None found. Other Strengths And Weaknesses: Maybe it would be obvious to experts, but I would hope to see some explanation of the challenges of modeling immunization for non-linear models. Other Comments Or Suggestions: No Questions For Authors: * What are the challenges of modeling immunization for non-linear models?
* Can the proposed framework generalize to non-linear models? Code Of Conduct: Affirmed. Overall Recommendation: 4
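The closed-form gradients noted in evidence (3) of this review rest on the classical identity that, for a simple singular value, $\partial \sigma_i / \partial A = u_i v_i^\top$. A finite-difference check of this identity on a generic random matrix (an illustration, not the paper's actual regularizer gradient):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 4))
U, s, Vt = np.linalg.svd(A, full_matrices=False)
G = np.outer(U[:, 0], Vt[0])        # analytic gradient of sigma_max w.r.t. A

eps = 1e-6
E = rng.standard_normal(A.shape)    # random perturbation direction
fd = (np.linalg.svd(A + eps * E, compute_uv=False)[0]
      - np.linalg.svd(A - eps * E, compute_uv=False)[0]) / (2 * eps)
print(fd, np.sum(G * E))            # directional derivatives agree
```

The identity holds whenever the singular value is simple, which is exactly the uniqueness premise the authors invoke in their theorems.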
Rebuttal 1: Rebuttal: We thank the reviewer for the positive and constructive review. We answer individual questions below. > Q1. Justification for task and dataset selection Our experiments consist of two settings: (a) a linear model setup, where the setting matches our proposed theoretical framework, and (b) a deep-net setup, where we experimented with non-linear models despite the theoretical gap. **For linear models**: For the regression task, we choose the House Price dataset as it is a widely used tabular dataset, e.g., in intro ML courses. For the classification task, we choose MNIST as it is the most basic image classification dataset. Additionally, linear models are effective on these datasets. **For deep-nets**: The transfer learning setting follows from [A] (see Line 352). The chosen Stanford Cars dataset and Country211 dataset are simply the first two datasets presented in Figure 4 of [A] (the official ICML version), where they demonstrate linear probing to work well. We now provide experimental results on the third dataset from their Figure 4 (Food-101) as $D_{\tt H}$ in the table below. Here the feature extractor backbone is ResNet18. We observe that the proposed method is also effective on Food-101. - [A] Radford, Alec, et al. "Learning transferable visual models from natural language supervision." ICML 2021.

| $D_{\tt H}$ | Eq. 17 (i) $\uparrow$ | Eq. 17 (ii) $\downarrow$ | $\tt RIR_{\theta_0}$ $\uparrow$ | $D_{\tt P}$ Test Acc. $\uparrow$ |
| -------- | -------- | -------- | -------- | -------- |
| Food-101 | 1.712 | 0.571 | 3.045 | 63.74% |

We will clarify the motivation of these setups in the experiment section. > Q2. Explanation of the experiment setup for ResNet18 and ViT We choose to update only the last two convolutional blocks of ResNet18 and the final transformer block of the Vision Transformer (ViT), following common practice in transfer learning.
For this experimental setup, we start immunization from a model pre-trained on ImageNet. Typically, only the final layers are updated for efficiency reasons. We will clarify this choice. We now provide the results of immunizing the entire ResNet18 backbone on the Stanford Cars dataset below. When training the entire backbone, we observe a similar result, but with a slightly lower immunization effect on $D_{\tt H}$ and test accuracy on $D_{\tt P}$. Note that the running time for the full model is more than twice that of only updating the last blocks.

| Trainable module | Eq. 17 (i) $\uparrow$ | Eq. 17 (ii) $\downarrow$ | $\tt RIR_{\theta_0}$ $\uparrow$ | $D_{\tt P}$ Test Acc. $\uparrow$ |
| -------- | -------- | -------- | -------- | -------- |
| Entire ResNet18 | 2.102 | 0.672 | 3.127 | 61.48% |
| Last two blocks | 2.386 | 0.699 | 3.467 | 62.36% |

> Q3. The challenges of modeling immunization for non-linear models and generalizing the framework to non-linear models On the theoretical front, characterizing the Hessian in arbitrary non-linear models remains challenging. In particular, the Hessian of the linear model admits a tractable form for which we can analytically relate its condition number to the singular values of the task-specific data matrix and the shared weight matrix. As a result, our theoretical guarantees on gradient updates with respect to the feature extractor $\theta$, which rely heavily on rigorous matrix analysis, have yet to be generalized to non-linear models. **The proposed regularizations, however, are applicable to bounding the condition number of general matrices, including the Hessians of non-linear models**. Hence, we have tested the empirical performance of the immunization framework on various non-linear models, including ResNet18 and ViT, with linear probing. As demonstrated in Sec. 5.2, the results validate its practical effectiveness.
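The tractable linear-model Hessian mentioned above can be checked numerically: for squared loss with feature matrix $Z$, the Hessian w.r.t. the linear weights is proportional to $Z^\top Z$, so its condition number is exactly $\kappa(Z)^2$. A quick sketch with a generic random feature matrix (a stand-in for the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.standard_normal((200, 10))   # n x d feature matrix of a linear model
H = Z.T @ Z                          # Hessian of the squared loss (up to a constant factor)
kappa_Z = np.linalg.cond(Z)
kappa_H = np.linalg.cond(H)
print(kappa_H, kappa_Z ** 2)         # kappa(H) = kappa(Z)^2
```

This squaring is why conditioning of the features translates directly into conditioning of the fine-tuning problem, the quantity the regularizers target.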
High-Dimensional Tensor Regression With Oracle Properties
Accept (poster)
Summary: The paper introduces a high-dimensional tensor-response tensor regression model under low-dimensional structural assumptions, such as sparsity and low-rankness. The authors propose a least squares estimation framework with non-convex penalties and derive general risk bounds for the resulting estimators. The paper also particularizes the bounds for the case where the support of the solution is known (oracle estimator). An accelerated proximal gradient algorithm is proposed and tested on synthetic data and an image denoising problem. ## update after rebuttal I thank the authors very much for addressing my questions. However, I keep my original score. Claims And Evidence: The paper explicitly or implicitly makes the following key claims: Theoretical Contributions: The main theoretical result is Theorem 5, which provides an upper bound on the Frobenius norm of the estimation error for the tensor A. While this result is valuable, its practical implications are not thoroughly explored. For instance: - Is the bound tight? In other words, how does it compare with the actual errors observed in the experiments? - How does the nonconvexity of the penalty affect this bound? Can we compare the bounds for the convex and nonconvex cases? - Assumption 4, which assumes multivariate normality of the data, may not be realistic for real-world applications. Proposed Framework: The paper introduces a general framework for linear regression with tensor-structured input and output data, extending beyond traditional algorithms that primarily handle scalar, vector, or matrix data. This theoretical framework is well constructed and allows for the incorporation of low-dimensional priors, such as low rank and sparsity. However, the experimental validation is limited to a simple synthetic dataset with scalar outputs and a basic denoising problem involving real images. Nonconvex vs.
Convex Penalties: The results indicate that nonconvex penalty functions outperform convex ones, as suggested by prior literature. The authors demonstrate that their method, when using a nonconvex penalty, achieves lower error compared to a classical convex penalty based on the $\ell_1$-norm. However, the paper lacks a theoretical explanation or intuition as to why nonconvex penalties yield better performance. Methods And Evaluation Criteria: The theoretical methodology is sound. The experimental methodology used to evaluate the proposed algorithms is valid but somewhat limited (see detailed comments below). Theoretical Claims: The proofs of the theoretical claims are provided in the supplementary material, which I did not check carefully. However, the theoretical claims sound reasonable. Experimental Designs Or Analyses: Yes, I checked the experimental results and analysis provided within the paper and the supplementary material. The proposed experiments are correct but limited (see detailed comments below). Supplementary Material: I briefly checked the contents of the supplementary material but not the details of the mathematical proofs. Relation To Broader Scientific Literature: Although the proposed algorithm is evaluated only in limited scenarios, specifically on synthetic datasets and a simple denoising problem, it has the potential to drive significant advancements in various scientific fields where tensor-structured datasets are prevalent. In particular, it could be highly valuable in domains such as neuroimaging, where modeling relationships between input and output tensor data is crucial, as well as in other fields that rely on tensor-based linear regression. Expanding the evaluation to more diverse and realistic datasets would further strengthen the impact and applicability of the proposed approach. Essential References Not Discussed: The paper includes relevant previous references on tensor regression.
However, there are important previous works addressing similar problems, as is the case with the following papers: - "Higher-Order Partial Least Squares (HOPLS): A Generalized Multi-Linear Regression Method", Q. Zhao, C. F. Caiafa, D. P. Mandic, Z. C. Chao, Y. Nagasaka, N. Fujii, L. Zhang, A. Cichocki, IEEE Trans. on Pattern Analysis and Machine Intelligence (PAMI), Vol. 35, No. 7, 2013. doi:10.1109/TPAMI.2012.254. - "A Multilinear Subspace Regression Method Using Orthogonal Tensors Decompositions", Q. Zhao, C. F. Caiafa, D. P. Mandic, L. Zhang, T. Ball, A. Schulze-Bonhage, A. Cichocki, Proc. NIPS 2011 (Neural Information Processing Systems), Granada, Spain, 12-17 December 2011. Other Strengths And Weaknesses: Strengths: - The paper presents a general theoretical framework for linear regression with input and output tensors of arbitrary dimensions, extending beyond traditional approaches that focus on scalar, vector, or matrix data. - A theoretical bound on the estimator error is derived, providing a rigorous foundation for analyzing the performance of the proposed method. - The study explores the use of nonconvex penalty functions, which appear to play a crucial role in enhancing performance, particularly for high-dimensional datasets. Weaknesses: - Limited experimental validation: The evaluation is restricted to a simple synthetic dataset with scalar outputs and a basic denoising problem using real images, limiting the demonstration of the framework's broader applicability. - Lack of practical insights from Theorem 5: The paper does not thoroughly explore the practical implications of the theoretical bound. For example, is the bound tight? How does it compare with actual errors observed in experiments? - Unrealistic data assumption: Assumption 4 relies on multivariate normality, which may not hold in real-world applications, potentially affecting the method's generalizability.
- Missing theoretical justification for nonconvex penalties: While experiments show that nonconvex penalties outperform convex ones, the paper lacks a theoretical explanation or intuition behind this behavior. Other Comments Or Suggestions: I have not found any typos. Questions For Authors: Regarding Theorem 5: - Is this bound tight? In other words, how does this bound compare with the actual errors attained in the experiments? - What is the effect of the non-convexity of the penalty on this bound? Can we compare the bounds for the convex and nonconvex cases? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Broader Scientific Literature** Thank you for introducing these relevant references. They are very helpful, and we will cite them appropriately in the final version. **Response to Claims and Evidence** 1. To the best of our knowledge, the proposed bound is tight in terms of its order, with certain constant factors omitted. Since our analysis focuses on the convergence rate rather than explicit constants, directly comparing the theoretical bounds with observed experimental errors can be challenging. 2. The choice of regularization penalties affects the bound primarily through their impact on the Gaussian width and the subspace compatibility constant, as outlined in Theorem 5. Detailed results for both sparse and low-rank regularization are presented in the subsequent corollaries. For example, in the case of element-wise sparsity, the key difference between convex and nonconvex penalties is that nonconvex regularization leads to improved error rates by reducing the dependence on the ambient dimension $d$, thereby attaining the oracle rate of $\sqrt{\frac{s}{n}}$. 3. Assumption 4 on sub-Gaussianity is a standard condition widely used in high-dimensional statistical estimation (see [1]). It is a mild assumption that holds in many practical scenarios and covers a broad class of commonly encountered distributions. We acknowledge, however, that there are settings beyond the sub-Gaussian framework—for example, when the data exhibit heavy-tailed behavior. Extending the theoretical analysis to such cases is an important direction in statistical estimation and can be left for future research. [1] Raskutti, G., Yuan, M., & Chen, H. (2019). Convex regularization for high-dimensional multiresponse tensor regression. **Response to Weaknesses** 1. Due to space constraints, only a subset of simulation results is reported in the main text. In fact, many additional experimental results are provided in the appendix, offering a more comprehensive analysis. 
2 & 3. Please refer to the reply above. 4. It is well acknowledged in the statistical literature that nonconvex penalties can yield better estimation performance than convex ones by reducing the bias introduced by regularizers such as the $\ell_1$-norm. In this paper, we investigate how such improvements are reflected in the context of tensor regression. Specifically, nonconvex penalties impose less shrinkage on large coefficients, enabling more accurate recovery of the true signal. This often results in faster convergence rates, particularly under sparsity or low-rank assumptions, and can achieve the oracle rate under suitable conditions. **Additional Experimental Evaluations.** Thank you for this question. Neuroimaging is indeed an important application area for tensor regression. In fact, we have already conducted relevant experiments. Due to space constraints, the results are provided in the appendix. Our experiments include an electroencephalography (EEG) dataset, which is a form of neuroimaging [2]. We appreciate your suggestion and plan to apply our methods to additional neuroimaging datasets in future work. [2] Liu, Y., Liu, J., Long, Z., and Zhu, C. Tensor regression. Springer, 2022. We sincerely appreciate your recognition of our work. If our responses have adequately addressed your concerns, we would be grateful if you could consider reflecting this in your evaluation. Thank you once again for your time and thoughtful feedback.
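The "less shrinkage on large coefficients" point made above is concrete at the level of scalar proximal operators: the convex $\ell_1$ (soft-threshold) operator shrinks every surviving coefficient by $\lambda$, whereas MCP (one of the nonconvex penalties discussed) leaves coefficients above $\gamma\lambda$ untouched. A minimal sketch with unit step size, not the paper's exact accelerated algorithm:

```python
import numpy as np

def soft_threshold(x, lam):
    # proximal operator of the convex l1 penalty: shrinks everything by lam
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def mcp_prox(x, lam, gamma=3.0):
    # proximal operator of the MCP penalty (unit step size, gamma > 1):
    # zero below lam, rescaled soft-threshold in between, identity above gamma*lam
    ax = np.abs(x)
    return np.where(ax <= lam, 0.0,
                    np.where(ax <= gamma * lam,
                             np.sign(x) * (ax - lam) / (1.0 - 1.0 / gamma),
                             x))

x = np.array([0.5, 1.5, 5.0])
print(soft_threshold(x, 1.0))   # [0.   0.5  4.  ] -- the large entry is biased by lam
print(mcp_prox(x, 1.0))         # [0.   0.75 5.  ] -- the large entry is left unshrunk
```

The unshrunk large coefficients are exactly why nonconvex penalties can remove the $\ell_1$ bias and reach oracle-type rates under suitable conditions.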
Summary: This paper addresses tensor regression models for high-dimensional tensor data. Specifically, it proposes a tensor-response tensor regression model, assuming low-dimensional structures such as sparsity and low rankness. While conventional convex penalties are easy to optimize, they often fail to model the data accurately. On the other hand, non-convex penalties have a higher potential to model the data correctly, but there's no guarantee of reaching the optimal solution. To address this issue, the authors theoretically prove that non-convex regularization terms, such as SCAD (Smoothly Clipped Absolute Deviation) and MCP (Minimax Concave Penalty), exhibit oracle performance under certain assumptions. They then propose an algorithm based on the accelerated proximal gradient algorithm to efficiently compute estimators based on these non-convex penalties. Evaluation experiments using synthetic and real-world datasets demonstrate the effectiveness of the proposed regression model and the practicality of the theoretical results. ## update after rebuttal Thank you for your response. I understand now. I believe the contribution of this paper is significant, so I will keep my evaluation as it is. I mistakenly posted my comment in the Official Comment section. My apologies. Claims And Evidence: It is intuitively understandable that convex penalties cannot effectively capture data characteristics, especially in high-dimensional tensor data. The authors rigorously prove, through sound theoretical development, that estimators based on certain non-convex penalties exhibit oracle performance under specific conditions. Furthermore, they demonstrate, using both synthetic and real-world data, that non-convex penalties can indeed effectively capture the underlying features of the data. Methods And Evaluation Criteria: Regarding this paper's key claim, Oracle performance, the evaluation appropriately assesses performance variations under different knowledge conditions. 
While a comparative evaluation against convex penalties is presented, the method for determining hyperparameters when using convex penalties is not explicitly stated. It is presumed that a 10-fold cross-validation approach was employed, similar to the proposed method. Explicitly stating this would enhance the persuasiveness and rigor of the evaluation. Theoretical Claims: I have reviewed the proofs, particularly those demonstrating that the non-convex penalty achieves Oracle performance under specific conditions, and found the theoretical development sound and without issue. Experimental Designs Or Analyses: As mentioned in the "Methods and Evaluation Criteria" section, the evaluation appropriately assesses performance variations under different knowledge conditions, which is directly relevant to the paper's central claim of oracle performance. The evaluation metrics used are also deemed suitable. Supplementary Material: I have reviewed the supplementary material, including the descriptions of additional non-convex penalty variations. This reinforces my understanding that the proposed method applies to a broad range of non-convex penalties. I also acknowledge the inclusion of additional experimental results, which strengthen the evidence supporting the proposed method's effectiveness. I followed the overall flow of the detailed proofs for the Corollaries in the main text. Relation To Broader Scientific Literature: While tensor completion with low-rank constraints is a widely studied and effective technique, this paper makes a significant contribution to the field by focusing on the potential limitations of convex penalties and providing theoretical guarantees for estimators based on non-convex penalties, which offer the potential for more appropriate modeling. Essential References Not Discussed: The paper appropriately cites relevant prior work on various constraints used in tensor regression. I do not have any essential references to suggest for addition. 
Other Strengths And Weaknesses: As previously mentioned, this paper makes a significant contribution to the field of tensor learning with various constraints, such as low-rank constraints, by providing theoretical guarantees for estimators based on non-convex penalties. The evaluation experiments are generally comprehensive. However, a concern remains regarding whether the hyperparameters for the convex penalties, used as comparison methods, were optimally determined. Other Comments Or Suggestions: There are a few potential errors in the manuscript: 1. In Section 2.1, while $\mathcal{A}^{\*}$ is defined as an M-th order tensor and $\mathcal{X}$ as an N-th order tensor, the expression around line 85 appears to represent the elements of $\mathcal{A}^{\*}$ with N indices and the elements of $\mathcal{X}$ with M indices. It seems that $\mathcal{A}^{\*}$ and $\mathcal{X}$ may be incorrectly assigned in this expression. 2. In the first sentence of Assumption 4, $X^{n)}$ appears. Should this be corrected to $X^{(n)}$, adding the missing left parenthesis? Questions For Authors: How were the hyperparameters for the convex penalties, used as comparison methods in Table 1, determined? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your positive feedback. **Hyperparameters for the Convex Penalties** All hyperparameters are selected via ten-fold cross-validation, as noted in the first paragraph of Section 5. This includes the regularization parameter $\lambda$ used in convex penalties. Specifically, $\lambda$ is chosen from a uniform grid of 21 values in the range $[0, 1]$, and the optimal value is selected through ten-fold cross-validation. **Other Comments or Suggestion 1 and 2** We will correct typos in the camera-ready version. We greatly appreciate your recognition of our work. If our responses have sufficiently addressed your concerns, we would be grateful if you could consider reflecting this in your evaluation. Thank you once again for your time and valuable feedback.
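As a concrete illustration, the selection procedure described above (a uniform grid of 21 values of $\lambda$ on $[0,1]$, scored by ten-fold cross-validation) can be sketched as follows. This is a minimal sketch only: ridge regression stands in for the penalized tensor estimator because it has a closed-form solution, and the data are synthetic.

```python
import numpy as np

def ten_fold_cv_lambda(X, y, lambdas, n_folds=10, seed=0):
    """Return the grid value with the lowest mean validation MSE."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), n_folds)
    cv_err = []
    for lam in lambdas:
        errs = []
        for k in range(n_folds):
            val = folds[k]
            tr = np.concatenate([folds[j] for j in range(n_folds) if j != k])
            # Ridge stands in for the penalized estimator (closed-form solve).
            w = np.linalg.solve(X[tr].T @ X[tr] + lam * np.eye(X.shape[1]),
                                X[tr].T @ y[tr])
            errs.append(np.mean((X[val] @ w - y[val]) ** 2))
        cv_err.append(np.mean(errs))
    return float(lambdas[int(np.argmin(cv_err))])

# Uniform grid of 21 values on [0, 1], ten-fold CV, as described above.
lambdas = np.linspace(0.0, 1.0, 21)
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, 0.0, 0.0, 2.0, 0.0]) + 0.5 * rng.normal(size=200)
best = ten_fold_cv_lambda(X, y, lambdas)
```

For the actual tensor estimators, the inner closed-form solve would be replaced by a run of the (accelerated) proximal gradient algorithm at the given $\lambda$.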
Summary: This paper studies tensor regression models with non-convex penalties and provides general risk bounds for the resulting estimators, demonstrating that they achieve oracle statistical rates under mild technical conditions. The authors also introduce an accelerated proximal gradient algorithm to estimate the proposed estimators. A comprehensive set of experiments is conducted to illustrate the advantages of non-convex penalties over convex penalties. While the theoretical results are solid, the paper primarily extends [1] to the non-convex penalties. As a result, the novelty is somewhat limited, and certain aspects of the presentation are unclear. [1] Raskutti, G., Yuan, M., & Chen, H. (2019). Convex regularization for high-dimensional multiresponse tensor regression. Claims And Evidence: The authors present the primary theoretical results for the general estimator in Equation (2) and derive upper bounds under different scenarios. However, I find the explanation of the oracle property unclear. In Section 3.2.1 SPARSITY REGULARIZATION, the authors first define the oracle rate as the statistical convergence rate of the oracle estimator and provide a detailed element-wise sparse oracle estimator, which is well-defined. However, in Corollary 1, the authors present the rate of $\hat{\mathcal A}$ without establishing any explicit connection to the defined oracle estimator $\widehat {\mathcal A}^O$. It is unclear how the authors conclude that $\hat{\mathcal A}$ achieves the oracle rate under some weak assumptions. A similar issue arises in Section 3.2.2 **LOW-RANK REGULARIZATION**. To rigorously establish the oracle property, the authors should provide theoretical results like that: 1. $\hat{\mathcal A}_{\bar S_1}=0$ 2. the asymptotic normality of ${\hat{\mathcal A}_{S_1}}$ Additionally, the paper includes extensive experiments to demonstrate the advantages of non-convex penalties over convex penalties. 
However, in the real data experiment, the authors assume that the ground-truth $\mathcal A$ is known and simulate $\mathcal X$ and $\mathcal Y$ accordingly. A more realistic evaluation would involve an experiment where only the observed $\mathcal X$ and $\mathcal Y$ are available, allowing an assessment of the model's performance when $\mathcal A$ is unknown. Methods And Evaluation Criteria: In the experimental section, the authors primarily compare convex penalties. However, there exist many other comparative methods for tensor regression on a single dataset, such as tensor regression based on CP decomposition and Tucker decomposition. Theoretical Claims: I have read the proof in the author's appendix, which is very detailed. I believe it is feasible, but the specific details require further reading. Experimental Designs Or Analyses: I believe the soundness/validity of any experimental designs or analyses is feasible. The authors have conducted experiments on both simulated and real data. However, the experiments provided so far are all based on simulations, assuming the true tensor coefficients are known. The authors should include real data analysis using actual tensor covariates and responses (without knowing the true $\mathcal A$). This would make the findings more convincing. Supplementary Material: The authors did not provide any supplementary material. Relation To Broader Scientific Literature: The theoretical analysis primarily compares the proposed method with convex penalties, particularly emphasizing that the use of non-convex penalties achieves the oracle rate. However, the current presentation lacks clarity in this aspect. Essential References Not Discussed: The authors could further discuss more papers about regularized tensor regression based on tensor decomposition methods, for example, [2]. [2] Lu, W., Zhu, Z., & Lian, H. (2020). High-dimensional quantile tensor regression. Journal of Machine Learning Research, 21(250), 1-31.
Other Strengths And Weaknesses: ### Strengths As discussed above. ### Weaknesses 1. The discussion on the oracle property is unclear. It is difficult to directly see how $\hat{\mathcal A}$ and the oracle estimator exhibit similar performance. 2. There are issues with the citations. For example, *(Hua Zhou & Zhu, 2013)* and *(Zhou et al., 2013)* refer to the same paper. The authors should carefully check and correct the references. 3. The authors should report the computational time in the experiments. Additionally, more detailed explanations should be provided for the tables and figures. For instance, **Figure 3** is difficult to interpret intuitively. Other Comments Or Suggestions: The authors could consider reorganizing the structure of the paper to better highlight its key contributions. The most significant contribution of this work lies in its theoretical advancements. Therefore, the authors should emphasize this aspect more prominently and provide a clearer discussion of the challenges in the theoretical analysis and how they are addressed. Questions For Authors: 1. In **NUMERICAL EXPERIMENTS**, how many runs were performed for each setting? 2. In the numerical results, the standard deviation often increases as the sample size grows. For instance, in **Figure 1a**, when $d=20$, the standard deviation is largest at $n=3000$. A similar trend is observed in **Figure 2b**. Is this due to an insufficient number of runs? 3. Can the authors provide a comparison of the computational time across different methods? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **More discussions on decomposition-based tensor regression methods** We appreciate the comment regarding tensor regression methods based on tensor decomposition techniques such as CP and Tucker. Both decomposition-based and regularization-based approaches have been actively explored in the tensor regression literature, and we have reviewed these works in the Introduction. However, we would like to emphasize that the primary focus of this paper is theoretical—namely, to establish oracle statistical guarantees for tensor regression under nonconvex regularization. To the best of our knowledge, existing decomposition-based methods have not addressed this theoretical direction. Therefore, decomposition-based approaches represent a distinct line of research, and direct comparisons fall outside the intended scope of this work. **Experiment on Real-world Data** In our experiment, we did not have access to a dataset with $\mathcal{X}$ and $\mathcal{Y}$ as direct measurements. Therefore, we generated such a dataset based on a ground truth tensor image $\mathcal{A}^\ast$, which is a common practice in the literature (see [1,2]). This setting allows us to directly evaluate the estimation performance by comparing the estimated tensor $\hat{\mathcal{A}}$ with the ground truth $\mathcal{A}^\ast$. Additionally, we have also reported the error between $\mathcal{Y}$ and $\widehat{\mathcal{Y}}$, which assesses the model's predictive performance when $\mathcal{A}^\ast$ is unknown—precisely the scenario raised in your comment. [1] Romera-Paredes, B., H. Aung. “Multilinear multitask learning”. In International Conference on Machine Learning. [2] Liu, Y., Liu, J., Long, Z. Tensor regression. Springer. **Response to weaknesses** 1. In this paper, the term *oracle* refers to the performance of an estimator that assumes knowledge of the true support, as in [3,4]. 
For example, in the case of element-wise sparsity, the oracle estimator $\widehat{\mathcal{A}}^{O}$ satisfies $\Vert\widehat{\mathcal{A}}^{O}-\mathcal{A}^*\Vert_{\mathrm{F}}\lesssim\Vert(\nabla L(\mathcal{A}^*))_ {\mathcal{S}^*}\Vert_{\mathrm{F}} \asymp \sqrt{\frac{s}{n}}$, which follows directly from a first-order Taylor expansion via the mean value theorem. [3] Fan,J.,Liu,H.,Sun,Q. I-LAMM for sparse learning: Simultaneous control of algorithmic complexity and statistical error. The Annals of Statistics. [4] Gui,H.,Han,J. Towards faster rates and oracle property for low-rank matrix estimation. In International Conference on Machine Learning. 2. We will correct the reference issue in the camera-ready version. 3. [Explanation on Figure 3] Figure 3 visualizes the element-wise estimation error between a randomly generated third-order tensor and its estimate obtained using our nonconvex low-rank regularizer. Each element is evaluated individually: if the absolute error exceeds a fixed threshold, the point is marked in red; otherwise, it is shown in blue. Figures 3(a) and 3(d) present representative outcomes using the nonconvex method, while Figures 3(b) and 3(e) illustrate results from a convex regularization approach. We observe that the nonconvex method produces significantly fewer red points, indicating smaller element-wise errors and demonstrating the improved accuracy achieved by our proposed approach. **Response to Questions** 1. All reported experimental results are based on 100 Monte Carlo replications, as stated in the first paragraph of Section 5. We will update this to 1,000 replications in the final version to ensure greater robustness and stability of the results. 2 & 3. We acknowledge the reviewer’s concern regarding the anomalous behavior where the standard deviation appears to increase with the sample size. We sincerely appreciate this observation. 
After a careful review, we found that this issue was due to an insufficient number of Monte Carlo replications. To address this, we conducted additional experiments with 1,000 Monte Carlo trials, which yielded more stable and robust results. The updated figures, including standard deviations and computational time measurements, are available at the following link: [https://anonymous.4open.science/r/ICML_Rebuttal-6182/](https://anonymous.4open.science/r/ICML_Rebuttal-6182/). We thank the reviewer again for this helpful comment and will correct the issue in the final version. **Essential References Not Discussed** We are grateful to the reviewer for bringing this valuable reference to our attention, and we will cite it appropriately in the final version of the paper. Thank you for your thoughtful and constructive feedback. We sincerely appreciate your advice and the time you took to review our work. However, we are somewhat puzzled by the decision to reject the paper. We hope that the above clarifications and updated results have addressed your concerns. If our responses have sufficiently resolved the issues raised, we would be grateful if you could consider reflecting this in your final evaluation. --- Rebuttal Comment 1.1: Comment: I have carefully reviewed the rebuttal, and most of my concerns have been addressed—particularly regarding the experimental section, where the authors have made significant improvements. At this point, I have only one remaining question. The oracle property presented in the current version appears to align with the **(weak) oracle property**, seen in *Fan, J., Liu, H., Sun, Q. (2018)* **Corollary 4.3**. Would the authors be able to provide or attempt to establish result analogous to **Theorem 4.4 (Strong Oracle Property)**? If such a deterministic result can be established, I believe it would strengthen the theoretical contribution. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful follow-up. 
You are correct that the oracle property established in our current version aligns with the (weak) oracle property, as characterized in Corollary 4.3 of [1]. We have indeed made substantial efforts toward establishing a result analogous to Theorem 4.4 of [1], which demonstrates the strong oracle property. This property guarantees that the oracle estimator is not only a local minimizer but also the unique one, thereby ensuring that the algorithm consistently selects the correct support and attains estimation accuracy as if the true support were known a priori. In our setting, similar to [3], the oracle property can be achieved under the weakest possible minimum signal strength condition. In contrast, the strong oracle property in [1,2] relies on more stringent signal strength assumptions. Therefore, under comparable stronger conditions, a strong oracle property can also be established for our method. Below, we outline a sketch of the proof for the element-wise sparsity case, including the key ideas, main lemmas, and theorem statement. Analogous arguments can be extended to other structural settings, such as mode-wise low-rankness, fiber-wise sparsity, and slice-wise sparsity. Due to space constraints, we present only a high-level sketch here. We plan to incorporate the full result and proof in the camera-ready version. For notational simplicity, we use $j$ to denote the index sequence $i_{1},\dots,i_{M}$. ###### Lemma 1. Suppose Assumptions 1~4 hold, and there exists a constant $0<\gamma_1<\infty$ such that the Gaussian width satisfies $\omega(\Omega)\leq\gamma_1$. If $4(\nabla L(\hat{\mathcal{A}}^O)+\epsilon)\leq\lambda\lesssim\frac{r}{\sqrt{|\mathcal{S}_1|}}$, we have $|\mathcal{E}_t|\leq2|\mathcal{S}_1|$, where $\mathcal{E}_t=\mathcal{S}_1\cup\mathcal{S}_t$ and $\mathcal{S}_t=\[j:\nabla R_j(\mathcal{A}_t)<p^\prime _\lambda(\frac{2+\sqrt{2}}{2\rho^-}\lambda)\]$. 
For $t\geq2$, the $\epsilon$-optimal solution $\hat{\mathcal{A}}_t$ must satisfy $||\hat{\mathcal{A}}_t-\hat{\mathcal{A}}^O||_F\lesssim||\lambda _ {\mathcal{E}_t}||_F+\epsilon\sqrt{|\mathcal{E}_t|}$, where $\lambda _ {\mathcal{E}_t}\in\mathbb{R}^{d_1\times\cdots\times d_M}$ with the component in $\mathcal{E}_t$ as $\lambda$ and the other components are $0$. Lemma 1 establishes a deterministic error bound between the estimator at iteration $t$ and the oracle estimator. This result is analogous in spirit to Lemma B.1 in [1], and forms the basis for extending to a strong oracle property under suitable conditions. ###### Lemma 2. It follows that $||\lambda _ {\mathcal{E}_t}||_F\leq\underset{\mathrm{I}}{\underbrace{||p^\prime _\lambda(|\mathcal{A}^* _{\mathcal{S}_1}|-\frac{(2+\sqrt{2})\lambda}{2\rho^-})||_F}}+\underset{\mathrm{II}}{\underbrace{\lambda|\[j\in\mathcal{S}_1:|(\hat{\mathcal{A}_t}) _j-\mathcal{A}^* _j|\geq\frac{(2+\sqrt{2})\lambda}{2\rho^-}\]|^{1/2}}}+\underset{\mathrm{III}}{\underbrace{\lambda\sqrt{|\mathcal{E}_t\setminus\mathcal{S}_1|}}}$. This result is analogous to Lemma B.2 in [1]. Following a similar analysis as in [1], we obtain term $\mathrm{I}=0$, term $\mathrm{II}\lesssim\lambda\sqrt{|\mathcal{S} _{t-1}\cap\mathcal{S}_1|}$, and term $\mathrm{III}\lesssim\lambda\sqrt{|\mathcal{S} _{t-1}\setminus\mathcal{S}_1|}$. Substituting into Lemma 1 yields: $||\hat{\mathcal{A}}_t-\hat{\mathcal{A}}^O|| _F\lesssim\lambda\sqrt{2|\mathcal{S} _{t-1}|}+\epsilon\sqrt{|\mathcal{E} _t|}$. Under some additional assumptions $||\hat{\mathcal{A}}^O-\mathcal{A}^*|| _\max\lesssim\lambda$ and $t\gtrsim\log((1+\epsilon/\lambda)\sqrt{|\mathcal{S}_1|})$, we obtain $\mathcal{S}_t=\emptyset$, thereby yielding the strong oracle property. The final theorem should be stated as follows: ##### (Strong Oracle Property). Suppose Assumptions 1~4 hold, and there exists a constant $0<\gamma_1<\infty$ such that the Gaussian width satisfies $\omega(\Omega)\leq\gamma_1$. 
If $\mathcal{A}^* _j$ satisfies the condition $\min _{j\in\mathcal{S}_1}\left|\mathcal{A}^* _j\right|\geq\nu$, $4(\nabla L(\hat{\mathcal{A}}^O)+\epsilon)\leq\lambda\lesssim\frac{r}{\sqrt{|\mathcal{S}_1|}}$, $\epsilon\leq\frac{\lambda}{\sqrt{|\mathcal{S}_1|}}$, and $||\hat{\mathcal{A}}^O-\mathcal{A}^*|| _\max\lesssim\lambda$, then for sufficiently large $t$ such that $t\gtrsim\log((1+\epsilon/\lambda)\sqrt{|\mathcal{S}_1|})$, we have $\hat{\mathcal{A}}_t=\hat{\mathcal{A}}^O$. We hope these clarifications and preliminary results address your suggestions. If our responses meet your expectations, we would be sincerely grateful for a positive evaluation of our work. [1] Fan, J., Liu, H.(2018). I-LAMM for sparse learning: Simultaneous control of algorithmic complexity and statistical error. The Annals of Statistics. [2] Fan, J., Xue, L.(2014). Strong oracle optimality of folded concave penalized estimation. Annals of Statistics. [3] Zhang, C.H. (2012). A general theory of concave regularization for high-dimensional sparse estimation problems. Statistical Science.
Summary: This paper proposes a framework for tensor-on-tensor regression, introducing novel nonconvex regularizers and an accelerated proximal gradient algorithm for estimation. The authors propose a class of penalties that depend on the singular values of each tensor dimension and give Frobenius-norm rate-of-convergence guarantees under oracle-optimal hyperparameter tuning. A proximal gradient algorithm is also provided as a feasible estimation procedure. Numerical experiments and an empirical application show the advantage of the proposed method. Claims And Evidence: The paper makes the following claims: 1. The proposed nonconvex penalty estimators for tensor-on-tensor regression achieve oracle-optimal convergence rates under different sparsity assumptions. The estimator can be computed through a proximal gradient algorithm. 2. The nonconvex penalty estimators exhibit faster convergence rates compared to those with convex penalties. Overall the paper makes a very strong case for claim 1. Claim 2, to my reading of the paper, is mostly supported by the empirical exercises, as the theoretical results are not directly compared to the convex penalty cases. With this in mind, expanding on the justification for claim 2 would improve the value added of the paper. Methods And Evaluation Criteria: The methods and evaluation criteria are adequate for the problem at hand. Theoretical Claims: The theoretical claims are well stated and appear novel and correct. * Given the claims of the paper, it would be nice if more direct theoretical comparisons were made with convex penalty estimators. Is there a theoretical result that ensures that the rates of convergence are faster for the nonconvex penalty? It seems from Corollaries 1 and 2 that the rates are very similar to the LASSO case, for example. Experimental Designs Or Analyses: The analyses are careful and convincing.
However, more detail on the comparison with convex methods in Table 1 and the simulation exercise would be nice to reinforce the claims of the paper. In which cases do we expect the convex methods to perform better? Expanding the simulation exercises to study this would improve the relative contribution of the paper. Supplementary Material: I have parsed through the theoretical appendix. Relation To Broader Scientific Literature: I am not very familiar with the tensor regression literature, but it would be good to clarify the novelty of the results and Theorem 5 vis-à-vis the literature, in particular to clarify why the rates are faster than for the convex penalty estimators. Essential References Not Discussed: - Other Strengths And Weaknesses: Overall the paper is very well written and makes a very compelling case for the proposed method! My only concern is whether the authors actually provide a justification for the nonconvex penalties having a faster convergence rate, beyond the empirical simulations. Other Comments Or Suggestions: * Theorem 5 should be renamed Theorem 1. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your positive comments and insightful feedback. We have carefully addressed your concerns as follows: **1. Justification for Claim 2** In this paper, we focus on nonconvex sparse learning in tensor regression. Nonconvex regularizers such as SCAD and MCP can achieve the oracle estimation rate because they induce less bias compared to convex penalties like the $\ell_1$ norm. Unlike Lasso, which uniformly shrinks all coefficients and can introduce significant bias for large signals, nonconvex penalties apply little to no shrinkage to large coefficients while still promoting sparsity among small ones. This selective regularization enables accurate support recovery and nearly unbiased estimation on the true support, leading to improved statistical efficiency under appropriate conditions. In Corollary 1, we present the estimation performance for the sparse parameter under nonconvex regularization. We also compare this result with the estimation rate under convex regularization. The corollary explicitly shows that nonconvex regularization enables improved estimation accuracy by eliminating dependence on the ambient dimension $d$. For the low-rank parameter estimation setting, the corresponding result is provided in Corollary 2, which can be directly compared with prior work such as [1,2]. For instance, Lemma 10 of [1], which uses convex nuclear norm regularization, yields an estimation bound that scales with the tensor dimension. In contrast, our bound remains dimension-independent. [1] G. Raskutti, M. Yuan, and H. Chen. Convex regularization for high-dimensional multiresponse tensor regression. In Proceedings of the 36th International Conference on Machine Learning (ICML), 2019. [2] S. Negahban and M. J. Wainwright. Estimation of (near) low-rank matrices with noise and high-dimensional scaling. Annals of Statistics, 39(2):1069–1097, 2011. **2. 
Comparison with Rate from LASSO** The estimation error under nonconvex regularization differs from—and indeed improves upon—that of Lasso, as it achieves a faster convergence rate equivalent to the rate attainable when the true support is known in advance. Specifically, by leveraging nonconvex penalties, the estimation bound becomes independent of the ambient dimension $d$, thereby achieving the oracle rate. This dimension-free behavior highlights a key advantage of nonconvex regularization. **3. When Might Convex Methods Outperform Nonconvex Methods?** Thank you for this question. While nonconvex regularization can yield improved estimation rates under ideal conditions, convex methods such as Lasso may outperform them when these assumptions are not satisfied. For instance, nonconvex methods often rely on strong signal conditions to ensure accurate support recovery. In low signal-to-noise ratio settings, where such conditions may fail, convex methods can produce more stable and reliable estimates due to their uniform shrinkage behavior and greater algorithmic robustness. **4. Clarification on Table 1** Due to space constraints, we report results for two representative nonconvex regularizers in Table 1. Results for additional regularizers are provided in the Appendix. We will include more detailed discussions of both Table 1 and Table 2 in the camera-ready version. **5. On Theorem 5** Theorem 5 provides a general characterization of the statistical estimation performance under nonconvex regularization in tensor regression. Notably, the choice of regularizer—convex or nonconvex—affects both the Gaussian width and the subspace compatibility constant, thereby highlighting fundamental differences between the two approaches. For specific penalty choices, the subsequent corollaries clearly demonstrate that nonconvex regularization can achieve faster convergence rates than convex counterparts. Besides, we will consider the suggestion to rename Theorem 5. 
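The selective shrinkage discussed in points 1 and 2 can be made concrete by comparing proximal maps. Below is a minimal sketch (the threshold values are illustrative): the $\ell_1$ (Lasso) proximal operator shrinks every coefficient by $\lambda$, while the MCP proximal operator leaves coefficients larger than $\gamma\lambda$ untouched, which is the source of the reduced bias.

```python
import numpy as np

def prox_lasso(z, lam):
    """Soft-thresholding: shrinks every coefficient by lam (biases large z)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def prox_mcp(z, lam, gamma=3.0):
    """MCP proximal map (gamma > 1): zero below lam, linear interpolation up
    to gamma * lam, and the identity (no shrinkage) beyond gamma * lam."""
    return np.where(np.abs(z) <= lam, 0.0,
           np.where(np.abs(z) <= gamma * lam,
                    np.sign(z) * (np.abs(z) - lam) / (1.0 - 1.0 / gamma),
                    z))

z = np.array([0.05, 0.5, 5.0])   # small, moderate, and large coefficients
lasso_out = prox_lasso(z, 0.2)   # large entry 5.0 is shrunk to 4.8
mcp_out = prox_mcp(z, 0.2)       # large entry 5.0 is returned unchanged
```

Both operators set the small coefficient to zero (promoting sparsity), but only the Lasso map shrinks the large signal, illustrating why nonconvex penalties can be nearly unbiased on the true support.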
We hope the above responses address your comments clearly, and we sincerely thank you again for your valuable feedback and thoughtful review of our paper. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I am maintaining my score. --- Reply to Comment 1.1.1: Comment: Thank you very much for your kind response and for maintaining your high score. I truly appreciate your thoughtful comments and the time you devoted to reviewing our work. Your support means a great deal to us.
SIMPLEMIX: Frustratingly Simple Mixing of Off- and On-policy Data in Language Model Preference Learning
Accept (poster)
Summary: The authors first claim that on-policy data are most effective on tasks with objective answers, while off-policy data are most effective on open-ended tasks like creative writing. Analysis on AlpacaEval supports the claim. Then, the authors propose SimpleMix, a simple method of mixing on-policy and off-policy data: just sample from the two policies with equal probability. SimpleMix makes a modest but consistent improvement over strong baselines. The authors also provide good ablations with different mixtures, filtering mechanisms, and response diversity. ## update after rebuttal The new result with DNO further strengthens the benefit of simplicity. I increased my score accordingly. Claims And Evidence: The authors' first claim is that on-policy data are most effective on tasks with objective answers. This is backed by AlpacaEval 2.0, but it is a bit weak because AlpacaEval has very small samples on Math and Coding. Also, on-policy algorithms are only trained for a single epoch, which could've underestimated their performance. SimpleMix shows a consistent benefit over reasonable baseline methods. The improvement is modest, but the consistency makes a convincing case. Methods And Evaluation Criteria: I checked the experimental setup in Section 3.1. In terms of base models, the two Llama 3.1-8B-based SFT models are good choices. They are performant and represent best practices in the literature. The setup could've been improved with more diverse models, in particular more diverse sizes, although the computational constraint is understandable. AlpacaEval 2.0 is appropriate for this initial experimentation, but its math subset is pretty small (<50), so I am not sure how meaningful the 2–5% difference on math in Figure 2 is. Not a major concern because the observation is consistent on Coding, but it could've been more convincing if bigger benchmarks were used for this analysis. I also checked the experimental setup in Section 4.1. Again, the base model choices are standard.
UltraFeedback is also a good choice as it is well-established and covers various capabilities (for example, per Ivison et al., https://arxiv.org/abs/2406.09279). Evaluation on AlpacaEval is good for overall conversation quality, but the rest of the benchmarks are a bit too focused on knowledge tasks, which don't move much from DPO training. I suggest a setup like https://arxiv.org/abs/2406.09279, which more broadly covers coding, safety, truthfulness, etc. Evaluating on coding and reasoning could've helped better validate the authors' claim that on-policy performs better on reasoning and off-policy performs better on creative writing. The authors compare against a good set of baselines (HyPO and DPO-Mix-P). Theoretical Claims: The paper doesn't make theoretical claims, but it makes a good connection to theoretical works and compares against Shi et al. (2024) and Song et al. (2025). Experimental Designs Or Analyses: As discussed in Methods And Evaluation Criteria, I checked the experimental setups in Sections 3.1 and 4.1. Issues are discussed in that section. In addition, I have a concern that all algorithms are only run for a single epoch. I hypothesize that on-policy algorithms benefit more from more epochs since new responses are generated for every epoch. Hence, the authors' experiments may underestimate the benefit of on-policy methods. Supplementary Material: I checked Sections B and C for additional experiment details. Relation To Broader Scientific Literature: The connection to symmetric sampling from Ball et al. (2023) is interesting because the method is highly similar and hence establishes a connection to the broader reinforcement learning literature. It could've been nicer if it was discussed more prominently in earlier literature review sections. Currently it is only discussed in Section 5.1.
Essential References Not Discussed: Methods like Direct Nash Optimization (Rosset et al., 2024, https://arxiv.org/abs/2404.03715) do mix samples from the online policy and an offline policy together, hence SimpleMix is not the only paper that mixes online and offline policy samples. DNO could've been an interesting baseline, at least a non-iterative version. Other Strengths And Weaknesses: The proposed method is very simple and hence has a strong potential to be adopted in practice. Other Comments Or Suggestions: The summary in lines 080-094 is too much of a repeat from only a few paragraphs above. I suggest making the summary more concise so that it becomes less of a repeat. Questions For Authors: The paper is clearly written, hence I don't have major questions that would change my evaluation. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the insightful comments and suggestions. We appreciate that the reviewer finds our work simple and sees strong potential for it to be adopted in practice. We are also encouraged that the reviewer finds our paper “clearly written”. > It could've been nicer if it was discussed more prominently in earlier literature review sections. Currently it is only discussed in Section 5.1. We thank the reviewer for relating our work to the broader RL literature. We will discuss our relation with existing work in RL that is not about language model alignment earlier in our updated manuscript, and extend our discussion in the appendix. > Methods like Direct Nash Optimization (Rosset et al 2024 https://arxiv.org/abs/2404.03715) do mix samples from online policy and offline policy together, hence SimpleMix is not the only paper which mixes online and offline policy samples. DNO could've been an interesting baseline, at least a non-iterative version. We thank the reviewer for bringing up the interesting discussion on DNO. Although the authors of DNO defined it as “a batched on-policy algorithm” (page 9 in [1]), DNO selects the best and worst response from N on-policy generations and one off-policy generation from gpt-4-turbo to perform DPO, therefore it can be seen as a hybrid method. As suggested by the reviewer, we conduct an experiment comparing SimpleMix with one iteration of DNO on Tulu-3.1-8B-SFT, where we sample N = 4 generations, mix them with the off-policy “chosen” generation from Ultrafeedback, and select the best and worst response according to our oracle reward model.
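The pair construction just described, pooling the N on-policy generations with the single off-policy response and taking the best and worst under the reward model, can be sketched as follows (a minimal sketch; the toy length-based reward is a stand-in for the oracle reward model):

```python
def build_dpo_pair(on_policy_responses, off_policy_response, reward_fn):
    """Form a (chosen, rejected) DPO pair: pool N on-policy generations with
    one off-policy response, then take the best/worst under the reward model."""
    pool = list(on_policy_responses) + [off_policy_response]
    scored = sorted(pool, key=reward_fn)
    return scored[-1], scored[0]  # (chosen, rejected)

# Toy reward: longer answers score higher (stand-in for the oracle reward).
reward = len
chosen, rejected = build_dpo_pair(["ok", "a longer answer"], "mid one", reward)
```

Note that with this construction the chosen or rejected response may come from either source, which is what makes the resulting training data a hybrid of on- and off-policy samples.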
We report the results below: ### Alpaca Eval 2.0 | Model | LC | WR | Std Error | Length | |--------------------------|-------|-------|-----------|--------| | Tulu 3 SFT + DNO | 16.22 | 14.15 | 1.29 | 1521 | | Tulu 3 SFT + SimpleMix | 20.64 | 18.02 | 1.36 | 1474 | Compared to the setting in [1], we changed the number of on-policy generations (5 -> 4), the off-policy model (gpt-4-turbo -> the chosen response in ultrafeedback), and the oracle reward (gpt-4-turbo -> Skywork/Skywork-Reward-Gemma-2-27B) to make the results comparable to our work. We will make sure to add discussions on DNO in our latest manuscript. [1] Direct Nash Optimization: Teaching Language Models to Self-Improve with General Preferences (Rosset et al., 2024) --- Rebuttal Comment 1.1: Comment: Thanks! I didn't really expect authors to run an experiment for the rebuttal. Thanks for being so open to the suggestion. The new result with DNO further strengthens the benefit of simplicity. I will update my score accordingly.
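The pair-construction step in the DNO comparison above can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: the pool of N on-policy generations plus one off-policy response is scored by a reward function (here a length-based stand-in for the oracle reward model), and the highest- and lowest-scoring responses become the DPO (chosen, rejected) pair.

```python
# Hypothetical sketch of the hybrid pair construction described above:
# pool N on-policy generations with one off-policy response, score the
# pool with a reward function, and keep the best/worst as the DPO pair.

def build_preference_pair(on_policy_responses, off_policy_response, reward_fn):
    """Return (chosen, rejected) from the pooled responses by reward."""
    pool = list(on_policy_responses) + [off_policy_response]
    ranked = sorted(pool, key=reward_fn)
    return ranked[-1], ranked[0]

# Toy usage with a length-based stand-in for the oracle reward model.
on_policy = ["ok", "a detailed answer", "short"]
chosen, rejected = build_preference_pair(on_policy, "an off-policy reply", len)
```

With a real reward model, `reward_fn` would score each (prompt, response) pair; the pooling logic stays the same.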
Summary: This paper studies the effect of mixing on-policy and off-policy data when fine-tuning language models using direct preference optimization (DPO). They observe that on-policy data (data generated by the current policy) tends to work better for tasks with clear correct answers, like math and coding, whereas off-policy data (generated by other pre-trained models) is better for open-ended tasks, such as creative writing or recommendations. They propose a simple method called SIMPLEMIX, where on-policy and off-policy data are mixed in equal proportions. Their main findings show SIMPLEMIX consistently improves performance compared to using either data source alone, and it also beats more complicated methods like HyPO and DPO-Mix-P, according to evaluations on AlpacaEval 2.0 and other benchmarks. Claims And Evidence: The authors claim that mixing on-policy and off-policy data improves alignment performance compared to just using one data source alone. Their main results (Table 2) support this claim, showing SIMPLEMIX outperforms methods like DPO-Mix-P and HyPO. The actual improvement in the length-controlled win rate on AlpacaEval 2.0 is modest (e.g., from around 28 to 30 in Figure 4). Moreover, the authors show that, if off-policy data is curated, the performance of SIMPLEMIX is further improved. They show this on Alpaca Eval 2.0 as well (Figure 6). Methods And Evaluation Criteria: The method itself (mixing responses from on-policy and off-policy sources equally) is very straightforward. They evaluate using standard benchmarks like AlpacaEval 2.0 and FineWeb, which makes sense for the alignment task. These benchmarks clearly separate reasoning tasks (math, coding) from subjective tasks (creative writing, recommendations). Overall, their choice of methods and evaluation criteria seems reasonable and appropriate for the problem they’re studying. Theoretical Claims: The paper does not include major new theoretical claims. 
They mostly rely on intuition and experimental results to justify the SIMPLEMIX idea. There aren't complicated theoretical proofs to verify. Experimental Designs Or Analyses: I checked their experimental designs briefly, especially Figure 4, where they compare various mixing ratios of on-policy and off-policy data. This analysis seems reasonable, clearly showing that a balanced mixture (around 50-50) is slightly better than using either purely on-policy or off-policy data. However, from the results in Figure 4, the performance difference between SIMPLEMIX and the two extremes (purely on/off-policy) is modest. The win-rate accuracy curves are close together, suggesting limited practical impact. Thus, while the analysis itself is sound, the paper doesn’t convincingly demonstrate that the added complexity of mixing datasets is justified by these marginal improvements. Supplementary Material: I reviewed mostly additional experiment details. Relation To Broader Scientific Literature: They relate their paper clearly to recent debates about using on-policy versus off-policy data in preference alignment, and they discuss recent similar works like HyPO and DPO-Mix-P. The idea of mixing on- and off-policy data isn't new, but clearly showing which tasks benefit from each type of data is a useful contribution. However, their contribution is mostly empirical, with no new theoretical advances. Essential References Not Discussed: Nothing noteworthy to me. Other Strengths And Weaknesses: Strength: 1. SIMPLEMIX is straightforward and easy to implement. 2. They provide clear experimental results that consistently support their claims. 3. They clarify the conditions (objective vs. subjective tasks) under which on- or off-policy data might be more beneficial. Weaknesses: 1. The idea itself isn’t very novel—just mixing two data sources is fairly standard. 2. The reported performance improvements are modest, raising doubts about real-world significance. 
Other Comments Or Suggestions: The paper presents a very simple method with clearly explained and straightforward experiments. Overall, the experiments are sound, clearly show modest but consistent improvements, and cover relevant datasets. The main limitation is that the theoretical contribution is minimal—it's mostly applying a known idea (mixing data sources) without significant new insights. The modest size of the observed performance gains also limits its practical significance. Questions For Authors: 1. In Figure 6 you show that SIMPLEMIX improves performance much more clearly for Tulu-3 SFT compared to LLaMA-3 Instruct-tuned models. Can you explain why Tulu specifically benefits more from SIMPLEMIX? Is there something specific about Tulu’s training or data that makes mixing data sources especially useful here? 2. In Figure 8, SIMPLEMIX shows very little improvement or sometimes no improvement over the baseline (pure on-policy or off-policy data) when applied to the LLaMA instruct-tuned model across all four task categories (math, coding, creative writing, recommendation). This again relates to Q1 above: it seems that mixing is not useful when the model is already aligned? Could the authors share some insights on this? Thank you. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful review and we would like to extend our gratitude for the reviewer finding our method straightforward and our experiment results clear. We hope to answer the questions and resolve the concerns of the reviewers below: > The idea itself isn’t very novel—just mixing two data sources is fairly standard. - **On its novelty:** Our contribution is not limited to mixing the two data sources; our work also 1) shows that the two data sources are complementary (Section 3), which provides a possible explanation for the performance improvement; 2) conducts detailed ablations (Section 5) on data mixtures and other data curation strategies. **To the best of our knowledge, our work is the first to carefully study the interplay between on- and off-policy data in preference optimization.** - **On simplicity:** While we acknowledge that the method is simple, we believe this is our strength. SimpleMix does not introduce additional hyper-parameters. To put it in reviewer `9f8w`’s words, “SIMPLEMIX method is a straightforward but effective way to improve language model alignment without additional computational overhead.” > The reported performance improvements are modest, raising doubts about real-world significance. - **Improvements**: While the reported performance improvements may appear modest, they are averaged across **eight** benchmarks. As reviewer `fktC` notes, “The improvement is modest, but the consistency makes a convincing case.” - **Real-world significance:** Tulu 3 [1] is a concurrent effort that is similar to our setting and demonstrates a 3-point average improvement across benchmarks. This is a meaningful gain, especially considering that mixing the two data sources incurs minimal cost. As Reviewer `fktC` notes, our method “has a strong potential to be adopted in practice.” > Can you explain why Tulu specifically benefits more from SIMPLEMIX?
Is there something specific about Tulu’s training or data that makes mixing data sources especially useful here? > In Figure 8, SIMPLEMIX shows very little improvement or sometimes no improvement over the baseline (pure on-policy or off-policy data) when applied to the LLaMA instruct-tuned model across all four task categories (math, coding, creative writing, recommendation). This again relates to Q1 above: it seems that mixing is not useful when the model is already aligned? We address the two questions together here: we conjecture that Tulu-8B-SFT has not gone through a preference optimization process while Llama-3.1-8B-Instruct has already gone through extensive SFT and DPO training, thus making it easier to improve Tulu’s performance compared to improving Llama-3.1-8B-Instruct’s performance. References: [1] Tulu 3: Pushing Frontiers in Open Language Model Post-Training (Lambert et al., 2024)
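For concreteness, the equal-proportion mixing that SimpleMix performs can be sketched as follows. This is a minimal reading of the method as described in the reviews, with hypothetical toy data and a fixed seed, not the authors' code: half of the preference-pair budget is drawn from the on-policy source and half from the off-policy source.

```python
import random

# Hypothetical sketch of equal-proportion (50/50) data mixing for DPO:
# draw half the budget from each preference-data source, then shuffle.

def simple_mix(on_policy_pairs, off_policy_pairs, budget, seed=0):
    rng = random.Random(seed)
    half = budget // 2
    mixed = (rng.sample(on_policy_pairs, half)
             + rng.sample(off_policy_pairs, budget - half))
    rng.shuffle(mixed)
    return mixed

# Toy preference pairs tagged by their source.
on = [(f"prompt{i}", "on") for i in range(20)]
off = [(f"prompt{i}", "off") for i in range(20)]
mixed = simple_mix(on, off, budget=10)
```

Because the split is fixed at 50/50, no new hyper-parameter is introduced beyond the overall data budget.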
Summary: The paper investigates the interplay between on-policy and off-policy preference data in aligning large language models (LLMs) with human preferences. It presents the key finding that on-policy data is more effective for reasoning tasks (e.g., math, coding), whereas off-policy data performs better in open-ended tasks (e.g., creative writing, recommendations). Based on this observation, the authors propose SIMPLEMIX, a method that combines both data sources in a straightforward manner. Claims And Evidence: Most claims are well-supported. Methods And Evaluation Criteria: Yes, but relies heavily on LLM-based evaluation; lacks human validation. Theoretical Claims: No formal proofs provided; lacks theoretical justification for SIMPLEMIX effectiveness. Experimental Designs Or Analyses: Yes, but it lacks human evaluation. Supplementary Material: Yes, I reviewed the Appendix sections. Relation To Broader Scientific Literature: Builds on preference learning, hybrid RL, and DPO. Essential References Not Discussed: Lacks discussion on recent hybrid RL preference optimization and adaptive weighting. Other Strengths And Weaknesses: **Strengths:** 1. The proposed SIMPLEMIX method is a straightforward but effective way to improve language model alignment without additional computational overhead. 2. The results are statistically significant and demonstrate clear trends in performance across different tasks. **Weaknesses:** 1. The paper lacks a theoretical foundation to explain why SIMPLEMIX works better than other hybrid approaches. 2. Benchmarks like Alpaca Eval 2.0 and Ifeval involve LLM-based evaluation, which can be biased or manipulated. Including more human evaluation results (e.g., actual user studies) would add credibility to the findings. 3. While the study identifies task-dependent benefits of on- and off-policy data, it does not explore task-specific weighting or adaptation. 
A potential extension could involve dynamically adjusting the data mixture ratio based on task type. Other Comments Or Suggestions: See Strengths And Weaknesses Part. Questions For Authors: See Strengths And Weaknesses Part. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful comments and suggestions. We are thankful that the reviewer finds our method “straightforward but effective” and our results “demonstrate clear trends”. We hope to resolve the concerns below: > Benchmarks like Alpaca Eval 2.0 and Ifeval involve LLM-based evaluation, which can be biased or manipulated. Including more human evaluation results (e.g., actual user studies) would add credibility to the findings. There may be some misunderstanding of the scope of our evaluation. In our work, we have chosen to report scores on **nine** benchmarks, only one of which (Alpaca Eval 2.0) is LLM-based. The other 8 benchmarks include but are not limited to world knowledge acquisition (MMLU), commonsense reasoning (Hellaswag), precise instruction following (IFEval), and open-domain question answering (OpenQA). We believe that this is a practical and comprehensive setting since the same benchmarks are also adopted by FineWeb [6] in pre-training data curation. Detailed descriptions of our evaluation benchmarks can be found in Appendix D. We acknowledge that adding real user studies would improve the credibility of our work. Since the cost of high-quality human annotation is prohibitively large, we opted to use Alpaca Eval 2.0 because it has a 0.95 correlation with real human annotators at only 0.8% of the cost, according to [7]. > The paper lacks a theoretical foundation to explain why SIMPLEMIX works better than other hybrid approaches. There exists a mismatch between the theory of existing works and practice in LM alignment, and we conjecture this might be the reason why SIMPLEMIX works better: - HyPO [1] proposes to perform an off-policy DPO while using on-policy data to minimize the KL divergence between the current policy ($\pi_\theta$) and the reference policy ($\pi_\text{SFT}$).
The motivating assumption for adding the additional KL regularization is that the “DPO implicit reward” is unbounded and can lead to infinite KL. In practice, the literature has witnessed the *opposite* trend [2, 3, 4], where KL regularization might not be necessary for LM alignment (because of the reference model’s unreliability, leading to unreliable KL values). Unlike HyPO, our work combines on- and off-policy data *without explicit KL regularization*; removing HyPO's strong KL regularization might be a reason that SimpleMix works better. - DPO-Mix-P [5] samples from an interpolation between $\pi_\theta$ (the current policy) and $\pi_\text{SFT}$ (reference policy). The theoretical foundation of DPO-Mix-P is designed for faster convergence in terms of DPO loss (achieving a lower DPO loss in fewer iterations). In practice, recent works have shown that lower DPO loss doesn’t always go hand-in-hand with better alignment (“Alignment Gap” [6]). Therefore, achieving a lower DPO loss, or equivalently, a higher ranking accuracy, might not contribute to a better-aligned model. > While the study identifies task-dependent benefits of on- and off-policy data, it does not explore task-specific weighting or adaptation. A potential extension could involve dynamically adjusting the data mixture ratio based on task type. In Section 3, we have shown the complementarity of on- and off-policy data on carefully selected subtopics in Alpaca Eval 2.0. However, in reality, user prompts rarely fall into a single category, as many queries require both understanding the user’s personal preferences and following verifiable constraints. For example, the query from UltraFeedback “explain cache keys to me like im 5” requires both objective knowledge about cache keys and adjustments of tone and style. Therefore, we decided to be safe and only combine two sources of data to achieve a balanced performance across all types of tasks.
We agree with the reviewer that algorithmic adjustment of the sampling method (on- or off-policy) based on the prompt category would be a promising future direction. References: [1] The Importance of Online Data: Understanding Preference Fine-tuning via Coverage (Song et al., NeurIPS 2024) [2] Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation (Xu et al., ICML 2024) [3] SimPO: Simple Preference Optimization with a Reference-Free Reward (Meng et al. NeurIPS 2024) [4] DAPO: An Open-Source LLM Reinforcement Learning System at Scale (Yu et al., 2025) [5] The Crucial Role of Samplers in Online Direct Preference Optimization (Shi et al., ICLR 2025) [6] Preference Learning Algorithms Do Not Learn Preference Rankings (Chen et al., NeurIPS 2024) [7] FineWeb: decanting the web for the finest text data at scale (Penedo et al., 2024) [8] MixEval: Deriving Wisdom of the Crowd from LLM Benchmark Mixtures (Ni et al., NeurIPS 2024)
PASER: Post-Training Data Selection for Efficient Pruned Large Language Model Recovery
Reject
Summary: This paper proposes PASER, a post-training data selection method for efficient pruned model recovery. PASER involves (i) Semantic-Structural Recovery Instruction Clustering to identify and group data points that focus on similar capabilities, (ii) Capability Degradation-aware Instruction Selection to enable more accurate identification and prioritization of affected capabilities, and (iii) a Concept Consistency Graph to detect and mitigate potential negative transfer. Experiments on several open-source LLMs and benchmarks demonstrate that PASER recovers pruned LLM performance by using only part of the original data. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: Yes, I have checked. Supplementary Material: Yes. Relation To Broader Scientific Literature: Model recovery is a well-established field. Essential References Not Discussed: No. Other Strengths And Weaknesses: ## Strength ● The idea of post-training data selection to address the uneven capability degradation issue is interesting. ● This paper is generally easy to follow. ● Experiments on various benchmarks demonstrate that PASER can recover pruned LLMs. ## Weakness ● Many works have discussed data selection for post-training to identify high-quality data, and the authors mentioned in lines 55-56, 'Note that general high quality does not necessarily mean useful for recovery.' This statement may require more evidence or elaboration to support this point. ● The proposed idea of data clustering and selection is inherently limited by the embedding and clustering methods. The choice of simply applying SentenceBERT and the Diffusion Kernel for conducting the clustering process remains unclear even after reading Appendix H. Given that the off-the-shelf models are not specifically end-to-end trained, will the clustering results in the specific domain be unacceptable?
The author should also provide some visual results after clustering to illustrate the semantic relationship of clusters after high-dimensional spatial clustering. ● In the proposed manifold learning process, the adjacency matrix or the normalized Laplacian matrix has been employed. If large-scale datasets are involved in real-world applications, will this process be time-consuming in real-time tasks? Additionally, the authors retained the selection of the top d eigenvectors of K_t. Is the proposed method robust to the choice of d? Moreover, could you provide the specific values of ∣D∣, ∣B∣, and ∣S∣ for each dataset as mentioned in Equation 1? ● The proposed Concept Consistency Graph (CCG) aims to identify conflicting concepts to ensure consistency. I wonder whether, when the CCG cannot fully identify all conflicts, this will severely hurt the results of PASER. Additionally, the construction process of the CCG involves the definition and identification of concepts and the construction of the adjacency matrix based on concept co-occurrence. What is the time cost of constructing the CCG? In time-sensitive applications, if large datasets and concepts are involved, would this make the construction of the CCG too costly to afford? ● Can PASER be applied to LLMs with different architectures, such as the pruned Mixtral 8x7B model for recovery? Other Comments Or Suggestions: Please refer to strengths and weaknesses. Questions For Authors: Please refer to strengths and weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: **W1:** In fact, the paper presents empirical evidence supporting this statement in multiple places: 1) In Table 1 and throughout our experiments, we demonstrate that general-purpose instruction tuning data selection methods, which focus on selecting "high-quality" instruction data in general, consistently underperform our PASER method, which specifically targets recovery needs. 2) Figure 4 illustrates that different capabilities degrade unevenly during pruning. This uneven deterioration means that high-quality data that doesn't specifically target the severely degraded capabilities will be less effective for recovery. 3) In Appendix A, we show that employing the full recovery dataset or a uniformly split subset can hardly achieve satisfactory performance, despite these containing many generally high-quality samples. This distinction is fundamental to PASER's design philosophy: while general data selection methods focus on the intrinsic quality of instructions, effective recovery requires targeting the specific capabilities most severely impacted by pruning, which may not align with general notions of data quality. Correspondingly, our approach identifies which instruction data is most useful specifically for recovery, not just which data is generally high-quality. **W2:** *On the choice of SentenceBERT and Diffusion Kernel:* This is a careful design choice focused on practicality and effectiveness: 1) Domain adaptability: Though not specifically end-to-end trained for LLM instruction clustering, SentenceBERT has demonstrated strong transfer capability across various text semantic tasks. 2) Comparative analysis: As shown in Table 12 (Appendix H), we conducted comprehensive comparisons with alternative clustering approaches. The superior performance of our approach demonstrates that our method, while built on existing techniques, outperforms these alternatives consistently.
3) Computational efficiency: Our approach avoids the overhead of training domain-specific embeddings from scratch, making PASER more accessible and deployable. *Regarding cluster visualization:* We have prepared visualization: https://postimg.cc/nXC0XmJ4, which confirms that our approach successfully identifies meaningful semantic structures in the instruction space that correspond to different LLM capabilities. **W3:** *Large-scale applications:* Computing the full adjacency matrix would be indeed prohibitive for very large datasets. For LaMini (2.58M samples), we implemented an approximate k-nearest neighbors approach using locality-sensitive hashing rather than constructing the complete N×N matrix. This reduced computation from O(N²) to O(N log N) with minimal impact on clustering quality. Pre-computing embeddings and using incremental updates would further improve efficiency. *Robustness to d:* Our method is relatively robust to the choice of d. We conducted sensitivity analysis with d ranging from 4 to 64 and found consistent clustering results (Rand Index >0.85) across this range. We chose d=16 based on the eigenvalue decay pattern, where eigenvalues beyond this dimension contributed negligibly to the representation. Performance variation was within ±0.2 points across this range. *|D|, |B|, and |S| values:* For Alpaca: |D|=52K, |B|=10.4K (20%), |S|=10.4K. For LaMini: |D|=2.58M, |B|=103.2K (4%), |S|=103.2K. In all cases, we filled the allocated budget |B| completely, with |S|=|B| after filtering and selection. We'll add these details to the revised paper. **W4:** *Undetected conflicts:* While CCG cannot identify all possible conflicts, our experiments show it remains effective even with imperfect conflict detection. When we deliberately introduced undetectable conflicting samples (with conflicts expressed through paraphrasing rather than direct concept matches), performance degradation was limited to 0.3 points. 
*CCG construction time:* Constructing the CCG is relatively efficient. For Alpaca (52K samples), CCG construction took approximately 42 seconds. For LaMini (2.58M samples), we used parallelization across multiple cores, taking ~8 minutes. These times are negligible compared to the recovery training time (hours to days). Additionally, CCG construction can be performed offline as a preprocessing step before recovery training begins. The empirical benefits of conflict detection (0.68-2.39 points ↑) outweigh the computational overhead. **W5:** Our PASER framework is model-agnostic and can be applied to LLMs with different architectures. We have conducted experiments with Mixtral 8x7B under LLM-Pruner:

|Recovery Method|WikiText2↓|PTB↓|Averaged Reasoning↑|
|-|-|-|-|
|Instruction Mining|14.86|25.92|62.68|
|IFD|14.23|24.65|63.17|
|Nuggets|13.79|23.81|63.40|
|PASER|**12.31**|**21.06**|**64.76**|

Results show that PASER significantly outperforms other instruction selection methods, demonstrating effectiveness for MoE architectures. If our rebuttal has addressed your concerns, could you please kindly consider raising the score? --- Rebuttal Comment 1.1: Comment: After reading the rebuttal, **I am still concerned about the choice of Sentence-BERT and the diffusion kernel. Additionally, the undetected conflicts present in the CCG indicate that PASER may be challenging to apply in certain knowledge-intensive domains. Providing a theoretical error bound might offer a deeper understanding of PASER.** Therefore, I have revised my rating to 1. --- Reply to Comment 1.1.1: Comment: Dear Reviewer aKaN: Thank you for your continued engagement with our paper.
We respectfully wish to address the concerns in your final comment as they appear to be based on some misunderstandings about our work: **On theoretical analysis:** Our paper does contain substantial theoretical analysis in Section 3.5, where we provide a formal theorem (Theorem 1) with a detailed proof of our algorithm's time complexity (O(N log N + NC²)). For data selection methods, time complexity analysis is crucial as it determines practical applicability. Regarding theoretical error bounds: while valuable in principle, such bounds require making assumptions that aren't realistic in LLM contexts given their non-convex loss landscapes and complex parameter spaces. This is why empirical validation across diverse settings (as we provide) is the standard approach in this field. Nevertheless, we still provide a theoretical error bound analysis following your comment: https://anonymous.4open.science/r/PASER-E606/error_bound.pdf (you may download it to read for better visualization). However, we need to clarify that this is based on idealized assumptions such as Lipschitz continuity of recovery performance and capability degradation correlation, which are hard to satisfy in real scenarios. **On SentenceBERT and diffusion kernel choices**: Our technical choices were made after extensive empirical validation, not arbitrarily. First, we argue that Sentence-BERT suits our scenario well because it transfers well across different text semantic tasks and possesses relatively higher efficiency. In fact, we have also tried using much larger pretrained language models like LLaMA3-8B for embeddings, which provided negligible performance improvements (less than 0.05 points) while significantly increasing computational costs. Second, in **A2** to **Reviewer 3hM3**, we have conducted additional ablation studies comparing our diffusion kernel approach with other dimensionality reduction techniques: UMAP, PCA, and t-SNE.
The results demonstrate the superiority of choosing the diffusion kernel. The reasons for its better performance are as follows: 1) It effectively preserves the manifold structure in high-dimensional embedding spaces; 2) It adapts to the intrinsic geometry of the data without assuming linear separability; 3) It performs well with heterogeneous data distributions typical in instruction tuning datasets. Third, in Table 12, Appendix H, we have provided comprehensive experimental comparisons with alternative clustering approaches. The results also validate the effectiveness of our design. Finally, we have provided a visualization: https://postimg.cc/nXC0XmJ4, which demonstrates that our approach effectively clusters instructions by capability. This is essential for targeted recovery. **On the CCG limitations**: While we acknowledged the theoretical possibility of undetected conflicts (actually, no method can be 100% successful), our experiments show: When deliberately introducing undetectable conflicting samples, performance degradation was minimal (no more than 0.3 points). This is far outweighed by CCG's benefits (0.68-2.39 points improvement). In fact, in real-world datasets like Alpaca and LaMini, cases that could circumvent our CCG-based detection mechanism are exceedingly rare. In particular, regarding knowledge-intensive domains: these typically feature standardized terminology and well-defined concepts, which actually makes our CCG approach more reliable in such contexts, not less. We hope these clarifications help address your concerns and convince you to consider raising the score.
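As a toy illustration of the CCG-style conflict filtering discussed in this thread, the sketch below maintains a concept co-occurrence graph and rejects candidate samples that pair concepts from a conflict list. The conflict list, concept names, and the admit/add API are all hypothetical simplifications; in PASER the conflicting pairs would come from the detection step described in the paper rather than being given upfront.

```python
from itertools import combinations

# Toy sketch of CCG-style filtering: samples are reduced to concept sets,
# co-occurrences become graph edges, and a candidate is rejected when it
# pairs concepts from a (hypothetical, precomputed) conflict list.

class ConceptConsistencyGraph:
    def __init__(self, conflicts):
        self.conflicts = {frozenset(pair) for pair in conflicts}
        self.edges = set()  # accepted concept co-occurrences

    def admits(self, concepts):
        pairs = {frozenset(p) for p in combinations(set(concepts), 2)}
        return not (pairs & self.conflicts)

    def add(self, concepts):
        if not self.admits(concepts):
            return False
        self.edges |= {frozenset(p) for p in combinations(set(concepts), 2)}
        return True

ccg = ConceptConsistencyGraph(conflicts=[("recursion", "no-recursion")])
ok = ccg.add(["recursion", "base case"])      # consistent pair, accepted
bad = ccg.add(["recursion", "no-recursion"])  # conflicting pair, rejected
```

Since filtering is a set intersection per candidate, it runs as an offline preprocessing pass, consistent with the construction-time figures quoted above.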
Summary: This paper proposes PASER for efficient recovery of pruned large language models (i.e., fine-tuning pruned large language models to recover their performance). It uses SentenceBERT to embed data, a diffusion kernel to reduce dimensions, and then applies non-negative matrix factorization-based spectral clustering to cluster the data. For each cluster, it assesses the performance degradation and allocates the budget accordingly. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes. The paper makes a theoretical claim regarding time complexity. The authors should clearly specify which factors are considered constants and hidden in the big-O notation, such as the vocabulary size of tokens. For example, when computing JSD, the computation is naturally linear with respect to the vocabulary size. However, this aspect was not discussed. The authors need to clarify this point. Experimental Designs Or Analyses: Yes. I would recommend that the authors conduct an ablation study. This paper combines several techniques, such as SentenceBERT, diffusion kernel, non-negative matrix factorization-based clustering, and budget allocation based on the Jensen-Shannon distance between the original and pruned models. My comments are as follows: - The paper lacks motivation for why specific techniques were chosen over others. For instance, there are many dimension reduction methods available—why was the diffusion kernel selected? The presentation of the methods needs significant improvement. - Given the variety of techniques integrated into the approach, it is unclear which ones are most effective and which may be less helpful. What would happen if a different dimension reduction technique were used while keeping the other components the same? An ablation study is needed to address these questions. 
- More specifically, the authors customized Instruction Mining (Cao et al.), IFD (Li et al., 2024a), and Nuggets (Li et al., 2024b) for the post-pruning recovery training scenario. What about using SentenceBERT and the diffusion kernel, and then applying the above techniques? This would reveal whether the JSD-based budget allocation works. Supplementary Material: No Relation To Broader Scientific Literature: This paper presents a very interesting end-to-end pipeline for efficient pruned large language model recovery. Essential References Not Discussed: No Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
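The dimensionality-reduction step questioned above (embed, build a diffusion kernel, keep the top d eigenvectors) can be sketched with plain NumPy. This is a generic diffusion-map construction under standard definitions, with toy vectors standing in for SentenceBERT embeddings; it is not the authors' implementation.

```python
import numpy as np

# Generic diffusion-map sketch: embeddings -> Gaussian (diffusion) kernel
# -> row-stochastic operator -> top-d non-trivial eigenvectors as
# low-dimensional coordinates for downstream spectral clustering.

def diffusion_embed(X, d=2, eps=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    K = np.exp(-sq / eps)                                # Gaussian affinity
    P = K / K.sum(axis=1, keepdims=True)                 # row-stochastic diffusion operator
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    keep = order[1:d + 1]  # drop the trivial constant eigenvector (eigenvalue 1)
    return vecs[:, keep].real * vals[keep].real

# Two well-separated toy "clusters" of 2-D embeddings.
X = np.array([[0.0, 0.0], [0.2, 0.0], [0.0, 0.2],
              [4.0, 4.0], [4.2, 4.0], [4.0, 4.2]])
emb = diffusion_embed(X, d=2)
```

A full eigendecomposition is O(N³); an ablation swapping this for UMAP, PCA, or t-SNE would only change the body of `diffusion_embed`, which is what makes the requested comparison straightforward to run.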
Rebuttal 1: Rebuttal: **C1: Theoretical Claims** **A1:** In our time complexity analysis (Section 3.5, page 5), we considered the following: 1. For JSD computation, we indeed treated vocabulary size |V| as a constant factor. The practical vocabulary size (typically 32K-100K tokens) remains fixed regardless of instruction dataset size. While JSD computation is linear in |V|, this factor is consistent across all samples and doesn't affect asymptotic scaling with N. 2. Sequence length |x| + |y| was treated as a constant, as instruction-tuning inputs and outputs typically have bounded lengths. 3. The number of clusters K was treated as a constant, though we explicitly noted "K ≤ N" in our analysis. In practice, K typically ranges from 8-20 regardless of dataset size. 4. Embedding dimension d from our manifold learning was treated as a constant (set to 16 in our experiments). In the revised paper, we will clarify all these assumptions to provide readers with a more complete understanding of PASER's computational characteristics. **C2: Experimental Designs Or Analyses** **A2:** *Regarding component motivation and ablation studies:* In response to points 1) and 2), we would like to highlight that our paper does include a comprehensive ablation study in Section 4.2 (Table 4) and further detailed in Appendix G (Table 11), where we systematically removed each of the three key components:S²RIC, CDAIS, NTM. These ablation studies demonstrate that all three components contribute positively to model recovery across different pruning schemes. However, we acknowledge that our paper could provide clearer motivation for the specific techniques chosen within each component. As for dimensionality reduction techniques, the diffusion kernel was selected as our dimensionality reduction method after comparing it with alternative techniques such as UMAP, PCA, and t-SNE. The diffusion kernel suits our specific scenario better because: 1. 
It effectively preserves the manifold structure in high-dimensional embedding spaces; 2. It adapts to the intrinsic geometry of the data without assuming linear separability; 3. It performs well with heterogeneous data distributions typical in instruction tuning datasets. To validate its effectiveness with empirical evidence, we present an additional ablation study below where we change the dimensionality reduction component while keeping the rest of PASER intact (LLaMA2-7B under LLM-Pruner): |Method|WikiText2↓|PTB↓|Averaged Reasoning↑| |-|-|-|-| |PASER w/ UMAP|16.92|27.83|60.31| |PASER w/ PCA|17.05|28.16|60.18| |PASER w/ t-SNE|17.21|28.42|60.05| |PASER w/ Diffusion Kernel (Full)|**16.40**|**26.35**|**61.10**| For the clustering component, in Table 12 of Appendix H, we compared our NMF-based spectral clustering with alternative clustering approaches including NMF_TFIDF, LDA_TFIDF, KMeans_TFIDF, Spectral_MTEB, and Spectral_BERT. Our approach consistently outperformed these alternatives across all pruning schemes, validating the soundness of our design. Besides, we studied the divergence measurement component selection. When replacing JSD with other options and keeping the rest intact, the performance comparison is as follows: |Method|WikiText2↓|PTB↓|Averaged Reasoning↑| |-|-|-|-| |PASER w/ KL divergence|16.91|27.54|60.37| |PASER w/ Wasserstein distance|16.73|27.26|60.59| |PASER w/ JSD (Full)|**16.40**|**26.35**|**61.10**| From the table, other versions can hardly surpass our JSD-based version. The detailed rationale has been provided in Sec.3.3. These results and analysis demonstrate that while the overall PASER framework is robust, the specific technical choices made for each component meaningfully contribute to the method's overall effectiveness. 
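As an illustration of the JSD measure compared above, here is a minimal sketch with toy distributions (not the paper's implementation; `p` and `q` stand in for next-token distributions of the original and pruned models):

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """Kullback-Leibler divergence KL(p || q) in nats (eps avoids log(0))."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    return float(np.sum(p * np.log(p / q)))

def jsd(p, q):
    """Jensen-Shannon divergence: symmetric and bounded by log(2)."""
    m = 0.5 * (np.asarray(p, dtype=float) + np.asarray(q, dtype=float))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = [0.7, 0.2, 0.1]  # toy distribution from the original model
q = [0.4, 0.4, 0.2]  # toy distribution from the pruned model
assert jsd(p, q) > 0.0
assert jsd(p, p) < 1e-9               # identical distributions diverge by 0
assert jsd(p, q) <= np.log(2) + 1e-9  # bounded, unlike KL
```

The boundedness and symmetry shown by the assertions are the usual reasons JSD is preferred over plain KL when comparing two models' output distributions.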
*Regarding JSD-based budget allocation effectiveness:* To address point 3), we conducted the additional experiment suggested by the reviewer: integrating our dimensionality reduction and clustering approach with existing data selection methods, while removing our JSD-based budget allocation. The results are summarized in the table below: |Method|WikiText2↓|PTB↓|Averaged Reasoning↑| |-|-|-|-| |w/o pruning|12.62|22.14|62.91| |w/o Training|20.34|38.81|57.78| |Instruction Mining|23.31|40.63|57.65| |Instruction Mining + S²RIC clustering|20.92|36.47|58.85| |IFD|19.76|33.30|58.59| |IFD + S²RIC clustering|17.95|32.61|59.47| |Nuggets|20.02|35.19|58.69| |Nuggets + S²RIC clustering|18.84|33.16|59.21| |PASER (Full)|**16.40**|**26.35**|**61.10**| These results confirm that while S²RIC clustering improves existing methods, the JSD-based capability degradation assessment and budget allocation are critical components that provide additional performance gains. We will enhance the presentation of these motivations and ablation studies in the revised paper to make our technical choices and their contributions clearer. If our rebuttal has addressed your concern, could you please kindly consider raising the overall recommendation score? --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the additional experiments and detailed clarifications. My concerns have been largely addressed, and I will raise my score accordingly. One remaining question: do the authors plan to open-source the code and data to facilitate reproducibility of the results? --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your positive feedback on our rebuttal. Yes, we plan to make both our code and data publicly available upon acceptance of the paper. This will include well-documented implementations and processed datasets to facilitate reproduction of our results. Thank you again for your valuable input throughout the review process.
Summary: This paper proposes a data selection method for effectively recovering model performance after pruning. Beyond the efficiency argument, the need for such a method is well justified through experimental results, where the authors show that simply training on the full dataset or randomly selected data not only performs worse than the proposed method but also underperforms compared to other methods not specifically designed for post-pruning recovery. The method consists of the following key components: - Encoding instruction data into a low-dimensional space using SentenceBERT and a Diffusion Kernel. - Clustering the samples in this space. - Capability-aware instruction selection: i) Assigning a sampling budget to clusters based on the average CDS (defined using JSD); ii) Sampling examples in order of decreasing IES to prioritize efficiency (favoring shorter examples). - Negative Transfer Mitigation: Ensuring only conceptually consistent examples are sampled, meaning that selected examples must contain concepts that do not contradict relationships already represented in the concept consistency graph. Claims And Evidence: Yes, all claims are supported by proper evidence. The main claims include: - post-pruning performance degrades differently for different capabilities - data selection is crucial for optimal post-pruning performance recovery - each component of the proposed PASER method contributes positively to the final performance recovery Methods And Evaluation Criteria: Yes, the proposed benchmarks make sense for the presented evaluations. Theoretical Claims: I briefly checked the equations in Section 3; they seem to be correct. Experimental Designs Or Analyses: Yes. Supplementary Material: I did not check the supplementary material. Relation To Broader Scientific Literature: Overall, I find the relation to broader scientific literature is discussed sufficiently well. 
However, I suggest the authors include more references in section 3.2 to works that first proposed the manifold learning techniques applied there. Essential References Not Discussed: I could not identify any essential references that are not discussed. Other Strengths And Weaknesses: Strength: - clarity and sufficient level of detail of writing - strong experimental result and sound story - rich ablations - the method is well designed overall, it presents several interesting design decisions that can also transfer to other applications (the choice of clustering algorithm, manifold learning technique etc.) Weaknesses: - while the proposed method demonstrates strong empirical results, it appears somewhat complex, involving multiple hyperparameters, which could make implementation difficult in practice. Other Comments Or Suggestions: - typo ll. 381 - 382 "(from 10K to 10K samples)"? - Regarding "Capability Degradation Assessment": the increased JSD between M_o and M_p does not necessarily mean degradation in the performance of M_p, does it? Maybe the naming here can be adjusted? - I am a bit confused by the proposed concept consistency graph (Definition 1 and subsequent): 1) I am not sure how exactly the concepts are extracted from the instructions; 2) is my understanding correct (from ll. 237 - 238) that a sample will not be selected if it contains a pair of concepts that are already present in the graph but are not yet linked (i.e. do not co-occur yet)? Wouldn't this significantly limit the diversity of selected data? - why did the authors decide to only apply the technique to the instruction part of the data, and not also to the output? Questions For Authors: See section above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **C1: Relation To Broader Scientific Literature** **A1:** In the revised version, we will add relevant foundational references for the manifold learning techniques applied in our work, including: 1. For manifold learning in high-dimensional spaces: Belkin & Niyogi (2003) "Laplacian Eigenmaps for Dimensionality Reduction and Data Representation" 2. For diffusion maps: Coifman & Lafon (2006) "Diffusion maps" and Nadler et al. (2006) "Diffusion maps, spectral clustering and eigenfunctions of Fokker-Planck operators" 3. For NMF-based spectral clustering: Ding et al. (2005) "On the equivalence of nonnegative matrix factorization and spectral clustering" (which we currently cite only briefly) 4. For spectral gap methods: Chung (1997) "Spectral Graph Theory" and von Luxburg (2007) "A Tutorial on Spectral Clustering" **C2: Other Strengths And Weaknesses** **A2:** For hyperparameter settings, please see our **A2** to **Reviewer hF4D**. We will include them in the final version of the paper. Besides, we have provided the code in the anonymous URL (Line 1262, Page 23) for easing reproduction. **C3: Typo** **A3:** Sorry, this should read "from 10K to 100K samples". 10K indicates the scale of the selected Alpaca dataset (5% of 52K), 100K indicates the scale of the selected LaMini dataset (4% of 2.58M). **C4: Regarding Capability Degradation Assessment** **A4:** A standard assumption in LLM pruning is to consider the original model (M_o) as the performance reference (oracle model). The JSD between M_o and M_p measures behavioral divergence in output probability distributions, which generally correlates with capability degradation. We chose JSD over direct performance metrics because it captures subtle changes in model behavior that might not be immediately apparent in accuracy or loss values due to sampling uncertainty. 
JSD's information-theoretic foundation allows us to detect divergences in the underlying probability distributions, which often precede observable performance drops. While JSD may not be perfectly proportional to performance degradation in all cases, our experiments confirm it effectively identifies capabilities requiring focused recovery attention. We'll clarify this relationship in the revised paper to avoid potential misinterpretation. **C5: I am a bit confused by the proposed concept consistency graph** **A5:** *Regarding concept extraction:* We extract concepts from instruction-output pairs using a modified RAKE (Rapid Automatic Keyword Extraction) algorithm, which identifies key phrases based on word co-occurrence and frequency statistics. As demonstrated in Appendix J (pages 21-23), this approach effectively captures domain-specific entities (e.g., "quantum computing," "neural network," "backpropagation") that represent core knowledge units in the instruction. We use parts-of-speech filtering to prioritize meaningful noun phrases and named entities. *Regarding potential diversity limitation:* Your understanding is correct - we exclude samples that introduce new relationships between existing concepts. While this might appear to limit diversity, our experiments show this constraint is crucial for preventing negative transfer. When we removed this constraint in ablation studies, we observed performance degradation across most tasks (Table 4). The primary goal of recovery is targetedness rather than diversity. Full-dataset training offers maximum diversity but achieves suboptimal results (Tables 1-3), often due to conflicting information. Our approach balances concept coverage with consistency, ensuring coherent capability recovery while preventing harmful conceptual conflicts. We'll clarify these aspects in the revised paper to address potential confusion. 
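The consistency rule described in A5 can be sketched as follows (hypothetical helper names, not the released code): a sample is rejected exactly when it links two concepts that both already exist in the graph but are not yet connected.

```python
def is_consistent(sample_concepts, nodes, edges):
    """Reject samples that introduce a new relation between existing concepts.

    `edges` is a set of frozenset pairs of concepts already co-occurring.
    """
    cs = list(sample_concepts)
    for i in range(len(cs)):
        for j in range(i + 1, len(cs)):
            a, b = cs[i], cs[j]
            if a in nodes and b in nodes and frozenset((a, b)) not in edges:
                return False  # new link between known concepts -> potential conflict
    return True

def add_sample(sample_concepts, nodes, edges):
    """Register an accepted sample's concepts and their co-occurrence links."""
    cs = list(sample_concepts)
    nodes.update(cs)
    for i in range(len(cs)):
        for j in range(i + 1, len(cs)):
            edges.add(frozenset((cs[i], cs[j])))

nodes, edges = set(), set()
add_sample({"neural network", "backpropagation"}, nodes, edges)
add_sample({"quantum computing", "qubit"}, nodes, edges)
# linking two already-present but unlinked concepts is rejected:
assert not is_consistent({"neural network", "quantum computing"}, nodes, edges)
# already-linked or entirely new concept pairs pass:
assert is_consistent({"neural network", "backpropagation"}, nodes, edges)
assert is_consistent({"graph theory", "spectral clustering"}, nodes, edges)
```

The last rejected case mirrors the quantum-computing/deep-learning example analyzed in the paper's case study (Section J).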
**C6: Why did authors decide to only apply the technique to the instruction part of the data, and not also to the output?** **A6:** We chose to build our Concept Consistency Graph (CCG) using only the instruction part of each data sample for several reasons: 1) Instructions typically contain sufficient context and almost all main concepts needed to identify capability domains. 2) Instructions more directly represent the task domain and required knowledge, making them ideal for detecting potential conflicts. 3) Outputs can vary significantly in style and implementation details, potentially introducing noise into the concept space. Our experiments showed that instruction-based concept extraction was sufficient for effective negative transfer mitigation. This approach also reduced computational complexity while maintaining performance. It's important to note that while the CCG is built using only instructions, the actual recovery training process utilizes both instructions and their corresponding outputs for fine-tuning, ensuring the model learns complete instruction-response patterns.
Summary: This paper proposes a data selection method for recovery fine-tuning: additional training for pruned LLMs to recover the original performance as well as possible. The method consists of three subcomponents: (1) clustering instruction data, (2) assigning a data budget to each cluster according to the probabilistic discrepancy between the original and pruned model on the specific cluster, and (3) removing data with inconsistent concepts. Experiments on various models showed that the dataset generated by the proposed method consistently improves the resulting model compared with several conventional methods for data selection. ## update after rebuttal: Thank you for your comments on my review! However, I have not found a good enough reason to change my review result, so I will leave the overall score as it is. Claims And Evidence: The method is well motivated: it is designed specifically for selecting the training data for recovery fine-tuning. Though there's no strong theoretical evidence on the method itself, it may work better than other data selection methods that are not focused on model recovery. Methods And Evaluation Criteria: The overall method is well developed, but it looks like the method is a result of engineering: a combination of many subroutines, and it is not easy to say whether the selection of each technique is suitable (i.e., there's no other option) or not. Evaluation is conducted on three model series: Llama2, Llama3 (English-centric) and Baichuan2 (En-Zh bilingual). Especially for Llama2, they conducted experiments on models up to 70B in size. This seems to be enough to claim general effectiveness of the proposed methods on various LLMs. Theoretical Claims: Most of the components are designed empirically: it is not easy to say that there are underlying facts to support each subroutine. Especially, I'm wondering if: * Equation (6) is really optimal for determining the amount of training data. 
It looks like this amount is crucial to guarantee the final performance of the recovered model, but there seems to be no strong evidence for adopting simple normalization over the calculated CDS. * Algorithm 1 in Appendix B is optimal and agnostic to the order of data consumption. Experimental Designs Or Analyses: Experiments look comprehensive enough in terms of downstream performance. It controls several parameters to obtain pruned models, and the proposed method successfully recovers the model performance in most cases. For ablation, Table 4 shows that every subcomponent in the proposed method works to improve the resulting model. Figure 2 also shows that the proposed method is better than other conventional methods, while maintaining robustness against the data budget. Supplementary Material: NA Relation To Broader Scientific Literature: Data selection is sometimes studied in a wide range of machine learning tasks. And model pruning and its potential degradation is one of the core interests among model users. Essential References Not Discussed: Not sure Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: I'd appreciate it if the authors answered the questions raised in the Theoretical Claims section. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **C1: Claims And Evidence** **A1:** While PASER lacks strong theoretical guarantees, our approach is guided by sound principles: targeting severely impaired capabilities is intuitively more efficient than general data selection methods not designed for recovery. The difficulty in establishing theoretical connections between data selection and recovery performance stems from LLMs' inherent characteristics - their highly non-convex loss landscapes, vast parameter spaces (billions of parameters), and complex capability distributions that aren't easily characterized mathematically. Rather than introducing impractical assumptions that would limit real-world applicability, we focused on empirical validation across diverse models and pruning schemes. Our consistent performance improvements across these scenarios provide strong evidence for PASER's effectiveness, even without formal theoretical bounds. **C2: Methods And Evaluation Criteria** **A2:** Thank you for this comment. We acknowledge that PASER combines multiple components to address the multi-faceted challenge of efficient recovery. While PASER may appear engineering-driven, each component addresses a specific, critical aspect of pruned LLM recovery: (1) capability identification through clustering, (2) targeted resource allocation based on degradation severity, and (3) negative transfer prevention. Our ablation studies (Table 4, page 8, and Appendix G) validate each component's contribution, showing that removing any single component consistently degrades performance. We explored alternative techniques for clustering (Table 12, page 20), demonstrating that our S²RIC approach outperforms other methods. In additional experiments (not included due to space constraints), we evaluated different divergence metrics for capability degradation assessment: KL-divergence reduced average reasoning performance by 0.73 points compared to JSD, while Wasserstein distance reduced it by 0.51 points. 
For negative transfer mitigation, we compared our CCG approach with simpler methods like keyword filtering (1.32 points lower) and cosine similarity thresholding (0.89 points lower). For more results regarding the selection of each component, you may also refer to **A2** for **Reviewer 3hM3**. Rather than an arbitrary assembly, PASER represents a principled approach to the novel problem of post-pruning recovery data selection, with each component carefully designed and validated. **C3: Theoretical Claims** **A3:** *Regarding the optimality of Equation (6) for budget allocation:* We acknowledge that our proportional allocation approach based on CDS is heuristic rather than theoretically optimal. Different allocation strategies were explored in our experiments, including equal allocation, square-root scaling, and logarithmic scaling. The linear proportional allocation (Equation 6) consistently outperformed alternatives, showing ~0.4-0.8 points higher average performance across pruning schemes. While we cannot claim theoretical optimality, this approach intuitively directs more resources to capabilities with greater degradation while still maintaining some recovery effort for less affected capabilities. Finding a provably optimal allocation would require making unrealistic assumptions about capability independence and recovery dynamics. *Regarding algorithm optimality and order-sensitivity:* To be honest, we cannot claim theoretical optimality for Algorithm 1. In fact, finding a provably optimal subset would require solving a complex combinatorial optimization problem with O(2^N) complexity, which is computationally intractable for large instruction datasets. Our algorithm represents a greedy approach that makes locally optimal choices at each step. The algorithm has inherent order-dependency since the Concept Consistency Graph evolves as samples are added. 
Considering the intra-cluster order is determined by the IES score, we tested different cluster orderings and found performance variations of ±0.1-0.3 points, indicating that our approach is relatively robust. Given the NP-hard nature of the optimal subset selection problem (as formulated in Equation 1), Algorithm 1 provides a practical approximation that balances computational efficiency with strong empirical performance. Future work could explore more sophisticated optimization techniques with stronger theoretical guarantees. We appreciate these theoretical questions and will clarify these limitations in the revised paper. **C4: Questions For Authors** **A4:** Please see **A3**. If our rebuttal has addressed your concern, could you please kindly consider raising the overall recommendation score?
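For concreteness, the linear proportional allocation discussed in A3 (Equation 6 of the paper) can be sketched with hypothetical CDS values: each cluster's share of the total budget B is its capability degradation score normalized over all clusters.

```python
# Hypothetical per-capability CDS values; the cluster names and numbers are
# illustrative only, not results from the paper.
cds = {"math": 0.42, "coding": 0.25, "commonsense": 0.08, "reading": 0.15}
B = 1000  # total number of samples to select

total = sum(cds.values())
budget = {k: round(B * v / total) for k, v in cds.items()}

# More severely degraded capabilities receive proportionally more data.
assert budget["math"] > budget["coding"] > budget["commonsense"]
# Rounding can shift the total by at most a few samples.
assert abs(sum(budget.values()) - B) <= len(cds)
```

Within each cluster's budget, samples would then be drawn in decreasing IES order, as described in the paper.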
Summary: The paper introduces PASER, a novel method for selecting instruction tuning data to recover the performance of pruned large language models (LLMs). Pruning, particularly structured pruning, often degrades model capabilities, and instruction tuning has shown promise for efficient recovery. PASER comprises three key components: semantic-structural recovery instruction clustering (S²RIC) to group instructions by capability, capability degradation-aware instruction selection (CDAIS) to prioritize severely affected capabilities, and negative transfer mitigation (NTM) via a concept consistency graph (CCG) to filter conflicting data. Evaluated on LLMs like LLaMA2, LLaMA3, and Baichuan2 across structured (e.g., LLM-Pruner), semi-structured (e.g., Wanda), and unstructured (e.g., SparseGPT) pruning schemes, PASER outperforms baselines like random selection and general instruction tuning methods (e.g., Instruction Mining, Nuggets) in language modeling (WikiText2, PTB) and reasoning tasks (e.g., BoolQ, PIQA). It achieves higher performance with less data, reducing training overhead, as demonstrated in Tables 1 and 2. Update after rebuttal: Thanks authors for the replies which partly resolve my concerns, but considering the overall quality of this paper, I keep my original recommendation decision. Claims And Evidence: The paper claims PASER enhances recovered LLM performance and reduces training overhead compared to baselines. This is well-supported by experimental results. For instance, Table 1 (Section 4, Page 6) shows PASER achieving an average performance of 61.10 on LLaMA2-7B under LLM-Pruner, surpassing random selection (47.69) and Nuggets (58.59). Table 2 (Section 4, Page 7) extends this across models, with PASER recovering LLaMA2-70B reasoning to 69.62, closer to the unpruned 71.72 than Nuggets (67.73). Efficiency is evidenced by PASER using 20% of Alpaca data (Section K, Page 23), yet outperforming full-data recovery. 
However, the claim of "efficiency and scalability" (Section 3.5, Page 5) is problematic. The time complexity of O(N log N + N C²) suggests potential computational intensity for large N or C, despite C being small in practice. No empirical runtime data supports this claim, weakening its convincingness. The claim of negative transfer mitigation (Section 3.4, Page 4) relies on indirect evidence through performance gains, lacking direct analysis (e.g., rejected sample impact), which could strengthen validation. Methods And Evaluation Criteria: PASER’s methods—clustering via SentenceBERT and NMF spectral clustering, degradation assessment with JSD, and CCG-based filtering—are appropriate for targeted recovery. They address uneven capability degradation post-pruning (Section 1, Page 1). Benchmarks like WikiText2 and PTB for language modeling and seven reasoning datasets (e.g., BoolQ, HellaSwag) (Section 4.1, Page 5) align with evaluating general LLM capabilities. Using Alpaca (52K samples) and LaMini (2.58M samples) (Section 4.1, Page 5) tests scalability across data sizes. Theoretical Claims: The primary theoretical claim is PASER’s time complexity of O(N log N + N C²) (Theorem 1, Section 3.5, Page 5). The proof decomposes this into clustering (O(N log N)) and sample selection (O(N C²)), assuming C << N simplifies to O(N log N). This breakdown is correct and aligns with the algorithm’s steps (Section 3). No discrepancies were found. Experimental Designs Or Analyses: Experiments are sound, comparing PASER against random selection, full-data recovery, and baselines (Instruction Mining, IFD, Nuggets) across multiple LLMs and pruning schemes (Section 4.1, Page 5). Ablation studies (Table 11, Section I, Page 19) validate each component’s contribution, e.g., PASER without NTM drops from 61.10 to 59.25 on LLaMA2-7B. Five-run averages with t-tests (p < 0.01) (Section K, Page 23) ensure statistical rigor. 
A concern is the lack of hyperparameter details in the main text (e.g., LoRA settings: rank=8, epochs=2) (Section K, Page 23), relegated to the appendix. Consistency across models/pruning schemes is unclear, potentially affecting reproducibility. Empirical selection time data would address efficiency concerns. Supplementary Material: The code was provided, but I did not have time to run it and verify. Conceptually, the code structure looks reasonable to me. Please let me know if you need me to verify it by running it locally. Relation To Broader Scientific Literature: PASER builds on LLM pruning (e.g., SparseGPT, Wanda, LLM-Pruner) and instruction tuning literature (Section 2, Page 2). It uniquely targets post-pruning recovery, unlike general data selection methods (e.g., Wang et al., 2024). It advances prior recovery approaches (Ma et al., 2023; Zhao et al.) by optimizing data selection, not just using full datasets. Connections to active learning or curriculum learning could broaden its context, as PASER's degradation-aware selection resembles these strategies. Essential References Not Discussed: None that I am aware of. Other Strengths And Weaknesses: NA, already well covered by previous questions. Other Comments Or Suggestions: Would be great to discuss limitations (e.g., English bias, clustering sensitivity) more prominently. Questions For Authors: 1. Dataset Quality: How does PASER handle noisy instructions (e.g., in Alpaca)? Robustness to quality issues could influence real-world applicability, potentially raising my evaluation if addressed. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **C1: Claims and Evidence** **A1:** *Time complexity validation*: In fact, we have provided the empirical runtime data in Figure 2 (Page 8) and compared the efficiency of our PASER with baselines. In practice, for selecting 20% from Alpaca (52K samples) and 4% from LaMini (2.58M samples), the data selection process took approximately 27 minutes and 4.3 hours respectively on our server. This confirms our theoretical analysis - as C (number of concepts per sample) is typically small (average of 5-7 concepts per instruction), the dominant factor is indeed O(N log N). We'll highlight these empirical measurements in the revised paper. *Scalability evidence:* Figure 2 has demonstrated scalability by showing PASER maintains efficiency advantages across different data budget ratios. At the 4% data budget on LaMini (using ~100K samples from 2.58M), PASER still completes recovery training significantly faster than baselines while achieving better performance. In the final version, we'll include a direct runtime comparison table to further strengthen this claim. *Negative transfer mitigation:* We agree that the evidence for negative transfer mitigation could be strengthened with more direct analysis. In our case study (Section J, Pages 21-23), we demonstrated the Concept Consistency Graph's ability to detect and reject conflicting samples (specifically analyzing a rejected sample involving quantum computing and deep learning). To quantify this effect: across experiments, approximately 12-18% of potential samples were rejected by our negative transfer mitigation mechanism. When we deliberately included these rejected samples in place of compatible ones (in an ablation experiment not included due to space constraints), we observed a 0.9-1.6 point performance degradation across tasks. We'll incorporate this quantitative analysis in the revised paper. Thank you for helping us identify these areas for improvement. 
We believe these additions will strengthen the validation of our claims while maintaining the paper's overall contributions. **C2: Experimental Designs Or Analyses** **A2:** *Hyperparameter:* For the Semantic-Structural Recovery Instruction Clustering, we used consistent settings across all experiments: diffusion time t was automatically selected using the spectral gap method, and the embedding dimension d was set to 16. The optimal number of clusters K was determined adaptively through NMF approximation error minimization, typically resulting in 8-12 clusters for Alpaca and 15-20 clusters for LaMini. For the JSD calculation in capability degradation score (CDS), we used a temperature τ=1.0 for the output probability distribution. The computational cost was approximated using the quadratic term of sequence length with a coefficient of 1.0 across all experiments. For concept extraction, we used a maximum of 10 concepts per instruction-response pair with a minimum phrase length of 2 words and a maximum of 4 words. The concept similarity threshold for consistency checking was set to 0.75 across all experiments. We maintained these same hyperparameter settings across all models and pruning schemes to ensure fair comparison. The only adaptation was the recovery data budget ratio: 20% for Alpaca and 4% for LaMini, chosen based on preliminary experiments to balance computational cost and recovery performance. We will move these key hyperparameter details from the appendix to the main experimental setup section and provide a comprehensive configuration table in the revised paper. *Empirical selection time:* Please see **A1**. **C3: Relation To Broader Scientific Literature** **A3:** In the revised paper, we'll add discussion relating PASER to active learning and curriculum learning. **C4: Other Comments Or Suggestions** **A4:** Due to space limitations here, we will provide a more comprehensive discussion in the final version. 
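For reference, the fixed hyperparameters listed in A2 above can be summarized as a configuration sketch (the dictionary structure and key names are hypothetical; the values are taken from this rebuttal):

```python
# Settings kept constant across all models and pruning schemes in A2.
PASER_CONFIG = {
    "embedding_dim": 16,             # manifold embedding dimension d
    "jsd_temperature": 1.0,          # tau for output distributions in CDS
    "max_concepts_per_pair": 10,     # concept extraction cap per sample
    "phrase_len_range": (2, 4),      # min/max words per concept phrase
    "concept_sim_threshold": 0.75,   # similarity cutoff for consistency checks
    "budget_ratio": {"alpaca": 0.20, "lamini": 0.04},  # only per-dataset change
}

assert 0.0 < PASER_CONFIG["concept_sim_threshold"] < 1.0
assert PASER_CONFIG["budget_ratio"]["alpaca"] > PASER_CONFIG["budget_ratio"]["lamini"]
```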
**C5: Questions For Authors** **A5:** Handling noisy instructions is indeed a critical aspect for real-world applicability. In fact, our negative transfer mitigation module actively filters out instructions containing conflicting or inconsistent concepts. This naturally excludes many problematic samples that contain contradictory information or conceptual inconsistencies - a common characteristic of noisy instructions. In our experiments, approximately 12-18% of potential samples were rejected by this mechanism. As shown in Figure 2, PASER demonstrates robust performance as the data budget increases, unlike random selection, which shows performance degradation when B/N increases from 0.3 to 0.4. This is because expanding the data scale also introduces conflicting or negative data present in the original dataset. Despite this challenge, PASER maintains consistent performance advantages by focusing on capability-relevant samples and filtering inconsistencies through the CCG. If our rebuttal has addressed your concern, could you please kindly consider raising the evaluation?
FourierMamba: Fourier Learning Integration with State Space Models for Image Deraining
Accept (poster)
Summary: The authors propose a novel framework, FourierMamba, which integrates Fourier priors with a state-space model to associate different frequencies in the Fourier domain for image deraining. ## update after rebuttal The authors' second-round response has addressed most of my concerns. I now find the motivation of the paper to be reasonable and the experiments to be sufficiently thorough. I have decided to raise my score. Claims And Evidence: The motivation behind combining Mamba with the Fourier space needs to be discussed in greater depth. The results shown in Figure 1 only include 1×1 convolution and previous scanning in Fourier space. However, more commonly used alternatives, such as Transformers and stacked 3×3 convolutions, should also be considered and discussed. Methods And Evaluation Criteria: The authors lack quantitative evaluation in real-world scenarios, yet the ultimate goal of image processing is to apply it to real-world situations. Quantitative assessment of real rain images without ground truth (GT), using no-reference evaluation metrics, is crucial for demonstrating the method's practical applicability. Theoretical Claims: n/a Experimental Designs Or Analyses: If the authors provide more real-world examples and analyze the contributions of Fourier and Mamba in these scenarios, this work would be more convincing. Supplementary Material: n/a Relation To Broader Scientific Literature: n/a Essential References Not Discussed: n/a Other Strengths And Weaknesses: Strengths: 1. The authors combine Mamba with the Fourier space, which is an interesting idea. 2. The proposed method achieves state-of-the-art performance on multiple image deraining datasets. 3. This work is evaluated on multiple image restoration tasks beyond image deraining, further demonstrating the model's generalization capability and effectiveness. Weaknesses: 1. The Mamba network suffers from two issues: local pixel forgetting and spatial misalignment. 
I am curious about how these issues manifest in the Fourier domain. Would they still occur, or do the Fourier priors help mitigate these problems? 2. The equations in the Preliminary section are not clearly linked to the modules in Figure 3. In other words, these equations are not directly utilized later in the paper, making them appear redundant. The module primarily applies FFT for phase and amplitude separation, which has a weak connection to the equations presented in the Preliminary section. 3. The authors should include more examples from real-world scenarios to further demonstrate the model's generalization ability. 4. Some references need to be updated, changing citations from arXiv papers to the officially accepted versions of the papers. 5. The double-line and triple-line tables in the manuscript should be standardized for consistency. 6. In Figure 7, I noticed that besides the rain streaks, the feature maps also contain window edge textures. Could this lead to an issue where the Fourier space mistakenly treats window edges as having similar frequency characteristics to rain streaks, potentially causing background degradation? 7. I am curious why many entries in Table 1 are missing. As far as I know, these methods have publicly available source codes. Could the authors clarify why the results for these methods are not reported? 8. The font size in tables and figures is inconsistent (e.g., Tables 2, 3, and 4). The authors should standardize the font size across all tables and figures for consistency. Other Comments Or Suggestions: As mentioned earlier, real-world evaluation is a crucial aspect of validating image deraining methods. However, Figure 6 is too small, making it difficult to discern the differences between different methods. The authors may consider removing some of the early baseline methods to improve clarity and focus on more recent approaches. Questions For Authors: n/a Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **R1**: Clarification on Motivation Motivation: Mamba, a state-space model, offers global modeling and linear complexity, making it ideal for sequential data. Frequency information in the Fourier domain is inherently global, and Mamba’s sequence modeling efficiently captures inter-frequency dependencies. In contrast, Transformers incur high computational complexity (O(n²)) for long sequences, while 3×3 convolutions are limited by their receptive fields in capturing global frequency data. Thus, integrating Mamba with the Fourier domain is an innovative, targeted signal-processing design that leverages frequency globality and computational efficiency to enhance deraining performance. Comparative Experiments: Addressing the suggestion, we conducted experiments comparing Transformer and 3×3 convolution on the Rain200L dataset (Table 1). Results demonstrate that Mamba’s scanning approach effectively balances performance and efficiency. **R2**: Response to “Lack of Quantitative Evaluation in Real-World Scenarios” We have included no-reference quantitative evaluation results for the Internet test set without ground truth (GT) from the SPA dataset, as presented in Table 2. **R3**: Response to “Experimental Design and Analysis” Real-world deraining results are presented in Figures 13–16, with quantitative outcomes in Tables 13–15, demonstrating adaptability and generalizability to real rain due to Fourier priors and Mamba’s efficient global modeling. **R4**: Response to “Issues with Mamba in the Fourier Domain” Mamba’s local pixel forgetting in the spatial domain stems from its sequence modeling, potentially overlooking local details, while spatial misalignment relates to scanning order. In the Fourier domain, however, information is globally represented as frequencies, rendering local forgetting less prominent, as frequency components reflect overall image properties. 
Moreover, Fourier priors decompose the image into frequencies, aiding Mamba in modeling global information comprehensively and mitigating misalignment. Experimental results (Figures 5 and 9) show FourierMamba preserves background details and removes rain streaks effectively, confirming these issues are alleviated in the Fourier domain. This analysis will be added to the revision. **R5**: Response to “Unclear Connection Between Preliminary Equations and Figure 3 Modules” We acknowledge the lack of clarity and will refine the Preliminary section in the revised manuscript. The equations therein introduce Fourier transform fundamentals, laying the theoretical groundwork for subsequent frequency modeling. The FFT operation in Figure 3 directly applies these principles, converting images from the spatial to the Fourier domain and separating amplitude and phase. We will explicitly link these equations to the FFT module in Figure 3, enhancing logical coherence and eliminating perceived redundancy. **R6**: Response to “Issues with Feature Maps in Figure 7” Figure 7 compares feature maps of FreqMamba and our method, revealing that our approach more effectively focuses on and localizes rain streak-related features. Consequently, our deraining results (Figs. 5–6 and 9–16) retain background information while minimizing residual rain streaks compared to FreqMamba. **R7**: Response to “Missing Entries in Table 1” Table 1 references the table in FreqMamba. Missing data primarily result from unavailable open-source code or discrepancies between open-source implementations and the corresponding papers, leading us to directly adopt reported results from the papers. In the future, we will strive to complete missing entries or provide explanations as feasible. **R8**: Response to “Formatting and Presentation Issues” In the revised manuscript, we will adjust the font sizes of tables and figures for consistency. 
**R9**: Response to “Layout Issues in Figure 6” Our method’s advantages over others in real-world scenarios are observable through enlarged patches. We will revise the layout of Figure 6 in the updated manuscript. Additional visual results for real-world deraining can be found in Supplementary Figures 13–16.

| Method      | PSNR  | SSIM   | FLOPs | Params |
|-------------|-------|--------|-------|--------|
| 3×3 Conv    | 40.24 | 0.9887 | 17.44 | 21.98  |
| Transformer | 42.21 | 0.9896 | 18.98 | 72.49  |
| Ours        | 42.27 | 0.9908 | 17.62 | 22.56  |

| Method      | BRISQUE ↓ | NIQE ↓ | SSEQ ↓ |
|-------------|-----------|--------|--------|
| Rainy Input | 28.517    | 5.095  | 28.280 |
| MPRNet      | 34.733    | 5.144  | 33.765 |
| Restormer   | 32.288    | 4.851  | 31.789 |
| IDT         | 27.042    | 4.536  | 28.314 |
| DRSformer   | 26.080    | 4.531  | 27.954 |
| FADformer   | 25.959    | 4.760  | 26.667 |
| FreqMamba   | 26.172    | 4.890  | 27.387 |
| Ours        | 25.827    | 4.682  | 26.423 |

--- Rebuttal Comment 1.1: Comment: About R1 and R4: The authors mention that in the Fourier domain, information is globally represented in the form of frequency components, which makes local forgetting less prominent. I would like to ask: Does this global representation in the Fourier domain diminish the advantage of Mamba’s global receptive field? Additionally, since convolution in the Fourier domain can also capture global information, what distinct benefit does Mamba offer in this setting compared to simpler operations like Fourier-based convolution? Clarifying these points would help better understand the specific role and necessity of Mamba within the Fourier domain. --- Reply to Comment 1.1.1: Comment: **Response to Follow-Up Questions on R1 and R4** We appreciate the reviewer’s further inquiries on R1 and R4, which allow us to clarify Mamba’s specific role and necessity in FourierMamba. Below, we address the two questions in detail. 
**Question 1**: Does the Global Representation in the Fourier Domain Diminish the Advantage of Mamba’s Global Receptive Field? The Fourier transform decouples an image into distinct frequency bands, with each band aggregating global information from the entire spatial domain—its globality lies in information aggregation, where each frequency component integrates contributions from all spatial positions. In contrast, the proposed Mamba excels at uncovering relationships among these decoupled bands, with its globality manifested in cross-band sequential dependency modeling, capturing interactions between high frequencies (e.g., rain streaks) and low frequencies (e.g., background structures). Their combination leverages complementary strengths: the Fourier transform provides a global frequency decomposition, while Mamba’s global receptive field models inter-band dependencies. This synergy enhances comprehension of frequency interactions, crucial for tasks like image deraining, where separating degradation from clean content in the frequency domain requires accurately capturing their relationships. **Question 2**: What Distinct Benefit Does Mamba Offer in the Fourier Domain Compared to Convolution Operations? Mamba’s distinct advantage lies in its global sequence modeling capability, which captures dynamic dependencies between frequency bands, excelling notably in deraining tasks. Rain streak features, due to their complex variability in brightness, width, and length, exhibit significant diversity in pixel values and spatial scales. For instance, fine, short rain streaks may appear as high-frequency components, whereas coarse, long, and brighter streaks may span both low and high frequencies. This variability renders rain streaks difficult to fully represent with a single frequency band, necessitating the integration of information across multiple bands in the frequency domain. 
Convolution operations in the Fourier domain are inherently local, limited to capturing relationships between adjacent frequency bands, and their restricted receptive field makes long-range frequency dependencies difficult to model. In contrast, Mamba’s global receptive field enables it to transcend frequency band boundaries, directly capturing long-range dependencies. For example, in deraining, Mamba correlates high-frequency rain streak features with low-frequency background information, effectively identifying and separating these complex cross-band patterns. Thus, while the Fourier transform provides the foundation of global frequency information, Mamba enhances the understanding of dynamic inter-band interactions, significantly improving the modeling of rain streak features in deraining tasks. **Summary** In the Fourier domain, convolution is confined by its locality to modeling adjacent frequency relationships, whereas Mamba’s globality effectively captures cross-band dependencies. This capability gives Mamba a distinct advantage in tasks requiring global frequency information, such as enhanced identification and separation of cross-band features in images. Comparative experiments from prior responses, alongside Table 3 in the main text and Figure 8 in the appendix, strongly support this analysis.
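The amplitude/phase factorization that R5 above attributes to the FFT module can be sketched in a few lines of numpy. This is an illustrative toy, not the authors' implementation; the array `img` merely stands in for a single feature-map channel:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))   # toy stand-in for one feature channel

# Forward FFT, then split into amplitude and phase spectra, as the
# FFT module in Figure 3 is described to do.
F = np.fft.fft2(img)
amplitude = np.abs(F)
phase = np.angle(F)

# The split is a lossless factorization: recombining the two spectra
# and inverting the FFT recovers the original array exactly.
img_rebuilt = np.fft.ifft2(amplitude * np.exp(1j * phase)).real
assert np.allclose(img, img_rebuilt)
```

Because the factorization is lossless, any frequency-domain processing applied separately to the amplitude and phase spectra can be mapped back to the spatial domain via the inverse FFT.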
Summary: The paper applies Mamba to dual-domain spatial dimensions and channels for image deraining. For spatial-dimensional Fourier processing, the authors introduce a new scanning method that better models the correlations between different frequencies. The constructed network is evaluated on multiple image restoration tasks beyond deraining and achieves promising performance. Claims And Evidence: The quantitative and qualitative results demonstrate the effectiveness of the proposed method. Methods And Evaluation Criteria: The authors conduct experiments on standard datasets and metrics. In addition, the evaluation is performed using non-reference metrics. Theoretical Claims: In addition to the foundational equations for the Fourier transform, no other theoretical claims are provided. Experimental Designs Or Analyses: This paper closely follows the FreqMamba (MM'24) method. However, there is an inconsistency between these two methods regarding the use of datasets. Specifically, it seems that FreqMamba trains separate models for each deraining dataset while the proposed method is trained on a mixed dataset, Rain13k. The authors also mention this issue in the paper. Will this inconsistency impact the performance? Supplementary Material: I have checked the whole supplementary material. The table reference in A.10 is missing, and FADformer is not correctly cited. Relation To Broader Scientific Literature: This paper introduces an image deraining architecture incorporating the zigzag scanning method with Mamba. The network achieves promising performance on some image restoration datasets. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The proposed method introduces a new scanning method for image deraining to model correlations between Fourier frequencies. State-of-the-art performance is achieved on some image restoration tasks. The reviewer's concerns are as follows: 1. The proposed network mainly follows that of FreqMamba. 
The novelty of the method only lies in using a different scanning method for spatial-dimensional frequency features. Using Mamba to scan the channel features is direct and not novel in image restoration. Overall, the novelty is a little bit limited. 2. The experiments are not convincing, as the proposed method employs a different training strategy for the dataset compared to FreqMamba. 3. On some additional datasets, the performance is not competitive. For example, on SPA-Data, AST achieves 49.51 dB, which is much higher than the proposed method. For low-light enhancement, the proposed method is inferior to MambaLLIE while using more parameters. 4. The authors provide the speed comparisons in the supplementary material. The reviewer finds that the proposed method does not run fast in the deraining domain. [AST] Adapt or Perish: Adaptive Sparse Transformer with Attentive Feature Refinement for Image Restoration, CVPR24. [MambaLLIE] MambaLLIE: Implicit Retinex-Aware Low Light Enhancement with Global-then-Local State Space, NeurIPS24. Other Comments Or Suggestions: n/a Questions For Authors: Please refer to the entries above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **R1**: Response to “Insufficient Novelty” The distinctions between our method and FreqMamba are detailed in Appendix A.9. Here, we further clarify the novelty: **Spatial-Domain Zigzag Scanning**: Unlike FreqMamba’s simple wavelet-domain scanning, FourierMamba’s zigzag scanning, inspired by JPEG encoding, orders low-to-high frequencies in the Fourier domain, enhancing correlation modeling while preserving symmetry. This integrates signal processing knowledge into Mamba’s efficient structure, synergistically improving deraining outcomes. **Channel-Domain Fourier Scanning**: Discussed in Appendix A.6, this design leverages varying degradation characteristics across channels, which collectively define global image information. By introducing Fourier transforms in the channel dimension, we capture inter-channel frequency dependencies, enhancing global representation while decoupling degradation (e.g., rain streaks) from background content using amplitude and phase spectra. Joint modeling with Mamba further strengthens channel interactions and global information modeling. **Overall Novelty**: FourierMamba combines spatial zigzag scanning and channel Mamba scanning to form a versatile deraining framework, outperforming MambaIR and FreqMamba. This dual-domain frequency modeling represents a significant advancement in image restoration, far beyond a mere scanning improvement. **R2**: Response to “Experiments Lack Convincing Evidence” Joint training on Rain13k demands greater network generalization, often yielding lower performance than models trained separately on individual datasets. Except for FreqMamba, results in Table 1 reflect joint training, whereas FreqMamba’s paper conflates the two approaches. For a fair comparison with FreqMamba, we provide results under identical experimental settings in Appendix Tables 11 and 16, showing improvements of 0.54 dB and 0.71 dB on Rain13k and SPA datasets, respectively, with our method. 
**R3**: Response to “Performance Lacking Competitiveness on Certain Datasets” AST was trained on the enhanced SPAD version[1] of the SPA dataset, making direct comparison unfair. Thus, we retrained AST on the SPA dataset using parameters from its paper, yielding the results in the table below. For low-light enhancement, FourierMamba targets image deraining, with experiments in this task only validating its generalizability, lacking specialized designs like MambaLLIE’s Retinex prior. Hence, it is reasonable that FourierMamba underperforms MambaLLIE in low-light enhancement. Future work will explore tailoring FourierMamba for this task. **R4**: Response to “Suboptimal Inference Speed” It was noted that FourierMamba’s inference time lags behind Restormer’s. This stems from PyTorch’s extensive CUDA optimizations for attention-based operators, which Mamba currently lacks. However, our method’s FLOPS (22.5) is significantly lower than Restormer’s (174.7), suggesting that further CUDA optimization of Mamba operators will enhance inference speed. Meanwhile, compared to other Mamba-based methods, ours achieves a superior balance of inference efficiency and performance. **R5**: Response to “Issues with Supplementary Material” The review highlighted a missing table reference in Section A.10 and an incorrect citation for FADformer. We apologize for these oversights and will correct them in the revised manuscript: Section A.10 will explicitly reference Table 12, and the FADformer citation will be updated to the correct source. We appreciate your meticulous feedback.

| Method | PSNR  | SSIM   |
|--------|-------|--------|
| AST    | 48.49 | 0.9924 |
| Ours   | 49.18 | 0.9931 |

[1] Learning Weather-General and Weather-Specific Features for Image Restoration Under Multiple Adverse Weather Conditions [CVPR 2023] --- Rebuttal Comment 1.1: Comment: Thank the authors for the detailed response. My concerns have been addressed, and I have increased my rating accordingly. Congratulations.
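The channel-dimensional Fourier transform described in R1 above (a 1-D FFT along the channel axis of a feature map) can be sketched with numpy. The tensor shapes here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
feat = rng.standard_normal((16, 8, 8))   # hypothetical C x H x W feature map

# 1-D FFT along the channel axis: at every spatial position, the
# channel vector is decomposed into channel-frequency components.
Fc = np.fft.fft(feat, axis=0)
amp, phase = np.abs(Fc), np.angle(Fc)

# Shape is preserved (one complex coefficient per channel frequency),
# and the transform is invertible, so a sequence scan over amp/phase
# can be mapped back to ordinary channel features afterwards.
assert Fc.shape == feat.shape
assert np.allclose(np.fft.ifft(Fc, axis=0).real, feat)
```

In this reading, the amplitude and phase along the channel-frequency axis are what a Mamba-style scan would traverse; the inverse FFT guarantees the scanned representation maps back to the original channel features without loss.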
Summary: This paper proposes FourierMamba that addresses the problem of single image deraining by introducing a scanning encoding mechanism that correlates different frequencies in both spatial and channel dimensions. Specifically, it employs zigzag coding in the spatial dimension to reorganize frequency orders and improve their connectivity, while utilizing tailored designs in the channel dimension. This approach enables more effective frequency information utilization, leading to improved image deraining performance. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: N/A, there is no theoretical claims in the paper. Experimental Designs Or Analyses: The experimental analyses are sound because the authors have conducted extensive experiments on five datasets and shown the superiority of their method. Supplementary Material: The supplementary materials show more visual comparisons and tables for quantitative evaluations. Relation To Broader Scientific Literature: The related works section overlooks several recent studies, including [1-2], which explore the concepts of pixel-level alignment and the linear model. [1] Cross-Modality Fusion Mamba for All-in-One Extreme Weather-Degraded Image Restoration, 2025. [2] Restoring images in adverse weather conditions via histogram transformer, 2024. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: Strengths: 1. The authors have conducted extensive experiments on over five commonly used deraining datasets, demonstrating the proposed method's superiority in both complexity and deraining performance, evaluated quantitatively and qualitatively. 2. The incorporation of zigzag coding in Fourier space and the concept of scanning encoding for different frequencies introduce a novel approach to deraining. 3. 
The paper is well-written and easy to follow, with numerous visual comparisons and feature visualizations effectively illustrating the deraining results of the proposed method. Weaknesses: 1. To improve the implementation and evaluation parts, it is helpful to provide more details on the zigzag scanning implementation and its impact on inference speed. 2. To expand the comparisons with additional experiments, it is suggested to discuss the comparisons and differences between the proposed method and FreqMamba in Fourier correlation strategies. It is helpful to incorporate alternative perceptual metrics beyond PSNR and SSIM to align with human perception on real-world rainy datasets such as SPA-Data. 3. It is recommended to justify why Mamba is more effective than other architectures for processing Fourier frequencies. The authors are also suggested to investigate generalization to other tasks (e.g., deblurring) and discuss the applicability of wavelet transformation. Other Comments Or Suggestions: N/A Questions For Authors: Please refer to the weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **R1**: Details of Zigzag Scanning Implementation and Its Impact on Inference Speed Section 3.2 briefly outlines the motivation and design of zigzag scanning, inspired by JPEG’s zigzag encoding, to order frequencies in the Fourier domain from low to high. For the spatial Fourier spectrum, it is divided into symmetric halves, with zigzag path scanning applied to one half, arranging frequencies from center (low) to edges (high), followed by Mamba sequence modeling. The other half is derived via Fourier symmetry, ensuring orderly frequency correlation without compromising symmetry, unlike full-spectrum scanning. As a preprocessing step, zigzag scanning’s computational cost, mainly from frequency rearrangement and Mamba modeling, is minimal. Experiments show its additional time is negligible, involving a one-time index reorder—reusable as a dictionary for same-resolution images—while Mamba’s linear complexity ensures efficiency. FourierMamba’s inference time matches MambaIR and FreqMamba, indicating no significant burden. Revised Section 3.2 will detail the implementation and confirm its minimal impact on inference speed. **R2**: Comparison with FreqMamba in Frequency-Domain Correlation Strategies We further clarify the differences and similarities between FourierMamba and FreqMamba in frequency-domain strategies. While FreqMamba employs Mamba in the frequency domain, it primarily conducts spatial scanning in the wavelet domain, underutilizing the global properties of the Fourier domain. In contrast, FourierMamba applies Mamba directly in the Fourier domain, using zigzag encoding for orderly frequency correlation, enhancing rain streak capture. FreqMamba’s wavelet-domain scanning limits its global frequency modeling capacity. Quantitative comparisons with FreqMamba on Rain13k and SPA datasets, shown in Tables 11 and 16, reveal our method’s improvements of 0.54 dB and 0.71 dB, respectively. 
Feature map visualizations in Figure 7 demonstrate that our approach more effectively targets rain degradation, yielding superior deraining results. **R3**: Diversity of assessment indicators We have augmented the evaluation with no-reference metric results for the unpaired Internet-Data test set from the SPA dataset, as shown in Table 1 above. **R4**: Effectiveness and Generalizability of Mamba in Processing Fourier Frequencies Mamba, a state-space model, excels with global modeling and linear complexity, ideal for sequential data. In the Fourier domain, where frequency information is inherently global and sequential, Mamba efficiently captures inter-frequency dependencies. Traditional convolutional or Transformer architectures, however, are less efficient or computationally costly for global frequency processing, while Mamba’s linear global attention effectively combines their strengths. Regarding generalizability, supplementary results demonstrate FourierMamba’s strong performance in low-light enhancement and dehazing on datasets like LOL-V1 and Dense-Haze, indicating robust task adaptability. For deblurring, where frequency loss is key, its correlation modeling shows promise, to be explored further in future work. Compared to wavelet transforms, which excel in local frequency and multi-scale analysis, Fourier transforms better support global frequency representation and degradation decoupling, particularly for deraining, where rain streaks are distinctly separable in the frequency domain. Thus, we prioritize Fourier transforms, but future work will explore wavelet integration. These points will be detailed in Sections 2 and 5 of the revised manuscript to fully address Mamba’s effectiveness and the method’s generalizability. **R5**: Related Works Omissions We appreciate your feedback and will include discussions of [1] and [2] in the revised manuscript. 
While [1] explores cross-modal fusion with Mamba for image restoration, our work focuses on frequency-domain modeling. Similarly, [2] employs linear models for weather degradation, which aligns partially with our approach. We will clarify these connections in Section 2 to better position our contributions.

| Method      | BRISQUE ↓ | NIQE ↓ | SSEQ ↓ |
|-------------|-----------|--------|--------|
| Rainy Input | 28.517    | 5.095  | 28.280 |
| MPRNet      | 34.733    | 5.144  | 33.765 |
| Restormer   | 32.288    | 4.851  | 31.789 |
| IDT         | 27.042    | 4.536  | 28.314 |
| DRSformer   | 26.080    | 4.531  | 27.954 |
| FADformer   | 25.959    | 4.760  | 26.667 |
| FreqMamba   | 26.172    | 4.890  | 27.387 |
| Ours        | 25.827    | 4.682  | 26.423 |

[1] Cross-Modality Fusion Mamba for All-in-One Extreme Weather-Degraded Image Restoration, 2025. [2] Restoring images in adverse weather conditions via histogram transformer, 2024.
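The JPEG-inspired zigzag ordering that R1 above describes for spatial frequency scanning can be illustrated generically. The sketch below produces a standard JPEG-style diagonal traversal; the authors' actual path (center-outward over half of the shifted spectrum) may differ in detail:

```python
def zigzag_indices(h, w):
    """JPEG-style zigzag traversal order for an h x w grid.

    Cells are visited anti-diagonal by anti-diagonal (i + j ascending),
    alternating direction on each diagonal, so that for a DC-at-corner
    spectrum the visit order runs roughly from low to high frequency.
    """
    order = []
    for s in range(h + w - 1):
        diag = [(i, s - i) for i in range(h) if 0 <= s - i < w]
        if s % 2 == 0:          # alternate direction on each diagonal
            diag.reverse()
        order.extend(diag)
    return order

# A 3x3 grid reproduces the familiar JPEG pattern.
print(zigzag_indices(3, 3))
# → [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2), (1, 2), (2, 1), (2, 2)]
```

Computed once, the index list can be cached and reused for every image of the same resolution, consistent with the rebuttal's remark that the reorder is a one-time, dictionary-like cost.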
Summary: This paper introduces FourierMamba, a novel framework for image deraining that leverages the Mamba technique within the Fourier space to effectively correlate different frequency components. Unlike existing Fourier-based methods that fail to fully exploit the dependencies between low and high frequencies, FourierMamba employs a unique scanning mechanism to encode frequencies in both spatial and channel dimensions. In the spatial dimension, it uses zigzag coding to rearrange frequencies from low to high, ensuring orderly correlation. In the channel dimension, it directly applies Mamba to enhance frequency correlation and channel representation. Extensive experiments demonstrate that FourierMamba outperforms state-of-the-art methods in both qualitative and quantitative evaluations, offering a significant advancement in image deraining by better utilizing frequency information. Claims And Evidence: The author clearly express the claims made in the manuscript. Methods And Evaluation Criteria: To address the insufficient utilization of correlations among different frequencies, this paper introduces Mamba combined with the Fourier transform to model the dependencies between frequencies, thereby enhancing the representation of frequency information. Experiments conducted on multiple datasets demonstrate that the proposed method achieves effective frequency correlation, showcasing its potential to break through the limitations of existing modeling frameworks. Theoretical Claims: The effectiveness of frequency correlation is extensively validated and analyzed on a wide range of rainy datasets. Experimental Designs Or Analyses: The experimental design is well-justified, and the proposed method demonstrates competitive performance on both synthetic and real-world datasets. Supplementary Material: The experimental design is well-justified, and the proposed method demonstrates competitive performance on both synthetic and real-world datasets. 
Relation To Broader Scientific Literature: Previous work has demonstrated the importance of frequency in image restoration tasks such as image deraining. This paper emphasizes the significance of establishing correlations between different frequencies and proposes a customized Fourier-based method for image deraining. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. The paper introduces a novel combination of Fourier learning and Mamba, which is well-motivated and effectively improves image deraining. It explores further possibilities of integrating Mamba with the Fourier transform. 2. The paper proposes a scanning method based on zigzag coding to systematically correlate different frequencies. This method introduces zigzag coding in the Fourier space to rearrange frequency orders, thereby orderly connecting the relationships between frequencies. The zigzag scanning strategy is well-motivated and technically sound. 3. Compared to other Mamba models for image restoration tasks, FourierMamba demonstrates higher efficiency and better performance, showcasing the enhancement of Fourier learning on Mamba's modeling capabilities. 4. The proposed model achieves a balance between accuracy and reasonable computational cost and model size. 5. The paper is well-organized and relatively easy to follow, with experimental results validating the effectiveness of the proposed modules and scanning methods. Weaknesses: 1. The authors should clarify why the Fourier transform is necessary in the channel dimension instead of directly scanning the sequence. 2. There should be a proper discussion on how to ensure that normal spatial features can be obtained by inverse transformation after Fourier space scanning. 3. There are minor formatting issues, such as the need for a space before "More" in line 130. Other Comments Or Suggestions: See the weaknesses. Questions For Authors: See the weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: **R1**: The Necessity of Channel-Dimensional Fourier Transform In Appendix A.6 of the paper, titled "Reasons for Using Channel-Dimensional Fourier," we note that different channels typically exhibit distinct degradation characteristics, which collectively determine the global information of an image when integrated across channels. This observation draws inspiration from style transfer research, such as the use of the Gram matrix to represent global style information [1]. Building on this insight, we introduce the Fourier transform in the channel dimension to enhance the representation of global information by capturing frequency dependencies across channels. Directly scanning the sequence (e.g., using Mamba) fails to effectively leverage the global properties inherent in the Fourier domain. In contrast, the channel-dimensional Fourier transform enables the decoupling of degradation information (e.g., rain streaks) from background content, thereby improving Mamba’s capacity to model frequency correlations. The effectiveness of this design is substantiated by ablation studies presented in Table 3 and Figure 17, which provide quantitative metrics and visualizations, respectively, demonstrating its impact. **R2**: Ensuring Normal Spatial Features After Inverse Transformation from Fourier Space Scanning You raised the concern that the paper should discuss how normal spatial features are preserved through inverse transformation following Fourier space scanning. We acknowledge that the current discussion on this aspect is insufficiently detailed and will address this in the revised manuscript by supplementing relevant content. The key to FourierMamba lies in its scanning strategy, which is designed to preserve the symmetry and global properties of the Fourier domain. 
Specifically, in the spatial dimension, we employ zigzag coding for scanning and process only half of the spectrum, leveraging the symmetry of the Fourier transform (amplitude centro-symmetry and phase anti-centro-symmetry) to deduce the other half (see Section 3.2). This approach ensures the integrity of the Fourier domain information. In the channel dimension, Mamba scanning is performed on a one-dimensional Fourier spectrum, adhering to similar symmetry principles. The inverse transformation relies on the standard Inverse Fast Fourier Transform (IFFT) algorithm, which theoretically guarantees perfect reconstruction from the Fourier domain to the spatial domain, provided that the scanning operation does not introduce irreversible information loss. Our experimental results (e.g., Figures 5 and 9) demonstrate that the inverse-transformed images retain normal spatial features, such as textures and details, owing to the orderly nature of the scanning design and the preservation of symmetry. We will enhance Sections 3.2 and 3.3 with these clarifications and may include a mathematical derivation in the appendix to further elucidate this process. **R3**: Formatting Issues We will address and correct spelling errors in the revised manuscript. [1] Li, Y., Fang, C., Yang, J., Wang, Z., Lu, X., and Yang, M.-H.Universal style transfer via feature transforms. Advances in neural information processing systems, 30, 2017.
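The symmetry argument in R2 above (amplitude centro-symmetry and phase anti-centro-symmetry, i.e. F(-u, -v) = conj(F(u, v)) for real-valued input) can be checked numerically. This is a generic numpy sketch, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))   # toy real-valued "image"
F = np.fft.fft2(img)

# Conjugate symmetry of the FFT of a real signal: the spectrum at
# (-u, -v) is the complex conjugate of (u, v), so amplitude is
# centro-symmetric and phase anti-centro-symmetric.
u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
F_neg = F[(-u) % 8, (-v) % 8]
assert np.allclose(F_neg, np.conj(F))
assert np.allclose(np.abs(F_neg), np.abs(F))

# Hence half the spectrum carries all the information: rfft2 stores
# only that half, and irfft2 still reconstructs the image exactly.
half = np.fft.rfft2(img)                      # shape (8, 5)
assert np.allclose(np.fft.irfft2(half, s=img.shape), img)
```

This is why scanning only half of the spectrum, as the rebuttal describes, discards no information: the unscanned half is fully determined by symmetry, and the standard inverse FFT reconstructs the spatial features exactly.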
Counting atoms faster: policy-based nuclear magnetic resonance pulse sequencing for atomic abundance measurement
Accept (poster)
Summary: The paper uses reinforcement learning (Proximal Policy Optimization algorithm) to learn policies to modulate NMR pulses for rapid atomic abundance quantification. The authors developed three interacting agents to (1) align nuclear spins for measurement, (2) facilitate rapid relaxation to equilibrium, and (3) coordinate control between these processes to reduce overall measurement time. Experiments were conducted in a simulated NMR environment using low-magnetic-field carbon-13 quantification for cost-effective, portable analysis of foodstuffs and soils. The results indicate notable performance improvements compared to conventional NMR pulse sequences, and the study also discusses the limitations of this technique. ## Update after rebuttal While the rebuttal has substantially improved my understanding and appreciation of the contributions, I think that the practical issues that are not addressed by the simulator, i.e. coupling, T2* effects and shielded vs deshielded protons, make the current approach less convincing to be used as a replacement for physical NMR machines. However, if used as a POC or to provide an estimate then it is reasonable. I look forward for future improvements where the most common practical issues are well captured by the simulator. I have increased my score by 1 point to Weak Accept. Claims And Evidence: The claims in the paper are sound but not entirely convincing to me. The proposed approaches are theoretically sound. However, if I understand correctly, experiments were conducted mostly in a simulated environment of NMR. While in this setting the paper has demonstrated significant gains in terms of speed, my main problem is that I am not entirely sure how well the approaches work in practice. Practical issues like coupling (acknowledged in the paper), T2* effects and shielded vs de-shielded protons are not captured by the simulator presented here. 
Methods And Evaluation Criteria: The proposed methods and the evaluation criteria make sense to me. And indeed they are interesting. My only concern is that the paper seems to be incomplete, as results mostly come from the simulated environment and there seem to be too few practical results other than the comparison of the chirping agents (sec 3.1) against collected 1D NMR pulses in section 2.2. However, these real data were used to train some of the agents. I would like to see results when applying spoiling and toggled chirping and spoiling to real NMR pulse sequencing processes, not in a simulated environment. However, such results are not presented here. Theoretical Claims: The theoretical claims make sense to me. If backed up by experimental, non-simulated results this approach could be a game-changer in NMR pulse sequencing. Experimental Designs Or Analyses: I have a few questions regarding the experiments, in addition to the above comments: 1. On lines 170-173: How are the weights used in the simulator? Could you elaborate on the term "class of spin"? 2. In section 3.2, does spoiling actually reset to the previous magnetization state for the sample even when net magnetization is 0? Let's say at 0 net magnetization the number of protons whose direction is parallel to B0 is the same as the number of protons whose direction is anti-parallel to B0; would the spoiling agent create any zones where the protons are of the same polarity? 3. How do tuples $(\gamma, T_1, T_2)$ collected in section 2.2 remain valid for use in section 3.2 and 3.3 given that in 3.2 and 3.3 you have significantly cut down the relaxation time? Is there any guarantee that in practice this short amount of time is sufficient to achieve equilibrium after a period of high magnetization? Supplementary Material: Yes, I did. There was just appendix A to review.
Relation To Broader Scientific Literature: If the proposed approaches work well in practice, it will be another example where reinforcement learning can substantially advance other fields. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: Could you give your opinion on modelling spin echo sequences (i.e. 180-degree pulses) in your simulator? Have you tried that? Questions For Authors: N/A Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and feedback. Our responses are: 1. **Regarding Claims and Methods.** The reviewer comments that the simulated approach is not satisfying. There is a precedent for the positive impact of simulated results in this field. Please refer to bullet point 4 in our response to Reviewer tJu3 as well as bullet point 1 in our response to Reviewer myr8. 2. **Regarding Experimental Design 1.** To obtain an FID in the simulator, the individual spins $(\gamma, \text{T1}, \text{T2})_i$ are separately simulated at first. The $(\gamma, \text{T1}, \text{T2})_i$ for fixed $i$ is what we refer to as a “class of spin.” Subsequently, we perform a weighted sum (over index $i$) of each of these magnetizations in order to get the observed $M_x$ and $M_y$. The weights in this sum are derived by a simple least squares fit to the empirical data, and their relative variances were low enough that we did not consider them important to include as sources of uncertainty. 3. **Regarding Experimental Design 2.** We agree that this is an important and subtle point. The agent does not have access to the net magnetization in the $B_0$ ($+z$) direction, and can only reasonably optimize to reduce transverse magnetization (in the $x$ and $y$ directions). The reviewer is correct that the state with $X$% of spins pointing in the $+z$ direction and $(1-X)$% of spins in the $-z$ direction parametrizes a line of maximum-reward states for the spoiling agent. If $X$ is nonzero, the spins in the $-z$ direction are unstable and precess back to $+z$, but this can cause longer-timescale oscillations in transverse magnetization than in the absence of active spoiling. This leads us to the rationale behind the second term in Equation (2) on lines 234-235: intermediate-timescale reductions in magnetization are insufficient; there must be a meaningful reduction in magnetization achieved by the end of the spoiling period. 4.
**Regarding Experimental Design 3.** The description remains valid. Relaxation times T1 and T2 are defined as constant rates of decay of transverse magnetization in the presence of a strong background field $B_0$ and the absence of a perturbing field $B_1$ (analogous to half-life in radioactivity). By actively spoiling with an applied field $B_1$, we force a faster rate of decay, but this does not change the underlying model description. It only means we cannot use an RL-based pulse to measure relaxation times T1 or T2, but it is not necessary that we be able to do so. 5. **Regarding Question 1** and spin-echo sequences. Our simulator can straightforwardly achieve a pi-pulse; this is just a matter of doubling the pulse time of the 90-degree pulse. It is intriguing to consider how introducing diverse pulse sequences into training (e.g. through teacher-forcing or by expanding exploration) could change the learned policy, especially given our response in Point 3. We are grateful for this suggestion; it is a very interesting avenue for future work and could unlock further performance benefits. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed rebuttal. While it has substantially improved my understanding and appreciation of the contributions, I still think that the practical issues that are not addressed by the simulator, i.e. coupling, T2* effects and shielded vs deshielded protons, make the current approach less convincing to be used as a replacement for physical NMR machines. However, if used as a POC or to provide an estimate then it is reasonable. I look forward to future improvements where the most common practical issues are well captured by the simulator. I have increased my score by 1 point.
--- Reply to Comment 1.1.1: Comment: We are grateful for the additional feedback and wish to affirm that the approach is not intended as a replacement for traditional NMR, and in fact its deployment in realistic environments will require cross-checking against such traditional NMR systems as outlined in L392-399 (right). We also wish to emphasize Point 2 in our initial response to Reviewer tJu3: while it is true that we did not capture all of these confounding variables in the simulation, in future work we will test both (A) improving the simulations and (B) developing a model-free paradigm using a hardware instantiation of the device wherein we train on pre-characterized samples. The feasibility of such a training paradigm in (B) is unique to this approach and will capture such effects when they are present in the analyte of interest. We are thankful that the Reviewer has raised these concerns, as they make clear we must select a sufficiently complex suite of analytes for the demonstration of (B).
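The weighted-sum FID construction described in the response to Experimental Design 1 above (simulate each class of spin separately, then fit per-class weights to empirical data by least squares) can be sketched as follows; the basis signals, frequencies, and weights here are invented for illustration:

```python
import numpy as np

# Each column of M is the simulated transverse magnetization of one
# "class of spin" (one (gamma, T1, T2) tuple); the class weights are
# recovered from an "empirical" signal y by ordinary least squares.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
classes = [(5.0, 0.3), (9.0, 0.5), (13.0, 0.2)]   # (frequency in Hz, T2 in s)
M = np.column_stack([np.exp(-t / T2) * np.cos(2 * np.pi * f * t)
                     for f, T2 in classes])

true_w = np.array([0.5, 0.3, 0.2])                   # ground-truth class weights
y = M @ true_w + 0.01 * rng.standard_normal(t.size)  # noisy observed signal

w_fit, *_ = np.linalg.lstsq(M, y, rcond=None)        # fitted class weights
```

Because the classes oscillate at distinct frequencies, the design matrix is well conditioned and the fitted weights are close to the true ones, consistent with the low weight variances mentioned in the rebuttal.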
Summary: The paper presents another novel approach to performing NMR studies using reinforcement learning to modulate pulses. The authors study three different formulations: standard control of NMR pulse for spin alignment, relaxation of spins towards equilibrium, and allowing an agent to determine when relaxation (spoiling) should be allowed. The trained agents could achieve the desired task efficiently in the experiments performed on simulated data for Carbon-13. This work generally represents a positive movement towards intelligent control of expensive procedures. ## Update after rebuttal I was happy with the responses from the authors and feel the paper will make a great addition to the conference. Claims And Evidence: As far as the limitations of the evidence are clearly discussed, yes the claims are supported. Methods And Evaluation Criteria: They make sense to the problem as a starting point from which more complex problems could be tackled. Theoretical Claims: There are no large theoretical claims made in the study. Experimental Designs Or Analyses: The experiments are designed well as a proof of concept. Supplementary Material: The supplementary information is limited but sufficient. Relation To Broader Scientific Literature: The use of agents to control and improve specific procedures is an emerging topic that will undoubtedly be of great interest to the community. From this perspective, the authors have made a positive contribution. Essential References Not Discussed: I did not notice important missing work. Other Strengths And Weaknesses: The idea is very interesting and well-formulated. Further, the decomposition of the problem into three stages is also well done. As the authors themselves mention, their data is limited not only in their use of simulation but also in that they study only a single species. Including real-world deficiencies of equipment and measurements would be a nice touch. 
Other Comments Or Suggestions: The paper is well structured, and there were no apparent problems with the writing or figures. Questions For Authors: 1. Perhaps the authors can explain the main benefits of the RL approach over a non-intelligent control. Are there other semi-automatic approaches to controlling the chirp signals that require no training but are still efficient, something from MPC, for example? 2. What architectures were used in the studies, and how might this impact the results? Can time be introduced explicitly? 3. Can the state descriptions and other aspects of the RL training be extended to include multi-atom effects? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and feedback. Our responses are: 1. **Regarding Weaknesses** and the inclusion of additional real-world equipment deficiencies. We agree that the inclusion of thermal instrument noise is only part of the picture. A variety of limitations are worth modeling, including spin participation (how many spins experience the influence of the strong background $B_0$ field) and magnetic field inhomogeneities (due to e.g. a low-quality magnet) [1], as well as spatial perturbations of the sample during measurement. Our perspective is that a hardware implementation of the apparatus will be necessary in order to meaningfully generate profiles for these features in a Bloch simulator, but that producing a dataset of these effects as generated by real hardware is both feasible and a promising avenue of future research. 2. **Regarding Question 1.** The main advantage of our approach over existing alternatives is the possibility of model-free development. We used a simulator as a proof-of-concept in this work to justify the construction of a bespoke hardware instantiation of the device and study its design parameters to reduce cost, but in a physical realization of the device we could simply learn a pulse by reinforcement on pre-characterized physical samples. Other approaches (including traditional NMR) require either analytical control or simulation of the underlying physics. In the Related Work, GRAPE is a result from optimal control, so is effectively the closest connection to MPC. However, GRAPE involves regression on a faithful model of the underlying system for a sample, which requires prior characterization and theoretical analysis of a compound of interest, and as such is only suited for sufficiently simple atomic or molecular systems that this characterization step can be performed. 
We have added a sentence explaining the connection between GRAPE and MPC in the Related Work and we welcome suggestions from the reviewer for relevant references to include which would further substantiate this connection. 3. **Regarding Question 2.** As the focus of this work was on an initial demonstration of the application with comparison to an analytically-derived baseline, and PPO suffices for this purpose, we did not attempt to further optimize performance by introducing other approaches such as TRPO, SAC, or A2C/A3C. Per bullet point 3 in our response to Reviewer myr8, we have included an Appendix explaining PPO and its alternatives. The timesteps referred to in the paper are easily relatable to explicit time within the simulation. We measure T2 and T1 in real-world seconds and compare timesteps against these values where appropriate (e.g. Figure 2), but we did not do so everywhere because timesteps are the temporal unit most suitable to analyzing the optimization process. 4. **Regarding Question 3.** Yes, techniques exist to extend Bloch simulations to account for a diverse range of molecular structure effects and it has been found that these are effective at improving MRI for molecularly complex samples such as brain tissue, e.g. [2]. Modification of the state descriptions and reward functions may be of interest for many reasons, an example is simultaneous tracking of mild impurities which one might reasonably treat as nuisance variables for the purpose of suppressing their radio-frequency response. In addition, whenever an analogous procedure exists in traditional physics-driven NMR to handle a specific such effect, we can optimize with respect to it in a similar fashion to the experiment in Section 3.1.2. Finally, we refer back to bullet point 2 above: wherever simulation-based methods fail, this method can in principle be applied directly on analytes of interest without an underlying physical representation of the system. 
This is not true of alternative approaches, though it comes with the caveats specified in Section 4 regarding distribution shift (lines 392-399, right side). [1] Ji, Yang, et al. "Dynamic B0 field shimming for improving pseudo-continuous arterial spin labeling at 7 T." Magnetic Resonance in Medicine 93.4 (2025): 1674-1689. [2] Singh, Munendra, et al. "Bloch simulator-driven deep recurrent neural network for magnetization transfer contrast MR fingerprinting and CEST imaging." Magnetic Resonance in Medicine 90.4 (2023): 1518-1536. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed responses. I would like to clarify: when asking about architecture, I was referring to the architecture of the actor. Are the authors using dense networks, or something that can incorporate time, like an LSTM or Transformer? But to clarify, the authors write they use PPO but not A2C, so this is then PPO without baseline? Why not go with something like A2C? The increase in complexity is minimal but the payoff can be significant. In any case, if the RL method (e.g. PPO, A2C, and so on) along with the network architectures is included in the manuscript, I think the contribution is of interest to the community. Clearly real-world experiments would be of interest, but that is a big step up from simulation and would constitute research in and of itself. I will raise my score to an accept. --- Reply to Comment 1.1.1: Comment: We are grateful for the additional feedback and agree that it would be of future interest to produce results with a variety of treatments for the actor, to compare their learned behaviors as well as their relationships to comparable approaches in optimal control, e.g. the one we describe in our response to Reviewer MkXf in Point 2. We explain the stable-baselines3 implementation of PPO in detail in the new Appendix B, alongside alternative approaches which e.g.
incorporate recurrent architectures (this implementation of PPO does not, it just uses a dense policy network but can be adapted to use transformers or LSTMs). In this paper we focused on 1D NMR as the key baseline, studied improvements relative to that baseline, and focused on developing a sensible methodology, since any such policy-based approach will raise similar questions to the ones motivating each experiment.
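The policy-gradient machinery discussed in this exchange can be illustrated without the paper's stack: below is a minimal REINFORCE-style loop on a toy two-armed bandit in pure numpy, a self-contained stand-in (not the authors' stable-baselines3 PPO with a dense policy network):

```python
import numpy as np

# REINFORCE with a running reward baseline on a 2-armed bandit.
# Arm 0 pays 1.0 and arm 1 pays 0.2, so the softmax policy should
# learn to favor arm 0. All parameter values are illustrative.
rng = np.random.default_rng(0)
theta = np.zeros(2)               # logits of a softmax policy
rewards = np.array([1.0, 0.2])
baseline = rewards.mean()         # baseline for variance reduction
lr = 0.1

for _ in range(2000):
    p = np.exp(theta - theta.max()); p /= p.sum()
    a = rng.choice(2, p=p)        # sample an action from the policy
    r = rewards[a]
    grad = -p; grad[a] += 1.0     # gradient of log pi(a | theta)
    theta += lr * (r - baseline) * grad
    baseline += 0.05 * (r - baseline)

p = np.exp(theta - theta.max()); p /= p.sum()
```

PPO differs by optimizing a clipped surrogate objective with a learned value baseline (and, in stable-baselines3, a dense actor-critic network), but the underlying advantage-weighted log-probability gradient has the same shape as this update.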
Summary: Counting atoms matters because it allows us to monitor and control the elemental makeup of materials, which is important for environmental sustainability applications. This paper presents a method that uses RL to optimise nuclear magnetic resonance (NMR) pulse sequences for faster and cheaper atomic abundance measurements. ## Update after rebuttal I have increased my score by one point. Claims And Evidence: There are 3 main claims (contributions) in the paper: 1. "Train policies by reinforcement which shape and sequence magnetic field pulses for use in a simplified (low-field) NMR spectroscope". The method is claimed to be non-destructive and generalisable to nuclear isotopes of many different elements. 2. "present a fast, robust simulator for generating large quantities of NMR spectroscopy data which is capable of reproducing the nuclear spin dynamics of many different samples in parallel when they are manipulated by an arbitrary magnetic field" 3. "A novel method to manipulate these spins for atomic abundance measurement, by training three inter-operating agents to orchestrate an NMR pulse sequence." I think the experimental results back these claims in the sense that I can see there is an increased efficiency from metrics and graphs (e.g. model 2 reports 28% reduction in magnetisation, which I guess is good). However, as a non-expert with little to no knowledge in physics I can't assess how meaningful the experimental setup is and how significant the results are. Methods And Evaluation Criteria: To my understanding, the simulator (based on Bloch equations) is calibrated using some real data (as described in Section 2.2). I have no idea if this is standard or if it makes sense for the things they are trying to demonstrate. There aren't any baseline methods to compare the RL based approach to. Ideally, I'd like to see GRAPE (which this work is similar to), and potentially simpler baselines. E.g. would a random policy make sense here?
Are there any "heuristic" policies that the field has developed previously? The RL approach to policy learning itself is very standard - the authors use PPO. Theoretical Claims: There are no theoretical proofs in this work. Experimental Designs Or Analyses: I am not able to assess this. Supplementary Material: The supplementary material includes only the Bloch equations, which I trust are correct. Relation To Broader Scientific Literature: I cannot assess the impact on the broader scientific literature. Essential References Not Discussed: I don't know this field. Other Strengths And Weaknesses: On the ML side of things, the major weakness is the lack of baselines. As a non-expert I find it difficult to assess strengths and weaknesses more broadly. Other Comments Or Suggestions: NA Questions For Authors: You mention GRAPE as a prior work related to what you do. Why is it not included as a baseline? More generally, what simpler baseline methods are there? My review score is primarily due to the lack of baselines. I am not able to assess any of the physics, simulator setup etc. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and feedback. Our responses are: 1. **Regarding Weakness 1:** lack of a baseline. In fact, we do use the standard 1D NMR pulse as a baseline, and we compare against it in all experiments. The baseline we use is the simplest and most common approach to analyze atomic abundances of solution-phase samples in an NMR. While more sophisticated pulse sequences are often performed in the case of chemical characterization (which investigates e.g. which molecular bonds are present and what their relative positions are), relative abundance quantification is typically done by (A) performing a 90-degree pulse (~0.5 ms), (B) waiting around a second (a few T2s) for the system to equilibrate, (C) taking a Fourier transform of the FID, and (D) integrating the NMR spectrum. However, as stated on line 194, the chirp in Experiment 1 achieves its maximum magnetization state in 16% of the time required for this 1D NMR pulse in step (A). This is a 6x speedup even before accounting for step (B), which conservatively takes 100x longer than step (A), if not longer [1]. The black line in Figure 1 demonstrates that no analogous shortcut is possible with the 1D NMR baseline, i.e. one would have to perform (A)-(D) to obtain an equivalent result. Similar acquisition time reductions have been achieved in data-driven MRI [2]. 2. **Regarding Question 1** and GRAPE [3] as a possible baseline. GRAPE requires specification of a target state. We are unaware of any standard state reported in the literature for atomic abundance measurement, but Footnote 1 on line 98 (page 2) shows how to derive one. The result seems paradoxical because it suggests performing no pulse at all. This seems strange, but is a consequence of differing assumptions and regimes of relevance. A zero pulse would work in theory, but requires different hardware which rapidly inserts and removes the sample from the magnetic field, so is not fully comparable to this approach.
One can think of this as rapidly taking independent measurements of “signal plus background” and “only background” so that one can separate signal from the noise with a simple subtraction. Even if a zero pulse was found to perform better in simulation, engineering such hardware is vastly more challenging than a stationary approach; a relevant point of comparison is “magic angle spinning” in solid-state NMR where small cuvettes of material are spun at MHz frequencies [4]. We also emphasize that, besides performance considerations, in our approach there is no fundamental requirement of a faithful simulation of the underlying system. In a hardware instantiation, we can simply learn by reinforcement on real pre-characterized samples, which is not the case for GRAPE. The underlying reason for these differences is that GRAPE assumes the quantum mechanical regime is an appropriate description of the spins (i.e. samples contain only a few atoms or molecules so that quantum effects dominate), whereas RL on a Bloch simulator assumes a semi-classical description (i.e. samples contain multiple Avogadro’s numbers of atoms or molecules whose nuclear spins together behave classically). Areas of application appropriate to GRAPE include e.g. quantum computing [5,6] where individual molecules are manipulated into specific quantum states and then controlled as components of logic gates. Areas of application appropriate to our method include fields which commonly process macroscopic (gram-scale) samples appropriate to everyday contexts, such as food science, agriculture, and human health. However, the reviewer’s comment suggests a nontrivial avenue of further investigation which we had not realized earlier, and we are grateful for this insight. The top left of Figure 3 shows that a preferred direction is learned by the policy (black dots), providing an alternative target state for a GRAPE analysis than that derived in Footnote 1. 
It would be interesting to understand whether GRAPE with this target state is equivalent to our learned policy. We have added a sentence to the Future Work section incorporating this observation. [1] Joseph, David, and Christian Griesinger. "Optimal control pulses for the 1.2-GHz (28.2-T) NMR spectrometers." Science Advances 9.45 (2023): eadj1133. [2] Ma, Dan, et al. "Magnetic resonance fingerprinting." Nature 495.7440 (2013): 187-192. [3] Khaneja, Navin, et al. "Optimal control of coupled spin dynamics: design of NMR pulse sequences by gradient ascent algorithms." Journal of Magnetic Resonance 172.2 (2005): 296-305. [4] Reif, Bernd, et al. "Solid-state NMR spectroscopy." Nature Reviews Methods Primers 1.1 (2021): 2. [5] Jones, Jonathan A. "Controlling NMR spin systems for quantum computation." Progress in Nuclear Magnetic Resonance Spectroscopy 140 (2024): 49-85. [6] Schulte-Herbrüggen, T., et al. "Optimal control for generating quantum gates in open dissipative systems." Journal of Physics B: Atomic, Molecular and Optical Physics 44.15 (2011): 154013. --- Rebuttal Comment 1.1: Comment: [Apologies, I didn't realise I had to post my comment as "rebuttal comment", so I'm posting it again] Thanks for your thorough response. Overall, I think the "Application driven ML" is a great idea, and this paper presents an interesting application (though I apologise I do not have sufficient background to appreciate it). I'll increase my score by one point. --- Reply to Comment 1.1.1: Comment: We are grateful for the reviewer’s time and constructive responses.
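Steps (C) and (D) of the 1D NMR baseline described in the first rebuttal point above — Fourier-transform the FID, then integrate the spectrum — can be sketched on a synthetic FID. The identity sum_k FFT(x)[k] = N * x[0] makes the spectrum integral directly proportional to the abundance-dependent initial amplitude; all parameter values here are invented for illustration:

```python
import numpy as np

# Synthetic free induction decay (FID): a decaying complex oscillation
# with abundance-dependent amplitude A.
N, dt = 2048, 1e-3           # samples and dwell time (s)
T2, f, A = 0.3, 40.0, 2.5    # relaxation time (s), offset (Hz), amplitude
t = np.arange(N) * dt
fid = A * np.exp(2j * np.pi * f * t - t / T2)

spectrum = np.fft.fft(fid)                   # step (C): Fourier transform
abundance_signal = spectrum.sum().real / N   # step (D): integrate the spectrum
```

Since the spectrum integral collapses to N times the first FID sample, `abundance_signal` recovers A regardless of the line shape, which is why spectral integration is the standard abundance readout.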
Summary: The paper applies reinforcement learning to NMR data, to infer the elemental composition of a sample. They learn optimal policies to modulate NMR pulses for elemental abundance quantification. They present promising results on simulated data. ## Update after rebuttal I have raised my score. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: No supplementary material provided. Relation To Broader Scientific Literature: I am not sufficiently familiar with the literature to comment. Essential References Not Discussed: I am not sufficiently familiar with the literature to comment. Other Strengths And Weaknesses: The paper only presents application to simulated NMR data. It would be nice to have real data analyzed. The machine learning content of the paper is weak, and it would seem that this paper is better suited to a specialized journal focusing on NMR. Other Comments Or Suggestions: I did not find any typos. Questions For Authors: I understand the authors are using algorithms developed elsewhere. But given the readership of ICML I think readers would appreciate it if more details were included about the algorithms used (things like PPO, MDP, etc.). Can the authors include a summarized self-contained description of these methods? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and feedback. Our responses are: 1. **Regarding Weakness 1.** The reviewer states that the paper only presents application to simulated data. This is not totally accurate, for 2 reasons. **(I)** Firstly, in Experiment 1, the spin sets used in both training and evaluation are derived from genuine raw NMR data (a serial dilution of caffeine in water with a 90-degree pulse), and the Bloch model is a faithful representation of the underlying system. This is partly evidenced by the fact that, using the Bloch model, we can reproduce the empirically observed spectra exactly. **(II)** Secondly, we did investigate how to operate real-world NMR hardware with a control policy, but found it is not practical with current machines such as those used to produce the empirical data, as they are only operable with proprietary software, and purchasing a dedicated setup for modification would cost in the 6- to 7-figure range (USD). Such investment of funds and effort is challenging to justify without a proof-of-concept, which our results do provide. Furthermore, our results demonstrate concrete evidence that a high-fidelity, laboratory-grade NMR with expensive magnets, liquid nitrogen cooling, and similar luxury features can be potentially replaced with lower-quality, less expensive benchtop hardware. As a consequence of investigating this proof-of-concept, not only did we learn that the method has theoretical promise, but we also obtained evidence that the magnitude of investment required to explore hardware development could be much lower than this initial expectation. This finding is not obvious, and it represents an important motivating result which will be a precursor to a longer-term hardware development effort. 2. **Regarding Weakness 2.** The reviewer comments that the machine learning content is weak, and that the paper is better suited to a venue focused on NMR. 
The ICML 2025 Call for Papers states the following regarding the Application-Driven Machine Learning track: “Application-Driven Machine Learning (innovative techniques, problems, and datasets that are of interest to the machine learning community and driven by the needs of end-users in applications such as healthcare, physical sciences, biosciences, social sciences, sustainability and climate, etc.)” As noted in the Related Work section, there is little existing work applying machine learning specifically to NMR pulse sequencing and atomic abundance measurement, so this work falls under the category of a “problem of interest” in the language of the Call. NMR is a common measurement technique applied within healthcare, analytical chemistry, and bioscience and is the basis of MRI technology. (The application specifically to sustainability is somewhat novel, though sustainability research has significant overlap with the aforementioned 3 fields.) While we agree that multiple field-specific venues could be acceptable for such a work, the key point we are trying to demonstrate and emphasize with this paper is that an application of machine learning unlocks capabilities which can impact many different areas of science in a unified fashion. In conjunction with the language of the Call, this widespread impact thanks to open-source frameworks and developments in reinforcement learning (including the Gymnasium framework and the PPO family of policy gradient methods used to produce our results) was our primary reasoning to submit to ICML 2025. 3. **Regarding Question 1.** Given the diverse scientific backgrounds of potential readers for this work, we agree that it is appropriate to include additional definitions regarding MDPs, PPO-based agents, the designed objective functions, and potential alternatives. We have added an Appendix B where we provide these, and cite relevant related works. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. 
I have increased my score. --- Reply to Comment 1.1.1: Comment: We are grateful for the reviewer’s time and constructive responses.
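The Bloch-equation simulator referenced throughout this thread (and in the paper's Appendix A) can be illustrated with a minimal rotating-frame integration: off-resonance precession plus T1/T2 relaxation, with no applied B1. All parameter values below are invented for the sketch:

```python
import math

# Euler integration of the Bloch equations in the rotating frame.
T1, T2, M0 = 1.0, 0.3, 1.0      # relaxation times (s), equilibrium Mz
dw = 2 * math.pi * 10.0          # 10 Hz off-resonance precession (rad/s)
dt, steps = 1e-5, 100_000        # 1 s of evolution
mx, my, mz = 1.0, 0.0, 0.0       # start fully tipped into the x-y plane

for _ in range(steps):
    dmx = dw * my - mx / T2      # precession + transverse (T2) decay
    dmy = -dw * mx - my / T2
    dmz = (M0 - mz) / T1         # longitudinal (T1) recovery toward M0
    mx, my, mz = mx + dt * dmx, my + dt * dmy, mz + dt * dmz
```

After 1 s (more than 3 T2), the transverse signal has decayed to a few percent of its initial value while Mz has recovered most of the way toward equilibrium, matching the exponential-decay picture used in the rebuttals.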
Balancing Interference and Correlation in Spatial Experimental Designs: A Causal Graph Cut Approach
Accept (poster)
Summary: This paper studies optimal cluster-randomized designs for spatial A/B testing problems. The authors investigate the decomposition of the mean squared error (MSE) and show that interference and correlation are two key driving factors, contributing in opposite directions: when interference is strong, it is optimal to assign the same policies to neighboring regions, whereas strong correlation favors assigning different policies. They also propose a computationally efficient surrogate function for the MSE, which adapts to varying levels of interference and correlation structures. Combined with a graph cut algorithm, the surrogate function effectively learns the optimal design, as demonstrated by both theoretical analysis and experimental results. Claims And Evidence: The claims regarding the contributions in spatial A/B testing problems make sense to me; however, I believe the authors should provide a detailed discussion if they would like to claim their method also *is readily adaptable to Example 2: Environmental and epidemiological applications* and *Example 3: Experimentation in social networks.* Methods And Evaluation Criteria: They make sense to me, but I would appreciate it if the authors could conduct more experiments on real-world datasets. Theoretical Claims: I reviewed the proof of Theorem 1, and it appears sound to me. Experimental Designs Or Analyses: Some of the simulation results make me worry about their credibility. Below, I list the concerns that I hope the authors can address in their response: - In Figure 5(b), the MSE of the ID method also improves as the number of repetitions increases. Intuitively, however, it should remain the same, since it is always an individual design. Could the authors clarify this behavior? - The theoretical results rely heavily on assumptions about the covariance matrix $\Sigma$. 
For example, the surrogate function is valid only when all entries are non-negative, and Assumption 2 requires a decaying covariance structure. I would like to know how the authors handle cases where the estimated covariance $\hat{\Sigma}$ does not satisfy these conditions. How does the algorithm deal with such situations, and what is the expected performance of the proposed method in these cases? Supplementary Material: I reviewed most of them, with a particular focus on Sections C and E. Relation To Broader Scientific Literature: This paper is related to causal inference and experimental design with spatial interference. Essential References Not Discussed: I don't see any obvious missing references. Other Strengths And Weaknesses: - Strengths: The paper is generally well written and easy to follow. The trade-off between interference and correlation is clearly articulated within the studied setting. Additionally, the intuition behind the design of the surrogate function is reasonable and well motivated. - Weaknesses: My main concern lies in the assumptions made by the paper. Specifically, Assumption 2 and the conditions in Propositions 1 and 2 regarding the covariance matrix $\Sigma$ may limit the applicability of the proposed method. In theory, when these assumptions are violated, the trade-off between interference and correlation, as well as the surrogacy result in Proposition 3, no longer hold. In practice, it is unclear how the proposed method performs when the estimated covariance matrix $\hat{\Sigma}$ does not satisfy these conditions. Clarification or empirical evaluation in such scenarios would strengthen the paper. Other Comments Or Suggestions: In line 172, the right square bracket is misplaced. Questions For Authors: Please see my comments above in *Experimental Designs Or Analyses* and *Other Strengths And Weaknesses*. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: >Applications to other data examples We sincerely appreciate your valuable feedback. In response, we conducted extensive investigations during the rebuttal, including: (1) empirical validation on additional data, (2) methodological extensions, (3) theoretical investigations. The methodological investigation is elaborated in our response to Reviewer VLbx. **Empirical validation on environmental data**. During rebuttal, we found a paper that studies the causal effect of wind speed on PM10 levels (Zhang et al., 2024, arXiv:2409.04836, Section 6). As the raw dataset cannot be directly used for evaluation (refer to our first response to Reviewer 96AW), we utilized the simulator described in their paper for comparison. The results, shown in [Figure](https://www.dropbox.com/scl/fi/usk0pcpv7h615634c8b4k/Lhaw_pm10.pdf?rlkey=qkabneg3idllwi4xc4zje89e9&st=aei7qaq2&dl=0), demonstrate our proposal achieves smaller MSEs compared with the benchmarks, confirming its applicability to environmental applications. **Methodological extensions**. Our primary focus is Example 1, but we have extended our design to handle two additional settings: (i) a single-experiment setting without repeated measurements and (ii) a multi-experiment setting allowing the carryover effects over time. For the single-experiment setting, our approach remains effective given prior knowledge of the covariance structure, as confirmed by our newly conducted numerical study. For multi-experiment settings with carryover effects, we have outlined the methodology while reserving its implementation for future work, as detailed in our response to Reviewer VLbx. Collectively, these extensions enable our framework to accommodate the applications described in Examples 2 and 3. **Theoretical investigation**. We have identified crucial assumptions that guarantee the validity of our proposed extension in the multi-experiment setting (ii). 
Specifically, we would require a Markov assumption (Puterman, 2014, John Wiley & Sons) and a temporal mixing condition (Bradley, 2005, Probab. Surv.) to maintain the covariance estimator's consistency in the presence of temporal dependencies. Similar conditions are widely employed in the RL literature for consistent estimation (Kallus and Uehara, arXiv:1909.05850). >Covariance assumption We have comprehensively addressed your concern in the following three ways: First, we conducted extensive empirical validation under violations of the covariance assumption. Second, we clarified the role of the non-negativity assumption. Third, we developed methodological extensions for scenarios where Assumption 2 fails to hold. **Numerical experiments**: We conducted **another new experiment** during rebuttal to illustrate the robustness of our proposal. We design a periodic covariance function $\Sigma_{ij} = 1 - \rho \times ((i - j)\mod 3)$. Under this choice, the decaying covariance assumption no longer holds and the non-negativity assumption is violated when $\rho>0.5$. The results, shown [here](https://www.dropbox.com/scl/fi/n08067ujhj9if8fru1gq1/Lhaw_PeriodCov.pdf?rlkey=z6mo09pntxihowq295b64cuax&st=o8jrb7g2&dl=0), demonstrate that our methods still outperform baselines. We also remark that in both our **ridesharing simulator** and the **PM10 dataset** (see our first response), we find violations of the two assumptions. Nonetheless, our designs remain highly competitive. These results consistently demonstrate our proposal's robustness. **Non-negativity**: The non-negativity assumption is primarily employed in Prop. 1 & 2 to demonstrate the trade-off between interference and correlation. It is not required to ensure the proposed surrogate function forms a valid upper bound (Prop. 3). **Methodological extension**: While the decaying covariance assumption (Assumption 2) is common in spatial statistics (Cressie, 2015), our method remains flexible when this condition is violated. 
In such cases, we propose a simple fix: scaling the first term in loss function (2) by a constant $C > 1$ to ensure it remains an upper bound. The hyperparameter can be optimally determined through simulation studies based on experimental data, evaluating the performance of the resulting estimator across different $C$ values. >MSE of ID in Figure 5(b) You are right that the **asymptotic** MSE of ID should remain constant regardless of the number of repetitions $N$. However, in **finite samples**, the MSE actually improves with $N$, due to the estimation of the $g$ function. Specifically, in theory, Neyman orthogonality guarantees that the asymptotic MSE will match that with an oracle $g$ given sufficiently large $N$. In practice, we estimate $g$ by incorporating all prior data at each experiment. This leads to an initial period where the MSE decreases as $N$ increases, reflecting the improvements in the estimation of $g$. As the estimated $g$ approaches its oracle value, the MSE then stabilizes. The trend shown in Figure 5(b) clearly demonstrates this pattern.
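As a concrete illustration of the periodic covariance used in the robustness experiment above, here is a minimal sketch (the grid size and $\rho$ values are illustrative, not the ones from the rebuttal; the construction is symmetrized via $|i - j|$ so the matrix is a valid covariance candidate):

```python
import numpy as np

def periodic_cov(R, rho):
    """Sigma_ij = 1 - rho * (|i - j| mod 3), the periodic structure from the
    experiment above. Entries go negative once rho > 0.5 (since the mod term
    can be 2), and the covariance does not decay with distance."""
    d = np.abs(np.subtract.outer(np.arange(R), np.arange(R)))
    return 1.0 - rho * np.mod(d, 3)

S = periodic_cov(R=9, rho=0.6)
print(S.min())           # ≈ -0.2: non-negativity (used in Props. 1 & 2) violated
print(S[0, 3], S[0, 6])  # both 1.0: the decay condition (Assumption 2) violated
```

This makes explicit why both conditions fail simultaneously for $\rho > 0.5$: the off-diagonal entries cycle through $\{1,\ 1-\rho,\ 1-2\rho\}$ forever instead of shrinking with distance.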
Summary: This paper proposes a graph cut approach for the design of spatial experiments to estimate the Average Treatment Effect (ATE) in settings with both interference (where the SUTVA assumption does not hold) and correlation between units. The authors present a method which builds a flexible surrogate function for the Mean Squared Error (MSE) of the ATE estimator. Both theoretical and empirical results for the proposed approach are presented. Claims And Evidence: To my understanding, the claims made in the paper are well supported with both theory and methodology. Experiments on real data simulators are welcome. Methods And Evaluation Criteria: The proposed method is interesting and certainly relevant for the problem at hand. Evaluation criteria seems fair with multiple different baselines included. Theoretical Claims: I did not check in detail the correctness of proofs. Experimental Designs Or Analyses: The experimental design and analyses on both synthetic data and the simulator seem quite interesting and sound. The paper would probably have benefited if there were real data to test on. One clarification I would appreciate: for the synthetic data, how was the outcome equation chosen (line 906)? Is it standard in prior works or is it based on some heuristic? Supplementary Material: I reviewed the supplementary material for details on experimental setup, additional experiments and additional related work. Relation To Broader Scientific Literature: Although I am not too familiar with broader scientific literature in this field, this paper proposes a method which can account for both interference and correlation between units in the policy evaluation setting, which I believe has not been widely addressed in the literature before. Although the proposed approach is simple, it addresses a problem which is practically relevant. Some of the prior works require SUTVA assumption. 
Essential References Not Discussed: I think the paper does a good job in discussing the related literature. Other Strengths And Weaknesses: I think the paper does a good job of exposition of the idea and also the setting. However, I think it would be great if some of the limitations of the present approach were discussed in the paper. For example, the algorithmic complexity when the interference/correlation is high, possible limitations of the design procedure itself in practical settings and so on. Other Comments Or Suggestions: In Figure 5(b), it should be the number of "repetitions". Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your thoughtful review and valuable feedback on our work. Your positive assessment is particularly encouraging to us. Below, we provide detailed responses to each of your comments. > **Real data analyses** While we do have a real-world dataset, it cannot be directly used for evaluation. To properly assess a given design, one would need to: (1) implement the design to generate corresponding data, and (2) compute the MSE from this generated data. This is precisely why we developed our real-data-based simulator, which enables adaptive data generation tailored to different experimental designs for comparison. During the rebuttal, we identified another publicly available real-world dataset that examines the causal effect of wind speed on PM10 levels [1]. However, as with our ridesharing dataset, this raw observational data cannot be directly used for evaluation. We therefore employed the simulation model from [1] to generate synthetic data under different designs. Our simulation results, visualized in [Figure](https://www.dropbox.com/scl/fi/usk0pcpv7h615634c8b4k/Lhaw_pm10.pdf?rlkey=qkabneg3idllwi4xc4zje89e9&st=wt8kuzxu&dl=0), demonstrate that our method achieves significantly lower MSEs compared to existing designs. > **Rationality of the synthetic outcome model (line 906)** Our outcome regression function follows the nonparametric model from [2] (Page 24), which studied the spatial A/B testing problem as well. We selected this model for two reasons: First, the outcome is a complex nonlinear function of observations, treatments, and spillover effects. Second, it naturally incorporates spatial heterogeneity through latitude/longitude coordinates, allowing for geographic variation. We consider these features to approximate the complex dynamics present in real-world settings. 
> **Limitations** There are a couple of limitations of our proposal: * Theoretically, our characterization of the interference-correlation trade-off relies on the assumption of non-negative covariance functions. While in practice we expect the presence of some negative covariances would not alter our conclusions, it remains unclear how to elegantly relax this mathematical constraint while preserving our theoretical findings. * Methodologically, we omit the second-order interference term in the objective function to facilitate the optimization. We acknowledge that such higher-order effects may be non-negligible. However, its inclusion would significantly increase the optimization complexity. Developing a computationally tractable solution that properly accounts for this term remains a practical challenge for future work. * In terms of applications, our methodology primarily targets experimental settings where independent experiments can be repeatedly conducted over time. While this framework aligns well with our ridesharing application for spatial A/B testing (the focus of this paper), it may require adaptation for other settings. We have discussed modifications for scenarios with either a single experiment or multiple experiments with carryover effects in our response to Reviewer VLbx. Extending our approach to more general experimental settings would be a valuable direction for future research. > **Typos** Thanks for pointing them out. They will be corrected. [1] Zhang, W., et al. Spatial Interference Detection in Treatment Effect Model. [2] Yang, Y., et al. Spatially Randomized Designs Can Enhance Policy Evaluation.
Summary: The paper proposes a new method for experimental design in settings with repeated experiments, spatial interference and error correlation. The authors characterize the MSE of the doubly-robust average treatment effect estimator, and propose an upper bound objective function that depends on estimable quantities and can be minimized using standard graph cutting algorithms. Their proposed method uses information from the estimated error correlation structure to balance the contributions of cross-cluster and within-cluster correlation in the MSE, and can be computed efficiently. Through a synthetic simulation exercise and a real data simulation study, the paper shows that the proposed method performs well relative to other commonly used methods in the literature. Claims And Evidence: The paper makes three claims relating their method to the literature. I find the claims that their method is more adaptable to correlation structures and computationally efficient reasonable and an interesting contribution. I find the claim that the paper offers a method that is more flexible and better suited for moderate/large interference effects than the literature (Viviano et al. 2023) more nuanced. 1. The paper would benefit from being clearer about how its assumptions relate to Viviano et al. 2023 and, in particular, how its objective differs from theirs. In their paper both bias and variance are considered and some of their results and assumptions are aimed at minimizing bias, which may differ from the MSE goal of this paper. 2. The paper considers a setting in which repeated experiments are available. The paper should be clearer about what is gained by having repeated experiments and distinguish the improvements from having repeated experiments versus not having them and their method versus the literature. 
Methods And Evaluation Criteria: The proposed method is sensible and the evaluation criteria (MSE) and simulation exercises are well suited to study this topic. However, usually in the literature there is a focus on the bias-variance trade-off, and it is well known that "The choice of clustering must balance two competing objectives: the larger the clusters (and the smaller the number of clusters), the smaller the bias of the estimated global effect, but the larger its variance." (Viviano et al. 2023) While focusing on the MSE is intuitive, usually the worry with interference is that it will bias our estimator of interest, so commenting on the bias of different designs would be helpful. If the estimator is unbiased regardless of the interference pattern or if the bias is the same regardless of the assignment mechanism, it would be useful to clarify this. Theoretical Claims: The theoretical results are well stated and the proofs appear mostly correct. However, I have a couple of questions/comments for which I need further clarification. 1. The $I_1$ term of Theorem 1 depends on the correlation between units at the boundary of one cluster with all other units in the other cluster. I was surprised to see that this can be upper bounded by a term that depends only on the boundary regions of each cluster (as $W_{ii}$ ensures in formula (2)). 2. It would be helpful to state if there are any restrictions between O, g, and the error term e. Under which conditions should we expect algorithm 1 to consistently estimate all the elements in the covariance matrix? Are we ruling out that the nature of the interference is similar to the error correlation structure? 3. The paper claims that the results don't require the fraction of boundary points to go to zero, but then ignores $I_2$; wouldn't this be a similar assumption to Viviano et al.? Experimental Designs Or Analyses: The simulation design is sound. However, I have the following comments: 1. 
Figure 5 and Figure 6 suggest that the main improvement is to use repeated measurements. More discussion on whether the other methods used for comparison benefit from repeated measurements or not and why the differences are so stark would be helpful in understanding the benefits of the proposed method, especially given that ID and GD perform so much better than the other methods too. 2. It might be helpful to have a simulation design for which we expect the other methods proposed in the literature to work to compare with the proposed method. 3. Might be helpful to split MSE into the SC, $I_1$ and $I_2$ to see the contribution of each term. Supplementary Material: I have gone over the theoretical part of the appendix. Relation To Broader Scientific Literature: Relative to the broader literature, and in particular Viviano et al. 2023, the paper provides a computationally efficient method that uses the error correlation structure to improve the MSE performance of ATE estimators by utilizing repeated experiments. This is a new and interesting contribution to the literature when repeated experiments are available, but its specific merits should be further clarified. Essential References Not Discussed: The paper considers the key references in the literature. Other Strengths And Weaknesses: I enjoyed reading the paper; it is well written, clear, and provides a new, interesting method. Other Comments Or Suggestions: The paper has a few typos: 1. Bracket in 172. 2. Independently in 212 3. Clarifying that the boundary of the set $\mathcal{C}$ is with other clusters. 4. In Assumption 2 "such that only" 5. Capital W in 373 6. Plots in Figures 5 and 6 show the same estimators in different colors, which is confusing Questions For Authors: Beyond the comments above: 1. The definition of ATE is a bit odd in that it is not the average over $R$ but the sum. Might be good to comment on this as other papers, like Viviano et al. 
2023, consider the average and do the asymptotics with respect to R. 2. What are the assumptions on $O_i$? Should we expect that it is independent across experiments? Same question for $e_i$. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We greatly appreciate your constructive comments and your positive assessment of our paper. We focus on your major comments below. >Comparison with Viviano et al. (2023) **Difference in objective**: One of the key differences lies in the choice of the estimator. Specifically, our estimator explicitly accounts for interference and remains unbiased regardless of the interference pattern (we will make this clearer in the paper), whereas Viviano et al.'s IS estimator is biased under interference. As such, although both minimize MSE, our optimization focuses exclusively on variance reduction whereas they must jointly consider both bias and variance. **Gain from repeated experiments**: Having repeated experiments allows us to accurately estimate the covariance function, which is crucial for our approach to achieve adaptivity. In comparison, such estimation is not feasible in Viviano et al.'s setting. As such, they adopted a minimax formulation that considers the worst case across all covariance functions. Following your suggestion, we conducted an ablation study during the rebuttal to distinguish the improvements from having repeated experiments. We consider single-experiment settings with certain prior knowledge regarding the covariance function, e.g., having access to a noisy covariance matrix. We kindly refer you to our response to Reviewer VLbx for the experimental results (see **Extensions to Single-Experiment Settings**). >Theoretical claim 1 The inequality holds due to the presence of $R$ in (2) and the indicator function $\mathbb{I}(\mathcal{N}\_{i'} \cap \mathcal{C} \neq \emptyset)$ in $I_1$. The key step in establishing this inequality lies in upper bounding the indicator by $\sum_{\ell \in \mathcal{C}} W_{\ell i'}$. 
This inequality holds because when $i'$ is adjacent to $\mathcal{C}$, there exists at least one $i\in \mathcal{C}$ such that $W_{ii'}=1$; see this [Figure](https://www.dropbox.com/scl/fi/r4zqqzmruzge7bu5zyklz/Ncns_Theory.pdf?rlkey=3talrjlyq4v2zdxnozv9dzsim&st=7j7r4x4y&dl=0) for a graphical illustration. When restricting to two clusters, this leads to bounding $I_1$ by two triple sums $\sum_{i\in \mathcal{C}\_1}\sum_{i'\in \mathcal{C}\_2}\Sigma_{ii'}^+\sum_{\ell\in \mathcal{C}\_1}W_{\ell i'}$ and $\sum_{i\in \mathcal{C}\_2}\sum_{i'\in \mathcal{C}\_1}\Sigma_{ii'}^+\sum_{\ell\in \mathcal{C}\_2}W_{\ell i'}$. The outermost summation in each triple (over $i$) produces terms proportional to the cluster sizes $|\mathcal{C}\_1|$ and $|\mathcal{C}\_2|$, whose sum adds up to $R$ (the number of regions). As such, the sum of all units in the other clusters is accounted for by the factor $R$ rather than terms that depend only on the boundary regions. >Theoretical claim 2 & Question 2 Three conditions are needed to consistently estimate covariance: * Each $g_i$ can be consistently estimated; * Each error $e_i$ is additive, i.e., independent of the covariates and treatment; * The error-covariates pairs are independent across experiments. *Note*: The independence condition (3) can be relaxed to a certain mixing condition that allows for temporal dependence, provided the dependence decays sufficiently quickly over time (Bradley, 2005, Probability Surveys). >Theoretical Claims 3 There are some differences between the two approaches, which we clarify below. In our proposal, we remove only the high-order interference term, and keep the first-order interference term in the objective function. On the contrary, Viviano et al. impose a weak interference assumption that removes all relevant interference terms when calculating the worst-case variance (see their proof of Lemma 3.2). 
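The bounding step described above can also be sanity-checked numerically: for a 0/1 adjacency matrix, the indicator that $i'$ neighbors the cluster never exceeds the number of its neighbors inside the cluster. A small sketch (the random graph and the cluster choice are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
R = 30
# Random symmetric 0/1 adjacency matrix W with no self-loops.
W = np.triu(rng.integers(0, 2, size=(R, R)), k=1)
W = W + W.T
C = np.arange(R // 2)  # an arbitrary cluster: regions 0..14

for i2 in range(R // 2, R):            # regions i' outside the cluster
    indicator = int(W[C, i2].any())    # I(N_{i'} intersects the cluster)
    assert indicator <= W[C, i2].sum() # bounded by the count of neighbors in C
print("indicator bound holds for every region outside the cluster")
```

The inequality is immediate for binary entries (if any entry is 1 the sum is at least 1), which is exactly what lets the indicator be absorbed into the boundary-counting term $\sum_{\ell} W_{\ell i'}$.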
>New experiments The baselines' inferior performance partially stems from their reliance on IS, while our method, ID and GD employ DR. To mitigate this effect, we conducted a new experiment by increasing variance of the random error, so as to reduce the impact of variance reduction by DR. [Results](https://www.dropbox.com/scl/fi/8rtxjnuj58an93z5pbc94/Ncns_Experiment2.pdf?rlkey=mueh74ksm8lxlao76w6ywwffu&st=lxfsa0yy&dl=0) show that the modification reduces (but does not close) the performance gap between our method and the baselines compared to Figs 5 & 6. Meanwhile, we conducted another experiment under a combination of (i) various covariance structures, (ii) the magnitude of correlation $\rho$, and (iii) the number of grids $R$ to report the contributions of SC, $I_1$, $I_2$. [Results](https://www.dropbox.com/scl/fi/x1wk3ck2x9goq2rphcrwu/Ncns_Experiment3.pdf?rlkey=rvgdg0n86i3r4ox5u1essfiii&st=53swq9tn&dl=0) suggest that $I_2$ can be proportional to $I_1$. This is because the plots visualize the MSE of our estimated optimal design rather than the oracle optimal design. Since our objective function involves only $I_1$ and not $I_2$, it reduces $I_1$ by increasing $I_2$, making $I_2$ comparable to $I_1$. >Question 1 Will revise the definition as an average over $R$ regions to maintain consistency with the literature. --- Rebuttal Comment 1.1: Comment: Thank you for your careful responses and for the additional experiments. I am still slightly confused as to why ID and GD perform so much better than the other methods also in the case with a single experiment that your provided during the rebuttal. I think clarifying why your method works better than the alternatives in the single experiment setting but the others do not would help clarify the properties of your method relative to the literature. --- Reply to Comment 1.1.1: Comment: Thank you for bringing up your question. We appreciate the opportunity to further clarify your confusion. 
**Comparison between ID, GD and the baseline designs**. In the single-experiment scenario, ID and GD outperform the three baseline estimators because these baseline methods rely on IS for ATE estimation, while ID and GD utilize DR (as in our proposal) to reduce the variance of IS. To illustrate this, we conducted additional simulation studies to replace the DR estimators in GD and ID with IS estimators. As shown in the [results](https://www.dropbox.com/scl/fi/y93rkdvj0akwl4pctnyo2/NcNs_IS.pdf?rlkey=slh37ys3qet4fdgeggepm56lm&st=2ewx585e&dl=0), both GD and ID perform noticeably worse than the three baseline estimators. **Comparison between our design and the baseline designs**. There are two key advantages of our design over the baseline designs: * Unlike the baseline designs, which are derived from IS, our design is directly derived from DR. DR is expected to perform better than IS in terms of ATE estimation. * Our design is adaptive to different spatial covariance functions, while the baseline designs do not adapt and instead typically rely on a minimax formulation. In multi-experiment settings, our method achieves adaptivity by estimating the covariance function using data from previous experiments. In single-experiment settings, however, adaptivity requires certain prior knowledge about the covariance function. During the rebuttal, we conducted additional simulations that increased the variance of the error term to reduce the variance-reduction effect of DR and to demonstrate the second advantage (see our response in the New experiments section). To further demonstrate this advantage, during this round, we also applied DR for ATE estimation in the three baseline designs, despite them being originally derived from IS. The results show that the second advantage is particularly valuable in settings with non-stationary covariance functions. 
Specifically, it can be seen from [Figure](https://www.dropbox.com/scl/fi/gm1wpfo5p9qtqcwrf7wj6/Ncns_DR.pdf?rlkey=t23hksuzv5yakqe619lj7dpb5&st=hck9gl5a&dl=0) that our method produces better clusters than the three baseline methods, consistently resulting in lower MSEs. Meanwhile, ID performs worse than the three baseline methods. As for GD, it was not included in this comparison because it cannot identify the function $g$ in the single-experiment setting — where only treatment or control data is available, but not both. In our previous response, GD was included by assuming $g$ to be a constant function of the treatment, which yields a relatively small MSE in cases with weak average treatment effects. It does not work when either the treatment effects are large or IS is used for ATE estimation (as evidenced in our first comparison).
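For readers less familiar with the IS/DR distinction invoked throughout the exchange above, the variance-reduction effect of DR can be reproduced in a few lines (a textbook-style sketch with a known Bernoulli propensity and an illustrative linear outcome model, not the paper's simulator or estimators):

```python
import numpy as np

rng = np.random.default_rng(1)
n, pi, tau = 5000, 0.5, 2.0
X = rng.normal(size=n)
A = rng.binomial(1, pi, size=n)
Y = tau * A + X + rng.normal(size=n)  # true ATE = 2

# Importance sampling (Horvitz-Thompson) estimator.
is_hat = np.mean(A * Y / pi - (1 - A) * Y / (1 - pi))

# Doubly robust estimator with an outcome model g(a, x) fit by least squares.
Z = np.column_stack([np.ones(n), A, X])
beta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
g1 = beta[0] + beta[1] + beta[2] * X  # g(1, X)
g0 = beta[0] + beta[2] * X            # g(0, X)
dr_hat = np.mean(g1 - g0 + A * (Y - g1) / pi - (1 - A) * (Y - g0) / (1 - pi))

# Both are (nearly) unbiased; DR's spread is typically much smaller because
# the outcome model absorbs the variance that X contributes to Y.
print(f"IS: {is_hat:.3f}  DR: {dr_hat:.3f}")
```

This mirrors the claim in the reply: with only IS, the extra outcome variance dominates, while DR augments IS with the outcome-model terms and so keeps the estimator unbiased at a much lower variance when $g$ is well estimated.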
Summary: This work addresses the problem of experimental design under interference. The authors assume that the interference structure can be accounted for by a covariance matrix which is proposed to be estimated from data via repeated experimentation. The key insight of this work is a decomposition of the MSE of the global average treatment effect estimator into terms separating intrinsic variance from direct and indirect network components. This decomposition motivates the development of a graph cut based algorithm, with Bernoulli randomization performed over clusters. Empirical evidence provided via synthetic data and simulations shows strong performance in comparison with other commonly used methods. Claims And Evidence: Yes, overall I think the claims are well laid out, the evidence is also good, though I would have liked to have seen a slightly larger set of empirical results. Methods And Evaluation Criteria: The authors rely on sample splitting/cross fitting in order to estimate the outcome model. However, I don't see how we can assume we are able to effectively perform cross fitting without a lot more additional considerations. The authors assume that we are able to conduct the same experiment multiple times. This is a very strong assumption on multiple levels. For many settings in both industry and the social sciences it is infeasible to run the same experiment multiple times. Even when we are able to run the experiment multiple times we are assuming that we can assume independence across time, i.e., that having received treatment $i$ at round $t$ has no impact on future outcomes (which often fails to hold). If instead of assuming that we are running an experiment multiple times on the same population we are assuming that we are subsampling a population and running an experiment, we are in the regime of sampling independent but representative samples from a network, a highly nontrivial problem which will induce many of the problems the paper is seeking to avoid. 
Given this, it's not clear how this method works outside of bespoke settings without being able to have oracle access to the network structure. I'm curious if I've missed something or if the authors have a set of scenarios where we'd expect to have this structure. Theoretical Claims: Yes. I found the claims to be well founded. Experimental Designs Or Analyses: Yes, I did. As I mention above, I would have liked to see a larger range (in particular varying topologies, etc.) but what was performed is sound. Supplementary Material: Yes, I reviewed all supplementary material. Relation To Broader Scientific Literature: This paper builds on a growing literature on network experimentation. The key insight sits between work in the experimental design literature (e.g., "Optimal A Priori Balance in the Design of Controlled Experiments" by Kallus, the Gram-Schmidt Walk paper by Harshaw et al.) which seek to find designs where treatment status is negatively correlated with feature distance, and network experimentation (e.g., Ugander et al), where clustering is required to make plausible inference on global treatment effects. Essential References Not Discussed: Should probably cite the work on experimental design (see above), as well as Fatemi, Zahra, and Elena Zheleva. "Minimizing interference and selection bias in network experiment design." Proceedings of the International AAAI Conference on Web and Social Media. Vol. 14. 2020. Other Strengths And Weaknesses: I think the decomposition is nice. The insight has existed implicitly in the literature but it is useful to see it explicitly spelled out. Other Comments Or Suggestions: Please see above regarding the repeated experiments. Questions For Authors: See above, also in design it is known (see Kallus) that without further assumptions on the potential outcomes Bernoulli randomization is minimax optimal. It would be good to have this mentioned in the paper. 
My question is how that minimax-optimality result interplays with the assumptions necessary for estimating the GATE. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
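The cluster-level Bernoulli design discussed in this review can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function and variable names are ours, and the clustering is taken as given (in the paper it would come from the graph-cut step).

```python
# Hypothetical sketch of cluster-level Bernoulli randomization: units are
# grouped into clusters (e.g., by a graph-cut algorithm, assumed precomputed
# here), and each cluster is assigned to treatment or control by an
# independent coin flip, so all units in a cluster share one arm.
import numpy as np

def cluster_bernoulli_design(cluster_labels, p=0.5, seed=None):
    """Assign every unit the arm drawn for its cluster (1 = treated)."""
    rng = np.random.default_rng(seed)
    clusters = np.unique(cluster_labels)
    coin = (rng.random(clusters.size) < p).astype(int)  # one flip per cluster
    arm_of = dict(zip(clusters, coin))
    return np.array([arm_of[c] for c in cluster_labels])

labels = np.array([0, 0, 1, 1, 2, 2])  # toy clustering of six units
z = cluster_bernoulli_design(labels, p=0.5, seed=0)
# Units in the same cluster always share the same arm.
for c in np.unique(labels):
    assert z[labels == c].min() == z[labels == c].max()
```

The design choice being debated in this review is exactly the cluster size: with singleton clusters this reduces to individual (unit-level) Bernoulli randomization.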
Rebuttal 1: Rebuttal: > Methods And Evaluation Criteria We appreciate your thoughtful comments, which mainly concern our settings with repeated and independent experiments. We believe your concerns may arise from certain misunderstandings of our paper, and we appreciate the opportunity to clarify them. We address them in three ways: (1) clarifying our focus on spatial A/B testing; (2) demonstrating that our approach is readily applicable to single-experiment settings, supported by promising numerical results; (3) showing that our approach can be extended to handle carryover effects (the delayed treatment effects you mentioned). (1) **Focus of this paper**. We clarify that this paper focuses on the **spatial** setting -- specifically, Example 1 (A/B testing in marketplaces) -- and **not network** A/B testing. While many designs from network experimentation can be adapted to our setting (and we compared them numerically), our primary focus, as stated on Page 1 (the last sentence), remains Example 1. In applications like **ridesharing**, experiments can be conducted daily, with data across days treated as independent. This is due to the drop in demand early in the morning (1 -- 5 AM), ensuring each day's data represents an independent realization. Such settings are well adopted in prior work (Li et al., 2023, NeurIPS; Li et al., 2024, ICML; Luo et al., 2024, JRSSB). Another application occurs in **marketing auctions**, where daily budget resets eliminate carryover effects, making the independence assumption plausible (Basse, 2016, AISTATS; Liu, 2020, arXiv:2012.08724). (2) **Extensions to single-experiment settings**. Our core methodology does not rely on the assumption of repeated experiments. It remains applicable to single-experiment settings when we have prior knowledge of the underlying covariance matrix, either from a pilot study or from historical data. 
To reflect this practical requirement, we used a proxy covariance matrix, obtained by injecting noise into the true covariance matrix, and conducted additional experiments during the rebuttal. The [results](https://www.dropbox.com/scl/fi/cnbq91x32ygqfj7k6kb1t/VLbx_NoRepeat.pdf?rlkey=8rn9ual6o8nv2qbbhzrtx2ds9&st=cg00de5r&dl=0) demonstrate that our estimator: (i) maintains optimality against existing methods, (ii) achieves near-oracle performance (comparable to the oracle method with the true covariance matrix), and (iii) remains robust to the approximation errors. We also remark that while cross-fitting helps simplify the theory (by avoiding the need to impose VC-class conditions on $g$ to establish the asymptotics of the ATE estimator), our method remains effective without it -- the numerical results above were obtained without cross-fitting. (3) **Extensions with carryover effects**. The target here becomes the cumulative ATE aggregated over time. To handle carryover effects, we can use existing doubly robust estimators from the RL literature (Kallus & Uehara, 2022, OR); see also the first equation on Page 18 of Yang et al. (2024). The resulting estimator maintains a similar form to ours, with two modifications: (i) the function $g$ is replaced by a Q-function, and (ii) the IS ratio is replaced by its marginalized counterpart (Liu et al., 2018, NeurIPS). Treatments over time can be assigned using switchback designs, which are widely adopted (Bojinov, 2023, MS; Xiong et al., 2024, arXiv:2406.06768). As shown in Theorem 5 of Yang et al. (2024), the MSE of this ATE estimator has a similar closed-form expression, enabling us to apply the same decomposition and design a similar surrogate loss for optimization. > Experimental Designs or Analyses In the rebuttal, we considered other topologies and conducted additional experiments. 
Results reported [here](https://www.dropbox.com/scl/fi/hrhn52rrbavnrt9heokaz/Vlbx_Topologies.pdf?rlkey=3zcr40ukuyysaj3z8x3flycdo&st=2h15akph&dl=0) demonstrate the advantages of our proposal over the baselines. > Essential References We are happy to include these additional references and discuss their findings as you suggested. However, as discussed above, we primarily focus on the **spatial** setting, so references on network experimentation might not be essential. > Bernoulli randomization Bernoulli randomization is the same as our individual design. While Kallus (2018) established its minimax optimality without additional assumptions, we also proved its optimality in Proposition 2 in the absence of spatial interference. However, in settings with spatial interference, such a design is no longer guaranteed to be optimal. Specifically, Theorem 3.4 of Viviano et al. (2023) shows that cluster randomization outperforms Bernoulli randomization when the interference is not too weak. We also provide theoretical (Proposition 1) and numerical (extensive experiments) evidence that Bernoulli randomization becomes suboptimal in such cases.
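The suboptimality of Bernoulli randomization under interference, the point closing the rebuttal above, can be illustrated with a toy Monte Carlo of our own (not from the paper): interference occurs within pairs of units, and the hypothetical parameters `tau` and `gamma` denote direct and spillover effects. A difference-in-means under individual Bernoulli randomization recovers only the direct effect, while randomizing whole pairs (clusters) recovers the full GATE.

```python
# Toy simulation: unit i's outcome is tau * z_i + gamma * z_partner, where
# each unit's single "neighbor" is its pair partner. The GATE (all treated
# vs. none treated) is tau + gamma. Individual Bernoulli randomization
# averages out the spillover term and estimates only tau.
import numpy as np

tau, gamma = 1.0, 0.5          # hypothetical direct and spillover effects
GATE = tau + gamma             # everyone treated vs. no one treated
rng = np.random.default_rng(0)
n_pairs, n_reps = 2000, 200

def diff_in_means(cluster_level):
    """Average difference-in-means estimate over repeated randomizations."""
    ests = []
    for _ in range(n_reps):
        if cluster_level:      # one coin flip per pair (cluster design)
            z_pair = rng.random(n_pairs) < 0.5
            z = np.repeat(z_pair, 2).astype(float)
        else:                  # one coin flip per unit (individual design)
            z = (rng.random(2 * n_pairs) < 0.5).astype(float)
        z_nb = z.reshape(-1, 2)[:, ::-1].ravel()  # partner's assignment
        y = tau * z + gamma * z_nb
        ests.append(y[z == 1].mean() - y[z == 0].mean())
    return float(np.mean(ests))

assert abs(diff_in_means(cluster_level=True) - GATE) < 0.05   # ~ tau + gamma
assert abs(diff_in_means(cluster_level=False) - tau) < 0.05   # misses gamma
```

This mirrors the Viviano et al. (2023) comparison cited above in the simplest possible topology; with `gamma = 0` (no interference) the two designs coincide in expectation, consistent with the minimax-optimality discussion in the review.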