A New Neural Kernel Regime: The Inductive Bias of Multi-Task Learning
Accept (poster)
Summary: This paper analyzes the solutions to a multi-task training objective that entails finding a two-layer ReLU network that interpolates all training data and has minimum l2 norm of the first- and second-layer weights, in the overparameterized setting in which the number of neurons is larger than the total number of training examples across tasks. In the case wherein the data dimension is one (univariate case) and under a very weak task diversity condition, the paper proves that the solution to the multi-task training objective is unique and equal to the "connect-the-dots" interpolator, which aligns with the solution to a particular kernel optimization problem with a certain kernel. On the other hand, prior results, as well as empirical results presented in this paper, show that the solutions to the analogous single-task training objective are not unique and may be overly-complicated, non-robust interpolators. The paper takes an initial step towards extending the univariate multi-task learning results to the multivariate setting with preliminary analysis and experiments suggesting that the multi-task solution is unique, unlike the single-task solutions. Strengths: - The writing is mostly clear and easy to follow. - The univariate result is interesting. Such a clear separation between the behavior of single-task and multi-task learning, with only a very weak assumption on the tasks, is arguably unprecedented in the literature to my knowledge, and is an important step towards explaining why multi-tasking can lead to more robust models. I checked the proof and did not notice any mistakes. - The paper studies a nonlinear model, whereas most existing characterizations of solutions to multi-task objectives are limited to linear models. - In the multivariate case, the provided intuitions are sound, and the experiments evince the validity of the approximations in the discussion and the uniqueness of the solution. 
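For concreteness, the training objective summarized above can be sketched in a few lines of NumPy. This is a hypothetical setup: the sizes, penalty strength, and random data are illustrative, and the soft penalty stands in for the exact interpolation constraint described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: n shared univariate inputs, T tasks, width m > n * T.
n, T, m = 5, 3, 32
X = np.sort(rng.uniform(-1.0, 1.0, size=n))   # inputs shared across all tasks
Y = rng.normal(size=(n, T))                   # one label column per task

# Two-layer ReLU network with a multi-output head: f(x) = V @ relu(w * x + b).
w = rng.normal(size=m)                        # first-layer weights
b = rng.normal(size=m)                        # biases (typically left unregularized)
V = rng.normal(size=(T, m)) / np.sqrt(m)      # second-layer weights, shape (T, m)

def forward(x):
    # relu(outer(x, w) + b) has shape (len(x), m); output has shape (len(x), T)
    return np.maximum(np.outer(x, w) + b, 0.0) @ V.T

def penalized_objective(lam=1e-5):
    fit = np.sum((forward(X) - Y) ** 2)       # data-fitting (MSE) term
    reg = np.sum(w ** 2) + np.sum(V ** 2)     # squared l2 norm of both weight layers
    return fit + lam * reg
```

As lam shrinks toward zero, minimizers of this penalized objective approach the minimum-norm interpolants the review discusses.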
Weaknesses: - The most interesting results are limited to the univariate case. In the multivariate case, the analysis is not rigorous. Also, the conclusion that multi-tasking behaves like a kernel method is underwhelming because this kernel is unknown and depends on $\mathbf{v}^*$ and $\mathbf{w}_k^*$ and $b_k^*$ for $k=2,…,K$, and characterizing these reduces to the original problem of characterizing the solution(s) of the multivariate multi-task learning problem. Along this line, while the multivariate experiments are helpful for the reasons discussed above, they don’t help to characterize the kernel, and in particular they don’t show an analogous solution to the connect-the-dots solution for the univariate case. - In addition to being primarily focused on the univariate case, the results are limited to the setting in which the overparameterized networks are trained to exactly fit the training data, and all tasks have exactly the same input data, neither of which may be the case in practice. - More discussion could be given regarding why the fact that multi-task learning behaves like a kernel method is important/interesting, and why the kernel in the univariate case is conducive to generalization. - The optimization algorithm used in the experiments is not discussed. If the algorithm is gradient descent on the Lagrangian, then this suggests that the multi-task augmented loss is convex, and the single-task augmented loss is non-convex, which is possibly an important point that the paper is missing. Related, for the experiments showing that single-tasking with different initializations can lead to different solutions, it would be helpful to share the final values of the training objective, to confirm that all functions are global optima. Minor - Nit-picking, but the use of “unlikely” to describe the alignment of $\mathbf{s}_i - \mathbf{s}_{i-1}$ and $\mathbf{s}_{I+1} - \mathbf{s}_{I}$ is not quite correct because there is no assumed generative model for the data. 
Under many distributions, such alignment is very likely. - Related: I suggest that the authors refer to the condition that $\mathbf{s}_i - \mathbf{s}_{i-1}$ and $\mathbf{s}_{I+1} - \mathbf{s}_{I}$ are not aligned as a task diversity condition. If all the tasks are the same, it fails (which is an important sanity check that should be pointed out). However, if the tasks have only a small amount of diversity in some sense, it holds. Previous studies of multi-task learning require some level of task diversity to learn generalizable models, e.g. [1,2,3,4]. - Again, related: there is a large body of work studying solutions found by multi-task learning and their generalization capabilities [1,2,3,4, etc.] that the paper does not address. Like the current paper, these works also require task diversity, so they are more related than the paper’s brief discussion suggests. - Citations should be changed to parenthetical. Technical Quality: 3 Clarity: 3 Questions for Authors: N/A Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have adequately addressed limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
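The task diversity condition proposed in the review can be sketched as a simple non-alignment check. This is a hypothetical stand-in for the paper's exact condition on the vectors $\mathbf{s}_i$: here consecutive difference vectors are tested for positive alignment, which captures the sanity check that identical tasks fail the condition.

```python
import numpy as np

def diverse_enough(S, tol=1e-9):
    """Check that no consecutive difference vectors s_i - s_{i-1} are
    positively aligned -- a stand-in for the task diversity condition."""
    D = np.diff(S, axis=0)                        # rows: s_i - s_{i-1}
    for d1, d2 in zip(D[:-1], D[1:]):
        c = d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2))
        if c > 1.0 - tol:                         # aligned -> condition fails
            return False
    return True

# Identical tasks: every difference vector lies on the diagonal, all aligned.
S_same = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
# Slightly diverse tasks: generic labels break the alignment.
S_div = np.array([[0.0, 0.0], [1.0, 0.5], [1.5, 2.0]])
```

With these examples, `diverse_enough(S_same)` fails while `diverse_enough(S_div)` passes, matching the reviewer's observation that even a small amount of task diversity suffices.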
Rebuttal 1: Rebuttal: We thank the reviewer for their careful review of our work and the concerns brought up. Below, we address each of the weaknesses in a point-by-point manner. 1. **[Multi-variate case]** The results for the multivariate case (which also apply to the univariate setting) are more of an approximate argument. This is partially due to the fact that far less is known about the solution sets in the multivariate case (see e.g. [3], [4], [8]) compared to the complete characterization which is known for univariate solutions ([2]). The fact that multivariate solutions can also display variety in orientation, as evidenced in the example of Figure 5, hints at the potential richness of the solution sets. Under the iid task model used in our analysis, we believe that the approximations assumed in the analysis can be made more rigorous, which is of interest for future work. We disagree that the results are underwhelming. The fact that multi-task neural network training behaves like an $\ell^2$ minimization over linear combinations of the learned neurons, whereas single-task training is equivalent to $\ell^1$ minimization over the neurons, implies that the nature and properties of the functions learned by multi-task vs. single-task training can be profoundly different. This is, in our opinion, novel and interesting. 2. **[Interpolation]** We note that the more approximate argument presented in the multivariate case (which also applies to the univariate case) characterizes non-interpolation training problems which minimize a data-fitting term plus a penalized loss (for any penalty strength). Neural network interpolation problems are also a setting of practical interest, since many overparameterized networks trained in practice are able to achieve zero or near-zero training error. 
Moreover, standard multi-task learning scenarios generally assume that all tasks share the same set of input data points, and these tasks are learned jointly, which is exactly the setup we consider. 3. **[Relevance of kernel regime]** The fact that multi-task learning behaves like a kernel method is important/interesting because it shows that the solutions to each task can be profoundly different compared to those obtained by training a separate network for each, even if the tasks are statistically independent as assumed in our analysis. In other words, multi-task training can have a major effect on the learned functions for each task even in situations where one might not anticipate such an effect. This insight may be valuable in practice, since this effect of multi-task training may be desirable or undesirable depending on the application. Regarding generalization properties, see the general comment made to all reviewers above. 4. **[Non-convex objective]** As discussed in Appendix 8, all experiments except that in the bottom right of Fig. 5 are done by training multi-output neural networks using the Adam optimizer with MSE loss and weight decay parameter 1e-5 (univariate experiments) or 1e-3 (multivariate experiments). Because the data-fitting term is non-convex and lambda is very small, the resulting problem is always non-convex. For the experiments in Fig. 4 showing that multi-task training with different random initializations leads to the same solution, it is not necessary to check the objective values since we know a priori (by our result in Theorem 3.1) that the learned functions depicted in the figures are global minimizers. However, if it is helpful to the presentation, we can include these numbers. For the multivariate experiments in Fig. 5, the global optimality of these solutions was verified using the convex neural networks approach of Pilanci and Ergen [8]. Regarding the minor weaknesses: 1. 
We agree that the use of the word "unlikely" here is imprecise since no distributional assumptions are given on the data/labels. We note that Corollary 1 (Appendix 7.2) of Theorem 3.1 implies that, if the task labels are sampled i.i.d. from an absolutely continuous distribution (e.g. Gaussian) and the number of tasks T is greater than 1, then those labels admit only the connect-the-dots solution with probability one. So for example, if the labels are given by ground truth values plus some additive Gaussian noise—even a small amount of noise—the unique multi-task solution is connect-the-dots. We will mention this explicitly and update the phrasing to be more precise. 2. Agreed. 3. We are unsure of what the reviewer is referring to for citations [1,2,3,4], but we can include some more discussion/mention of existing multi-task learning research which focuses on the benefits of unrelated tasks, and how our work relates to these, upon the reviewer's clarification. 4. Agreed. [2] Hanin, Boris. "On The Implicit Bias of Weight Decay in Shallow Univariate ReLU Networks." [3] Ongie, Greg, et al. "A Function Space View of Bounded Norm Infinite Width ReLU Nets: The Multivariate Case." ICLR 2020. [4] Zeno, Chen, et al. "How do minimum-norm shallow denoisers look in function space?." NeurIPS 2024. [8] Pilanci, Mert, and Tolga Ergen. "Neural networks are convex regularizers: Exact polynomial-time convex optimization formulations for two-layer networks." ICML 2020. --- Rebuttal Comment 1.1: Comment: I thank the authors for their thorough response, which has helped to alleviate my concerns, and I have increased my score as a result. I apologize for forgetting to include the citations, which are as follows: [1] Tripuraneni et al., On the theory of transfer learning: The importance of task diversity, NeurIPS 2020. [2] Du et al., Few-shot learning via learning the representation, provably, ICLR 2020. 
[3] Sun et al., Towards sample-efficient overparameterized meta-learning, NeurIPS 2021. [4] Collins et al., Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks, ICML 2024 --- Reply to Comment 1.1.1: Comment: We appreciate your response and thank you for increasing the score! Indeed, the works mentioned are very relevant and we will be sure to reference them in the camera-ready version of the paper.
Summary: The paper shows that multi-task training can benefit the single tasks, even if the tasks are unrelated. Assuming a particular setting of training a 2-layer neural network that finds a path norm interpolation solution on univariate data, the multi-task optimizer is unique and is given by the piecewise linear function that connects the training data points, which coincides with the minimum norm interpolant in a first-order Sobolev RKHS, whereas the corresponding single-task optimizer is non-unique and hence not the minimum norm interpolant with respect to any RKHS. Strengths: The fact that a multi-task learning objective can induce a kernel regime, in cases where the single-task setting is non-unique, is to the best of my knowledge novel and interesting. The solution and presentation are clear and simple and correspond to the piecewise linear function that interpolates the training data. The theoretical argument that motivates the relevance for multi-dimensional covariates is solid and empirically verified by a minimal experiment. The related work is adequately cited. Weaknesses: While investigating the benefits of multi-task settings has great potential, I cannot recommend the paper for acceptance. The points and questions below specify important limitations that remain to be addressed and essential aspects of multi-task training that were not explored: - Question 1 below raises a potential gap in the proof. - A general claim that the multi-task solution improves upon the single-task solution - an essential narrative in the paper - can clearly not be made, and depends on the exact functional dependence between covariates x and labels y. Canatar et al. (2021) is one example of how the generalization error of (other) neural kernel regimes depends on the task alignment. 
In the provided example in Figure 5, the ground truth function is arguably ambiguous and it might also be desirable to reflect this uncertainty by being able to learn several reasonable solutions as the single-task solution does. At least a more detailed empirical evaluation in varying input dimension, varying task signal strength and alignment could have yielded valuable insights with regard to uniqueness and generalization in various settings. Even with a negative outcome, the results would have yielded important information about the practical implications and limitations of the univariate theory. - The implications and limitations for practical settings are unclear due to a lack of empirical evaluations. The experiments only include one task with signal and otherwise pure noise tasks (see questions 2 and 3 below), and at most cover 2 input dimensions with constructed symmetric data points. **Typos:** in lines 26, 146, 222, before 235: be be, eq (3): f_theta(x_i)=y_i **References:** Canatar, A., Bordelon, B. & Pehlevan, C. Spectral bias and task-model alignment explain generalization in kernel regression and infinitely wide neural networks. *Nat Commun* **12**, 2914 (2021). Technical Quality: 2 Clarity: 3 Questions for Authors: - Assume an optimal solution does not contain a training point x_i as a knot. How can you transform this solution into the linear spline containing only the x_i as knots? Do all optimizers already contain all x_i as knots? Can x_i be introduced without increasing the representational cost? I do not see how this case is covered in the proof and would appreciate a clarification. - Is there an intuitive explanation of when and why the multi-task solution improves the individual tasks too? 
In particular, does your example with all tasks except one being pure noise tasks induce some regularizing implicit bias or does this conclusion also hold for more typical multi-task settings where there is a significant amount of signal in each task, and differing amounts of alignment between them? - Practical data sets are not expected to be as symmetrical as the x_i in the 2-dimensional example of Figure 5. Is the non-uniqueness of single-task solutions less severe under continuous iid sampling of the x_i? - The results of Boursier and Flammarion (2023) indicate that regularizing the biases can make an important difference in the optimal solutions. Could your multi-task theory be extended to this case and would the solution also be sparsified compared to the current objective without bias regularization? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The implications and limitations to practically optimized neural networks, multi-dimensional data and tasks with varying degrees of alignment and anti-alignment should have been assessed, at least empirically. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their careful review of our work and for their feedback. Below, we address each of the comments individually. 1. **[Clarification on Theorem 3.1]** We thank the reviewer for this clarifying question about Theorem 3.1. Lemma 3.2 says that, given any solution which interpolates the data, we can remove all knots located away from the data points ("extraneous knots") without increasing the representational cost of the network. Removing these "extraneous knots" may introduce new knots at the data points which were not originally present, but Lemma 3.2 says that the total cost $R(0) = \|b-a\|_2 + \|c-b\|_2$ of the new solution with knots at the data points (see Fig. 3) is no greater than the cost $R(\delta) = \| \delta + b-a \|_2 + \frac{1}{1-\tau} \| \delta \|_2 + \| c-b + \tau \delta/(1-\tau) \|_2$ of the original solution, which may or may not have knots at the data points. Therefore, the question does not present a gap in the proof. Please let us know if this clarifies the concern. 2. **[Generalization]** See the general response to all reviewers above. When we referenced generalization, we specifically meant to reference the "tempered overfitting" result in [3], which is specific to the univariate case. 3. **[Experiments]** See the general response for details on new experimental results. Regarding questions: 1. We address this question in the rebuttal to weaknesses above. 2. In the univariate case, the intuition is that multi-output weight decay regularization encourages neuron sharing across outputs ([6]) while also encouraging the weights of each output individually to be "small," i.e., encouraging each output individually to be a min-norm univariate interpolant (which are characterized in [2]). In general, the only way to accomplish this is for each output to perform connect-the-dots interpolation, with the shared knots located at the shared data points. The multivariate case is more subtle. 
The example in Figure 5 was constructed specifically to illustrate the existence of multiple solutions with significantly different "directions" of variation. We agree that it is difficult to argue that any particular one of these solutions is better, but the more symmetric solution obtained by multi-task training does stay closer on average to the data points (we verified this computationally). We used pure noise tasks here for illustration and convenience, but similar results are obtained if the multiple tasks are randomly perturbed versions of the two-squares task. Intuitively and more generally, since the multi-task solutions are approximately equal to the kernel solution, these functions will be weighted combinations of all neurons in the model and rarely strongly aligned with a subset of the neurons. Thus, the multi-task solutions tend to be smoother and have more symmetry. 3. In the new experiments conducted we do not rely on any special structure for the data samples. In this case we still observe a striking difference between solutions learned by multi-task networks vs. single-task networks trained individually. 4. This is an interesting theoretical question for future work. Our initial experiments (not shown) indicate a similar kernel-like effect when regularizing biases in multi-task settings. [2] Hanin, Boris. "On The Implicit Bias of Weight Decay in Shallow Univariate ReLU Networks." [6] Shenouda, Joseph, et al. "Variation Spaces for Multi-Output Neural Networks: Insights on Multi-Task Learning and Network Compression." JMLR 2024. --- Rebuttal Comment 1.1: Comment: Thank you for clarifying my confusion with the proof. I am still unable to find a statement anywhere in the proof that discusses introducing knots and why this does not increase the representational cost. I encourage the authors to clarify that in the proof. In the added experiments, it is not surprising that the single-task and multi-task solutions differ; the question is how. 
The fact that more neurons are active and they are shared across tasks is an interesting mechanistic understanding in this regard, but I would like to see the correspondence to Figure 1 in the attached pdf of single task NNs to have a comparison. Also, in Figure 2, it now seems that the single-task solutions are the piecewise linear interpolants and without variation across random seeds, which questions the generalization of the original interpretation in the paper. In this example, the multi-task solution does not seem to be desirable but instead further away from the piecewise linear interpolant of the data than the single task solution. It also remains unclear how task alignment or high dimensionality impacts the conclusions drawn in the paper. Overall, I will keep my score. --- Reply to Comment 1.1.1: Title: Follow-up to reviewer XF9Q Comment: We thank the reviewer for engaging in the rebuttal. We want to further clarify that we are only considering interpolating solutions for the univariate setting. Therefore, we only consider solutions which pass through the data points exactly: it is never necessary to introduce additional knots in order to have the function interpolate the data points. However, if we consider an interpolating solution which has some "extraneous knots" (knots which are not located at the data points) and we remove these, additional knots may appear at the data points. Lemma 3.2 says that doing this will never increase the representational cost, and almost always decrease it, even if doing so results in new knots appearing at the data points. While we believe that this is clear in the current statement of the lemma, we are happy to update our language to make this more explicit. Regarding the new experiments, we would like to clear up some possible misunderstandings about our setup and claims. > In the added experiments, it is not surprising that the single-task and multi-task solutions differ; We disagree with this claim. 
In our experiment each task is entirely independent of the rest so there is no reason to assume that the network would exploit some shared representation. The network could have learned the ground truth input weight for each task (a single neuron) and then learned a diagonal output weight matrix for the output weight of each of those teacher tasks. Due to the unrelatedness of the different tasks, this would seem like a reasonable solution; however, it is not the solution ultimately learned. > the question is how. For the univariate case **we precisely describe how the solutions are different**. Training a single task network can lead to many solutions that interpolate with minimum representation cost. However, doing multitask training almost always leads to the unique “connect-the-dots” interpolant for each task. Moreover, this interpolant is also the solution to minimum norm data interpolation in a particular RKHS for each task individually and we explicitly provide the kernel associated with this RKHS. The results for the multivariate case (which also apply to the univariate setting) are more of an approximate argument. This is partially due to the fact that far less is known about the solution sets in the multivariate case (see e.g. [1],[2], [3]) compared to the complete characterization which is known for univariate solutions ([4]). However, **our analysis and experiments do provide insights into the difference between single-task and multi-task solutions**. Multi-task neural networks learn solutions that behave like an $\ell_2$ minimization over linear combinations of the learned neurons, whereas single-task neural networks learn solutions that behave as an $\ell_1$ minimization over linear combinations of the learned neurons. 
This is, in our opinion, novel and interesting and as pointed out by reviewer `aewJ` is **arguably unprecedented in the literature.** > but I would like to see the correspondence to Figure 1 in the attached pdf of single task NNs to have a comparison As mentioned in the experimental details in the general response, for the single task networks there are only 5-10 active neurons at the end of training. > Also, in Figure 2, it now seems that the single-task solutions are the piecewise linear interpolants and without variation across random seeds, which questions the generalization of the original interpretation in the paper. In this case the ground truth tasks are single ReLU neurons and thus we do not expect non-uniqueness in the solutions here (the function which represents a single ReLU neuron with minimum representation cost is simply a ReLU neuron). The purpose of this experiment was to emphasize the difference between single-task solutions and multi-task solutions in a more extreme setting and in higher dimensions. > In this example, the multi-task solution does not seem to be desirable but instead further away from the piecewise linear interpolant of the data than the single task solution. Our paper does not claim that multi-task solutions are **always** desirable. The main message is that they are different. We have clarified our use of the phrase “generalizes better” in the general response. > It also remains unclear how task alignment or high dimensionality impacts the conclusions drawn in the paper. The new experiments are in higher dimensions ($\mathbb{R}^5$). We agree that exploring how task alignment affects these solutions would be interesting, but our main message is that multi-task training in ReLU neural networks can be vastly different from single-task training **even under very weak assumptions on the tasks**.
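The $\ell_2$-vs-$\ell_1$ contrast invoked in this reply can be illustrated on a toy problem: a single interpolation constraint over a fixed dictionary of neuron activations. This is a hypothetical example (not the paper's construction); it only shows the generic behavior that minimum-$\ell_2$ solutions spread weight over all neurons while minimum-$\ell_1$ solutions concentrate on few.

```python
import numpy as np

# One interpolation constraint a @ c = 1 over a fixed "dictionary" of m
# neuron activations a (the values of each neuron at a single data point).
rng = np.random.default_rng(1)
m = 8
a = rng.uniform(0.1, 1.0, size=m)             # positive activations

# Minimum-l2 solution: c = a / ||a||^2, spreads weight over all neurons
# (the kernel-like behavior attributed here to multi-task training).
c_l2 = a / np.dot(a, a)

# Minimum-l1 solution: put all weight on the single largest activation
# (the sparse behavior attributed here to single-task training).
c_l1 = np.zeros(m)
j = np.argmax(a)
c_l1[j] = 1.0 / a[j]

# Both satisfy the interpolation constraint exactly.
assert np.isclose(a @ c_l2, 1.0) and np.isclose(a @ c_l1, 1.0)
```

The dense $\ell_2$ minimizer is what a kernel method would return; the one-hot $\ell_1$ minimizer uses a single neuron, mirroring the "5-10 active neurons" observation for single-task networks.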
Summary: The authors investigate the properties of solutions to multi-task shallow ReLU neural network training problems. The paper reveals a novel connection between neural networks and kernel methods, particularly focusing on interpolating training data while minimizing the sum of squared weights in the network. The findings highlight that while single-task solutions are non-unique and can exhibit undesirable behaviors, multi-task training leads to unique solutions with desirable generalization properties. Strengths: The authors offer a robust theoretical framework, providing new insights into how multi-task training in neural networks can lead to fundamentally different solutions compared to single-task training. This framework connects multi-task neural network solutions to kernel methods. It rigorously proves that under certain conditions, the solutions are unique and analogous to those in an RKHS. Weaknesses: 1. The theorems show the difference between the single and multi-task solutions. However, the authors' argument makes it hard to determine the difference between the 1-task and 2-task solutions. 2. Although [1] used the neural tangent kernel (NTK) framework, different from the idea in this paper, [1] showed that the solution learned by overparameterized neural networks on $\mathbb{R}$ is linear interpolation in the single-task case, and it seems to be a generalizing solution under the authors' argument. This challenges the main contribution of this paper. 3. Under the RKHS framework (kernel methods), the author should quantify the generalization ability, like the generalization error bound, if they want to claim "the unique multi-task solution has desirable generalization properties that the single-task solutions can lack". 4. For the multivariate case, the exact nature of the kernel corresponding to the neural network solutions is not fully characterized. If the authors can solve the above problems, I will consider raising the score. 
[1] Generalization Ability of Wide Neural Networks on $\mathbb{R}$ Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In Equation (3), do you mean $f_{\theta}(x_i)=y_i$ instead of $f_{\theta}(x_i)=x_i$? 2. What happens when $x$ is different in different tasks? Do your conclusions still hold? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The study is limited to shallow ReLU networks. Extending these results to other activation functions and deeper network architectures remains an open question. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback, but it seems the reviewer has a **misunderstanding** about the results of our paper. Below, we address each of the comments individually. 1. **[1-task vs. 2-task solution]** Our result does indeed demonstrate the difference in solution between learning 1 task individually versus 2 tasks jointly. The description in Theorem 3.2 of conditions under which multi-task solutions are non-unique holds for any T > 1. 2. **[Relation to NTK]** Our results are based on established exact characterizations of the global minimizers of training neural networks with weight decay ([2], [1], [3], [4], [5], [6]). The global minimizers are non-unique in general, and in the univariate case, it is well known that the connect-the-dots solution is just one of (potentially infinitely) many solutions. **The NTK regime is a different, approximate analytical framework that relies on a number of assumptions which we do not employ in this work**. Namely, our analysis does not require that the learning rate be extremely small ("lazy training") or that the network have infinite width, both of which are necessary for the NTK regime, and it is now well established that such assumptions are insufficient for explaining the success of neural networks ([8], [9], [10]). Additionally, **the paper [7] cited by the reviewer does not address connections between neural kernel regime(s) and multi-task training/"task augmentation," which is one of the main novel contributions of our paper**. Finally, the conclusions about the results in [7] seem to be incorrect, as **[7] states that the solutions are *almost* linear interpolation under additional assumptions on the data**. In contrast our result states that the solutions are **exactly** connect-the-dots linear interpolation with very weak assumptions on the data which are almost always satisfied. 3. **[Generalization]** See the general response to all reviewers above. 
When we referenced generalization, we specifically meant to reference the "tempered overfitting" result in [3], which is specific to the univariate case. 4. **[Multi-variate case]** The kernel will depend on the specific neurons learned in training, as indicated in the paper. These will depend on the training data collectively across all tasks and the random initialization of the training process. However, the function learned for each task will be a standard kernel solution in terms of this kernel. In contrast, training separate networks for each task will, in general, produce different solutions that cannot be viewed as kernel solutions. Specifically, the solutions to single-task training with weight decay effectively minimize the $\ell_1$ norm of the output weights, as discussed in the paper and many of the referenced prior works. Regarding Question #2, this is an interesting setting, but our main focus in this paper is the standard multi-task learning scenario, in which all tasks share the same data points x. [1] Joshi, Nirmit, Gal Vardi, and Nathan Srebro. "Noisy interpolation learning with shallow univariate ReLU networks." ICLR 2024. [2] Hanin, Boris. "On The Implicit Bias of Weight Decay in Shallow Univariate ReLU Networks." [3] Ongie, Greg, et al. "A Function Space View of Bounded Norm Infinite Width ReLU Nets: The Multivariate Case." ICLR 2020. [4] Parhi, Rahul, and Robert D. Nowak. "What kinds of functions do deep neural networks learn? Insights from variational spline theory." SIAM Journal on Mathematics of Data Science 2022. [5] Savarese, Pedro, et al. "How do infinite width bounded norm networks look in function space?." COLT 2019. [6] Shenouda, Joseph, et al. "Variation Spaces for Multi-Output Neural Networks: Insights on Multi-Task Learning and Network Compression." JMLR 2024. [7] Lai, Jianfa, et al. "Generalization ability of wide neural networks on $\mathbb{R}$." [8] Jacot, Arthur, et al. 
"Feature Learning in $L_2$-regularized DNNs: Attraction/Repulsion and Sparsity." NeurIPS 2022 [9] Arora, Sanjeev, et al. "On exact computation with an infinitely wide neural net." NeurIPS 2019. [10] Damian, Alex, et al. "Neural networks can learn representations with gradient descent." COLT 2022. --- Rebuttal Comment 1.1: Title: Thank you for your response Comment: 1. Are you referring to 'Lemma 3.2'? While you've illustrated the difference, the explanation lacks a clear description of the nature of this difference. For instance, what distinguishes a single-task classification problem where $y_i \in \{0,1\}$ from a 'two-task' classification problem where $y_i = (y_{i,1}, y_{i,2})$ and $y_{i,1} = 1$ if $y_{i,2} = 0$? Alternatively, consider a scenario where a second task is created with random labels. What would be the implications for the original task? 2 & 3. I referenced [7] because it demonstrates that linear interpolation does not generalize well when the data contains noise. Are you suggesting that linear interpolation can indeed generalize under such conditions? --- Reply to Comment 1.1.1: Title: Follow-up to reviewer Snxf Comment: We thank the reviewer for following up on this! Apologies, we meant to refer to Theorem 3.1 (there is no Theorem 3.2). The statement of **Theorem 3.1** precisely describes the conditions on the data and labels under which the unique solution for each task is connect-the-dots for any number of tasks $T > 1$, including $T=2$. For any dataset with two tasks, the solution is non-unique if and only if the conditions of Theorem 3.1 are violated, **otherwise the solution is unique and it is the connect-the-dots solution for both tasks (i.e. the solution is of the form of the right plot on Fig. 1 or the bottom row of Fig. 4)**. This is the difference in the nature of the solution between single-task and multi-task learning. As a simple example, consider the data points {0, 1, 2, 3} and first task labels {0, 1, 1, 0}. 
A neural network trained on this task alone can interpolate the data either by a "peak" or a "plateau" (connect-the-dots) function with the same representational cost (similar to the example in Fig. 1). However, if we do multi-task learning with a second task with labels {0, 1, 0, 1}, then by Theorem 3.1 the neural network learns a unique solution which will be the connect-the-dots interpolant for both tasks, eliminating the possible "peak" solution for task 1 when learned by itself. It is possible to construct example sets of data and labels which admit multiple solutions by Theorem 3.1, but if we assume the task labels are real valued (as we do throughout the paper), i.e. a standard regression setting, the set of all such label sets has Lebesgue measure zero (see proof of Corollary 1 in Appendix 6.2). In other words, for real-valued labels, labels which admit non-unique solutions are exceedingly rare. A direct corollary of this fact is that, as long as the labels are sampled i.i.d. from a distribution which is absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}^T$, the multi-task solution is connect-the-dots with probability 1. If the labels are binary-valued, then non-uniqueness of solutions may not be as rare as in the real-valued case. For example, consider the two task setting above with task labels {0,1,1,0} and {1,0,0,1}. In this case, there can be multiple solutions to the multi-task training problem. However, our focus in this paper is on the regression setting, and in either case, Theorem 3.1 precisely describes the conditions under which solutions are and are not unique. 
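For concreteness, the connect-the-dots interpolant in the example above is just piecewise-linear interpolation through the shared data points; a minimal numerical sketch (using NumPy, with the example's data points and labels):

```python
import numpy as np

# Data points shared across tasks and the two task label sets from the example.
x = np.array([0.0, 1.0, 2.0, 3.0])
y_task1 = np.array([0.0, 1.0, 1.0, 0.0])  # admits both "peak" and "plateau" single-task solutions
y_task2 = np.array([0.0, 1.0, 0.0, 1.0])

def connect_the_dots(xq, x, y):
    """Connect-the-dots interpolant: piecewise-linear interpolation through the data."""
    return np.interp(xq, x, y)

# Between x=1 and x=2, the connect-the-dots solution for task 1 is the flat
# "plateau" at height 1, not a "peak" rising above the data.
print(connect_the_dots(1.5, x, y_task1))  # 1.0
print(connect_the_dots(1.5, x, y_task2))  # 0.5
```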
**Please let us know if this explanation of the difference between one-task and two-tasks solution is still unclear.** Regarding the question about generalization: [7] shows that, for uniformly-spaced data points the generalization error of ("almost") connect-the-dots interpolation is *lower-bounded* by a constant with probability 1 as the number of samples $N$ grows large. In contrast, [1] (the paper we reference when referring to "desirable generalization properties" of connect-the-dots interpolation) shows that, for data points sampled from an absolutely continuous distribution, the generalization error of (exact) connect-the-dots interpolation is *upper-bounded* by a constant with probability 1 as $N$ grows large. Setting aside the different assumptions employed in both works, these two statements are not incompatible with each other. By the taxonomy in [9], the result in [1] shows that the connect-the-dots solution exhibits "tempered overfitting" which is worse than "benign overfitting" but not as terrible as "catastrophic overfitting". We note however that **the main focus of the paper is to demonstrate the significant difference between single-task and multi-task solutions** and we will adjust our language about generalization benefits in the camera-ready version as described in the General Response. [1] Joshi, Nirmit, Gal Vardi, and Nathan Srebro. "Noisy interpolation learning with shallow univariate ReLU networks." ICLR 2024. [9] Mallinar, Neil, et al. "Benign, tempered, or catastrophic: Toward a refined taxonomy of overfitting." NeurIPS 2022.
Summary: Using the piece-wise linear data interpolation problem, the authors in this paper studied the solution obtained in single-task learning and that in multi-task learning, where interpolation functions are jointly obtained for multiple problems. Both numerical and empirical results are provided to show that multi-task learning very likely leads to a unique optimal solution across different initializations, while the solution resulting from single-task learning is very sensitive to initialization. In the paper, the analysis is done first for the case where the input is univariate; it is later extended to the multivariate case. Strengths: * The analysis is sound * The conclusions are interesting, providing another perspective on the advantage of multi-task learning over single-task learning. Weaknesses: * There is a typo in Eq. (3): it reads $f_{\theta}(x_i)=x_i$, where the latter $x_i$ should be $y_i$. * The analysis is based on a toy problem that is very simple, requiring, technically, only two layers of neurons with ReLU activation in the first layer and identity activation in the second. However, networks for practical applications are much more complicated. In my own experience, very different solutions are clearly observed while training deep neural networks in a multi-task learning setting with different random initializations. These observations do not align with the conclusions in this paper. I guess the underlying causes can be complicated, involving multiple factors; for example, as the authors pointed out, whether a globally optimal solution is reached during each training run may be in question. My concern is that, given the large gap between the toy setting in this paper and practical applications of deep neural networks, the real impact of this work or its significance is quite uncertain. 
Technical Quality: 3 Clarity: 3 Questions for Authors: None Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their review and positive score. Our analysis does focus on two-layer networks, but the conclusions of the multivariate analysis can be used to reason about the behavior of functions learned at layers within a deeper network. A ReLU layer in a deep network can be viewed as a shallow network (with inputs coming from the previous layer and outputs going into the next layer). If the outputs of this layer are sufficiently diverse, then the behavior can be similar to the kernel-like behavior indicated by our analysis. We have now also included new experiments in higher dimensions beyond the Fig. 5 example (see the general response above). Also, we agree that when training neural networks in multi-task settings, the solutions will not be exactly the same across different initializations, but this is not at odds with our results. In our setting, the input weights will depend both on the training data (collectively across all tasks) and the random initializations, but whatever input weights (and hence neurons) are learned, the output weights for each task are effectively minimizing a common weighted ridge regularization objective. We will update the language of the paper to make this point more clear. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response to my comments. After reading through the review comments from other reviewers and the authors' responses, I would like to keep my initial assessment of this paper.
Rebuttal 1: Rebuttal: ## General response to reviewers We thank the reviewers for their helpful feedback and careful review as well as the AC. Most reviewers agreed that our results are novel and provide new insights on multi-task learning with neural networks. In particular, - Reviewer XF9Q noted that our main result is "to the best of my knowledge **novel** and **interesting**" - Reviewer aewJ highlighted that our result "is arguably **unprecedented in the literature**" and "is an important step towards explaining why multi-tasking can lead to more robust models." Here we provide a rebuttal to some of the common concerns across reviews. ___ ### Clarified Claims on Generalization Multiple reviewers raised concerns about the statement in our abstract that "the unique multi-task solution has desirable generalization properties that the single-task solutions can lack." We will remove this sentence from the abstract, as it does not describe our paper's main contribution(s), and generalization is not our main focus. **The main focus of the paper is to demonstrate the significant difference between single-task and multi-task solutions**, even when the tasks are statistically independent, and to highlight a novel connection between multi-task neural network training and kernel methods. Our intention in mentioning the "desirable generalization properties" was specific to the univariate case, in which the connect-the-dots univariate interpolant is known to exhibit "tempered overfitting" ([1]). Informally, this means that the connect-the-dots solution generalizes reasonably well, as opposed to other minimum norm solutions that can generalize arbitrarily poorly (see [1] for technical details of this result). We will update our language to clarify that this is the "desirable generalization property" being referenced, and that this is not a contribution or focus of our paper, merely a reference to an existing result which motivates why our work is interesting. 
Several reviewers also noted that there is a typo in Equation 3: the correct expression should be $f_{\theta}(x_i) = y_i$, not $f_{\theta}(x_i) = x_i$. ___ ### New Experimental Results We have now included another experiment for the multi-variate setting that is higher dimensional and does not rely on the symmetric structure of the datapoints. In this experiment we generate $T=25$ random teacher networks with one random ReLU neuron. We then construct a multi-task dataset by evaluating the teacher networks on the same $n=25$ random samples $x_i \in \mathbb{R}^5$ sampled i.i.d. from a Gaussian distribution. Next, we train 25 single-output student networks on each task individually with $K=200$ neurons and one multi-task network on all tasks simultaneously. The single-output networks learn very sparse solutions consisting of only 5-10 neurons for each task. In contrast, the multi-task network consists of 155 active neurons, and each active neuron contributes to all of the outputs of the network. We plot the difference in the solutions obtained from training a student network on task $t$ independently of the rest vs. the $t^{\text{th}}$ output of the multi-task network along different directions in $\mathbb{R}^{5}$. We observe that the single-output student network trained on task $t$ learned the ground truth ReLU function and is a ReLU function in all directions. In contrast, for certain directions the $t^{\text{th}}$ output of the multi-task student network is very different from the single-output student network and is clearly not a ReLU function. This again supports our claim that the functions learned for each task via a multi-task neural network can be very different from the function learned by a neural net trained on each task individually. Please see Fig. 1, Fig. 2 and Fig. 3 in the attached PDF for the results. [1] Joshi, Nirmit, Gal Vardi, and Nathan Srebro. "Noisy interpolation learning with shallow univariate ReLU networks." ICLR 2024. 
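The multi-task data-generation step of this experiment can be sketched as follows. This is a minimal sketch: the sizes $T=25$, $n=25$, $d=5$ and the Gaussian inputs come from the experiment description, while the Gaussian teacher weights and the seed are our illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)  # seed is an arbitrary assumption
T, n, d = 25, 25, 5  # tasks, shared samples, input dimension (from the experiment)

# Shared inputs x_i ~ N(0, I) in R^5, used for all tasks.
X = rng.normal(size=(n, d))

# Each teacher network is a single random ReLU neuron; Gaussian weights are an assumption.
W = rng.normal(size=(T, d))
b = rng.normal(size=T)

# Multi-task label matrix: column t holds task t's labels relu(w_t . x_i + b_t).
Y = np.maximum(X @ W.T + b, 0.0)

print(Y.shape)  # (25, 25)
```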
Pdf: /pdf/67725cf2c674791b309303a8749286e6f7463cf1.pdf
NeurIPS_2024_submissions_huggingface
2024
Tree of Attributes Prompt Learning for Vision-Language Models
Reject
Summary: This paper proposes Tree of Attributes Prompt learning (TAP). Unlike previous works that rely on unstructured class descriptions, this approach distills structured knowledge graphs associated with class names from LLMs. Text/vision prompts and a vision-conditional pooling module are designed to extract instance-specific text features. Extensive experimental results demonstrate its improved performance. Strengths: - Overall, the idea of distilling structured knowledge from LLMs in the task of prompt learning is new and interesting. - The paper designed an effective prompt learning framework to capture fine-grained attributes, using vision expert tokens and a vision-conditional pooling layer. - The illustrated way to generate a structured tree of attributes from LLMs can also be used in other tasks. - From the experiments, using structured knowledge leads to better performances than unstructured descriptions in base-to-novel and few-shot classification tasks. - The visualization of class activation maps and attention weights looks good. The paper is well written and easy to follow. Weaknesses: - Apart from the new framework, the method highly relies on the quality of the tree of attributes generated with GPT-3.5-turbo. There is no study on the robustness against different LLMs, different generation prompts, or varying attribute sets. - The loss includes a model regularization term and its effectiveness is not discussed. - In Figure 2, it is not too clear to me what $I_1 T_1$, $I_2 T_2$, etc. are. They do not seem to be discussed in the text. Technical Quality: 4 Clarity: 3 Questions for Authors: - In Table 6, what is the difference between Attn. Max Pooling and VCP? Is it that the former selects the single most similar description while the latter uses a soft weighted sum? - In Table 5, why is using an adaptive number of attributes better than a fixed number of 8 experts? When increasing the number of experts from 1 to 8, did the authors observe some patterns in the order of added attributes? 
Say, at the beginning some general attributes, then finer ones, and later irrelevant ones? - In Ln 189, could the authors clarify 'deep prompting'? Does it mean that, in addition to vision prompt tokens at the input layer, there are other vision prompt tokens inside the vision encoder not plotted in Figure 2? - In Eq. 7, could the authors clarify how $\alpha=0.4$ is chosen, and compare the performance of using the CLS token only ($\alpha=1.0$)? - In Ln 262, how do you revert to an unstructured set of descriptions? Are they converted from the same tree of attributes by keeping all attributes and descriptions? Confidence: 5 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: One limitation is its reliance on LLMs (GPT) to generate the tree of attributes. When generating more complex responses, it is challenging to ensure the quality and variance. How to keep a balance between the diversity of attribute sets and the relevancy of attributes to classification is important. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the constructive feedback and the positive assessment of our work! We address the detailed concerns below. ### W1: LLM Robustness Thank you for highlighting this concern. We note that since our use of the LLM is mainly retrieving simple facts without requiring complex capabilities, the quality of the generated attributes is generally reliable. 1. Robustness Against Different LLMs We regenerated the descriptions using a small LLM, Qwen-2-7B-Instruct [1], and obtained comparable results:

| Base | Novel | HM |
| - | - | - |
| 84.68 | 77.31 | 80.83 |

2. Robustness Against Different Generation Prompts We reran the ToA generation pipeline using prompts rewritten by ChatGPT. This process resulted in a slightly different set of attributes. The results were as follows:

| Base | Novel | HM |
| - | - | - |
| 84.93 | 77.20 | 80.88 |

These results demonstrate that our method maintains its performance across different LLMs and generation prompts. The consistency in results indicates that our method's effectiveness does not heavily rely on the choice of LLM, given its straightforward nature. As LLM capabilities continue to improve, we anticipate further enhancements in robustness. ### W2: Effectiveness of Model Regularization Thank you for pointing this out. We conducted an additional experiment without regularization, while reducing the learning rate to 1/4 of the original values to avoid overfitting. The results are as follows:

| Base | Novel | HM |
| - | - | - |
| 83.37 | 75.82 | 79.42 |

As expected, the performance significantly decreases without model regularization, which aligns with findings from previous works [2,3]. For example, MaPLe, which did not use model regularization, reported Base: 82.28, Novel: 75.14, HM: 78.55, whereas PromptSRC, which added model regularization to MaPLe, achieved an average HM of 79.97. 
These results demonstrate that model regularization is crucial for prompt tuning to prevent overfitting and catastrophic forgetting. Notably, our work outperforms existing methods both with and without model regularization, showing better performance than MaPLe when regularization is not used, and surpassing PromptSRC when it is used. ### W3: Clarity of Figure 2 We apologize for any confusion caused by Figure 2. In the figure, each “color” represents an attribute (e.g., orange -> fur pattern). In each attribute: - $I_1$ represents the visual expert token $p^v_a$ of attribute $a$. - $T_1$, $T_2$, etc. are $v_c^a$, representing the output of the VCP layer for class 1, class 2, etc., in attribute $a$. - $I_1T_1$, $I_1T_2$, etc. represent the cosine similarity calculations between the visual expert token and the corresponding pooled textual features of all classes, which are the prediction logits of attribute $a$. We calculate prediction logits for each attribute and obtain the final prediction via a weighted sum of all prediction logits (refer to Equation 7). We will refine the figure to make this clearer. ### Q1: Attn. Max Pooling vs. VCP Both Attn. Max Pooling and VCP are attention-based pooling, but VCP can be viewed as a "soft" version of Attn. Max Pooling. In Attn. Max Pooling, we calculate the attention score between $p^v_a$ and the description set $D_c^a$ in the same way as in the first three lines of Equation 4. Instead of obtaining $v_c^a$ using the attention score for a weighted sum of $D_c^a$, we perform max pooling of $D_c^a$ based on the attention score, selecting the description with the highest attention score with respect to the visual expert token. The better performance of VCP over Attn. Max Pooling shows VCP's flexibility in capturing multiple relevant descriptions, whereas Attn. Max Pooling only pools one description. ### Q2: Adaptive attribute number We use an adaptive number of attributes because of variations in granularity between datasets. 
For instance, 4 attributes (Action Pose, Number of People, Background Setting, Objects Present) may suffice for UCF101 but are inadequate for ImageNet, which has greater class variation. The generated attributes are generally of the same granularity, such as shape, pattern, etc. for ImageNet and fur pattern, eye pattern, etc. for Pets. Once sufficient attributes for class differentiation are available, additional attributes do not provide further benefits and may increase overfitting risks. ### Q3: Deep prompting Yes, deep prompting introduces learnable prompt tokens in every transformer layer, as opposed to shallow prompting, which introduces prompt tokens only at the input level. This is a common trick for improving a model's performance used in previous works [2,4,5]. To reduce complexity, we applied deep prompting only to the vision encoder. We apologize for not showing this in Figure 2. ### Q4: Effect of $\alpha$ $\alpha=0.4$ was empirically found to balance global and local information optimally. When $\alpha=1.0$, the performance significantly decreased:

| Base | Novel | HM |
| - | - | - |
| 81.54 | 73.85 | 77.51 |

This shows the importance of expert tokens. Additionally, in Table 4, we tried using the CLS token instead of expert tokens to align with each attribute feature during training, but also found it significantly worse. These results support the notion that domain-specific expert tokens enhance the model’s ability to grasp fine-grained details by focusing on distinct aspects of the image, as opposed to the CLS token’s global focus. ### Q5: Unstructured Set of Descriptions Yes, they were converted by putting all descriptions from the same tree of attributes into an unstructured set of descriptions. [1] Yang et al. Qwen2 Technical Report. arXiv:2407.10671 (2024) [2] Khattak et al. Self-regulating Prompts: Foundational Model Adaptation without Forgetting. ICCV (2023) [3] Yao et al. Visual-Language Prompt Tuning with Knowledge-guided Context Optimization. 
CVPR (2023) [4] Khattak et al. MaPLe: Multi-modal Prompt Learning. CVPR (2023) [5] Jia et al. Visual Prompt Tuning. ECCV (2022) --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their detailed responses, which solved my previous questions. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you again for your positive assessment and for taking the time to review our work. Your invaluable feedback has been instrumental in improving our paper.
Summary: This paper proposes a new method called Tree of Attributes Prompt Learning (TAP) to improve the performance of CLIP on zero-shot and few-shot classification tasks. The authors leverage large language models (LLMs) to generate more descriptive text prompts and introduce a hierarchical tree-like structure to systematically generate and integrate these descriptions, ensuring a layered and comprehensive understanding of the visual content. The method also learns specialized "domain expert" prompt tokens that focus on different visual attributes and uses a vision-based pooling module to extract text features for specific instances. Extensive experiments show that TAP outperforms state-of-the-art methods on zero-shot and few-shot classification tasks across multiple datasets. Strengths: 1) The idea of utilizing an LLM to generate tree-like prompts makes sense. This structured description approach is significantly different from the existing simple text prompt methods and provides an efficient way to improve VLMs. 2) The image-conditional pooling module looks good for capturing instance-specific features. 3) Experiments and visualization demonstrate the effectiveness of the proposed model. Weaknesses: 1) TAP introduces many textual and visual prompts, which leads to high computing and time costs. This may limit its applications. 2) TAP first generates hierarchical token prompts, while it seems that TAP does not use such a hierarchical structure to integrate the output of the text encoder. It only uses a pooling strategy to update the text encoder output with the visual feature. That is, TAP also does not utilize these relationships in the prompt graph. 3) TAP can be viewed as a multimodal prompt tuning method. What is the main difference between TAP and MaPLe or ALIGN? 
Technical Quality: 2 Clarity: 3 Questions for Authors: Please see above Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments! We address the detailed concerns below. We hope that our responses will reflect positively on your final decision. ### W1: Computational and Time Costs Thank you for highlighting the concern. We would like to clarify how TAP manages these costs effectively compared to previous works. 1. Efficient Use of Learnable Parameters: While TAP introduces textual and visual prompts, the number of newly introduced learnable parameters is actually less than in previous works such as PromptSRC [1] and MaPLe [2]. Specifically, methods like PromptSRC and MaPLe utilize "deep prompting," introducing "N" learnable prompt tokens in each layer of both the vision and text encoders. In contrast, TAP employs "deep prompting" only for the vision encoder. For the text encoder, we use "shallow prompting," where learnable prompts are introduced only at the input level. This approach significantly reduces the number of additional parameters for the text encoder. 2. Inference Efficiency: At inference time, both the vision and text embeddings in TAP can be independently pre-extracted and saved for future retrieval tasks. This means that once the embeddings are generated, they can be reused without recomputing. In contrast, MaPLe [2] utilizes a vision-language (VL) coupling function, which couples the vision and text encoders. This coupling means that the embeddings cannot be cached independently; different images can alter the text feature embedding due to the VL coupling, necessitating re-inference for new images. TAP avoids this issue, making it more efficient for repeated inferences and retrieval tasks. ### W2: How TAP uses the hierarchical structure Thank you for your feedback. We would like to clarify how the hierarchical structure in TAP is utilized to enhance the integration of text encoder outputs and the overall model performance. Our approach leverages the tree structure in two significant ways: 1. 
Description generation in Top-Down: We first generate a set of attributes from the class name, followed by generating descriptions for each attribute. This hierarchical approach ensures that the descriptions are structured and contextually relevant. Unlike previous works that generate an unstructured set of descriptions, our method organizes them into a coherent tree structure, as illustrated in Figure 1 of the paper. 2. Utilization of Tree of Attributes in Bottom-Up: - From leaf nodes to attribute-layer, we use the VCP layer to aggregate descriptions in each attribute to form attribute-level features. These features are then aligned with corresponding visual expert tokens, ensuring that each visual expert token focuses on specific, fine-grained attributes. - From attribute-layer to root node class prediction, we aggregate attribute-level features to make class predictions via a weighted sum of the prediction logits (refer to Equation 7). This process allows the model to utilize the structured relationships within the tree, enhancing the alignment and integration of visual and textual data. By using a top-down approach to generate the tree and a bottom-up approach to utilize it, TAP effectively integrates hierarchical relationships within the prompt graph. This dual usage ensures that the model leverages both the high-level structure and fine-grained details, leading to improved performance and interpretability. We hope this clarifies the role and utilization of the hierarchical structure in TAP. ### W3: Difference with MAPLE and ALIGN Compared with multimodal prompt tuning methods MAPLE and ALIGN, our method significantly differs from them in two key aspects: 1) Main focus. Our method focuses on augmenting category names with prior knowledge from LLMs to better utilize the rich information inherent in the category names. In contrast, these methods rely solely on the original category names and focus on multimodal feature fusion/alignment. 2) Methodology. 
Our method constructs a tree of attributes to organize LLM-generated descriptions and leverages this structured information through prompting, whereas MAPLE and ALIGN fail to learn such structured information, as they do not associate textual descriptions with category names. [1] Khattak et al. Self-regulating Prompts: Foundational Model Adaptation without Forgetting. ICCV (2023) [2] Khattak et al. MaPLe: Multi-modal Prompt Learning. CVPR (2023) --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response, and I have read other comments. I decide to keep my rating. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you for your response. Could you please let us know if there are any specific concerns that remain? We are more than happy to address any further issues during the discussion period.
Summary: This paper proposes a method that aims to align the vision modality with not only the category name but also the whole concept subgraph the noun represents in the knowledge graph. This is achieved by adding a set of attribute branches attached to this concept. The authors argue that this integration of attribute knowledge will make the alignment more transferable and thus result in a good performance boost in terms of zero-/few-shot results. Basically, this work focuses on the textual prompt enrichment task that has been investigated before, but implements it in a different manner. Additionally, the proposed method uses separate tokens to learn different aspects of attributes of given images, working as 'domain experts'. Strengths: 1. Might be the first work trying to align the vision modality with structured data. It is quite interesting considering that most text prompts now are less organized and noisy. And structured data, as pointed out in recent research on LLMs, may lead to better reasoning skills for a foundation model. 2. The proposed vision-conditional pooling can help the model filter out descriptions that do not directly appear in the image. 3. Achieves good results on different classification datasets with the model trained with this method. Weaknesses: 1. The attribute descriptions are generated by the LLM, which could contain hallucinated content. While there are many reliable sources of knowledge such as Wikipedia or ConceptNet, this paper seems to skip these sources for obtaining accurate attributes. 2. Though this paper decides to use a tree structure to represent the concept, the built tree is not encoded in a structure-aware manner; it is still fed as language tokens to the LLMs. 3. In Equation (5), what does $v_y^a$ stand for? 4. The authors argue that the vision-conditional pooling is basically a cross-attention layer between the visual and language modalities. 
The authors believe this design will make the model filter out non-existing material in the text description. However, we know that, due to a quirk of the softmax function, you can never make some tokens' attention be exactly '0'. Thus, the model is learning some spurious correlations after all. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. Why is structured description so important in your presumption? Given that only the text prompts are structured but the visual data are not, will this fact hinder the model from learning a structured, in-detail alignment? 2. How do you make sure one expert token will only learn from one attribute? 3. Can the model trained this way also work well on downstream tasks? Confidence: 5 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the constructive feedback and the positive assessment of our work! We address the detailed concerns below. ### W1: Potential for Hallucinated Content from LLM Thank you for your insightful comment. We considered using Wikipedia for description generation; however, this approach still necessitates an LLM to extract structured descriptions from extensive Wikipedia pages, which may still result in hallucinated content. Additionally, processing long Wikipedia pages can be resource-intensive. Since our task is essentially retrieving simple facts from the LLM and does not require sophisticated reasoning or planning capabilities, the risk of hallucination is significantly reduced. ### W2: Structure-Aware Encoding of the Tree We apologize for any confusion caused. Our tree structure serves dual purposes: 1. Generation of Descriptions: The tree structure guides the generation of descriptions by first generating a set of attributes from the class name and then generating descriptions for each attribute. This approach contrasts with previous works that generate an unstructured set of descriptions (refer to Figure 1). 2. Utilization of the Tree of Attributes: The generated Tree of Attributes is used as follows: The VCP layer aggregates descriptions in each attribute (leaf nodes) to form attribute-level features, which align with the visual expert tokens. These attribute-level features are then aggregated to make class predictions via a weighted sum of the prediction logits (refer to Equation 7). Thus, our approach generates the tree in a top-down manner and utilizes it in a bottom-up manner. ### W3: Equation (5) clarification We apologize for the confusion. $y$ in $v^a_y$ represents the ground truth; thus $v^a_y$ represents attribute $a$'s VCP-pooled text embedding of the ground truth class. ### W4: Potential for Spurious Correlation in Vision-Conditional Pooling We acknowledge your concern. 
We note this issue to be a common problem of the attention mechanism. To address this, we experimented with "Quiet Attention" proposed by Evan Miller [1], obtaining comparable results: |Base|Novel|HM| |-|-|-| |84.65|77.48|80.90| This demonstrates that our current implementation of the VCP module, which emphasizes relevant descriptions while de-emphasizing irrelevant ones (as showcased in Figure 4), is effective for the current study. ### Q1: Structured Description We argue that visual data implicitly has a tree structure. The raw image can be seen as the root node, the expert tokens as the attribute layer, and the text description features linked to the expert tokens by the VCP module as the leaf nodes. In this framework, the image corresponds to the class label in text, visual expert tokens align with text attributes, and leaf nodes are shared between vision and text via the VCP layer. This tree structure facilitates fine-grained alignment between visual and textual data. ### Q2: One Expert for One Attribute We make sure one expert token only learns from one attribute by aligning each expert token to exactly one attribute. That is, we have the same number of expert tokens as the number of attributes in the generated Tree of Attributes, and each expert token aligns with one of the attributes via contrastive learning. ### Q3: Downstream tasks Yes, the model works well on downstream tasks. Our base-to-novel experiments, where the model is trained on base classes and tested in zero-shot on novel classes, demonstrate superior performance compared to other baselines. Additionally, we conducted a cross-dataset experiment, where we trained on ImageNet with 16 shots and tested in zero-shot on the remaining 10 datasets. As shown in Table 1 of the Global response, TAP outperforms PromptSRC by 1.03% on ImageNet and 0.75% on average across the other 10 datasets. This indicates that our model generalizes well to downstream tasks. [1] Evan Miller. Attention Is Off By One.
(2023) --- Rebuttal Comment 1.1: Comment: 1. Many works have validated that including knowledge sources or using RAG can alleviate the hallucination problem. So I believe if you can insert a paragraph from a reliable source to the LLM as context, the LLM will generate more reliable results compared to doing QA directly. 2. Thank you for your explanation. From my understanding, you grouped a collection of descriptions for each attribute by using VCP, and each VCP is aligned with an expert token to reflect the two-level hierarchical structure, is that correct? I was expecting a graph embedding or something similar to embed the tree structure, but I still find this implementation interesting. Good job. 3. I'd like to learn more about the visual expert tokens in 3.4. So, you said that you have A independent tokens serving as expert tokens, and you insert these vision expert tokens before the image patch sequence as VPT does. Then I wonder, are you inserting all the expert tokens together into the image embedding sequences? If that is the case, when doing the cross attention mentioned in 3.5, how do you make sure only the relevant expert token does cross-pooling with the relevant attributes, as described in equation 4? Please answer this question clearly and in detail. I will consider raising my rating if I receive a satisfying response. --- Reply to Comment 1.1.1: Comment: 1. Thank you for your suggestion. We agree that incorporating reliable sources as context before querying the LLM could generate more accurate and reliable results. We will certainly explore this direction in our future work to further enhance the robustness of our method. 2. Thank you for your compliment and understanding. You are correct that our approach groups a collection of descriptions for each attribute using VCP, and each VCP is aligned with an expert token to reflect a hierarchical structure.
To clarify further, the structure in our method is actually three levels: - Class Name (Root Node) → Attributes (Intermediate Layer) → Descriptions (Leaf Nodes) VCP helps aggregate from descriptions to attributes (from leaf nodes to the attribute layer). Then, the prediction fusion via weighted sum aggregates the attribute predictions into the final class prediction (from the attribute layer to the root node). While we did not use an explicit graph embedding to represent the tree structure, we implemented the hierarchical structure in a more implicit manner through this approach. We appreciate your positive feedback on this implementation. 3. Thank you for providing me the opportunity to clarify. Yes, all expert tokens are indeed inserted together into the image embedding sequences, similar to what VPT did. To ensure only the relevant expert token is doing cross-attention, we used a separate VCP module for each expert token. Concretely, - The number of attributes in the Tree of Attributes (ToA) equals the number of expert tokens added, which also equals the number of VCP modules. - Each attribute description set is aggregated using a dedicated VCP module that operates solely for that attribute. The pooled embedding from this VCP module is then aligned with the specific expert token associated with that attribute. - In the VCP module (batch size is omitted for simplicity in the following notation): - The query is the expert token of shape $1\times D$ (where $D$ is the embedding dimension). - The key is the set of descriptions in this attribute, with shape $N\times D$ (where $N$ is the number of descriptions). - The resulting pooled embedding, after cross-attention, is of shape $1\times D$, which aligns directly with the expert token (also of shape $1\times D$). This design ensures that each expert token only interacts with its relevant attribute. I hope this detailed explanation clarifies your concerns. 
Please feel free to ask any further questions, and I sincerely appreciate your consideration in potentially raising the rating.
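The per-attribute cross-attention shapes spelled out in the reply above (query $1\times D$, keys $N\times D$, pooled output $1\times D$) can be sketched in a few lines. This is an illustrative numpy sketch of single-head cross-attention without learned projections, not the authors' actual VCP module; the function names and dimensions are assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def vcp_pool(expert_token, descriptions):
    """Vision-conditional pooling sketch: the expert token (1 x D) attends
    over the N description embeddings of its attribute (N x D) and returns
    a pooled 1 x D embedding aligned with that expert token."""
    D = expert_token.shape[-1]
    scores = expert_token @ descriptions.T / np.sqrt(D)   # 1 x N attention scores
    weights = softmax(scores, axis=-1)                    # distribution over descriptions
    return weights @ descriptions                         # 1 x D pooled embedding

# One expert token and one attribute's description set, per the rebuttal's shapes.
rng = np.random.default_rng(0)
expert = rng.standard_normal((1, 512))
descs = rng.standard_normal((4, 512))
pooled = vcp_pool(expert, descs)
print(pooled.shape)  # (1, 512)
```

Because a separate pooling of this form runs per expert token, each token only ever attends over its own attribute's descriptions, which is the design point made in the reply.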
Summary: The TAP method structures textual descriptions in a hierarchical "concept-attribute-description" format, effectively creating a knowledge graph from large language models (LLMs) for each category name. This structure allows for a more comprehensive and detailed understanding of the visual content. The paper reimagines learnable prompt tokens as "domain experts," each specializing in different aspects of the image, supplemented by a global perspective provided by the CLS token. To address potential misalignment between general descriptions and specific image content, the paper introduces a vision-conditional pooling module. This module extracts instance-specific text features, ensuring optimal image-text alignment. Strengths: The proposed method incorporates a structured tree of attributes into prompt tuning, providing richer supervisory information compared to unstructured attribute information. A set of experiments has been conducted, and the results look promising. Weaknesses: One major limitation of the method is that it requires human review to "ensure the quality of the example" (L175). Recall that one major advantage of prompt tuning is that it can adapt large models quickly to specific tasks. However, the requirement of human review in the proposed method is not consistent with this goal. In addition, it is not clear how much human effort is needed here, and how to handle potential human bias in quality evaluation. The paper lacks cross-dataset experiments, which are typically provided in existing PT papers. These results are important to examine the domain generalization capability of the method. For training details, different learning rates were used for different datasets; however, existing methods typically use the same LR for all datasets. From this point, the comparison is somewhat unfair.
Technical Quality: 2 Clarity: 3 Questions for Authors: In Section 3.3, for attribute generation, what type of dataset information is given to the large model? In Figure 4, each image is accompanied by only two descriptions. Are all images described using two sentences each? The paper mentions that the method can capture subtle differences between attributes. Could you provide a relevant example? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments! We address the detailed concerns below. We hope that our responses will reflect positively on your final decision. ### W1: Human Review Requirement We appreciate the reviewer highlighting the concern regarding the human review process in our method. We apologize for any confusion caused by our explanation. The human efforts mentioned refer to the process of curating a 1-shot example for in-context learning when prompting LLMs. Unlike previous works [1] that manually curate descriptions, we have streamlined the process by making it semi-automatic. LLMs generate the examples, followed by a brief human review. Typically, the generated examples are sufficiently accurate and require less than 30 seconds per dataset for a quick read-through, as they mostly retrieve simple facts from LLMs. This minimal human involvement ensures quality without significant effort or bias. ### W2: Cross-Dataset Experiments We have conducted additional "cross-dataset" experiments where we trained the model on ImageNet with 16 shots and tested it directly on the remaining 10 datasets in a zero-shot manner. The results are presented in Table 1 of the Global response. Compared to PromptSRC, TAP achieved a +1.03% improvement on ImageNet and a +0.75% average improvement across the 10 datasets, demonstrating robust domain generalization capabilities. ### W3: Use of Different Learning Rates We understand the concern regarding the use of different learning rates. The primary reason for splitting the datasets into two groups with different learning rates is due to the variability in ease of learning from LLM-generated descriptions. However, we also note that using different hyperparameters for different datasets is not uncommon in existing prompt-learning papers. For instance, TaskRes [2] used different learning rates for ImageNet and other datasets. 
Similarly, CoOP [3] and DMN [4] utilized different numbers of epochs for various datasets, and PromptSRC applied different GPA hyperparameters across dataset groups. ### Q1: Dataset Information As stated in Appendix A.3, the dataset information provided to the LLM is the description from the dataset's official website. For example, the dataset information for EuroSAT is: "EuroSAT dataset is based on Sentinel-2 satellite images covering 13 spectral bands and consisting of 10 classes." ### Q2: Number of Descriptions We apologize for the confusion. As indicated in our prompts in Appendix A.3, the number of descriptions per class ranges from 2 to 5. The figure is an illustrative example and does not limit the descriptions to two sentences per image. ### Q3: Relevant Example At the higher level of the hierarchy, the model captures details from different attributes in the image through the alignment of vision expert tokens and corresponding text attribute features, as showcased in Figure 3. At the attribute level, the model captures the variations within the class via the use of the VCP module, as showcased in Figure 4. [1] Menon et al. Visual Classification via Description from Large Language Models. ICLR (2023) [2] Yu et al. Task Residual for Tuning Vision-Language Models. CVPR (2023) [3] Zhou et al. Learning to Prompt for Vision-Language Models. IJCV (2022) [4] Zhang et al. Dual memory networks: A versatile adaptation approach for vision-language models. CVPR (2024) [5] Khattak et al. Self-regulating Prompts: Foundational Model Adaptation without Forgetting. ICCV (2023) --- Rebuttal Comment 1.1: Title: Please respond to the authors' rebuttal Comment: Dear Reviewer ZE3g, Thank you again for reviewing this paper. Since the reviewer-author discussion phase is closing soon, could you please respond to the authors' comments? Best, AC --- Rebuttal Comment 1.2: Title: Reply to rebuttal Comment: Thanks for the rebuttal. It addresses some of my concerns but one major concern still exists.
One limitation of the method is that it requires much dataset-specific tuning in order to achieve good performance. The tuning process involves two key aspects: __Human Intervention (W1)__: though the required human effort is minimal, its impact on model performance remains unclear after the rebuttal. __Learning Rate Variability (W3)__: The method employs different learning rates for different datasets. While TaskRes is trained in a similar manner, the experiments lack comparative analysis with this approach. Additionally, further discussion is needed on how to determine the most appropriate learning rate for a given dataset. I expect the authors to provide additional insights regarding this issue. --- Reply to Comment 1.2.1: Comment: Thank you for your follow-up questions. W1. We apologize for any confusion. To clarify, the human review stage is designed to ensure the quality of LLM-generated descriptions. In practice, we found the LLM-generated descriptions good enough, and no manual editing was involved in this stage. Therefore, even if we remove this stage, our model would still achieve the same results. Additionally, in the LLM robustness experiment requested by Reviewer Vt6k, we regenerated the descriptions using Qwen2-7B-Instruct without any human review due to the limited time during the rebuttal process. The results are as follows: | Base | Novel | HM | |------|-------|----| | 84.68 | 77.31 | 80.83 | These robust results show that the method's performance is maintained. W3. We apologize for not including TaskRes as one of our baselines.
We compare TAP and TaskRes in the 16-shot setting as follows: | Method | ImageNet | SUN | Aircraft | EuroSAT | Cars | Food | Pets | Flowers | Caltech | DTD | UCF | Average | |--------|----------|-----|----------|---------|------|------|------|---------|---------|-----|-----|---------| | TaskRes | 73.0 | 76.1 | 44.9 | 82.7 | 83.5 | 86.9 | 92.4 | 97.5 | 95.8 | 71.5 | 84.0 | 80.8 | | TAP | 73.8 | 77.3 | 50.4 | 91.9 | 85.4 | 87.5 | 93.9 | 98.1 | 96.7 | 74.9 | 87.2 | 83.4 | TAP outperforms TaskRes on all datasets, with an average improvement of 2.6% across the 11 datasets. Regarding the determination of learning rates, we apologize for not being clear enough in our rebuttal. We grouped the datasets based on the number of attributes and adjusted the learning rates for the vision and text encoders separately, based on our intuition, to balance generalizability and performance. Concretely, for the vision encoder, datasets with fewer attributes also have fewer learnable expert tokens (thus fewer parameters and lower learning difficulty), so we used a larger learning rate (0.006 vs. 0.004) to facilitate the learning process. For the text encoder, the number of learnable text prompts is fixed, and fewer attributes provide fewer text descriptions/data, so a smaller learning rate (0.002 vs. 0.004) was used to avoid overfitting. We hope this additional context clarifies our approach. Thank you again for your thoughtful feedback and for considering these points in your evaluation.
Rebuttal 1: Rebuttal: We sincerely thank all reviewers for your valuable comments. We first reply to questions raised by multiple reviewers and then other questions from every reviewer. Q1. Model's generalizability. (Reviewer bYva, ZE3g, 1xvn) To evaluate the generalizability of our model, we conducted an additional cross-dataset experiment where we trained on ImageNet with 16-shot and tested on the remaining 10 datasets in a zero-shot manner. As shown in Table 1, TAP achieved state-of-the-art performance, outperforming PromptSRC by 1.03% on ImageNet and 0.75% on average across the 10 datasets. Notably, TAP is 4.7% better on DTD, 1.2% better on SUN397, and 0.69% better on FGVC Aircraft compared to PromptSRC, demonstrating TAP's superior generalizability. Table 1. Cross-dataset benchmark evaluation. Trained on the source dataset (ImageNet) with 16-shot and tested on all other target datasets in zero-shot. "Average" represents the average performance across all target datasets. ||ImageNet | Caltech101 | OxfordPets | StanfordCars | Flowers102 | Food101 | Aircraft | SUN397 | DTD | EuroSAT | UCF101 | *Average* | | - | - | - | - | - | - | - | - | - | - | - | - | - | |CoOp|71.51|93.70 |89.14 |64.51 |68.71 |85.30| 18.47| 64.15 |41.92 |**46.39**| 66.55| 63.88| |Co-CoOp|71.02|**94.43**|90.14 |65.32 |**71.88**|86.06| 22.94 |67.36 |45.73 |45.37 |68.21|65.74| |PromptSRC | 71.27 | 93.60 | 90.25 | **65.70**| 70.25 | **86.15** | 23.90 | 67.10 | 46.87 | 45.50 | 68.75 | 65.81 | |TAP|**72.30**|94.30|**90.70**|65.60|70.93|86.10|**24.57**|**68.30**|**50.20**|46.00|**68.90**|**66.56**|
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper proposes a new prompt tuning method for adapting the vision-language model. The authors design the tree of attribute prompt learning to substitute the categorical description for adapting the vision-language model. A vision-conditional pooling module is proposed to extract instance-specific text features. Extensive experimental results demonstrate the effectiveness of the proposed method. Strengths: 1. A tree of attribute prompt learning method is proposed to guide the adaptation of the VLM with hierarchical semantic information. 2. This paper is well-written and easy to follow. Weaknesses: 1. According to the experiments, the performance improvement of TAP is marginal, e.g., the few-shot performance on most of the datasets. Although the visualization results of the VCP layer are impressive, the improvement of this module is also very slight compared to average pooling. 2. The core motivation of this method is learning fine-grained attributes to adapt VLMs. However, similar ideas have been explored in previous works, e.g., AAPL [1], MAP [2]. Please discuss the differences. [1] AAPL: Adding Attributes to Prompt Learning for Vision-Language Models [2] Multi-modal Attribute Prompting for Vision-Language Models 3. The construction of the ToA depends heavily on prior information about the categories of attributes suitable for the dataset. However, one of the most important capabilities of VLMs is their zero-shot ability in the open-vocabulary context. What's the performance of the proposed method in the domain generalization setting? 4. The model details in Figure 2 are not presented very clearly, especially the input & output streams. This figure should be refined for better clarity. 5. The mechanism behind Equation (5) and the function of the VCP need more clarification. Why conduct contrastive learning between expert token P_a^v and attribute embedding v_c^a generated from P_a^v itself, instead of P_a^v and the embedding of attribute descriptions D?
Technical Quality: 3 Clarity: 2 Questions for Authors: Please refer to Weaknesses. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments! We address the detailed concerns below. ### W1: Marginal Performance Improvement We appreciate the reviewer's observation. **a. Generalizability in Base-to-Novel Experiments:** The base-to-novel experiment is crucial for evaluating the generalizability of prompt learning methods. According to CoCoOP [1], models can overfit to base classes in a few-shot setting. Our results show that TAP excels in this aspect, indicating robust generalization capabilities. **b. Significance of Performance Increase:** While the few-shot setting is less important, an average performance increase of 0.5% in the few-shot classification is still noteworthy. For context, the improvement from CoOP to ProGrad is 0.8%, and CoCoOP performs nearly 5% **worse** than CoOP in few-shot settings. Thus, the improvements brought by TAP, though appearing small, are significant enough within the domain. **c. Improvement of VCP Module:** Regarding the VCP module, Table 6 indicates that VCP achieves a 1.08% improvement over average pooling. This enhancement is substantial, demonstrating the effectiveness of VCP in refining attribute relevance to the visual context, which translates to better overall performance. ### W2: Differences between TAP and previous works The key differences between TAP and existing methods are the structured approach to integrating LLM-generated descriptions and the use of visual expert tokens for fine-grained alignment. As for the two referenced papers, although some similarities exist, significant differences between our work and theirs are evident. Additionally, TAP outperforms them by large margins (TAP HM 81.04 vs. AAPL 76.01 and MAP 79.36). Regarding AAPL: While both AAPL and TAP aim to enhance vision-language models through attribute integration, they diverge fundamentally in their methodologies and objectives. 
AAPL focuses on managing learnable prompts via data augmentation, using adversarial token embedding to mitigate bias and enhance attribute-specific learning through augmented image features. However, this approach is constrained by the limitations of image data augmentation, which cannot cover all possible variations. In contrast, TAP leverages LLM-generated knowledge to construct a structured Tree of Attributes. This method explicitly captures fine-grained, instance-specific attributes, such as differentiating between a pan-fried crescent-shaped dumpling and a round steamed dumpling, which image augmentations in AAPL cannot achieve. Thus, TAP's utilization of LLM-generated descriptions allows for a more comprehensive and accurate adaptation of VLMs to diverse and nuanced visual concepts. Regarding MAP: While both TAP and MAP leverage LLM-generated descriptions and introduce learnable prompts in the vision encoder, several key differences distinguish TAP. First, MAP generates descriptions in an unstructured manner, as depicted in Figure 1(b) of our paper, whereas TAP organizes this information into a structured Tree of Attributes (ToA). Second, although both methods aim to enhance fine-grained alignment, MAP aligns at the individual description level, which can lead to redundancy and misalignment due to similar descriptions falling under the same attribute category or containing irrelevant information for the specific image. TAP, on the other hand, aligns visual expert tokens at a higher "attribute" level, ensuring each token focuses on a specific attribute class. This structured approach mitigates the risk of misalignment and enhances the specificity and relevance of the visual prompts. Furthermore, TAP employs a vision-conditional pooling module to filter out irrelevant descriptions, which is not addressed in MAP, providing a more robust and contextually accurate alignment. 
### W3: Dependency on Prior Information for ToA Construction We understand the reviewer's concern. To address this, we performed additional "cross-dataset" experiments to evaluate the zero-shot generalization capabilities of our method. In the experiment, following the common practice, we trained our model on ImageNet under 16-shot and tested it directly on the remaining 10 datasets in a zero-shot manner. The results are presented in Table 1 of the Global response. Compared to PromptSRC, TAP achieved a +1.03% improvement on ImageNet and a +0.75% average improvement across the 10 datasets. These results underscore TAP's robust performance in domain generalization settings, reinforcing its utility in open-vocabulary contexts. ### W4: Clarity of Figure 2 Thank you for your constructive feedback. In Figure 2, the input for the vision encoder is the tokenized image tokens and learnable visual expert tokens. The input for the text encoder is the generated descriptions and learnable text prompts. Each attribute has its prediction and the fusion of all attributes is treated as the final output. We will update the figure to ensure it clearly illustrates the input and output streams of our model in the revised version. ### W5: Clarification of equation (5) We apologize for the confusion caused. In equation (5), $v_c^a$ is the pooled textual embedding from attribute $a$ of class $c$, where pooling is done by the VCP module (cross attention between $p^v_a$ and the descriptions in the attribute description set $D_c^a$). The reason why we didn't contrast $p^v_a$ against $D$ is because $D$ is a set of textual embeddings. Note that the vision-conditional pooling module pools suitable textual descriptions with the visual instance condition so the $v_c^a$, as an instance-specific text feature, can be better aligned with the vision embeddings. [1] Zhou et al., Conditional Prompt Learning for Vision-Language Models. 
CVPR (2022) --- Rebuttal Comment 1.1: Title: Reply to rebuttal Comment: The responses have addressed all my questions. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you again for your positive assessment and for taking the time to review our work. Your invaluable feedback has been instrumental in improving our paper.
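The alignment clarified in W5 — contrasting the expert token $p^v_a$ against the VCP-pooled class embeddings $v_c^a$, with the ground-truth class $y$ as the positive — can be sketched as a class-wise contrastive loss. This is a hedged numpy illustration under assumed shapes, not the authors' implementation; the cosine normalization and the temperature `tau` are assumptions:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def contrastive_align_loss(expert_token, pooled_per_class, y, tau=0.07):
    """Sketch of the Eq. (5) idea: score the expert token for attribute a
    against the VCP-pooled text embedding v_c^a of every class c, then take
    cross-entropy with the ground-truth class y as the positive."""
    q = expert_token / np.linalg.norm(expert_token)
    K = pooled_per_class / np.linalg.norm(pooled_per_class, axis=1, keepdims=True)
    logits = (K @ q) / tau            # one cosine-similarity logit per class
    return -np.log(softmax(logits)[y])

rng = np.random.default_rng(0)
loss = contrastive_align_loss(rng.standard_normal(512),
                              rng.standard_normal((10, 512)), y=3)
print(float(loss))
```

The key point from the rebuttal survives in the sketch: the contrast runs over pooled, instance-conditioned class embeddings rather than over the raw description set $D$.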
null
null
null
null
null
null
Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models
Accept (spotlight)
Summary: The authors propose a new framework to optimize prompting for various tasks. The framework consists of a buffer, where high-level problem-solving templates are stored. A query is first processed by extracting its key aspects. Templates are selected using embeddings of the templates and the extracted key aspects of the query. The high-level problem-solving template is then instantiated with the extracted key aspects of the query and its inference is executed. Additionally, a buffer manager is introduced to allow updating of the thought template buffer with new, "specialized", thought templates. Strengths: The paper combines the idea of RAG with a library of templates for solving tasks in order to make automatic prompt engineering more flexible and, by using a cache (the library), more efficient, since it avoids more complex exploration. The framework and its components are sufficiently outlined and their description is clear. The writing is easy to follow. The evaluation shows clear advantages over existing approaches, while being executed with modern LLMs. Weaknesses: Claims from the checklist: * point 6: It is not clear what embedding model is used. * point 8: No sufficient information is provided for the compute resources, especially the local (?) Mixtral execution. It is not clear how well their approach scales with the number of different thought templates (and not just specialized ones). Additionally, it is not clear how well their approach works if the query (something completely different) does not match any of the thought templates in the buffer, as at least some of the thought templates seem to be specifically designed for the respective evaluation tasks. This is also one of the weaker points, since the authors claim that their approach requires less manual labor from the user than other approaches. I would have appreciated some examples for the tasks in the appendix, so that it is not necessary to look them up in the respective code/papers.
The language can be polished a little bit more. For example: * lines 29/30: "which makes it impractical to manually design them task by task" - it is just cumbersome to manually design them, but not impossible * lines 69/70: "the retrieval-augmented LLM first queries an external database with billion-level tokens [23] for retrieving a subset of the text" - I think that is just how that particular paper implemented RAG, but it is not a general description of the idea behind RAG. * typos: * "enahnced" (line 78) * Llama-70B (line 277) The references could also use a little bit of work, for example: * reference [27]: no place of publication * references are slightly inconsistent: * for example, one time the abbreviation ICLR is used, otherwise not * sometimes the number prefix (for example Eleventh) for conferences is used, sometimes not Technical Quality: 3 Clarity: 3 Questions for Authors: * What embedding model is used for the thought buffer? * Why is it too computationally expensive to provide error bars for the existing results? It should be possible to compute them locally at least for some of the tasks. * Figure 3: * What is logarithmic time? * Why is the inference time of BoT not at least twice that of the Expert baseline, since it needs at least two LLM interactions, whereas the Expert baseline only requires one (if I understood that correctly)? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations are reasonably explained. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
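The embedding-based template selection described in the summary — embed the distilled key aspects of the query, score the stored thought templates by similarity, and fall back to a generic template when nothing matches (the authors' handling of entirely new problems) — might look like the following sketch. The function name, threshold, and cosine-similarity scoring are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def retrieve_template(query_embedding, template_embeddings, threshold=0.5):
    """Sketch of the meta-buffer lookup: score every stored thought template
    by cosine similarity against the query embedding; return the index of
    the best match, or None when no template is similar enough (signalling
    that a coarse-grained general template should be used instead)."""
    q = query_embedding / np.linalg.norm(query_embedding)
    T = template_embeddings / np.linalg.norm(template_embeddings, axis=1, keepdims=True)
    sims = T @ q
    best = int(np.argmax(sims))
    if sims[best] < threshold:
        return None          # no match: fall back to a general template
    return best

rng = np.random.default_rng(0)
templates = rng.standard_normal((5, 64))
query = templates[2] + 0.01 * rng.standard_normal(64)  # near-duplicate of template 2
print(retrieve_template(query, templates))  # 2
```

Because solved problems are distilled back into the buffer by the buffer manager, the hit rate of this lookup grows over time, which is the scalability argument made in the rebuttal.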
Rebuttal 1: Rebuttal: *We thank Reviewer cwbi for the positive review and valuable feedback. We are glad that the reviewer found our proposed BoT more flexible and more efficient, and that our method shows clear advantages over existing approaches. Please see below for our responses to your comments.* **Q1: What embedding model is used?** **A1:** In our implementation, we used text-embedding-3-large as the embedding model. We also conducted in-depth experiments on the impact of model size on our method. **Please refer to the global response for details.** **Q2: Information for compute resources?** **A2:** As mentioned in Line 214, we use NVIDIA A100-PCIE-40GB GPUs for local models like Llama3-8B and Llama3-70B. **Q3: How well does the approach scale with different thought templates? If a query does not match any of the thought templates in the buffer, how well does this approach work?** **A3:** 1. As depicted in Fig.9 and Fig.10, as the number of different thought templates increases, the accuracy and reasoning efficiency of BoT steadily improve, which confirms that our method possesses excellent scalability. This is because the buffer manager continually expands the meta-buffer while also utilizing the thought templates obtained from previously solved problems to address subsequent similar problems. The possibility of retrieving suitable thought templates also increases, thus avoiding the need to construct reasoning structures from scratch and thereby enhancing inference efficiency. 2. As discussed in L160-163, for entirely new or unique problems, our method provides three general coarse-grained thought templates, which can be assigned based on the distilled task. These coarse-grained thought templates ensure that new tasks follow appropriate reasoning processes. This broad guidance offers LLMs greater flexibility during the generative inference process, enabling them to efficiently address new and unique problems. 3.
Due to character limits, **please refer to Q1 in response to reviewer yiEf** for more examples demonstrating how well our thought templates work. **Q4: Language, typos and references** **A4:** We sincerely appreciate your attention to the details of our paper. We will carefully address the issues you pointed out regarding language, formatting, and citations, to enhance the professionalism and standardization of our paper. **Q5: Concern about error bars** **A5:** 1. It is indeed possible to compute error bars for our results. However, we chose not to include them in our initial version to maintain consistency with many prior works, such as ToT, Meta Prompting, and PAL, which also do not provide error bars. This approach ensures a unified basis for comparison across different methods. 2. In response to the reviewer's concern, we have now computed the error bars for our results to enhance the strictness and clarity of our results. The updated results, including error bars, are presented in the table below. We will incorporate these error bars into the final version of our paper to provide a more comprehensive and statistically robust presentation of our results.
| Task|GPT4|GPT4+CoT|Expert|PAL|ToT|GoT|Meta|**BoT (Ours)**|
|---|----|-------|------|---|---|---|----|--------------|
| Game of 24|3.0|11.0|3.0|64.0|74.0|73.2|67.0|82.4 ± 1.5|
| MGSM (avg)|84.4|85.5|85.0|72.0|86.4|87.0|84.8|89.2 ± 1.8|
| Multi-Step Arithmetic|84.0|83.2|83.2|87.4|88.2|89.2|90.0|99.8 ± 0.2|
| WordSorting|80.4|83.6|85.2|93.2|96.4|98.4|99.6|100.0 ± 0.0|
| Python Puzzles|31.1|36.3|33.8|47.3|43.5|41.9|45.8|52.4 ± 1.6|
| Geometric Shapes|52.6|69.2|55.2|51.2|56.8|54.2|78.2|93.6 ± 2.4|
| Checkmate-in-One|36.4|32.8|39.6|10.8|49.2|51.4|57.2|86.4 ± 1.7|
| Date Understanding|68.4|69.6|68.4|76.2|78.6|77.4|79.2|88.2 ± 1.5|
| Penguins|71.1|73.6|75.8|93.3|84.2|85.4|88.6|94.7 ± 1.2|
| Sonnet Writing|62.0|71.2|74.0|36.2|68.4|62.8|79.6|80.0 ± 0.4|

As shown in the table, our method achieves stable performance, which further proves that our method is effective, stable, and robust. **Q6: What is logarithmic time?** **A6:** We apply a logarithmic transformation to the measured time, i.e., Logarithmic time = $\ln(t)$. We do this because the range of inference times across different methods is broad; in such cases, the readability and effectiveness of the histogram in conveying information are compromised. Applying a logarithmic transformation reduces the differences in bar heights within the histogram, rendering the chart more aesthetically pleasing and the information more clearly presented. **Q7: Why is the inference time of BoT not at least twice that of the Expert baseline?** **A7:** As mentioned in Q6, the multiple of inference time should be computed as $e^{\ln(t_1) - \ln(t_2)} = t_1/t_2$, rather than by directly comparing the log-transformed values. Furthermore, despite our method involving collaboration across multiple components, the inference time required by BoT for certain problems does not significantly exceed that of the baseline.
This is because our method handles some reasoning tasks (such as Game of 24 and Checkmate-in-One) by simplifying the multi-step reasoning or heuristic search process into generating a segment of code capable of solving the problem. Specifically, in the expert baseline, solving a problem from Game of 24 requires experimenting with multiple combinations and conducting calculations, which is time-consuming. In contrast, our method transforms this process into generating a few lines of code and executing the code to get the answer, which requires significantly less time compared to the expert baseline, demonstrating the superiority of our approach in addressing certain complex problems. --- Rebuttal Comment 1.1: Comment: I have read the rebuttal to my review as well as the rebuttals and comments for the other reviewers. Q6: Regarding Fig. 3, I find the figure rather misleading or at least hard to read, especially with the current y-axis label, which suggests that the time is in seconds. I suggest using a logarithmic scale for the y axis instead of logarithmic time. Q7: But then shouldn't you use a better baseline? I mean clearly Game of 24 can be solved by a simple Python script. --- Rebuttal 2: Title: Gentle Reminder Comment: Dear Reviewer cwbi, We greatly appreciate the time and effort you have invested in reviewing our paper. Your thoughtful questions and insightful feedback have been invaluable. In response to your queries, we provide more explanations about the effectiveness of our proposed approach, and additional analysis experiments. If you have any further questions, please feel free to ask. Thank you once again for your invaluable contribution to our research. Warm regards, The Authors --- Rebuttal 3: Title: Response to Reviewer cwbi Comment: Thank you for your reply, and we sincerely appreciate your suggestions! Following your suggestion, we will use a logarithmic scale for the y axis in Fig.3 instead of logarithmic time in the final version.
Actually, in Fig.3, we have compared our method with PAL (Program-aided Language Models), which utilizes a Python script to solve problems. From the results in Fig.3, we can see that our BoT is faster than PAL because BoT retrieves proper thought templates to accelerate the reasoning process instead of reasoning from scratch. If you have any further questions, please feel free to ask. Thank you once again for your invaluable contribution to our research. Warm Regards, The Authors
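To make the point in A6/A7 above concrete, here is a minimal sketch (not from the paper; the times are hypothetical) showing that a difference of log-transformed times recovers the true speed ratio, so plotting $\ln(t)$ compresses a broad range without distorting the underlying ratios:

```python
import math

def log_time(t: float) -> float:
    # A6: plot ln(t) instead of t to compress a broad range of inference times
    return math.log(t)

def ratio_from_logs(log_t1: float, log_t2: float) -> float:
    # A7: the true multiple of inference time is e^(ln t1 - ln t2) = t1 / t2
    return math.exp(log_t1 - log_t2)

# Hypothetical inference times in seconds (illustrative, not measured values)
t_expert, t_bot = 48.0, 12.0
print(ratio_from_logs(log_time(t_expert), log_time(t_bot)))  # ≈ 4.0 (= 48 / 12)
```

So two bars whose log heights differ by only ~1.4 actually differ by a factor of 4 in raw time, which is the reviewer's readability concern in a nutshell.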
Summary: This paper proposes a novel reasoning procedure for LLM, named "buffer of thoughts". The core idea is that for each task, first extract a template describing how the task should be solved, then store all these templates in a "buffer". As the LLM receives a new query, a retrieval procedure is applied to extract the most relevant task template out of the buffer. The template is then instantiated for this specific query, which becomes a concrete guideline that instructs the model how to solve the new query. The paper demonstrates that this procedure outperforms baselines like tree-of-thought and graph-of-thought significantly on some reasoning tasks. Strengths: The novelty of the proposed method "buffer of thought" is a good contribution of the paper which may significantly improve LM's reasoning ability for some tasks. Weaknesses: Generalizability and robustness of the proposed approach needs to be further verified, as the introduction of a hierarchical reasoning procedure (abstract template + instantiation) may create more noise and make it less reliable for novel tasks. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The success of the proposed approach critically depends on the quality of the automatically induced template. While the paper showed empirical performance on downstream tasks, it remains unclear how good the templates themselves are. It would be great to make a comparison between the automatically generated task templates with manually prepared templates for some tasks. 2. It remains unclear to me how the template for a specific task category (say maths) can potentially be revised and improved, as the model sees more and more examples from that task. 
Although the paper described briefly how the buffer can be dynamically updated (line 182), I don't quite understand how the template is updated automatically for higher accuracy as the model receives more examples (this seems to be a verbalized gradient descent procedure, but no quality improvement is guaranteed). 3. What are the embedding models you used for retrieval purposes? Also it would be great to give some analysis on the impact of retrieval quality on downstream task performance, as finding the right template is critical. 4. In Eq 5, it seems whether the template should be replaced should depend upon the template quality, instead of its similarity with the embedding vector. 5. How quickly can this approach adapt to an unseen task? Would one or two examples from new task be enough to achieve reasonable accuracy? It would be great to give some analysis to the adaptability and continual learning ability of the proposed approach. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The paper properly addressed limitations and impact of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *We thank Reviewer upCD for the positive review and valuable feedback. We are glad that the reviewer found that our BoT makes a good contribution to improving LLMs, and that our method significantly improves LMs' reasoning ability in various tasks. Please see below for our responses to your comments.* **Q1: Generalizability and robustness of the proposed approach.** **A1:** 1. Breaking down a complex problem into step-by-step solutions has proven to be an effective way to enhance accuracy and robustness, as demonstrated by ToT, GoT, and Meta-prompting. Moreover, our BoT additionally introduces a problem distiller to extract the core problem from the input task and simplifies the problem-solving procedure by retrieving proper thought templates. Such high-level thought templates discard unnecessary specific details of problem-solving processes and can be instantiated with different problems adaptively, which effectively reduces possible noise caused by previous hierarchical reasoning methods (ToT, GoT). 2. The empirical results also demonstrate the generalizability and robustness of our BoT. The experiments in Fig.4 demonstrate that our BoT can maintain a robust reasoning process with less noise than previous methods when faced with different tasks, and the experiments in Fig.9 reveal that our BoT can generalize to more novel tasks and continually improve performance. **Q2: Comparison between automatically generated task templates with manually prepared templates.** **A2:** 1. We appreciate the reviewer's insightful comment regarding the quality of the automatically generated templates. To address this, we conducted comprehensive experiments to compare the automatically generated templates with manually designed templates on the MATH dataset. Please refer to **the PDF in the global response** for more detailed results. 2.
The additional experiments showcase that the automatically generated templates ensure consistency, efficiency, and scalability for more accurate and effective problem-solving, as also demonstrated by the results in Fig.9 and 10. **Q3: How can templates be revised and improved?** **A3:** 1. Since there is no suitable metric to evaluate the quality of thought templates and BoT is a training-free reasoning framework, we **do not optimize the previously accumulated thought templates but instead add new thought templates**. To be specific, we calculate the similarity between new thought templates and those in the meta-buffer to avoid redundancy and repetition. Using powerful LLMs like GPT-4, the quality of the extracted thought templates is relatively high, as demonstrated by our qualitative analysis in Fig.9 and 10. 2. As for why the accuracy increases as more examples are received in Fig.9, we have discussed this in Line 465-477. That is because in the first round, the meta-buffer is empty and the buffer manager is still accumulating thought templates. With the accumulation of thought templates, BoT gradually enhances its ability **by utilizing the thought templates obtained from previously solved problems to help** address subsequent similar problems, thus contributing to the increase in accuracy. 3. We sincerely thank you for your insightful suggestion. In future work, we plan to design a metric to evaluate the quality of thought templates, allowing for dynamic optimization of the accumulated thought templates. This enhancement may further improve the accuracy and robustness of our method. **Q4: What are the embedding models used for retrieval? And analysis on the impact of retrieval quality.** **A4:** We used text-embedding-3-large as the embedding model. For better understanding, we here provide an experimental analysis of the impact of retrieval quality on downstream task performance using three different-sized embedding models.
Due to the page limits, **please refer to the table and analysis in the global response**. **Q5: In Eq.5, whether we should replace the template based on similarity.** **A5:** It is noted that our update process **only includes adding new thought templates, instead of replacing or optimizing** existing ones. As mentioned in A3 above, there is no suitable metric to evaluate the quality of a thought template, and BoT is a training-free reasoning framework. For future work, we will try to effectively evaluate the quality of thought templates, thereby enabling the dynamic optimization of the meta-buffer. **Q6: How quickly can this approach adapt to an unseen task? Number of examples to achieve reasonable accuracy? Analysis of adaptability and continual learning ability of BoT.** **A6:** 1. The adaptation speed is related to task diversity. For tasks with low diversity, such as "Game of 24", our approach can adapt very quickly. For tasks with higher diversity, such as those in the MATH dataset, which includes a wide range of problem types (e.g., algebra, geometry...), more examples are required to accumulate a comprehensive set of thought templates. This is necessary to cover the various subdomains of the problems. 2. For tasks with low diversity, one or two examples are often sufficient to achieve reasonable accuracy. For tasks with higher diversity, a larger set of examples is necessary, as mentioned above. This ensures that our method remains robust and generalizable across a wide range of problem types. 3. As shown in Fig.9 and Fig.10 of the Appendix, we conduct an ablation study on the buffer manager, which demonstrates the extraordinary adaptability and continual learning ability of BoT. With more examples received and high-level thought templates accumulated in each round, there is significant improvement in both the overall performance and reasoning efficiency of the model. We have discussed the underlying reasons for these improvements in A3.
This continual learning capability is crucial for ensuring that BoT can effectively adapt to new tasks over time, maintaining the efficiency and robustness of our method. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response, it addressed some of my concerns. I have updated my rating to 6. --- Rebuttal 2: Title: Gentle Reminder Comment: Dear Reviewer upCD, We greatly appreciate the time and effort you have invested in reviewing our paper. Your thoughtful questions and insightful feedback have been invaluable. In response to your queries, we provide further explanations about the generalizability and robustness of our proposed approach, and more details about our thought-retrieval procedure. If you have any further questions, please feel free to ask. Thank you once again for your invaluable contribution to our research. Warm regards, The Authors
Summary: This paper proposes a new approach called Buffer of Thoughts (BoT) to improve LLMs' reasoning abilities. BoT addresses this by creating a "meta-buffer" that stores general problem-solving "thought" templates across different tasks. When a new query is given as input, BoT retrieves a relevant thought from the meta-buffer and tailors it to the specific situation. The method also includes a "buffer-manager" to keep the meta-buffer up-to-date and effective as the LLM encounters new challenges, for scalability and stability purposes. The authors perform experiments on different reasoning tasks that show significant improvements with BoT compared to previous methods. Strengths: - The framework is clearly explained and the paper is easy to follow - The empirical evaluation is thorough - The improvement for some tasks is notable. - The idea of a library of thought templates is sound and novel Weaknesses: - BoT seems to work well for problems with existing templates in the meta-buffer. However, entirely new or unique problems might not have a relevant template for BoT to adapt, potentially hindering its ability to solve them effectively. - What happens in the case of unsolvable queries? Does the method allow for uncertainty-based abstention? Technical Quality: 3 Clarity: 3 Questions for Authors: see weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: limitations have been discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *We thank Reviewer yiEf for the positive review and valuable feedback. We are glad that the reviewer found the proposed thought template and meta-buffer novel, and that our method achieves notable improvements across various tasks. Please see below for our responses to your comments.* **Q1: How to solve entirely new or unique problems effectively?** **A1:** 1. As discussed in Line 160-163, for entirely new or unique problems, our method provides **three pre-designed coarse-grained thought templates** (in Appendix A.3) that can be instantiated based on the distilled task information. These templates offer a good starting point for reasoning when a relevant thought template does not exist. 2. Additionally, these coarse-grained thought templates ensure that new tasks follow appropriate inference processes. This general guidance offers LLMs greater flexibility during the reasoning process, enabling them to efficiently address new and unique problems. 3. During the cold-start reasoning process mentioned above, the buffer manager distills and summarizes new thought templates that can be used to solve similar future problems effectively. This adaptive process ensures that our method remains effective and robust, and can continually adapt to new problems. 4. To give a comprehensive illustration of this reasoning process for new or unique problems, we use the Game of 24 and solving a quadratic equation as examples below: **Game of 24** Input problem: Let's play a game called 24. You'll be given four integers, and your objective is to use each number only once, combined with any of the four arithmetic operations (addition, subtraction, multiplication, and division) and parentheses, to achieve a total of 24. For example, if the input is 4, 7, 8, and 8, the output could be 7 * 8 - 4 * 8 = 24. You only need to find one feasible solution!
**Problem Statement:** Given four integers 4, 7, 8, and 8, use each number exactly once and the four arithmetic operations (addition, subtraction, multiplication, division) to form an expression that equals 24. Parentheses can be used to define the order of operations. Analysis: **Step 1: Problem Understanding**: We need to find a combination of the four given integers and arithmetic operations that results in 24. **Step 2: Input Extraction**: The four integers are 4, 7, 8, and 8. **Step 3: Output Requirements**: The output should be a valid mathematical expression that equals 24. **Step 4: Constraints**: - Each number must be used exactly once. - Only the four arithmetic operations can be used. - Parentheses can be used to define the order of operations. **Step 5: Approach**: - Generate all permutations of the numbers. - Generate all possible combinations of arithmetic operations. - Evaluate all possible expressions formed by combining the numbers and operations with different parenthetical groupings. - Check if any of these expressions evaluate to 24. **Transformed Python Code:** ```python ... Due to the character limit, we are unable to show the code here; please refer to A.1.5 in our paper ... ``` **Solving a quadratic equation** **Input Problem:** Solving a quadratic equation: $2x^2-4x-6 = 0$ **Solution:** Step 1: Identify the coefficients \(a\), \(b\), and \(c\) from the quadratic equation $(2x^2 - 4x - 6 = 0)$: - \(a = 2\) - \(b = -4\) - \(c = -6\) Step 2: Calculate the discriminant \(D\): $D = b^2 - 4ac = (-4)^2 - 4 \cdot 2 \cdot (-6) = 16 + 48 = 64$ Step 3: Determine the nature of the roots: - Since \(D = 64 > 0\), the equation has two distinct real roots.
Step 4: Compute the roots using the quadratic formula: $x = \frac{-b \pm \sqrt{D}}{2a} = \frac{-(-4) \pm \sqrt{64}}{2 \cdot 2} = \frac{4 \pm 8}{4}$ - Calculate the two roots: $x_1 = \frac{4 + 8}{4} = \frac{12}{4} = 3$ $x_2 = \frac{4 - 8}{4} = \frac{-4}{4} = -1$ **Answer:** The solutions of the quadratic equation $(2x^2 - 4x - 6 = 0)$ are: $x_1 = 3 \quad \text{and} \quad x_2 = -1$ In summary, as we mentioned in Line 160-163 and in the example in Appendix A.3, we can effectively solve entirely new or unique problems. The examples above further showcase the robustness of our method when encountering new tasks. **Q2: What happens in the case of unsolvable queries? Does the method allow for uncertainty-based abstention?** **A2:** 1. For possibly unsolvable queries, our method attempts to resolve the issue during the instantiation process. Subsequently, an additional inspector reviews both the problem and the reasoning process. If any issues are detected, the unreasonable parts are identified and handed over to the reasoner for revision. If it is concluded that the problem is unsolvable, the reasoning process is terminated immediately to prevent any pollution of the meta-buffer. 2. Due to the page limits, we did not discuss this scenario in our paper, and there were very few unsolvable queries in our experiments. However, we have implemented the inspector mechanism. We will add this part in the final version to provide a comprehensive overview of our approach. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I appreciate the authors' responses to my questions. I maintain my current score and evaluation.
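Since the rebuttal defers its actual transformed code to Appendix A.1.5, the brute-force approach outlined in Step 5 of the Game of 24 analysis above (permute the numbers, enumerate operator choices, try each parenthesization, test for 24) can be sketched independently; this is an illustrative re-implementation, not the authors' code:

```python
from itertools import permutations, product

OPS = "+-*/"

def solve_24(nums):
    """Return one expression over the four numbers that evaluates to 24, or None."""
    for a, b, c, d in permutations(nums):
        for o1, o2, o3 in product(OPS, repeat=3):
            # The five distinct parenthesizations of four operands
            for expr in (
                f"(({a}{o1}{b}){o2}{c}){o3}{d}",
                f"({a}{o1}({b}{o2}{c})){o3}{d}",
                f"({a}{o1}{b}){o2}({c}{o3}{d})",
                f"{a}{o1}(({b}{o2}{c}){o3}{d})",
                f"{a}{o1}({b}{o2}({c}{o3}{d}))",
            ):
                try:
                    if abs(eval(expr) - 24) < 1e-6:
                        return expr
                except ZeroDivisionError:
                    continue
    return None

print(solve_24([4, 7, 8, 8]))  # finds a valid expression, e.g. one equivalent to 7*8 - 4*8
```

The five templates cover every binary-tree shape over four operands, so the search is exhaustive: at most 24 permutations × 64 operator triples × 5 groupings ≈ 7,700 candidate expressions, which runs in well under a second.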
Summary: The paper introduces "Buffer of Thoughts" (BoT), a novel framework designed to improve the reasoning abilities of large language models (LLMs) by incorporating a 'thought-augmented' approach. This framework uses a component called 'meta-buffer' to store high-level thoughts—concise, distilled reasoning strategies from various problem-solving instances—which can be dynamically retrieved and adapted to new tasks to facilitate efficient reasoning. This approach significantly enhances the accuracy, efficiency, and robustness of LLMs across multiple reasoning-intensive tasks. 1. Novel Framework: BoT innovates a thought-augmented reasoning mechanism that leverages previous problem-solving insights to aid new reasoning tasks, reducing the need for generating reasoning paths from scratch. 2. Meta-Buffer: Introduces a storage system that holds distilled high-level thoughts, allowing for rapid adaptation to different tasks by retrieving and instantiating these thoughts as needed. 3. Buffer Manager: A dynamic management system that updates the meta-buffer based on newly encountered tasks, continuously improving the system’s reasoning capacity. 4. Empirical Validation: The paper reports extensive testing across 10 complex tasks, demonstrating significant performance improvements over state-of-the-art methods, including improvements of 11% on Game of 24, 20% on Geometric Shapes, and 51% on Checkmate-in-One, while also reducing the computational cost compared to traditional multi-query prompting methods. Strengths: 1. **Improved Reasoning Accuracy:** BoT significantly enhances the reasoning accuracy of large language models by leveraging distilled high-level thoughts, allowing the models to approach problems with a pre-formed strategy that has been proven effective across tasks. 2. **Computational Efficiency:** The thought-augmented reasoning approach minimizes the need for complex and iterative query processes typical in multi-query prompting systems. 
By reusing structured thought templates, BoT reduces the computational overhead, leading to faster reasoning times. 3. **Robustness Across Tasks:** BoT exhibits a robust performance over a range of different and challenging tasks. This is attributed to the system's ability to adapt high-level reasoning thoughts to new problems, ensuring consistent performance without the need to tailor the system for specific tasks. Weaknesses: 1. **Dependence on Quality of Distilled Thoughts:** The effectiveness of BoT hinges significantly on the quality of the distilled thoughts stored in the meta-buffer. If the initial thoughts distilled from problem-solving processes are not sufficiently generalizable or are too simplistic, they may not provide the necessary depth for complex reasoning tasks. 2. **Scalability and Maintenance of the Meta-Buffer:** The paper introduces a dynamic system for updating the meta-buffer but does not deeply explore the long-term scalability and maintenance challenges associated with continuously growing and updating this repository. Managing an ever-expanding set of thought templates could lead to efficiency issues or dilution of useful thoughts. 3. **Risk of Overfitting to Distilled Thoughts:** There is a potential risk that the LLM might overfit to the specific styles or patterns of reasoning encapsulated in the thought templates, especially if these templates are derived from a limited set of problem-solving instances. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. As the meta-buffer grows with more distilled thoughts, how does its size impact the retrieval time and overall performance of the model? 2. In cases where multiple applicable thoughts could be retrieved from the meta-buffer, how does BoT prioritize or choose among conflicting reasoning strategies? 3. Given the reliance on previously distilled thoughts, how does BoT handle situations where the foundational thoughts might be based on incorrect or outdated information? 
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: One of the primary limitations highlighted in the discussion of the Buffer of Thoughts (BoT) framework is its dependency on the initial quality of the meta-buffer. The performance of BoT is contingent upon the initialization of this buffer. If initialized with a weak model, the distilled thoughts stored in the meta-buffer may not be of sufficient quality or depth to facilitate effective reasoning across diverse tasks. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *We thank Reviewer hZUQ for the positive review and valuable feedback. We are glad that the reviewer found the proposed framework novel, and that it greatly enhances the accuracy, efficiency, and robustness of LLMs across multiple reasoning tasks. Please see below for our responses to your comments.* **Q1: Dependence on Quality of Distilled Thoughts.** **A1:** 1. We address this by utilizing pre-designed prompts and leveraging state-of-the-art LLMs such as GPT-4, which are capable of generating high-quality, high-level thought templates that strike a balance between abstraction and specificity. 2. To demonstrate the adaptability and robustness of our method, we provide examples from two distinct datasets: GSM8K (grade school math word problems) and MATH (challenging high school math competition problems). **Example 1: GSM8K** **Question**: A robe takes 2 bolts of blue fiber and half that much white fiber. How many bolts in total does it take? **High-level Thought Template:** Step 1: Identify the amount of each type of material required. Step 2: Calculate any dependent quantities based on the given relationships. Step 3: Sum all the quantities to find the total amount of material needed. **Example 2: MATH** **Question**: At a school, all 60 students play on at least one of three teams: Basketball, Soccer, and Mathletics. 8 students play all three sports, half the students play basketball, and the ratio of the size of the math team to the size of the basketball team to the size of the soccer team is $4:3:2$. How many students at the school play on exactly two teams? **High-level Thought Template:** Given: Total number of students: T Number of students in each team: A, B, C Number of students playing all three sports: y Step 1: Identify the total number of students T and the number of students in each team A, B, and C.
Step 2: Recognize the overlapping memberships and set up the equation: $$ A + B + C = T + x + 2y $$ Step 3: Substitute the known values for A, B, C, T, and y into the equation. Step 4: Solve for x, the number of students playing exactly two sports: $$ x = (A + B + C) - T - 2y $$ These examples illustrate that our method can adaptively extract high-quality thought templates for problems of different complexity. For simpler problems from GSM8K, the high-level thought template is straightforward, while for more complex problems from MATH, the high-level thought template is correspondingly more complex. This adaptability and generalizability showcase the robustness of our approach. **Q2: Risk of Overfitting to Distilled Thoughts.** **A2:** 1. Our thought templates are designed to be high-level and abstract, and do not contain specific reasoning details from previous problems. This abstraction ensures the thought templates provide general guidance that does not cause overfitting in LLMs. 2. In scenarios where the meta-buffer contains thought templates derived from a limited set of problem-solving instances that differ significantly from the current problems, we reinstantiate the reasoning process using our manually pre-designed coarse-grained thought templates, as mentioned in Line 450. This approach avoids applying an incorrect type of thought template from the meta-buffer, thereby further mitigating the risk of overfitting. 3. As we mentioned above, our approach ensures that the model retains the flexibility needed to handle a wide range of tasks effectively while maintaining adaptability and robustness across various problems. **Q3: Size impact on retrieval time and overall performance. Concern about the Scalability and Maintenance.** **A3:** 1. Many different problems share similar solutions that could be instantiated from the same high-level thought template.
Consequently, even as the meta-buffer grows with more distilled thoughts, the number of unique thought templates remains relatively small, as demonstrated by the template distribution in Fig.5. This ensures that our meta-buffer is a lightweight library, and its size has little impact on retrieval time. Moreover, a larger meta-buffer improves the overall performance and the reasoning efficiency of the model, as shown in Fig.9 and Fig.10, with empirical analysis in Line 465-477. 2. Since many problems share the same reasoning patterns, we only need to save the high-level thought templates distilled from the solutions of various problems. This approach keeps the size of the meta-buffer relatively small, ensuring its scalability and making it easy to maintain. **Q4: How does BoT prioritize among conflicting reasoning strategies?** **A4:** As illustrated in Line 134-144, for each thought template we have a description $D_{T_i}$ that specifies the types of problems to which it can be applied. BoT uses the embedding similarity between the distilled problem $x_d$ and $D_{T_i}$ to find the most suitable thought template, as mentioned in Eq.2. We only **choose the thought template with the highest similarity** to the current problem. This ensures that the chosen template aligns closely with the current problem, thereby prioritizing the most relevant reasoning strategy. **Q5: How to handle situations when thoughts are based on incorrect or outdated information?** **A5:** 1. The thought templates derived from the problem-solving process are high-level and abstract, which means they **do not contain specific reasoning details or factual information**. This abstraction helps to ensure that the templates provide robust and generalizable guidance without being affected by incorrect examples or outdated data. 2.
Compared to conventional methods that rely on in-context examples or specific databases, our approach leverages high-level thought templates that eliminate detailed reasoning steps. This abstraction helps to avoid potential errors that might be present in more detailed templates, thereby enhancing the reliability and robustness of the reasoning process. --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer hZUQ Comment: Thank you for the comprehensive response and additional experiments provided. I find this to be a very interesting and excellent piece of work and look forward to meeting the authors at the conference. I have increased my score from 7 to 8, and I hope the authors will incorporate these suggestions into the final manuscript. --- Reply to Comment 1.1.1: Title: Thank you for your support Comment: Thank you very much for raising the score! We sincerely appreciate your valuable comments and the time and effort you put into reviewing our paper. We will make sure to incorporate these suggestions into the final manuscript. Warm Regards, The Authors
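The MATH template in A1 above (Step 2's identity $A + B + C = T + x + 2y$, rearranged in Step 4 to $x = (A + B + C) - T - 2y$) can be checked numerically on the question's own numbers; this small sketch derives the team sizes from the stated ratios and is purely illustrative:

```python
def exactly_two(team_sizes, total, all_three):
    # Step 4 of the template: x = (A + B + C) - T - 2y
    return sum(team_sizes) - total - 2 * all_three

T, y = 60, 8
basketball = T // 2                     # half the students play basketball -> 30
unit = basketball // 3                  # ratio math : basketball : soccer = 4 : 3 : 2
math_team, soccer = 4 * unit, 2 * unit  # 40 and 20
print(exactly_two([math_team, basketball, soccer], T, y))  # -> 14
```

The identity holds because summing the three team sizes counts exactly-two-team students twice and all-three-team students three times: $A + B + C = (T - x - y) + 2x + 3y = T + x + 2y$.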
Rebuttal 1: Rebuttal: Global response We sincerely thank all the reviewers for their thorough reviews and valuable feedback. We are glad to hear that the proposed framework is novel (all reviewers) and effective (all reviewers) in enhancing the reasoning abilities of LLMs, the paper is well-written and easy to follow (reviewers yiEf and cwbi), and the performance improvements demonstrated in experiments are promising (all reviewers). Here, we want to highlight the main contributions and novelties of our proposed framework, **"Buffer of Thoughts" (BoT)**: **Thought-Augmented Reasoning Mechanism:** BoT introduces a novel framework that leverages previously distilled high-level thoughts to assist LLM reasoning tasks. This reduces the need for generating reasoning paths from scratch, significantly enhancing accuracy, efficiency, and robustness. **Meta-Buffer and Buffer Manager:** BoT incorporates a meta-buffer to store distilled high-level thoughts and a dynamic buffer manager to continuously update the meta-buffer based on new tasks. This ensures scalability and continuous improvement of the LLM reasoning system. **Empirical Validation:** Extensive testing across 10 complex tasks demonstrates significant performance improvements over state-of-the-art methods, including substantial accuracy improvements and reduced computational costs compared to traditional multi-query prompting methods. We summarize our responses to the reviewers' comments as follows: * We additionally provide more examples to show the quality of our high-level thought templates. (reviewers hZUQ, yiEf and upCD). * We provide more examples to demonstrate the instantiation and reasoning process for new problems, and give a detailed analysis of the adaptability and continual learning ability of our method. (reviewers yiEf, upCD and cwbi). * We further conduct an experiment to compare the quality of automatically generated and manually designed thought templates in the response PDF.
(reviewers hZUQ and upCD) * We provide more quantitative comparisons and analysis of the impact of different-sized embedding models in the table below (reviewers upCD and cwbi). * The results indicate that stronger encoding capabilities lead to higher accuracy. The larger model (text-embedding-3-large) extracts more informative embeddings, improving retrieval accuracy and overall BoT performance. * The impact of the embedding model is not significant. Even with the small text-embedding-ada-002 model, we still achieve higher accuracy than other methods. This is because we use the distilled problem $x_d$ and the thought template description ($D_T$) for similarity computation. Both $x_d$ and $D_T$ are concise sentences, with simple but critical semantic structures, allowing even weaker encoders to handle them effectively. Thus, our method remains robust and generalizable across different-sized encoders, as demonstrated by the experimental results. |Task (accuracy)|text-embedding-3-large+BoT|text-embedding-3-small+BoT|text-embedding-ada-002+BoT|GPT4|ToT|Meta Prompt| |---|:---:|:---:|:---:|:---:|:---:|:---:| |Game of 24|82.4|81.8|81.0|3.0|74.0|67.0| |MGSM|89.2|88.7|87.9|84.4|86.4|84.8| |Word Sorting|100.0|100.0|99.8|80.4|96.4|99.6| We reply to each reviewer's questions in detail below their reviews. Please kindly check them out. Thank you and please feel free to ask any further questions. Pdf: /pdf/b7e46ad88cce6da63092a8c2f6b5391f413bf2fb.pdf
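To make the retrieval step described in this rebuttal concrete, here is a minimal sketch (our own illustration, not the BoT release) of similarity-based template selection: embed the distilled problem $x_d$ and each template description $D_T$, then pick the highest cosine similarity. The `embed` function is a hypothetical bag-of-words stand-in for the API embedding models compared in the table, so the sketch is self-contained.

```python
import math
from collections import Counter

def embed(text):
    """Stand-in embedding: sparse word-count vector (a real system would
    call an embedding model such as text-embedding-3-large here)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_template(distilled_problem, meta_buffer):
    """Return the meta-buffer entry whose description best matches x_d."""
    q = embed(distilled_problem)
    return max(meta_buffer, key=lambda t: cosine(q, embed(t["description"])))

# Toy meta-buffer with two hypothetical template descriptions.
meta_buffer = [
    {"description": "arithmetic game combine four numbers to reach 24"},
    {"description": "sort a list of words alphabetically"},
]
best = retrieve_template("combine the numbers 4 9 10 13 to make 24", meta_buffer)
assert best is meta_buffer[0]   # the Game-of-24 template is retrieved
```

Because both $x_d$ and $D_T$ are short, semantically simple sentences, even a crude encoder like this one separates the two templates cleanly, which is consistent with the rebuttal's observation that the choice of embedding model has limited impact.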
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Noise Balance and Stationary Distribution of Stochastic Gradient Descent
Reject
Summary: The paper studies the effect of rescaling symmetry in SGD and shows SGD tends to favor solutions with balanced gradient noises. The authors then derive an exact solution of the stationary distribution of a toy model trained by SGD. The derived solution sheds light on problems observed in deep learning such as fluctuation inversion and edge of stability. Strengths: The paper contributes to the understanding of SGD properties. The noise balance theorem is novel and important. The analytical solution as well as the interpretation is interesting and insightful. Weaknesses: The results of the paper are interesting and important, but the writing needs refinement to improve clarity and precision. The conditions under which the results hold are sometimes omitted, leading to confusion. The language should also be made more precise. Minor points: 1. The first paragraph in related works appears to overstate the novelty of the results. Specifically, "our result is the first to derive an exact solution to the stationary distribution of SGD without any approximation" (Line 55-56). This is a strong claim, but it seems inaccurate. There are previous results showing exact solutions of the stationary distribution of SGD (e.g. Liu Ziyin 2021). Corollary I.1 in arXiv:2306.04251 (2023) also states the stationary distribution on a deep learning setup similar to the D=1 model discussed in this paper. Also, the solution given in the paper is for a specific model. These should be made clear. 2. It seems that eq. 15 takes D=1, which has not been stated and thus is confusing. 3. It is unclear why the left figure of Fig. 5 has only two theory lines instead of three. Major points: 1. The related works on symmetry and SGD dynamics are insufficient. There are a few related works that are missing, e.g. arXiv:2309.16932 (2023). 2. The paper has not discussed convergence to the stationary distribution. 
The authors seem to assume convergence to the stationary distribution and use interchangeably the SGD properties and the stationary solution properties (e.g. line 97-98). However, the properties of SGD can be very different from the properties of stationary solutions unless convergence to the stationary solutions is guaranteed. The authors should clarify this. 3. The authors fail to discuss uniqueness of the stationary solutions. For example, it is unclear to me why eq (3) is a necessary and sufficient condition for stationarity. Eq (3) is a critical result in the paper, and it would be better to make it a theorem or corollary. However, since eq (2) cannot be interpreted as a deterministic ODE, the uniqueness condition for a stationary distribution should be justified, especially considering that C1 and C2 are not constant but depend on u and w. 4. The equivalence of SGD bias and weight decay is not rigorous. (line 155-158) The C0 term is not constant but depends on u and w, while the weight decay rate is constant. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. The entire paper is based on rescaling symmetry, but why is tanh, which lacks rescaling symmetry, used in Fig. 3 and 5? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors have listed limitations at the end. The major limitations are the simplicity of the model and lack of experiments on deep neural networks. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. We will answer the weaknesses and questions below. **Weaknesses:** **The results of the paper are interesting and important,...The language should also be made more precise.** Thank you for your suggestions. We will do our best to refine our language in the revision. **Minor points: 1. The first paragraph in related works appears to overstate the novelty of the results...** Thanks for these references. We will include a discussion of these works. In the previous work Liu Ziyin 2021 (actually, Liu et al. 2021), the authors investigated the stationary distribution only near the local minima. In Eq. (13) in Sec. 4.6, they approximate the covariance matrix as a constant matrix near the minima which represents the strength of the noise. Hence, their result is an approximate stationary distribution rather than an exact one. Corollary I.1 of arXiv:2306.04251 only gives a particular solution to the Fokker-Planck equation, assuming strong conditions on the initialization (A3-A4). In contrast, our solution is a general one, enumerating all possible solutions for the problem. In science and mathematics, there is a fundamental difference between the two. For example, the general solution to the Navier-Stokes equation is a difficult open problem in mathematics, whereas particular solutions to it are quite easy to find (e.g., https://en.wikipedia.org/wiki/Landau%E2%80%93Squire_jet). We will restate our contribution as the “first general exact solution to a deep-and-wide linear model with 1d input and output,” which we believe is accurate given the existing literature. **Also, the solution given in the paper is for a specific model. These should be made clear.** We agree that this point should be made more explicit, including in the abstract. **2. It seems that eq. 15 takes D=1, which has not been stated and thus is confusing.** Yes. Eq. (15) is for $D=1$. We will clarify this. **3. It is unclear why the left figure of Fig. 
5 has only two theory lines instead of three.** The dashed lines in Fig. 5 show the upper and lower bounds of the tail of the stationary distribution around l.295-l.300, where the left dashed line corresponds to the case $D=\infty$ while the right line corresponds to the case $D=1$. **Major points: 1. The related works on symmetry and SGD dynamics are insufficient. There are a few related works that are missing, e. g. arXiv:2309.16932 (2023).** Thank you for pointing out this related work. This work studies discrete symmetry. Our focus is on the rescaling symmetry, which is a continuous symmetry. In addition, this work focuses on the structure of Hessian under the mirror symmetry, while we focus on the stationary solutions and distribution of the parameters. We will clarify this. **2. The paper has not discussed convergence to the stationary distribution. The authors seem to assume convergence to stationary distribution and use interchangeably the SGD properties and the stationary solution properties (e. g. line 97-98). However, the properties of SGD can be very different from the properties of stationary solutions unless convergence to the stationary solutions is guaranteed. The authors should clarify this.** Yes. Convergence to the stationary distribution is assumed. The problem of convergence itself is a difficult mathematical problem and beyond the scope of our work (See https://doi.org/10.1007/s10884-018-9705-8 for an example). We will add a discussion on this point. For the stationarity of Eq (3), see the next point. **3. The authors fail to discuss uniqueness of the stationary solutions. For example, it is unclear to me why eq (3) is a necessary and efficient condition for stationarity. Eq (3) is a critical result in the paper, and it would be better to make it a theorem or corollary. However, since eq (2) cannot be interpreted as a deterministic ODE. 
The unique condition for a stationary distribution should be justified, especially considering that C1 and C2 are not constant but depend on u and w.** See the summary rebuttal. The solution to Eq (3) is unique in the sense that for fixed $u$ and $w$, there exists a unique $\lambda$ such that Eq (3) reaches stationarity. We also establish that if this $\lambda^*$ does not change in time, Eq (3) will converge to this fixed point. **4. The equivalence of SGD bias and weight decay is not rigorous. (line 155-158) The C0 term is not constant but depends on u and w, while the weight decay rate is constant.** We meant that they are qualitatively similar because SGD bias is like an “adaptive” weight decay. We will clarify this. **Questions:** **The entire paper is based on rescaling symmetry, but why tanh without rescaling symmetry is used in Fig. 3 and 5?** Here, we consider the tanh network to clarify the application of our insights to cases where the symmetry only approximately holds. For a small initialization, the tanh network can be approximated by a linear network since $\tanh x = x + O(x^2)$. Thus, the rescaling symmetry approximately holds. **Limitations:** **The authors have listed limitations at the end. The major limitations are the simplicity of the model and lack of experiments on deep neural networks.** Thanks for this criticism. We would like to politely emphasize that we do have a nontrivial theory for nonlinear models (Theorem 3.2), and experiments on nonlinear nets (ReLU networks in Fig. 2 and the tanh network in Figs. 3 and 5). We also include a new set of experiments on ReLU net in the uploaded pdf. --- Rebuttal Comment 1.1: Comment: I appreciate author’s response to my review! For the author’s response to my question, I would suggest replacing the experimental results with the ones run with ReLU activation. 
ReLU is widely used, and it still makes little sense to me that the author decided to run the experiments with tanh while the entire paper is based on rescaling symmetry. For the supplement pdf, I have two follow-up questions. 1. From my understanding, Equation 124 should still be considered a stochastic differential equation by nature. Then, how can the author arrive at Corollary D.2, which is a deterministic statement without any probability condition? 2. In the right panel of the new Figure 6, why does the difference in the norm grow again later in time? --- Reply to Comment 1.1.1: Title: Thank you for the reply Comment: Hi! We noticed that you have not responded to our previous clarification. It would be really helpful to us when revising and improving our manuscript if we could hear more from you. We would be happy to hear any additional thoughts or feedback you have on our work and on our previous reply! --- Rebuttal 2: Title: Reply Comment: Thank you for your additional questions. **For the author’s response to my question, I would suggest replacing the experimental results with the ones run with ReLU activation. ReLU is widely used, and it still makes little sense to me that the author decided to run the experiments with tanh while the entire paper is based on rescaling symmetry.** Thank you for your suggestion. We will move the results of tanh nets to the appendix, and use the ones for ReLU nets in the main text to avoid confusion. **For the supplement pdf, I have two follow-up questions.** **1. From my understanding, Equation 124 should still be considered a stochastic differential equation by nature. Then, how can the author arrive at Corollary D.2, which is a deterministic statement without any probability condition?** This is a misunderstanding. There is no stochasticity in the evolution equation in Eq. 124. 
The diffusion terms in the dynamics of $\|u\|^2$ and $\|w\|^2$ cancel each other due to the rescaling symmetry – even if $u$ and $w$ themselves are random. This (rather surprising fact) is essentially what the theorem is trying to prove. Also, please see our last reply to reviewer (56YC) for an alternative and more technical proof of theorem 3.1, which may be easier to understand for some readers. Now, because Eq. (124) has no noise term, one can further construct a deterministic dynamics that strictly upper bounds Eq. 124, which is how one arrives at the corollary. **2. In the right panel of the new Figure 6, why does the difference in the norm grow again later in time?** First of all, Figure 6-right can be a little visually misleading. It looks like the neurons first have a decreasing $|\|u\|^2-\|w\|^2|$ and then an increasing $|\|u\|^2-\|w\|^2|$, but the actual situation is simply that roughly half of the signs of $\|u\|^2-\|w\|^2$ are changing during learning, and so in a log scale, it looks like they are first decreasing and then increasing, even though $\|u\|^2-\|w\|^2$ is quite monotonic in reality. Secondly, $\|u\|^2-\|w\|^2=0$ is not necessarily the stationary point and Corollary D.2 only guarantees convergence to a neighborhood $B$ close to a stationary point, and so there is no reason to expect convergence to any specific value, especially when the model is nonlinear. What happens most often is that the parameters fluctuate around some small neighborhood, which is exactly what Figure 6-right shows. Please ask for additional clarification for any point that is not yet clear. --- Rebuttal 3: Title: reply Comment: Thanks for the reply. We will refine our statements. What you point to is a corner case. Technically speaking, $\lambda^*$ is indeed allowed to be infinity. The meaning is clear. 
When $\lambda^* =\infty$, the system diverges with probability 1 (here, an $O(1)$ neighborhood of infinity simply means infinity, and thus, divergence). This is not uncommon when the model has some sort of continuous symmetry. For example, in the case of scale invariance, it is well-understood that SGD diverges along the degenerate directions (namely, the radial direction) when there is no weight decay (e.g., see https://arxiv.org/pdf/2102.12470)
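The law of balance discussed in this thread can be checked numerically in the simplest setting. The following is an illustrative sketch (our own, not the authors' code) for the 1d two-layer linear model $f(x)=uwx$ with squared loss: the rescaling symmetry gives $u\,\partial\ell/\partial u = w\,\partial\ell/\partial w$ pointwise, so a discrete SGD step satisfies an exact contraction identity for the norm gap $u^2-w^2$.

```python
import random

random.seed(0)

# Model f(x) = u * w * x, loss l = (u*w*x - y)^2.  With residual
# r = u*w*x - y, the gradients are gu = 2*r*w*x and gw = 2*r*u*x, so
# u*gu == w*gw pointwise and one SGD step gives the exact identity
#   (u^2 - w^2)_new = (u^2 - w^2) * (1 - 4 * eta^2 * r^2 * x^2):
# the norm gap contracts monotonically, driven purely by gradient noise.

def sgd_step(u, w, x, y, eta):
    r = u * w * x - y
    gu = 2 * r * w * x   # dl/du
    gw = 2 * r * u * x   # dl/dw
    return u - eta * gu, w - eta * gw

u, w = 1.5, 0.5          # unbalanced initialization: u^2 - w^2 = 2
eta = 0.02
gaps = [abs(u * u - w * w)]
for _ in range(10000):
    x = random.gauss(0.0, 1.0)
    y = x + random.gauss(0.0, 1.0)   # noisy linear target
    u, w = sgd_step(u, w, x, y, eta)
    gaps.append(abs(u * u - w * w))

assert all(b <= a + 1e-12 for a, b in zip(gaps, gaps[1:]))  # monotone decay
assert gaps[-1] < 1e-3 * gaps[0]                            # near-balanced
```

Note that the decay rate is proportional to the noise level: with noiseless labels and an interpolating solution, $r \to 0$ and the gap freezes, matching the gradient-flow conservation law that the reviewers contrast with the SGD behavior.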
Summary: For ReLU networks trained by gradient flows, it is classical that a type of Minkowski inner product between the coefficients of consecutive layers is preserved. The authors demonstrate a monotonicity of the same quantity for stochastic gradient descent in continuous time. They use this to study the invariant distribution of parameters trained by (continuous time) SGD. Strengths: The topic is well-chosen and the results - if correct - are very interesting. Weaknesses: * In its current form, I find the article a bit unpolished and the results not easy to access. Many questions remained unanswered when I tried reading the article (see questions). * Important quantities are defined throughout the plain text. I understand that reading as a reviewer under time pressure is different from normal reading, but for instance in Theorem 3.1, I would have hoped for a more self-contained statement of the relations and properties that $L$, $C$, $\ell$ and the distribution of $x$ have to satisfy. As far as I can tell, the statement is fairly general and not specific to machine learning. * I have serious doubts about Theorem 3.1. It is derived in Appendix A from Itô's Lemma without the diffusion term. This is valid *in expectation over $\theta$*, but not pointwise in $\theta$. Pointwise in $\theta$, there should be white noise in the 'time derivatives', i.e. the ODE identity should be written as an SDE. In the proof, equations (27) and (28) appear to be wrong. * The authors do not pay any attention to whether solutions to the evolution equations exist (or are unique). Problems with regularity can sometimes be alleviated if the distribution in $x$ is sufficiently regular, but I would appreciate a short discussion. Technical Quality: 2 Clarity: 2 Questions for Authors: * The noise matrix $C$ defined below (1) is an uncentered covariance matrix, while the centered version appears to be used below (and I believe that this would be correct). 
* In line 499, can you justify that $\ell$ is a function of only the rank one matrix, and is the function sufficiently smooth to take derivatives? Is it defined on the space of matrices, or at least in an open neighborhood of the set of rank 1 matrices so that the derivatives are well-defined? This is easy to settle in the linear settings in the main article, but a more thorough consideration is required in general. * The notation $\partial\tilde \ell /\partial(u_iw_j)$ is very unfortunate - for a single derivative, there should not be two arguments. This would be a derivative of $\tilde\ell$ in direction $z_{ij}$ on a space of matrices... Can you explain why the modification is needed? And could it be avoided in the statement of Theorem 3.1 to focus on the more intuitive quantities involving the original loss $\ell$? * Where in the proof of Theorem 3.1 is there any indication that $tr(C(u)) = tr(C(w))$? I have looked fairly closely and cannot find a justification for (3). I also assume that the covariance matrix of $u$ depends on $v, w$ and the covariance matrix of $w$ equally depends on all three sets of variables? I am also unsure how precisely this indicates that 'gradient noise between the two layers is balanced'. * What is the distribution of $x$ in the experiment of Figure 1? Please include code for reproducibility or describe the experiments which illustrate your point. * How does (2) imply that 'a single and unique point is favored by SGD'? * Would Theorem 3.1 hold for *all* neurons with leaky ReLU activation? * What do you mean by 'the difference between GD and SGD at stationarity is O(1)'? GD does not have an invariant distribution and the terminal state depends on (random) initialization. Is the message that a gradient flow analysis is only valid for a fixed finite time horizon? * Should the minimizer of (10) be $\beta_2 /(\beta_1+\gamma)$ or should $\beta_1 = \mathbb E[x^2] + \gamma$? * Below (10), what are the $\alpha_i$? 
Is this the same as in (8) for a different model? * I find that $$ \Delta = \min_v C(v) = 4\min_v \left( \alpha_1v^2 - 2\alpha_2v + \alpha_3\right) = 4\left(\alpha_3 - \frac{\alpha_2^2}{\alpha_1}\right) $$ is achieved for $v= \alpha_2/\alpha_1$. This differs from the value proposed in the article by a factor of $4/\alpha_1$. * I cannot find the proof of (12) in the appendix anywhere, and I am unsure what $\beta_1'$ is. More concrete references may be useful. * In Figure 3, what is plotted for $v$ in the depth 1 case? Is it the product of parameters? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback. We will answer the weaknesses and questions below. **Weaknesses:** **Important quantities are defined throughout the plain text. I understand that...** This is a good question. Theorem 3.1 holds generally for an arbitrary network with the rescaling symmetry. The only condition we need is the symmetry of the network, regardless of the distribution of $x$ and the properties of $L$ and $C$. So mathematically, this is a strong result and our results may also be applicable to other fields. This is not a weakness but a strength of our work. In fact, solving ODEs with Lie groups is well-known in the mathematics literature, whereas using Lie groups to solve SDEs is, to our best knowledge, novel even for mathematicians. **I have serious doubts about Theorem 3.1. It is derived in...** This is a misunderstanding. While we did not write it explicitly, the time derivative of each variable is actually stochastic. In the SDE limit, the explicit forms of the evolution equations (Eqs. (27) and (28)) are $du_i=-\frac{\partial L}{\partial u_i}dt+\sqrt{T\mathrm{Var}(\partial\ell/\partial u_i)}\,dW_t,\quad dw_j=-\frac{\partial L}{\partial w_j}dt+\sqrt{T\mathrm{Var}(\partial\ell/\partial w_j)}\,dW_t,$ which can be used to derive Eq. (29). **The authors do not pay any attention to...** This is a good point. In the revision, we will strengthen the theorem by proving the existence and uniqueness of the solutions to Eq. (2). See the summary rebuttal and the pdf. For regularity, we do need some regularity conditions for the matrix $A$ as a function of the data in order for the matrices $C_1$ and $C_2$ to exist and be well-behaved. One sufficient condition would be that the matrix $A$ is a Lipschitz function of $x$. We will clarify this. **Questions:** **The noise matrix $C$ defined below (1) is...** This is a typo and we will fix it. 
**In line 499, can you justify that $\ell$ is a function of...** Here, we need the loss function $\ell$ to be a differentiable function of $u$ and $w$, and the rescaling symmetry implies some additional smoothness properties. To be specific, we can equivalently write the derivatives $\partial\ell/\partial u_i$ and $\partial\ell/\partial w_j$ in terms of the derivative $\frac{\partial\tilde{\ell}}{\partial(u_iw_j)}=\frac{\partial\ell}{\partial u_i}\frac{1}{2w_j}+\frac{\partial\ell}{\partial w_j}\frac{1}{2u_i}$ if we define $\tilde{\ell}(u_iw_j,u_i/w_j):=\ell(u_i,w_j)$. Hence, once the loss function is sufficiently smooth in the parameters $u_i$ and $w_j$ and the parameters are away from $0$, we always have the smooth derivative $\partial\tilde{\ell}/\partial(u_iw_j)$. **The notation $\partial\tilde{\ell}/\partial(u_iw_j)$ is very unfortunate...** Here, the introduction of $\tilde{\ell}$ allows us to derive the balancedness of the norm $\lambda$ and the stationary distribution of the 1d case (see Corollary 3.3). The more intuitive expression of Eq. (2) is provided in Eq. (30), which uses the gradient of the original loss $\ell$ with respect to the parameters $u_i$ and $w_j$. The stationarity directly means the balancedness of the gradient noises on different layers. **Where in the proof of...this indicates that 'gradient noise between the two layers is balanced'.** The proof of $tr(C(u))=tr(C(w))$ is provided in Eq. (30). Here, the definitions of $C(u)$ and $C(w)$ are given by $C(u_i):=\mathrm{Var}(\partial\ell/\partial u_i)$ and $C(w_j):=\mathrm{Var}(\partial\ell/\partial w_j)$. We apologize for the confusion. **What is the distribution of $x$ in the experiment of Figure 1?...** This experiment is easy to reproduce. Here, $x$ is Gaussian, $y=x +\epsilon$, and $\epsilon$ is an independent Gaussian. As long as the variances of $x$ and $\epsilon$ are nonvanishing, one will be able to reproduce this experiment. 
**How does (2) imply that 'a single and unique point is favored by SGD'?** When the r.h.s. of Eq (2) does not vanish, Eq. (2) has a unique fixed point in the degenerate valley of the rescaling transformation. Namely, there exists a unique $\lambda$ such that the transformation $(u,w) \to (\lambda u, \lambda^{-1}w)$ makes Eq. (2) vanish. This is made precise by our updated Theorem 3.1. See the summary rebuttal for more detail. **Would Theorem 3.1 hold for all neurons with leaky ReLU activation?** Yes. For example, the loss function is given by $\ell=\|\theta_1 LReLU(\theta_2^Tx + \theta_3)-y\|^2$. The network has the rescaling symmetry: $\theta_1\to\lambda\theta_1,\theta_2\to\theta_2/\lambda,\theta_3\to\theta_3/\lambda$. Therefore, we still have the law of balance by defining $u:=\theta_1$ and $w:=(\theta^T_2,\theta_3)^T$ in Eq. (2), the same as for the ReLU activation. **What do you mean by 'the difference between GD and SGD at stationarity is O(1)'?...** Here, the O(1) difference means that the difference between the final values of SGD and GD does not depend on the noise strength $T$. Therefore, GD can only approximate SGD over a finite time horizon. **Should the minimizer of (10) be...** This is a typo. The global minimizer should be $v^{*}=\beta_2/\beta_1'$ with $\beta_1':=\beta_1+\gamma$. We will correct this in the revision. **Below (10), what are the $\alpha_i$?...** Yes, the definitions of $\alpha_i$ are the same as in Eq. (8). **I find that...differs from the value proposed in the article by a factor of $4/\alpha_1$.** Thank you for pointing this out. Here, the definition of $\Delta$ is actually given in Eq. (11). We will revise the formula in l.189 as $\Delta:=\alpha_1\min_vC(v)/4$. We apologize for this typo. **I cannot find the proof of (12)...** The definition of $\beta_1'$ is given by $\beta_1':=\beta_1+\gamma$. Eq. (12) is easy to prove and we will include a proof of Eq. (12) in the revision. 
**In Figure 3, what is plotted for $v$ in the depth 1 case?...** Yes, the parameter $v$ is defined as the product of $u$ and $w$. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response and explanation. * I looked at the proof of Theorem 3.1 and I am unable to fill in the gaps where diffusion terms should be. It is possible that the authors chose an unfortunate simplifying notation and that this is correct, but I cannot verify the result right now. I would also be surprised at true monotonicity for a stochastic quantity. In Corollary 3.3, the authors seem to use the result in an ODE fashion to conclude that |u-v|^2 in fact decays to zero. * It is still not entirely clear to me how to show that the loss only depends on the rank 1 matrix and how the 'lifted' function would be defined on the space of matrices (not just the set of rank 1 matrices, which does not form a linear subspace). An explicit construction or formula would be appreciated. I currently maintain my score and certainty. If the authors can provide a convincing answer to the two questions, I will most likely increase the score but drop the certainty, since I do not feel like I will be able to adequately assess its merit during the discussion period. For the record, I find the work interesting and I believe that it deserves to be published, but I have reservations about the correctness (or the presentation, if this is really a question of notation). --- Reply to Comment 1.1.1: Title: reply Comment: **I appreciate the authors' response and explanation.** **I looked at the proof of Theorem 3.1 and I am unable to fill in the gaps where diffusion terms should be. It is possible that the authors chose an unfortunate simplifying notation and that this is correct, but I cannot verify the result right now. I would also be surprised at true monotonicity for a stochastic quantity. 
In Corollary 3.3, the authors seem to use the result in an ODE fashion to conclude that |u-v|^2 in fact decays to zero.** Thanks for this question. We clarify the details below. We chose a simplified notation in the original proof to avoid confusing the readers with too many technical details. More technically, the proof should start with Eq. (1) and the standard SDE formulation. The following is how one would arrive at the result via this route. Also, a core aspect of the result of Theorem 3.1 is exactly that Eq. (2) is an ODE! This is because the diffusion terms cancel each other in the time evolution of $\|u\|^2-\|w\|^2$. The mechanism for this cancellation is essential to the proof: the gradient noise covariance is low-rank due to symmetry. Now, we outline the proof of Theorem 3.1 in a more technical and standard manner. By defining $\theta=(u^T,w^T)^T$, we can rewrite the quantity $\|u\|^2-\|w\|^2$ as $\theta^TB\theta$, where $B:=\begin{pmatrix} I_u & O \\\\ O & -I_w \end{pmatrix}$. The dynamics of $\theta$ is described by Eq. (1): $d\theta/dt=-\nabla_{\theta}L+\sqrt{TC(\theta)}dW/dt$, where $C(\theta):=\mathbb{E}[\nabla_{\theta}\ell\nabla_{\theta}^T\ell]-\mathbb{E}[\nabla_{\theta}\ell]\mathbb{E}[\nabla_{\theta}^T\ell]$. To obtain the dynamics of $\theta^TB\theta$, we use Ito's lemma: If the vector $X_t$ satisfies the stochastic process $dX_t=\mu_tdt+D_tdW_t$, then the dynamics of a function $f(X_t)$ of $X_t$ can be written as $d f(X_t) = \left(\nabla_X^T f \mu_t + \frac{1}{2}\mathrm{Tr}[D_t^T \nabla_X^2 f(X_t) D_t] \right)dt + \nabla_X^Tf(X_t) D_t dW_t$. Hence, the dynamics of $G(\theta)=\theta^TB\theta$ is given by $dG(\theta)/dt=-2\theta^TB\nabla_{\theta}L+2\theta^TB\sqrt{TC(\theta)}dW/dt+T\mathrm{Tr}[C(\theta)B]$. 
To simplify this, we use the infinitesimal form of the rescaling symmetry, which can be expressed as $\ell(\theta,x)=\ell(A\theta,x)$ with the matrix $A:=\begin{pmatrix} (1+\epsilon) I_u & O \\\\ O & (1-\epsilon)I_w \end{pmatrix}$. We expand the equation to first order in $\epsilon$ and obtain $\theta^TB\partial\ell/\partial\theta=0$. Taking the expectation of both sides, we have $\theta^TB\partial L/\partial\theta=0$. In addition, $\theta^TBC(\theta)=\mathbb{E}[\theta^TB\nabla_{\theta}\ell\nabla_{\theta}^T\ell]-\mathbb{E}[\theta^TB\nabla_{\theta}\ell]\mathbb{E}[\nabla_{\theta}^T\ell]=0$. Therefore, we can see $\theta^TB\sqrt{C(\theta)}=0$ since $C(\theta)$ and $\sqrt{C(\theta)}$ share the same null space. By substituting $\theta^TB\partial L/\partial\theta=0$ and $\theta^TB\sqrt{C(\theta)}=0$ into the evolution equation of $G(\theta)$, we have $dG(\theta)/dt=T\mathrm{Tr}[C(\theta)B]$, which is nothing but Eq. (30). Here we can see that the diffusion terms related to $dW$ vanish due to the rescaling symmetry. **It is still not entirely clear to me how to show that the loss only depends on the rank 1 matrix and how the 'lifted' function would be defined on the space of matrices (not just the set of rank 1 matrices, which does not form a linear subspace). An explicit construction or formula would be appreciated.** Due to the rescaling symmetry, the loss function only depends on this rank 1 matrix. To show this, we define a new function $\tilde{\ell}(u_iw_j,w_j,x):=\ell(u_i,w_j,x)$ to reorganize the parameters. Then the derivatives of $\ell$ can be rewritten as $\frac{\partial \ell}{\partial w_j}=\sum_{i}u_i\frac{\partial \tilde{\ell}}{\partial(u_iw_j)}+\frac{\partial \tilde{\ell}}{\partial(w_j)}$ and $\frac{\partial\ell}{\partial u_i}=\sum_{j}w_j\frac{\partial \tilde{\ell}}{\partial(u_iw_j)}$. However, due to the rescaling symmetry $\tilde{\ell}(u_iw_j,w_j,x)=\tilde{\ell}(u_iw_j,\lambda w_j,x)$, we have $\sum_jw_j\partial\tilde{\ell}/\partial(w_j)=0$. 
Since this equality holds for an arbitrary $w_j$, we always have $\partial\tilde{\ell}/\partial(w_j)=0$ for all $j$. Hence, the loss function only relies on the rank 1 matrix $u_iw_j$. We will clarify this in the revision. Please ask for additional clarification for any point that is not yet clear. --- Rebuttal 2: Title: reply Comment: Thank you very much for this very good question. It is true that our definition only defines a rank-1 subspace of the gradient vector of $\tilde{\ell}$, but this is all that is required for us. Since the same argument applies to $C_1$ and $C_2$, we focus on $C_1$. Essentially, we only need the quantity $u^T C_1 u$ to exist, and this only requires a rank-$1$ subspace of $C_1$ to be defined, which is defined as: $u^T C_1 u = \sum_{i,j,k}u_i u_j\mathbb{E}[\partial\tilde{\ell}/\partial (u_i w_k) \partial\tilde{\ell}/\partial(u_j w_k)] = \sum_{i,j,k}\mathbb{E}[ u_i \partial\tilde{\ell}/\partial(u_i w_k) \partial\tilde{\ell}/\partial(u_j w_k) u_j] $ (ignoring the second term of $C_1$ as it follows from the same argument). Meanwhile, the quantity $ u_i \frac{\partial\tilde{\ell}}{\partial(u_i w_k)} = u_i \frac{\partial{\ell}}{\partial(u_i)} \frac{1}{2 w_k} + \frac{1}{2}\frac{\partial{\ell}}{\partial(w_k)} $ (see our very first rebuttal), which is well-defined whenever the gradient with respect to the original loss is well-defined. Since each term of the sum is well-defined, the quantity $u^T C_1 u$ is also well-defined. Similarly, $w^T C_2 w$ is always well-defined. Lastly, the fact that we only require a rank-$1$ condition for the gradient of $\tilde{\ell}$ does not imply that $\nabla \tilde{\ell}$ when viewed as a matrix is rank-$1$. As an example, consider the case $\ell(u_i, w_i)= \sum_i u_i w_i$, and $\tilde{\ell}(Z) = \sum_i Z_{ii}$, such that $Z_{ii}(u, w) = u_i w_k$. Thus, we have that $\nabla_Z \tilde{\ell}(Z) = {\rm identity}$, which is full-rank and so its largest and smallest eigenvalues can still be well-defined. 
--- Rebuttal Comment 2.1: Comment: I will increase my score. I still feel that this paper is not fully ready for publication and requires significant revision, but my concerns about mathematical correctness have been addressed. I thank the authors for engaging with the questions and providing thoughtful replies. --- Reply to Comment 2.1.1: Title: reply Comment: Thanks for the reply. We will do our best to incorporate the feedback from the reviewers to improve the manuscript.
Summary: This paper tries to analyze the specific features that carry the noise of SGD (through a continuous model). The authors show that there is a certain 'law of balance' across the layers when some invariance is assumed. Going further, they derive a toy model to push their study, showing that there is an analytic stationary solution to it. They finally propose a phenomenology related to the role of the noise of SGD when analyzing this precise stationary distribution. Strengths: The idea that a conservation law for the gradient flow implies an asymptotic balancedness condition for the stochastic flow is a good and striking idea. The one-dimensional examples that are given in the text are very pleasant to follow, and they are good exercises to display the ability of the stochastic flow to diverge from the gradient flow. The example given in Eq. (13) is thoroughly analyzed. Weaknesses: The paper presents the following weaknesses: - The law of balance is an interesting phenomenon, yet on closer inspection, it seems that not much can be said generally and that one has to understand it case by case. In one dimension, sure, it is possible to conclude that balancedness will occur at exponential speed, yet in dimension more than $2$, it seems impossible to predict it surely. - I have to say that I was a bit bothered by the general overselling of the paper: - As said before, the law of balance is truly valid asymptotically in one dimension - The stationary distribution that the authors claim to be the first to derive is for a very specific model, which is not standard and does not resemble a diagonal network! - The fact that the stationary distribution can be computed is also very inherent to $1d$ calculation and is simply a recognition of a Pearson diffusion that has already made its way into ML (at least in https://arxiv.org/pdf/2402.01382 and https://arxiv.org/pdf/2407.02322). 
Minor typos/flaws: - l.41: The Fokker-Planck equation is not inherently high-dimensional - l.44: Start a new paragraph here - l.165: The law of balance is not strictly applicable here since $\ell$ is not scale invariant because of the regularization. - Section **4.1 Depth - 0**: I think that $\Delta > 0$ is not currently the "most practical example" since it corresponds to an underparametrized model. Technical Quality: 2 Clarity: 2 Questions for Authors: On top of the questions raised by the weakness section, here are some (more minor) questions: - l.65: Eq (1) is not the usual covariance matrix that is used for stochastic modified equations. - l.95: Eq (3) seems important, but it is difficult to follow where this comes from without intermediate calculations; can the authors develop? - l.110: Same thing for the equation $\lambda^4 = \frac{\langle w^*, C_2 w^* \rangle}{\langle w^*, C_1 w^* \rangle}$ (3): can the authors develop? - Thm 4.2: Are these the only stationary distributions? How does one know to which one the dynamics converges? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: As said before, all conclusions are drawn for models that live intrinsically in one dimension. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. We will answer the weaknesses and questions below. **The law of balance is an interesting phenomenon, yet...** Thanks for raising this point. We stress that the law of balance applies to high-dimensional problems as well. In the high-dimensional case, the convergence to the noise-balanced point is no longer strictly exponential, but the following three strong properties still hold: 1. The fixed point (namely, the noise-balanced point) of Eq. (2) exists and is unique (among all degenerate solutions connected through the rescaling transformation). This is an insightful new theoretical result we will include that strengthens the law. 2. Assuming that this fixed point does not change in time, the dynamics in Eq. (2) converges to this fixed point. This is another new theoretical result we will include that strengthens the law. 3. Assuming that $C_1$ and $C_2$ are full-rank, convergence to an $O(1)$ neighborhood of the fixed point will be exponentially fast. This is a direct corollary of Theorem 3.1, which also enriches and strengthens our result. We also perform an additional set of experiments to validate the corollary. See the rebuttal summary and the attached pdf. **I have to say that I was a bit bothered by the general overselling of the paper:** Thanks for this criticism. We will do our best to tone down our statements to ensure that they are as accurate as possible. **As said before the law of balance is truly valid asymptotically in one-dimension** As argued above, we stress that our law of balance (Theorem 3.1) is generally valid for a network of arbitrary dimension, and this message is strengthened by the newly added part and corollary to Theorem 3.1. **The stationary distribution..., which is not standard and does not resemble a diagonal network!** We will make it explicit that the stationary distribution is for a special model. 
Also, we have never claimed the model to be a diagonal network (and diagonal networks are no more realistic than our model). We also never claimed the model to be “standard.” Our claim actually has a very restrictive qualifier: that it is the first “general exact” solution of this specific model. We will emphasize these points. **The fact that the stationary distribution can be computed is also very inherent to 1d calculation and is simply a recognition of a Pearson diffusion that has already made its way into ML (at least in https://arxiv.org/pdf/2402.01382 and https://arxiv.org/pdf/2407.02322).** It is true that 1d distributions are easier to derive, but our key contribution is not to derive a 1d distribution; it is to reduce an arbitrary-dimensional problem to 1d, and this is a special property of the SGD noise. This is not trivial and not an overclaim. Also, none of these references derived exact solutions. To be specific, at the beginning of Sec. 2.3 of the reference arXiv: 2402.01382, the authors applied the decoupling approximation to approximate the covariance of the gradient noise near the minima. In Lemma 4.4 of the reference arXiv: 2407.02322, the authors assume the strength of the noise to be a constant near the minima. In comparison, we give an exact solution for arbitrary initial parameters under the law of balance, which is not restricted to points near the minima. We will include these references and clarifications in the revision. **Minor typos/flaws:** **l.41: Fokker-Planck is not inherently high-dimensional** When the dimension of the dynamical variable is higher than 1, the Fokker-Planck equation is inherently high-dimensional. We will clarify this. **l.165: The law of balance is not strictly applicable here since $\ell$ is not scale invariant because of the regularization.** What we want to derive is the dynamics of $C$, which can be decomposed into two parts: (a) the contribution from the symmetric part of the loss, and (b) the contribution from weight decay. 
These two parts can be analyzed separately. The contribution from the symmetric part follows directly from the law of balance. **Section 4.1 Depth - 0: I think that $\Delta>0$ is not currently the "most practical example" since it corresponds to an underparametrized model.** This is a misunderstanding, based on the questionable assumption that overparametrized models reach a zero training loss and are more practical. For example, large language models often reach a training loss far above zero and are certainly underparametrized. For conventional CV models, it is also almost never the case that they reach a zero training loss. As long as the training loss is above zero, it qualitatively corresponds to the case $\Delta >0$. **Questions:** **l.65: Eq (1) is not the usual covariance matrix that is used for stochastic modified equations.** This is a typo. It should be $C(\theta)=\mathbb{E}[\nabla\ell(\theta)\nabla\ell(\theta)^T]-\mathbb{E}[\nabla\ell(\theta)]\mathbb{E}[\nabla\ell(\theta)]^T$. **l.95: Eq (3) seems important but it is difficult to follow where this comes from without intermediate calculations, can the authors develop?** This is a rewriting of the right-hand side of Eq. (2), obtained by setting it equal to $0$. **l.110: Same thing for the equation $\lambda^4=\langle w^{\*},C_2w^{\*}\rangle/\langle w^{\*},C_1w^{\*}\rangle$ (3): can the authors develop?** This is likewise a rewriting of the right-hand side of Eq. (2). Following the steps in l.108 and l.109, we replace $u$ by $\lambda u$ and $w$ by $\lambda^{-1}w$. Then, by setting Eq. (2) equal to $0$, we obtain l.110. **Thm 4.2: Are these the only stationary distributions? How to know to which one it converges?** Yes. Theorem 4.2 enumerates all the stationary distributions. The solution to which the network converges depends on the initial parameters. If we initially choose a set of parameters such that $v>0$, then the probability distribution of the parameters converges to the solution $p_{+}$. 
For initial parameters with $v<0$, the distribution converges to the solution $p_{-}$. --- Rebuttal Comment 1.1: Title: After Rebuttal Comment: I thank the authors for the good rebuttal, which has partially answered my concerns. For this reason I decide to increase my score by one, still thinking that the article lacks some convincing examples to strengthen its claims. --- Reply to Comment 1.1.1: Title: reply Comment: Thanks for your reply and update. We would really appreciate it if you could be more specific regarding "convincing examples." To be specific, which claims in the paper do you think require more examples to strengthen? Knowing this would really help us improve our work.
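To complement the 1d discussion in this thread, here is a minimal simulation (our own sketch, not the paper's code). For the model $f(x)=uwx$ trained with SGD on squared loss, one can check algebraically that a single step with residual $r=uwx-y$ multiplies the imbalance $u^2-w^2$ by the factor $1-4\eta^2r^2x^2$, so the imbalance shrinks toward the balanced point $|u|=|w|$, even though gradient flow would conserve $u^2-w^2$ exactly:

```python
# 1-d illustration (our own sketch): SGD on l = (u*w*x - y)^2 contracts the
# imbalance u^2 - w^2 by the factor (1 - 4*eta^2*r^2*x^2) at each step, with
# r = u*w*x - y, so the norms of the two layers balance over training.
import numpy as np

rng = np.random.default_rng(1)
u, w, eta = 2.0, 0.1, 0.02            # deliberately unbalanced initialization
imbalance0 = abs(u**2 - w**2)

for _ in range(5000):
    x = rng.normal()
    y = 0.5 * x + 0.1 * rng.normal()  # noisy linear targets
    r = u * w * x - y                 # residual
    u, w = u - eta * 2 * r * x * w, w - eta * 2 * r * x * u

print(imbalance0, abs(u**2 - w**2))   # the imbalance only shrinks during training
```

The contraction factor lies in $(0,1)$ for any reasonable step size, which is why discrete-time SGD breaks the gradient-flow conservation law in the direction of balance.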
null
null
Rebuttal 1: Rebuttal: We thank all the reviewers for their constructive feedback, which has helped us greatly improve our manuscript. We are encouraged to see that all reviewers agree that our contribution is "good." To address the concerns of the reviewers, we will make the following additions and changes to the manuscript. See the attached pdf for the updated theorem statements and additional experiments. We will include their proofs in the revised manuscript. 1. A strengthened version of Theorem 3.1, which states that the fixed points of Eq. (2) are unique (**HVit, 56YC, mL9n**), and that if we assume that this fixed point does not change in time, the dynamics in Eq. (2) will converge to this fixed point (**HVit, 56YC, mL9n**). - This fixed point is unique in the following sense: when the r.h.s. of Eq. (2) does not vanish, Eq. (2) has a unique fixed point in the degenerate valley of the rescaling transformation. Namely, there exists a unique $\lambda$ such that the transformation $(u,w) \to (\lambda u, \lambda^{-1}w)$ makes Eq. (2) vanish. The intuition behind the uniqueness of the fixed point is this: in Eq. (2), the term $u^T C_1 u$ is a monotonically increasing function of the norm of $u$, while $-w^T C_2 w$ is a monotonically decreasing function of the norm of $w$. When one of these two terms is nonvanishing, their sum must cross zero at a unique point (possibly at infinity) along the rescaling transformation. 2. A corollary that states that if $C_1$ and $C_2$ are full-rank, Eq. (2) will converge to an $O(1)$ neighborhood of the fixed point exponentially fast (**HVit**). 3. 
A more accurate restatement of our second main contribution as the “first **general** (instead of “particular”) **exact** solution to a specific deep-and-wide linear model with 1d input and output for an arbitrary initialization.” This is accurate because, in comparison with prior works, our solution does not rely on any approximations (**HVit**) and is not a particular solution that is only applicable to special initialization conditions (**mL9n**). 4. An additional set of experiments on linear and ReLU nets, which validates the corollary that in high dimensions, the convergence to a neighborhood of the noise-balanced solution is exponential. See Figure 6 (**HVit**). 5. Additional discussion of technical points such as the condition on the regularity of the data and the smoothness of the loss function (**56YC**). 6. Discussion of related references (**HVit**, **mL9n**). We look forward to your feedback on these revisions and are open to further discussion to enhance the manuscript. Pdf: /pdf/8166d0d6813e12a9328e40cb840136fc08e3caaa.pdf
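The uniqueness argument in item 1 can be made concrete with a small numerical sketch (our own construction, with random PSD matrices standing in for the noise covariances $C_1$, $C_2$): along the rescaling direction $(u,w)\to(\lambda u,\lambda^{-1}w)$, the balance function $\lambda^2 u^TC_1u - \lambda^{-2}w^TC_2w$ is strictly increasing in $\lambda$, so it has exactly one positive root, $\lambda^4=(w^TC_2w)/(u^TC_1u)$:

```python
# Sketch (our own illustration): with PSD matrices C1, C2 standing in for the
# gradient-noise covariances, the balance condition along the rescaling
# direction has a unique positive root, given in closed form by
# lambda^4 = (w^T C2 w) / (u^T C1 u).
import numpy as np

rng = np.random.default_rng(2)
d = 5
A, B = rng.normal(size=(d, d)), rng.normal(size=(d, d))
C1, C2 = A @ A.T, B @ B.T                    # random PSD covariances
u, w = rng.normal(size=d), rng.normal(size=d)

def balance(lam):
    # value of u^T C1 u - w^T C2 w after the rescaling (u, w) -> (lam*u, w/lam)
    return lam**2 * (u @ C1 @ u) - lam**-2 * (w @ C2 @ w)

lam = ((w @ C2 @ w) / (u @ C1 @ u)) ** 0.25  # closed-form fixed point
# Monotonicity makes the root unique: negative below it, zero at it, positive above it.
print(balance(0.5 * lam), balance(lam), balance(2 * lam))
```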
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
MTGS: A Novel Framework for Multi-Person Temporal Gaze Following and Social Gaze Prediction
Accept (poster)
Summary: This paper handles the problem of predicting human social labels and gaze heatmaps simultaneously for all people in the input image sequence, which improves the accuracy of gaze following based on human social cues. Specifically, they first calculate person tokens through a person module, and then design an interaction module to extract the interactive relationships between humans and the scene. The authors prove the effectiveness of the proposed model on multiple datasets, and verify the role of each part of the model through ablation experiments. A new dataset suitable for both subtasks is also annotated. Strengths: 1. The experimental section of this article is very detailed, and good results have been achieved in comparison with different methods on multiple datasets, proving the universality of this method. 2. The authors incorporate human social cues into gaze prediction tasks for the first time to obtain interaction relationships between different individuals, which is an interesting topic. Weaknesses: 1. The paper lacks an introduction to the Pairwise Instance Generator module in the model. What is its function? There is also a lack of explanation of the number of Interaction Blocks. What impact does changing $B$ have on the performance of the model? 2. The symbols used in the article are confusing and difficult to understand. For example, in “Each block then processes the set of output person tokens $P_{1:N_n,1:T}^{o,b-1}$ and output frame tokens $f_{1:T}^{o,b-1}$ from the previous block”, why not indicate the range of $b$? Too many superscripts and subscripts in the formulas and expressions can also easily confuse readers in the introduction to the Interaction Module. 3. There is a lack of visual results showing in which scenarios the proposed model has an advantage over the compared methods. Technical Quality: 3 Clarity: 1 Questions for Authors: 1. 
Changing the name of the orange box in the Person Module of Fig. 2 to Temporal Gaze Processor may better match the description in the paper. 2. In "$I_{ps}^b$ It is implemented as a single Transformer layer with cross-attention", "It" may be a redundant word. 3. Does $f_t^{o,b}=V_b(f_t^{p,b})$ in line 198 denote the output frame tokens for block $b$, rather than block $B$, in Person-Scene Interaction? 4. Where is the "temporal transformer architecture" in Fig. 1? Confidence: 5 Soundness: 3 Presentation: 1 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback, and for their positive evaluation of our paper. We answer their questions and comments below. **Clarification of elements** The Pairwise Instance Generator was drawn in the figure to illustrate that the social gaze decoders take a pair of person tokens as input (L234-235). We will clarify this in the text. $f_t^{o,b}$ indeed refers to the output tokens for the specific block $b$. We use $B=4$ Interaction Module blocks as defined in L318-319. Our Interaction Module is inspired by ViT-Adaptor, which found this value to give the best performance. We did not conduct additional ablations for different values of $B$. We included multiple notations for completeness when describing the Interaction Module, but will work on simplifying the notation so that it is easier to follow. The visualizations in Figure 1 are from our proposed multi-person temporal transformer model (Ours in results). We also thank the reviewer for pointing out the typos; we will correct them in the final version of the paper. **Qualitative comparisons** We provide qualitative comparisons of our method against other methods in the attached pdf with the overall response. We see that our model performs better overall, accurately capturing people’s gaze targets and social gaze behavior. This is despite the complexity of the scenes, with obscured eyes, multiple salient targets, varied settings (indoor, outdoor) and different age groups (children, adults). We will include these examples in the final version of the paper. --- Rebuttal Comment 1.1: Comment: Thanks for your response. I will keep my rating. --- Reply to Comment 1.1.1: Comment: Thank you for the response! We appreciate your feedback and positive evaluation of our work.
Summary: The paper proposes a novel framework which solves multiple gaze prediction tasks (gaze heatmap, in-out frame classification, social gaze classification) for multiple people in one pass simultaneously. It also contributes a new dataset by extending existing datasets' annotations. Comprehensive experiments were conducted on multiple datasets (GazeFollow, VideoAttentionTarget, ChildPlay, VideoCoAtt, UCO-LAEO), which show superior results compared to existing methods. Strengths: 1. The paper's experimental setup is comprehensive and supports the central claim of the paper of unifying several gaze prediction/classification tasks. 2. Sufficient ablation studies were also conducted to investigate the contribution and importance of the various submodules. 3. The extension of existing datasets with more annotations is also a good contribution to the community. Weaknesses: The paper's technical contribution is incremental. The framework design, while novel, does not have any significant improvements over prior works. The contributed dataset is also an extension of existing datasets. However, as pointed out in the rebuttal, the efficient construction process of the dataset is also a strength of this paper. This is a good paper. Technical Quality: 3 Clarity: 4 Questions for Authors: One of the paper's claimed contributions is the temporal aspect of gaze. However, in the proposed model, there is no clear design for this beyond using a simple sequence of frame tokens, and temporal information is aggregated with a standard attention mechanism. The main discussion of temporal information is relegated to the Appendix. Will the authors consider reframing the importance of temporal information in the title and abstract, or move the temporal information discussion to the main paper? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback, and for noting our submission as a good paper. We answer their questions and comments below. **Novelty of method** We refer the reviewer to our discussion on the novelty of our architecture in the overall response. **Dataset is an extension of existing datasets** We appreciate the reviewer highlighting our dataset as a good contribution to the community. We would like to clarify however, that while VSGaze directly benefits from existing datasets (e.g. by having gaze following annotations), it is not just a concatenation and straightforward extension of these datasets. Indeed, annotating videos with gaze following and social gaze labels is a time-consuming process, which is why existing datasets typically annotate for a subset of these labels and are small in size. Hence, we wanted to come up with a **scalable method to build a dataset with *all of these labels*, that was the *largest* of its kind, and *diverse* in terms of scene content**. A key insight was that people’s head bounding boxes could be used as a semantic entity to unify annotations across multiple gaze datasets. However, the considered **datasets only annotated a subset of people** in the scene, whereas **all people’s head bounding boxes are required** to obtain all possible positive and negative social gaze labels. Hence, we leveraged a strong head detector to extend the head bounding box annotations in these datasets and manually verified the detections. This extended set of head bounding boxes was then coupled with the existing gaze annotations to extend and unify them across datasets. The detailed process for the construction of VSGaze is given in L280-308 and supplementary B. Further, the method for constructing VSGaze can be extended in the future to obtain more gaze annotations. 
For instance, we focus on people’s heads as a semantic entity for unification, but could leverage strong segmentation methods such as SAM [1] to unify gaze annotations using other semantic entities such as people’s hands. Also, future dataset annotation efforts can focus on a subset of gaze annotations such as gaze following labels, and can then follow our method to extend them with social gaze annotations. **Temporal information is a simple aggregation of frame tokens** We would like to clarify that temporal information is **incorporated in our architecture by aggregating *person tokens* across time using self-attention**. While self-attention itself is fairly standard and has shown strong performance for temporal tasks such as action recognition [2], designing how to use it for a specific task and domain is a research question. Previous works on temporal gaze following [3,4] performed a frame level aggregation of temporal information, achieving limited to no success. On the other hand, **we hypothesized that the broader scene tends to remain relatively static for a short temporal window**, and instead focused on **modeling person-level temporal information** to account for **gaze direction as well as gaze target dynamics**. To do this, we incorporate temporal aggregation of person tokens at multiple levels of the architecture. As discussed in the paper (supplementary D.1) and the overall response, our architecture benefits from the addition of temporal information, with most improvements for shared attention in particular (Table 5). It also learns to account for behaviors such as blinking (Figure 5) which are not captured by current metrics. Nevertheless, more research, datasets and metrics are needed to fully exploit this information. **Moving discussion on temporal information to the main paper** We moved the discussion on temporal information to the supplementary due to space limitations. 
Based on the reviewer’s suggestion, we plan to move the discussion to the main paper by instead moving Section 4.2 (training and validation details) and parts of Section 4.1 (VSGaze construction details) to the supplementary. [1] Kirillov et al. (2023). Segment Anything. ICCV. [2] Tong et al. (2022). VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training. NeurIPS. [3] Chong et al. (2020). Detecting attended visual targets in video. CVPR. [4] Miao et al. (2023). Patch-level gaze distribution prediction for gaze following. WACV.
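The person-level temporal aggregation discussed in this rebuttal can be summarized with a minimal sketch (our own construction with hypothetical shapes and names, not the paper's implementation): each person's tokens self-attend across the $T$ frames, and person tokens then cross-attend to the frame tokens for person-scene interaction.

```python
# Minimal sketch (our own construction, hypothetical shapes): person tokens
# self-attend over time, then cross-attend to frame tokens, mirroring the
# person-level temporal aggregation and person-scene interaction described
# above. Not the paper's implementation.
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention over the sequence axis
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

rng = np.random.default_rng(0)
N, T, D = 3, 8, 16                       # people, frames, token dimension
person = rng.normal(size=(N, T, D))      # one token per person per frame
frames = rng.normal(size=(T, D))         # one token per frame

# Temporal self-attention: each person attends over its own T frames.
person = attention(person, person, person)

# Person-scene cross-attention: person tokens query the frame tokens.
f = np.broadcast_to(frames, (N, T, D))
out = attention(person, f, f)
print(out.shape)  # (3, 8, 16)
```

In a real model the queries, keys, and values would pass through learned projections; the sketch keeps only the attention pattern to show where the temporal axis enters.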
Summary: This paper presents a novel framework for multi-person temporal gaze following and social gaze prediction. The authors propose an architecture that jointly predicts gaze targets and social gaze labels for all people in a scene. It uses a transformer-based model that processes both frame tokens and person-specific tokens to capture gaze information and interactions. They also build a dataset called VSGaze, which unifies and extends annotations across multiple existing gaze following and social gaze datasets. This allows joint training on multiple tasks and datasets. The proposed model achieves state-of-the-art results for multi-person gaze following and competitive performance on social gaze prediction tasks. Strengths: - Overall, this is a very solid study, and the comprehensive approach of organizing the data and training a large-scale unified model is quite logical. - There is a noticeable improvement in performance across multiple tasks. Experiments are conducted thoroughly from various angles. Weaknesses: - Conversely, the network's structure is relatively straightforward, combining existing approaches, and its novelty as a method is not necessarily significant. - The model's generalization capability to unknown domains and datasets has not been evaluated. While preparing the data is challenging, it is particularly crucial for methods involving humans. Technical Quality: 3 Clarity: 3 Questions for Authors: It seems challenging to clearly state, based on the current results, whether training models that combine multiple tasks is useful even with a small amount of data. Is there anything we can say about the relationship between data quantity and joint training? Since the effect of large-scale training with multiple datasets itself is not the core argument of this paper, I think it would be more insightful if this aspect could be carefully separated from the observations. 
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The discussion in the appendix seems to be conducted appropriately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback, and for highlighting our paper as a very solid study. We answer their questions and comments below. **Novelty of method** We refer the reviewer to our discussion on the novelty of our architecture in the overall response. **Generalization to new domains/datasets** We agree with the reviewer that having a model that generalizes well to different domains and datasets is important. Indeed, this was a main motivation behind creating VSGaze, which extends and combines datasets containing scenes from diverse settings such as daily activities (VideoCoAtt), talk shows (VAT), childcare (ChildPlay) etc. We show that our model trained on VSGaze performs well on each of its constituent datasets in Table 4. On the other hand, training on a specific domain results in poorer generalization to other domains. For instance, we evaluated our VAT trained model on ChildPlay and found that it generalizes poorly (Dist: 0.119, $AP_{IO}$: 0.991, $F1_{LAH}$: 0.624, $F1_{LAEO}$: 0.363, $AP_{SA}$: 0.194) compared to the model trained on VSGaze (Dist: 0.113, $AP_{IO}$: 0.993, $F1_{LAH}$: 0.651, $F1_{LAEO}$: 0.436, $AP_{SA}$: 0.216). We leave the investigation of generalization to unseen domains and datasets for future work. **Data quantity vs joint training** This is an interesting and important question that we discuss under ‘Impact of VSGaze’ in L404-410. We find that fine-tuning models on specific datasets (Table 3) typically results in better performance compared to training on VSGaze (Table 4). This is because the model can learn dataset specific priors, ex. more LAH cases in VAT compared to ChildPlay (Table 1). For instance, on VAT, Gupta [1] has a distance score of 0.138 when trained on VSGaze, compared to a score of 0.134 when directly fine-tuned on VAT. Also, our proposed model has a distance score of 0.112 when trained on VSGaze, and a score of 0.105 when fine-tuned on VAT. 
Hence, while we may expect models to benefit from more data, accounting for different priors and statistics across datasets (especially given VSGaze’s diversity as discussed above) brings additional challenges. We will detail this aspect more in the final version of the paper. [1] Gupta et al. (2022). A modular multimodal architecture for gaze target prediction: Application to privacy-sensitive settings. CVPRW.
Summary: This paper focuses on social gaze prediction in videos. An approach based on ViT has been proposed, combining three modules: a person module, an interaction module, and a prediction module. The authors also summarised the current shortcomings of the existing datasets and have introduced a new dataset comprising social gaze interactions such as looking at someone, looking at each other, and shared attention. Strengths: The main strengths of the paper are: - Focusing on video-level gaze prediction, particularly on events like shared attention, which goes beyond gaze target estimation in images. - Introducing a new dataset for the target problem. - Overall, the paper is well-written and well-organized. Weaknesses: The main weakness of the paper is the results. If you refer to Tables 2 and 3, the differences in terms of distance and other metrics are very small, making it difficult to assess the benefits of the proposed approach. In some cases, the differences in terms of distance range between 0.01 and 0.001. This makes the results questionable in my view. The authors may need to run statistical tests to demonstrate that such differences are significant, or these improvements may be by chance given that the differences are negligible in most cases. Regarding the methodology, I am not sure if papers like [1] and [2] were available at the time of submitting this paper, but it is hard to see how the proposed approach is competitive compared to these aforementioned approaches in terms of both methodological novelty and performance. On page 3, the authors mention ".. both methods do not address the gaze following task." However, the proposed method and dataset do not address the gaze following task either. Indeed, the new dataset only includes a subset of the gaze communication cues presented by Fan et al. at ICCV 2019. Why hasn't this dataset been considered in the evaluations given its relevance? 
[1] Sharingan: A Transformer Architecture for Multi-Person Gaze Following [2] A Unified Model for Gaze Following and Social Gaze Prediction Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Could you please demonstrate that the differences between the obtained values are significant? 2. Could you please discuss your contributions with respect to the existing transformer-based approaches for gaze following and social gaze estimation? 3. Could you please explain why you selected these methods for comparison specifically, and why you chose the datasets like GazeFollow and VideoAttention, most of which do not even have annotations for social gaze cues? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors have discussed the limitations and broader impact satisfactorily. In addition, they have provided detailed visualisations to offer insight into failure cases. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback, and for raising valuable discussion elements. We answer their comments and questions below. **Significance of improvements** We agree with the reviewer that differences of 0.001 can be negligible, however, differences in 0.01 are significant as discussed below: - Distance: This metric for gaze following computes the distance between the ground truth and predicted gaze point on a normalized 1x1 grid. For an HD image of size 1080x720, a difference of 0.01 results in a difference of up to 11 pixels in image space on average (and often more). This can result in selecting a completely different target, ex. a nearby face. - F1: This metric for social gaze computes the harmonic mean of the precision and recall scores. A difference of 0.01 can correspond to a difference of 3% in either precision or recall. Following the suggestion of the reviewer, we also re-trained our model with 5 different seeds on VSGaze, and obtained a standard deviation of 0.0006 for distance, 0.0017 for $AP_{IO}$, 0.0007 for $F1_{LAH}$, 0.0048 for $F1_{LAEO}$ (LAEO has significantly less positives, Table 1) and 0.0020 for $AP_{SA}$. We further include qualitative examples in the attached pdf with the overall response, where we see that our model outperforms the baselines in various complex scenes. **Comparison to existing transformer based architectures** We discuss existing transformer based architectures in L93-100, where we mention that they follow a DETR style approach for simultaneously predicting people’s head bounding box and gaze target. While their approach is interesting, they are prone to head detection errors (examples in attached pdf with overall response). And as their evaluation relies on matching detected and annotated heads (and performance is only reported on the matched heads), it is difficult to compare performance against them. Also, these methods do not address social gaze prediction. 
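The pixel-space and F1 arithmetic above can be checked directly (our own back-of-the-envelope computation; the F1 operating point is hypothetical and chosen only for illustration):

```python
# Back-of-the-envelope check (our own arithmetic): a 0.01 gap in the
# normalized distance metric rescales to roughly 11 pixels along the width
# of a 1080x720 frame, and near a balanced operating point a ~2-3 point
# change in precision moves F1 by about 0.01.
width, height, gap = 1080, 720, 0.01
print(gap * width, gap * height)  # 10.8 and 7.2 pixels

def f1(precision, recall):
    return 2 * precision * recall / (precision + recall)

# Hypothetical operating point (not from the paper), for illustration only.
print(round(f1(0.62, 0.60) - f1(0.60, 0.60), 4))  # ~0.01
```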
We thank the reviewer for references [1, 2], which are interesting and contemporary to our work. These were not available at the time of our submission. Nevertheless, our work differs notably from these studies. Firstly, [1] focuses solely on the gaze following task. Their architecture is based on a ViT and treats person and frame tokens equally in the self-attention layers, which, as noted in their paper, can be limiting. In contrast, our architecture allows the processing of person and frame tokens through separate transformers, facilitating interactions via cross-attention. In particular, this separation also allows for temporal processing of person tokens at multiple levels of the architecture. [2] extends [1] by leveraging a frozen gaze following model, and adding graph layers to model interactions and predict social gaze. However, this approach has several drawbacks: - As shown in [2], the graph layers provide limited benefit, which the authors attribute to over-smoothing, a known issue in graph neural networks. Our transformer-based architecture avoids this problem and benefits from the Interaction Module components, as demonstrated in our ablations in Section D.2. - Since [2] freezes the gaze following backbone during training, the gaze following and social gaze tasks cannot complement each other during training. Our analysis in L393-403 demonstrates that these tasks provide mutual benefits, and jointly training our architecture with both gaze following and social gaze losses yields the best performance. In terms of performance, our method performs comparably to [1] for gaze following on VAT (Dist. 0.105 vs 0.107) and improves over [2] for LAEO on UCO-LAEO (AP: 0.974 vs 0.946). We are unable to compare performance for other tasks against [2] as they use different performance metrics. We leave a detailed quantitative comparison for future work.
**Method does not address gaze following, reasoning behind choice of datasets and compared methods** We would like to clarify that **our method *does* address the gaze following task**. We discuss the gaze heatmap decoding step in L218-230 and provide results for gaze following using the distance metric (abbreviated Dist. in the tables). Indeed, our method **achieves state-of-the-art multi-person gaze following performance on standard benchmarks as indicated in Tables 3a,b,c**. We compare performance against other gaze following methods whose predictions can also be post-processed for social gaze. Fan et al. (2019) and Chang et al. (2023) do not address gaze following, which is an important task for identifying the target of shared attention and improves performance for social gaze prediction when trained jointly (L393-403). Further, recent methods for gaze following [3] have been shown to outperform task-specific social gaze models (L126-127). Also, Fan et al. (2019) predict a single social gaze 'state' for a person, not allowing for simultaneous social gaze behaviors, e.g., LAEO and SA (L137-139). This issue extends to their dataset, which annotates a single social gaze state for a person at a given moment (L276-279). In the attached pdf, we show examples from the dataset where this annotation protocol fails. These issues were also reported in [2,4]. Besides this serious issue, VSGaze significantly differs from Fan et al. (2019). **Content-wise, it is much more diverse**, including scenes from TV shows, daily activities and childcare settings. It is also **more complex: in terms of number of people** in the scene (3.43 vs 2.14) and **size** (5.8x larger). [1] Tafasca et al. (2024). Sharingan: A Transformer Architecture for Multi-Person Gaze Following. CVPR. [2] Gupta et al. (2024). A Unified Model for Gaze Following and Social Gaze Prediction. FG. [3] Chong et al. (2020). Detecting attended visual targets in video. CVPR. [4] Belen et al. (2023).
Temporal Understanding of Gaze Communication with GazeTransformer. Gaze Meets ML Workshop at NeurIPS. --- Rebuttal Comment 1.1: Title: reply: Rebuttal by Authors Comment: I have raised my score to 'weak accept' after considering all the feedback and responses. However, as another reviewer also noted, I am still not fully convinced about the significant methodological novelty of this architecture compared to previous ones. Nonetheless, this is a well-executed paper, and introducing a new dataset is a plus. --- Reply to Comment 1.1.1: Comment: Thank you for the response! We appreciate you raising your score and noting our submission as a well-executed paper.
Rebuttal 1: Rebuttal: We thank the reviewers for their feedback on our paper, which presents the following contributions: - A novel **temporal, multi-person architecture** that jointly models gaze following and social gaze prediction. - A new **dataset, VSGaze**, which extends and unifies annotations across multiple gaze following and social gaze datasets. As by far the largest and most diverse dataset of its kind, it opens new avenues for gaze modeling. - New evaluation protocols and metrics for assessing semantic gaze following and social gaze performance. We are pleased that the reviewers found our central premise of unifying several gaze prediction tasks into a single framework logical (5XsF). They appreciated our novel architecture that, for the first time, encodes human interaction relationships for gaze tasks (FWeu). They also appreciated our extensive experiments (5XsF, poG5, FWeu), which demonstrated (1) that our architecture can successfully model all tasks jointly, and that **this new architecture and joint training improved performance across multiple tasks and datasets compared to other methods**; (2) the importance and contribution of the different sub-modules to the overall performance. They further noted the value of our proposed dataset (7cdN, poG5) as a good contribution to the community and appreciated our social gaze metrics for characterizing semantic gaze performance (7cdN). Lastly, they described our paper as well-written and organized (7cdN). A common concern among reviewers (7cdN, 5XsF, poG5) was the novelty of our architecture.
We would like to emphasize that our architecture was developed to address several complex research questions: **(1) How to account for multiple people in the scene?** As discussed in lines 45-47, this challenge is inherently more complex than single-person gaze following, as our architecture has to process the scene only once, and to capture salient items for all individuals while retaining the ability to decode each person's gaze target. Among the cited gaze-following papers, only [1] presents a comparable multi-person architecture, but as it processes each person separately, it fails to account for interactions (see discussion in lines 89-100). In contrast, by employing specific person tokens to encode individuals and an Interaction Module, our architecture is able to capture interactions between people and the scene (encoded as frame tokens), and has achieved state-of-the-art results for multi-person gaze following (Tables 3a, 3b, 3c). **(2) How to jointly model multiple gaze tasks?** Predicting the gaze target for each individual while simultaneously predicting social gaze between pairs of people presents a significant challenge, as gaze targets are represented as heatmaps, whereas social gaze is a binary label. Jointly modeling these diverse tasks has not been previously attempted. Our architecture overcomes this challenge thanks again to our token-based representation. It leverages person and frame tokens for gaze heatmap prediction through the conditional DPT decoder, and pairs of person tokens for social gaze prediction through the social gaze decoders. This approach achieves strong performance across all tasks without compromising on any of them (Table 2). **(3) How to incorporate temporal information?** Incorporating temporal information is particularly challenging due to the small size of gaze datasets. Only two other gaze-following methods [2, 3] have attempted this, with limited success and without accounting for gaze direction dynamics. 
In contrast, our architecture addresses this challenge by incorporating temporal information at multiple levels, from gaze direction dynamics to gaze heatmap prediction. Inspired by ViT-Adapter [4], we freeze the ViT layers during training on VSGaze, allowing frame tokens to adapt through interactions with person tokens. This approach leverages temporal information and improves performance, especially for shared attention tasks (Table 5). Our qualitative examples (Figure 5) demonstrate that our temporal model captures behaviors such as blinking, but unfortunately such interesting behaviors are not accounted for by current metrics. Despite these advancements, as discussed in supplementary Section D.1, further research, datasets, and metrics are needed to fully harness the potential of temporal information. In supplementary Section F, we show that our architecture also supports incorporating person-level auxiliary information such as their speaking status. We will include this discussion and all other feedback from reviewers in the final version of the paper. [1] Jin et al. (2021). Multi-person gaze-following with numerical coordinate regression. FG. [2] Chong et al. (2020). Detecting attended visual targets in video. CVPR. [3] Miao et al. (2023). Patch-level gaze distribution prediction for gaze following. WACV. [4] Chen et al. (2022). Vision transformer adapter for dense predictions. ICLR. Pdf: /pdf/91a7fca0cc419c8b52ca3f032676ffe55291a267.pdf
NeurIPS_2024_submissions_huggingface
2024
ConStat: Performance-Based Contamination Detection in Large Language Models
Accept (poster)
Summary: Authors propose a new definition of contamination: "artificially inflated and non-generalizing benchmark performance" rather than "the inclusion of benchmark samples in the training data". They develop ConStat, a statistical method that reliably detects and quantifies contamination by comparing performance between a primary and reference benchmark relative to a set of reference models. They demonstrate the effectiveness of ConStat through an evaluation of diverse model architectures, benchmarks, and contamination scenarios, and find high levels of contamination in multiple popular models. Strengths: • A new method to detect benchmark contamination from a new perspective • Great performance on syntax- and sample-specific contaminated models • Good motivation, presentation and organisation Weaknesses: • The paper uses reference datasets that are synthetically composed; the authors briefly comment on this in the limitations. More elaboration on this is needed, since it is one of the bases of the experiments. Authors could potentially estimate by human analysis what the expected error of their approach is. • Benchmark-specific contamination is complex and is not covered in enough depth. Technical Quality: 2 Clarity: 2 Questions for Authors: • There is a short paragraph explaining results on benchmark-specific contamination, could authors elaborate more? How does it compare to other methods? • Some statements need clarification, e.g. “MathQA is a multiple-choice benchmark that requires the model to answer immediately and therefore gives no room for this reasoning ability to shine.” -> what does it mean that it requires the model to answer immediately?
Could authors further support “no room for this reasoning ability to shine”? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: • There is a section on exposing the limitations. Maybe the authors could further comment on the limitation of experiments due to computational complexity. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed review and insightful questions. We are happy to hear that they found our motivation, presentation, and organization very good, and the performance of our method great. Below, we address their questions. **Could you elaborate more on the effect of the synthetic nature of the synthetic data on your approach?** We first note that the low quality of the synthetic datasets would not significantly affect the predictions made by ConStat. For instance, the inclusion of a sample with an incorrect label would lead to incorrect answers by all models. The hardness correction function would adjust for this incorrectness automatically, without affecting the contamination prediction. Furthermore, we performed extensive manual checks to ensure high data quality and fidelity of the synthetic data. Specifically, we followed this procedure: - We first manually tested around 10 samples for each benchmark with several system prompts and checked the output quality of the model. We adjusted the prompts until we were satisfied with the quality of these 10 samples. - We then conducted a manual check of around 100 samples for each benchmark to identify common mistakes and ensure overall data quality. This process helped us detect and exclude some synthetic samples in the GSM8k dataset that did not have integer answers. - We performed deduplication by computing the samples with the highest 1-gram overlap for each synthetic sample. We manually checked around 50 samples and their 1-gram overlaps to set the threshold for excluding a sample. We used a similar approach to check whether a synthetic sample was included in the original test set. **Could you elaborate more on benchmark-specific contamination? How does it compare to other methods?** Benchmark-specific contamination is the most generic and least problematic type of contamination. 
As we explain in Section 2.2, it occurs when a model fails to generalize to benchmarks that measure performance on the same task as the original benchmark. This implies that a model is only effective on specific questions or formats in the original benchmarks. Users should be cautious when applying this model to real-world scenarios, as its performance may not be consistent across the entire task. However, we note that this inconsistency can arise for reasons other than information-flow-based contamination. For instance, a focus during training on the particular type of questions in the original benchmark can cause benchmark-specific contamination. We acknowledge this in the paper and are careful in drawing major conclusions from this type of contamination. We will clarify this point in the paper to ensure that readers understand the necessary caution in interpreting results regarding benchmark-specific contamination and the conclusions that can be drawn from it. **Could you clarify the statement “MathQA is a multiple-choice benchmark that requires the model to answer immediately and therefore gives no room for this reasoning ability to shine.”?** This statement refers to our use of the LM Evaluation Harness, a popular framework for language model evaluation. The LM Evaluation Harness implements MathQA as a multiple-choice benchmark, where the framework measures the perplexity of the answer options and selects the option with the lowest perplexity. In contrast, for GSM8k, the framework extracts the answer from a free-form response created by the language model. We believe the Phi model family performs better in free-form evaluations than in multiple-choice evaluations, leading to the conclusion in the statement. This distinction is unrelated to computational complexity since we would have evaluated MathQA using free-form answers if the LM Evaluation Harness implemented the benchmark like this. 
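The multiple-choice evaluation mode described above can be sketched in a few lines. This is an illustrative stand-in, not the LM Evaluation Harness's actual code, and the per-token log-probabilities are hypothetical numbers that would normally come from the model being evaluated.

```python
import math

def pick_by_perplexity(option_logprobs):
    """Select the answer option whose text has the lowest perplexity.

    option_logprobs: one list of per-token log-probabilities per answer
    option, as scored by the model under evaluation (hypothetical values
    in the example below).
    """
    def ppl(logprobs):
        # Perplexity = exp of the negative mean log-likelihood.
        return math.exp(-sum(logprobs) / len(logprobs))
    return min(range(len(option_logprobs)), key=lambda i: ppl(option_logprobs[i]))

# Option 1 has the highest average log-likelihood, hence the lowest perplexity.
print(pick_by_perplexity([[-2.0, -2.0], [-0.5, -0.5], [-1.5, -0.9]]))  # 1
```

The contrast with free-form evaluation is that here the model never generates reasoning tokens; it only scores pre-written options, which is the sense in which it must "answer immediately".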
**Could you comment on the computational complexity of your method, e.g., in the context of the prior statement?** As explained above, the computational complexity is not related to the multiple-choice evaluation of MathQA. However, ConStat does require evaluating several models on multiple benchmarks, which entails some computational complexity. Despite this, the total cost of our entire evaluation, amounting to a couple of thousand dollars, is negligible compared to the cost of training a model and serving it via an inference API. Therefore, we believe this does not pose a serious limitation, as our method can be applied by any organization with reasonable funds. We hope our answers address all of the reviewer’s concerns, remain happy to answer any further questions, and look forward to the reviewer’s response.
Summary: The paper proposes a performance-based approach to detect data contamination in large language models. First, a set of reference models is evaluated on the original benchmark and a proxy of it. Next, a difficulty correction function is fitted to find the relationship between the performance from the original benchmark and the proxy benchmark. Then, the performance of a target model is evaluated on the proxy benchmark, and using the correction function, the expected performance on the original benchmark is predicted. Lastly, the difference between the actual performance on the original benchmark and the expected performance is computed and checked for significance to label the target model as contaminated. Strengths: 1. The paper is well-written. 2. The experiments are comprehensive. 3. The proposed method can be used with both open-weight and closed-weight models, which is important. Weaknesses: 1. The main weakness of the proposed method is that it identifies data contamination based on changes in performance. Performance can change for several reasons, so it cannot be solely the indicator of contamination. In fact, the method assumes that any performance increase means contamination, leading to a high rate of false positives in situations where performance improvements come from other sources. For example, this method incorrectly detects contamination when performance improves due to unsupervised domain adaptation while it is not. 2. This method can only be applied to models that are similar to the reference models, especially in terms of size, architecture, and pre-training data, as it detects contamination with respect to these reference models. This limits its generality and makes it unsuitable for models that differ from the reference set. For example, if a target model just generalizes better than the reference models, the method incorrectly identifies this as contamination. 
Conversely, a contaminated model can be deemed uncontaminated if it cannot translate the contamination into improved performance, e.g., when a model is contaminated with a dataset but cannot follow instructions very well. Therefore, the proposed method measures the lack of generalization relative to the reference models rather than actual contamination. In short, this method does not guarantee whether a model has seen the datasets or not. 3. Building on the previous comment, this method does not actually measure contamination; it captures situations where excessive memorization has replaced generalization. In fact, the scenarios discussed in Section 2.2 **are not types of contamination**. Instead, they are examples of excessive memorization, which the method captures. 4. The results of the proposed method are relative, not absolute. In this method, detection is only meaningful when compared to the reference models. So, if the reference models are contaminated, the target model will still appear uncontaminated. Technical Quality: 2 Clarity: 4 Questions for Authors: **Question:** 1. In Table 1, what is the difference between "Shi et al. [40]" and "Shi [39]"? **Comment:** 1. Lines 11-14 and 343-345: Your method does not quantify/measure contamination. Confidence: 4 Soundness: 2 Presentation: 4 Contribution: 2 Limitations: 1. The outcomes cannot be interpreted individually. Specifically, detection heavily depends on the reference models and how they generalize/behave. This means that better generalization or steerability in models can be mistaken for contamination. Also, if the reference models are contaminated, this contamination can spread to the target models and go undetected. 2. The proposed method does not detect contamination, as none of the scenarios studied in Section 2.2 involve contamination. Instead, the method captures the lack of generalization or situations with excessive memorization. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
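The pipeline described in this review's summary (evaluate reference models on the original and proxy benchmarks, fit a difficulty correction, predict the target's expected performance, and test the gap for significance) can be sketched as follows. This is a simplified stand-in, not ConStat's actual implementation: it uses a plain linear fit and a bootstrap over reference models, and all names and numbers are illustrative.

```python
import random

def fit_line(xs, ys):
    """Least-squares line mapping proxy-benchmark to original-benchmark accuracy."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    denom = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / denom
    return slope, my - slope * mx

def contamination_test(ref_proxy, ref_orig, target_proxy, target_orig,
                       n_boot=2000, seed=0):
    """Return (performance gap, one-sided bootstrap p-value).

    A large positive gap with a small p-value suggests the target model does
    better on the original benchmark than its proxy performance predicts.
    """
    rng = random.Random(seed)
    slope, intercept = fit_line(ref_proxy, ref_orig)
    gap = target_orig - (slope * target_proxy + intercept)
    gaps, n = [], len(ref_proxy)
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        xs = [ref_proxy[i] for i in idx]
        ys = [ref_orig[i] for i in idx]
        if len(set(xs)) < 2:  # degenerate resample, cannot fit a line
            continue
        s, b = fit_line(xs, ys)
        gaps.append(target_orig - (s * target_proxy + b))
    p_value = sum(g <= 0 for g in gaps) / len(gaps)
    return gap, p_value

# Hypothetical reference models whose proxy and original accuracies agree; a
# target scoring 0.70 on the original benchmark but 0.50 on the proxy shows a
# significant positive gap.
ref_proxy = [0.20, 0.30, 0.40, 0.60, 0.70, 0.80]
ref_orig  = [0.22, 0.29, 0.41, 0.58, 0.71, 0.79]
gap, p = contamination_test(ref_proxy, ref_orig, 0.50, 0.70)
print(round(gap, 2), p < 0.05)
```

The real method additionally handles the choice of correction function and the statistical details more carefully; this sketch is only meant to make the shape of the test concrete.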
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed review and insightful questions. We are pleased to hear that they found our paper well-written, our experiments comprehensive, and the applicability of our method to closed-source models interesting. Below, we address their questions. **Should contamination be viewed from a perspective of performance generalization?** Yes, we argue that information-flow-based contamination can be viewed from this perspective for two main reasons. First, in benchmark evaluation, the only relevant consequence of information-flow-based contamination is an artificial increase in performance on benchmarks. Although there are other areas of influence, such as detecting the use of copyrighted data, our focus lies specifically on its effect on performance measurements. Given the large datasets involved in training LLMs, some level of information-flow-based contamination is almost inevitable. Therefore, any model can be said to show some level of information-flow-based contamination. Thus, a detailed contamination report that includes its effect on benchmark performance is more interesting and easily obtainable using ConStat. Second, syntax- and sample-specific contamination imply information-flow-based contamination. As we argue in Q1 of our main reply, a model showing syntax- or sample-specific contamination can distinguish between two benchmarks drawn from the same or a very similar distribution. This is only possible if one of these benchmarks was seen during training. Otherwise, the model should treat the two benchmarks identically, resulting in similar performance on both. **Is it possible that performance changes for reasons other than information-flow-based contamination?** Please see our detailed answer in the main response. **Can the method only be applied to models that are similar to the reference models?** No, **none** of our experiments rely on this assumption. 
Our reference models encompass a range of architectures, sizes, and pre-training data. Therefore, our results demonstrating the accurate detection and estimation of performance-based contamination in Section 4.2 do not rely on similarity assumptions. This robustness also allows us to detect contamination in various model architectures and sizes in Section 4.3 and reproduce contamination results from some model providers without access to their training data. **What kind of contamination does the method measure? Does it measure excessive memorization rather than contamination?** Yes, our method redefines contamination based on its influence on performance, which can also be seen as excessive memorization. As argued above and in the paper, this is a key effect of information-flow-based contamination, and the only aspect worth measuring for the purpose of benchmark integrity. While ConStat might miss information-flow-based contamination that does not affect performance, this is by design, as such contamination has no practical consequences on benchmark integrity. **Would the method incorrectly detect contamination when the target model’s performance generalizes worse than the reference models due to reasons other than contamination?** For uncontaminated models, this should only occur when detecting benchmark-specific contamination. As previously argued, syntax- and sample-specific contamination imply information-flow-based contamination. Therefore, the absence of information-flow-based contamination also implies the absence of syntax- or sample-specific contamination. We acknowledge other factors influencing benchmark-specific contamination in the paper and are cautious in our conclusions about this type. However, measuring this contamination is important. If a model excels on specific benchmarks but performs poorly on others for the same task, it indicates problematic behavior. 
While this might not be information-flow-based contamination, it may result from a focus on a subset of the task during training because of a drive to perform well on a specific benchmark. Benchmark-specific contamination can reveal this issue and indicate how representative reported scores are with regard to overall task performance. **Is measuring contamination relative to a set of reference models sufficient for contamination detection?** Please see our detailed answer in the main response. **In Table 1, what is the difference between "Shi et al. [40]" and "Shi [39]"?** These are two different methods by the same authors. Shi et al. [40], known as TopKMin, measure the perplexity of each token in the answer, retain the k% tokens with the highest perplexities, and average these to obtain a contamination measure. In contrast, Shi [39] generates 30 alternative completions of the first half of each question in the benchmark using an uncontaminated base model. It then measures the perplexities of these completions with the target model, and contamination is detected by the percentile of the actual answer's perplexity among these completions. If this percentile is consistently low, the model is deemed contaminated. We hope our answers address all of the reviewer's concerns, remain happy to answer any further questions, and look forward to the reviewer's response. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their rebuttal. While the rebuttal somewhat addressed some of my concerns, I still believe the method's applicability is limited due to its reliance on a set of reference models. Therefore, I will maintain my score. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for engaging in the discussion and are happy to hear that we could address some of their points. Could the reviewer explain in what scenarios they believe ConStat's applicability to be limited by the required set of reference models?
In particular, we have demonstrated that a lack of generalization will not lead to falsely detected contamination, as mentioned in the reviewer's response. Furthermore, we believe that the reproduction of the contamination numbers of Llama-2-70b from the original paper without access to the training data demonstrates that this limitation is minimal, if not negligible.
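The TopKMin statistic described in the rebuttal above can be sketched in a few lines. This is illustrative only: the calibration and decision threshold used in [40] are omitted, and the per-token log-probabilities are hypothetical.

```python
def top_k_min(token_logprobs, k=0.2):
    """Average the fraction k of tokens with the lowest log-probability
    (equivalently, the highest perplexity), as in the TopKMin description.

    Intuitively, if even the model's least-likely tokens in a benchmark
    answer are still assigned high probability across many samples, the
    answer may have been memorized.
    """
    n = max(1, int(len(token_logprobs) * k))
    lowest = sorted(token_logprobs)[:n]
    return sum(lowest) / n

# Hypothetical per-token log-probabilities for one benchmark answer;
# with k=0.4, the two most surprising tokens (-3.0 and -2.0) are averaged.
print(top_k_min([-0.1, -0.5, -2.0, -3.0, -0.2], k=0.4))  # -2.5
```

This also makes the contrast with Shi [39] concrete: TopKMin scores the answer's own tokens, whereas [39] ranks the answer's perplexity against alternative model-generated completions.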
Summary: The authors introduce a definition for “contamination” based on its outcome rather than its cause, unlike many previous approaches. The authors propose ConStat, a novel method for quantifying the contamination of a model on some benchmark and demonstrate that it outperforms other methodologies for detecting contamination. Strengths: - This paper was a pleasure to read. It is both written and structured in a clear way that communicates the ideas well - The authors used ConStat to show strong contamination of some models from the Open LLM Leaderboard on specific benchmarks. This research can be used to build trust in our evaluations Weaknesses: The authors generated a synthetic version of each benchmark they used in their experiments. They described their methodology for doing so in Appendix C. I don’t believe the authors used a sufficiently rigorous method to ensure this generated dataset was sufficiently high-quality. For example, it is possible that some of their samples did not have a high 1-gram overlap but did ask similar questions. I believe a manual review of a subset of the dataset for duplication, and perhaps other traits, would be necessary to guarantee data quality. Though I view this as a minor weakness given that their methodology was successful despite potential problems with this dataset. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Why did the authors not select a reference benchmark for MMLU? 2. Do the authors believe this methodology will work well when the measured task between the benchmark and reference benchmark is similar (e.g. coding) but the "form" between the benchmark and reference benchmark is significantly different, for example, multiple choice question answering evaluation vs. open-ended agentic evaluation? 3. What further research are the authors excited to see in this area? Are the authors planning on discussing this within the paper? 
Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: The authors sufficiently address this. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed review and insightful questions. We are happy to hear that they found our paper a pleasure to read and that our research can be used to build trust in our evaluations. Below, we address their questions. **How did you ensure that the generated dataset had a sufficiently high quality? Is the deduplication process using 1-grams sufficient?** To ensure high data quality for rephrasing and synthetic sample generation, we performed the following procedure: - We manually tested around 10 samples for each benchmark with various system prompts, iteratively refining the prompts until we were satisfied with the output quality. - We performed a manual check of approximately 100 samples for each benchmark to identify common mistakes and evaluate overall data quality. This process revealed issues such as non-integer answers in some GSM8k samples, which were therefore excluded. - For deduplication, we computed the highest 1-gram overlaps between synthetic samples and manually reviewed around 50 samples to set an appropriate exclusion threshold. We used a similar method to ensure no synthetic samples were present in the original test set. It is important to note that low-quality synthetic data would not significantly impact ConStat's predictions. For instance, a sample with an incorrect label would be consistently answered incorrectly by all models. The hardness correction function would adjust for this automatically, ensuring contamination prediction remains unaffected. Furthermore, the inclusion of test set samples in the synthetic dataset would only reduce the effectiveness of ConStat: these samples would reduce the estimated performance difference and therefore make ConStat's estimates less accurate. The fact that our estimates are quite accurate suggests that our approach was sufficient to prevent the inclusion of test set samples in our synthetic data.
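The 1-gram-overlap deduplication step described above can be sketched as follows. This is a simplified stand-in with whitespace tokenization and a hypothetical threshold, not the authors' actual code.

```python
def one_gram_overlap(a: str, b: str) -> float:
    """Fraction of a's unigrams (whitespace-split tokens) that also occur in b."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(1, len(ta))

def deduplicate(synthetic, originals, threshold=0.8):
    """Drop synthetic samples whose highest 1-gram overlap with any original
    test sample exceeds a manually tuned threshold (0.8 here is hypothetical)."""
    kept = []
    for s in synthetic:
        best = max((one_gram_overlap(s, o) for o in originals), default=0.0)
        if best <= threshold:
            kept.append(s)
    return kept

synthetic = ["the cat sat on the mat", "dogs chase the red ball"]
originals = ["the cat sat on the mat today"]
print(deduplicate(synthetic, originals))  # ['dogs chase the red ball']
```

As the review notes, unigram overlap can miss paraphrases with little lexical overlap, which is exactly why the manual threshold review described above is still needed.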
**Why did the authors not select a reference benchmark for MMLU to measure benchmark-specific contamination?** MMLU is a broad benchmark designed to evaluate knowledge across diverse tasks. Benchmark-specific contamination detection requires a reference benchmark closely aligned with the target benchmark’s task. We could not find a sufficiently similar benchmark to MMLU, and including a dissimilar benchmark might lead to false positives. Therefore, we chose not to include benchmark-specific contamination detection for MMLU to maintain the accuracy of our analysis. **Do you expect your approach for measuring benchmark-specific contamination to work well when the reference benchmark uses a significantly different form than the original benchmark (e.g. multiple-choice instead of completion)?** The reviewer is correct to point out that an analysis would need to take this difference into account. For example, GSM8k (free-form) and MathQA (multiple-choice) are included in our experiments. We found that the Phi model family performed particularly badly on the MathQA dataset. This is likely due to the focus of the authors on textbook datasets during training. We believe the chain-of-thought capabilities of these models greatly increased because of this, but their ability to perform single-step mathematical computations, as required for MathQA, does not. This explains the very large discrepancy in performance between the two benchmarks. Therefore, conclusions here might not be indicative of information-flow-based contamination but do bring up a flaw in the model, i.e., its inability to perform well on mathematical operations without chain-of-thought reasoning. **What further research are the authors excited to see in this area? 
Are the authors planning on discussing this within the paper?** There are several exciting areas in data contamination that would benefit greatly from future research: - **Improving Information-Flow Based Contamination Detection**: Our method currently outperforms this traditional approach, but advancements here could provide more detailed contamination insights. For example, detecting specific contaminated samples could have applications in identifying the use of copyrighted materials. - **Extending ConStat to Multimodal Models**: Applying our approach to multimodal models, such as those involving images, would be valuable. Synthetic sample generation for complex data types like tabular images presents unique challenges, but overcoming these would increase the applicability of contamination detection. Unfortunately, due to space constraints, we were unable to discuss these topics in the NeurIPS submission and will therefore not be able to include them. We hope our answers address all of the reviewer’s concerns, remain happy to answer any further questions, and look forward to the reviewer’s response. --- Rebuttal Comment 1.1: Comment: **How did you ensure that the generated dataset had a sufficiently high quality? Is the deduplication process using 1-grams sufficient?** I accept that the procedure presented by the authors is sufficiently rigorous. I would appreciate seeing this full procedure in the paper, as it could be useful for others who wish to replicate your work. **Why did the authors not select a reference benchmark for MMLU to measure benchmark-specific contamination?** I accept your point. **Do you expect your approach for measuring benchmark-specific contamination to work well when the reference benchmark uses a significantly different form than the original benchmark (e.g. multiple-choice instead of completion)?** I accept your point. **What further research are the authors excited to see in this area? 
Are the authors planning on discussing this within the paper?** Interesting to hear about these ideas! --- Reply to Comment 1.1.1: Comment: Thank you for the positive reply! We will include the full description of our dataset generation process in the experimental details of the paper. We further note that we have included the full code-base in the supplementary material and plan to release it upon publication to ensure reproducibility.
Summary: This paper targets the problem of contamination detection of LLMs by proposing ConStat, a performance-based statistical approach. - Instead of detecting the inclusion of test samples as contamination, the authors define contamination as "abnormal" performance on benchmarks. - ConStat builds on this definition and leverages reference models and synthetic benchmarks for statistical testing at syntax-, sample-, and benchmark-specific levels. - ConStat is empirically verified as effective. The authors also provide comprehensive analyses. Strengths: - The definition of contamination from a performance-based perspective is novel and interesting to my knowledge. - Based on the definition, the proposed ConStat is intuitive and looks effective based on experiments. The experiments and analyses are comprehensive and solid. - Contamination detection is timely and important in the current community. - The paper is well-written and a joy to read. Weaknesses: - As the paper discussed in Section 6, it can only estimate the contamination relative to reference models. - It would be a little costly to construct a synthetic dataset using closed-source LLMs each time we want to detect contamination of a specific benchmark. - This paper omits some related works that should be discussed. I just list a few below. [1] Yang, Shuo, et al. "Rethinking benchmark and contamination for language models with rephrased samples." arXiv preprint arXiv:2311.04850 (2023). [2] Jiang, Minhao, et al. "Investigating data contamination for pre-training language models." arXiv preprint arXiv:2401.06059 (2024). Technical Quality: 3 Clarity: 3 Questions for Authors: - According to lines 1118-1120, it seems that the authors only used 1000 samples for fine-tuning. How is this number determined, and will the number of chosen samples affect the performance? If so, how? - The above question applies to the number of synthetic data samples. 
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors offered a limitation paragraph in Section 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed review and insightful questions. We are happy to hear that they find our paper a joy to read, our definition novel and interesting, and our experiments comprehensive and solid. Below, we address their questions. **Is measuring contamination relative to a set of reference models sufficient for contamination detection?** Please see our detailed answer in the main response. **Do we need to construct a synthetic dataset each time we want to measure contamination? If so, is this expensive?** No, we do not need to generate a new synthetic dataset each time we measure contamination. As long as our synthetic datasets remain private, they can be reused for future models. Of course, we will share these datasets with other interested parties if requested. A contamination analysis on other benchmarks would require us to create a new synthetic dataset. However, the cost of generating these datasets is relatively low compared to the total inference cost of model evaluation. Specifically, generating the synthetic dataset was only around five times as expensive as a single model evaluation. Since we evaluate 50 models on all benchmarks, the total cost of generating the synthetic dataset is much lower than the total inference cost of model evaluation. Finally, specifically for rephrasing, this task can be performed by cheaper open-weight models, further reducing costs. **Why have you not included citations to some recent works like Yang et al. and Jiang et al.?** Please see our detailed answer in the main response. **How does the number of samples in the synthetic dataset affect ConStat’s performance?** We choose the number of synthetic samples based on the trade-off between tight confidence bounds and computational budget. Computational complexity increases linearly with the number of samples, while the size of confidence intervals decreases. 
We found that 1000 samples provide tight confidence bounds and allow us to evaluate over 50 models within our budget. **Similarly, how does the number of samples used for fine-tuning affect results?** We again chose the number of fine-tuning samples in our ablation study and in the comparison to other methods based on the trade-off between tight confidence bounds and computational cost. While using more samples (a larger portion of the dataset) would yield tighter confidence bounds, it would also increase the cost of fine-tuning. We hope our answers address all of the reviewer’s concerns, remain happy to answer any further questions, and look forward to the reviewer’s response.
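For intuition on the sample-count trade-off discussed above, the width of a standard binomial confidence interval shrinks roughly as $1/\sqrt{n}$. The sketch below uses a generic Wilson interval, not the paper's exact bound:

```python
import math

def wilson_ci_width(p_hat: float, n: int, z: float = 1.96) -> float:
    """Width of the Wilson 95% confidence interval for an accuracy
    p_hat estimated from n benchmark samples."""
    denom = 1.0 + z * z / n
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n
                                   + z * z / (4.0 * n * n))
    return 2.0 * half
```

At an accuracy of 0.7, going from 100 to 1000 samples shrinks the interval by roughly a factor of three, after which additional samples buy comparatively little precision per unit of inference cost.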
Rebuttal 1: Rebuttal: $\newcommand{R}{\textcolor{green}{E2e6}}$ $\newcommand{S}{\textcolor{blue}{tnxc}}$ $\newcommand{T}{\textcolor{purple}{hKkW}}$ $\newcommand{X}{\textcolor{red}{UkYs}}$ $\newcommand{Y}{\textcolor{brown}{xz9Q}}$ We thank the reviewers for their detailed reviews and insightful questions. We are pleased to hear that they found our paper well-written ($\R,\S,\T,\X$), our new definition of contamination novel and interesting ($\Y,\R,\X$), and our experiments comprehensive and solid ($\Y,\R,\S,\T$). We identified several questions shared among reviewers that we address here. Reviewer-specific questions are answered in the respective responses. **1. Does an abnormally high performance on your reference benchmark always imply information-flow-based contamination on the actual benchmark? ($\Y,\T$)** Yes, we argue that in the case of syntax- and sample-specific contamination, an abnormally high performance always implies information-flow-based contamination. To demonstrate this, let us first assume that our synthetic benchmarks are drawn from the same distribution as the original benchmark. A non-contaminated model should achieve a similar performance on both benchmarks. Indeed, if the model obtains a much higher score on one benchmark, we could distinguish between samples from the synthetic and original benchmark with better-than-chance accuracy, contradicting the assumption that the samples are drawn from the same distribution. Therefore, a model that shows abnormally high performance on one of the benchmarks has to be contaminated with that benchmark. Our synthetic datasets are designed to closely approximate the original benchmark distribution. Figure 2 shows that this approximation is highly accurate, as we can predict the exact performance difference between the original benchmark and another benchmark known to be drawn from the same distribution. Hence, syntax- and sample-specific contamination imply information-flow-based contamination. 
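The statistical core of this argument can be illustrated with a pooled two-proportion z-test: if both benchmarks share a distribution, a model's accuracies on them can differ only within sampling noise, so a large gap is detectable. This is a simplified stand-in of our own devising, not ConStat's actual test, which additionally corrects for benchmark hardness:

```python
import math

def two_proportion_z(p_orig: float, n_orig: int,
                     p_synth: float, n_synth: int) -> float:
    """z-statistic for H0: the model's true accuracy is identical on
    the original and synthetic benchmarks (pooled variance)."""
    pooled = (p_orig * n_orig + p_synth * n_synth) / (n_orig + n_synth)
    se = math.sqrt(pooled * (1.0 - pooled)
                   * (1.0 / n_orig + 1.0 / n_synth))
    return (p_orig - p_synth) / se
```

For example, 80% vs 60% accuracy on 1000 samples each gives z ≈ 9.8, overwhelming evidence against the equal-distribution hypothesis, while identical accuracies give z = 0.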
In contrast, for benchmark-specific contamination, the reviewers are correct to point out that differences in performance do not always indicate information-flow based contamination. We acknowledge this in our paper and draw more cautious conclusions regarding this type of contamination. However, measuring benchmark-specific contamination remains important. Common practice often involves evaluating performance on only one or two benchmarks for a specific task, such as mathematics or coding. If a model excels on these benchmarks but performs poorly on others measuring the same task, we argue that this indicates problematic behavior. While this might not be information-flow-based contamination, it may result from a focus on a subset of the task during training because of a drive to perform well on a specific benchmark. Benchmark-specific contamination can reveal this issue and indicate how representative reported scores are with regard to overall task performance. **2. Why have you not included citations to some recent works like Yang et al. and Jiang et al.? ($\Y,\R$)** The papers mentioned are part of a large subfield in contamination detection for LLMs that assumes access to training data. This subfield analyzes contamination and develops efficient algorithms to detect benchmark samples in the extensive training datasets of current LLMs. However, these works strongly assume access to training data. This makes them only relevant for model providers since training data is rarely shared, even for open-weight models. In contrast, ConStat does not make this assumption. Furthermore, these works rely on additional assumptions about the training data to measure the influence of the contamination on performance. For instance, they cannot determine the contamination's impact on performance if all benchmark samples are included in the training data. 
We were aware of this research, including both works, but did not discuss it due to its strong assumptions and inability to quantify contamination. Since two reviewers highlighted these works, we will update the paper to discuss them in our Related Work section, noting that they require access to the training data while ConStat does not. **3. Is measuring contamination relative to a set of reference models sufficient for contamination detection? ($\R,\T$)** Yes, as the primary goal of benchmarking is to compare the performance of different approaches/models accurately. Using these results for model selection and to track progress in the field only requires a relative performance measurement. For instance, a 5% performance increase across all models due to contamination will leave the model rankings untouched. Accurate absolute performance estimates primarily enable comparisons with a gold standard, such as human performance. While valuable, this is not the main goal of benchmarks, as can be seen from the fact that human performance is mostly ignored and sometimes not even known. Thus, we argue that relative contamination measurement is almost as valuable as absolute measurement and worth pursuing.
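The ranking-invariance point can be verified directly: a uniform score inflation across all models leaves the induced ranking unchanged. A toy check with made-up model names and scores:

```python
def ranking(scores):
    """Model names sorted by descending benchmark score."""
    return [name for name, _ in sorted(scores.items(),
                                       key=lambda kv: -kv[1])]

scores = {"A": 0.62, "B": 0.71, "C": 0.55}
shifted = {m: s + 0.05 for m, s in scores.items()}  # uniform 5% inflation
```

Since a constant shift is monotone, `ranking(scores)` and `ranking(shifted)` are identical, which is why relative measurement suffices for model selection.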
NeurIPS_2024_submissions_huggingface
2024
Summary: The paper introduces a new performance-based definition of data contamination, shifting the focus from the cause of contamination to its effect on performance. The paper also presents ConStat, a statistical method that detects and quantifies contamination by comparing performance on primary and reference benchmarks using a set of reference models. The effectiveness of ConStat is demonstrated through extensive evaluations on diverse model architectures, benchmarks, and contamination scenarios, revealing significant contamination in several popular models. Strengths: 1. The motivation for proposing a new definition of contamination is clear and easy to understand. 2. The experiments are extensive and demonstrate the effectiveness of the proposed method. They show that ConStat can outperform previous methods on contamination detection and is effective for contamination quantification. 3. The paper extends the analysis to current model families to show that these models are contaminated to different degrees on current benchmarks. This observation largely matches other papers' findings, suggesting that these methods can potentially be used for contamination examination of LLMs in future applications. Weaknesses: 1. The assumptions in the paper are too strong for me. I don't think a difference in performance across benchmarks means that there would be contamination. Abnormally high performance on one dataset but normal performance on other benchmarks only implies a higher possibility of contamination. 2. On line 152, the authors mentioned "additionally include an inherently uncontaminated random-guessing model". I wonder if more details could be provided about this. 3. I think the descriptions and writing in the methodology section should be rewritten to provide more details and motivation for how the method is developed. The current version is somewhat hard to follow and lacks explanations. 4. Missing citations to many recent works. 
E.g., https://arxiv.org/abs/2311.04850 (Yang et al.), https://arxiv.org/abs/2401.06059 (Jiang et al.), etc. Technical Quality: 3 Clarity: 2 Questions for Authors: See the weaknesses above. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes, the authors included a limitations section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed review and insightful questions. We are pleased to hear that they appreciate the extensive experiments and find the new definition clear and easy to understand. Below, we address their questions. **Does an abnormally high performance on your reference benchmark always imply information-flow based contamination on the actual benchmark?** Please see our detailed answer in the main response. **What is the random-guessing model that you add to the set of reference models?** The random-guessing model randomly guesses the answer to a query. It serves as an equivalent to including a very poor model in the reference set (e.g., gpt2-small). For open-form questions, this model's answers are always incorrect, resulting in 0% accuracy. For multiple-choice questions, it has a 1/k probability of being correct, where k is the number of options. Adding this model regularizes the hardness correction function's fit, ensuring it remains valid outside the score range of the other reference models. Figure 3f in Appendix B illustrates the impact of excluding the random-guessing model: without it, the hardness correction function overfits the reference models and fails to detect contamination accurately for weaker models. Including the random-guessing model prevents this issue. **Could you provide more details on the motivation for how the method is developed?** Certainly! In the methodology section, our goal is to develop a method that compares model performance across two related benchmarks and detects significant performance differences. Ideally, we would compute the performance of the model on both benchmarks and directly compare these results. We would like to conclude that a model is contaminated if it scores high on one benchmark but poorly on the other. However, this straightforward approach is problematic because benchmarks often vary in difficulty, making direct performance comparisons unreliable. 
To address this, we introduce the hardness correction function, which maps performance from one benchmark to the corresponding expected performance on the other. This adjustment allows for performance comparisons, corrected for benchmark difficulty. The other parts of the section describe how to transform this method into a statistical test to calculate p-values for contamination. We note that all other reviewers found our paper well-written, with some even describing it as a joy to read. Thus, we believe our paper's structure and explanations are clear and precise. However, if the reviewer has further questions, we are more than happy to provide additional clarification. **Why have you not included citations to some recent works like Yang et al. and Jiang et al.?** Please see our detailed answer in the main response. We hope our answers address all of the reviewer’s concerns, remain happy to answer any further questions, and look forward to the reviewer’s response.
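As a toy illustration of the hardness-correction idea (ConStat's actual correction is a fitted function embedded in a statistical test, not this interpolation; all names and numbers below are our own illustrative assumptions), one can map a model's reference-benchmark score to its expected original-benchmark score by interpolating through the reference models' score pairs, with a `(0.25, 0.25)` point playing the role of the random-guessing model for 4-option multiple choice:

```python
def fit_hardness_correction(ref_pairs):
    """Monotone piecewise-linear map from a model's score on the
    reference benchmark to its expected score on the original one,
    interpolated through the reference models' score pairs."""
    pts = sorted(ref_pairs)

    def predict(x: float) -> float:
        if x <= pts[0][0]:
            return pts[0][1]
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if x <= x1:
                t = (x - x0) / (x1 - x0) if x1 > x0 else 0.0
                return y0 + t * (y1 - y0)
        return pts[-1][1]

    return predict

# Reference models plus a random-guessing anchor to regularise
# the fit at the low end of the score range.
predict = fit_hardness_correction([(0.25, 0.25), (0.5, 0.55), (0.8, 0.85)])
```

Under this toy fit, a model scoring 0.65 on the reference benchmark is expected to score 0.70 on the original one; an actual score far above that expectation would be the contamination signal the test looks for.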
DEFT: Efficient Fine-tuning of Diffusion Models by Learning the Generalised $h$-transform
Accept (poster)
Summary: The authors propose a method for conditional generation using diffusion models, named DEFT. The idea is to combine a fixed, pre-trained unconditional model with an additionally learned conditional correction term to generate conditionally. The authors provide extensive experiments to demonstrate the effectiveness of DEFT. Strengths: - The paper is well-written and easy to follow. - The proposed method is simple, easy to understand, and should be easy to implement, making it potentially very useful for practical applications. Additionally, DEFT only requires inference of the pretrained model, eliminating the need to fine-tune or differentiate through it. - The authors provide theoretical motivation for their method through the Doob h-transform. - The authors conduct extensive experiments across different domains and provide an ablation study. - The code is provided. Weaknesses: - Compared to some other approaches like DPS, DEFT requires additional training. - Some image-to-image generative models (such as I2SB and DDBM, which the authors discuss in the appendix) demonstrate great performance in conditional generation. Unlike DEFT, they require fully training a generative model. Despite this difference, for completeness of comparison, it would be good to include both the resulting performance and the training budget for these models and DEFT. Technical Quality: 3 Clarity: 3 Questions for Authors: Honestly speaking, I have only one question that may affect my decision. Do you claim to be the first to come up with the idea of learning a small conditional corrector to make an unconditional model conditional? I’m not familiar with all relevant research on conditional generation. However, if so, I believe your method might be very useful for others, and the paper must be accepted. If not, I would like to see a detailed discussion of DEFT’s contributions compared to prior works. 
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback; we will incorporate the following discussion in a revised version of the manuscript. ### Q1: Do you claim to be the first to come up with the idea of learning a small conditional corrector to make an unconditional model conditional? **A1**: Thank you for your question. We appreciate the opportunity to clarify the novelty of our work. The idea of learning a conditional corrector to make an unconditional model conditional has indeed been explored in earlier works. In [1], Dhariwal and Nichol propose classifier guidance, utilizing a pre-trained classifier to enable conditional sampling of an unconditional diffusion model. In a different work, ControlNet [2] proposes learning a conditional corrector based on an additional dataset for text-to-image diffusion models. However, our approach introduces several novel strategies and a mathematical foundation for these fine-tuning approaches. As far as we know, DEFT is the first fine-tuning approach that learns a purely **additive** corrector ($\epsilon_\text{new} = \epsilon_\text{old} + \epsilon_\text{corrector}$), allowing for a small (in model size) corrector term applicable to a wide range of inverse problems. This includes scenarios where measurements $y$ are either real-valued or discrete (whilst [1] is strictly limited to the discrete setting). Further, DEFT is **model agnostic** and makes no assumptions on the specific implementation of the pre-trained unconditional diffusion model. This is in contrast to ControlNet, which employs a non-linear composition of the pre-trained and fine-tuned network and thus requires knowledge of (and access to) the underlying architecture. In DEFT, we only need to be able to evaluate the pre-trained unconditional diffusion model, making our approach applicable even in cases where the pre-trained model is hidden behind an API. 
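The additive, black-box composition $\epsilon_\text{new} = \epsilon_\text{old} + \epsilon_\text{corrector}$ can be sketched as follows; the toy callables stand in for the actual networks and are purely illustrative:

```python
def make_conditional(eps_uncond, eps_corrector):
    """DEFT-style composition: the conditional noise prediction is the
    frozen unconditional prediction plus a small learned corrector.
    `eps_uncond` can be an opaque API call -- only forward evaluation
    is needed, never its internals or gradients."""
    def eps_cond(x_t, t, y):
        return eps_uncond(x_t, t) + eps_corrector(x_t, t, y)
    return eps_cond

# Toy scalar stand-ins for the two networks:
eps_uncond = lambda x_t, t: 0.5 * x_t          # frozen, pretrained
eps_corr = lambda x_t, t, y: 0.1 * (y - x_t)   # small, trainable
eps_cond = make_conditional(eps_uncond, eps_corr)
```

Because the composition is purely additive, the corrector can be parametrised independently of the pretrained architecture, which is what makes the approach model agnostic.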
Further, DEFT **only adds 3-10% of additional parameters**, making it more parameter-efficient and making it possible to store multiple fine-tuning networks for different tasks. In contrast, ControlNet adds about 30-40% additional parameters. ### Q2: DEFT requires additional training **A2**: We acknowledge that DEFT requires additional training, which in turn requires an additional dataset and an additional training budget. This is a common characteristic of many fine-tuning methods. However, as we discussed in the overall response, we see that even when training with 100 or 200 images, we can get quite good results for inpainting on ImageNet. Further, after the initial training phase, sampling is more efficient compared to approaches such as DPS or RED-diff. Please also see our discussion in the overall response. ### Q3: Comparison against conditional trained image-to-image generative models **A3**: We were able to compare DEFT against I2SB [3]. Here, we used the pre-trained checkpoints on their GitHub for `sr4x-pool` and `inpaint-freeform2030`, which directly correspond to our super-resolution and inpainting settings on ImageNet. The results can be found in the 1-page PDF and in the tables below. On inpainting, I2SB outperforms DEFT on all metrics, for example top-1 accuracy (71.7% vs 74.5%) and PSNR (22.18dB vs 23.26dB). Overall, the results are comparable even though I2SB was trained on the complete ImageNet training set, and on super-resolution we are even able to outperform I2SB on PSNR and SSIM. 
| Inpainting | PSNR (↑) | SSIM (↑) | LPIPS (↓) | top-1 (↑) | KID (↓) | |------------|-------|------|-------|-------|-------| | DEFT | 22.18 | 0.85 | 0.09 | 71.7 | 0.29 | | I2SB | 23.26 | 0.86 | 0.068 | 74.5 | 0.238 | | Super-resolution | PSNR (↑) | SSIM (↑) | LPIPS (↓) | top-1 (↑) | KID (↓) | |------------------|-------|------|-------|-------|-------| | DEFT | 24.92 | 0.71 | 0.12 | 71.9 | 1.78 | | I2SB | 23.95 | 0.64 | 0.11 | 71.6 | 0.004 | The conditionally trained generative models can be seen as an upper limit on the image quality of fine-tuned models, as they are generally trained on a large dataset. For example, the I2SB models were trained for 1M gradient steps on the full ImageNet training dataset, while DEFT was only trained on a subset of 1000 images. In addition, we present a comparison with conditional diffusion models in Appendix H.2 (see Table 7) using the Flowers dataset. Our results show that, with identical training budgets and dataset sizes, DEFT surpasses the conditional model [4] on several image restoration tasks. However, when provided with a substantially larger dataset (6 times the size) and a significantly greater compute budget (20 times more), the conditional diffusion model outperforms DEFT. Note that in many situations (such as medical imaging) the available fine-tuning dataset may be too small to effectively train a conditional diffusion model from scratch. [1] Dhariwal and Nichol, Diffusion models beat GANs on image synthesis, NeurIPS 2021. [2] Zhang et al., Adding conditional control to text-to-image diffusion models, IEEE CVPR 2023. [3] Liu et al., I2SB: Image-to-Image Schrödinger Bridge, ICML 2023. [4] Batzolis et al., Conditional Image Generation with Score-Based Diffusion Models, arXiv preprint 2021. --- Rebuttal Comment 1.1: Comment: I thank the authors for their comprehensive clarifications and additional comparisons. I will maintain my score. My final recommendation will depend on the entire discussion. 
--- Reply to Comment 1.1.1: Title: Thank you for your response! Comment: We thank the reviewer for their prompt follow-up and helpful discussion. To aid the discussion further, we would like to highlight that we have conducted a more extensive literature survey and can conclude that our method is uniquely novel in its scalability and the design of the conditional model, which is very different from existing techniques that suffer from a litany of issues. In short, to clarify and answer the reviewer's question, we would like to highlight that: ``` We believe our approach is the first method that enables learning a small corrector network for real valued inverse problems. ``` Due to our corrector’s additive nature and inductive biases, it allows for parametrising **very small and efficient correctors**, unlike approaches such as ControlNet, which do not offer a significant enough parameter reduction. For example, we are able to achieve much better performance with as little as 3-10% of the original model’s size compared to ControlNet’s 40%. We would also like to point out that prior well-known approaches such as classifier guidance _do not apply_ when considering real-valued inverse problems, which is the main task we tackle in the formulation we propose. We hope this shows how our approach is valuable to the community and that it is the first highly efficient learned corrector for general inverse problems, as also highlighted by reviewer **no8y**: `I believe that this work could have significant impact`. As before, we thank the reviewer for their input, and please let us know if there are any other questions which can help aid the discussion. P.S. We have added all the additional experiments requested by the reviewer. Let us know if there are any additional comparisons we can perform in order to strengthen the presentation of the paper for the reviewer’s consideration.
Summary: The authors propose a novel conditional diffusion sampling strategy for solving inverse problems. Previous conditional diffusion-based inverse solvers are heuristically motivated, lack a unifying framework, and suffer from sensitivity to hyperparameters and heavy computation of the Jacobian of the trained score network. The proposed method involves fine-tuning of a small network to quickly learn the conditional $h$-transform, enabling conditional sampling without altering the large unconditional network. The authors demonstrated the efficiency of their method by achieving SOTA performance across various benchmarks with faster inference times. Strengths: ### Theoretical Foundation This paper is built on solid mathematical theory, specifically “Doob’s $h$-transform,” which enhances our understanding of diffusion models and paves the way for future improvements in their applications. ### Comprehensive Experiments The authors conducted extensive experiments on their method, covering both linear and nonlinear inverse problems across domains such as natural images, medical images, and conditional protein design. They provide clear evaluation metrics and inference times, supporting their claims about the method’s efficiency. Furthermore, the network architecture is derived from theoretical principles, enabling systematic improvements. Additionally, they propose an extra loss function for fine-tuning that is applicable without paired data, potentially inspiring future research. Weaknesses: The proposed claims are poorly presented or difficult to understand. It would be beneficial to supplement the contents of the paper by addressing the following questions. 1. How does Doob’s $h$-transform unify existing methods for diffusion-based inverse solvers? 2. What is the difference between $X_t$ and $H_t$? 3. I do not understand the rationale behind the DEFT network parametrization in Section 3.3. How is equation (13) derived from Doob’s $h$-transform? 4. 
In Line 281, it is stated that “DEFT assumes no knowledge of the forward operator.” How is this possible? To my understanding, the forward operator is needed to compute $\ln p(y|X_t)$. Errata: the Time (hrs) entry of DEFT for super-resolution in Section 5.2 should be bolded. Technical Quality: 3 Clarity: 3 Questions for Authors: See the weaknesses section. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Nothing to mention Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for this valuable feedback on the presentation - this is very helpful for clarifying our work. We have made updates to the manuscript to address the points you suggest for clarification. The error in Table 1 will be fixed in the camera-ready version. In detail: ### Q1: How does Doob’s $h$-transform unify existing methods for diffusion-based inverse solvers? A1: Doob’s $h$-transform is the formal approach to conditioning an SDE and is well established in the SDE literature; see, for example, [1]. Yet, Doob’s transform has not been discussed in, or connected to, the conditional generative modeling literature. By spelling out the mathematics explicitly, we can identify previous work as special-case approximations to the Doob’s $h$-transform term, thereby providing an underlying framework for conditional generative modeling. Specifically, we derive previous methods as Doob’s $h$-transform approximations: 1. reconstruction guidance (DPS, FreeDoM [3], MPGD [4], etc.), 2. CDE (amortized, conditional training), and 3. classifier guidance [2]. ### Q2: What is the difference between $X_t$ and $H_t$? A2: We denote with $X_t$ the unconditional diffusion process and with $H_t$ the conditional diffusion process. We will make sure to clarify this in the manuscript when $H_t$ is first introduced. ### Q3: How is equation (13) derived from Doob’s $h$-transform? A3: The network architecture in equation (13) defines an inductive bias for DEFT. As discussed, we can derive reconstruction guidance (i.e., DPS) as a special case of the $h$-transform (see Equations 30-31 in Appendix C3 of the manuscript). As the conditional update in DPS has a high computational cost, we omit the Jacobian of the unconditional diffusion model. This then serves as an inductive bias for the architecture of DEFT and is an important ingredient of the approach. 
We will make this clearer in the camera-ready version, ensuring we talk more about the intuition behind using the specific network architecture we recommend. Please also refer to Table 5 in the appendix. In this ablation we test a version of DEFT without this inductive bias, leading to a worse PSNR/SSIM for the low-dose computed tomography experiments, showing that this inductive bias improves performance. ### Q4: In Line 281, it is stated that “DEFT assumes no knowledge of the forward operator.” How is this possible? A4: Thank you for pointing out this imprecision on our part. Indeed, the forward operator is necessary to compute $p(y|x_t)$ and also to use the parameterization in Eqn. (13). What we meant to emphasize was that the training objective in Eqn. (8) does not require the likelihood. Thus, all we require for the DEFT objective is to be able to evaluate the forward operator such that we can obtain the measurement $y$, or alternatively, access to a paired dataset of images and corrupted measurements. For the DEFT architecture in Eqn. (13), we use $\nabla_{x_0} || y- A(\hat{x}_0(x_t)) ||^2$ in most cases, including cases where there is not necessarily an explicit likelihood available. This is again an inductive bias that helps guide our network in early iterations but does not require an explicit form for $p(y|x_0)$, although it does assume the forward operator can be differentiated. In cases where it cannot, one can resort to approaches such as ΠGDM [5]. To summarize, the DEFT objective **does not require explicit assumptions on the forward operator**, other than some very weak / regular assumptions for the existence of the score. We have adjusted the manuscript to reflect this. Also, here we want to point to the ablation in Table 5 in the Appendix, where we show that the architecture defined in Eqn. (13) leads to a boost in performance. However, without this inductive bias, DEFT can still work. 
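For a linear forward operator $A$, the guidance term $\nabla_{x}\|y - Ax\|^2$ evaluated at the denoised estimate has a simple closed form. The sketch below is illustrative (the toy operator and numbers are our own assumptions); for non-differentiable operators one would resort to approximations such as ΠGDM, as noted above:

```python
import numpy as np

def data_fit_grad(x0_hat, y, A):
    """Gradient of ||y - A x||^2 with respect to x, evaluated at the
    denoised estimate x0_hat, for a linear forward operator A."""
    return 2.0 * A.T @ (A @ x0_hat - y)

# Toy 2x2 example with a diagonal forward operator:
A = np.array([[1.0, 0.0], [0.0, 2.0]])
x0_hat = np.array([1.0, -1.0])
y = np.array([0.5, 1.0])
g = data_fit_grad(x0_hat, y, A)  # guidance direction at this estimate
```

The gradient vanishes exactly when the estimate is consistent with the measurement, i.e. when $A\hat{x}_0 = y$, which is the sense in which this term guides the network toward data consistency.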
[1] Särkkä and Solin, Applied Stochastic Differential Equations, Cambridge University Press, 2019. [2] Dhariwal and Nichol, Diffusion Models Beat GANs on Image Synthesis, NeurIPS 2021. [3] Yu et al., FreeDoM: Training-Free Energy-Guided Conditional Diffusion Model, IEEE CVPR 2023. [4] He et al., Manifold Preserving Guided Diffusion, ICLR 2024. [5] Song et al., Pseudoinverse-Guided Diffusion Models for Inverse Problems, ICLR 2023.

---

Rebuttal Comment 1.1: Comment: I find some aspects of the DEFT network parameterization, including the rationale behind it and the inductive bias, to be unclear. However, the strengthened experimental results demonstrate significant value in this work. Therefore, I remain positive about the acceptance of this paper and will keep my score unchanged.

---

Rebuttal 2: Title: Derivation of the DEFT architecture and inductive bias. Part I

Comment: Thank you for your response; please allow us to clarify the inductive bias and the motivation behind the DEFT network parameterization. We will provide a **short derivation** as well as **empirical evidence** motivating the architectural choice. We hope that this clarifies the reviewer's concern; if not, we would like to engage further and understand what is missing to make this clearer. In conditional sampling methods, the conditional score $\nabla_{x_t} \ln p_t(x_t | y)$ is decomposed into the unconditional score, approximated with an unconditional score model $s_\theta(x_t, t) \approx \nabla_{x_t} \ln p_t(x_t)$, and a likelihood term, i.e., $$\nabla_{x_t} \ln p_t(x_t | y) = s_\theta(x_t, t) + \nabla_{x_t} \ln p_t(y | x_t) $$ Here, the likelihood term $\nabla_{x_t} \ln p_t(y | x_t)$ is the h-transform that we aim to learn with a neural network in this work. 
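This score decomposition can be sanity-checked on a 1D Gaussian toy, where the prior, likelihood, and posterior scores are all available in closed form (all numbers illustrative):

```python
import numpy as np

# 1D Gaussian toy: prior x ~ N(0, sp2), observation y = x + n with n ~ N(0, sn2).
sp2, sn2 = 2.0, 0.5
x, y = 0.7, 1.3

prior_score = -x / sp2          # d/dx ln p(x)
lik_score = (y - x) / sn2       # d/dx ln p(y | x)

# closed-form Gaussian posterior p(x | y) = N(m, v)
v = 1.0 / (1.0 / sp2 + 1.0 / sn2)
m = v * y / sn2
posterior_score = -(x - m) / v  # d/dx ln p(x | y)

# Bayes' rule in score form: posterior score = prior score + likelihood score
assert np.isclose(posterior_score, prior_score + lik_score)
```

In the diffusion setting the prior score is replaced by the learned $s_\theta(x_t, t)$ and the likelihood score is the h-transform term above; the toy only illustrates the identity itself.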
We can express this term as an expectation of the inverse-problem likelihood with respect to the denoising posterior (as done in works such as DPS [1]): $$ \nabla_{x_t} \ln p_t(y | x_t) = \nabla_{x_t} \ln \mathbb{E}_{x_0 \sim p(x_0 | x_t)}[p(y| x_0)] $$ As a next step, we can then make a MAP-style approximation to the posterior (more precisely, approximating a posterior with a point mass at its mean rather than the MAP is known as a "Bayes Point Machine" [2]), i.e., $p(x_0 | x_t) \approx \delta_{\mathbb{E}[x_0|x_t]}(x_0)$. This results in the following: $$ \nabla_{x_t} \ln p_t(y | x_t) \approx \nabla_{x_t} \ln p(y|\mathbb{E}[x_0|x_t]) \approx \nabla_{x_t} \ln p(y| \hat{x}_0(x_t))$$ where $\mathbb{E}[x_0|x_t]$ can be estimated with Tweedie's formula and the learned approximate score (i.e., $\hat{x}_0(x_t)$). If we now initialise the h-transform neural network at $\nabla_{x_t} \ln p(y| \hat{x}_0(x_t))$, this is clearly a much better starting point than initialising it at $0$, as this term is an approximation of the h-transform and has been validated to perform well on these tasks. However, this approximate expression is prohibitive to train with, as it requires the Jacobian $\partial_{x_t}\hat{x}_0(x_t)$, which backpropagates through the score network. To mitigate this, we follow works such as DreamFusion [3], which take the gradient with respect to $\hat{x}_0(x_t)$ rather than $x_t$. This step is completely heuristic, but it has been validated empirically. 
After this, we are left with: $$ \nabla_{x_t} \ln p_t(y | x_t) \approx \nabla_{\hat{x}_0} \ln p(y|\hat{x}_0) $$ As we have already motivated conceptually, this expression approximates the h-transform, and thus it makes sense to incorporate it into our h-transform architecture as it provides a good warm start (note also that in the MCMC/sampling community this style of gradient-aided NN architecture has already demonstrated a lot of success [4]): $$\text{NN}(x_t, y, t) = \text{NN2}(x_t, y, t) + \text{NN1}(t) \nabla_{\hat{x}_0} \ln p(y | \hat{x}_0) $$ where, prior to training, NN2 is initialised to $0$ and NN1 is initialised to $1$, **such that at epoch 0 our network is initialised at this cheap approximate h-transform**: $\text{NN}(x_t, y, t) = \nabla_{\hat{x}_0} \ln p(y | \hat{x}_0) $. Empirically, we found that this initialisation gave much lower starting DEFT losses than without it and, unsurprisingly, led to faster convergence as well as better results. In a way, the architecture $ \text{NN2}(x_t, y, t) + \text{NN1}(t) \nabla_{\hat{x}_0} \ln p(y | \hat{x}_0) $ achieves the best of both worlds: cheap like conditional sampling methods, accurate like conditional training methods. It starts off with the cheap guidance term in early epochs, thereby providing a good warm start for our objective. But in later epochs, it is able to learn a more accurate approximation (without MAP-like approximations or heuristics) to the h-transform with the term $ \text{NN2}(x_t, y, t) $. Note that the term $\text{NN1}(t)$ serves as a "guidance scale"-like term. In methods like DPS [1] such terms are typically tuned on a small dataset. In contrast, we believe that the term $\text{NN1}(t)$ is particularly helpful in early iterations, as it allows the guidance term to quickly become well tuned, whilst the NN2 network uses this warm start to more slowly learn the full h-transform. 
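The initialisation described above can be sketched in a few lines. The following toy uses 1D Gaussian data with illustrative values and a stand-in linear map in place of the actual NN2 network; it only demonstrates that, at epoch 0, the parameterization reproduces the cheap guidance term exactly:

```python
import numpy as np

# Toy setup (all quantities illustrative): x0 ~ N(0, 1), x_t = x0 + sigma_t * eps,
# linear observation y = a * x0 + noise.
sigma_t, a = 0.5, 2.0
x_t, y = 0.8, 1.5

# exact marginal score for the Gaussian toy, then Tweedie's formula for E[x0 | x_t]
score = -x_t / (1.0 + sigma_t**2)
x0_hat = x_t + sigma_t**2 * score

def cheap_guidance(x0_hat, y, a):
    # DPS-style term: grad_{x0_hat} ln p(y | x0_hat) for a unit-variance Gaussian likelihood
    return a * (y - a * x0_hat)

class HTransform:
    """Sketch of the NN2 + NN1 * guidance parameterization (a stand-in, not the DEFT net)."""
    def __init__(self):
        self.w = np.zeros(3)   # NN2: toy linear map, initialised to output 0
        self.scale = 1.0       # NN1(t): initialised to 1

    def __call__(self, x_t, y, t, x0_hat, a):
        nn2 = self.w @ np.array([x_t, y, t])
        return nn2 + self.scale * cheap_guidance(x0_hat, y, a)

h = HTransform()
# at epoch 0 the network reproduces the cheap approximate h-transform exactly
assert np.isclose(h(x_t, y, 0.3, x0_hat, a), cheap_guidance(x0_hat, y, a))
```

Training would then move NN2 away from zero while NN1(t) adapts the guidance scale along the trajectory.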
For empirical evidence of this hypothesis, see Figure 6, where NN1(t) is plotted and learns guidance scales that, as expected, increase along the diffusion trajectory.

---

Rebuttal 3: Title: Derivation of the DEFT architecture and inductive bias. Part II

Comment: Please see the following table (which can also be found in our appendix as Table 6), where we carefully ablate our architectural choice and demonstrate its benefit.

| Parametrisation | PSNR | SSIM |
|---|:-------:|:-------:|
| $\text{NN2}(x, \hat{x}_0, \nabla \ln p(y \| \hat{x}_0), t) + \text{NN1}(t) \nabla \ln p(y \| \hat{x}_0)$ | $35.81$ | $0.876$ |
| $\text{NN2}(x, \nabla \ln p(y \| \hat{x}_0), t) + \text{NN1}(t) \nabla \ln p(y \| \hat{x}_0)$ | $35.74$ | $0.875$ |
| $\text{NN2}(x, \hat{x}_0, A^*y, t)$ | $34.04$ | $0.851$ |
| $\text{NN2}(x, A^*y, t)$ | $26.62$ | $0.724$ |

$\nabla = \nabla_{\hat{x}_0}$ in the table above. As you can see, the added gradient-guided inductive bias boosts the performance of the DEFT objective significantly, showing that it is a good architectural choice. We have now provided both thorough empirical and conceptual motivation for our architecture. We are happy to continue clarifying where needed. We will add a detailed derivation in the revised version of the manuscript.

[1] Chung et al., Diffusion Posterior Sampling for General Noisy Inverse Problems, arXiv preprint arXiv:2209.14687, 2022. [2] Herbrich et al., Bayes Point Machines, Journal of Machine Learning Research, 2001. [3] Poole et al., DreamFusion: Text-to-3D using 2D Diffusion, ICLR 2023. [4] Zhang and Chen, Path Integral Sampler: A Stochastic Control Approach for Sampling, arXiv preprint arXiv:2111.15141, 2021.

---

Rebuttal 4: Title: Small typo correction to - "Derivation of the DEFT architecture and inductive bias. 
Part I" Comment: Dear Reviewer ESdx, we have just made some very small typo corrections to the above derivation (Part I) detailing how we arrive at the additional term in our architecture from the h-transform (the cheap initialisation gradient we use in DEFT is with respect to $\hat{x}_0$, and there was a typo in the Part I response). We hope that this makes our derivation and conceptual motivation of the architecture clearer, and as before we thank you for your feedback and continued input.
Summary: The paper tackles the problem of utilizing generative modelling to solve inverse problems. The main highlight is that the authors develop a technique that can solve inverse problems without the need for a backward pass through the generative model, hence enabling the prior knowledge of even closed-source models to be used for solving inverse problems. Experiments are performed across multiple datasets to show the results. The results show that DEFT achieves a speed-up and performance boost over some methods. Strengths: 1. The paper introduces a novel method for conditional generation through efficient fine-tuning of a small network using Doob's h-transform. 2. The proposed method enables learning a solution for inverse problems without backpropagation through the diffusion network, hence enabling learning solutions for inverse problems even from closed-source models, since it doesn't require a backpropagation operation. 3. By bypassing the backpropagation through the diffusion U-Net, the method achieves a speed-up over existing methods. 4. The paper is well written and extensive experiments are performed across multiple tasks to validate the effectiveness of the method. Weaknesses: 1. Although the paper claims that existing baselines require backpropagation through the U-Net for solving inverse problems (Ln 37-38), this is not always the case. I think some relevant baselines like [1] Manifold Preserving Guided Diffusion and [2] FreeDoM: Training-Free Energy-Guided Conditional Diffusion Model have escaped the authors' notice. These methods do not require any backpropagation through the U-Net and hence are faster and perform better. 2. The baselines compared in the paper are old, referring to works from more than a year ago. 3. An analysis of the computational overhead caused by the fine-tuning process is missing. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. 
Could the authors give an analysis of the benefits of the method over [1,2] referred to in the weaknesses? 2. I would also like to see the computational overhead in terms of memory and time involved in the fine-tuning process. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Yes, the limitations and potential negative impacts section looks reasonable to me. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for pointing us to further baselines beyond the ones we provide that offer an interesting axis of comparison for our method.

### Q1: Comparison against FreeDoM and MPGD

**MPGD** [1]: MPGD proposes three variants: MPGD w/o projection, MPGD-AE and MPGD-Z. MPGD-Z requires a latent diffusion model and is not applicable to pixel-based diffusion. MPGD-AE requires training an additional auto-encoder and thus, similar to DEFT, requires an additional dataset and training time. Unfortunately, the codebase for MPGD does not provide a pre-trained autoencoder, and we were unable to replicate the MPGD-AE variant to match the paper. We provide additional experiments with MPGD w/o projection on ImageNet for inpainting, super-resolution and HDR. In all these settings, DEFT is able to outperform MPGD w/o projection; see the following table, where we present the KID. DEFT also outperforms MPGD w/o projection on the other metrics, see the one-page PDF. For the KID, we can see that DEFT outperforms MPGD w/o projection on both the linear (inpainting, super-resolution) and the non-linear task (HDR).

| KID (↓) | Inpainting | Super-resolution | HDR |
|----------------|------------|------------------|-------|
| MPGD w/o proj. | 3.02 | 3.693 | 3.571 |
| DEFT | 0.29 | 1.78 | 0.10 |

MPGD-AE defines an interesting conditional sampling approach. The use of an additional auto-encoder could also allow DEFT to learn a conditional update in the latent space, which could possibly speed up the training. However, MPGD-AE requires training the auto-encoder, and thus an additional dataset is required, which is often much larger than our fine-tuning dataset, as an autoencoder such as a VQ-VAE or VQGAN typically needs a large number of samples to train from scratch. 
Note that the conditional update step of MPGD w/o projection mimics the initialization of DEFT (the second term in Equation 13 of the manuscript), which can be derived by omitting the Jacobian of the unconditional diffusion model from the DPS [3] update.

**FreeDoM** [2]: FreeDoM defines a distance-measuring function D. In the context of inverse problems and image reconstruction, this is chosen as the negative log-likelihood (see also Section 3.3 and Appendix B in FreeDoM). In this case, the FreeDoM sampling scheme reduces to DPS [3]; compare line 7 in Alg. 1 of FreeDoM against line 7 in Alg. 1 of DPS. Further, FreeDoM requires backpropagation through the trained score model, and thus would have the same computational cost as DPS. As we are already presenting comparisons against DPS as our reconstruction guidance baseline, we do not benchmark against FreeDoM.

### Q2: The baselines compared in the paper are very old and refer to works from more than a year ago.

**A2:** We respectfully disagree with the reviewer on this point. While we compare against relevant, yet older, methods such as DPS (arXived Sep '22), we also compare against **RED-diff** (published at ICLR 2024 in May '24, with a corresponding complete arXiv version from Sept. '23). Given that our submission to NeurIPS (May '24) falls within a year of this timeline, we believe RED-diff does not qualify as *very old*. At most, it is a year old, and in fact, the complete version (with the experiments we compare to) has only been available for about **nine months**. We thank the reviewer for pointing us to more recent methods that we could compare against, in particular MPGD (arXived Nov '23), which we now compare against in our rebuttal experiments (c.f. the point above). We are also happy to run further experiments against other baselines released in more recent months if the reviewer has any suggestions. 
Based on suggestions by other reviewers, we have also added comparisons against further baselines such as ControlNet [5] and I2SB [4], providing a more thorough ablation (see the general response).

### Q3: Computational overhead in terms of memory and time

**A3**: In our image reconstruction experiments, we report the combined sampling and training time of DEFT in Table 1 and Table 2 of the submission; however, we only give the total time there. In the text for each experiment, we also mention the split between fine-tuning and sampling time:

> For DEFT, this computational time additionally includes the 3.9 hrs of training time of the h-transform, along with the 1.2 hrs of evaluation.

We will endeavor to make these numbers clearer in the text and the table. The size of the DEFT model is about 5-10% (depending on the task) of the size of the pre-trained unconditional model. This means that during inference, the DEFT model incurs about an additional 5-10% memory overhead, whereas baselines such as DPS incur an additional memory overhead due to needing to backpropagate through the pretrained unconditional score model. In particular, the memory cost of DPS is $O(\sum_{i=1}^L w_i)$, where $L$ is the number of layers of the unconditional model and $w_i$ is the width of layer $i$, as all activations have to be kept in memory for the backward pass. In contrast, the memory cost of DEFT during inference is just $O(\max_i w_i)$, as only the forward pass is needed.

[1] He et al., Manifold Preserving Guided Diffusion. ICLR 2024 [2] Yu et al., FreeDoM: Training-Free Energy-Guided Conditional Diffusion Model. IEEE CVPR 2023 [3] Chung et al., Diffusion Posterior Sampling for General Noisy Inverse Problems. ICLR 2023 [4] Liu et al., I2SB: Image-to-Image Schrödinger Bridge, ICML 2023. [5] Zhang et al., Adding Conditional Control to Text-to-Image Diffusion Models. 
IEEE CVPR 2023

---

Rebuttal Comment 1.1: Title: Further clarification on reported total cost of DEFT (fine-tuning + sampling) Comment: To further aid the discussion and address the computational-overhead comment raised, we would like to highlight some key points of our experiments. In Tables 1-3 in our paper, we report the total wall-clock time for DEFT, which includes the sum of both training time (fine-tuning time) and inference time. Importantly, even when accounting for the fine-tuning time, DEFT still **takes less time** than many inference-only or training-free methods such as DPS and RED-diff. Additionally, we believe that the memory complexity analysis presented in our rebuttal further clarifies our approach's efficiency. Please let us know if there are any other points we can clarify or if you have additional questions.

---

Rebuttal Comment 1.2: Comment: Dear author, I thank you for the detailed rebuttal. After going through the rebuttal, my concerns regarding the theoretical novelty are clear. I have decided to improve my rating to BA. I believe the comparison with FreeDoM and MPGD is not fair since these methods are training-free, and the authors did not give satisfactory explanations/comparisons.

---

Reply to Comment 1.2.1: Title: Choice of comparison methods Comment: Dear reviewer, thank you for your feedback. We appreciate your decision to improve your rating and value your continued input and discussion. We understand your concerns regarding the comparisons with training-free methods such as MPGD, and we will add a note to the manuscript making this explicitly clear so that no unfair conclusions can be drawn. It is not our intention to perform any unfair comparisons. We really want to make this clear and for it not to damage the perception of our work. 
We still think there is value in the comparison to training-free methods, and hope that you see it the same way, since you suggested comparing to the training-free methods MPGD and FreeDoM in your first response. You can interpret DEFT as fine-tuning of a training-free method (see also the response to reviewer ESdx about the motivation of the network parameterization). Thus this comparison is almost like an ablation, i.e., how method X is improved when we fine-tune it this way. Moreover, it is important to note that many conditional sampling methods, while training-free, require setting hyperparameters, such as the time-dependent strength $\lambda(t)$ for DPS or RED-diff. As these methods can be quite sensitive to the choice of $\lambda(t)$, in practice a small dataset is often necessary to appropriately tune these hyperparameters. In addition, we see the ability of DEFT to leverage a small dataset as **a positive feature of our method**. In many practical applications, such as the medical imaging or protein design settings described in our paper, small datasets specific to the task are available. Leveraging these for optimal performance is a strength of the DEFT method. We would also like to add further clarification pertaining to the fairness of the comparison. Many training-free methods such as DPS require backpropagating through the score network over many iterations; from a computational-budget perspective this is not so different from training (especially compared to the small-scale fine-tuning that DEFT requires, i.e., a small fine-tuning network and small dataset). As you can see in our total wall-clock time comparisons, such training-free methodologies actually result in more GPU hours overall than our fine-tuning approach. Therefore, we do believe this comparison to be very helpful, as the validation compute time becomes **comparable to or higher than our required fine-tuning time**. 
Finally, we would like to emphasize that we did add some trained comparisons (ControlNet and I2SB as proposed by the reviewers). To the best of our knowledge there are not many other applicable trained methods. We did try out all the methods suggested by reviewers and we are happy to commit to adding more comparisons in the final version of the manuscript if you think there are other suitable methods to compare against.
Summary: The paper proposes a new framework for fine-tuning unconditional diffusion models for conditional generation based on Doob's $h$-transform. By utilizing a small set of observations and ground truth samples, the algorithm can learn the conditional $h$-transform that is used to Strengths: - The work is a novel approach to fine-tuning pre-trained unconditional diffusion models for conditional generation. The authors provide an extensive formulation of sampling from the posterior given a diffusion prior using Doob's $h$-transform. The proposed method has the potential to unify existing diffusion posterior sampling methods under a common framework. - The work can be extended to non-linear inverse problems, which is a significant limitation of many existing diffusion posterior sampling methodologies. - By learning the transform, the proposed algorithm significantly speeds up conditional inference in comparison to existing methods. Many of the previous approaches also require backpropagating through the denoising network during inference, making their usage impractical in many applications. Weaknesses: - The method requires a non-negligible dataset of observations and samples to train the $h$-transform on. Although there is a significant speed advantage during inference, access to this dataset is not guaranteed for every task, and there could be issues with generalization. For example, to train a generic non-linear deblurring operator, the network that parametrizes the $h$-transform has to be trained on a diverse enough set of observations and images. This is an important limitation that some other methods (such as [9]) do not suffer from. Even in the stochastic optimal control case, which is training-free, using VarGrad or Trajectory Balance requires significant computational resources that can exceed the requirements of previous posterior inference approaches. 
Technical Quality: 3 Clarity: 3 Questions for Authors: - How do the $h$-transform networks perform on out-of-distribution samples? Given that they still contain a sizeable number of parameters how important is the number of samples used to train them in the final image quality? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations have been addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments. We ran additional experiments; the complete results are available in Table 1 in the 1-page PDF.

### Q1: How does DEFT perform on OOD samples?

**A1**: We evaluated DEFT, trained on ImageNet (Section 4.1), on a subset of 200 images of the ImageNet-O [1] dataset for both inpainting and HDR. The ImageNet-O dataset contains unique images that do not belong to any of the classes present in the original ImageNet dataset and is considered OOD. The results for inpainting are similar to the evaluations of the main paper, see also Table 1 and Figure 3 in the rebuttal PDF. For a nonlinear inverse problem such as HDR, we still significantly outperform our best baseline RED-diff, showing that our method is robust on images outside the training and fine-tuning distribution, and that it has the potential to learn a task-agnostic solution to an inverse problem.

### Q2: Dependence on number of training samples and image quality?

**A2**: As DEFT requires a dataset for fine-tuning, we ablate the number of training samples that can still result in good performance. We trained DEFT on a subset of 10, 100 and 200 ImageNet images for inpainting. We see improvements in all metrics when training on a larger dataset. For the KID, we can outperform RED-diff (KID: 0.86) even when trained on only 200 images. See also Figure 2 in the rebuttal PDF for an example reconstruction. Here, we can see that with an increasing number of training samples, the resulting image looks more realistic. However, even with 10 images, we perform quite competitively, showcasing that our method is very sample-efficient when it comes to learning a conditional transform. See the following table for the results on inpainting on ImageNet. Here, we see that even with 10 samples we can outperform DPS w.r.t. the KID, and with 200 samples we outperform RED-diff. 
| DEFT, trained on | 10 | 100 | 200 | 1000 (original) | RED-diff | DPS |
|----------------|----|-----|-----|-----------------|--------|-----|
| KID | 1.85 | 0.978 | 0.401 | 0.29 | 0.86 | 15.2 |

Further, we view the requirement of a fine-tuning dataset as a **feature**, not only as a limitation. While inference-time, training-free methods (e.g. FreeDoM, DPS or RED-diff) have a ceiling on their performance when applied to new tasks or datasets, DEFT leverages fine-tuning to achieve high efficiency and performance even with a small dataset. We see that even with only 10 images, we can achieve a PSNR of 20.87, compared to 21.27 for DEFT trained on the full fine-tuning set. This efficiency is mostly due to the initialisation and network parameterization in Eq. (13), where DEFT is initialized to mimic a cheap guidance term, similar to what is proposed in DPS or MPGD.

### Q3: Computational expense of VarGrad/Trajectory Balance for the online fine-tuning loss (Section 3.2)

**A3**: We acknowledge that the online fine-tuning in its current form comes with a high computational cost. We included the online objective first and foremost to highlight the interesting connection between stochastic optimal control and conditional sampling. After the NeurIPS submission deadline, Venkatraman et al. [2] published a fine-tuning approach for text-to-image diffusion models based on trajectory balance with a similar architecture (cf. Eq. (11) in [2] with Eq. (13) in our work). They were able to scale trajectory balance to higher-dimensional problems by using off-policy training, i.e., re-using previous samples, and stochastic subsampling, i.e., calculating the gradient only for a randomly sampled subset of the trajectory. Similar tricks can be used for our online fine-tuning objective to reduce the computational cost and scale it to high-dimensional settings. We believe that our proposed online objective can lead to interesting future work on scaling up, similar to [2]. 
[1] Hendrycks et al., Natural Adversarial Examples, IEEE CVPR 2021. [2] Venkatraman et al., Amortizing intractable inference in diffusion models for vision, language, and control, arXiv preprint (2024) --- Rebuttal Comment 1.1: Title: Final Score Comment: Thank you for the detailed responses. Considering the additional clarifications and results, I believe that this work could have significant impact and will be raising my score to reflect it.
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable and thorough feedback; we will revise the paper accordingly. Below, we address the main points and describe improvements we made to our submission.

#### Strengths and contributions

We appreciate the recognition of our work's **novelty** (reviewers `no8y, qBFS, ESdx`) in fine-tuning pre-trained unconditional diffusion models for conditional generation using Doob's h-transform. The **solid theoretical foundation** (reviewers `no8y, qBFS, ESdx, ZJuc`), **comprehensive experiments** (reviewers `qBFS, ESdx, ZJuc`), and **speed improvements during inference** (reviewers `no8y, qBFS`) were also noted by the reviewers.

#### Addressing weaknesses and reviewer questions

Please find detailed point-by-point responses to each reviewer in the corresponding thread. Here, we summarize the main points from these discussions:

1. **DEFT requires a fine-tuning dataset**: We acknowledge that DEFT requires a fine-tuning dataset for training the h-transform. In applications where only a handful of fine-tuning examples are available, this may limit performance. Yet, our additional experiments show that even in low-data settings (n=100, 200) DEFT is able to produce results similar to DPS or RED-diff. For many other applications for which fine-tuning datasets are available (conditional protein design, medical imaging), the fine-tuning phase of DEFT allows the model to **benefit from this data** (as compared to inference-time-only, training-free strategies such as DPS), leading to improved performance, as demonstrated in the experiments in our paper, with comparable or faster total evaluation time on the eval dataset even when taking the fine-tuning time into account (as the faster DEFT inference makes up for the computational overhead of fine-tuning). 
In the experiments on conditional protein design, we saw that reconstruction guidance methods such as DPS often fail to give convincing results, whereas already a small fine-tuning dataset can improve the performance of DEFT.

2. **DEFT fine-tuning has a computational overhead**: Reviewers rightly point out that DEFT requires an initial training phase. This is common across many fine-tuning methods. However, the overall computational cost, combining both training and inference phases, may vary depending on the specific training budget and application. We demonstrate that DEFT provides fast and efficient sampling during inference time. This speed-up during inference helps to offset the initial training time. To illustrate this, our experiments include the total sampling time, which encompasses both training and inference phases (see Table 1 and Table 2 in the manuscript). These results show that DEFT is comparable to or faster than other baseline methods, such as DPS and RED-diff, when the goal is to sample 1000 images. In the revised version, we include a detailed breakdown of the computation time, not just the total time, in the tables.

3. **Additional experiments for rebuttal**: Following the reviewers' comments, we performed a variety of experiments on the ImageNet dataset. We evaluated: - *MPGD [2] on inpainting, HDR and super-resolution* as a new training-free baseline which does not require backpropagating through the diffusion model. Our results show that DEFT is able to outperform MPGD. - *ControlNet [4] on inpainting, HDR and super-resolution* as a comparison to a different fine-tuning method. Our results show that DEFT is able to outperform ControlNet on these image reconstruction tasks. - *I2SB [3] on inpainting and super-resolution* as an example of a fully-trained conditional diffusion model. Here, our results show that I2SB outperforms DEFT on some tasks. 
However, I2SB was trained on the full ImageNet training set with a big compute budget, whereas DEFT was trained on a subset of 1000 images. - *DEFT on inpainting, trained on a subset of 10, 100 and 200 images* to study the effect of the size of the training set. Here, we see that a larger training set results in better image quality. However, already with 100 images DEFT is comparable to training-free methods such as RED-diff or DPS. - *DEFT on inpainting and HDR for out-of-distribution data (ImageNet-O [1])* to study generalisability. On ImageNet-O the quality metrics deteriorate; however, we are still competitive with RED-diff. **A table with all new results and some examples can be found in the 1-page PDF**; notice we strongly outperform the other requested inference-time baselines and ControlNet, whilst having comparable to slightly worse performance compared to fully conditionally trained methods like I2SB, with significantly smaller models, training time and datasets. [1] Hendrycks et al., Natural Adversarial Examples, IEEE CVPR 2021. [2] He et al., Manifold Preserving Guided Diffusion. ICLR 2024 [3] Liu et al., I2SB: Image-to-Image Schrödinger Bridge, ICML 2023. [4] Zhang et al., Adding Conditional Control to Text-to-Image Diffusion Models, IEEE CVPR 2023 Pdf: /pdf/1ebbf5767eeed26420681992171a5b72f7ae57c5.pdf
NeurIPS_2024_submissions_huggingface
2024
Credal Learning Theory
Accept (poster)
Summary: This paper extends traditional statistical learning theory, which usually considers a fixed underlying data-generating distribution, to a case where the underlying distribution is assumed to be from a convex set of distributions. Several excess risk bounds (finite realizable, finite unrealizable, and infinite hypothesis space) are derived. Strengths: - This paper aims to analyze the uncertainty and deviation of training/test data distributions in a principled manner, which is very appreciated. - Using convex sets of distributions is a reasonable approach. - This paper considered different assumptions on the hypotheses. Weaknesses: - The writing can be improved. For example, the first paragraph is basically the notation and a review of ERM. The second paragraph mentioned DA and DG, but it is still unclear what problems existing theories and problem settings have and why it is important to solve them. Then, the authors implied that the existing assumptions are unrealistic and too strong, so the resulting theories are not generalizable. However, these statements are too vague, so it is difficult to verify or falsify them. - Partly due to the writing, I cannot fully understand the proposed theorems, let alone their proofs. For example, I do not understand what exactly the _well-defined_ quantity $\epsilon^\star(\delta)$ in Theorem 4.1 (and the following theorems) is. Therefore, the theoretical implications are very unclear to me, and I cannot accurately assess the significance and novelty of this work. Technical Quality: 3 Clarity: 2 Questions for Authors: Minor issues: - ERM, DA, DG: please provide references. - the position of Figure 1. - l. 74: the citing style "Zhou et al. [63]" seems strange (to me) Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The author discussed the limitations briefly in the conclusion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *The writing can be improved. For example, the first paragraph is basically the notation and a review of ERM.* We thank the reviewer for their input. In the updated version, we will strive to further improve our writing. We believe, however, that the first paragraph is needed to set up the notation, so that every reader can be put in a position to follow our arguments, regardless of their background knowledge of the matter. *The second paragraph mentioned DA and DG, but it is still unclear what problems existing theories and problem settings have and why it is important to solve them. Then, the authors implied that the existing assumptions are unrealistic and too strong, so the resulting theories are not generalizable. However, these statements are too vague, so it is difficult to verify or falsify them.* Regarding DA and DG, we agree with the reviewer, and we will improve clarity regarding existing theories and their shortcomings. At a very high level, no existing technique takes into account possible distribution misspecification and drift like we do, thanks to our credal set approach. Additional clarifying statements have been added to the Related Work section. The following changes have been made addressing the reviewer's comments: In line 78, remove from ", or are reliant on strong assumptions (e.g.," until line 80 up to "DG approaches." Include: "Regarding kernel-based methods, assumptions related to the boundedness of kernels and the continuity of feature maps and loss functions render these approaches not directly applicable to broader scenarios (Deshmukh et al., 2019; Hu et al., 2020; Ye et al., 2021)" In line 82, after the sentence ending "H-divergence [1].", remove text up to line 84 ending in "robust generalization [49]."
Include: Researchers have also focused on adaptation to new domains over time, treating DG as an online game in which the model is a player minimizing the risk associated with new distributions introduced by an adversary at each step (Rosenfeld et al., 2022). However, in scenarios where the test distribution lies significantly outside the convex hull of the training distributions (Albuquerque et al., 2019), or when the strong-convexity assumption on the loss function is unmet (Rosenfeld et al., 2022), these approaches fall short of achieving robust generalization. Remove text from line 90 up to line 92 ending in "an adversary at each step [49]." After line 96 ending in "range of models [36].", include: Though this simplification has a number of practical benefits, models trained under covariate shift assumptions might suffer in terms of robustness to other types of distribution shift. *Partly due to the writing, I cannot fully understand the proposed theorems, let alone their proofs. For example, I do not understand what exactly a well-defined quantity $\epsilon^\star(\delta)$ in Theorem 4.1 (and the following theorems) is. Therefore, the theoretical implications are very unclear to me, and I cannot accurately assess the significance and novelty of this work.* We thank the reviewer for pointing this out. Because of the 9-page limitation, we were not able to include the proofs of our results in the main part of the paper; they are deferred to Appendix A. We will likely not be able to move them to the main body of the paper, as there is a page limitation in the rebuttal phase as well. The interested reader, though, can find proofs of all the statements in the aforementioned appendix. We note in passing that we use the term "well defined" in the usual mathematical sense of "expression whose definition assigns it a unique interpretation or value" [1]. In particular, given any value of $\delta$, the quantity $\epsilon^\star(\delta)$, depending on $\delta$, is indeed well-defined.
To see this, we refer the reviewer to the last equation on page 13. [1] Joseph A. Gallian, Contemporary Abstract Algebra, Houghton Mifflin, 2006. *ERM, DA, DG: please provide references* We thank the reviewer for pointing this out. Regarding ERM, we refer to Liang's lecture notes and [38] extensively in our manuscript. In the new version of the manuscript, we will mention them already on the first page. Regarding DG, we explicitly mention [47] in the last line of page 1. We will add a reference for DA in the new version (https://link.springer.com/article/10.1007/s10994-009-5152-4). *the position of Figure 1.* We thank the reviewer for their suggestion, and we will move Figure 1 right before the "Paper outline" subsection of the paper. *l. 74: the citing style "Zhou et al. [63]" seems strange (to me)* We do not know how to handle this suggestion, we must confess. It is the visualization of the \citet command that we use extensively throughout the paper. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed reply. I've read other reviews and responses. I'm now more positive about this paper but still not so sure about my understanding, so I raised my score while keeping my confidence. My previous rating was mainly due to my insufficient understanding of this work. I appreciate the authors' explanation of the problem setting, their technical contributions, and the practical relevance (e.g., the availability of the credal set, continual learning, distribution misspecification). I do not doubt that it is difficult to explain such a topic clearly to every reader in 9 pages, as I also faced similar issues before. The current presentation may be sufficient for some researchers, but as someone who only vaguely remembers those important results in statistical learning theory, I have to agree with `Reviewer hWdx` and `Reviewer 2ZTx` that this paper is quite dense. Maybe a 10-page conference paper is not the best form to present this theory.
I'm looking forward to the authors' other expositions of this theory (journal papers, lectures, tutorials, etc.), if possible, so that I may be able to use it in other settings. Regarding the citing style, do not worry and please ignore it. It's just a personal preference. --- Reply to Comment 1.1.1: Title: Thank you! Comment: We thank the reviewer for the time they spent understanding our paper more deeply, for understanding our struggles with the page limit, and for raising their score. As they correctly suggest, we are currently preparing a journal version where we also plan to extend our results, and we are thrilled about their interest in possibly using our findings in their research. Once again, we thank the reviewer for their comments, The Authors
Summary: This is a paper on learning theory with a focus on machine learning. The authors consider a setup with several training sets available. Given an additional (test) set, the authors assume that the distribution generating these new data coincides with one of the distributions generating the training sets or is a convex combination of them. Under a few additional assumptions, the authors can find bounds on the expected loss of the empirical risk minimiser. Strengths: The paper is quite technical and dense. Yet, the proofs gathered in the appendix are relatively easy to follow and, after a preliminary check, correct. I think the generalised setup considered by the authors is quite interesting and challenging. The authors also present an experimental evaluation of their bounds. This is very valuable for such a kind of theoretical paper. Weaknesses: The setup considered by the authors might appear a bit special and not very common. Most of the results are a generalisation to the multi-dataset setup of results included in Liang's Lecture Notes. In a sense, the work in this paper might appear as a (straightforward?) extension of those results to the case of multiple datasets. It is not very clear whether relaxing the realizability assumption (something that adds a lot of realism to the modelling) might have a strong impact on the bounds. Technical Quality: 3 Clarity: 2 Questions for Authors: - In which sense are the results in the paper not a simple corollary of those in Liang's LNs? - Is it possible to characterise the impact of the realizability assumption on the bounds? What about considering this case even in the experiments? - Is it possible to advocate for the multi-dataset setup considered by the authors better? Are there examples of real tasks that cope with such a situation? I found the experiments very good for the paper, while the discussion about the credal set learning was not so crucial.
What about having the latter in the appendix and the former in the main body of the paper? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The limitations discussed by the authors wrt the experiments are very reasonable and I see no problems with them. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *In which sense are the results in the paper not a simple corollary of those in Liang's LNs?* We thank the reviewer for this deep question, and for giving us the opportunity to be clearer about this topic. Theorem 4.1 is an immediate extension (though not a corollary) of Liang's LNs to the credal case, as shown in Corollary 4.3. Corollaries 4.2 and 4.4, instead, cannot be immediately traced back to Liang. The former, because it shows that the diameter of the credal set plays a role in deriving a bound on the expected risk of the ERM computed according to any possible distribution within the credal set itself (not necessarily the oracle one). Corollary 4.4, instead, inspects the possibility of distribution drift in the newly observed dataset, something Liang does not take into consideration. Similar considerations hold for the results in Sections 4.2 and 4.3. Let us add two considerations. Theorem 4.5 utilizes a different proof technique than the analogous, non-credal counterpart in Liang's work, and Theorem 4.9 requires the definition of the extremal Rademacher complexity $R_{n,P^{ex}}(\mathcal{A})$, which, to the best of our knowledge, has never been introduced before. *Is it possible to characterise the impact of the realizability assumption on the bounds? What about considering this case even in the experiments?* We thank the reviewer for this deep question. Yes, it is possible. By looking at the proofs of Theorem 4.1 (in particular the last equation of page 13) and Theorem 4.5 (in particular Equation (9)) in Appendix A, we see how forgoing realizability implies a slightly looser bound. We did not include this consideration in the original version of the manuscript because of page limitations, but we will do so in the updated version. A newly added synthetic experiment shows this behavior. In the experiment we computed the theoretical bound $\epsilon^{\star\star}(\delta)$ and verified (i) that it is slightly looser (e.g.
with 200 samples, $\epsilon^{\star}(\delta) = 0.03785$ while $\epsilon^{\star\star}(\delta) = 0.03800$) and (ii) whether the difference $L_P(\hat{h}) - L_P(h^\star)$ is within this bound. The results showed that the risk of the empirical risk minimizer $\hat{h}$ is within the bound $\epsilon^{\star\star}(\delta)$ of that of the best theoretical model $h^\star$. The difference $L_P(\hat{h}) - L_P(h^\star)$ also satisfies the condition $$L_P(\hat{h}) - L_P(h^\star) \leq \epsilon^{\star\star}(\delta).$$ The experiment empirically validates Theorem 4.5 in a synthetic environment, showing that even without realizability, the empirical risk minimizer's performance is close to the theoretical best within a computable bound. The experimental results are presented in the revised manuscript's Table B.4, and can be seen here: https://shorturl.at/x6JFq *Is it possible to advocate for the multi-dataset setup considered by the authors better? Are there examples of real tasks that cope with such a situation?* Our approach is closely linked with continual learning applications, which emphasize the need to handle diverse and sequential datasets to achieve robust and generalizable models. Recent works in continual learning have demonstrated the practical applications and benefits of using a multi-dataset setup. For instance, [1] introduces a novel method for domain incremental learning, leveraging multiple datasets to adapt seamlessly across different tasks. Another example is [2], which proposes a parameter-efficient continual learning framework that dynamically expands a pre-trained CLIP model through Mixture-of-Experts (MoE) adapters in response to new tasks. [3] addresses the challenges of multi-modal medical data representation learning through a continual self-supervised learning approach. These recent studies illustrate the practical relevance of a multi-dataset setup in a continual learning framework.
Furthermore, some techniques use a multi-dataset setup in continual learning without relying on a specific temporal order. For example, [4] presents a replay mechanism based on single frames, arguing that video diversity is more crucial than temporal information under extreme memory constraints. By storing individual frames rather than contiguous sequences, they can maintain higher diversity in the replay memory, which leads to better performance in continual learning scenarios. By including citations to these works and discussing their relevance, we will strengthen the advocacy for our approach in the updated version of our manuscript. References: [1] Jeeveswaran, K., et al. "Gradual Divergence for Seamless Adaptation: A Novel Domain Incremental Learning Method." Proc. ICML. 2024. [2] Yu, Jiazuo, et al. "Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters." Proc. IEEE/CVF CVPR. 2024. [3] Ye, Y., et al. "Continual Self-supervised Learning: Towards Universal Multi-modal Medical Data Representation Learning." Proc. IEEE/CVF CVPR. 2024. [4] Alssum, Lama, et al. "Just a Glimpse: Rethinking Temporal Information for Video Continual Learning." Proc. IEEE/CVF CVPR. 2023. *I found the experiments very good for the paper, while the discussion about the credal set learning was not so crucial. What about having the latter in the appendix and the former in the main body of the paper?* We thank the reviewer for the suggestion, and we will try to move the experiments into the main body of the paper. The current structure is the result of feedback we received on the importance of eliciting actual credal sets from the available data. Because deriving credal sets directly from data is currently an open question in Imprecise Probabilistic Machine Learning, in the new version we plan to keep a sketch both of how to derive credal sets and of the key experimental results in the main part of the manuscript.
--- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed feedback and for the additional work. I am happy to confirm my initial positive opinion about the paper.
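As an aside, the kind of synthetic sanity check described in this rebuttal thread can be sketched in a few lines. The sketch below is hypothetical: it uses a toy class of threshold classifiers and the classical two-sided finite-class Hoeffding bound in place of the paper's $\epsilon^{\star\star}(\delta)$, whose exact form is given in Appendix A of the manuscript.

```python
import math
import random

random.seed(0)

# Hypothetical setup: threshold classifiers h_t(x) = 1[x >= t] over a finite
# grid, uniform inputs, and a noisy target (so realizability fails). The
# classical two-sided finite-class bound stands in for the paper's
# epsilon**(delta); all names and values here are illustrative.
thresholds = [i / 20 for i in range(21)]  # |H| = 21

def h(t, x):
    return 1 if x >= t else 0

def true_label(x):
    # Bayes rule 1[x >= 0.5], replaced by a random label 10% of the time
    return (1 if x >= 0.5 else 0) if random.random() > 0.1 else random.randint(0, 1)

n, delta = 200, 0.05
data = []
for _ in range(n):
    x = random.random()
    data.append((x, true_label(x)))

def emp_risk(t):
    return sum(h(t, x) != y for x, y in data) / n

def pop_risk(t, m=50_000):
    # Monte-Carlo stand-in for the population risk L_P
    errors = 0
    for _ in range(m):
        x = random.random()
        errors += h(t, x) != true_label(x)
    return errors / m

t_hat = min(thresholds, key=emp_risk)                        # the ERM
t_star = min(thresholds, key=lambda t: pop_risk(t, 20_000))  # approx. best in class

eps = 2 * math.sqrt(math.log(2 * len(thresholds) / delta) / (2 * n))
gap = pop_risk(t_hat) - pop_risk(t_star)
assert gap <= eps  # holds with probability at least 1 - delta
```

To reproduce the rebuttal's actual experiment, one would swap the classical `eps` for the credal bound $\epsilon^{\star\star}(\delta)$ derived in the paper's Appendix A.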
Summary: This paper introduces a novel learning framework termed "Credal Learning Theory", which extends traditional statistical learning theory to handle variations in data distributions, with a particular focus on domain generalization. The authors propose using credal sets, which are convex sets of probability distributions, to model the variability in data-generating processes. By leveraging multiple training sets generated from different distributions, the framework infers a credal set that allows the model to capture uncertainty and enjoy guarantees in domain generalization. The paper also provides theoretical bounds on the expected risk of models learned within this credal framework under various conditions, including finite and infinite hypothesis spaces, and both realizable and non-realizable scenarios. Strengths: This paper introduces the novel concept of Credal Learning Theory to account for data distribution variability using credal sets. This approach presents a new perspective for analyzing the domain generalization field. The authors innovatively apply techniques from Imprecise Probabilities to infer models from a limited number of sample sets, which is practically significant. The definition of credal sets as the convex closure of a family of distributions is also quite natural. Additionally, the paper provides complete mathematical proofs to support the derived generalization bounds. The use of credal sets to model epistemic uncertainty is robust and well-justified, providing theoretical guarantees. Weaknesses: My main concern is whether the use of the credal set introduced in this paper is sufficient for analyzing domain generalization. As the paper describes, the model is inferred from a finite number of datasets, and we are studying cases where each dataset is sampled from one of a potential family of distributions. Domain generalization aims to provide guarantees for generalization across different distributions within this potential family.
The paper provides generalization guarantees for all distributions within the credal set constructed by convex combinations of inferred distributions. However, the gap lies in whether the credal set adequately represents the true potential family of distributions. To convincingly address this, the paper should at least consider the following two aspects: 1. Provide real-world examples demonstrating that the constructed credal set encompasses most of the potential distributions we need to consider in these scenarios, thus illustrating the practical relevance of the credal set. 2. Make assumptions about the potential family of distributions, such as assuming the family follows a certain distribution, and prove that the credal set occupies a significant proportion of the support region with high density for this distribution. This would show that the generalization guarantees given for the credal set are meaningful for real-world distribution scenarios. These two points are essential to demonstrate the universality of the credal set. Otherwise, there may be cases where the distributions of the multiple datasets used for inferring the model are very close to each other, making their convex hull very small and covering only a tiny part of the support of the true family of distributions. This would mean there is no guarantee for the true potential family of distributions. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to Weaknesses. In addition to the issues mentioned in Weaknesses above, are there any analyses of Computational Complexity that are specific to the use of credal sets as an approach to infer models? Can such an approach be implemented for limited but large datasets? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors have discussed the limitations of this paper in Section 5. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *Provide real-world examples demonstrating that the constructed credal set encompasses most of the potential distributions we need to consider in these scenarios, thus illustrating the practical relevance of the credal set.* We thank the reviewer for the question, and for giving us the opportunity to expand on the matter. In general, as the reviewer points out and as we mention in our manuscript, a general method to guarantee (even probabilistically) that the true distribution of the new dataset will be included in the credal set is still to be developed. That being said, in many applications – e.g. to continual learning (CL) – scholars often make assumptions that make the use of credal sets with suitable coverage properties plausible. Think for example of the task similarity assumption in CL https://shorturl.at/UJuiZ. There, it is posited that the oracle distributions pertaining to all the tasks of interest are all contained in a TV-ball of radius $r$ chosen by the user. This captures the idea that the model studied will be used on tasks that are not too different from each other. In this case, the credal set elicited from a finite sample of distributions from the ball will be a good approximation of the ball itself, converging for $N \rightarrow \infty$ to the entire ball. In the healthcare setting, experts’ opinions can be incorporated alongside empirical data (plausible probability distributions) to represent the probability uncertainty, for example, for the prognosis of a disease given a set of patient characteristics/biomarkers (see e.g. https://shorturl.at/CTKid). To make sure that the credal set constructed encapsulates most of the `potential distributions needed’, a number of approaches can be taken including incremental learning; in this approach the AI model learns and updates knowledge incrementally. As a result, the credal set can be continuously updated (via incremental learning) as new data become available. 
In this direction, learning health systems are being implemented in practice. These are health systems “in which internal data and experience are systematically integrated with external evidence, and that knowledge is put into practice” https://shorturl.at/ubSCG. Once again, in the future we will study how to guarantee that the true distribution is an element of the credal set in full generality, and we will apply our findings to real-world datasets. *Make assumptions about the potential family of distributions, such as assuming the family follows a certain distribution, and prove that the credal set occupies a significant proportion of the support region with high density for this distribution. This would show that the generalization guarantees given for the credal set are meaningful for real-world distribution scenarios.* We thank the reviewer for their point, which ties to the one in the previous question. They are right in pointing out that – given that no general way (yet) exists of guaranteeing that the oracle distribution for the $(N+1)$-th dataset belongs to the credal set obtained from the previous $N$ ones (as also pointed out e.g. in Section 7 of https://shorturl.at/zbXiu) – assumptions must be made on a case-by-case basis. Once those assumptions are made, then it should be shown that either the credal set covers a non-negligible portion of the distribution class of interest, or that even a small credal set is "good enough". We will make this clear in the new version of the manuscript. We note in passing, though, that since our goal is maximal generality, the methods that we presented in Section 3 induce rather large credal sets. Consider for example Section 3.1.1. There, we first $\epsilon$-perturb all the likelihoods that we elicit for the $N$ "past" datasets, and then we take the convex hull of the union of such perturbations. If the $N$ likelihoods are "diverse enough", we will be able to cover a "wide area" of the distribution class of interest. 
For example, imagine that the class is "univariate continuous distributions supported on $\mathbb{R}$". If $\mathcal{L}_1$ is a Normal centered at $\mu_1$, and $\mathcal{L}_2$ is a Normal centered at $\mu_2$, with $\mu_1 \neq \mu_2$ and "far enough" from each other, both having the same (or similar) variance, then by considering the credal set induced by these two, we already cover virtually all distributions in the class. To see this, notice that, as pointed out in the equation between lines 145 and 146, the credal set includes all distributions that setwise dominate the lower probability. In addition, by Example 3 in https://tinyurl.com/3ftw7hrn, we know that the lower probability of any set $A$ in the sigma algebra is given by $\min_i (1-\epsilon_i) \mathcal{L}_i (A)$. Now, since the tails of a Normal distribution decay quickly, this means that $\min_i (1-\epsilon_i) \mathcal{L}_i (A)$ is very close to $0$, for all $A$. To visualize this, please refer to the second picture in Example 1 of https://shorturl.at/x0VbJ. There, we have two Normals depicted, one in red and one in blue, having the same variance, the former (call it $\mathcal{L}_1$) centered at 9, and the latter (call it $\mathcal{L}_2$) centered at 20. As we can see, $\min_i \mathcal{L}_i$ is basically zero everywhere, and in turn so will be $\min_i (1-\epsilon_i) \mathcal{L}_i$, for any $\epsilon_i > 0$ that we choose. But then, almost all univariate continuous distributions on $\mathbb{R}$ will setwise dominate such a lower probability, resulting in a "virtually all-encompassing" credal set. *Are there any analyses of Computational Complexity that are specific to the use of credal sets as an approach to infer models? Can such an approach be implemented for limited but large datasets?* We thank the reviewer for their question, which we will answer in the Global Rebuttal. --- Rebuttal Comment 1.1: Comment: Thanks for your response. --- Reply to Comment 1.1.1: Title: Thank you!
Comment: If our answers contributed to solving the doubts that the reviewer had (which we strongly hope), we would greatly appreciate it if they could raise their score. Sincerely, The Authors --- Rebuttal 2: Title: Last Question Comment: **We report here too our answer to Reviewer sJhN's last question** *Are there any analyses of Computational Complexity that are specific to the use of credal sets as an approach to infer models? Can such an approach be implemented for limited but large datasets?* We thank the reviewer for their question, the answer to which we will try to add to the new version of our manuscript. Yes, there are analyses in the literature of the computational complexity specific to the use of credal sets, particularly in the context of graphical models and probabilistic inference. Such approaches can be implemented for large datasets, but they often require approximation techniques to be computationally feasible [1, 2, 3]. Despite this, credal set approaches can be implemented for large datasets using techniques like parallel processing, distributed computing, and efficient data structures. As with Deep Learning-based approaches, through the utilization of high-performance computing resources, algorithm optimization, and domain-specific adaptations, the computational challenges can be effectively managed. Recent advancements demonstrate the practicality of these approaches. For instance, "Credal-Set Interval Neural Networks" (CreINNs) have shown significant improvements in inference time over variational Bayesian neural networks [4]. Thus, while the computational demands are comparable to those of deep learning-based methods, the robustness and flexibility of credal sets, as demonstrated in recent research, make them a practical and valuable approach [5, 6]. References: [1] Mauá, Denis Deratani, and Fabio Gagliardi Cozman. "Thirty years of credal networks: Specification, algorithms and complexity." International Journal of Approximate Reasoning 126 (2020): 133-157.
[2] Lienen, Julian, and Eyke Hüllermeier. "Credal self-supervised learning." Advances in Neural Information Processing Systems 34 (2021): 14370-14382. [3] Mauá, Denis D., et al. "On the complexity of strong and epistemic credal networks." arXiv preprint arXiv:1309.6845 (2013). [4] Wang, Kaizheng, et al. "CreINNs: Credal-Set Interval Neural Networks for Uncertainty Estimation in Classification Tasks." arXiv preprint arXiv:2401.05043 (2024). [5] Marinescu, Radu, et al. "Credal marginal map." Advances in Neural Information Processing Systems 36 (2024). [6] Wang, Kaizheng, et al. "Credal Wrapper of Model Averaging for Uncertainty Estimation on Out-Of-Distribution Detection." arXiv preprint arXiv:2405.15047 (2024).
Summary: The paper develops a so-called credal learning theory that uses convex sets of probability distributions (also known as credal sets) to model the uncertainty of the data-generating distribution. As in classical statistical learning theory, the paper derives new theoretical bounds on the risk of the models learned from a collection of datasets instead of a single dataset. These datasets do not necessarily correspond to the same distribution. The new results (i.e., bounds) can be viewed as a generalisation of the classical results. Strengths: - The paper looks at a core problem in machine learning, and the credal-set-based approach to bounding the expected risk of a machine learning model appears to be quite novel. Weaknesses: - In my opinion, the presentation is quite dense, the notation appears to be quite heavy in places, and the paper is not easy to follow. I think it's important to illustrate some of the key concepts introduced in sections 3 and 4 with some examples. Tables 1 and 2 as well as Figure 2 are good, but they seem somewhat disconnected from the rest of the paper. Technical Quality: 3 Clarity: 2 Questions for Authors: - In my understanding, the results derived in the paper rely on the availability of the credal set that is supposed to contain the true data-generating process. How is that credal set elicited? - In principle, the credal set can be arbitrarily large in the sense that it can have arbitrarily many extreme points. Is there a bound on its size? Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The limitations of the proposed approach are clearly discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *In my opinion, the presentation is quite dense, the notation appears to be quite heavy in places and the paper is not easy to follow. I think it's important to illustrate some of the key concepts introduced in sections 3 and 4 with some examples. Tables 1 and 2 as well as Figure 2 are good but they seem somewhat disconnected from the rest of the paper.* We thank the reviewer for their insights. In the updated version, we will strive for more clarity. In the present version, we already have an example pertaining to Section 3.1.3, and we will try to bring to the main body of the paper the example pertaining to Section 3.1.1 that is currently confined to Appendix C (it was relegated there because of the 9-page limitation). In fact, a synthetic example pertaining to Section 4 can also be found in Appendix B; it was relegated there because of the page limitation. We will try to add a sketch of it in Section 4 to improve the way the paper reads. We will also try to be clearer about Tables 1 and 2. The former tells the reader, for example, that pmf $\ell_1$ assigns a probability of $0.3$ to the element $\omega_1$ of the state space $\Omega$, and similarly for the other pmfs and the other elements of the state space. Table 2 tells the reader the values that the lower and upper probabilities assign to each element of the power set $2^\Omega$. They are computed according to [6, Section 4.4]. That is, for all $A\in 2^\Omega$, $$\underline{P}(A)=\max \left\lbrace \sum_{\omega \in A} \underline{P}(\omega) , 1 - \sum_{\omega \in A^c} \overline{P}(\omega)\right\rbrace$$ and $$\overline{P}(A)=1-\underline{P}(A^c)=\min \left\lbrace \sum_{\omega \in A} \overline{P}(\omega) , 1 - \sum_{\omega \in A^c} \underline{P}(\omega)\right\rbrace.$$ Finally, Figure 1 is a visual representation of the resulting credal set (focusing only on the singletons in the power set $2^\Omega$).
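As a companion to these formulas, here is a minimal numerical sketch. The pmf values below are hypothetical (not the ones in the paper's Table 1), and the singleton bounds are taken here as pointwise min/max over the elicited pmfs, which is one common way to obtain probability intervals.

```python
from itertools import chain, combinations

# Hypothetical pmfs on a 3-element state space Omega = {w1, w2, w3}
# (illustrative values, not those of Table 1 in the paper).
pmfs = [
    [0.3, 0.5, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.4, 0.4],
]
n = len(pmfs[0])

# Singleton bounds: pointwise min/max over the pmfs
low = [min(p[i] for p in pmfs) for i in range(n)]
up = [max(p[i] for p in pmfs) for i in range(n)]

def lower_P(A):
    """max{ sum of lower bounds over A, 1 - sum of upper bounds over A's complement }"""
    Ac = [i for i in range(n) if i not in A]
    return max(sum(low[i] for i in A), 1 - sum(up[i] for i in Ac))

def upper_P(A):
    """min{ sum of upper bounds over A, 1 - sum of lower bounds over A's complement }"""
    Ac = [i for i in range(n) if i not in A]
    return min(sum(up[i] for i in A), 1 - sum(low[i] for i in Ac))

# Sanity check: the conjugacy upper_P(A) = 1 - lower_P(A^c) holds on the
# whole power set, and the lower bound never exceeds the upper bound.
subsets = list(chain.from_iterable(combinations(range(n), k) for k in range(n + 1)))
for A in subsets:
    Ac = tuple(i for i in range(n) if i not in A)
    assert abs(upper_P(A) - (1 - lower_P(Ac))) < 1e-12
    assert lower_P(A) <= upper_P(A) + 1e-12
```

For instance, with these illustrative numbers, $\underline{P}(\{\omega_1\}) = 0.1$ and $\overline{P}(\{\omega_1\}) = 0.3$.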
As we can see, $\omega_3$ has a probability between $0$ and $0.75$, $\omega_1$ between $0$ and $0.5$, and $\omega_2$ between $0.25$ and $1$. *In my understanding, the results derived in the paper rely on the availability of the credal set that is supposed to contain the true data generating process. How is that credal set elicited?* We thank the reviewer for the opportunity to clarify this point. Section 3 presents three ways of eliciting a credal set directly from the available data (see Sections 3.1.1, 3.1.2, 3.1.3, and 3.2), and an extra one is presented in Appendix D. Perhaps this was not clear enough, so we will strive for greater transparency in the new version. For the sake of completeness, let us summarize here the three approaches in the main portion of the paper. In the first one (Section 3.1.1) we first perturb the likelihood pertaining to each observed training set $D_i$, and then we take the convex hull of these perturbations. In the second one (Sections 3.1.2 and 3.1.3), we derive a plausibility function from the likelihoods, and we use the latter to characterize a credal set of probability measures whose probability density functions are pointwise dominated by the plausibility function. Finally, in Section 3.2 we illustrate the subjectivist approach, in which the scholar first specifies in a subjective manner – but influenced by the available empirical probabilities – the lower probability of some events of interest, then they extend (via Walley's extension principle) these values to a lower probability defined over the whole power set $2^\Omega$, and in turn consider the credal set of probabilities that setwise dominate such an extended lower probability. *In principle, the credal set can be arbitrarily large in the sense that it can have arbitrarily many extreme points. Is there a bound on its size?* We thank the reviewer for this deep question; let us try to answer it.
The size of the credal set does not necessarily depend on the number of its extreme elements. Think of a very small ball (whose extreme points are infinitely many) inscribed in a large polygon (having finitely many vertices, and hence finitely many extreme elements). Rather, one way of capturing the size of a credal set is to look at its diameter. To the best of our knowledge, there is no general recipe to bound it: it depends on how “far” apart (in some well-defined metric or divergence) the true data generating distributions for each of the collected training sets are. In this paper, we aimed to give general results, in which one cannot control the diameter of the credal set. However, in many applications this may indeed be possible. In continual learning, for example, one can rely on the task similarity assumption (see e.g. Assumption 1 in https://arxiv.org/abs/2305.14782), which tells us that the data associated with each task are generated i.i.d. from distributions whose distance (e.g. in the TV metric) does not exceed some value $r$ specified by the user. --- Rebuttal Comment 1.1: Comment: Thanks for the clarifications.
Rebuttal 1: Rebuttal: **We first answer Reviewer sJhN's last question** *Are there any analyses of Computational Complexity that are specific to the use of credal sets as an approach to infer models? Can such an approach be implemented for limited but large datasets?* We thank the reviewer for their question, the answer to which we will try to add to the new version of our manuscript. Yes, there are analyses of computational complexity in the literature that are specific to the use of credal sets, particularly in the context of graphical models and probabilistic inference. Such approaches can be implemented for large datasets, but they often require approximation techniques to be computationally feasible [1, 2, 3]. Despite this, credal set approaches can be implemented for large datasets using techniques like parallel processing, distributed computing, and efficient data structures. As with deep-learning-based approaches, the computational challenges can be effectively managed through high-performance computing resources, algorithm optimization, and domain-specific adaptations. Recent advancements demonstrate the practicality of these approaches. For instance, "Credal-Set Interval Neural Networks" (CreINNs) have shown significant improvements in inference time over variational Bayesian neural networks [4]. Thus, while the computational demands are comparable to those of deep-learning-based methods, the robustness and flexibility of credal sets, as demonstrated in recent research, make them a practical and valuable approach [5, 6]. References: [1] Mauá, Denis Deratani, and Fabio Gagliardi Cozman. "Thirty years of credal networks: Specification, algorithms and complexity." International Journal of Approximate Reasoning 126 (2020): 133-157. [2] Lienen, Julian, and Eyke Hüllermeier. "Credal self-supervised learning." Advances in Neural Information Processing Systems 34 (2021): 14370-14382. [3] Mauá, Denis D., et al.
"On the complexity of strong and epistemic credal networks." arXiv preprint arXiv:1309.6845 (2013). [4] Wang, Kaizheng, et al. "CreINNs: Credal-Set Interval Neural Networks for Uncertainty Estimation in Classification Tasks." arXiv preprint arXiv:2401.05043 (2024). [5] Marinescu, Radu, et al. "Credal marginal map." Advances in Neural Information Processing Systems 36 (2024). [6] Wang, Kaizheng, et al. "Credal Wrapper of Model Averaging for Uncertainty Estimation on Out-Of-Distribution Detection." arXiv preprint arXiv:2405.15047 (2024). --- **We also attach a pdf with the Table pertaining to the new experiment we discussed in our answer to reviewer 2ZTx's second question.** Pdf: /pdf/231ce50ca62a860f6f9ccd25beaa30be67303418.pdf
NeurIPS_2024_submissions_huggingface
2024
InterControl: Zero-shot Human Interaction Generation by Controlling Every Joint
Accept (poster)
Summary: This paper focuses on text-to-motion synthesis, specifically, generating motion of multiple interacting people. The method can potentially work with an arbitrary number of people and models inter-person interactions as pairs of joints that can be either in contact or not, separated by a certain control distance. The method is composed of two main stages: an LLM planner and a diffusion model based on the Human Motion Diffusion Model (MDM) that incorporates a motion controller, called Motion ControlNet, and an inverse kinematics (IK) module to have finer control over the joints of generated motion sequences and, thus, generate interactions between synthesized characters. The authors show that the distance between pairs of joints for interactions can be generated with off-the-shelf LLMs. A key feature of this work is that it is zero-shot; thus, the models used do not need to be trained with multi-person motion datasets. However, it uses a motion controller (based on the ControlNet paradigm) that is fine-tuned to generate global pose sequences (global motion), as opposed to the relative motion typically generated by SoTA methods such as MDM. This would be the first method to enable a single-person text-conditioned motion generation model to generate interactions for multiple people. The paper includes comparisons with the SoTA using standard metrics (FID, R-precision, Diversity, Foot Skating, Trajectory error, Location error and Average error). The authors compare with single-person motion generation methods (MDM, GMD, OmniControl). For the two-person setting the paper includes a comparison of Spatial Errors and a user study comparing with PriorMDM. Strengths: * In part, the paper is clear and well-written. * The proposed InterControl framework is meaningful.
* Although comparisons with multi-person motion generation models are not comprehensive, mostly due to differences in the training datasets used (which is understandable), the paper clearly shows an improvement compared to the priorMDM model, both quantitatively and qualitatively. Weaknesses: ### **Method section needs a bit of re-writing** The method section is somewhat confusing, especially in the beginning. There are some definitions that the reader needs to make assumptions about in order to continue reading. For example: - There is no overview of the method in this section, and even if Fig. 2 gives an overview, there is no link to it in the text. I would advise the authors to add this after the formulation and preliminaries so that the reader can easily understand the overall architecture. - L154: It is not so obvious to me what this discrepancy between relative and global motion is precisely and why it makes priorMDM less controllable in global space. I understand that generating global motion poses different challenges than relative motion, but could the authors explain this further? Doesn't it boil down to generating the global trajectory and then coupling/refining the global motion so that it is coherent with this trajectory as they do in [1]? - L157: While it may be considered as concurrent work, I suggest the authors discuss a bit [1], which also addresses this issue. - L174: At this point it is not clear to me what exactly the spatial conditions are. The authors could clarify with a simple example. - L175+: At this point, it is not clear why the control signal c is potentially sparse. Is it because it comes from text? This is hard to understand as there is no overview pointing out that the method uses a “Contact Plan” based on an LLM. - L177: I don't understand why the control will be desired for only a select few frames. Don't we want this control all the time?
But again, the authors may first need to clarify what exactly this control signal is and where it comes from. ### **Additional baselines** I believe that the authors did not include other multi-person motion generation baselines because those explicitly learn from multi-person data. However, I am curious to know how the results from the InterHuman method differ from this approach. References: [1] Zhang, Siwei, et al., RoHM: Robust Human Motion Reconstruction via Diffusion (CVPR 2024) Technical Quality: 3 Clarity: 2 Questions for Authors: Aside from the questions posted in the Weakness section, I have the following: ### **Need for clarifications** - I don't fully understand how the model generates the motion for all people. Is it done first individually and then merged somehow? Does the method need to run one MDM+Motion ControlNet instance per person? - From the qualitative results, it seems to me that I see the effect of an optimization post-processing step after the generation. Do the authors use a post-processing step with the IK, or is this the effect of the guidance by the IK module? - Is the IK module guidance applied both at the end of each denoising step and also after $x_0$ is obtained from the diffusion model? - To apply the IK module guidance, does the method use the gradient information from the optimization steps to influence the generation, or does it apply the result directly to the estimated poses in a similar fashion to PhysDiff? ### **Additional comments** I would suggest including a discussion on the Inter-X dataset in the first paragraph of the introduction. Even though this work is concurrent and this is not strictly necessary, this can make the paper stronger and more up-to-date. - L34: Could the authors expand a bit about which methods ignore a good design? - L38: Could the authors include a counter-example for such interactions that do not require 'additional interaction data'?
Are these interactions closer interactions such as hugging or, the opposite, more subtle interactions such as talking or holding a meeting? - L108: I would suggest adding a comment about the method proposed by InterHuman which is also based on the MDM architecture. ### Minor (typos and grammar) - L155: "is not able" should be “are” instead of “is”. - L160: is proposed--> proposes - L166: is “almost” a random noise to be sampled. Almost? Why almost? $x_T$ should be an isotropic Gaussian at T. - L187: desperately needs a comma, otherwise the sentence completely changes meaning!: "(FK) to convert" -->"(FK), to convert" Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes, the authors include limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the constructive review. We will revise our paper according to the insightful suggestions in our final version. We are happy that our paper is regarded as ‘clear’ and ‘well-written’ and our framework is ‘meaningful’. We have tried our best to clarify questions and we hope that our response can address your concerns. **Q1: Paper Writing.** **A1:** Thanks for your suggestions; we will add the reference to Fig. 2 at the beginning of Sec. 3 and give a more detailed overview of our method. **Q2: Discrepancy between relative and global motion.** **A2:** Yes, the discrepancy between relative and global motion is similar to RoHM, where the root joint is represented in global space, while the other joints' positions are represented relative to the root joint, i.e., in a local space with the origin at the root. This data format is adopted in HumanML3D, and it has proved useful in motion generation by MDM, PriorMDM, etc. However, the global positions of joints in this data format are not available except for the root joint, which makes PriorMDM unable to control joints in global space. We achieve this by converting this data format to global positions for the input of Motion ControlNet and IK guidance. RoHM also uses diffusion models to model root trajectories in global space and the other joints' positions in the body-root space, which shares a similar idea with motion generation methods on HumanML3D. **Q3: A concurrent work RoHM for pose estimation.** **A3:** Thanks for your kind suggestion. We will include this paper in the discussion in the final version. **Q4: Clarification of spatial condition.** **A4:** Spatial conditions are coordinate points in global space, such as the origin (0,0,0). Our framework can accept multiple points together as the spatial condition, e.g., setting the left wrist to (1,0,0) and the right wrist to (2,0,0) at the same time.
**Q5: Sparsity of spatial condition.** **A5:** A single-person motion with N frames has shape (N, 22, 3) in global space in HumanML3D, where 22 is the number of joints and 3 is the xyz dimension. However, we may, for example, set only 2 points as the spatial condition according to the contact plan, out of the 22N joint positions in total. We want to generate coherent motions of shape (N, 22, 3) using spatial conditions such as these 2 global points. In such cases, spatial conditions are sparse compared to the entire motion sequence. **Q6: The reason control is desired for only a select few frames.** **A6:** For example, when we generate hand-shaking interactions, the contact plan output by the LLM may contain a contact pair of one person's right wrist and the other's right wrist for only 1s (e.g., 30 frames), and impose no constraints over other frames. In the frames without control, we want the motion to be coherent with the frames being controlled. **Q7: Qualitative comparison with InterGen.** **A7:** As our method does not explicitly learn interaction data distributions, it is not suitable to evaluate metrics like FID and Top-3 precision. We show a qualitative comparison with InterGen in Q1 of the general response and Figure 1 of the attached PDF: our method performs better on fine-grained distances in interactions; e.g., it is hard for InterGen to actually make two people hold hands (i.e., distance = 0). The hand-holding interaction in InterGen always leaves a small distance between the hands. **Q8: How the model generates the motion for all people in the inference process.** **A8:** We generate all people in a batch, e.g., using a tensor of shape (B, N, D) to represent all interacting people, where B is the batch size (number of people), N is the frame number, and D is the motion data dimension. The IK guidance is conducted between motions within a batch, so the IK optimization is back-propagated to each individual motion at the same time.
Thus, our network is utilized once per denoising step for interaction generation. **Q9: Post-processing step** **A9:** We do not use a post-processing step. Our IK module is applied at each denoising step in the diffusion process, not at the end of the inference process. **Q10: Details of IK guidance.** **A10:** No. IK guidance is only applied at the end of each denoising step, like a classifier-guidance operation. **Q11: Optimization steps in IK guidance.** **A11:** We use L-BFGS in IK guidance to optimize global joint positions. Like PhysDiff, we don't use this gradient to train network parameters. Instead, the updated motion data feeds into the next denoising step. IK guidance addresses the discrepancy between relative joint data and the need for global position optimization in each step. **Q12: Inter-X dataset.** **A12:** Thanks for your suggestion. We will include this paper in our final version. **Q13: More clarifications.** **A13:** L34: Previous methods could only generate interactions with a fixed number of people. Thus, we think they are not 'good' general interaction modeling methods, as they are not flexible enough to generate group motion with an arbitrary number of people. L38: Thanks for this question. We think a counter-example is more like 'hugging', where close interactions are difficult to describe only by distances between joint pairs: they also require mesh-penetration control to avoid unnatural interactions with penetrations. On the contrary, for talking and holding a meeting, we can use a constant distance to constrain two people's root joints (e.g., 2 meters) and also constrain their head orientations to be face-to-face to make such interactions look realistic. L108: Thanks for the suggestion. We will include the discussion of InterGen in our final version. Typos: Thanks for pointing out the typos. We will carefully revise our paper. L166: Yes, $x_T$ is random Gaussian noise; L187: Yes, we should add a comma after (FK).
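As an aside, the sparse spatial conditions discussed in A4/A5 above can be represented quite simply; here is a minimal NumPy sketch (the joint index and target coordinates are hypothetical, not taken from the paper):

```python
import numpy as np

N = 60                                  # number of frames
cond = np.zeros((N, 22, 3))             # target global xyz per joint per frame
mask = np.zeros((N, 22), dtype=bool)    # which entries are actually constrained

RIGHT_WRIST = 21                        # hypothetical joint index
cond[30:60, RIGHT_WRIST] = [1.0, 1.2, 0.0]   # pin the wrist for 30 frames
mask[30:60, RIGHT_WRIST] = True

# Only 30 of the 60 * 22 = 1320 joint positions are constrained
sparsity = mask.sum() / mask.size
```

The boolean mask is what makes the condition "sparse": the model is free everywhere except the few frame/joint pairs the contact plan pins down.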
--- Rebuttal 2: Title: Feedback Comment: I thank the authors for their responses and the rebuttal document. I have some additional comments/questions. #### **Comparison with OmniControl.** In Table 1, the authors do not specifically state how they deal with the randomly selected joints for comparison. Are these joints the same for both OmniControl and the proposed method for each of the text prompts used? Or are these joints randomly selected for each generation independently? Also, as pointed out by another reviewer, directly comparing the controllability of the method with other single-person methods (e.g., OmniControl) can make the paper stronger and more convincing. Furthermore, it seems that OmniControl’s code was already published around mid December 2023, so I would dare to say that considering this work concurrently is walking on a thin line. #### **Qualitative results.** I agree with other reviewers that the qualitative results present abrupt changes in velocity, especially when contact between two people is enforced. It seems that the guidance of the optimization step is “snapping” two joints together. This may show that the guidance step has an overly important role in the final results, which also makes the generated motions look less natural than current methods. #### **Clarification of method’s details.** L225-228. Consecutive sentences between these lines seem to directly contradict each other. Here it is stated that IK guidance is applied when training the ControlNet, but then it states that IK guidance eliminates the need for training Motion ControlNet. This does not make sense to me. #### **Correction about guidance.** Based on the authors' response to the questions related to IK guidance, I would say that the IK guidance is NOT like classifier guidance, as it does not use the gradients to guide the generation. As in PhysDiff, this method uses a direct modification of the pose during the denoising process.
Thus, this is indeed a type of guidance, but it is not like classifier guidance or done in “a classifier guidance [9] manner” (L204-205). I suggest the authors change the wording in the paper to make this clearer. #### **Writing.** L199: Using e.g. to refer to the single-person motion dataset used for training makes the wording confusing. Is the ControlNet presented in this work trained with HumanML3D only, or is it trained with more datasets similar to this one? Having said all of this, I am still a bit concerned that the method section is not clear at this current stage and may need important re-writing, and will slightly lower my rating. --- Rebuttal Comment 2.1: Comment: Thanks for your feedback! ### Comparison with OmniControl The 'random one/two/three' item in Table 1 means one/two/three joints randomly selected for each generation independently. OmniControl also adopts a similar strategy in the inference process. Yet it is hard to ensure that the joints are the same for both OmniControl and the proposed method for each of the text prompts used, as the joints are randomly sampled for each text prompt and OmniControl does not provide a configuration file recording the randomly sampled joints. For controlling specific joints, please refer to Table 6. For the concurrent-work statement, our paper was completed in mid November 2023 and then submitted to the CVPR 2024 conference (we can provide this submission record to ACs and Reviewers if needed). Our paper's code is independently developed, and we made it available to CVPR reviewers in late November 2023. Please consider this information when comparing our method with OmniControl. ### Qualitative results Thanks for watching our qualitative results. As we mentioned in the general response of the rebuttal, our qualitative results provided in the supp. mat. were not processed by a 1D Gaussian filter. Yet we later found that many methods commonly adopt it to promote the smoothness of their qualitative results, such as InterGen.
In Q4 of the general response of our rebuttal, we have shown that our motion has similar acceleration to InterGen's under the same post-processing as InterGen. ### L225-228 Thanks for the detailed review. IK guidance can be applied to two types of intermediate results: $\mu$ or $x_0$. IK guidance is applied when training the ControlNet if we use $\mu$. IK guidance eliminates the need for training Motion ControlNet if we use $x_0$. The detailed algorithm can be found in Algorithm 1 of our Appendix, which uses $\mu$. ### Correction about guidance We agree that IK guidance is NOT like classifier guidance, as it does not use the gradients to guide the generation. Yet it is like classifier guidance to some degree, in that it guides the denoised results at each denoising step; it is certainly dissimilar to classifier-free guidance. Besides, OmniControl uses standard classifier guidance. We will revise the description of IK guidance to make it more precise, such as 'IK guidance utilizes IK to guide the pose during the denoising process, similar to PhysDiff'. ### L199 We use 'e.g.' to say that our model could also be trained on other single-person data. In our interaction experiments, all results are generated by the model trained on HumanML3D. Yet it could be further improved by training on larger single-person datasets. We will revise this description in the final version to make it precise.
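To make the per-step guidance discussion above concrete, here is a toy sketch of nudging two controlled joints toward a target distance after each denoising step. Plain gradient descent stands in for the L-BFGS optimizer, and all quantities are illustrative, not the actual InterControl implementation:

```python
import numpy as np

def contact_loss(xa, xb, d_target=0.0):
    """Squared violation of a desired distance between two controlled joints."""
    return (np.linalg.norm(xa - xb) - d_target) ** 2

def guide(xa, xb, lr=0.1, iters=50, d_target=0.0):
    """Move two joint positions toward the target distance (stand-in for L-BFGS)."""
    for _ in range(iters):
        diff = xa - xb
        dist = np.linalg.norm(diff) + 1e-9
        grad = 2.0 * (dist - d_target) * diff / dist   # d(loss)/d(xa)
        xa = xa - lr * grad
        xb = xb + lr * grad    # symmetric update on the second character's joint
    return xa, xb

# After a denoising step, pull one wrist per character into contact (d -> 0);
# the updated positions would then feed into the next denoising step.
xa, xb = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 0.0])
xa, xb = guide(xa, xb)
```

With `d_target > 0` the same loss enforces avoidance-style constraints (keeping joints a fixed distance apart) rather than contact.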
Summary: This paper tackles the challenge of generating human interaction motions involving a flexible number of characters. To simplify the representation of these interactions, the authors propose a joint-pair contact/avoid representation. Given an interaction description, a large language model (LLM) generates motion instructions and identifies contact joint pairs. These outputs serve as inputs for the subsequent motion diffusion model. The Motion ControlNet is trained to produce motion sequences based on the motion instructions and to optimize the joints to align with the specified joint contact pairs. Strengths: 1. This paper addresses an important human interaction learning task that is not well studied due to the lack of multi-person interaction datasets, and it suggests a framework that leverages only a single-person dataset to produce reasonable multi-person interactions. 2. The joint-pair distance representation of human interactions is simple, and the IK guidance is shown to be effective. 3. The manuscript is well-written and easy to follow. Weaknesses: 1. It is not clear how the contact joint pairs for each frame are generated by the LLM. A joint jump usually appears when contact between two humans happens; what do these unsmooth motion artifacts result from? 2. I still observe severe human collisions in some generated human interactions. The joint-pair contact type produced by the LLM might not be fine-grained enough to produce natural and plausible interactions. The subsequent physics-simulator tracking step might mitigate the collision artifacts a bit, but better spatial control over body roots is still needed to avoid severe collisions during motion. 3. The iterative IK-guidance-based optimization happens at each diffusion step, and this might result in a less efficient framework. Might directly optimizing the noise as done in DNO [1], or a single-step post-optimization, also work?
[1] Optimizing Diffusion Noise Can Serve As Universal Motion Priors, CVPR 2024 Technical Quality: 3 Clarity: 3 Questions for Authors: This paper addresses an important task (human interaction generation) by leveraging only a single-person dataset. In addition to some of my questions presented in the weakness section, I have a few general questions here, and I am glad to hear some feedback or insights from the authors: 1. If we want to scale up to more human characters: * How is the efficiency at inference time of the proposed pipeline? * Would LLM planning still be effective and fine-grained enough for multi-person interactions? 2. It would also be insightful to see a comparison with InterGen, or fine-tuning on the InterHuman dataset, in the two-person interaction scenario. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the authors have addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the insightful review. We will revise our paper according to the constructive suggestions in our final version. Please refer to the General Response for the comparison with OmniControl and InterGen, and the explanation of penetration and unsmooth motion issues. **Q1: How are the contact joint pairs for each frame generated by the LLM?** **A1:** The format of contact joint pairs (contact plans) is in Table 9 of our Appendix. The prompt for generating contact plans with the LLM is shown in Table 7 of our Appendix, and the raw output of the LLM is in Table 8. **Q2: Comparison with DNO, and efficiency issues.** **A2:** Thanks for the insightful suggestion. Firstly, the concurrent work DNO is a good exploration of motion editing that shares a similar idea with our IK guidance. However, it also requires 300 or 500 steps for motion editing or refinement, which takes more than 3 minutes according to their paper (Sec. 6, page 6). By contrast, our method needs 80s for inference, while previous methods take longer to control motions following spatial conditions (GMD needs 110s and OmniControl needs 120s). Secondly, if inference speed is really important, we can use other speed-up techniques for diffusion models, such as DDIM and Consistency Models. For example, our framework could utilize the recent MotionLCM (https://dai-wenxun.github.io/MotionLCM-page/) to speed up inference to 30ms per denoising step and get realistic motions within 4 steps. Our main contribution, the zero-shot interaction generation ability, would not be affected. **Q3: Scale up to more characters.** **A3:** Our framework is able to perform motion generation in a batch to speed up multi-human motion generation, where all characters' motions are generated together. Thanks to the batch computation ability of GPUs, the inference time is almost the same as single-person motion generation.
The IK guidance optimization process adds little extra burden as the number of characters increases. In practice, 2-character inference takes about 80s and 3-character inference takes about 90s with a standard 1000-step DDPM inference. As we mentioned above, we could use MotionLCM to further speed up the inference process of multi-character interaction generation. **Q4: LLM planning for more characters** **A4:** We do not collect contact plans for more characters from the LLM for quantitative experiments. Yet, as we show in the supp. mat., the LLM works well in the three-person cases of fighting and holding hands when provided with meaningful prompts. Furthermore, we believe the abilities of LLMs will improve in the future to handle more complex interactions. Our current experiments illustrate that GPT-4 works well for two-person interaction contact plan generation, and it also works in some three-person interaction cases. Finally, the LLM is a tool to scale up contact plans in a batch. Yet our framework does not necessarily need an LLM to produce reliable contact plans: contact plans could also be provided by professionals such as artists. Instead of manually designing keyframes, writing contact plans could greatly reduce their effort to generate interaction motions in their workflow. --- Rebuttal Comment 1.1: Comment: I greatly appreciate the detailed response from the authors, and I have read the remaining reviews and the authors' responses as well. Many of my questions are addressed by the authors. Overall, the proposed zero-shot human interaction generation pipeline, with LLM-based contact planning together with IK-guided optimization, is shown to be effective, though it is still challenging to generate very realistic and plausible interaction motions. Also, I agree with the other two reviewers that the authors are encouraged to include further clarification on the differences and comparisons between InterControl, InterGen, and OmniControl.
Additionally, what would further complete this work is a comparison of the proposed InterControl with data-based methods trained on multi-person datasets, which would be insightful for the community. I would like to keep my original score as borderline accept after reading the authors' response and the other reviews. --- Reply to Comment 1.1.1: Comment: Thanks for your kind feedback. We sincerely appreciate your effort to review our paper.
Summary: The paper introduces InterControl, a method designed to address the task of controllable human motion generation and zero-shot human interaction generation. By leveraging the prior knowledge of LLMs, InterControl can generate human interactions involving any number of people in a zero-shot manner. Specifically, the authors utilize LLMs to convert textual descriptions of interactions into contact plans, transforming the task of multi-person interaction generation into single-person motion generation. The InterControl model, which is based on Motion ControlNet and IK Guidance, is then used to achieve controllable single-person motion generation. Experimental results demonstrate the superiority of InterControl over previous controllable motion generation methods and its ability to produce realistic human interactions. Strengths: - The proposed task (zero-shot human interaction generation) is interesting, important, challenging, and meaningful. - The related work section is great, providing an excellent summary of relevant research. - The proposed method makes sense and is logically sound. - The results are reasonable. I particularly appreciate the application section, where the character interactions in the simulation are executed very well. Weaknesses: - The qualitative results are not really good. a) The supplementary video shows some cases where some joints have abrupt velocity changes, which conflicts with L232: "IK guidance can adaptively modify velocities from the start to frame n." b) There are some cases of body penetration and unrealistic interactions in the three-people-fighting/two-people-fighting scenarios. - It might be beneficial to incorporate some visualization of the contact plan. According to Sec 3.1, contact should include both contact and avoid forms, described by distance d. However, in the supplementary demo, I only observe d=0 contact.
I would like to see some examples of avoidance and how non-zero distance contact plans guide motion generation. - Maybe more discussion is needed on the differences between OmniControl and InterControl, as they are similar methods (both InterControl and OmniControl use a ControlNet-like framework and classifier guidance). The authors may want to discuss whether using OmniControl as the controller yields better results in the task of zero-shot human interaction generation. - The paper didn't compare InterControl with data-based methods such as InterGen. The authors may want to demonstrate the superiority of InterControl as a zero-shot method, possibly by highlighting interactions that InterGen (a two-person interaction method) cannot generate. - It might be beneficial to include some qualitative comparisons between InterControl and state-of-the-art methods (e.g. OmniControl) on the task of single-person controllable motion generation to better demonstrate the superiority of InterControl. Technical Quality: 3 Clarity: 3 Questions for Authors: - What is the corresponding text for the supplementary material? - Can the LLM really provide a reliable contact plan? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Although the authors included a "Limitations" subsection in Sec. 5, from my perspective, they did not clearly state the limitations of the method. The authors may want to point out that the realism of the interactions generated by InterControl heavily depends on the LLM's correct interpretation of the text, which cannot be guaranteed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the insightful review. We will revise our paper according to the constructive suggestions in our final version. Please refer to the General Response for the comparison with OmniControl and InterGen, and the explanation of the penetration and unsmooth motion issues. **Q1: Examples of avoidance.** **A1**: Thanks for the kind suggestion. We do have examples of avoidance in our qualitative results, yet we did not annotate them. For example, the 11th second of the video 'two-people-dancing.mp4' (or Figure 1 of the attached PDF in the author rebuttal) shows an example of contact and avoidance at the same time: when two characters are holding hands, their other hands are kept away from each other by at least 2.4 meters. Such avoidance leads to stretching dance motions, which could not be achieved by joint contact alone. **Q2: Discussion of the differences between OmniControl and InterControl.** **A2**: First of all, we want to clarify that our work is concurrent with OmniControl, and we were not aware of that paper when we did our work. We have discussed the differences in technical contribution in Sec. A.4. Besides, our work shows notable improvement over OmniControl in non-root joint control on the HumanML3D dataset. We have included the zero-shot interaction generation results of OmniControl in the table of A2 in the general response. **Q3: Qualitative comparisons between InterControl and OmniControl.** **A3**: We have included a detailed quantitative comparison with OmniControl in Table 1. Besides, our main contribution is the ability of zero-shot interaction generation; the proposed single-person spatial control is an approach to achieve it. In Figure 2 of the attached PDF in the author rebuttal, we show a qualitative comparison between InterControl and OmniControl, where our method shows better hand joint control in hand-shaking. **Q4: Corresponding text for supp.
mat.** **A4**: For user study videos, the texts are in 'two_person_text_prompts.txt'. For visualization videos, the texts were written by ourselves to show in-the-wild results. Here are the specific texts: 'two-people-dancing.mp4': Two people are dancing, sometimes they stand in an open dance position: one hand joined together, while their other hands are extended outward. 'two-people-winning-gesture.mp4': Two people walk slowly, their arms raised high while holding each other's hands, displaying a triumphant gesture after a fighting contest. 'three-people-holding-hands.mp4': One person held the hands of two others, each with one hand, forming an arc with all three facing the same direction. 'three-people-fighting.mp4': Two people are fighting against a third person, using their wrists and feet to attack. The third person is also counterattacking. 'two-people-fighting.mp4': One person is fighting against another person, using his wrists and feet to attack. The second person is also counterattacking. **Q5: Can the LLM really provide a reliable contact plan?** **A5**: Thanks for this insightful review. We agree that the LLM is not guaranteed to provide a reliable contact plan in our framework. As our main motivation and contribution is to design a zero-shot interaction generation framework, we leverage the LLM to provide the necessary information for this goal. From our empirical results, we think an LLM (e.g., GPT-4 in our experiments) can provide a reasonable contact plan in most cases, a view shared by many previous works in robotics, such as [64] and [a]. Whether LLMs can be guaranteed to generate reliable contact plans is beyond our paper's scope. **Q6: Limitations.** **A6**: We agree that the LLM's correct interpretation of interaction descriptions cannot be guaranteed. As LLMs are not our paper's focus, our framework provides a pioneering exploration of zero-shot interaction generation, and we empirically find that the LLM works well in most cases.
As LLMs' knowledge and robustness improve in the future, we believe the contact plans generated by newer LLMs will be more reliable. [a] Song, Chan Hee, et al. "Llm-planner: Few-shot grounded planning for embodied agents with large language models." *in CVPR,* 2023. --- Rebuttal Comment 1.1: Comment: I greatly appreciate the authors' detailed response, which has addressed most of my concerns. In the response, the authors included comparisons between InterControl, InterGen, and OmniControl, provided examples of avoidance, and supplemented the text descriptions in the video. I am grateful for the authors' efforts in addressing and expanding on these issues. Although I suspect that the LLM may not be able to provide a reasonable contact plan "in most cases," and the qualitative results generated by InterControl are not perfect, I think that the novelty of the zero-shot interaction generation task and InterControl outweighs these shortcomings. After reviewing the supplementary materials, considering the other reviewers' opinions, and the authors' response, I am willing to change my score to borderline accept. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful reply. We sincerely appreciate your effort in reviewing our paper and considering our rebuttal.
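The contact-plan structure discussed in this thread (joint pairs in "contact" or "avoid" form, each described by a distance d over a frame interval, e.g. joined hands plus the other hands kept at least 2.4 m apart) can be sketched as below. This is a minimal illustrative schema of our own; the field names, joint labels, and frame values are assumptions, not the paper's actual plan format.

```python
from dataclasses import dataclass

@dataclass
class ContactPair:
    """One entry of an LLM-generated contact plan (illustrative schema, not the paper's)."""
    person_a: int      # index of the first character
    joint_a: str       # e.g. "right_wrist" (joint names assumed)
    person_b: int
    joint_b: str
    relation: str      # "contact" (distance <= d) or "avoid" (distance >= d)
    distance: float    # target distance d in meters
    start_frame: int
    end_frame: int

# Hypothetical plan for the dance example above: joined hands (d = 0)
# plus the other hands kept at least 2.4 m apart.
plan = [
    ContactPair(0, "right_wrist", 1, "left_wrist", "contact", 0.0, 30, 90),
    ContactPair(0, "left_wrist", 1, "right_wrist", "avoid", 2.4, 30, 90),
]

def satisfied(pair: ContactPair, dist: float) -> bool:
    """Check whether a measured joint distance fulfils one plan entry."""
    if pair.relation == "contact":
        return dist <= pair.distance
    return dist >= pair.distance

print(satisfied(plan[0], 0.0), satisfied(plan[1], 2.5))  # True True
```

Such a flat list of distance constraints is what makes the task reducible to controllable single-person generation: each entry only constrains one joint of one person at a time.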
Summary: ### Summary The paper "InterControl: Generating Human Motion Interactions by Controlling Every Joint" aims to generate interactions between multiple people based on text descriptions, with precise joint control. It leverages a pre-trained single-person motion diffusion model and extends it to multi-person scenarios using a large language model (LLM) to guide joint interactions. The methodology involves: - Using an LLM planner to map out interactions and distances between joints. - Combining PriorMDM with ControlNet, conditioned on LLM-extracted distances. - Optimizing interactions with L-BFGS to enhance performance. Evaluations are conducted using Text2Motion datasets and user studies, focusing on joint control and positional accuracy. The approach also assesses 100 interactions from the InterHuman dataset. ### Contributions 1. **Dynamic LLM Guidance**: Introduces LLMs to guide joint interactions dynamically, making the process scalable and suitable for large-scale data generation. 2. **Precise Joint Control**: Allows precise control over any joint, improving flexibility compared to methods requiring predefined control signals. 3. **Multi-Person Interaction**: Extends a single-person model to handle multi-person interactions, demonstrating scalability. 4. **Enhanced Visualization**: Suggests using human body models like SMPL or GHUM for better visualization and interpretability of interactions. 5. **Comprehensive Evaluation**: Provides thorough evaluation and comparison with previous models, addressing issues like interpenetration and alignment between control and text prompts. While the paper presents promising results and tackles the challenging task of generating realistic multi-person interactions, it also identifies areas for further improvement, such as better handling of interpenetration and improving alignment between spatial control and text conditions.
Strengths: - The use of large language models (LLMs) to dynamically guide joint interactions introduces a scalable and innovative approach to multi-person motion generation. - The method allows for precise control over any joint at any time, enhancing flexibility and improving upon previous methods that relied on predefined control signals. - The approach successfully extends a single-person motion model to generate realistic interactions among multiple people, demonstrating its scalability and applicability to complex scenarios. - The paper is well-written and easy to follow, with a clear presentation of previous works and a strong introduction that provides good context. - The simplicity of the IK guidance and LLM-generated contact points helps automate the generation process, which is beneficial for large-scale data generation pipelines. - The paper includes comprehensive evaluations and comparisons with existing models, showing that the results for single-joint control and multi-person interactions are indeed better in several aspects. - The integration of ControlNet and L-BFGS optimization with LLM planning is a novel contribution, pushing the boundaries of 3D human motion generation. Weaknesses: - The contributions of the paper seem limited, with similarities to existing works like OmniControl and GMD. - The generated motions often suffer from interpenetration, which reduces the realism of the interactions. - There are discrepancies between the spatial control and text conditions, suggesting that the alignment between them needs improvement. - It is unclear how ControlNet is finetuned, especially regarding the use of the HML3D dataset and the extraction of necessary control features. - A simpler baseline for generating each person's motion and optimizing inter-person distances is missing, which could highlight the necessity of additional modules.
- The qualitative results lack sufficient visualization, making it difficult to assess the plausibility of interactions from the provided videos. - The method focuses on multi-person motion generation but only evaluates single-person text-to-motion datasets, missing evaluations on available multi-person datasets. - The use of 3D joint locations without a human body surface makes it hard to judge motion plausibility and perceive contacts between people. - The presentation lacks clarity in some sections, particularly in explaining complex steps and figures, such as the Gaussian noise issue and Figure 2. - The approach relies on LLMs for joint distances, which may hallucinate content and lead to errors in generating plausible contact maps. - The work does not include suitable multi-person motion capture data during training, limiting the robustness of the interactions modeled. - The focus on joint-to-joint contacts overlooks other types of human contact, such as grabbing an arm, which are not modeled effectively. ___ Small notes - The teaser is misleading as it suggests conditioning on images for interaction generation. - The claim that TEMOS performs worse than MDM is not substantiated; TEMOS actually performs better in some cases. Technical Quality: 2 Clarity: 3 Questions for Authors: ### Questions and Suggestions for Improvement (importance sorted) **Evaluation and Results**: - Expand evaluations to include more comprehensive datasets, especially those involving multi-person interactions, and provide a detailed performance analysis. - Provide an analysis of the LLM's error rates in generating plausible contact maps and discuss how this impacts the model's performance. - Evaluate the model on available multi-person datasets and collect text descriptions for these datasets to provide a thorough assessment. **Methodological Improvements**: - Address the issue of interpenetration in generated motions by refining IK guidance and adding constraints to improve realism. 
- Elaborate on the process of fine-tuning ControlNet on the HML3D dataset and extracting the necessary control features. - Implement and compare a simpler baseline that generates each person's motion using existing methods and optimizes inter-person distances using L-BFGS. **Clarification of Contributions**: Clearly distinguish the paper's contributions from existing works like OmniControl [49] and GMD [26]. Highlight the novel aspects and how this work advances the field beyond these prior studies. **Visualization and Presentation**: - Use human body models like SMPL for better visualization of motions, facilitating better perception of contact and interaction. - Include mesh-based visualizations and use original AMASS skeletons for training to enhance interpretability and clarity of results. - Improve the clarity of the presentation, particularly in explaining complex steps and figures, such as the Gaussian noise issue and Figure 2. By addressing these points, the paper can provide clearer contributions, ultimately strengthening the overall quality and impact of the work. Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The limitations are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the detailed review. Please refer to the general response for more common questions. Q1: Similarity to OmniControl and GMD. A1: (1) Our method can control all joints, while GMD only controls the root joint. (2) We focus on zero-shot interaction generation and use controllable single-person motion generation as the approach to achieve this goal, while OmniControl only considers single-person motion. Our method shows better joint control ability in non-root joint control (refer to Table 1 of our paper, and the qualitative results). Besides, our work is concurrent with OmniControl, and we designed InterControl independently. Q2: Discrepancies between the spatial control and text conditions. A2: The meaning of 'discrepancies' is unclear to us. We would appreciate it if the reviewer could elaborate. Q3: How Motion ControlNet is finetuned. A3: We have included the training details of Motion ControlNet in Lines 193-199. The control features are global positions converted from the HumanML3D data format using forward kinematics. Q4: A simpler baseline for optimizing inter-person distances on single-person motion should be compared. A4: It is unclear how the 'inter-person distances' in the review are defined. If they are defined on the distance between any joints of different people, our method itself is similar to such a 'simple baseline'. If they are only defined on the root joint, GMD is such a baseline for spatial control, which has been included in the spatial control comparison quantitatively. Q5: Qualitative results lack sufficient visualization. A5: We have provided 3D-skeleton visualizations for user-study cases and in-the-wild cases covering both two-person and three-person scenarios. Furthermore, we provided physical animation results from our generated kinematics-based motions to better illustrate the execution of a hard interaction case: fighting. The effectiveness of our visualization is acknowledged by Reviewer#HwRy.
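The inter-person distances discussed in A4 (distances between any joints of different people, computed on global joint positions such as those obtained via forward kinematics in A3) can be sketched as follows. This is a minimal NumPy illustration with assumed array shapes, not the paper's implementation.

```python
import numpy as np

def joint_distance(motion_a, motion_b, joint_a, joint_b):
    """Per-frame distance between joint_a of person A and joint_b of person B.

    motion_*: (frames, joints, 3) arrays of global joint positions
    (shape assumed for illustration).
    """
    diff = motion_a[:, joint_a] - motion_b[:, joint_b]   # (frames, 3)
    return np.linalg.norm(diff, axis=-1)                 # (frames,)

def contact_loss(dist, target, relation="contact"):
    """Hinge penalty: zero when the constraint holds, linear otherwise."""
    if relation == "contact":                            # want dist <= target
        return np.maximum(dist - target, 0.0).mean()
    return np.maximum(target - dist, 0.0).mean()         # "avoid": want dist >= target

# Toy example: two static one-joint skeletons 1 m apart on the x-axis.
a = np.zeros((4, 1, 3))
b = np.zeros((4, 1, 3)); b[:, 0, 0] = 1.0
d = joint_distance(a, b, 0, 0)
print(contact_loss(d, 0.0, "contact"))  # 1.0: the joints are 1 m from touching
```

A "simple baseline" in the reviewer's sense would minimize such a penalty over independently generated motions; the rebuttal's point is that the method itself already optimizes this kind of any-joint objective.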
Q6: Missing training and evaluations on multi-person datasets. A6: As our method is a zero-shot interaction generation method, we only train it on single-person motion datasets, and it does not learn the data distribution of multi-person datasets. Thus, it is not suitable to evaluate FID or Top-3 precision on these benchmarks. The usage of multi-person motion capture data during training is beyond this paper's scope, and we leave it to future work. Q7: 3D joint locations without a body surface make it hard to judge motion plausibility and perceive contacts. A7: As the mainstream motion datasets adopt 3D joint locations as motion data, it is more direct to visualize motions with 3D skeletons. The SMPL visualization used by previous methods needs an additional step that utilizes the SMPLify method to convert 3D joint locations to SMPL meshes. However, such a conversion introduces errors, and the converted mesh will not be faithful to the original motion output. Furthermore, mainstream motion datasets such as HumanML3D commonly remove the hand joints from SMPL, leading to a 22-joint data format instead of the 24-joint format of the original SMPL. Therefore, many interactions can only be achieved using wrists instead of hands in current interaction results. The effectiveness of our 3D joint-location visualization is acknowledged by three other reviewers. Finally, we have shown that physical animation can be useful to improve the plausibility of both motion and contact. Our method has the potential to be further improved by leveraging surface optimization methods. Q8: More explanations on Gaussian noise and Figure 2. A8: We will carefully revise our paper in the final version. Q9: The LLM may hallucinate content and make errors in generating contact maps. A9: Thanks for this suggestion. We agree that the LLM could hallucinate content in our framework.
As our main motivation and contribution is to design a zero-shot interaction generation framework, we leverage the LLM to provide the necessary information for this goal. From our empirical results, we think an LLM (e.g., GPT-4 in our experiments) can provide a reasonable contact plan in most cases, a view shared by many previous works in robotics, such as [64] and [a]. Furthermore, we believe the contact plans generated by LLMs will become more reliable as LLMs improve. Whether LLMs can be guaranteed to generate reliable contact plans without hallucination is beyond our paper's scope. Q10: The focus on joint-to-joint contacts overlooks other types of human contact, such as grabbing an arm. A10: Thanks for this insightful suggestion. We agree that our joint-to-joint contacts cannot handle all kinds of human contact, such as grabbing or hugging. We have clearly discussed this limitation of the interaction definition in Lines 35-43. Furthermore, our method can be seamlessly extended to joint-to-bone contacts by sampling keypoints on the bone, which can easily be achieved by interpolation between two adjacent joints. Q11: The teaser could be misleading. A11: Thanks for the suggestion. We want to illustrate that our definition of interactions can result in meaningful human interactions in our daily life. As we clearly mention zero-shot motion interaction generation in the title and paper, we believe the teaser will not be misleading or hard to understand, which is agreed by other reviewers. Q12: That TEMOS performs worse than MDM is not substantiated. A12: We do not claim that TEMOS performs worse than MDM in our paper. We would appreciate it if the reviewer could check our paper again. Q13: Analysis of the LLM's error rates. A13: As LLMs are not our focus, we empirically show that the LLM works well for interaction generation. It is worth noting that we use an off-the-shelf LLM without any finetuning.
The effectiveness of LLMs has been demonstrated in many previous works, such as [64] and [a]. [a] Song, Chan Hee, et al. "Llm-planner: Few-shot grounded planning for embodied agents with large language models." in CVPR, 2023. --- Rebuttal Comment 1.1: Title: Feedback Comment: - ### Comparison with InterGen InterGen is a dataset-based method, so it may not be able to generalize to unseen interactions. Is this interaction within its scope and training set? Since you are not training on their dataset, it would be nice to provide some more details in this comparison. Your generalizability is clearly superior, but comparisons should be fair. - ### OmniControl Indeed your method performs slightly better than OmniControl. However, the IK guidance could result in such better interactions. The OmniControl results seem close to yours. Are you visualizing the results before or after the IK-L-BFGS optimization? What happens if you apply the same optimization to OmniControl? - ### Teaser I have read the rest of the reviews and haven't seen any agreement on the teaser's relevance/appropriateness. Showing images is not so scientific, since this is neither your contribution nor an input to the model. The inputs are text descriptions, and you're not guiding your interactions through poses. Hence, this is not even in the scope of this work. It could be a potential application after some retrieval method. Teaser images are normally used as a gist of a method and should avoid overstating things. - ### Ablating LLMs I manually checked your plans and they seem quite accurate in general. However, it would be nice if you added some extra analysis of failure cases or some (at least empirical) discussion of them. - ### 3D joint locations without a body surface makes it hard to judge motion plausibility and perceive contacts. I kindly disagree that joint locations are a common representation for human motion.
The largest database (AMASS), from which most works benefit, adopts SMPL rotations as a representation, which enables using a full body mesh. The adoption of joint positions and non-SMPL rotations by users of datasets such as HML3D, and the post-processing optimization they commonly perform, is a technique used by such text-to-motion works, but there are other works that predict SMPL rotations and meshes directly. TUCH (CVPR 2021) highlights the importance of the human body surface for self and person-to-person interactions. I would expect retraining MDM with SMPL rotations, as done in STMC (https://mathis.petrovich.fr/stmc/), and using such features to control the generated motions. ### Final Remarks The zero-shot manner that is used for the interactions is nice. The proposed LLM-based planning is clear and seems to work fairly well. - The surface of the human body is the appropriate way for this to be shown and achieved. - Apart from qualitative comparisons, are there any quantitative comparisons on the datasets proposed in InterGen? - I suspect that OmniControl with such IK guidance could perform similarly. Is that the case? --- Reply to Comment 1.1.1: Title: Response to the feedback of reviewer#VAgJ (2/2) Comment: ### Response to Final Remarks The surface of the human body could be beneficial, but it does not affect our main contribution. We think it will be better to try it as an individual paper in future work. Quantitative comparisons with InterGen would be unfair to our method, as our method is zero-shot while InterGen is fully supervised. IK guidance is one of our major differences from OmniControl. Comparing our method with IK guidance + OmniControl is like an ablation study of our method, which has been quantitatively compared in Table 4 of our Appendix. --- Rebuttal 2: Title: Response to the feedback of reviewer#VAgJ (1/2) Comment: Thanks for your kind feedback! We will address your concerns point by point.
### Comparison with InterGen As you mentioned, InterGen is a data-driven method while ours is a zero-shot method. Our definition of interaction and our training dataset are totally different from InterGen's. Thus, our method is not directly comparable with InterGen. For comparable methods like GMD, we have conducted extensive experiments in our paper to compare with them. At the reviewers' request, we qualitatively compare our generalization ability with InterGen on some distance-sensitive interaction cases in the rebuttal PDF. As we do not train our method on InterHuman, a comparison with InterGen within its data distribution would be unfair to us: a zero-shot method can hardly match the performance of fully-supervised methods, especially on generation tasks that require the model to learn the data distribution. By comparing with InterGen qualitatively on some distance-sensitive interactions, we show that generalization ability can be an advantage of our method over InterGen. Yet, quantitatively comparing our method with InterGen would be an unfair comparison. ### Comparison with OmniControl The major differences from OmniControl are (1) our IK guidance and (2) the non-root joint control ability in Motion ControlNet. As OmniControl adopts a gradient-based optimization method similar to classifier guidance, it requires more steps (slow inference speed) and leads to worse joint alignment results than ours. Our IK guidance draws on the intuition of IK, which is effective and fast for optimization in diffusion models, as the optimization of joint locations is second-order differentiable (unlike classifier guidance in image generation). On the contrary, OmniControl directly follows classifier guidance. If we added IK guidance to OmniControl, it would be very similar to our method itself, so conducting the comparison mentioned by the reviewer would be like comparing two versions of our own method.
Such results have been quantitatively compared on the single-person dataset in Table 4 of our Appendix (effectiveness of IK guidance: items 4-6 compared with item 1; effectiveness of Motion ControlNet: items 2-3 compared with item 1). ### Teaser By 'agreed by other reviewers', we meant that no other reviewer mentioned that our teaser is misleading. We agree with the reviewer that images are not our contribution/scope. We show images to illustrate that our distance-based interaction definition commonly exists in many scenarios of our daily life and is effective for representing a large portion of interactions. Such a definition can result in meaningful interactions, which can also be found in Internet images. We will adjust our teaser and consider removing these images, keeping only the interaction visualizations. ### Ablating LLMs Thanks for your effort in checking our LLM plans. As you mentioned, our LLM plans seem quite accurate for many simple interactions, such as shaking hands. Here we provide an empirical discussion of failure cases: (1) Very close whole-body interactions, such as hugging. It is difficult to describe joint distances in very close whole-body interactions, even for humans. Yet, such interactions are also hard for data-driven methods; e.g., InterGen also shows artifacts like penetration in hugging in their homepage demos (https://tr3e.github.io/intergen-page/). (2) Distant interactions, such as playing tennis. The distance between two players is unclear when they play tennis, yet they are actually interacting through the tennis ball. All we can do in this case is restrict the two players to the tennis court by constraining their root joints within some region via our IK guidance. ### 3D joint locations vs. mesh surfaces As HML3D is the current largest text-annotated mocap dataset, many previous methods follow its data format and setting, such as MDM, PriorMDM, T2M-GPT, MLD, OmniControl, etc.
Besides, the InterHuman dataset also adopts 3D joint locations instead of the original SMPL rotations (page 7 of the InterGen paper). We want to emphasize that STMC was not publicly available when we developed InterControl; actually, our work was done about two months earlier. We understand that the original SMPL representation could be beneficial for modeling human interactions. Yet, we think this is more a limitation of the HML3D dataset or of the text-to-motion base model MDM than of our method. Our framework is agnostic to the data format, and our main contribution is zero-shot interaction generation for the first time. Currently, our method is instantiated on the MDM model architecture and follows the HML3D data format. Yet, this framework and our main contribution could also be implemented on other base models with different data formats. We want to thank the reviewer for this constructive suggestion, but we think it is better to try it as an individual paper in our future work. --- Rebuttal Comment 2.1: Title: Response Comment: - Thanks for the clarification about the teaser. - Thanks for the explanation about InterGen. - The HML3D dataset provides the timestamps from the AMASS dataset, so it would be easy to extract them and use SMPL rotations in your setup, e.g., following this [repo](https://github.com/Mathux/AMASS-Annotation-Unifier), which has been adopted in other works such as Motion-X. Using body surfaces to study interactions is clearly a more reasonable and correct choice. Also, since you are trying to generate zero-shot interactions, it would be very meaningful to retrain MDM using rotations from AMASS and Motion-X and use this as a frozen copy. - The limitations of LLM plans should be discussed and included in the final paper. - IK guidance is a common trick to improve results, and it would be nice, if it is not added to OmniControl, to see your output without it. Also, you referred to slower inference. How does this compare with OmniControl, e.g.,
their guidance vs. your IK guidance? --- Rebuttal 3: Comment: Thanks for your feedback! ### Revise paper We will add more clarification of the comparison with InterGen and the failure cases of LLM plans in our final version. ### About IK Guidance The quantitative results without IK guidance can also be found in Table 4 of our Appendix. Qualitatively, we find that Motion ControlNet alone (without IK guidance) can lead to good motion generation quality (e.g., good FID). Yet, the joint alignment to the expected location is not good enough, which is vital for our interaction generation process. For example, two people's hands cannot reach exactly the same location when they shake hands. Thus, IK guidance is one of our major contributions in controllable motion generation. Besides, the inference speed comparison with OmniControl has also been discussed in lines 731-739 of our appendix. ### Mesh representations We agree that 'using body surfaces to study interactions is a clearly more reasonable' choice, especially for interaction modeling. Yet, rethinking the data format of a widely adopted motion generation dataset seems to be beyond the scope of our paper. The main contribution of this paper is also not the base model (i.e., how to learn a distribution on a specific data format). As we mentioned in an earlier response, many previous text-to-motion base models such as MDM, PriorMDM, T2M-GPT, and MLD adopt the same data representation as HML3D, indicating that the data format proposed in HML3D is widely accepted by this community. As our proposed network is a controllable motion generation network that requires such a base model as an initialization, training a new base model seems to be beyond this paper's scope. Besides, InterGen also adopts joint locations instead of the original SMPL rotation format or meshes. It would be better to rethink the data format of HML3D and InterGen's dataset in an individual paper. We will add discussions of the TUCH and STMC papers in our final version.
We will also add a discussion of motion data representations in the future work section to provide valuable information to this community. We think it is better to leave the rethinking of motion data formats to future work as an individual paper, as our current paper already contains a great deal of information about the zero-shot interaction generation framework (24 pages without the NeurIPS checklist). Please raise further questions if you still have concerns about our explanations. Please also consider raising the rating if our explanations resolve your concerns. Thanks again for your effort in reviewing our paper!
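Since IK guidance recurs throughout this thread, the underlying idea of steering a joint toward a target location by L-BFGS minimization of a differentiable loss can be sketched as below. This is a toy single-joint illustration under our own assumptions (quadratic target and smoothness terms, hypothetical `guide` function), not the paper's actual guidance procedure.

```python
import numpy as np
from scipy.optimize import minimize

def guide(traj, target, frame_n, w_smooth=0.1):
    """Nudge a (frames, 3) joint trajectory so that frame_n reaches `target`,
    with a smoothness term so earlier frames adapt gradually (toy sketch)."""
    shape = traj.shape

    def loss(x):
        x = x.reshape(shape)
        hit = np.sum((x[frame_n] - target) ** 2)      # reach the target at frame n
        smooth = np.sum(np.diff(x, axis=0) ** 2)      # penalize abrupt velocity changes
        return hit + w_smooth * smooth

    res = minimize(loss, traj.ravel(), method="L-BFGS-B")
    return res.x.reshape(shape)

traj = np.zeros((10, 3))                  # joint resting at the origin
target = np.array([0.0, 0.0, 1.0])        # desired contact location at frame 9
out = guide(traj, target, frame_n=9)
print(np.linalg.norm(out[9] - target))    # small: the joint reaches the target
```

Because the objective is differentiable in the joint locations, a quasi-Newton optimizer like L-BFGS converges in few steps, which matches the rebuttal's claim that this style of guidance is faster than classifier-guidance-style gradient steps.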
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their effort on our paper and for the insightful reviews that help us improve it. We have carefully read all reviews and address common concerns point by point here. **Q1@VAgJ, HwRy and 7GL6**: Comparison with InterGen. **A1**: Firstly, our method is fundamentally different from InterGen. As a zero-shot multi-person motion generation method, InterControl does not learn data distributions from multi-person interaction datasets. This approach offers a significant advantage: the ability to generate interactions for groups of three or more people, as demonstrated in our qualitative results. In contrast, InterGen is inherently limited to generating interactions between only two people. Furthermore, InterControl exhibits greater sensitivity to specific distances, accurately representing scenarios such as physical contact (distance = 0) or maintaining a particular separation (e.g., distance ≥ 2 m). InterGen, however, often struggles to ensure desired distances, particularly in close-contact scenarios like hand-holding. A qualitative comparison between InterControl and InterGen can be found in Figure 1 of the accompanying PDF. **Q2@VAgJ and HwRy**: Comparison with OmniControl. **A2**: Firstly, OmniControl does not propose a framework for zero-shot interaction generation, so our focus is fundamentally different. We were unaware of OmniControl when we chose controllable single-person motion generation to achieve the goal of zero-shot interaction generation. A detailed comparison with OmniControl can be found in Sec. A.4 of the Appendix. Our major advantage over OmniControl in single-person motion control is the superior performance on non-root joint control. Secondly, we have included a quantitative comparison with OmniControl on single-person motion evaluation in Table 1 of our paper.
To address the concern of HwRy, we also include a quantitative comparison with OmniControl on zero-shot interaction generation using our collected contact plans in the following table. OmniControl exhibits poorer joint control performance on the contact joint pairs. Thirdly, a qualitative comparison with OmniControl can be found in Figure 2 of the PDF. It also demonstrates cases where the joint alignment in contact joints (e.g., distance = 0) is not as precise as ours.

| Method | Traj. err. (20 cm) ↓ | Loc. err. (20 cm) ↓ | Avg. err. (m) ↓ |
| --- | --- | --- | --- |
| PriorMDM [51] | 0.6931 | 0.3487 | 0.6723 |
| OmniControl [65] | 0.0322 | 0.0029 | 0.0194 |
| Ours | 0.0082 | 0.0005 | 0.0084 |

**Q3@VAgJ, HwRy and 7GL6**: Interpenetration (body penetration, human collision) issues. **A3**: As our method is a kinematics-based motion generation approach, it primarily focuses on semantic alignment with language rather than local penetration or artifacts, which is consistent with MDM and other kinematics-based methods. The main evaluation metrics for single-person motion generation are FID and Top-3 classification precision, which measure the ability to learn the data distribution. Consequently, local artifacts are not the primary concern of kinematics-based motion generation methods, and eliminating such artifacts is challenging for these approaches. For instance, InterGen also exhibits body penetrations (e.g., "Two people embrace each other" at https://tr3e.github.io/intergen-page/). In a fair comparison with other zero-shot methods, e.g., PriorMDM, we demonstrate superior results in body penetration avoidance. Our IK guidance ensures that any two characters' torso joints maintain a minimum distance of one meter. PriorMDM displays significant torso collisions in its qualitative results (refer to Figure 3 in our paper), while our method shows only minor collisions of limbs.
Moreover, if we need to eliminate penetration issues in specific applications, the physics-based animation in a simulator presented in our application section (Lines 334-342 and Figure 1(c)) is an effective solution, because the simulator inherently prevents collisions. Our qualitative results on physical animation demonstrate that training-free imitation learning methods [40] execute our generated interactions effectively, a point acknowledged by Reviewer#HwRy. **Q4@HwRy and 7GL6**: Unsmooth motion (abrupt velocity changes) in qualitative results. **A4**: Thank you for the detailed review. We set the FPS to 15 in the visualization videos for qualitative results, while the common practice is 30. This may misleadingly suggest that our generated motions are not smooth enough. Additionally, some cases in the visualization videos use more than 5 contact pairs and short intervals between contact and separation, potentially leading to higher velocities in these intervals. As explained in our paper, IK guidance can adaptively modify velocities from the start to frame n, so the generated motion is smoother when n is larger. However, some cases in the visualization use a very small n, which could easily be misinterpreted. We sincerely appreciate the insightful review and will carefully revise the visualizations in our final version. Furthermore, we have observed that many motion generation methods (e.g., InterGen) use a 1D Gaussian filter to smooth the generated motion without affecting the semantics. In the following table, we present quantitative results on acceleration in generated motions where both methods adopt the same post-processing. Our acceleration is similar to InterGen's when utilizing the standard 1D Gaussian filter as post-processing, demonstrating that our velocity changes are comparable to those of existing methods.
Additionally, we provide an acceleration sequence for two people fighting to qualitatively compare the acceleration between our generated interaction and InterGen's. This figure can be found in Figure 3 of the attached PDF. It further illustrates that our acceleration is comparable to InterGen's.

| Method | Acceleration (m/frame^2) |
| --- | --- |
| InterGen [36] | 0.0046 |
| Ours | 0.0042 |

Pdf: /pdf/6b77c07d213e687540bf07944bf76bf14acdfdd6.pdf
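For readers who want to see what this post-processing looks like concretely, the 1D Gaussian smoothing and the acceleration metric described above can be sketched in a few lines of NumPy. This is a minimal illustration under our own assumptions (the toy trajectory, `sigma`, and function names are hypothetical, not the authors' implementation):

```python
import numpy as np

def gaussian_smooth(signal, sigma):
    """Smooth a 1D signal with a truncated Gaussian kernel (reflect padding)."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    padded = np.pad(signal, radius, mode="reflect")
    return np.convolve(padded, kernel, mode="valid")

def mean_acceleration(joints):
    """Mean magnitude of the second finite difference, in m/frame^2,
    for joint positions of shape (n_frames, n_joints, 3)."""
    accel = np.diff(joints, n=2, axis=0)
    return np.linalg.norm(accel, axis=-1).mean()

# toy example: one joint moving along x with measurement noise
rng = np.random.default_rng(0)
n_frames = 60
traj = np.zeros((n_frames, 1, 3))
traj[:, 0, 0] = 0.1 * np.arange(n_frames) + 0.05 * rng.standard_normal(n_frames)

smoothed = traj.copy()
smoothed[:, 0, 0] = gaussian_smooth(traj[:, 0, 0], sigma=2.0)

raw_acc = mean_acceleration(traj)         # jerky due to noise
smooth_acc = mean_acceleration(smoothed)  # lower after smoothing
```

The filter is applied independently per joint coordinate along the time axis, so it suppresses frame-to-frame jitter (and hence the reported acceleration) without altering the overall semantics of the motion.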
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Disentangling Interpretable Factors with Supervised Independent Subspace Principal Component Analysis
Accept (poster)
Summary: Here the authors propose an extension of PCA that decomposes datasets into multiple independent subspaces that are encouraged to reflect provided covariates of interest. To enforce these dependencies, the authors leverage the Hilbert-Schmidt Independence Criterion (HSIC). Strengths: In short, my views of the manuscript are generally positive. Some specific thoughts below: * **Clarity**: I found the manuscript well written and easy to follow. Well done! * **Motivation**: A recent line of work [1,2] has observed that disentangling separate sources of variation in case-control datasets (e.g. in scRNA-seq studies [3-4]) can assist with data exploration tasks. This work extends similar ideas to settings where no explicit case versus control datasets exist, but samples can still be associated with different covariates of interest. I can imagine this work will similarly prove useful in a number of data exploration tasks. * **Novelty**: To my knowledge, the authors proposed method is indeed novel, and seems like a sensible extension of the previous supervised PCA method and recent works on supervised disentanglement (e.g. [1-2]). [1] Abid et al., "Contrastive principal component analysis" [2] Abid et al., "Contrastive variational autoencoder enhances salient features" [3] Weinberger et al., "Isolating salient variations of interest in single-cell data with contrastiveVI" [4] Jones et al., "Contrastive latent variable modeling with application to case-control sequencing experiments" Weaknesses: While my views of the manuscript are generally positive, I do have a couple of minor concerns. If the authors are able to address my concerns I would be happy to raise my score. * **Impact of $\lambda$**: The authors' method attempts to find a balance between encouraging dependence between subspaces and their corresponding covariates while also minimizing the dependence between individual subspaces, and this balance is tuned via $\lambda$ in Equation (1). 
I imagine this parameter could have a significant impact on the embeddings/loadings produced by sisPCA, though this isn't explored in the manuscript. It would be great if the authors could perform additional experiments exploring these changes. For example, do the GO results presented in Figure 5 vary for different values of $\lambda$? How different do the UMAPs look for the scRNA-seq data for different values of $\lambda$? * **Automatic selection of $\lambda$**: Related to my previous point, the authors briefly mention that $\lambda$ could be selected using a procedure similar to [1]. It would be great to see an example of this in practice: forcing the user to select $\lambda$ manually seems like it could present a big obstacle in terms of usability, so I think it would be useful to see how the automated method performs. * **Re: connections with self-supervised learning**: In the abstract the authors note "We elucidate the mathematical connections of sisPCA with self-supervised learning and regularized linear regression". While I was able to locate the regularized linear regression results in Appendix B.1, I couldn't find any additional mathematical details on the self-supervision connection (the only other mention of self-supervision in the main text was in line 83 in the related works section). Could the authors expand upon this connection? (Please let me know if I missed anything here). Minor points (these did not affect my score): * It appears that the authors may have used \citet accidentally instead of \citep throughout the manuscript. * Figure quality was relatively low-resolution when I printed out the manuscript; perhaps the authors could try increasing the DPI? * In line 91 the authors mention "contrastive learning"; it may be worth adding explicit citations to e.g. Abid et al. [1] to clarify for the reader that "contrastive learning" here isn't referring to contrastive in the sense of e.g. SimCLR.
* I couldn't find any details on how the real-world biological datasets were preprocessed (e.g. I assume library size normalization was applied to the scRNA-seq data?) [1] Abid et al., "Contrastive principal component analysis" Technical Quality: 3 Clarity: 3 Questions for Authors: Did the authors try applying their method with different values of $\lambda$ to the real-world datasets? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The greatest limitations I see are (1) the method relying on linear transformations (as opposed to e.g. deep neural networks) and (2) potential difficulties in setting $\lambda$. For (1) the authors specifically note that they are intentionally trading off expressivity for increased interpretability Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## **Response to Reviewer debZ** Thank you for your positive feedback and insightful comments! We appreciate your thorough review and would very much like to address any remaining concerns. ### **W1. Impact of $\lambda$ selection and an auto-selection pipeline** Thank you for the suggestion. We conducted additional experiments varying $\lambda$ for the scRNA-seq dataset over the values [0, 0.1, 0.3, 1, 3, 10, 30, 100]. On top of that, we implemented a preliminary pipeline for automatic $\lambda$ selection that aggregates the resulting subspaces using clustering and outputs aggregated results at different resolutions, as implemented in Contrastive PCA. However, due to time constraints, we have yet to thoroughly test some design choices, such as the representation distance, which appears to be important in subspace clustering. To speed up training, we also introduced a mini-batch version of Alg 1 (Appendix B.2), since the learnable projection matrix *U* is of size (n_feature, n_dim), independent of n_sample. We plan to implement additional features such as parallel training and better initialization search in the future. Despite these limitations, we confirmed the following results: #### **Impact on subspace similarity** Subspaces learned with similar $\lambda$ values are generally more similar, as measured by the Grassmann distance between representations, indicating gradual and continuous changes as a function of $\lambda$. The infection subspaces for $\lambda$ = [0.1, 0.3, 1] cluster together, as do the time subspaces for $\lambda$ = [0, 0.1], while the other subspaces appear more detached. #### **Impact on gene selection and GO analysis** We quantified overlaps of the top-100 gene lists for the infection subspace (ranked by absolute PC1 weights) across different $\lambda$ values. Results for $\lambda$ = [0, 0.1, 0.3, 3, 10] are highly consistent, sharing more than 90 of the top 100 genes.
The result also suggests that sPCA ($\lambda$ = 0) is sufficient for the GO analysis in Figure 5 ($\lambda$ = 10). We examined sPCA representations and confirmed that the confounding effects from the temporal subspace (as visualized in Figure 9b and the new Fig S1c in **M2**) are mainly concentrated in PC2, which is not used here for feature selection. Notably, for reasons still under investigation, $\lambda$ = 1 is an outlier, sharing only 68 of the top 100 genes with sPCA. A possible explanation is that Alg 1 (Appendix B.2) on large datasets may experience numerical issues with PyTorch's SVD solver. We will add visualizations of these results in the final version. ### **W2. Connection with self-supervised learning** We apologize for any ambiguity. Our use of "self-supervised learning" refers to the connection to the auto-encoder (Appendix B.1), where the target variables (features from the data itself) are used to disentangle the data. We chose the term mainly to reflect the difference from supervised learning: here our goal isn't perfect target prediction, as the labels are already known. Instead, we aim to better explain the data by reweighting the self-reconstruction loss with supervision guidance (as reflected in line 401). We recognize the potential confusion and will clarify the term in the final version. The connection to regularized linear regression is discussed in Appendix B.3. ### **Minor points** Thank you for pointing out these problems. - We will fix the citation issues and will increase the DPI of all figures to improve resolution. - We will provide more details on the preprocessing of the single-cell infection data. The data, preprocessed by the original authors, was subject to library size normalization followed by a log1p transformation. A background correction step was also applied before normalization, in which the mean expression in empty wells was subtracted.
This makes the raw counts technically not integers, but the negative binomial likelihood in hsVAE-sc (**see M2**) seems to work just as well. --- Rebuttal Comment 1.1: Comment: Dear authors, Thank you for your thoughtful responses! I've accordingly raised my score. Based on the rebuttal I have a few remaining minor comments: * I would encourage the authors to avoid the name "sVAE" when describing the supervised VAE baseline, as this name has previously been used for a different class of models [1,2] * Two recent works [3,4] have investigated using HSIC-based penalties with VAEs for similar supervised disentanglement tasks; for completeness it would be great if the authors could include citations to these works * Please do include your final results re: the automatic selection procedure for $\lambda$, as I believe this will make a big impact in terms of usability of the method. [1]: Lachapelle et al. "Disentanglement via mechanism sparsity regularization: A new principle for nonlinear ICA" [2]: Lopez et al. "Learning Causal Representations of Single Cells via Sparse Mechanism Shift Modeling" [3]: Tu et al. "A Supervised Contrastive Framework for Learning Disentangled Representations of Cell Perturbation Data" [4]: Qiu et al. "Isolating structured salient variations in single-cell transcriptomic data with StrastiveVI" --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you for your positive feedback. We very much appreciate your recognition and all your helpful suggestions for improving our paper. We will include the automatic selection procedure as part of the package release. To further enhance usability, we will also release the re-implemented HCV (hsVAE-sc) models for single-cell data and provide corresponding tutorials on example datasets and general documentation.
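As an aside for readers, the Grassmann distance used above to compare learned subspaces can be computed from principal angles. The sketch below is our own generic implementation, not the authors' code:

```python
import numpy as np

def grassmann_distance(A, B):
    """Grassmann distance between the column spaces of A and B.

    Orthonormalize both bases, take the singular values of Qa^T Qb
    (the cosines of the principal angles), and return the 2-norm of
    the angle vector: d = sqrt(sum_i theta_i^2).
    """
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    cosines = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    theta = np.arccos(np.clip(cosines, -1.0, 1.0))
    return float(np.sqrt((theta ** 2).sum()))

# sanity checks: identical planes are at distance 0,
# orthogonal planes in R^4 are at distance sqrt(2) * pi/2
e = np.eye(4)
d_same = grassmann_distance(e[:, :2], e[:, :2])
d_orth = grassmann_distance(e[:, :2], e[:, 2:])
```

Because it depends only on the spanned subspaces, this distance is invariant to the particular basis or scaling of the learned projection, which makes it a natural way to compare subspaces across different $\lambda$ values.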
Summary: This paper proposed a linear dimensionality reduction and subspace extraction method based on the Hilbert-Schmidt Independence Criterion (HSIC) and Supervised PCA. Specifically, several interpretable subspaces, which are independent from each other, are disentangled from the data observations, and the learned subspaces are enforced to be maximally dependent on the supervision variables. The proposed approach is shown to be effective on both synthetic data and two real datasets. Strengths: 1. A new method for subspace learning is proposed using supervised PCA and dependence maximization and minimization with HSIC for extracting independent latent low-dimensional subspaces. The framework is corroborated as effective on linearly mixed simulated data, and the learned representations are more interpretable than those of supervised PCA. 2. The problem formulation and optimization are clearly derived and delivered. The overall manuscript is written in an organized way and the notations are well-defined and relatively easy to follow. Weaknesses: 1. It is not clear if the proposed method is guaranteed to extract the latent subspaces and/or under what conditions/assumptions the method would work or fail. For example, in the synthetic experiment, the mixing matrix is uniformly drawn from [0, 1]. Does it still work if the matrix is changed to Gaussian? Overall, the theoretical analysis is somewhat lacking, e.g., identifiability analysis for linearly mixed subspaces. 2. Baselines seem missing in the real data applications. Only PCA and supervised PCA are considered here. The authors are encouraged to include more state-of-the-art methods to showcase the superiority of the proposed method, e.g., how does PCA work under such a setting? Are other HSIC-based nonlinear methods, e.g., [1], capable of extracting independent subspaces with interpretability? [1] Lopez, Romain, Jeffrey Regier, Michael I. Jordan, and Nir Yosef.
"Information constraints on auto-encoding variational bayes." Advances in Neural Information Processing Systems 31 (2018). Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In Sec. 3.2, the linear kernel is applied to the target variables. Does it make a difference if a nonlinear kernel function is used? 2. What are A and B in Fig. 2? Are they the target variables? In the same figure, why does it maximize dependence (HSIC) with the components of a subspace? I thought the only two objectives are 1) maximizing dependence between the subspace and its corresponding target variable; 2) minimizing the dependence between the subspaces. How is this maximization reflected in Eq. (2)? 3. What does balanced/unbalanced supervision mean mathematically in Conjecture 3.1? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors are encouraged to discuss the limitations of the work in the main paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## **Response to Reviewer QSK9** Thank you for your thoughtful review of our work. We address your main points below: ## **Weaknesses** ### **W1: Theoretical analysis and subspace recovery** We acknowledge the limitations in our theoretical analysis. However, sisPCA makes minimal assumptions about the data, and its objective function provides straightforward insights into the model's behavior and optimization landscape (Conjecture 3.1 and Appendix D). #### **Regarding the simulation example** 1. We confirmed that PCA-based models yield similar results with Gaussian-drawn mixing matrices compared to the uniform case. This is because the mixing matrix was normalized to ensure equal contribution from each subspace. 2. As discussed in Appendix D (lines 449-453), subspace scale and supervision strength are the major factors influencing the optimization landscape. For example, sisPCA will prioritize S2's structure and ignore the other subspaces if we scale up the S2 supervision (X, Y) enough. #### **General subspace recoverability depends on supervision quality** 1. In the most intertwined case of identical supervision for two subspaces, only one can be recovered; the other will collapse to rank zero (Figure 6, Appendix D). Here, linearity ensures that the same information is not split between subspaces: it is an all-or-nothing scenario. 2. When both supervised and unsupervised subspaces are present, their dynamics are complicated by the dual role of HSIC supervision indicated in Eq.4, Appendix B.3. In particular, the supervised subspace will also try to expand along the direction that maximizes the (unsupervised) variance, potentially leading to identifiability or multi-optima issues (Section 3.3, lines 190-193, Figure 7). We reason, though, that the same information will again concentrate in one subspace due to linearity.
### **W2: Comparison with state-of-the-art baselines** We appreciate your suggestion to include more state-of-the-art methods in our comparison. In response: 1. We've added comparisons with non-linear VAE-based models (**M2**), focusing on the real single-cell data where the practical need is to learn genes responsible for malaria infection defense (a feature selection task). 2. Our analysis shows that while VAE-based models can extract independent subspaces, they lack a straightforward method for identifying genes upon which each subspace was constructed (**M2.2**). 3. Regarding representation quality, we've confirmed that linear models like sPCA serve as strong baselines, both qualitatively and quantitatively (**see Fig S1, Table S2 in M2.3**). Specifically, quality of the linear infection subspace often surpasses its non-linear counterpart. ## **General questions** ### **Q1: Effect of non-linear kernels for target variables** Yes, the choice of target kernel does influence sisPCA's results, primarily affecting the rank of the learned subspace. This is because sPCA's subspace dimensions are determined by supervision kernel rank (Appendix B.3, line 406-408), and sisPCA shows similar behavior. Specifically, for a linear kernel on *K* continuous features, the effective dimension of the corresponding sisPCA subspace is the rank of the sum of a rank-K kernel and a rank-D disentanglement kernel, where *D* depends on other subspace dimensions (line 146-147). A non-linear target kernel can effectively expand the learned subspace while maintaining interpretability, albeit losing the connection with regularized linear regression (Appendix B.3). ### **Q2: Clarification on Fig. 2 and maximization objectives** We apologize for the confusion. To clarify: - A and B represent different target variables used for supervision. 
- The maximization arrow within each subspace represents a PCA-like objective of variance maximization, which is equivalent to maximizing the HSIC towards an identity kernel (Appendix B.1, line 397-398). - In supervised PCAs, the variance maximization objective comes from the HSIC to the target (first component of Eq.2, see Eq.4 in Appendix B.3). Upon decomposition, the arrows between the subspace and A/B indeed represent minimizing the prediction error (first component of Eq.4), and the arrows within each subspace represent maximizing the variance (second component of Eq.4). We will revise the figure to make the notation consistent. ### **Q3: Definition of balanced/unbalanced supervision** Balanced/unbalanced supervision refers to the relative ratio of scale/norm between target kernels (discussed in Appendix D, line 449-473). It can alternatively be defined as the ratio of the loss gradient with respect to each supervision kernel. Intuitively speaking, unbalanced scenarios occur when one kernel is scaled up, favoring the corresponding subspace and creating a single global optimum (Figure 6 in Appendix D). --- Rebuttal Comment 1.1: Comment: Thank you for the clarification and for including the non-linear VAE-based baselines, which should make this work more solid. I agree that linear methods have more benefits in terms of interpretability and simplicity. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you for your feedback. We appreciate your recognition and the suggestion to add the non-linear baselines, which has helped improve the quality of our work. We will include the relevant benchmark results in the final version.
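The HSIC quantity discussed throughout this exchange has a simple closed-form empirical estimate. Below is a sketch of the standard biased estimator with linear kernels (generic code with a hypothetical toy dataset, not the authors' implementation):

```python
import numpy as np

def hsic(K, L):
    """Biased empirical HSIC: tr(K H L H) / (n - 1)^2,
    where H = I - (1/n) 11^T centers the kernel matrices."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return float(np.trace(K @ H @ L @ H)) / (n - 1) ** 2

rng = np.random.default_rng(0)
n = 200
X = rng.standard_normal((n, 2))
# a linearly dependent target vs. an independent one
Y_dep = X @ np.array([[1.0, 0.5], [0.0, 1.0]]) + 0.1 * rng.standard_normal((n, 2))
Y_ind = rng.standard_normal((n, 2))

K = X @ X.T  # linear kernel on the representation
hsic_dep = hsic(K, Y_dep @ Y_dep.T)  # dependent target: large
hsic_ind = hsic(K, Y_ind @ Y_ind.T)  # independent target: near zero
```

With linear kernels this estimator reduces to the squared Frobenius norm of the empirical cross-covariance (up to scaling), which is why maximizing it toward a target encourages dependence and minimizing it between subspaces encourages linear independence.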
Summary: The work proposes a new method that determines the factors of variation bound to different labels in a supervised way, obtaining a method akin to supervised PCA but with several independent subspaces, through the Hilbert-Schmidt independence criterion (which is employed to maximize both the independence between different subspaces and the dependence between the data subspaces and the labels). The paper focuses on a linear-kernel application of HSIC, but other kernels are also considered. The method is then applied to simulated data and to DNA/RNA sequencing data. Strengths: The theoretical connections are interesting. The paper is technically sound, with a well-derived mathematical background. The results of the genetic sequence analysis are meaningful and interesting. Weaknesses: The novelty is limited; it appears to be an extension of previously described methods. The comparison is provided only with PCA or sPCA, which are not by any means SOTA methods. The explanation of the simulated data is not clear: what should the expected result be? Are you learning the dimensions of the subspaces? A better description of the experiments is needed. How do you choose the hyperparameters (for instance, the dimension of the subspaces is 10 in both sections 4.2 and 4.3, why?). In section 4.2, the use of HSIC as a quality metric is not fair, since it's already included in your loss. The significance of the paper would be increased if the authors prove that the method is useful for the analysis of other types of real-world datasets. The discussion is too short and not too clear. I acknowledge the limitations of space, but the discussion is one of the most important sections and should be properly done (even by moving some mathematical details to SI). Technical Quality: 3 Clarity: 2 Questions for Authors: Could you please provide a comparison of your method with other SOTA methods? Would you prove that the method is useful for other kinds of data types?
What are the hyperparameters of the method? How do you set them? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: I'm not satisfied with the way the limitations are addressed. One should explain them clearly and provide examples in which the method does not work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## **Responses to Reviewer UeQu** Thank you for your constructive feedback on the strengths and weaknesses of our work. We address your main concerns below: ## **Weaknesses** ### **W1: Limited novelty** While our method extends PCA, it offers significant contributions by uniquely combining supervision and disentanglement in a linear, interpretable framework valuable for feature selection. This addresses an important gap in current dimensionality reduction techniques, especially in an era where linearity and interpretability are often overlooked (**See M1**). ### **W2: Comparison with SOTA methods** We've added comparisons with non-linear VAE-based models (**See M2**), focusing on: 1. **Representation quality:** Low-dimensional representations capturing specific data aspects. 2. **Interpretability:** Understanding feature contributions to different data aspects. While the interpretability criterion clearly favors linear models, our results show that linear models like sPCA are also strong baselines in representation quality (**See M2.3, Fig S1, Table S2**). We also confirmed similar insights on supervision and disentanglement regularization effects across both linear and non-linear models. ### **W3: Inadequate discussion of limitations** We apologize for the poor presentation of limitations, which are somewhat scattered throughout the paper. We will consolidate them and improve the discussion section in the final version. Key limitations and their potential impacts include: 1. **The limitation of linear-kernel HSIC on theoretical independence guarantees** (Section 3.1, line 159 - 165). In practice this is less concerning, since minimizing HSIC-linear will also reduce HSIC-Gaussian (Section 4.2.3, Table 1). 2. **Subspace interpretability issues** (Section 3.3, line 189 – 193, and Appendix D on the resulting optimization landscape).
This becomes especially challenging when the model needs to learn both supervised and unsupervised subspaces. Figure 7 is an example where the unsupervised subspace failed to capture the true dimension. 3. **Challenges in balancing the disentanglement regularization strength $\lambda$** (Section 5). This is a general challenge for all supervised disentanglement models. 4. **Potential performance loss due to the lack of non-linear feature interactions** (Section 3.1, line 166-169; Section 5). This could lead to underperformance on datasets with complicated patterns. ## **General questions** ### **Q1: How to set model hyperparameters, in particular the dimension of each subspace** We again apologize for the lack of clarity. The main hyperparameters are the subspace dimensions and the disentanglement penalty scale $\lambda$ (**See Table S1, M2.1**). The latter was discussed as a limitation in the Discussion and above in **W3**. For sPCA and sisPCA, the subspace dimension generally doesn't affect performance beyond numerical issues from SVD solvers. - sPCA's subspace dimensions are determined by the supervision kernel rank (Appendix B.3, line 406-408). That is, the effective dimension is always 1 for a 1-D continuous variable with a linear kernel (age in Section 4.2) or K for a categorical variable of K groups with a delta kernel (cancer type in Section 4.2), regardless of the specified dimension. The extra axes beyond the effective dimension (sPCA-age PC2-10) will collapse to zero since their eigenvalues are zero. - The effective dimension of sisPCA subspaces is determined by both the supervision kernel rank and the other subspaces (Remark 3.2, line 144-147). In practice, sisPCA shows similar behavior to sPCA, with extra dimensions typically collapsing to zero. For example, the sisPCA-infection subspace in Section 4.3 has approximately two effective dimensions (Figure 4, 9 and Figure S1).
- In contrast, VAE models are more sensitive to dimension changes and have more hyperparameters to tune (**M2.1**), which is generally beyond the scope of our work. Due to time constraints, we mostly follow SCVI's defaults in designing the VAE models for the benchmark. ### **Q2: Better description of experiments** #### **Simulated data Figure 3** As mentioned in **Q1**, the dimensions of supervised subspaces are mostly predetermined by their target kernel ranks. Here dim(S1) and dim(S2) are 2 regardless of the hyperparameter choice. The dimension of the unsupervised space S3 is unknown, and we aim to recover the ground-truth donut structure (dim = 2), as shown in Fig 3c. Figure 7 represents a failed example where only one dimension is recovered. #### **Subspace dimensions in Section 4.2 and 4.3** Addressed in **Q1**. Dimensions in PCA-based models are generally ranked by importance and can be selected using methods like "the elbow curve" (Discussion, line 311-313). In **M2** the subspace dimensions of the VAE models are set to 10 to ensure a fair comparison. #### **Applications to other types of real-world datasets** We note that, similar to PCA and its extensions [1], our model is also general-purpose and can assist with data exploration tasks (*as pointed out by Reviewer debZ*). The only domain-specific model is the hsVAE-sc tailored for single-cell data with a count-based likelihood. [1] Abid et al., "Contrastive principal component analysis" ### **Q3: The use of HSIC as a quality metric in Section 4.2** The HSIC metrics in Table 1 are included to support the claim that minimizing HSIC-linear can also reduce the computationally more expensive HSIC-Gaussian (not explicitly optimized for in the loss) and thus encourage independence in the representations (line 259-261). --- Rebuttal Comment 1.1: Title: Follow up Comment: I thank the authors for their reply and acknowledge their usefulness. I will, accordingly, raise my score.
However, I have to say that I'm not satisfied with the reply concerning "other kinds of data sets", and that keeps me from raising the score further. --- Rebuttal 2: Title: New Application on Breast Cancer Diagnostic Data Comment: We appreciate your feedback and apologize for our previously insufficient response. Due to space constraints, we couldn't include additional figures and results initially. To further illustrate sisPCA's versatility as a plug-in replacement for PCA in disentangling quantities, we provide below a new application, which will be included in the appendix of the final paper and as a tutorial upon package release. ### **Problem and Dataset** We used the Kaggle Breast Cancer Wisconsin Data Set (569 samples, 30 summary features, uciml/breast-cancer-wisconsin-data, CC BY-NC-SA 4.0 license). The 30 real-valued features are computed from imaging data of breast masses, and include the mean, standard error, and extrema of quantities like cell nuclear radius, texture, and perimeter. Here, our goals are to: 1. Learn a representation for predicting sample disease status (Malignant or Benign, not used during training). 2. Understand how the original features contribute to data variability, the learned representation, and diagnostic potential. ### **Experiments and Results** We focus our comparison on three linear models: PCA, sPCA, and sisPCA, using zero-centered and variance-standardized data as inputs. The diagnosis label used to measure subspace quality remains invisible to all models. Below we summarize the quantitative results; we will include subspace visualizations in the final paper. #### **PCA** We projected all features (dim = 30) onto one PCA subspace (dim = 6, determined by the elbow rule), explaining 61.6% of the total variance. Malignant and benign samples appear well separated in PC1 and PC2. 'symmetry_mean' (loading = -0.223) and 'radius_mean' (loading = -0.219) are the top 2 features contributing negatively to PC1.
That is, the higher these two feature scores, the lower the PC1 score and the greater the possibility of the sample being malignant. #### **sPCA** From the PC1 loadings, we sought to construct two subspaces to separately reflect nuclear size ('radius_mean') and shape ('symmetry_mean'). We set $Y_{radius}$ ('radius_mean', 'radius_sd') of $569 \times 2$ as the target variable for the radius subspace, and $Y_{symmetry}$ ('symmetry_mean', 'symmetry_sd') of $569 \times 2$ as the target for the symmetry subspace. The remaining 26 features were projected onto the two subspaces using sPCA (dim = 3, effective dim = 2). *Both subspaces better explained diagnosis status than PCA (**Table S3**) but remained highly entangled.* Specifically, the PC2 loadings of the two spaces have a Pearson correlation of 0.716, and 'perimeter_worst', which is highly correlated with 'radius_mean' (corr = 0.965), also strongly contributes to the PC2 of the symmetry subspace (loading = 0.238). #### **sisPCA** We next applied sisPCA ($\lambda$ = 10) to further disentangle the two subspaces, increasing their separation (Grassmann distance increased from 1.593 to 2.058, PC2 loadings correlation decreased to 0.257). Here, 'perimeter' features no longer contribute to the symmetry subspace. *As a result, the radius subspace remained predictive of diagnosis, while the symmetry subspace became less relevant (**Table S3**).* We confirmed the finding by directly measuring the predictive potential of the target variables $Y_{radius}$ and $Y_{symmetry}$ (Silhouette score = 0.457 and 0.092, respectively). **Table S3: Predictability of diagnosis status, measured by Silhouette score** | | Radius subspace (dim = 3) | Symmetry subspace (dim = 3) | Overall subspace (dim = 6) | |:-|:-|:-|:-| | PCA (one subspace) | 0.294 (PC 1-3) | 0.013 (PC 4-6) | 0.160 (PC 1-6) | | sPCA | 0.534 | 0.374 | 0.464 | | sisPCA | 0.553 | 0.027 | 0.511 | ### **Interpretation** 1.
Our results suggest that nuclear size (radius subspace) is more informative for breast cancer diagnosis than nuclear shape (symmetry subspace), aligning with clinical observations [1]. 2. Unsupervised PCA and sPCA captured both radius and symmetry aspects as they contribute most to data variability. *Without disentanglement, the two aspects remain intertwined, potentially leading to incorrect conclusions about symmetry's predictive power for diagnosis.* 3. *sisPCA successfully separated the two representations, revealing their distinct relationships to diagnosis.* The results are interpretable: the radius subspace is constructed using features like 'area' and 'perimeter', while the symmetry subspace uses features like 'compactness' and 'smoothness'. This example demonstrates sisPCA's ability to disentangle different aspects of data variation and uncover underlying relationships. **Importantly, sisPCA improves upon the diagnosis potential of PCA's representation, even though the diagnosis labels were never used during training.** [1] Kashyap, Anamika, et al. "Role of nuclear morphometry in breast cancer and its correlation with cytomorphological grading of breast cancer: A study of 64 cases." Journal of cytology 35.1 (2018): 41-45. --- Rebuttal Comment 2.1: Comment: I thank the authors for their reply. This new application makes both the application and the method much clearer, and I would put it in the main text (just before 4.2). I will raise my score accordingly. --- Reply to Comment 2.1.1: Title: Thank you Comment: Thank you for the feedback and suggestion. We will add this application as a new subsection and adjust the main text accordingly.
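The PCA workflow described in the application above (zero-center and variance-standardize, project onto leading components, then rank features by the magnitude of their loadings) can be sketched with a minimal NumPy implementation. This is an illustrative sketch only: the random matrix below stands in for the 569 x 30 feature matrix, and no function here comes from the sisPCA package.

```python
import numpy as np

def pca_loadings(X, k):
    """Return the top-k loading matrix (features x components) and explained-variance ratios."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)    # zero-center and variance-standardize
    U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
    loadings = Vt[:k].T                          # each column is one PC's feature loadings
    explained = (S[:k] ** 2) / (S ** 2).sum()    # fraction of variance explained per PC
    return loadings, explained

rng = np.random.default_rng(0)
X = rng.normal(size=(569, 30))                   # stand-in for the 569 x 30 Wisconsin features
L, ev = pca_loadings(X, k=6)
top_pc1 = np.argsort(np.abs(L[:, 0]))[::-1][:2]  # top-2 features by |PC1 loading|
```

Inspecting `np.abs(L[:, 0])` is the step that would surface features such as 'symmetry_mean' and 'radius_mean' as the dominant PC1 contributors, and `ev` is what the elbow rule is applied to when choosing the subspace dimension.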
Summary: This paper presents a method for disentangling multiple independent linear latent subspaces that align with a set of response variables. It uses the Hilbert-Schmidt Independence Criterion (HSIC) to measure the dependence between each subspace and the target variable providing supervision for that subspace, a concept which was previously applied to infer a single subspace connected to a set of response variables. HSIC is additionally used to enforce independence between the subspaces, encouraging the method to find independent subspaces that correlate with each of the provided dependent variables. This expands on Supervised PCA by enabling inference of an independent subspace for each response variable, rather than a single subspace which aligns with the vector of response variables, reflecting the fact that different responses may be interacting with different subspaces in the data, e.g., methylation changes reflecting natural human aging and changes related to a disease like cancer. The method is evaluated on synthetic data and shown to do a good job at extracting the different subspaces. In particular, Supervised PCA is dominated by one of the two supervised subspaces. The method is then evaluated on real methylation data and single-cell gene expression data. In the cancer data, the new method is shown to provide subspaces with greater separation. Strengths: The paper tackles an important problem in the analysis of high-dimensional omics data, where there is a need to identify interpretable subspaces that can be mapped onto higher-level biological processes but where different phenotypes (e.g., aging vs cancer) may manifest differently, requiring independent subspaces rather than supervision of a single subspace as per sPCA. The paper is well-written and combines simulated data experiments, which help confirm the method is behaving as expected, with two applications on real data. Weaknesses: One could argue that the work is somewhat incremental.
The use of HSIC is not new, nor is the idea of providing supervision to a latent variable model to ensure the latent subspaces reflect some additional variables. However, I feel the contribution is novel enough to warrant presentation at NeurIPS. Technical Quality: 3 Clarity: 3 Questions for Authors: No questions Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors describe the well-known limitations of unsupervised PCA, i.e., identifiability, and other limitations implied by their choice of kernel. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## **Response to Reviewer PWKM** Thank you for your summary. We appreciate your evaluation that our technical contribution is novel enough to warrant presentation at NeurIPS.
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful comments. We first address the two main points raised by multiple reviewers, followed by responses to individual comments. ### **M1: Highlight the technical novelty (Reviewers PWKM and UeQu)** We appreciate your acknowledgment of our paper’s technical contribution as novel and sensible (Reviewer debZ). To further clarify our design insights: - **Linearity and interpretability by design.** PCA remains popular due to its interpretability. It is particularly valuable in data exploration, where the goal is to identify features that best represent the data in specific aspects. In this regard, linear models are indeed the state-of-the-art (See **M2.2**). We motivated our work as a multi-subspace extension of PCA to fully leverage its interpretability. It is the first linear supervised disentanglement model and is competitive with or even outperforms its non-linear counterparts (See **M2.3**). - **Theoretical advantages from linearity.** We show that sisPCA can be viewed both as a linear auto-encoder (Remark 3.1 and Appendix B.1) and as a regression for continuous targets (Remark 3.3 and Appendix B.3). Linearity also allows for more efficient and reliable optimization (Algorithm 1 and Conjecture 3.1). ### **M2: Comparison with non-linear SOTA methods (Reviewers UeQu and QSK9)** We have now included additional baselines based on the HSIC-constrained VAE (HCV) ([1] of Reviewer QSK9). ### **M2.1 SOTA implementation** We reimplemented the idea from [1] using the latest SCVI framework (scvi-tools v1.1.2) for variational inference training and designed three non-linear counterparts of PCA models: 1. **VAE**: Vanilla VAE with Gaussian likelihood. 2. **sVAE**: VAE with additional predictors for target variables. 3. **hsVAE**: sVAE with additional HSIC penalty. Non-linear models generally have more hyperparameters. In our benchmark, we fixed the VAE architecture following the SCVI default (e.g.
one layer of NN with 128 hidden units and batch normalization), or the scVIGenQCModel in [1] (e.g. equal weighting of prediction and reconstruction losses). **Table S1: General model comparison** | | Linear ||| Non-linear ||| |:--|:--:|:--:|:--:|:--:|:--:|:--:| | | PCA | sPCA | sisPCA (this work) | VAE | sVAE | hsVAE | | Supervision | - | HSIC | HSIC | - | NN prediction | NN prediction| | Disentanglement | - | - | HSIC | - | - | HSIC | | Interpretation | Linear projection *U* as feature importance ||| Blackbox || | Hyperparameters | 1) #dim | 1) Subspace #dim. 2) Target kernel choice | 1) and 2) of sPCA. 3) HSIC penalty | 1) #dim. 2) General NN design. | 1) and 2) of VAE. 3) Predictor design | 1), 2) and 3) of sVAE. 4) HSIC penalty| | Optimization | Closed form | | Simple landscape | General limitations of NN and VI like robustness** || \** We were not able to run VAEs on the 6-dimensional simulated data in Figure 3 due to NaNs generated during variational training, which is not uncommon for SCVI models and is likely the result of exploding parameters. ### **M2.2 Interpretability** We define and compare model interpretability based on the practical need to *learn how each feature contributes to specific aspects of the high-dimensional data*. Linear models, as demonstrated in Section 4.3.3, are inherently interpretable because the learned projection *U* directly shows how each subspace axis relates to original features. PCA-based models have the additional advantage of ordering subspace axes based on importance (variance explained). In Figure 5, we selected subspace-specific genes based on PC1 loading scores, which would not be possible without the mapping *U*. In contrast, VAE models link features and subspace axes through a non-linear black box. While some interpretation approaches such as gradient-based saliency map are becoming standard practice, they are nowhere near as straightforward as the linear mapping. 
In addition, since gradient and loss can only be calculated per sample, aggregating them into a global feature importance score is non-trivial. ### **M2.3 Results on the single-cell infection data** We have now included results from VAE-based models (**see attached Fig S1 for visualization**). We also provide below quantitative metrics in **Table S2**. While higher scores generally indicate better results, we caution that these metrics may not fully capture the biological relevance of the representations. As in Section 4.3.2, our analysis aims to isolate genes directly involved in parasite harboring from those associated with broader temporal changes post-infection. A better temporal subspace should reveal subtle progression (pseudo-time) even among cells from the same collection time, going beyond prediction accuracy (since the labels are already known). **Table S2: Quantitative evaluation of representation quality** | Method | Linear ||| Non-linear |||| |:--|:--:|:--:|:--:|:--:|:--:|:--:|:--:| | | PCA | sPCA | sisPCA (this work) | VAE | sVAE | hsVAE | hsVAE-sc | | Subspace separateness-Grassmann | 0 | 3.802 | 4.467 | 0 | 3.510 | 3.797 | 3.459 | | Information density-Silhouette: (infection, time) | (0.068, 0.083) | (0.313, 0.279) | (0.294, 0.164) | (0.045, 0.137) | (0.058, 0.215) | (0.064, 0.197) | (0.235, 0.353) | | Information density after UMAP-Silhouette: (infection, time) | (0.009, 0.238) | (0.153, 0.364) | (0.197, 0.228) | (-0.022, 0.335) | (0.008, 0.602) | (0.047, 0.586) | (0.258, 0.603) | Overall, we've confirmed that: - HSIC against the target efficiently imposes supervision, achieving performance equivalent to neural networks and making sPCA indeed a strong baseline. - sisPCA outperforms its general-purpose VAE counterparts, particularly in the infection subspace. However, VAEs can gain further SOTA performance from domain-specific knowledge (such as count-based modeling).
- Disentanglement may come at the cost of weakening supervision, aligning with Section 4.2.3 and Table 2. We will include the above results (**Fig S1, Table S1 and S2**) in the final version. Pdf: /pdf/7b56b32521c20ce51368f17f5e14b650848a6d7f.pdf
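The rebuttal above repeatedly relies on HSIC both as a training penalty and as a quality metric, including the claim that minimizing the cheap linear-kernel HSIC also tracks the more expensive Gaussian-kernel HSIC. A small, self-contained sketch of the standard biased HSIC estimator can illustrate this; the toy data, bandwidth, and sample size below are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def hsic(K, L):
    """Biased HSIC estimator: tr(K H L H) / (n - 1)^2, with H the centering matrix."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def linear_kernel(X):
    return X @ X.T

def gaussian_kernel(X, sigma=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    return np.exp(-sq / (2 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
Y_dep = X + 0.1 * rng.normal(size=(200, 3))   # strongly dependent on X
Y_ind = rng.normal(size=(200, 3))             # independent of X

# Both kernels agree on which pair of representations is dependent.
for kern in (linear_kernel, gaussian_kernel):
    assert hsic(kern(X), kern(Y_dep)) > hsic(kern(X), kern(Y_ind))
```

The point of the sketch is the agreement in ranking: a subspace pair that scores low under HSIC-linear also tends to score low under HSIC-Gaussian, which is the behavior the HSIC metrics in Table 1 are meant to support.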
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Fast Graph Sharpness-Aware Minimization for Enhancing and Accelerating Few-Shot Node Classification
Accept (poster)
Summary: The paper introduces an innovative approach to improve the generalization capabilities of Graph Neural Networks (GNNs) in few-shot node classification tasks. The authors propose a novel algorithm, Fast Graph Sharpness-Aware Minimization (FGSAM), which incorporates sharpness-aware minimization (SAM) techniques into GNN training but reduces the typical computational overhead by integrating multilayer perceptrons (MLPs) for efficiency. This method not only demonstrates superior performance on few-shot learning tasks compared to traditional techniques but also offers significant computational advantages, particularly in reducing training times. Strengths: originality: medium quality: medium clarity: medium significance: medium Weaknesses: Some experiments are missing and the proof needs to be double-checked carefully. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. What are the standard deviations for the results in Table 4? Are the improvements significant? 2. Is there any theoretical justification for PeerMLP? 3. What is g in equation (8)? 4. The proof of Theorem 4.1 is confusing. Do you assume all nodes have the same degree deg(i)? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: the authors adequately addressed the limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments and reviews. We provide a point-by-point response below. Hope this can address your concern and make things clearer. > **Q1:** What is the standard deviations for the results in table 4? Are the improvements significant? **Response to Q1:** The detailed standard deviations for the results presented in Table 4 are provided in Table 8, located in Appendix D.3. In order to assess the significance of the observed improvements, we performed paired t-tests on the results obtained from ten repeated trials comparing SAM with FGSAM and FGSAM+, respectively, across each dataset. The null hypothesis for these tests was that the performance of SAM would not be inferior to that of FGSAM or FGSAM+. The outcomes of these statistical tests are encapsulated in the **Table A** and **Table B**. In these tables, the plus symbol denotes levels of statistical significance, with +, ++, and +++ corresponding to significance levels of 5\%, 1\%, and 0.1\%, respectively. The results indicate that both FGSAM and FGSAM+ achieve statistically significant improvement compared with SAM in most cases. **Table A. FGSAM vs SAM** | | Cora | Citeseer | Pubmed | Chameleon | Squirrel | Actor | Cornell | Texas | Wisconsin | |-----------|------|----------|--------|-----------|----------|-------|---------|-------|-----------| | **GCN** | - | + | +++ | + | ++ | +++ | +++ | ++ | ++ | | **GraphSAGE** | +++ | +++ | +++ | +++ | +++ | +++ | +++ | +++ | +++ | | **GAT** | + | ++ | ++ | ++ | +++ | +++ | +++ | +++ | +++ | **Table B. 
FGSAM+ vs SAM** | | Cora | Citeseer | Pubmed | Chameleon | Squirrel | Actor | Cornell | Texas | Wisconsin | |-----------|------|----------|--------|-----------|----------|-------|---------|-------|-----------| | **GCN** | - | + | +++ | - | ++ | ++ | +++ | - | + | | **GraphSAGE** | ++ | +++ | +++ | - | +++ | +++ | +++ | - | +++ | | **GAT** | + | ++ | +++ | ++ | +++ | +++ | +++ | +++ | +++ | > **Q2:** Is there any theoretical justification for PeerMLP? **Response to Q2:** Our Theorem 4.1 in the paper demonstrates that under the linear case, whether the MP layer is used or not, the optimal decision boundary is the same in the context of K-classes CSBM. We believe this theorem can give some valuable insight into using PeerMLP. Moreover, our approach reintroduces the graph topology in minimization. This is essentially different from PeerMLP, which does not involve any graph topology information in training. > **Q3:** What is g in equation (8)? **Response to Q3:** The term $g$ is defined as $\nabla_w L_{D_{\text{tr}}}(w)$; see line 189 on page 5. This represents the *vanilla gradient* of the loss function $L$ with respect to the weights $w$, computed on the training dataset $D_{\text{tr}}$. > **Q4:** The proof of Theorem 4.1 is confusing. Do you assume all nodes have the same degree deg(i)? **Response to Q4:** We do not assume all nodes have the same degree. The theorem and its proof are based on the definition of the K-classes CSBM, as detailed between Lines 218 and 226 of our manuscript. In the context of CSBM, while it is true that the expected degree of nodes within the same class is identical, it is crucial to note that this does not necessitate uniformity in the actual degrees across all nodes. --- Rebuttal Comment 1.1: Comment: Thanks for the response. The authors addressed some of my concerns and I will increase my score to 5 --- Rebuttal 2: Comment: Dear Reviewer phcW, We hope our rebuttal sufficiently addressed your concerns.
Is there any additional information we can provide that might lead you to increase your rating? We look forward to your feedback. Many thanks, Author --- Rebuttal 3: Comment: We thank you for acknowledging our work and for raising the score. Thanks again for your time and effort in reviewing our work.
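For readers unfamiliar with the base optimizer discussed in this thread, the generic SAM update underlying the Q3 exchange (vanilla gradient $g = \nabla_w L(w)$, perturbation of radius $\rho$ in the direction of $g$, then a descent step using the gradient at the perturbed weights) can be sketched on a toy quadratic loss. This illustrates plain SAM only, not the paper's FGSAM/FGSAM+ variants; the loss, learning rate, and $\rho$ below are our own illustrative choices.

```python
import numpy as np

def loss(w):
    return 0.5 * np.dot(w, w)         # toy quadratic loss with minimum at the origin

def grad(w):
    return w                          # its gradient

def sam_step(w, lr=0.1, rho=0.05):
    g = grad(w)                                    # vanilla gradient g = grad L(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)    # ascent perturbation of radius rho
    g_sam = grad(w + eps)                          # gradient at the perturbed weights
    return w - lr * g_sam                          # descend using the SAM gradient

w = np.array([3.0, -2.0])
for _ in range(50):
    w = sam_step(w)
```

Note the two gradient evaluations per step, which is exactly the overhead this paper targets: FGSAM replaces one of the two passes with a cheaper MLP (PeerMLP) computation while keeping the graph topology in the other.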
Summary: This paper focuses on efficient graph neural network (GNN) training in few-shot node classification (FSNC) problem by extending sharpness-aware minimization (SAM) for reducing the computational cost and improving the generalization of GNNs on unseen classes. The training phase is accelerated by perturbing the parameters of GNN and then minimizing the perturbation loss of GNN without the message passing mechanism (MLP). Experiments have been conducted to verify the effectiveness and efficiency of the proposed method. Strengths: 1. It is interesting to incorporate SAM technique with GNN by removing MP during training and reintroducing MP in inference, which is reasonable to improve the generalization of FSNC. 2. The proposed method could be conveniently integrated into existing GNN-based few-shot node classification models, which is verified in the experiments. 3. The carefully designed experiments highlight the effectiveness of the proposed method in reducing the computational costs of GNN training. 4. The paper is well-organized and explains the method very clearly. In addition, the landscape visualization and toy case analyses also make the motivation and ideas easy to understand. Weaknesses: 1. There are related work on applying SAM to few-shot tasks (Sharp-MAML) [1] and performing SAM every $k$ steps [2], and the innovation of this work is not significant compared with the above work. [1] Sharp-maml: Sharpness-aware model-agnostic meta learning. ICML, 2022 [2] Towards efficient and scalable sharpness-aware minimization. CVPR, 2022 2. The experiments only compare with small-sample node classification models and variants of SAM from 2022, lacking comparisons with the latest baseline methods of FSNC. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. 
Since this work focuses on solving FSNC problems, the authors should provide a comparison with the state-of-the-art FSNC methods, such as COSMIC [3] and COLA [4], in theoretical analysis and experiments. [3] Contrastive meta-learning for few-shot node classification. SIGKDD, 2023 [4] Graph contrastive learning meets graph meta learning: A unified method for few-shot node tasks. WWW, 2024 2. Why does adding 25% noisy edges yield better results than adding only 15% in Figure 4? The authors should discuss or explain this issue in detail. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors describe some limitations of the proposed method in the time consumption section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments and reviews. We provide a point-by-point response below. Hope this can address your concerns and make things clearer. > **W1:** There are related work on applying SAM to few-shot tasks (Sharp-MAML) [1] and performing SAM every k steps [2], and the innovation of this work is not significant compared with the above work. **Response to W1:** Both Sharp-MAML [1] and LookSAM [2] focus on vision data, while our work is crafted in the context of graph data. Specifically, we utilize GNNs for parameter perturbation while employing MLPs to minimize the perturbed loss. Our experimental results demonstrate that directly applying SAM-like algorithms from the vision domain to graphs does not yield satisfactory performance (refer to Table 2, Table 3, and Figure 5), while our method is not only faster but also better. This indicates the necessity for SAM variants that are specifically designed for graph data. Furthermore, although Sharp-MAML applies SAM to few-shot tasks, their algorithm is only designed for MAML models. In contrast, our work can be applied to both MAML and non-MAML methods in GNN-based FSNC tasks. Most notably, from the perspective of SAM, our work is crafted for graphs by exploiting their unique properties, enabling the first SAM-like algorithm that can be faster than the base optimizer. This effectively turns SAM's core drawback of slower training speed into an advantage, distinguishing our work from previous SAM-like works. > **Q1:** Lacking comparisons with the latest baseline methods of FSNC such as COSMIC [3] and COLA [4], in theoretical analysis and experiments. **Response to Q1:** Our proposed FGSAM is fundamentally orthogonal to GNN-based FSNC models, such as COSMIC and COLA, due to its unique positioning as a SAM-like optimizer designed to leverage the intrinsic properties of GNNs. We selected classical and widely used FSNC models and NC models for evaluation.
The strategy is borrowed from previous SAM works, where they also select classical and widely used models in the vision domain. We believe the selected baselines are diverse enough to verify the effectiveness of our proposed methods. However, we are glad to provide additional experiments on integrating FGSAM with state-of-the-art FSNC methods, such as COSMIC [3]. | | Corafull | | DBLP | | |-----------|----------|------|-------|------| | | acc | time | acc | time | | **COSMIC** | 75.74 | 3.95 | 80.80 | 3.78 | | **COSMIC+FGSAM** | 77.11 | 5.66 | 81.93 | 5.00 | | **COSMIC+FGSAM+** | 76.99 | 3.95 | 81.46 | 3.35 | Due to time limitations, we evaluated our proposed method on COSMIC with the 5N3K setting only on CoraFull and DBLP. As shown in the above table, FGSAM/FGSAM+ can effectively improve the performance of COSMIC, demonstrating the superiority of our proposed method. > **Q2:** Why does adding 25\% noisy edges yield better results than adding only 15\% in Figure 4? **Response to Q2:** Firstly, we found a typo in the coordinate axis labels in the original manuscript. The correct axis labels should be 0\%, 1\%, 5\%, 10\%, 15\%, not as previously misstated. Consequently, the observed phenomenon is that adding 10\% noisy edges yields better results than adding only 5\%. We appreciate the reviewer's interest in this intriguing phenomenon. In response, we conducted similar experiments on the AMM-GNN to validate the consistency of this phenomenon across different models. The results are presented in the following table: | edge noise | 0\% | 1\% | 5\% | 10\% | 15\% | |--------------------|-------|-------|-------|-------|-------| | **AMM-GNN** | 72.92 | 70.50 | 69.98 | 70.33 | 68.41 | | **AMM-GNN+FGSAM+** | 72.79 | 71.94 | 70.40 | 70.85 | 68.73 | It can be seen that adding 10\% noisy edges also yields better results than adding 5\%, similar to the results on GPN.
Hence, we suppose that this phenomenon may be attributed to the uniformly introduced 10\% noisy edges, which perhaps align more closely with certain inherent characteristics of the dataset, thereby mitigating the extent of performance degradation. --- Rebuttal Comment 1.1: Comment: Thanks for your feedback, which has addressed my concerns on the comparison methods and the results of adding noisy edges. Thus, I will maintain my score. --- Rebuttal 2: Comment: We thank you for acknowledging our work. Thanks again for your time and effort in reviewing our work.
Summary: The paper proposes a method for few-shot learning on graphs leveraging sharpness-aware minimization (SAM) from the vision community. The paper explains SAM as a technique for gradient perturbation during training to push the parameters to "flatter" regions of the loss space in hopes of achieving better generalization in the few-shot setting. However, SAM is inherently slower, requiring two backward/forward passes during training; one to compute the loss gradient and a second to determine the perturbation direction. The authors propose multiple ideas to overcome the added computation and to leverage graph topology. The authors strategically substitute Graph Neural Networks (GNNs) with Multilayer Perceptrons (MLPs) to avoid the burden of message passing in some parts of their algorithm while preserving graph topologies in others. The authors propose a second scheme that leverages iterations of an approximate perturbation scheme with periodic exact evaluations of SAM. The authors present ablation studies on multiple SOTA models and 9 datasets and show improvements in training times and accuracy. Strengths: The authors present multiple novel approaches to overcome computational challenges applying SAM techniques to structured data to achieve better generality in the few-shot case. I only have a few style suggestions at the beginning of the paper but otherwise the paper does a good job of explaining multiple, complex ideas in a clear way. The paper does present a significant result, blending ideas from PeerMLP and SAM from the vision community to contribute to making structured models more generalizable. Weaknesses: I think the paper is very good and clear in most cases. The only weaknesses I would note (and this may be subjective) would be to make some style improvements in the beginning of the paper. (1) In the abstract (line 14) Moreover, our method ingeniously reutilizes...
is a bit grandiose; I would prefer a more scientific tone and delete "Moreover" and "ingeniously"; "Our method reutilizes... (2) Remove "for the first time" in 20. It is clear that the ideas are novel. (3) Line 50 is the first time Message Passing is abbreviated as MP. I would follow the convention used in the paper for the introduction of other abbreviations where the first letter of the abbreviated character is in bold and capitalized. (4) Line 93 "as follows" is not followed by an equation but another sentence. Maybe the equation should be presented after the "as follows" text and the next sentence come after the equation? Technical Quality: 3 Clarity: 3 Questions for Authors: (1) I find the paragraph containing lines 137-145 hard to read due to how the ideas are presented: "Inspired by previous work...we propose to remove MP during training, but reintroduce it...Reintroducing MP after training can improve the performance significantly but still cannot surpass GNNs..." "Hence we propose minimizing training loss on PeerMLPs but minimizing the sharpness according to GNNs." There seems to be some logical oscillation in the text. First it seems that a scheme similar to PeerMLP is used where the GNN is removed during training and injected back during inference. In the next few sentences, this seems to be abandoned and the GNN brought back for the SAM portion of training. I believe the experiments and ideas really did implement the latter formulation but either I'm misunderstanding or this paragraph could use some work to be clearer. (2) In table 8 in the Appendix, I don't believe the best results are actually in boldface. Maybe this was due to my printer or perhaps the table needs to be reformatted? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your constructive feedback and valuable comments on our manuscript. Your comments are really valuable for further improving our manuscript. Below, we address each of your concerns and questions: > **Q1:** I think the paper is very good and clear in most cases. The only weaknesses I would note (and this may be subjective) would be to make some style improvements in the beginning of the paper. **Response to Q1:** Thanks for carefully pointing these out. We will improve our writing in the revision. > **Q2:** I find the paragraph containing lines 137-145 hard to read due to how the ideas are presented: "Inspired by previous work...we propose to remove MP during training, but reintroduce it...Reintroducing MP after training can improve the performance significantly but still cannot surpass GNNs..." "Hence we propose minimizing training loss on PeerMLPs but minimizing the sharpness according to GNNs." There seems to be some logical oscillation in the text. First it seems that a scheme similar to PeerMLP is used where the GNN is removed during training and injected back during inference. In the next few sentences, this seems to be abandoned and the GNN brought back for the SAM portion of training. I believe the experiments and ideas really did implement the latter formulation but either I'm misunderstanding or this paragraph could use some work to be clearer. **Response to Q2:** Thank you for your valuable feedback, which has highlighted an area of our text that could benefit from greater clarity. In response, we would like to clarify that our approach indeed adopts the latter formulation mentioned, wherein MP is partially removed during training, with the reintroduction of GNN for the SAM portion of the training. The intent of our original discussion was to illustrate the relative performance of PeerMLP, acknowledging its merits yet also its limitations, thereby motivating our proposed method.
To address the issue raised, we will revise the mentioned text to ensure the presentation of our idea is more clear and logical. > **Q3:** In Table 8 in the Appendix, I don't believe the best results are actually in boldface. Maybe this was due to my printer or perhaps the table needs to be reformatted? **Response to Q3:** Thanks for pointing out this! Upon review, we found that the boldfacing issue in Table 8 was indeed a formatting oversight on our part. We will correct this in the revised manuscript. --- Rebuttal Comment 1.1: Title: Addressed concerns Comment: Thank you for addressing the concerns. Best of luck with the conference. --- Reply to Comment 1.1.1: Comment: We sincerely thank you for acknowledging our work. Thanks again for your time and effort in reviewing our work.
null
null
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Beyond Accuracy: Ensuring Correct Predictions With Correct Rationales
Accept (poster)
Summary: This paper introduces a novel method called dual-correct predictions, designed to train models to make accurate predictions based on correct rationales, thereby improving their safety for deployment. Additionally, the authors develop a unique dataset containing structured rationales that explicitly outline the reasoning processes required for visual recognition tasks. Strengths: 1. The authors constructed a new fine-grained rationale ontology dataset, which provides new ideas for generating faithful explanations. 2. The authors propose a dual-correct method to improve the effectiveness of the models. Weaknesses: 1. Equation 4 lacks sufficient explanation, especially the correct rationale. 2. Lack of an algorithm flowchart to show the training and inference process of the model. 3. The authors do not include more up-to-date explainable baselines. Other: Equations 4 and 5 are missing the closing ')'. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What is the difference between the rationale ontologies proposed by the authors and prototypes in prototype learning? Have you considered adding prototype learning as a baseline? 2. How does the authors' approach scale to data without ontology labeling? 3. How does the model predict when the authors are reasoning? 4. The authors present the background of the problem in Figure 1, using GPT4V as an example. Why is GPT4V not compared in the experiments? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See weaknesses and questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback and instructive reviews. Below we provide point-to-point responses. **Q1. Detailed explanation of Equation 4.** In Eq.4, **to achieve correct predictions**, our backbone objective ensures the correct alignment of image and text embeddings in a shared space (Line 157-159), the same as the CLIP model. **To achieve correct rationales**, the biggest challenge is the absence of human annotations of rationale explanations (Line 85-88). To address this, we propose two constraints to learn correct rationales in a self-supervised manner. One is the disentanglement constraint $\mathcal{D}(h(g(I,r)), h(g(I,r')))\ge\epsilon$, which ensures the vision embeddings of different rationales from the same category are disentangled (Line 154-155). However, merely disentangling rationale embeddings can easily lead to trivial solutions (as shown in Table 6 of our ablation study). Therefore, we propose the other reconstruction constraint $\mathcal{D}(\sum _ {r\in\{r _ k\} _ y}h(g(I,r)),f _ \theta(y))\le\delta$, which ensures the aggregation of vision embeddings for all rationales from the same category aligns with the language embedding of the category (Line 155-156). This constraint regularizes the model to achieve semantically meaningful disentanglement of rationale embeddings, i.e., correct rationales. Our experiments demonstrate the effectiveness of the proposed optimization in achieving dual-correct predictions. Thank you for pointing out the typo; we will make this part clearer in the final version. **Q2. Authors do not have more up-to-date explainable baselines.** We respectfully disagree. The explanation method we propose is developed based on recent works that decompose the CLIP-ViT predictions into the contributions of attention heads [1][2].
As shown in Table 1, we compare our explanation method with both classic methods (e.g., LRP, rollout, GradCAM) and state-of-the-art methods (e.g., Chefer et al., CLIP-PRS). The results reveal that the explanation accuracy of our method outperforms up-to-date state-of-the-art methods. [1] Gandelsman et al. "Interpreting CLIP's Image Representation via Text-Based Decomposition." ICLR 2024. \ [2] Chen et al. "Interpreting and Controlling Vision Foundation Models via Text Explanations." ICLR 2024. **Q3. What is the difference between the proposed Rationale ontologies and prototypes in prototype learning?** Rationales and prototypes are conceptually related but fundamentally different. Through the lens of prototype learning works, such as [3] and [4], our rationales can be viewed as a set of prototype vectors for a certain category. However, the differences are significant. **1)** Our rationales are more comprehensible to users. Prototypes are "visual words" (image patches), while our rationales are free text. Using image patches and their activations as explanations is ambiguous; their interpretation heavily relies on subjective user judgment. For example, while a prototype may highlight the wing of a bird, it remains unclear whether the model's prediction is based on the wing’s shape, color, or texture. In contrast, our rationales expressed in natural language deliver unambiguous explanations and can precisely localize the regions on the image. **2)** Our rationales align with the human knowledge and reasoning process for making predictions. Since prototypes are learned in a data-driven manner, these approaches inevitably introduce spurious correlations and biases inherent in the data, i.e., correct prediction with wrong rationales. In contrast, our rationales are generated from human knowledge extracted from pre-trained LLMs, ensuring the model makes correct predictions with correct rationales, i.e., dual-correct predictions. [3] Snell et al.
"Prototypical networks for few-shot learning." NeurIPS 2017. \ [4] Chen et al. "This looks like that: deep learning for interpretable image recognition." NeurIPS 2019. **Q4. How does the authors' approach scale to data without ontology labeling?** Our rationale generation method can be generalized to arbitrary image recognition datasets, and does NOT rely on ontology labeling of the original dataset, such as the WordNet for ImageNet. Specifically, we query GPT-4 with example responses and instructions (as shown in Appendix A) to automatically generate rationale ontologies. Our method requires only the category names, making it applicable to any image classification dataset. **Q5. How does the model predict and reason?** (Although we are not sure whether this is the correct question you are asking, we will try to provide an explanation.) **For prediction**, the model will compare the similarity of the image embedding with the text embedding of all category names in the shared space [5]. The category prediction will be the one that has the highest similarity score. **For reasoning**, in Table 7 of our ablation studies, we demonstrate that the model can first compare the similarity between image embeddings and all rationale embeddings, then average the scores of rationales within each category. The category prediction will be the one that has the highest average similarity score. This approach not only enhances prediction accuracy but also exposes the model's reasoning process by revealing intermediate predictions of rationales. **Q6. Why not compare with GPT-4V in the experiments?** Because GPT-4V is a closed-source model, we cannot access its internal features or produce heatmaps through the current API. In Figure 1, we can only tell from the language responses of GPT-4V that it will also produce incorrect predictions with correct rationales. We tried to prompt GPT-4V to highlight corresponding regions for rationales.
However, it always utilizes other classic pretrained smaller models for detection or segmentation, rather than performing these tasks itself. We plan to include this evaluation once more APIs for GPT-4V become available. --- Rebuttal Comment 1.1: Title: Response to Reviewer dtoB Comment: Dear Reviewer dtoB, We sincerely appreciate the time and effort you have invested in providing us with your encouraging feedback. Your insightful comments are invaluable to the enhancement of our work. We eagerly look forward to your feedback and are ready to address any additional questions or concerns that might arise. Thank you once again for your valuable contribution. Best wishes, Authors --- Reply to Comment 1.1.1: Title: We are keen to discuss further with you Comment: Dear Reviewer dtoB, We sincerely appreciate your encouraging feedback. As the discussion period for authors and reviewers draws to a close within the next two days, we wish to confirm that our point-to-point responses have addressed your questions. Your feedback is valuable to us, and we eagerly await your thoughts. Best wishes, Authors
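The two inference modes described in Q5 above can be made concrete with a small sketch. This is a minimal, hypothetical numpy illustration (not the authors' implementation; function names are ours): `predict_by_category` mirrors CLIP-style zero-shot prediction over category text embeddings, and `predict_by_rationales` averages per-rationale similarity scores within each category, as described in the rebuttal.

```python
import numpy as np

def cosine_sims(query, embs):
    # Cosine similarity between one query vector and each row of embs.
    return (embs @ query) / (np.linalg.norm(embs, axis=1) * np.linalg.norm(query))

def predict_by_category(image_emb, category_embs):
    # CLIP-style zero-shot prediction: pick the category whose text
    # embedding is most similar to the image embedding.
    return int(np.argmax(cosine_sims(image_emb, category_embs)))

def predict_by_rationales(image_emb, rationale_embs, rationale_cats, n_cats):
    # Rationale-level reasoning: score every rationale embedding, then
    # average the scores of the rationales belonging to each category.
    sims = cosine_sims(image_emb, rationale_embs)
    avg = [sims[rationale_cats == c].mean() for c in range(n_cats)]
    return int(np.argmax(avg))
```

Both functions return the index of the winning category; the per-rationale scores in `predict_by_rationales` double as the intermediate predictions that expose the reasoning process.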
Summary: This paper proposes a method for incorporating rationales into the predictions of foundation models. The goal is to build models that make "dual-correct predictions": predictions that are correct because the model "reasons" using the correct rationale. To do this, the authors collect a new rationale dataset that captures additional information for each category in ImageNet. They then develop a method that faithfully attributes predictions to rationales for CLIP-VIT models, and propose an optimization approach that uses the rationale to guide the model during training. This method is then evaluated through a series of experiments. The authors find that their proposed method outperforms state-of-the-art models in both prediction accuracy and rationale correctness across image classification and retrieval tasks. As a result, the paper concludes that the proposed dataset, explanation method, and rationale-informed optimization approach provide a way to develop models with dual-correct predictions. Strengths: I believe the strengths are as follows: - Problem goal: With the rise of foundation models being applied in increasingly impactful settings, it is imperative for them to not only make correct predictions but for them to do so for the right reasons; otherwise, they run the risk of failing in unpredictable ways. This paper does a nice job of discussing this problem and the goal of "dual-correct" prediction is intuitive and informative. I thought Figure 1 was a really nice figure. - Rationale dataset: The collected rationale dataset could be useful for future research that builds on rationale methods for ImageNet. - Experimental results: The paper conducts extensive experiments across a wide range of tasks, including image classification, retrieval, rationale localization, and disentanglement. The proposed method consistently outperforms other methods and ablated versions. 
Weaknesses: Although the experimental results are strong, I think there are weaknesses that take away from the overall impact of the work: - Limited applications: The paper is framed in the title and intro as providing general methods for forming models with dual-correct predictions. However, the results are only for vision transformers: the faithful explanation method used is based on CLIP-ViT models, and the collected dataset is only relevant for object classification on ImageNet. Since the method and problem are framed generally, there should be more experiments across other domains (e.g. text) and models (e.g. diffusion models). - Using GPT-4 to generate gold-label rationales: The premise of the paper is that foundation models may not be using the right rationales when making predictions. However, their dataset of rationales is constructed using GPT-4, a foundation model. If we believe the premise (which I do), why should we trust GPT-4's generations as gold-label rationales? - Missing related work: There is a large literature in NLP that seeks to develop methods that form sensible rationales (e.g. [1, 2, 3]). These should be included in the related work and there should be a discussion about how the method developed in the paper differs from them. - Clarity: Parts of the paper are confusing. For example, I thought the rationale-informed optimization section was difficult to understand. There were also typos throughout the paper (e.g. "[33] averaging across layers and heads to get contribution of image token" in line 137 isn't a complete sentence). [1] T. Lei, R. Barzilay, and T. Jaakkola. 2016. Rationalizing neural predictions. [2] J. Bastings, W. Aziz, and I. Titov. 2019. Interpretable neural predictions with differentiable binary variables. [3] S. Gurrapu, A. Kulkarni, L. Huang, I. Lourentzou, L. Freeman, and F. A. Batarseh. 2023. Rationalization for Explainable NLP: A Survey.
Technical Quality: 3 Clarity: 2 Questions for Authors: See questions above relating to limited applications, trusting the rationales of GPT-4, and missing related work. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your encouraging comments and recognition of our contributions. Below we provide point-to-point responses. **Q1. Since the method and problem are framed generally, there should be more experiments across other domains (e.g. text) and models (e.g. diffusion models).** Indeed, in this paper, we aim to introduce a general problem, dual-correct prediction, for existing models. We then instantiate the research problem using Vision-Language Models (VLMs) and develop a method tailored for the CLIP-ViT model to achieve dual-correct predictions. This is because the CLIP-ViT model is the most representative model that serves as the foundation of other state-of-the-art models. For example, most generative Multimodal Large Language Models (MLLMs), such as LLaVA [1], use a fixed CLIP-ViT as their vision encoder. Recent works reveal that the capability of the CLIP model is the bottleneck of improving these MLLMs [2]. Moreover, the CLIP-ViT model is widely used to guide the generation of diffusion models, e.g., Stable Diffusion. Therefore, ensuring the CLIP-ViT model to make dual-correct predictions has a crucial impact across different domains and tasks. In our experiments, we further demonstrate the superiority of our model in text-to-image retrieval tasks on MSCOCO and Flickr30K datasets. This indicates that the enhanced dual-correctness of the vision encoder will improve the performance of its jointly trained text encoder. Ensuring dual-correct predictions for language and diffusion models will require adapting current methods or developing new ones. We will leave this for future work. [1] Liu et al. "Visual instruction tuning." NeurIPS 2023. \ [2] Tong et al. "Eyes wide shut? exploring the visual shortcomings of multimodal llms." CVPR 2024. **Q2. 
Why should we trust GPT-4's generations as gold-label rationales?** **1) Sufficient knowledge.** Existing studies prove that GPT-4 has expert-level expertise in commonsense [3] and domain knowledge [4]. **2) Prompt engineering.** Recent studies show that LLMs can follow structured instructions with examples [5-6]. Our examples in the prompt are manually refined across several runs to ensure GPT-4 follows our instructions. **3) Rationale quality evaluations.** We further conduct automatic evaluations on the rationale quality in a scalable manner, thereby eliminating expensive and subjective human assessment. Inspired by [7], we employ the latest GPT-4o and GPT-4v to assess rationale quality from three aspects rated on a Likert scale from 0 to 5. Summary results are presented below, with detailed score distributions in Fig.1 of our rebuttal's one-page PDF (metric definitions are in Fig.1 caption). Overall, 964 out of 1,000 categories have high-quality rationales (≥4.0). |Evaluator|Factual Consistency|Comprehensiveness|Visual Disentanglement| |:-:|:-:|:-:|:-:| |GPT-4o|4.74|4.39|4.52| |GPT-4v|4.89|4.59|4.61| [3] Bubeck et al. "Sparks of artificial general intelligence: Early experiments with gpt-4." arXiv 2023. \ [4] Liu et al. "Holistic evaluation of gpt-4v for biomedical imaging." arXiv 2023. \ [5] Menon et al. "Visual classification via description from large language models." ICLR 2023. \ [6] Qin et al. "Medical image understanding with pretrained vision language models: A comprehensive study." ICLR 2023. \ [7] Bills et al. "Language models can explain neurons in language models." OpenAI 2023. **Q3. Discussion of rationale-related works in NLP.** In NLP, the term rationale is interchangeable with language explanation [8]; our rationale, however, refers to visual evidence. In [9], the authors develop a greedy method to generate rationales, which selects the minimal subset of input tokens that produces the same output as the full sequence.
[10] and [11] enhance the interpretability of language models in a two-model manner: one latent model selects rationales from the input text, and the other model makes predictions based on the rationales alone. In [12], the authors annotate a commonsense explanations dataset to train a language model that can provide rationales for commonsense reasoning tasks. **1)** Different from these works, which focus on language models, our method not only provides language explanations for vision predictions but also grounds them in correct visual evidence. **2)** In contrast to existing works that require human annotations, our method learns correct rationales in a self-supervised manner. We will add more discussion in the final version. [8] Gurrapu et al. "Rationalization for explainable NLP: a survey." Frontiers in Artificial Intelligence 2023. \ [9] Vafa et al. "Rationales for Sequential Predictions." EMNLP 2021. \ [10] Lei et al. "Rationalizing Neural Predictions." EMNLP 2016. \ [11] Bastings et al. "Interpretable Neural Predictions with Differentiable Binary Variables." ACL 2019. \ [12] Rajani et al. "Explain Yourself! Leveraging Language Models for Commonsense Reasoning." ACL 2019. **Q4. Details of Rationale-informed optimization section.** Thank you for pointing out the typos. For the rationale-informed optimization section, our key idea is to leverage the two proposed constraints to force the model to learn correct visual evidence for prediction. Specifically, the disentanglement constraint guides the model to learn distinct visual embeddings for different rationales (Line 154-155). However, without human annotations of ground truth regions of rationales, the model can easily fall into trivial solutions that randomly disentangle rationale embeddings without correct localization (as shown in Table 6 of our ablation studies).
Therefore, our reconstruction constraint forces the embeddings of rationales to reconstruct the embedding of the category (Line 155-156). By regularizing the search space, our method achieves semantically meaningful disentanglement solutions without relying on costly human annotations. We will correct the typos and make the section clearer in the final version. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their response. I'm still concerned by the validity of using GPT-4 to construct the dataset of gold-label annotations. I agree with the author rebuttal that there's plenty of work showing GPT-4 has amazing capabilities, but it also has mind-boggling failure modes, and more evaluation needs to be done to show this specific GPT-4 generated dataset is valid. --- Reply to Comment 1.1.1: Title: Response to Reviewer WT2A Comment: Dear Reviewer WT2A, Thank you for your prompt reply. To address your concern about the GPT-4 generated rationales, we further conducted **human evaluations**. In our previous rebuttal, we performed an automatic evaluation of the entire dataset, and the results indicated the high quality of our data. This human evaluation further proves that the machine evaluators are strongly aligned with human evaluators, confirming the reliability of our automatic evaluations and the high quality of our rationale dataset. Here are the details: \ **Evaluators:** \ **1) Human evaluators:** We recruited four human evaluators, who are mostly graduate students. They are asked to conduct assessments based on commonsense knowledge and perform Internet searches for validation. On average, it takes them one minute per sample. **2) Machine evaluators:** The latest GPT-4o and GPT-4v models. For each evaluation, we perform three independent runs and calculate the average scores.
**Evaluation Metrics:** \ Factual Consistency: Whether the rationales are consistent with facts \ 5 - 100% consistent with fact \ 4 - 75% \ 3 - 50% \ 2 - 25% \ 1 - 0% consistent with fact (completely wrong) Comprehensiveness: Whether the rationales provide all information necessary to predict the category \ 5 - cover 100% of discriminative visual features \ 4 - cover 75% \ 3 - cover 50% \ 2 - cover 25% \ 1 - cover 0% of discriminative visual features Visual Disentanglement: Whether the rationales are visually disentanglable or non-overlap \ 5 - 100% of rationale visually non-overlap (completely disentangle) \ 4 - 75% non-overlap \ 3 - 50% non-overlap \ 2 - 25% non-overlap \ 1 - 0% of rationale visually non-overlap (completely overlap) **Evaluation Data:** \ We sample **three independent groups** of data from our rationale dataset, each consisting of 50 categories and their corresponding rationales. Specifically, categories were randomly selected from their superclasses: Animals (20), Objects & Artifacts (15), Natural Scenes (5), Plants (5), and Human Activities (5). This ensures not only each superclass is represented but also the robustness of our results [1]. [1] Torralba et al. "Unbiased look at dataset bias." CVPR 2011. 
**Quantitative Results:** | Evaluator | Factual Consistency | Comprehensiveness | Visual Disentanglement | |:---:|:---:|:---:|:---:| | GPT-4o | 4.89$\pm$0.05 | 4.55$\pm$0.06 | 4.66$\pm$0.06 | | GPT-4v | 4.92$\pm$0.03 | 4.67$\pm$0.05 | 4.70$\pm$0.02 | | **Machine Avg.** | **4.91** | **4.61** | **4.68** | | | | | | | Human_A | 4.85$\pm$0.11 | 4.64$\pm$0.19 | 4.42$\pm$0.15 | | Human_B | 4.97$\pm$0.02 | 4.77$\pm$0.02 | 4.20$\pm$0.11 | | Human_C | 4.78$\pm$0.04 | 4.60$\pm$0.11 | 4.78$\pm$0.10 | | Human_D | 4.81$\pm$0.08 | 4.64$\pm$0.05 | 4.77$\pm$0.07 | | **Human Avg.** | **4.85** | **4.66** | **4.54** | **Takeaway 1: The quality of our generated rationale dataset is high.** In the table above, we show the results on the average of three sample groups. The average scores demonstrate a high degree of agreement between machine and human evaluators. The dataset consistently achieves scores of 4.61 or higher on the average of machine and human evaluators for each metric, indicating that over 90.3% of the rationales for each category are highly factual, comprehensive, and visually disentanglable. **Takeaway 2: Our automatic evaluation is reliable and scalable.** As shown in the table above, the average scores of each metric are almost identical between machines and humans. The Pearson correlation coefficient of 0.82 reveals the strong positive correlation between machine and human evaluations. Therefore, using the automatic evaluation method can efficiently evaluate the entire dataset, and align with human evaluations. Note that the results of the entire dataset are reported in the previous rebuttal. **Qualitative Results:** \ Here we list some categories that rated high in automatic evaluation. As shown, all the rationales are consistent with fact, sufficient for distinguishing the corresponding category, and visually non-overlap (disentangled). 
\ "ostrich": ["Long, bare neck", "Large, powerful legs", "Small, feathered head", "Two-toed feet", "Black-and-white feathers", "Long, curved beak"] \ "tabby cat": ["Striped fur pattern", "Dark 'M' on forehead", "White paws/chest", "Dark ears/tail tip", "Round face shape", "Whiskers/eyebrows"] \ "airliner": ["Long, slender body", "Multiple engines", "High wingspan", "Narrow fuselage", "Tail fin", "Multiple windows"] Again, thank you for your constructive comments, we hope this additional human evaluation can address your concerns. Please let us know if our response addresses your questions or if you have any further questions.
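The machine–human agreement reported above reduces to a Pearson correlation over paired evaluator scores. A minimal sketch with illustrative, made-up numbers (not the authors' data):

```python
import numpy as np

# Illustrative (made-up) paired score averages: one entry per
# metric/group combination, machine evaluators vs. human evaluators.
machine = np.array([4.91, 4.61, 4.68, 4.74, 4.39, 4.52])
human = np.array([4.85, 4.66, 4.54, 4.80, 4.45, 4.48])

# Pearson correlation coefficient between the two evaluator types;
# values near 1 indicate strong positive agreement.
r = np.corrcoef(machine, human)[0, 1]
```

With real score pairs in place of the dummy arrays, `r` corresponds directly to the 0.82 coefficient cited in the rebuttal.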
Summary: Machine learning models are mostly commonly evaluated based on accuracy, yet highly accurate models can conceal issues with the underlying reasoning. For example, models can predict the right label while predicting it for the wrong reason. To improve the ability of foundation models to make good predictions for the right reasons, the authors contribute two items: 1) they develop a dataset where prediction labels are connected with a series of rationales, and 2) they develop a fine-tuning strategy to incorporate this into the model. Strengths: 1. **Formulation as rationales matches intuition** - The core idea of rationales is interesting as a building block for developing explanations. Such an idea seems to be present in different settings/scenarios, and matching up explanations with the underlying rationales seems to be an intuitive way to ensure that predictions are made for the right reason. 2. **The method is pluggable/usable across different models** - As pointed out in Section 3.3, the method proposed in the paper is usable across different architectures. Essentially, the proposed method is accomplished through optimization techniques rather than additional parameters or architecture changes. Because of this, the model proposed in the paper can easily be used with different models. 3. **Evaluation is across many datasets** - The impact of their fine-tuning method is evaluated against nine different datasets, which makes it clear what the results + impact is. In Figure 4, different images are shown comparing a CLIP baseline with the method proposed by the authors. Such a figure demonstrates the utility provided by their method, as they can improve heatmaps, which implies that they improve a model's ability to make predictions for the "right reason." Weaknesses: 1. 
**Unclear if the Dual-correct Prediction Problem is the same as the Right for the Right Reason** - Equations 4 and 5 sketch out the optimization problem that defines the dual-correct predictions, and how to optimize for right for the right reason. However, it's unclear if the optimization problem listed here is the correct one to pursue, and the reasons behind such a problem are unclear. It seems that far-apart rationales should still have dissimilar embeddings, and close-together rationales have similar embeddings, but such intuitions are not explained in the paper. More generally, the reason and structure of this optimization problem should be explained better, as it is unclear how or why such a problem connects with the overall issue of producing correct rationales. 2. **Generalizability of the dataset generation method is unclear** - The dataset generation method is focused on the ImageNet dataset, where different hierarchies are created based on the underlying ontologies. It is unclear whether such ontologies can be generated in non-ImageNet datasets, and how the location of such rationales can be determined in generality. Because of this, it's unclear whether the method to generate the structured rationale dataset can be widely replicated across domains. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Why is a tree structure used when designing rationales (line 46)? 2. Why does incorporating rationales also improve accuracy? 3. What is the difference between localizability and disentanglability? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors acknowledge the limitations of their paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your encouraging comments. Below we provide point-to-point responses. **Q1. Is the dual-correct prediction problem the same as Right for the Right Reason?** \ No, our dual-correct prediction problem is different from the Right for the Right Reason in [1]. **1) Explanation granularity.** [1] is limited to providing category-level explanations, i.e., a single explanation heatmap for one prediction. In contrast, our dual-correct prediction can provide concept-level explanations, i.e., multiple explanation heatmaps for one prediction. **2) Comprehensibility.** For image data, [1] is limited to heatmap explanations that are ambiguous in their interpretation. For example, given a heatmap highlighting the object, it remains unclear whether the model is focusing on the object’s color, texture, or shape. By contrast, our dual-correct prediction provides explanations in user-understandable natural language based on valid visual evidence. **3) Supervision.** [1] requires ground truth location of explanations during training, which is prohibitively expensive for fine-grained rationales. In contrast, our dual-correct prediction method trains the model to learn correct rationales in a self-supervised manner without human annotations. **4) Real-world applications.** [1] is limited to experiments on small-scale toy datasets; it is unclear whether the method can be applied to real-world data. On the contrary, our experiments demonstrate the effectiveness of our dual-correct prediction method on large-scale datasets. [1] Ross et al. "Right for the right reasons: training differentiable models by constraining their explanations." IJCAI 2017. **Why does solving Eq.4 and Eq.5 achieve dual-correct predictions?** **To achieve correct predictions**, our backbone objective ensures the correct alignment of image and text embeddings in a shared space (Line 157-159), the same as the CLIP model.
**To achieve correct rationales**, the biggest challenge is the absence of human annotations of rationale explanations (Line 85-88). To address this, we propose two constraints to learn correct rationales in a self-supervised manner. One is the disentanglement constraint, which ensures the vision embeddings of different rationales from the same category are disentangled (Line 154-155). However, merely disentangling rationale embeddings can easily lead to trivial solutions (as shown in Table 6 of our ablation study). Therefore, we propose the other reconstruction constraint, which ensures the aggregation of vision embeddings for all rationales from the same category aligns with the language embedding of the category (Line 155-156). This constraint regularizes the model to achieve semantically meaningful disentanglement of rationale embeddings, i.e., correct rationales. Our experiments demonstrate the effectiveness of the proposed optimization in achieving dual-correct predictions. **Q2. Can the ontology generation method be generalized to non-ImageNet datasets?** Yes, our method can be generalized to arbitrary image recognition datasets. Our method for dataset generation does NOT rely on the underlying hierarchies for creating ImageNet, i.e., WordNet. Specifically, we utilize GPT-4 to extract the desired rationale ontologies (Line 112-113), and the prompt provided to the LLM is detailed in Appendix A. As demonstrated, our method requires only the category name to generate the rationale ontology, making it applicable to any conventional image recognition dataset, such as CIFAR, CUB, and Caltech101. **How are the locations of rationales determined?** As detailed in Sec.3.3 and **Q1**, our method can learn the locations of rationales without human annotations of location. **Q3.
Why is a tree structure used when designing rationales?** The reasons are two-fold: **1)** In the context of ensuring rationale correctness, a tree structure helps in disentangling the contributions of different rationales to the final decision. Each path from the root to a leaf can represent a logical chain of reasoning, isolating the impact of individual rationales and their interactions. This disentanglement is essential for training a model that not only performs well but also aligns its decision-making process with human-understandable reasoning. **2)** Trees are scalable structures that can adapt to varying levels of complexity in data. They can expand to accommodate new rationales or condense by merging similar nodes, providing flexibility in how data is organized. For example, our rationale ontologies can be seamlessly integrated following the ImageNet category ontology. This flexibility is crucial when dealing with complex or evolving datasets where more data becomes available. **Q4. Why does incorporating rationales also improve accuracy?** This is achieved by alleviating the model's reliance on spurious correlations, which are unrelated to the causal correlations of interest [2]. Our method forces the model to make correct predictions based on correct rationales, which makes the learned representations of the model more transferable [3]. Our model is more robust, thereby achieving improved prediction accuracy on *unseen data*, as demonstrated by zero-shot and linear probe accuracy in Table 2. [2] Arjovsky et al. "Invariant risk minimization." arXiv 2019. \ [3] Radford et al. "Learning transferable visual models from natural language supervision." ICML 2021. **Q5. What is the difference between localizability and disentanglability?** Our disentanglability measures the overlap between rationale explanation heatmaps within an image (Line 211-215).
Our localizability measures the mean Intersection over Union (mIoU) between rationale explanation heatmaps and their corresponding ground truth segmentation masks (Line 205-210). This ground truth is only available for a few small datasets, such as CUB-Part and PartImageNet. --- Rebuttal Comment 1.1: Title: Followup Questions Comment: Thank you for your rebuttal and comments. I had a couple of followup questions: 1. What is the relationship between equation 4 and finding the correct rationale? Moreover, why is D(h(...)) the correct mathematical formulation corresponding to having the correct rationale? 2. Do the results of the experiment imply that models not built upon spurious correlations would not be assisted by rationales (in terms of accuracy)? Moreover, is the role of incorporating rationales to alleviate the impact of spurious correlations? --- Reply to Comment 1.1.1: Title: Response to Reviewer L5Ee Comment: Dear Reviewer L5Ee, Thank you for your prompt reply. Below we provide point-to-point responses to your follow-up questions. **Q1. What is the relationship between equation 4 and finding the correct rationale? Moreover, why is D(h(...)) the correct mathematical formulation corresponding to having the correct rationale?** Our rationales are the fine-grained visual evidence for making a prediction. For example, to recognize a bird in the image, the valid rationales could be the beak, wings, and legs. In Equation 4, we first obtain the visual evidence of rationales using the explanation method $g(I,r)$, then enforce two constraints on rationales' representations to ensure the visual evidence is localized correctly, i.e., correct rationales. Here are the detailed explanations of Equation 4. Given an image $I$ and a rationale text $r$, explanation method $g(I,r)$ will provide a region on the image that highlights corresponding visual evidence. The function $h(g(I,r))$ will extract the representations indicated by the region from the visual encoder. 
$\mathcal{D}(\cdot,\cdot)$ is a metric that measures the distance between two representations. First of all, correct rationales are visually distinct evidence for recognizing the object, such as the beak and wing of the bird. Therefore, the vision representation of correct rationales should be disentangled from one another. The first constraint $\mathcal{D}(h(g(I,r)),h(g(I,r')))\ge\epsilon$ ensures the visual representations of different rationales for the same category are far from each other, i.e., disentanglement. However, merely disentangling the representations cannot guarantee the correct localization of visual evidence. The model can fall into trivial solutions that randomly localize invalid visual evidence with sufficient disentanglement (as shown in Table 7 of our ablation studies). To address this issue, we further enforce that the aggregation of visual representations for all rationales within the same category aligns with the language representation of the category. This limits the search space of solutions and forces the model to disentangle and localize the rationales using semantically meaningful visual evidence within the category. The second constraint $\mathcal{D}(\sum _ {r\in\{r _ k\} _ y}h(g(I,r)), f _ \theta(y)) \le \delta$ ensures that the sum of all rationale representations (which collectively explain the category) is close to the category's language representation $f _ \theta(y)$, i.e., localization. Together, these constraints ensure the model uses visual evidence that is both precise (disentangled) and coherent (localized) with the overall category when making predictions, i.e., correct rationales. Note that our method does not need human annotations like bounding boxes or segmentation masks of rationales, as they are prohibitively expensive or even impossible for large-scale datasets. **Q2. 
Do the results of the experiment imply that models not built upon spurious correlations would not be assisted by rationales (in terms of accuracy)?** No, models not built upon spurious correlations will still benefit. As explained in Q1, our rationales are concept-level visual evidence for recognizing objects. Fine-grained distinctions between rationales enable the model to capture more nuanced information about the data. This enriched representation allows the model to make more informed decisions, particularly in complex scenarios where subtle differences are crucial, such as the fine-grained classification of bird species in the CUB dataset. **Q3. Is the role of incorporating rationales to alleviate the impact of spurious correlations?** No. While alleviating spurious correlations is a significant benefit, incorporating rationales serves more important roles. 1) Incorporating rationales ensures dual-correct predictions, which significantly enhances the safety of machine learning models. 2) Rationales provide human-understandable explanations for the model's predictions using natural language. 3) As discussed earlier, the ability to distinguish between fine-grained concepts can improve the model’s accuracy. Again, thank you for your valuable comments. Please let us know if our response addresses your questions or if you have any further questions.
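To make the Q1 explanation concrete, the two constraints of Eq.4, once converted into penalty terms in the spirit of the Lagrangian form (Eq.5), could be sketched as follows. This is a minimal NumPy illustration under our own assumptions: $\mathcal{D}$ is taken to be Euclidean distance, and all vectors, thresholds ($\epsilon$, $\delta$), and multipliers are hypothetical, not the paper's actual implementation.

```python
import numpy as np

def dist(a, b):
    # D(.,.): here assumed to be Euclidean distance between representations
    return np.linalg.norm(a - b)

def rationale_penalty(rationale_reps, category_rep, eps=1.0, delta=0.5,
                      lam1=1.0, lam2=1.0):
    """Soft (Lagrangian-style) penalty for the two constraints in Eq.4.

    rationale_reps: list of h(g(I, r)) vectors, one per rationale r
    category_rep:   f_theta(y), the language representation of category y
    eps, delta, lam1, lam2: illustrative thresholds and multipliers
    """
    # Disentanglement: D(h(g(I,r)), h(g(I,r'))) >= eps for every pair r != r'
    disent = 0.0
    for i in range(len(rationale_reps)):
        for j in range(i + 1, len(rationale_reps)):
            disent += max(0.0, eps - dist(rationale_reps[i], rationale_reps[j]))
    # Localization: D(sum_r h(g(I,r)), f_theta(y)) <= delta
    agg = np.sum(rationale_reps, axis=0)
    local = max(0.0, dist(agg, category_rep) - delta)
    return lam1 * disent + lam2 * local
```

With well-separated rationale representations whose sum matches the category representation, both penalty terms vanish; identical or mislocalized representations are penalized.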
Summary: The goal of the paper is to train Vision Transformer models in a way that they make the "right predictions for the right reasons". The paper follows up on prior work in ML explainability that extracts "rationales" for model predictions. The idea of the paper (which is also considered in some prior works) is to align the rationale with the model predictions. The key issue that the paper faces is that rationales are quite difficult to label at a large scale. The paper gets around this problem by using GPT-4 based annotations where, given an ImageNet class, 6 rationales in the form of a two-level-deep tree are generated. The paper then fine-tunes some Vision Transformer models to maximize accuracy while disentangling rationales from each other. Experiments show an increase in accuracy and explainability performance. Strengths: 1. Explainability of ML models is an important topic as it is critical for building trust in models. 2. The idea of disentangling rationales is interesting and looks like it should indeed lead to better explanations. 3. Experiments cover a wide range of datasets. Weaknesses: 1. The biggest issue with the paper is that it makes some choices without assessing their merit, which leads to a number of open questions. For instance, the paper uses GPT-4 to build the graph of rationales. How were the resulting rationales from GPT-4 evaluated? How were the examples that were passed to the model in Appendix A constructed? Did the paper study how well GPT-4 followed the instructions? For instance, some of the instructions are quite intricate. Consider for instance: “These features should be visually distinctable and have limited overlap with each other”. It is not clear how the model interprets “visually distinctable” and “limited overlap”. Did the model actually follow these instructions? 2. Continuing from the previous point, in line 180: How was the value of 6 chosen? What would be the effect of changing this value? 
Should all classes have the same number of rationales associated with them? Are some classes inherently "more complex" than others? Similarly, should all rationales be disentangled? One would assume that some rationales might be overlapping. For instance, a traffic light and a traffic light pole are overlapping whereas a traffic light and the road aren’t. 3. The paper also adds no discussion on the presence of rationales in the image. Are all the rationales (e.g., beak, tail) always visually present in every image of that class? Does it ever happen that the rationale is not visually present but the explainer still predicts its presence? Similarly, how does one ensure that the model “looks at” the rationales in the input when making the prediction? Could we actually blur that part and see if the model predictions / rationale still stay the same? 4. The writing of the paper is also quite rushed. Important details are missing, which makes it hard to properly judge the merit of the paper. For instance, in line 158: What does the function $h$ actually look like? How does it connect to the vision and the language components mentioned in the previous paragraph? How is $c_i$ computed in Eq. 3 used later on? Similarly, in line 205: The rationales were extracted from GPT-4, so how is the localization ground truth (in terms of pixels in the image) obtained? In Figure 4: What exact procedure was used to get the heatmaps for CLIP explanations? How does the conversion of Eq. 4 to Eq. 5 via Lagrange multipliers solve the non-convexity issue? Isn’t the problem still non-convex? How exactly are the KKT conditions used here? Technical Quality: 2 Clarity: 1 Questions for Authors: 1. Please see the questions in points 1-4 in the "Weaknesses" section. 2. Minor nit: The example in Figure 1 seems a bit problematic. From a distance, the balloons could be taken as red lights in the shape of globes. Wouldn’t the correct question be: Is there a red *traffic* light in the image? 3. Eq. 
3: Shouldn’t $w_l$ be inside the summation? 4. Table 2 caption: The last part of the caption is difficult to parse and looks like there is a typo here “our method also hence the prediction performance”. Is this phrasing intended? 5. Line 28: Looks like the base model predictions and the rationales (which often themselves are kinds of predictions) are two often independent things. Theoretically, is it possible to have models that always make the *right prediction* (100% generalization) but always with the *wrong rationale*? Could one train such models, e.g., via using some adversarial training? Would such models be trustworthy, even when one knows they are 100% accurate? Confidence: 3 Soundness: 2 Presentation: 1 Contribution: 1 Limitations: The reviewer sees no societal issues. The paper does discuss some limitations, though there are many (see the Weaknesses section) that need discussion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed and instructive reviews. Below we provide point-to-point responses. **Q1. Why can we trust the rationales generated by GPT-4?** \ **1) Sufficient knowledge.** Existing studies prove that GPT-4 has expert-level expertise in commonsense [1] and domain knowledge [2]. **2) Prompt engineering.** Recent studies show that LLMs can follow structured instructions with examples [3-4]. Our examples in the prompt are manually refined across several runs to ensure GPT-4 follows our instructions. **3) Rationale quality evaluations.** As suggested, we conduct automatic evaluations on the rationale quality in a scalable manner, thereby eliminating expensive and subjective human assessment. Inspired by [5], we employ the latest GPT-4o and GPT-4v to assess rationale quality from three aspects rated on a Likert scale from 0 to 5. Summary results are presented below, with detailed score distributions in Fig.1 of our rebuttal's one-page PDF (metric definitions are in Fig.1 caption). Overall, 964 out of 1,000 categories have high-quality rationales (≥4.0). |Evaluator|Factual Consistency|Comprehensiveness|Visual Disentanglement| |:-:|:-:|:-:|:-:| |GPT-4o|4.74|4.39|4.52| |GPT-4v|4.89|4.59|4.61| [1] Bubeck et al. "Sparks of artificial general intelligence: Early experiments with gpt-4." arXiv 2023. \ [2] Liu et al. "Holistic evaluation of gpt-4v for biomedical imaging." arXiv 2023. \ [3] Menon et al. "Visual classification via description from large language models." ICLR 2023. \ [4] Qin et al. "Medical image understanding with pretrained vision language models: A comprehensive study." ICLR 2023. \ [5] Bills et al. "Language models can explain neurons in language models." OpenAI 2023. **Q2. Do all categories have the same number of rationales?** \ No. Each category has "approximately five to six independent concepts" (Line 180). 
We visualize the distribution of rationales, which vary from 3 to 7 with an average of 5.25 per category, in Fig.2 of the PDF. As suggested, we conduct additional studies to evaluate the effect of changing the number of rationales. We force each category to have a certain number of rationales and train the models on ImageNet, then evaluate their zero-shot accuracy: |# rationales|0|2|4|6|8|10| |-|:-:|:-:|:-:|:-:|:-:|:-:| |CIFAR-10|83.6|86.7|88.7|90.8|87.2|86.4| |CIFAR-100|59.5|62.7|64.4|68.1|56.6|56.0| |CUB|46.3|46.5|48.9|56.0|14.4|17.1| Best performance is consistently observed when the number is close to six. When the number is small, rationales have limited predictiveness; when the number is large, GPT-4 might generate low-quality rationales. **Should all rationales be disentangled?** Yes. **1)** In Appendix.A, we instruct GPT-4 to generate visually disentanglable rationales. **2)** Our evaluations in Q1 further prove the disentanglability of the rationales. For example, for the traffic light you mentioned, our rationales are ["Red, yellow, green lights", "Vertical pole structure", "Reflective surfaces", "White background color", "Hanging wires",...], which are mostly non-overlapping and should be disentangled. **Q3. Are all the rationales always visually present?** \ **For training data (ImageNet):** Not always. However, ImageNet images are mostly single-object front views in which most of the rationales are visually present. Rationales could be shared across categories, which significantly increases their overall presence. **For testing data (e.g., CUB-Part):** Always. These small datasets offer ground truth segmentation mask rationales for each image. The existence of a mask indicates the presence of the rationale. **Will the model predict a rationale that is not visually present?** No. Based on Table 7 of the main paper, using rationales that are not present (random strings) deteriorates performance. 
**Does the model "look at" rationales when making predictions?** Thanks for the constructive suggestion; we conducted a fidelity evaluation to assess this, and the answer is **Yes**. We measured the area under the insertion curve (iAUC $\uparrow$), which tracks the incremental prediction probability increase as features from the original input are sequentially added to a blurry reference based on the rationale heatmaps' order. The results indicate that our model's predictions are more sensitive to the perturbation of rationale regions: |Models|ImageNet|CIFAR-10|CIFAR-100|CUB| |-|:-:|:-:|:-:|:-:| |CLIP|0.590|0.457|0.462|0.548| |CLIP-ft|0.533|0.420|0.405|0.494| |Ours|0.612|0.498|0.509|0.636| **Q4. Additional details for equations.** \ Thanks for pointing out the typo: $w_l$ should be inside the summation in Eq.3. The $c_\mathrm{token}^i$ is then used for calculating $\langle c_\mathrm{token}^i, f_\theta(t) \rangle$, which measures the importance score of the image token to the prediction of $t$ (Line 145). The function $h(R) = \frac{1}{|R|} \sum_{i\in R} c_\mathrm{token}^i$, where $R$ denotes the set of token indices within the region. Converting Eq.4 to Eq.5 via Lagrange multipliers does NOT solve the non-convexity issue. This is the standard approach to constrained optimization problems [6]. It allows for the application of optimization techniques (e.g., SGD) that can handle non-convex problems more efficiently. [6] Boyd et al. Convex optimization. Cambridge university press, 2004. **Q5. Minors and other questions.** - In the PDF, we visualize the results for the suggested "red traffic light" in Fig.3; the model still highlights incorrect visual evidence. - Theoretically, it is possible to train a Clever Hans predictor that always relies on wrong rationales (i.e., spurious correlations) to make correct predictions. For example, in [7] the model is intentionally trained to use artificial hospital tags for classifying MRI images. 
Even if we know these models might achieve 100% accuracy, they should NOT be considered trustworthy or safe. [7] Adebayo et al. "Post hoc explanations may be ineffective for detecting unknown spurious correlation." ICLR 2022. --- Rebuttal Comment 1.1: Title: Response to Reviewer VWuj Comment: Dear Reviewer VWuj, Thank you once again for your valuable comments. To address your concern on the GPT-4 generated rationales, we further conducted **human evaluations**. In our previous rebuttal, we performed an automatic evaluation of the entire dataset, and the results indicated the high quality of our data. This human evaluation further proves that the machine evaluators are strongly aligned with human evaluators, confirming the reliability of our automatic evaluations and the high quality of our rationale dataset. Here are the details: \ **Evaluators:** \ **1) Human evaluators:** We recruited four human evaluators, who are mostly graduate students. They are asked to conduct assessments based on commonsense knowledge and perform Internet searches for validation. On average, it takes them one minute per sample. **2) Machine evaluators:** The latest GPT-4o and GPT-4v models. For each evaluation, we perform three independent runs and calculate the average scores. 
**Evaluation Metrics:** \ Factual Consistency: Whether the rationales are consistent with facts \ 5 - 100% consistent with fact \ 4 - 75% \ 3 - 50% \ 2 - 25% \ 1 - 0% consistent with fact (completely wrong) Comprehensiveness: Whether the rationales provide all information necessary to predict the category \ 5 - cover 100% of discriminative visual features \ 4 - cover 75% \ 3 - cover 50% \ 2 - cover 25% \ 1 - cover 0% of discriminative visual features Visual Disentanglement: Whether the rationales are visually disentanglable or non-overlap \ 5 - 100% of rationale visually non-overlap (completely disentangle) \ 4 - 75% non-overlap \ 3 - 50% non-overlap \ 2 - 25% non-overlap \ 1 - 0% of rationale visually non-overlap (completely overlap) **Evaluation Data:** \ We sample **three independent groups** of data from our rationale dataset, each consisting of 50 categories and their corresponding rationales. Specifically, categories were randomly selected from their superclasses: Animals (20), Objects & Artifacts (15), Natural Scenes (5), Plants (5), and Human Activities (5). This ensures not only each superclass is represented but also the robustness of our results [1]. [1] Torralba et al. "Unbiased look at dataset bias." CVPR 2011. 
**Quantitative Results:** | Evaluator | Factual Consistency | Comprehensiveness | Visual Disentanglement | |:---:|:---:|:---:|:---:| | GPT-4o | 4.89$\pm$0.05 | 4.55$\pm$0.06 | 4.66$\pm$0.06 | | GPT-4v | 4.92$\pm$0.03 | 4.67$\pm$0.05 | 4.70$\pm$0.02 | | **Machine Avg.** | **4.91** | **4.61** | **4.68** | | | | | | | Human_A | 4.85$\pm$0.11 | 4.64$\pm$0.19 | 4.42$\pm$0.15 | | Human_B | 4.97$\pm$0.02 | 4.77$\pm$0.02 | 4.20$\pm$0.11 | | Human_C | 4.78$\pm$0.04 | 4.60$\pm$0.11 | 4.78$\pm$0.10 | | Human_D | 4.81$\pm$0.08 | 4.64$\pm$0.05 | 4.77$\pm$0.07 | | **Human Avg.** | **4.85** | **4.66** | **4.54** | **Takeaway 1: The quality of our generated rationale dataset is high.** In the table above, we show the results averaged over the three sample groups. The average scores demonstrate a high degree of agreement between machine and human evaluators. The dataset consistently achieves scores of 4.61 or higher on the average of machine and human evaluators for each metric, indicating that over 90.3% of the rationales for each category are highly factual, comprehensive, and visually disentanglable. **Takeaway 2: Our automatic evaluation is reliable and scalable.** As shown in the table above, the average scores of each metric are almost identical between machines and humans. The Pearson correlation coefficient of 0.82 reveals a strong positive correlation between machine and human evaluations. Therefore, the automatic evaluation method can efficiently evaluate the entire dataset while aligning with human evaluations. Note that the results for the entire dataset are reported in the previous rebuttal. **Qualitative Results:** \ Here we list some categories that were rated highly in the automatic evaluation. As shown, all the rationales are consistent with fact, sufficient for distinguishing the corresponding category, and visually non-overlapping (disentangled). 
\ "ostrich": ["Long, bare neck", "Large, powerful legs", "Small, feathered head", "Two-toed feet", "Black-and-white feathers", and "Long, curved beak"] \ "tabby cat": ["Striped fur pattern", "Dark "M" on forehead", "White paws/chest", "Dark ears/tail tip", "Round face shape", "Whiskers/eyebrows"] \ "airliner": ["Long, slender body", "Multiple engines", "High wingspan", "Narrow fuselage", "Tail fin", "Multiple windows"] We sincerely appreciate the time and effort you have invested in providing us with your constructive feedback. We eagerly look forward to hearing from you and are fully prepared to address any additional questions or concerns you may have. --- Reply to Comment 1.1.1: Title: We are keen to discuss further with you Comment: Dear Reviewer VWuj, Once again, we sincerely appreciate the time and effort you have invested in providing us with your constructive feedback. As the author-reviewer discussion period draws to a close in less than two days, we would like to reach out and ensure that our point-by-point responses and additional evaluations have addressed your questions. Your feedback is valuable to us, and we are eager to hear your thoughts. Best regards, Authors
Rebuttal 1: Rebuttal: Dear Reviewers, Area Chairs, and Program Chairs, Thank you for your time and effort in reviewing our paper. We appreciate that the reviewers find our paper `"does a nice job"` (WT2A) of studying an `"important topic"` (VWuj) which is `"critical"` (VWuj) for building trust in models, our research problem of dual-correct prediction is `"intuitive and informative"` (L5Ee, WT2A), our collected dataset is `"unique"` (dtoB) and `"useful for future research"` (WT2A), our idea of `"disentangling rationales is interesting"` (VWuj, L5Ee), `"new"` (dtoB), and `"can easily be used"` (L5Ee), our experiments `"cover a wide range of datasets"` (VWuj, L5Ee, WT2A), and our results are `"strong"` (WT2A). We have responded to the individual comments of each reviewer. **Contribution of this work.** The main contribution of this paper is that we introduce and formally define the dual-correct prediction problem, i.e., correct prediction with correct rationale. We define the rationales as the valid visual evidence for the language justifications of the prediction. This problem is critical since most machine learning models are currently evaluated primarily on prediction accuracy, overlooking a critical aspect for ensuring safety, i.e., the validity of the reasons behind their accurate predictions. To achieve dual-correct prediction, we propose a scalable pipeline to automatically augment existing datasets with language rationales. Based on our dataset, we develop a principled Rationale-informed Optimization method to disentangle and localize rationales on the images. We evaluate the proposed method on 14 benchmark datasets for prediction accuracy and rationale validity. **Summary of reviews.** Reviewers L5Ee and dtoB vote for acceptance. Reviewers VWuj and WT2A vote for rejection. We summarize the main comments of the four reviewers below: 1. The major concern of Reviewers VWuj and WT2A is the trustworthiness of the rationales generated by GPT-4. 
On the one hand, the feasibility of this idea is supported by literature showing that GPT-4 has expert-level expertise in commonsense and domain knowledge, and can correctly follow user instructions via prompt engineering. On the other hand, we provide automatic evaluations of the quality of generated rationales in terms of three metrics. The results show that 96.4% of the rationales are assessed as high-quality. These additional results further enhance the trustworthiness of our generated rationales. 2. The common concern of Reviewers L5Ee and dtoB is the generalizability of our rationale generation pipeline to non-ImageNet datasets. We provide detailed explanations of our automatic rationale generation pipeline through GPT-4. We clarify that our method does not rely on the underlying hierarchies for creating the original dataset, such as WordNet for ImageNet; it only requires the category names. Therefore, our rationale generation pipeline can be applied to arbitrary conventional image classification datasets. 3. Reviewers L5Ee, WT2A, and dtoB request details on achieving dual-correct predictions by solving Eq.4 and Eq.5. We provide explanations on how our backbone objective ensures correct predictions. Moreover, we clarify why we need the proposed disentanglement and reconstruction constraints. We provide detailed explanations on how these constraints work together to achieve correct rationales, i.e., localize rationales using valid visual evidence. We will make this part clearer in the final version. We believe that these clarifications and additional results have addressed the concerns of the reviewers, and kindly ask the reviewers to take this into account when considering score adjustments. We welcome any further discussion with the reviewers. Pdf: /pdf/adc66f794140d55b1fdbacc13272d65002402f3f.pdf
NeurIPS_2024_submissions_huggingface
2024
2DQuant: Low-bit Post-Training Quantization for Image Super-Resolution
Accept (poster)
Summary: The paper presents 2DQuant, an innovative low-bit post-training quantization technique for transformer-based image super-resolution that significantly advances the state-of-the-art by introducing a two-stage optimization process. This process includes a novel Distribution-Oriented Bound Initialization strategy and a Distillation Quantization Calibration method, resulting in exceptional performance with minimal loss of accuracy for SwinIR-light. The 2DQuant approach demonstrates the potential to compress and accelerate transformer-based super-resolution models effectively for edge deployment. Strengths: 1. This paper is well-written and easy to understand. 2. Transformer-based SR has become very popular in recent years; this method is very helpful for obtaining accurate quantized transformer-based SR models. 3. The experimental results and visualizations are sufficient. Weaknesses: 1. The proposed DOB and DQC are very similar to DBDC and PaC, only changing weight compression to symmetric quantization; could you give more details about the differences? 2. The motivation in the introduction is confusing. Usually, the deterioration of self-attention in the quantized transformer causes severe performance degradation, rather than the unadaptable changes in weight and activation distributions, which also appear in CNN-based SR networks. 3. The selected baselines in this paper are rough and the experimental results are not convincing. This paper does not compare with transformer-based post-training quantization methods, such as PTQ4ViT[1], FQ-ViT[2], NoiseQuant[3] and RepQ-ViT[4]. These methods are open-source, so the authors should apply these PTQ methods to super-resolution, given that the basic transformer blocks are not different between super-resolution and image classification. [1] Z. Liu et al. Post-training quantization for vision transformer. Advances in Neural Information Processing Systems, 34:28092–28103, 2021. [2] Y. Lin et al. 
Fq-vit: Post-training quantization for fully quantized vision transformer. arXiv preprint arXiv:2111.13824, 2021. [3] Y. Liu et al. Noisyquant: Noisy bias-enhanced post-training activation quantization for vision transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 20321– 20330, 2023. [4] Z. Li et al. Repq-vit: Scale reparameterization for post-training quantization of vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 17227–17236, 2023. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the weaknesses above. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have discussed the limitations in this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Reviewer4 EXmU > Q1: The proposed DOB and DQC are very similar to DBDC and PaC, only changing weight compression to symmetric quantization; could you give more details about the differences? A1: In fact, our DOBI and DQC are quite different from DBDC and PaC. The specific differences are as follows: - **DOBI vs. DODB:** 1. We perform DOBI on both weights and activations, while DODB only on weights. 2. We use a one-sided search for activations, while DODB does not involve this aspect. 3. Our search objective is to find the minimum MSE, which can be accelerated by GPU, whereas DODB aims to find the minimum interval containing the T-th parameter size, where T is a hyper-parameter. - **DQC vs. PaC:** 1. Granularity is different. **DQC** is performed on the four Transformer layers in the SwinIR architecture, which is the largest substructure. We have tried finer granularity, such as layer-wise (24 in total) or linear-wise (96 in total), and the results indicate that this granularity is the most suitable. Due to page limits, this part of the content was not included in the paper. In contrast, **DODB** is performed on each residual block, totaling 32 blocks, which is slower and worse than DQC. > Q2: The motivation in the introduction is confusing. ... A2: We assume that you are referring to lines 60-62 in the original text. Our statement describes the surface-level symptom, while your statement captures the underlying cause. We will modify this part to be "Secondly, most of these methods cannot adapt well to Transformer-based models because of the deterioration of self-attention in the quantized transformer.". > Q3: The selected baselines in this paper are rough and the experimental results are not convincing. This paper does not compare with transformer-based post-training quantization methods, such as PTQ4ViT[1], FQ-ViT[2], NoiseQuant[3] and RepQ-ViT[4]... A3: Thank you for your suggestions! 
For the four articles you mentioned, we have completed comparative experiments with them. The results of the experiments are shown below and more results can be found in the attachment file in Author Rebuttal. | Method | Bit | Set5($\times 2$) | | Set14($\times 2$) | | B100($\times 2$) | | Urban100($\times 2$) | | Manga109($\times 2$) | | |---|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| | | | PSNR$\uparrow$ | SSIM$\uparrow$ | PSNR$\uparrow$ | SSIM$\uparrow$ | PSNR$\uparrow$ | SSIM$\uparrow$ | PSNR$\uparrow$ | SSIM$\uparrow$ | PSNR$\uparrow$ | SSIM$\uparrow$ | | Baseline | 32 | 38.15 | 0.961 | 33.86 | 0.921 | 32.31 | 0.901 | 32.76 | 0.934 | 39.11 | 0.978 | | Bicubic | 32 | 32.25 | 0.912 | 29.25 | 0.841 | 28.68 | 0.810 | 25.96 | 0.809 | 29.17 | 0.913 | | PTQ4ViT | 2 | 33.25 | 0.892 | 30.22 | 0.840 | 29.21 | 0.807 | 27.31 | 0.811 | 32.75 | 0.909 | | RepQ | 2 | 31.65 | 0.833 | 29.19 | 0.779 | 28.27 | 0.741 | 26.56 | 0.746 | 30.46 | 0.827 | | NoisyQuant | 2 | 30.13 | 0.762 | 28.80 | 0.756 | 28.26 | 0.742 | 26.68 | 0.763 | 30.40 | 0.820 | | Ours | 2 | 36.00 | 0.950 | 31.98 | 0.901 | 30.91 | 0.881 | 28.62 | 0.882 | 34.40 | 0.960 | | Method | Bit | Set5($\times 3$) | | Set14($\times 3$) | | B100($\times 3$) | | Urban100($\times 3$) | | Manga109($\times 3$) | | |---|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| | | | PSNR$\uparrow$ | SSIM$\uparrow$ | PSNR$\uparrow$ | SSIM$\uparrow$ | PSNR$\uparrow$ | SSIM$\uparrow$ | PSNR$\uparrow$ | SSIM$\uparrow$ | PSNR$\uparrow$ | SSIM$\uparrow$ | | Baseline | 32 | 34.63 | 0.929 | 30.54 | 0.846 | 29.20 | 0.808 | 28.66 | 0.862 | 33.99 | 0.948 | | Bicubic | 32 | 29.54 | 0.852 | 27.04 | 0.755 | 26.78 | 0.719 | 24.00 | 0.714 | 26.16 | 0.838 | | PTQ4ViT | 2 | 29.96 | 0.790 | 27.36 | 0.700 | 26.74 | 0.659 | 24.56 | 0.646 | 27.37 | 0.739 | | RepQ | 2 | 27.32 | 0.648 | 25.63 | 0.592 | 25.44 | 0.565 | 23.42 | 0.558 | 24.51 | 0.572 | | NoisyQuant | 2 | 27.53 | 0.664 | 25.77 | 0.595 | 25.37 | 0.561 | 23.59 | 
0.574 | 26.03 | 0.663 | | Ours | 2 | 31.62 | 0.889 | 28.54 | 0.804 | 27.85 | 0.768 | 25.30 | 0.768 | 28.46 | 0.881 | | Method | Bit | Set5($\times 4$) | | Set14($\times 4$) | | B100($\times 4$) | | Urban100($\times 4$) | | Manga109($\times 4$) | | |---|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| | | | PSNR$\uparrow$ | SSIM$\uparrow$ | PSNR$\uparrow$ | SSIM$\uparrow$ | PSNR$\uparrow$ | SSIM$\uparrow$ | PSNR$\uparrow$ | SSIM$\uparrow$ | PSNR$\uparrow$ | SSIM$\uparrow$ | | Baseline | 32 | 32.45 | 0.898 | 28.77 | 0.786 | 27.69 | 0.741 | 26.48 | 0.798 | 30.92 | 0.915 | | Bicubic | 32 | 27.56 | 0.790 | 25.51 | 0.682 | 25.54 | 0.647 | 22.68 | 0.635 | 24.19 | 0.767 | | PTQ4ViT | 2 | 27.23 | 0.670 | 25.38 | 0.591 | 25.15 | 0.562 | 22.94 | 0.559 | 24.66 | 0.613 | | RepQ | 2 | 25.55 | 0.583 | 23.54 | 0.475 | 23.30 | 0.430 | 21.62 | 0.449 | 23.60 | 0.556 | | NoisyQuant | 2 | 25.94 | 0.586 | 24.33 | 0.507 | 24.16 | 0.472 | 22.32 | 0.484 | 23.82 | 0.540 | | Ours | 2 | 29.53 | 0.837 | 26.86 | 0.732 | 26.47 | 0.693 | 23.84 | 0.691 | 26.07 | 0.816 | --- Rebuttal 2: Comment: Thank you for the response and additional experiments. The specific differences between 2DQuant and PTQ4SR are clearly stated, which further shows that the differences between these two methods are marginal. DOBI on both weights and activations, one-sided searching, new search objective, and different granularity are just engineering optimisations; the core design is still based on PTQ4SR. Although 2DQuant gets much better performance than baselines, the academic contributions of this paper are negligible. And I agree with Reviewer DoGX that this paper uses the A+B approach to exaggerate the contribution. So I decided to keep my score. --- Rebuttal Comment 2.1: Comment: Dear Reviewer EXmU, We appreciate your constructive comments on the additional experiments and the positive feedback on our performance. We would like to continue addressing your concerns and resolving any misunderstandings. 
> Q1: The specific differences about 2DQuant and PTQ4SR ... are marginal. DOBI on both weights and activations, one-side searching, new search object and different granularity are just engineering optimisations, the core distributions are still based on PTQ4SR.

A1: We clarify that the only similarity between our DOBI and PTQ4SR is the use of a search algorithm; all other aspects are different. If these differences were merely considered engineering optimizations, then under this assumption most search-based PTQ methods would lack any innovative aspect, which is clearly unrealistic.

> Q2: Although 2DQuant gets much better performance than baselines, the academic contributions of this paper are negligible.

A2: Our 2DQuant shows an increase in PSNR of up to 4.52 dB on Set5 (×2) compared with PTQ4SR. This increase is significant in the SR field. How can such a substantial improvement be considered negligible? The SR field urgently requires lightweight, high-performance DNN-based SR models. Our method demonstrates that a DNN-based SR model with 2-bit quantization can surpass Bicubic, which is the standard for practical use. Therefore, our method can greatly accelerate the deployment of SR models in the real world, representing a significant contribution to the SR field.

> Q3: ...this paper uses the A+B approach to exaggerate the contribution.

A3: Could you please specify how we have exaggerated our contributions? We believe that your request for additional experiments is intended to further highlight and strengthen our contributions. The additional experimental results show that 2DQuant surpasses all other PTQ methods you requested for comparison, as well as the state-of-the-art PTQ4SR, confirming that we have not exaggerated our contributions. We value your feedback and have addressed each of your concerns in detail.
The significant performance improvements demonstrated by 2DQuant, combined with its novel approach and impact on real-world deployment, affirm its substantial contribution to the SR field. We hope this clarification resolves any misunderstandings and further underscores the value of our work. Thank you for your consideration.

Sincerely,
Authors

---

Rebuttal 3:

Comment: Thanks for the detailed explanation; I think I misunderstood the contributions of this paper a bit. 2DQuant is very useful for accelerating the deployment of transformer-based image super-resolution. This method is particularly beneficial for reducing the quantization error of post-softmax and post-GELU feature maps in transformer architectures. After reviewing the latest response, I decided to raise my score to 6 (weak accept).

---

Rebuttal Comment 3.1:

Comment: Dear Reviewer EXmU,

We extend our sincerest gratitude for your thoughtful and constructive feedback on our 2DQuant. Your recognition of the innovative aspects of our work, particularly the utility of 2DQuant in enhancing the deployment of transformer-based image super-resolution models, is **greatly appreciated**. We are honored by your decision to raise your score to a "weak accept", reflecting your positive view of our research's contribution to the model quantization and SR fields. **Thank you again for your time and the positive impact your suggestions have had on our work.**

Sincerely,
Authors
Summary: This paper introduces a novel two-stage post-training quantization (PTQ) method aimed at compressing image super-resolution (SR) models for efficient deployment on edge devices. The authors address the challenge of accuracy degradation in low-bit quantization by proposing the 2DQuant method, which combines Distribution-Oriented Bound Initialization (DOBI) and Distillation Quantization Calibration (DQC) to achieve efficient and accurate super-resolution under low-bit quantization.

Strengths:
1. The writing of this paper is very clear.
2. The figures in the paper are simple and easy to understand.
3. This paper conducted experiments to verify the effectiveness of the proposed method.

Weaknesses:
1. The contribution lacks novelty. This paper does not bring any new insights. For example, the distribution of weights and activations is well studied by previous works [1][2][3]. Some papers have used searchable upper/lower bounds for quantization [2] (although these methods are based on QAT, the essence of QAT and PTQ is not different, with only slight differences in training strategies) and NO references were given in this manuscript. The proposed distillation approach is a widely used routine operation in quantization tasks [1][2][3][4][5][6].
2. Experiments are insufficient. PTQ for transformers has been extensively studied in many other papers (e.g., [4][5]), but it seems that the authors have not compared with these methods.
3. Experimental settings are not consistent. Only 100 images are used in [7], which is inconsistent with this paper.

References:
[1] Li, Huixia et al. “PAMS: Quantized Super-Resolution via Parameterized Max Scale.” ArXiv abs/2011.04212 (2020).
[2] Zhong, Yunshan et al. “Dynamic Dual Trainable Bounds for Ultra-low Precision Super-Resolution Networks.” European Conference on Computer Vision (2022).
[3] Hong, Chee and Kyoung Mu Lee.
“Overcoming Distribution Mismatch in Quantizing Image Super-Resolution Networks.” ArXiv abs/2307.13337 (2023).
[4] Li, Yanjing et al. “Q-ViT: Accurate and Fully Quantized Low-bit Vision Transformer.” ArXiv abs/2210.06707 (2022).
[5] Liu, Shi et al. “Oscillation-free Quantization for Low-bit Vision Transformers.” International Conference on Machine Learning (2023).
[6] Tu, Zhaopeng et al. “Toward Accurate Post-Training Quantization for Image Super Resolution.” 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2023): 5856-5865.

Technical Quality: 2
Clarity: 2
Questions for Authors: See weaknesses.
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 1
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes
Rebuttal 1:

Rebuttal:

> Q1a: The contribution lacks novelty... For example, the distribution of weights and activations is well studied by previous works.

A1a: First, the study of distributions is not our core contribution or innovation; it is one of our experimental observations. Our main contributions are: 1) the proposed 2DQuant, which utilizes DOBI and DQC to quantize Transformer-based models; 2) 2DQuant achieves a 3.60× compression ratio, a 5.08× speedup ratio, and a 4.52 dB increase in PSNR on Set5 compared with the SOTA [6]. Second, [1][2][3] focus on CNN-based models, such as ResNet, EDSR, and RDN, while our paper targets SwinIR, a Transformer-based model. There are significant differences between CNN and Transformer-based networks in terms of both architecture and model performance, so directly applying these CNN-based insights/techniques to Transformer-based networks is not straightforward and may even be impossible.

> Q1b: Some papers have used searchable upper/lower bounds for quantization (although these methods are based on QAT, the essence of QAT and PTQ is no different, with only slight differences in training strategies) and NO references were given in this manuscript.

A1b: **First**, PTQ and QAT are two different concepts. PTQ stands for Post-Training Quantization, while QAT stands for Quantization-Aware Training. The former uses a pre-trained neural network without updating its parameters, only updating the quantizer parameters. In contrast, the latter typically starts from an untrained neural network and updates both the network parameters and the quantizer parameters. From the perspective of memory usage, PTQ mainly requires storing the parameter values and their gradients (for backpropagation); the space required for gradients matches that of the parameters. In addition, QAT needs to store the optimizer states for all parameters because the network's parameters must be updated.
For example, Adam (used in [2]) requires saving momentum and squared gradients for each parameter, each with the same shape as the parameters. Therefore, for the same model, the memory requirement of QAT is typically twice that of PTQ, which is not a slight difference. **Second**, ours is not a survey paper, so we do not need to cite all relevant articles. We have already cited similar works (like [6]), and not citing [2] is not a serious issue. To further enhance the quality of the paper, we will include a discussion of this article in the Related Work section.

> Q1c: The proposed distillation approach is a widely used routine operation in quantization tasks.

A1c: Although we both use a distillation method, there are significant differences in the specific implementations. 1. We only distill the quantizer parameters and do not need to distill the model parameters, which greatly reduces memory requirements and speeds up the algorithm. 2. The granularity of distillation is different. We target the largest substructure of the Transformer, the Transformer layer. In SwinIR-light, there are a total of four Transformer layers, each containing six Transformer blocks. In contrast, [1-6] distill the residual blocks, of which there are 32 in EDSR, eight times more than ours. Therefore, our loss function computation is faster and the distillation approach is different.

> Q2: Experiments are insufficient...

A2: Both [4] and [5] are QAT methods that train the model from scratch, while our proposed algorithm is a PTQ method. Comparing a PTQ method with a QAT method is unfair, as QAT often performs better than PTQ thanks to fine-tuning of the model parameters. A fairer comparison between [4][5] and our paper is to apply their methods to optimize the quantizer parameters while freezing the model parameters. The results are shown below, and more can be found in the Author Rebuttal's attachment file. The additional experiments show our robustness and advantages.
| Method | Bit | Set5 ($\times 4$) | | Set14 ($\times 4$) | | B100 ($\times 4$) | | Urban100 ($\times 4$) | | Manga109 ($\times 4$) | |
|---|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| | | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM |
| OFQ* | 3 | 30.16 | 0.854 | 27.26 | 0.746 | 26.73 | 0.705 | 24.25 | 0.714 | 26.93 | 0.843 |
| Ours | 3 | 30.91 | 0.870 | 27.75 | 0.757 | 26.99 | 0.713 | 24.85 | 0.736 | 28.21 | 0.868 |
| OFQ* | 2 | 29.15 | 0.827 | 26.59 | 0.725 | 26.33 | 0.688 | 23.59 | 0.680 | 25.61 | 0.803 |
| Ours | 2 | 29.53 | 0.837 | 26.86 | 0.732 | 26.47 | 0.693 | 23.84 | 0.691 | 26.07 | 0.816 |

> Q3: Experimental settings are not consistent. Only 100 images are used in [7], which is inconsistent with this paper.

A3: Our paper mentions using 32 images in line 228. **First**, ours and [6] are two independent papers, so there is no need to adopt the same settings. **Second**, we achieved better performance with fewer images, which is an advantage rather than a weakness. **Finally**, the results with 100 images are shown below; more can be seen in the Author Rebuttal.

| Set Size | Bit | Set5 ($\times 4$) | | Set14 ($\times 4$) | | B100 ($\times 4$) | | Urban100 ($\times 4$) | | Manga109 ($\times 4$) | |
|---|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| | | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM |
| 32 | 3 | 30.91 | 0.870 | 27.75 | 0.757 | 26.99 | 0.713 | 24.85 | 0.736 | 28.21 | 0.868 |
| 100 | 3 | 30.94 | 0.870 | 27.79 | 0.757 | 27.02 | 0.713 | 24.90 | 0.737 | 28.22 | 0.867 |
| 32 | 2 | 29.53 | 0.837 | 26.86 | 0.732 | 26.47 | 0.693 | 23.84 | 0.691 | 26.07 | 0.816 |
| 100 | 2 | 29.82 | 0.843 | 27.03 | 0.736 | 26.57 | 0.696 | 24.01 | 0.698 | 26.40 | 0.823 |

References:
[1] Li, Huixia et al. “PAMS: Quantized Super-Resolution via Parameterized Max Scale.” ECCV (2020).
[2] Zhong, Yunshan et al. “Dynamic Dual Trainable Bounds for Ultra-low Precision Super-Resolution Networks.” ECCV (2022).
[3] Hong, Chee and Kyoung Mu Lee. “Overcoming Distribution Mismatch in Quantizing Image Super-Resolution Networks.” ECCV (2024).
[4] Li, Yanjing et al. “Q-ViT: Accurate and Fully Quantized Low-bit Vision Transformer.” NIPS (2022).
[5] Liu, Shi et al. “Oscillation-free Quantization for Low-bit Vision Transformers.” ICML (2023).
[6] Tu, Zhaopeng et al.
“Toward Accurate Post-Training Quantization for Image Super Resolution.” CVPR (2023).

---

Rebuttal Comment 1.1:

Comment: The authors' rebuttal partially solved my problem. The authors mentioned that "not citing [2] is not a serious issue." In fact, PTQ and QAT differ only in the training process; there is no difference in quantization essence. Since there are already highly similar quantization approaches in previous works, the authors should first consider clarifying the contribution of this paper in their writing, rather than attempting to use the A+B approach to avoid citation and exaggerate the contribution. So the final rating would be 3.

---

Reply to Comment 1.1.1:

Comment: Dear Reviewer DoGX,

We appreciate your constructive reviews and positive feedback on our work, 2DQuant. We would like to continue addressing your concerns and resolving any misunderstandings.

> Q1: The author's rebuttal partially solved my problem.

First, you mentioned that we partially solved your problem. We would like to know what remains unresolved; we will do our best to address it. Please let us know the specifics.

> Q2: In fact, PTQ and QAT are only different in the training process, and there is no difference in quantization essence.

Second, you emphasize once again that QAT shares the same essence as PTQ. We do not agree with this point, due to the differences in training cost, performance degradation, and task-specific designs. Nevertheless, we have provided detailed experimental results for [2]. We also mentioned in the rebuttal that we will cite [1] in our revision. Our method outperforms [2] when following the PTQ settings. We think we have reached an agreement on this issue.

> Q3: Since there are already highly similar quantization approaches in previous works...

Third, we do not think the previous works are "highly similar" to ours.
**If so, how could our method significantly outperform other methods in the 2-bit scenario?** The experimental results demonstrate that our approach is distinct from previous ones and **certainly not a simple combination of A and B**.

> Q4: ...rather than attempting to use the A+B approach to avoid citation and exaggerate the contribution.

Additionally, we must clarify that **we do not avoid citation or exaggerate the contribution**. As mentioned earlier, we will cite [1] in the revision. Furthermore, the experimental results, not we, demonstrate the contributions, leaving no room for exaggeration.

> Q5: So the final rating would be 3.

We noticed that you scored **3** at first, but shortly after the beginning of the rebuttal you revised it to **2** without explanation. After we addressed your concerns and made efforts to resolve them, you reverted your rating back to **3**. Could you please clarify the reason behind the initial change? Kindly list any unresolved issues so we can further improve the work.

Thank you for your continued feedback and constructive engagement with our work. We look forward to resolving any remaining concerns and continuing the discussion to further improve our research.

Best Regards,
Authors

[1] Zhong, Yunshan et al. “Dynamic Dual Trainable Bounds for Ultra-low Precision Super-Resolution Networks.” ECCV (2022).
[2] Liu, Shi et al. “Oscillation-free Quantization for Low-bit Vision Transformers.” ICML (2023).

---

Rebuttal 2:

Comment: Dear Authors,

I hope this message finds you well. My concerns still revolve around the similarity between the core idea of this article and [1]. Although you mentioned the differences in training details between PTQ and QAT (such as the parameter updates during the backward pass) and the contribution of your method to SwinIR (which I have never denied), the academic contribution of this paper is still not convincing.
The difference between SwinIR and CNN models lies in the model structure, rather than in the constituent modules of the model. Therefore, applying existing quantization methods to a new model architecture, as this paper does, carries a certain amount of engineering contribution rather than academic contribution. If the paper claims contribution from the perspective of engineering effort, should deployment on practical inference frameworks then become a necessary factor? In conclusion, I choose to maintain my score.

Sincerely,
Reviewer DoGX

References:
[1] Zhong, Yunshan et al. “Dynamic Dual Trainable Bounds for Ultra-low Precision Super-Resolution Networks.” ECCV (2022).

---

Rebuttal Comment 2.1:

Comment: Dear Reviewer DoGX,

Thank you for your quick response and for outlining the concern that remains unresolved.

> Q1: My concerns still revolve around the similarity between the core idea of this article and [1]

A1: The main contributions of [1] are 1) a layer-wise quantizer with trainable upper and lower bounds, and 2) a dynamic gate controller to adaptively adjust the upper and lower bounds at runtime. We differentiate ourselves from [1] in the following aspects:

- **Quantizer.** We have indeed utilized the same quantizer, but this quantizer is widely used and is not our main contribution.
- **Different loss functions.** The granularity of the loss functions during training differs. Our DQC is conducted on the four Transformer layers within the SwinIR architecture, which represent the largest substructure. Experimentation with finer granularity, such as layer-wise (24 in total) or linear-wise (96 in total), has shown that the Transformer-layer granularity is optimal. Due to page limitations, this part was omitted from the paper. In contrast, [1] applies quantization to each residual block, totaling 32 blocks in EDSR, which is slower and less effective than DQC.
- **No additional modules.** [1] employs a __dynamic gate controller__, necessitating additional floating-point (FP) modules to obtain dynamic bounds, thereby increasing the inference cost. We, on the other hand, use entirely static quantizers and do not require any additional modules, achieving the theoretical optimal speedup ratio for quantization.
- **Initial values from DOBI.** [1] uses the percentile method to assign initial values, which is highly sensitive to the hyperparameters and thus requires a time-consuming search for the best parameters. Failure to do so can lead to **model collapse**, as reported in issue 3 of their GitHub repository. This is particularly problematic for Transformer-based models, whose self-attention modules contain many more activations; the consequence is that all elements in the attention map become the same value after quantization. Therefore, it is imperative to apply our efficient DOBI method for initial value assignment. The experimental results in Table 3 demonstrate that DOBI alone can achieve SOTA performance.

Additionally, we would like to bring to your attention a **critical issue** in [1]'s implementation. They report their quantization method as per-layer quantization, but in their code (lines 327-349 of `model/quant_ops.py` in the GitHub repo "zysxmu/DDTB"), they actually use different bounds for different channels within one activation tensor. This implementation is a typical error of per-channel quantization: it is not applicable on real-world GPUs and will greatly **inflate** the measured model performance. The correct approach to quantization uses a single pair of clipping bounds for activations and different pairs for different convolutional kernels, as also noted in [2]. We hope you can focus on **correct** quantization approaches instead of one with an implementation error.

We trust these clarifications address your concerns and further highlight the unique contributions of our work.
Sincerely,
Authors

References:
[1] Zhong, Yunshan et al. "Dynamic Dual Trainable Bounds for Ultra-low Precision Super-Resolution Networks." ECCV (2022).
[2] Markus Nagel et al. "A White Paper on Neural Network Quantization." arXiv:2106.08295.
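To make the per-tensor vs. per-channel distinction in the thread above concrete, here is a minimal NumPy sketch (the function names are illustrative, not the authors' code): a single clipping pair for a whole activation tensor is the deployable choice, while weights may use one pair per output channel, since their bounds are fixed offline.

```python
import numpy as np

def quantize(x, lo, hi, bits=2):
    # Uniform fake quantization: clip to [lo, hi], snap to 2^bits levels, dequantize.
    s = (hi - lo) / (2 ** bits - 1)
    return np.round((np.clip(x, lo, hi) - lo) / s) * s + lo

def per_tensor(x, bits=2):
    # One clipping pair for the whole tensor: the hardware-friendly choice for activations.
    return quantize(x, x.min(), x.max(), bits)

def per_channel(w, bits=2):
    # One clipping pair per output channel: acceptable for weights, whose bounds are static.
    return np.stack([quantize(w[c], w[c].min(), w[c].max(), bits)
                     for c in range(w.shape[0])])
```

Per-channel bounds on a runtime activation tensor (the implementation error flagged above) lower the quantization error and thus inflate reported accuracy, but they do not correspond to what deployable kernels compute.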
Summary: This paper presents a practical post-training quantization method for SR transformers, namely 2DQuant. 2DQuant relies on two techniques: the first, Distribution-Oriented Bound Initialization (DOBI), determines the initial quantization range, and the second, Distillation Quantization Calibration (DQC), further fine-tunes the quantizer for accurate quantization. Both techniques significantly improve the performance of PTQ for SR transformers, and the quantization can be pushed as low as 2-bit without retraining.

Strengths:
1. The proposed method is a PTQ method without full retraining, in contrast to the usually studied quantization-aware training methods for SR tasks, which can be seen as resource-saving in real applications. Considering that the targeted SR transformers are typically used as well-pretrained models, the PTQ pipeline is significant and practical.
2. The proposed quantization method for SR transformers is clear and effective. DOBI lets the optimization of quantization ranges start from a statistical, search-based optimum, and DQC then fully optimizes the quantizer. It brings improvements to both weight and activation quantization, especially for activations, whose dynamic distributions require a more robust and powerful quantizer.
3. Experiments show that the proposed method achieves SOTA results on ultra-low 2-4-bit SR networks, which allows SR transformers to enjoy very good efficiency with little accuracy loss. The visualizations also show the good performance of the networks quantized by the proposed method.
4. The paper is easy to follow, with good writing and presentation, and with comprehensive visualizations of the distributions of both weights and activations.

Weaknesses:
1.
As presented in Section 3, the authors use fake quantization during the design of the method but do not mention whether it can be replaced by real quantization, which can be implemented in hardware and bring real acceleration. The authors should discuss whether the proposed method can achieve real quantized inference at deployment.
2. The algorithm for DQC is missing, which makes the description less clear; I suggest adding it or merging it into the DOBI algorithm. Also, Figure 3 does not highlight the application of the proposed techniques, which I also suggest revising.
3. Some writing details should be improved and polished carefully, e.g., the subtitles of Sections 2 and 3; some explanations of the proposed equations are missing.

Technical Quality: 3
Clarity: 3
Questions for Authors: Please respond to the issues raised in the weaknesses part.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The authors have discussed the limitation in their paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes
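The coarse bound search that DOBI performs, as summarized in this review, can be illustrated with a small sketch: grid-search the clipping bounds that minimize the MSE between a calibration tensor and its fake-quantized version. The symmetric shrinking grid and function names here are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def fake_quant(x, lo, hi, bits=2):
    # Uniform asymmetric quantization: clip, snap to 2^bits levels, dequantize.
    s = (hi - lo) / (2 ** bits - 1)
    return np.round((np.clip(x, lo, hi) - lo) / s) * s + lo

def search_bounds(x, bits=2, steps=64):
    # Shrink the min/max bounds and keep the pair with the lowest quantization MSE.
    best, best_err = (x.min(), x.max()), np.inf
    for t in np.linspace(0.5, 1.0, steps):
        lo, hi = t * x.min(), t * x.max()
        err = np.mean((x - fake_quant(x, lo, hi, bits)) ** 2)
        if err < best_err:
            best, best_err = (lo, hi), err
    return best
```

Because t = 1.0 (plain min/max) is in the grid, the searched bounds are never worse than min/max initialization on the calibration tensor; at low bit-widths, clipping outliers usually wins.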
Rebuttal 1:

Rebuttal:

> Q1: As presented in Section 3, the authors use fake quantization during the design of the method, but didn't mention if it can be replaced by real quantization, which can be implemented and bring real acceleration in hardware. The authors should discuss if the proposed method can achieve real quantized inference at deployment.

A1: Fake quantization is a widely used technique in model quantization. Its purpose is to simulate the precision loss caused by quantization. Using fake quantization, we obtain the parameters of the quantizers; with those parameters, we can deploy and test the model using common quantization techniques. We omitted this part in the article because the quantization method we adopted is already very mature in this regard, and our speedup ratios are obtained from real deployment. Indeed, the speedup ratio reported in lines 20-22 of our paper is the actual (real-quant) deployment speedup ratio: 2DQuant achieves an increase in PSNR of up to 4.52 dB on Set5 (×2) compared to the SOTA when quantized to 2 bits, along with a 3.60× compression ratio and a 5.08× speedup ratio.

> Q2a: The algorithm of DQC is missing, which makes the description not very clear; I suggest adding it or merging it into the algorithm of DOBI.

A2a: Thank you for your suggestion. The nature of DQC is distillation between the FP model and the quantized model, which optimizes the parameters of the quantizers. We will merge this part of the algorithm into the DOBI algorithm.

> Q2b: And Figure 3 does not highlight the application of the proposed techniques, which is also suggested to revise.

A2b: You are correct that Figure 3 is not closely tied to the proposed techniques. However, it is still very necessary to present it. We included Figure 3 to show that, in the Transformer block, we have quantized all the computationally intensive parts, such as BMM and FC, some of which are often forgotten to be quantized.
Other parts, such as Softmax and LayerNorm, have relatively small computational loads, but quantizing them would have a significant negative impact on the model.

> Q3: Some details of writing should be improved and polished carefully, e.g., the subtitles of Sections 2 and 3; some explanations of the proposed equations are missing.

A3: Thank you very much for your suggestions on the writing details. We have already made the changes you mentioned in the arXiv version.

---

Rebuttal Comment 1.1:

Title: After rebuttal

Comment: Thank you for the response. After reviewing the rebuttal, I can confirm that all of my concerns have been fully addressed.
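The fake quantization described in A1 above, simulating the precision loss of b-bit quantization while keeping float tensors, can be sketched as follows. The standard uniform asymmetric quantizer form is an assumption on my part, not necessarily the paper's exact formulation.

```python
import numpy as np

def fake_quant(x, lo, hi, bits):
    """Simulate uniform b-bit quantization of x clipped to [lo, hi]."""
    s = (hi - lo) / (2 ** bits - 1)              # quantization step size
    q = np.round((np.clip(x, lo, hi) - lo) / s)  # integer code in [0, 2^bits - 1]
    return q * s + lo                            # dequantized float seen downstream

x = np.linspace(-1.5, 1.5, 7)
x2 = fake_quant(x, lo=-1.0, hi=1.0, bits=2)  # at most 4 distinct levels survive
x8 = fake_quant(x, lo=-1.0, hi=1.0, bits=8)  # nearly lossless within the clip range
```

At deployment, the integer codes `q` and the pair `(lo, hi)` are what is actually stored and computed with; fake quantization only mimics this in floating point so that the quantizer parameters can be calibrated against the FP model.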
Summary: The authors propose a low-bit post-training quantization (PTQ) method, 2DQuant, for image super-resolution. 2DQuant is a dual-stage low-bit PTQ method. The authors first investigate the weights and activations. They propose Distribution-Oriented Bound Initialization (DOBI), which uses different search strategies to obtain coarse bounds for the quantizers. They further propose Distillation Quantization Calibration (DQC) to refine the quantizer parameters with a distillation approach. They provide extensive experiments on different bit-widths and scaling factors. Compared with SOTA methods, the proposed method achieves superior performance with a 3.60× compression ratio and a 5.08× speedup ratio in the 2-bit case.

Strengths:
1) The authors explore PTQ for Transformer-based image super-resolution (SR), which is one of the first works in this research field. This topic is also very practical and is favored in real-world applications.
2) The authors investigate this topic with some observations and visualizations. For example, the visualizations in Figures 4, 5, and 7-11 are inspiring.
3) They propose Distribution-Oriented Bound Initialization (DOBI) to minimize the discrepancy between the quantized and FP models. They use DOBI to search for quantizer parameters by employing customized search strategies for different distributions, balancing speed and accuracy.
4) They propose Distillation Quantization Calibration (DQC) to finely adjust each bound to its best position. DQC ensures that the quantized model aligns with the FP model on the calibration set.
5) In the ablation study, the authors investigate the effect of key components, like DOBI and DQC.
6) In the main comparisons with other SOTA methods, the proposed method obtains the best performance and visual results. I believe this low-bit quantization work on Transformers achieves excellent performance and has good potential for future research.
7) The authors give a detailed derivation of their backward gradient propagation formula, which makes the method more convincing.

Weaknesses:
The authors did not give results about inference time. Quantization can largely reduce parameters and operations; then, how much acceleration is obtained when deploying the quantized models on devices?
Some parts of the experiments are confusing. For example, in Table 3, the EDSR results confuse me somewhat, and I am not sure how they are obtained. The original EDSR is a full-precision model; here, the authors report its quantized version. The baseline in Table 3 is also not very clear. Please clarify them.
In Table 4 (b), when the batch size becomes larger, the performance changes only slightly. This is not very consistent with the common observation that a larger batch size usually improves performance.
The writing can be further improved. Some details should be given attention and revised. For example, in Section 3.2, the first letter should be capitalized.

Technical Quality: 3
Clarity: 3
Questions for Authors:
In Figure 1, the proposed method performs even better than the full-precision (FP) version. What are the potential reasons behind this observation?
Can this method be applied to diffusion-based image SR models? If so, please give some key ideas. It would be much better if the authors could provide some preliminary results.
In Table 3, what is the model size of the original EDSR? I do not know which version of EDSR the authors used in the paper.
What is the motivation for providing two versions, i.e., DOBI (ours) and 2DQuant (ours), in Table 3?

------------ After Rebuttal and Discussions ----------
After the rebuttal and discussions, my concerns have been well addressed.

Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The authors have discussed the limitations in their paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes
Rebuttal 1:

Rebuttal:

> Q1: ... how about the acceleration when deploying the quantized models into device?

A1: In quantization research, when the same quantization method is applied to the same neural network module, the speedup ratio of the model does not change with the quantizer parameters. Therefore, the algorithm typically focuses on how to better optimize the quantizer parameters so as to improve model performance under such quantization. Besides, the speedup ratio reported in lines 20-22 of our paper is the actual deployment speedup ratio: 2DQuant achieves an increase in PSNR of up to 4.52 dB on Set5 (×2) compared to the SOTA when quantized to 2 bits, along with a 3.60× compression ratio and a 5.08× speedup ratio.

> Q2: ... I am not very sure how the EDSR results are obtained. The baseline in Table 3 is not very clear. Please clarify them.

A2: We clarify that the EDSR entry in Table 3 does not refer to the original FP model. Its specific meaning is explained in lines 237-239: it is the test result of the previous SOTA method DBDC+Pac on EDSR. Note that the parameter size of EDSR is 172.36 MB, while that of SwinIR-light is 3.42 MB, about 2% of the former. As for the baseline in Table 3, it is the full-precision version of SwinIR-light. Our comparison with EDSR aims to illustrate that: 1. the best result in [1] was achieved on EDSR, and applying our quantization method to SwinIR-light performs better than their method on EDSR; 2. the choice of the model to quantize is itself crucial. To achieve better quantization results, we cannot experiment only on classic models (as done in [1]), because in terms of parameter size, cutting-edge models before quantization already outperform quantized classic models.

> Q3: In Table 4 (b), when the batch size becomes larger, the performance changes slightly.
This is not very consistent with the common observation that a larger batch size usually improves performance.

A3: Thank you for your keen observation. The main reason lies in the fact that DOBI provides excellent quantizer parameters. In the SR field, the numerical evaluation metric usually adopted is PSNR. Looking only at PSNR, our performance does improve slightly as the batch size increases. The improvement is slight because our one-stage search algorithm, DOBI, already provides near-optimal quantizer parameters, giving the model a very high starting performance. At the same time, the initialization provided by DOBI ensures that the model achieves almost the same excellent results regardless of the batch size used during subsequent training. We believe this phenomenon is beneficial because, even when GPU memory is limited, we can still achieve results comparable to those obtained with a large batch size by reducing the batch size and extending the training time of DQC.

> Q4: The writing can be further improved. Some details should be given attention and revised. For example, in Section 3.2, the first letter should be capitalized.

A4: Thank you for your suggestions on the writing details. We have revised them in the arXiv version.

> Q5: In Figure 1, the proposed method performs even better than the full-precision (FP) version. What are the potential reasons behind this observation?

A5: We clarify that the main reason is that FP models tend to overfit the training set, leading to weaker generalization on **some** test data. Model quantization might alleviate this overfitting. Although there might be an overall performance drop, the performance on certain data can be better than that of the FP model.

> Q6: Can this method be applied to diffusion-based image SR models? If so, please give some key ideas. It would be much better if the authors could provide some preliminary results.

A6: Yes!
The core components of diffusion models are the denoising network and the sampler, with a UNet being a common form of the denoising network. The computationally intensive parts of the UNet are the convolutional layers. We can use the DOBI and DQC methods from the paper to perform two-stage quantization of the convolutional layer parameters. However, the generation process of diffusion models typically involves several steps, and the distribution of activations is usually associated with the time step. Therefore, designing algorithms tailored to each time step might yield better results; for example, we can divide the time steps into segments and apply a different quantizer to the activations in each segment. > Q7: In Table 3, what is the model size for the original EDSR? I do not know which version of EDSR the authors used in the paper. A7: As mentioned in A2, the EDSR entry in Table 3 refers to the data in Table 3 of [1]. Details and parameters of the model used can be found at line 11 of the src/model/edsr.py file in the GitHub repo "sanghyun-son/EDSR-PyTorch". To our knowledge, this is the largest version of EDSR. > Q8: What is the motivation for providing two versions, i.e., DOBI (ours) and 2DQuant (ours), in Table 3? A8: DOBI is our one-stage algorithm, while 2DQuant represents the final result of both stages. We included both in Table 3 to demonstrate that even using DOBI alone, the model can exceed SOTA performance on most datasets, and that the two-stage algorithm can further enhance the model's performance. [1] Tu, Zhaopeng et al. "Toward Accurate Post-Training Quantization for Image Super Resolution." 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2023): 5856-5865. --- Rebuttal Comment 1.1: Comment: Thanks for your detailed responses, which have well addressed my concerns. 
I also went through the other reviewers' comments, and recognize the contributions of low-bit post-training quantization (PTQ) for image super-resolution using different search strategies to get a coarse bound for quantizers. Overall, I tend to keep my original score of 6.
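As an aside for readers outside the quantization field, the "quantizer parameters" discussed in A1 and A3 above are essentially the clip bounds of a uniform quantizer (and the scale derived from them). A minimal fake-quantization sketch; the function and parameter names (`fake_quantize`, `lower`, `upper`) are illustrative assumptions, not taken from the 2DQuant code base:

```python
import numpy as np

def fake_quantize(x, lower, upper, bits=2):
    """Simulated low-bit uniform quantization ("fake quantization").

    `lower`/`upper` are the searchable clip bounds -- the kind of
    quantizer parameters a bound-search stage would optimize.
    """
    levels = 2 ** bits - 1                      # number of steps, e.g. 3 for 2 bits
    scale = (upper - lower) / levels
    x_clipped = np.clip(x, lower, upper)
    q = np.round((x_clipped - lower) / scale)   # integer code in [0, levels]
    return q * scale + lower                    # dequantize back to float

# A 2-bit quantizer can represent at most 4 distinct values.
x = np.linspace(-3.0, 3.0, 7)
xq = fake_quantize(x, lower=-2.0, upper=2.0, bits=2)
assert len(np.unique(xq)) <= 4
```

Because the speedup comes only from the bit-width and the quantized modules, not from the particular `lower`/`upper` values, searching better bounds changes accuracy but not the deployment speedup ratio, which is the point made in A1.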
Rebuttal 1: Rebuttal: Dear Reviewers and Area Chairs, We appreciate all reviewers (R1-6qcS, R2-Xp5V, R3-DoGx, R4-EXmU) for the constructive reviews and positive feedback on our 2DQuant. Your expertise and insightful comments help us further improve our paper. We are pleased that: - R1 and R2 acknowledge the practicality of our techniques. - R1, R2, and R4 find our visualizations inspiring, comprehensive, or sufficient. - R1 and R2 recognize the impressive performance of our proposed DOBI and DQC. - R2, R3, and R4 find our writing easy to follow and clear. We have responded individually to each reviewer to address any concerns. Here, we offer a summary: - We clarify the acceleration and deployment questions, fake quantization, and the EDSR-related questions. - We discuss why the quantized model can outperform the FP model, the application of 2DQuant to diffusion models, the batch size ablation, and the motivation behind the table design. - We explain the difference between our method and [7]. - We improve the writing, including the motivation in the introduction, the subtitles of Sections 2 and 3, and merging DQC into DOBI. - We compare our approach with four PTQ methods and one QAT method and provide analysis. The results are in the **attached PDF**. Thanks again to all the reviewers and area chairs. We appreciate you taking the time to review our responses and hope to discuss further whether the issues have been resolved. If you need any clarification, please let us know. Best Regards, Authors --- The attached PDF includes: 1. Table 1: Quantitative comparison with PTQ and QAT methods. 2. Table 2: Ablation on the size of the calibration set. ## Discussion of Table 1 Reviewer EXmU requested that we compare with more PTQ methods, such as PTQ4ViT[1], FQ-ViT[2], NoiseQuant[3], and RepQ-ViT[4]. Reviewer DoGx requested that we compare with more QAT methods, such as Q-ViT[5] and OFQ[6]. We have compared our method with OFQ[6] in the PTQ fashion and with all the PTQ methods. 
The results are shown in **Table 1 in the attached PDF file**. We were unable to compare with Q-ViT[5], as their code assigns inappropriate initial values (see lines 162-165 and 188-193 in the \_quan\_base.py file in YanjingLi0202/Q-ViT) to the quantizers, so the pre-trained model fails at forward propagation, let alone optimization. For a fair comparison, we made necessary adjustments to the other methods' code. - FQ-ViT quantizes all modules in SwinIR, which certainly leads to worse performance compared with partly-quantized models. So we aligned its quantization scheme with ours, quantizing only the linear and BMM modules in the Transformer block. - NoiseQuant doesn't quantize the BMM part in the Transformer block, and we fixed this bug. - OFQ is a QAT method. For a fair comparison, we apply OFQ to our task but freeze the model parameters and optimize only the quantizer parameters. We mark the highest result in red and the second highest in blue. It's worth noting that our method **achieves SOTA performance in the 2-bit and 3-bit settings** and is _slightly_ lower than RepQ. But RepQ uses **per-channel** quantization while ours is per-tensor, and in one Transformer block RepQ needs **128** quantizers while 2DQuant only takes **16**. If switched to per-tensor quantization, RepQ's performance is lower than ours in all settings, which is not shown in Table 1 due to the page limit. Besides, FQ-ViT adopts a min-max quantizer, which leads to model collapse at 4 bits, so it is unnecessary to compare it at lower bit-widths. ## Discussion of Table 2 Table 2 shows the result. With a larger calibration set, 2DQuant gains higher performance in all settings. Reference: [1] Z. Liu et al. “Post-training quantization for vision transformer.” NeurIPS (2021). [2] Y. Lin et al. “FQ-ViT: Post-training quantization for fully quantized vision transformer.” IJCAI (2022). [3] Y. Liu et al. “NoisyQuant: Noisy bias-enhanced post-training activation quantization for vision transformers.” CVPR (2023). [4] Z. 
Li et al. “RepQ-ViT: Scale reparameterization for post-training quantization of vision transformers.” ICCV (2023). [5] Li, Yanjing et al. “Q-ViT: Accurate and Fully Quantized Low-bit Vision Transformer.” NeurIPS (2022). [6] Liu, Shi et al. “Oscillation-free Quantization for Low-bit Vision Transformers.” ICML (2023). [7] Tu, Zhaopeng et al. “Toward Accurate Post-Training Quantization for Image Super Resolution.” CVPR (2023). Pdf: /pdf/ed055d973a66c63a5af49b0c6df8ca35ae4062e8.pdf
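The per-tensor vs. per-channel trade-off raised in the Table 1 discussion can be made concrete with a small sketch: per-tensor quantization keeps one scale for a whole weight matrix, while per-channel keeps one scale per output row (hence the much larger quantizer count attributed to RepQ above). This is a generic illustration with assumed max-based scales, not either method's actual calibration procedure:

```python
import numpy as np

def quantize_sym(w, scale, bits=4):
    """Symmetric uniform quantization with a given scale (per-tensor
    if `scale` is a scalar, per-channel if it is one scale per row)."""
    qmax = 2 ** (bits - 1) - 1
    return np.clip(np.round(w / scale), -qmax, qmax) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(128, 64))                 # one linear layer's weight

# Per-tensor: a single quantizer (1 scale) for the whole matrix.
s_tensor = np.abs(w).max() / 7
# Per-channel: one quantizer per output channel (128 scales here).
s_channel = np.abs(w).max(axis=1, keepdims=True) / 7

err_pt = np.mean((w - quantize_sym(w, s_tensor)) ** 2)
err_pc = np.mean((w - quantize_sym(w, s_channel)) ** 2)
assert err_pc <= err_pt    # finer scales -> lower error, but 128x the quantizers
```

This illustrates why a per-channel baseline can score slightly higher while a per-tensor scheme remains far cheaper in quantizer parameters.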
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Feint Behaviors and Strategies: Formalization, Implementation and Evaluation
Accept (poster)
Summary: This paper proposes a method to generate feint behaviors and strategies so that the agent can obtain temporal and spatial advantages when competing with opponents. Specifically, this paper first describes the characteristics of feint behaviors at action-level and proposes a Feint behavior template generator called Palindrome-directed Generation that extracts subsets of semi-symmetrical actions from an offensive behavior and synthesizes them as a Feint behavior. Then, the paper proposes a dual-behavior model, which considers the physical constraint and effectiveness when constructing effective combinations of feint behaviors and follow-up actions. Moreover, the paper formalizes the feint implications at strategy-level and proposes rewards reflecting the temporal, spatial, and collective impacts. Finally, the paper provides an implementation scheme of feint behaviors that can be integrated into the existing MARL frameworks. Strengths: 1. This paper provides the formalization of feint behaviors and the corresponding palindrome-directed feint behavior template generator. The analysis of feint behaviors and their combination with follow-up actions makes sense. 2. The proposed feint behavior implementation scheme can be easily integrated into existing MARL frameworks. 3. The paper is organized well and the presentation is easy to follow. Weaknesses: 1. This paper claims that most offensive behaviors can be decomposed into three action sequences, which are Stretch-out Sequence (Sequence 1), Reward Sequence (Sequence 2), and Retract Sequence (Sequence 3) and it proposes a feint behavior template generator based on palindrome structure. This design should be effective for many feint behaviors composed at the action level. However, more complex feint behaviors belonging to the strategy level may not consist of the above three parts and thus the palindrome structure will not work. 
This paper may specify more clearly the scope of feint behaviors to which the proposed scheme can be applied. 2. The discussion in the experiment section is relatively weak. For example, according to Appendix E, when applying the feint behavior implementation to MARL frameworks, the implementation trains a feint policy model to generate feint behaviors. The experiment results show that the feint behaviors can increase task returns. However, it is unclear whether the actions generated by the feint policy model are really feint behaviors. It is also possible that the feint policy model trained based on $Rew_{collective}$ manages to find certain good actions to beat opponents. This paper may have more discussion about this based on a clear definition of feint behaviors. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. When combining feint behaviors with follow-up actions, this paper proposes to first select an intended follow-up high-reward behavior, and the end physical state of the feint behavior is constrained to be close to the starting physical state of the follow-up behavior. How to implement this process for MARL framework and what is the time complexity when using the palindrome-directed feint behavior template? 2. In section 4.2.1, the paper proposes that "the Dynamic Short-Long-Term temporal impacts of Feint shall be (1)...(2)...(3)...". How do the following design of $Rew_{temporal}$ achieve this? 3. In Equation (1) and (2), should $\alpha_t$ and $\beta_t$ be inside $\sum_t$? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: Please see the weakness section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## [Reviewer 8F3g] ### - Whether the actions generated by the Feint policy model are really Feint behaviors? The actions generated by the Feint policy model are indeed Feint behaviors. As discussed in Appendix E, the Feint policy model is constrained by the intended high-reward behaviors and the corresponding available Feint templates when choosing actions. The collective rewards are effective under such constraints, and thus this reward contributes to selecting good actions for Feint behaviors in a Dual-Behavior model. ### - How to implement the process of “Palindrome-directed Feint behavior generation” in a MARL framework? We discussed this implementation in Appendix E, which explains in detail how the Palindrome templates and Dual-Behavior models are fused into our MARL frameworks. ### - How does the design of Rew_temporal achieve the 3 points of Dynamic Short-Long-Term temporal impacts discussed in Section 4.2.1? The design of Rew_temporal achieves the 3 points discussed in Section 4.2.1 as follows: We use a heavily weighted accumulation of short-term rewards for Feint behaviors and the follow-up high-reward behaviors (the Dual-Behavior model) to reflect the strong correlation between Feint behaviors and follow-up high-reward behaviors. We incorporate a long-term reward calculation to avoid the short-sighted issue of focusing only on short-term rewards, which addresses the explicit or implicit long-term effects of Feint cycles; this setting helps to stabilize the agents and prevent them from being too aggressive. The design of dynamic short-term and long-term thresholds naturally reflects our action-level design for different lengths of Feint templates: when choosing different templates and composing different Dual-Behavior models across gaming iterations, the short-term and long-term thresholds are dynamically adjusted according to the length of the Dual-Behavior model. ### - In Equation (1) and (2), should $\alpha_t$ and $\beta_t$ be inside $\sum_t$? 
We thank the reviewer for pointing out our oversight here and we sincerely apologize. The $\alpha_t$ and $\beta_t$ should indeed be inside $\sum_t$. We will fix this notation in the revision. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I know Appendix E mentions the implementation. However, I think the explanation is not clear. For example, how to generate feint templates through palindrome-directed generation? It is not very complex for domains like single-agent boxing games. However, it can be very complicated for general MARL domains. And what is the time complexity when using the palindrome-directed feint behavior template? --- Reply to Comment 1.1.1: Title: Response for the expanded questions Comment: We thank the reviewer for the clarification and suggestions. We answer the question as follows: ### How to generate feint templates through palindrome-directed generation? It is not very complex for domains like single-agent boxing games. However, it can be very complicated for general MARL domains. And what is the time complexity when using the palindrome-directed feint behavior template? Given a dataset of action sequences, palindrome-directed templates provide constraints on the possible combinations of action sequences that generate Feint behaviors. - Pre-processing: Physically-feasible Feint behavior template candidates can be computed once during data pre-processing. Note that only the possible action connections in the Dual-Behavior Model are calculated here, not the exact action sequence lengths. - Learning: To learn the effectiveness of Feint behaviors, we train the Feint Policy Model to sample actions under the constraints of the possible palindrome-directed templates. Since the templates are pre-computed, the learning task is constrained to the choice of proper templates with proper action sequence lengths. Thus the complexity of each imaginary play is no more than that of a regular play (there are fewer available actions to choose from). 
And since our design (shown in Figure 11) only runs inference on either the Feint Policy Model or the Regular Policy Model at each gaming iteration, the upper bound on the time complexity increase is average_game_episode_length / average_Dual_Behavior_Model_length (this upper bound is attained when an agent can initiate Feint behaviors at every possible game state, which is not possible in most games). Our design can be naturally extended to more complex MARL domains since we do not introduce more selection possibilities in any game iteration than a regular policy model has. We do acknowledge that there might be scenarios that admit more efficient implementations of Feint behaviors, but this does not harm the main contribution of our work, which is to provide a generalizable way to generate Feint behaviors and concrete examples of how to effectively fuse our formalization of Feint behaviors into existing MARL models.
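The pre-processing and constraint steps described in the reply above can be sketched in toy form. Here an "action" is just a pose vector, `make_feint` mirrors a prefix of an offensive behavior's stretch-out sequence into a palindrome, and `feasible_pair` checks the Dual-Behavior constraint that the feint ends near the follow-up behavior's starting state. All names, the 1-D poses, and the tolerance are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def make_feint(offensive, cut):
    """Palindrome-directed generation (toy): take the first `cut` actions
    of an offensive behavior's stretch-out sequence and mirror them, so
    the agent returns close to its starting physical state."""
    stretch = offensive[:cut]
    return stretch + stretch[-2::-1]            # e.g. a,b,c -> a,b,c,b,a

def feasible_pair(feint, follow_up, tol=0.1):
    """Dual-Behavior constraint: the feint's end state must be close to
    the follow-up behavior's starting state."""
    return float(np.linalg.norm(feint[-1] - follow_up[0])) <= tol

# Toy 1-D poses: a forward lunge that is cut short and mirrored back.
offensive = [np.array([0.0]), np.array([0.5]), np.array([1.0]), np.array([1.5])]
feint = make_feint(offensive, cut=3)            # ends back at pose 0.0
follow_up = [np.array([0.0]), np.array([2.0])]  # high-reward behavior starting at 0.0
assert feasible_pair(feint, follow_up)
```

Running `feasible_pair` over all (feint, follow-up) pairs once, as in the pre-processing step above, yields the pre-computed template table that the learning stage draws from.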
Summary: The authors present a comprehensive formalization of feint behaviors in competitive multi-player games. The paper introduces a method for the automatic generation of feint behaviors using Palindrome-directed templates and combines them with high-reward actions in a Dual-Behavior Model. This formalization is incorporated into Multi-Agent Reinforcement Learning (MARL) frameworks. The authors conducted extensive evaluations using various MARL models in both two-player and six-player scenarios, demonstrating that their formalization significantly improves game rewards. Strengths: * The paper introduces a new formalization of feint behaviors, in which automatic generation of Feint behaviors via Palindrome-directed templates, and combine them with intended high-reward actions in a Dual-Behavior Model * The methodology addresses Feint implications on game strategies in terms of the temporal, spatial, and their collective impacts via the implementation in existing MARL frameworks. * In the experiments using various MARL models in both two-player and six-player scenarios, the authors demonstrate that their formalization significantly improves game rewards. Weaknesses: * The paper may lack clarity and consistency in the definitions and notations of key concepts, such as the reward functions in Section 4.2.1 and the Policy Occupancy Measure for model-free policies in Section 4.2.2. Additionally, the inconsistency in describing the payoff matrix dimensions in Section 4.2.3 creates confusion about the interaction between different agents' policies. * The paper also does not provide sufficient methodological details for calculating crucial measures, such as the Policy Occupancy Measure for model-free policies and the structure of the new policy occupancy measure. * The evaluation results focus primarily on gaming rewards without a thorough analysis of the types and distributions of observed Feint behaviors. 
A more comprehensive assessment, including the underlying reasons for focusing on gaming rewards and the detailed impact of Feint behaviors, would enhance the understanding of the study's contributions. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. The current framework of Palindrome-directed Generation of Feint Templates may not capture certain Feint behaviors. For example, while linear movements can be reversed to return to the same position, planar movements often involve feints that result in a change of location, which may not be adequately represented by this method. Additionally, there may be other types of Feint behaviors that this framework cannot express. It would be beneficial to clarify these limitations and discuss any potential Feints that fall outside the scope of this approach. 2. In Section 4.2.1, the paper introduces two reward functions, $R^i$, in Eqs (1) and (2). However, it is unclear how these reward functions are determined. If the $R^i$ in both equations are the same, it seems this reward is simply proposing a weighted combination of short-term and long-term rewards. If they are not the same, the authors should use distinct notations to clarify the difference between the two. Additionally, the definition of $R^i$ is not provided, making it difficult to understand the practical implementation of these reward functions. To improve clarity, the authors can provide concrete examples illustrating how $R^i$ is defined and calculated in both the short-term and long-term contexts. 3. In Section 4.2.2, it is unclear how the Policy Occupancy Measure is calculated for model-free policies, such as those used in deep RL. The paper may not provide a detailed explanation of how to derive this measure without a model of the environment. 4. (Section 4.2.3) In the discussion of the payoff matrix, it is stated as $P_i \times P_i$, but then it is described as an $M \times N$ matrix. This seems inconsistent. 
Shouldn't it be $P_i \times P_j$ instead, representing the policy spaces of different agents? Please clarify how $M$ and $N$ correspond to the policy spaces of agents $i$ and $j$ respectively, and ensure that the definition and usage of the payoff matrix align with the interaction between different agents' policies. 5. (Section 4.2.3) To accurately calculate the divergence between the payoff matrix and the vector $a_{M+1}$, it is necessary that $a_{M+1}$ also be represented as an $M \times N$ matrix. The paper should clarify how $a_{M+1}$, representing the new policy occupancy measure after introducing the Feint policy, is structured as an $M \times N$ matrix, and provide detailed steps on how these values are computed to maintain coherence with the payoff matrix. 6. In Section 5.1, the authors refer to Appendix D for Testbed Implementations. It is crucial to specify in the main text whether these implementations are original or derived from prior research. 7. In Section 5.2, it appears that only gaming rewards are evaluated. To provide a comprehensive assessment, it is essential to detail the underlying reasons for this focus. Additionally, the results should include a thorough analysis of the types and distributions of observed Feint behaviors. Such detailed results would offer a clearer understanding of how Feint behaviors impact the overall performance and diversity in gaming scenarios. 8. The citation format used in the manuscript does not adhere to the NeurIPS guidelines, making it difficult to read. For instance, in lines 20-21, "and explore strategies based on them Wampler et al. [2010], Won et al. [2021a]" should be revised to either [1, 2] or (Wampler et al. 2010; Won et al. 2021a) for better readability and compliance with the NeurIPS format. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: The authors have not adequately addressed the limitations and potential negative societal impact of their work. 
While the paper presents a novel approach to generating and integrating Feint behaviors in MARL frameworks, it lacks a detailed discussion on the limitations of the proposed methods, such as the scope of applicability and potential challenges in real-world implementations. To improve, the authors should include a section discussing these aspects, providing a balanced view of the strengths and weaknesses of their approach, and consider potential negative outcomes and mitigation strategies. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## [Reviewer R6PG] ### - In Section 4.2.1, the two reward functions, $R^i$, in Eqs (1) and (2). Are they the same? Yes, the $R^i$ in Eqs (1) and (2) are the same: it is the game environment reward given a state $s_t$, the agent's action $a_t^i$, and the opponents' actions $a_t^{-i}$. We do not intend to modify this reward because it is an objective reward. Our key design here is to use a dynamic short-long-term scheme to calculate the accumulated rewards for the Dual-Behavior model combination and long-term actions. We highlight the effect of short-term Feint behaviors due to their characteristics and also incorporate long-term considerations, aiming to solve the “short-sight” and “far-sight” issues of previous reward calculations (discussed in Section 4.1). ### - In Section 4.2.2, how is the Policy Occupancy Measure calculated for model-free policies, such as those used in deep RL? Directly computing the exact occupancy measure for general Markov games is indeed generally intractable. However, since our goal is to reward Feint behaviors in terms of their effective influence range, we focus on the discrepancy of occupancy measures introduced by Feint behaviors. We use the method introduced in [1] to approximate the maximization of the occupancy measure difference between the regular policy and the Feint policy. Their key observation is that although tracking the policy in the parameter space of models is intractable, there is a one-to-one correspondence between a policy and its occupancy measure. Drawing inspiration from expert distillation, a neural network $f_{\hat{\theta}}(s, a)$ is trained to fit a randomly initialized, fixed network $f_{\theta}(s, a)$ on the dataset of state-action pairs $(s, a)$. By assigning an intrinsic reward $r_i^{int}(s, a) = \|f_{\hat{\theta}}(s, a) - f_{\theta}(s, a)\|$ to the player, players are encouraged to aggressively explore state-action pairs with large prediction errors. 
Thus, the occupancy measure of the new policy is pushed to be different from the old ones. ### - In Section 4.2.3, the discussion of the payoff matrix causes confusion. And how is $a_{M+1}$ structured as an $M \times N$ matrix after introducing the Feint policy? We thank the reviewer for pointing out the confusing notation and we sincerely apologize for our oversight here. We provide the following corrections for this part: An agent maintains a pool of policies $P_i = \{\pi_i^1, …, \pi_i^M\}$ while an opponent maintains a pool of policies $P_{-i} = \{\pi_{-i}^1, …, \pi_{-i}^N\}$: $$ Rew_{collective-diversity}(\pi_i^{M+1}) = D(a_{M+1} \mid\mid A_{P_{i} \times P_{-i}}) $$ $$ a_{M+1}^T := (Rew_{collective}(\pi_i^{M+1}, \pi_{-i}^j))_{j=1}^N. $$ In our actual implementation, the payoff matrix (built from the policy pool $P_i = \{\pi_i^1, …, \pi_i^M\}$) is computed from regular policies sampled from the regular policy model, while $\pi_i^{M+1}$ is the Feint policy sampled from the Feint policy model. We use the approximation method introduced in [1] to compute the divergence between the vector $a_{M+1}$ and the payoff matrix, which calculates the lower bound of the occupancy divergence: $$ Rew_{collective-diversity}(\pi_i) \geq \frac{\sigma^2_{\min}(A)(1 - 1^\top (A^\top)^+ a_{M+1})^2}{M} + \left\| (I - A^\top (A^\top)^+) a_{M+1} \right\|^2 $$ where $(A^\top)^+$ is the Moore-Penrose pseudoinverse of $A^\top$, and $\sigma_{\min}(A)$ is the minimum singular value of $A$. ### - Need to specify whether the implementations in Appendix D are original or derived from prior research in the main text. We thank the reviewer for this suggestion. We will add such a clarification to the main text in our revision. We clarify that the MARL framework is derived from existing research, while all the scenario crafting and our Feint formalization are implemented by ourselves. ### - What are the types and distributions of Feint behaviors? We respectfully note that we are not sure what is being asked here. 
Would you mind clarifying more on this? ### - Citation Style We thank the reviewer for pointing out the citation style. We will fix them in our revision. ### Reference [1] Xiangyu Liu, Hangtian Jia, Ying Wen, Yaodong Yang, Yujing Hu, Yingfeng Chen, Changjie Fan, and Zhipeng Hu. Unifying behavioral and response diversity for open-ended learning in zero-sum games. CoRR, abs/2106.04958, 2021. URL https://arxiv.org/abs/2106.04958. --- Rebuttal Comment 1.1: Title: Thank you for your detailed responses Comment: Thank you for your detailed responses. I appreciate the clarifications provided and your explanations have addressed many of my concerns. However, I'd like to expand on a few points to ensure clarity and to maximize the impact of your work. Q1. Palindrome-directed Feint Movements: My earlier comment regarding the types of Feint behaviors was based on a specific concern. For example, while linear movements can be reversed to return to the same position, planar movements often involve feints that result in a change of location, which may not be adequately represented by this method. Is my understanding correct? Does this framework capture such scenarios, or are there limitations in representing more complex spatial feints? While the novelty of your work is significant, providing an honest assessment of these limitations would enhance the paper's overall value. Q7-1. Types and Distributions of Feint Behaviors: To clarify my earlier query, I am interested in understanding how often the patterns depicted in Figure 10 are generated. For instance, what is the frequency and distribution of these Feint behaviors in the test scenarios? Such information would provide deeper insights into how effectively the proposed method generates diverse and meaningful Feint strategies. Q7-2. Comprehensive Evaluation Beyond Gaming Rewards: My comment about the focus on gaming rewards was aimed at suggesting a more comprehensive assessment. 
Beyond just gaming rewards, including information on the learning curves associated with the rewards defined by Eqs. 5 and 6 would help illustrate whether the proposed method is functioning as intended. Such data would give a clearer picture of how well the new reward structures work in practice. --- Reply to Comment 1.1.1: Title: Response for the expanded questions Comment: We thank the reviewer for the clarification and suggestions. We answer the questions as follows: Q1. We thank the reviewer for the suggestions. For the formalization that leverages Palindrome-directed behaviors, we believe it should be separated into design and implementation. Our design, presented in this work, can be considered generally applicable when viewed under a possible formalization of cognition via set theory. However, our implementation, as done throughout this work, is limited by pre-defined standards (such as dimensions and representation encoding). In our view, our symmetry-based design is a possible path toward such a formalization; however, our implementation is restricted, and we do NOT have empirical evidence on these possibilities. Q7-1. We thank the reviewer for the clarification and we believe this is an insightful thought. We would like to first clarify the example behaviors shown in Figure 10. These example behaviors are action sequences from Mixamo, which are used as candidate behaviors to generate Feint behaviors using our Palindrome-directed Templates. Therefore, they should be considered pre-generated behaviors that strategies can take advantage of. The main focus of our work is to provide a generalizable pipeline and demonstrate concrete examples of fusing Feint behaviors into MARL frameworks. We will add an analysis of Feint behavior application frequency in our revision. 
However, we do believe that such an analysis may be restricted to our handcrafted scenarios, since the results may differ when MARL frameworks and gaming scenarios change. Nevertheless, we thank the reviewer for bringing up such inspiring suggestions and we believe this is an important path for future exploration. Q7-2. We thank the reviewer for the clarification and we acknowledge that such assessments are interesting. We will add the information on learning curves in our revision, though we believe these assessments may not reflect the main contribution and focus of this paper. Our main experimental goal is to demonstrate that our formalization of Feint behaviors (at the action level and strategy level) can be well fused into existing MARL frameworks and have meaningful impacts on game rewards. The gaming reward is one of the most representative metrics to prove the effectiveness of our formalization. Note that, due to the lack of sufficiently complex scenarios (and benchmarks), we could only hand-craft several simulation scenarios. Given this limitation, more assessments (like learning curves) may not yield sufficiently useful conclusions, from our own perspective. Nevertheless, we thank the reviewer for bringing up such inspiring suggestions and we believe this is an important path for future exploration. --- Rebuttal 2: Title: Reminder Comment: Dear Reviewer R6PG, We have addressed all of your concerns and kindly remind you to double-check our responses. Thanks, Authors
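The occupancy-divergence lower bound quoted in the response to Reviewer R6PG above can be sketched numerically as follows; the toy payoff values are assumptions for illustration only, and the function name is hypothetical:

```python
import numpy as np

def diversity_lower_bound(A, a_new):
    """Lower bound on Rew_collective-diversity from the rebuttal:

        sigma_min(A)^2 * (1 - 1^T (A^T)^+ a_new)^2 / M
            + || (I - A^T (A^T)^+) a_new ||^2

    where A is the (M, N) payoff matrix of the existing policy pool and
    a_new is the (N,) payoff vector of the candidate (Feint) policy."""
    M, N = A.shape
    At_pinv = np.linalg.pinv(A.T)                        # (A^T)^+, shape (M, N)
    sigma_min = np.linalg.svd(A, compute_uv=False).min()
    term1 = sigma_min ** 2 * (1.0 - np.ones(M) @ (At_pinv @ a_new)) ** 2 / M
    term2 = np.linalg.norm((np.eye(N) - A.T @ At_pinv) @ a_new) ** 2
    return term1 + term2

A = np.eye(2)                                  # toy payoff matrix of 2 pooled policies
assert abs(diversity_lower_bound(A, A[0])) < 1e-9   # duplicate payoff row: bound is 0
assert diversity_lower_bound(A, np.array([2.0, 0.0])) > 0.0
```

As the toy case shows, a candidate policy whose payoff vector duplicates an existing row earns no diversity reward, while a payoff vector outside the pool's span (or scaled away from it) receives a positive bound.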
Summary: The paper presents a comprehensive approach to formalizing and implementing feint behaviors in multiplayer games. It introduces a new method for the automatic generation of feint behaviors using Palindrome-directed templates and combines these with high-reward actions in a Dual-Behavior Model. The paper further explores the implications of feint behaviors on game strategies, focusing on temporal and spatial impacts, and provides a unified implementation scheme. Experimental results demonstrate significant improvements in game reward gains, diversity, and minimal time overheads when incorporating feint behaviors. Strengths: * The introduction of Palindrome-directed templates for generating feint behaviors is a novel concept that adds value to the field of game strategy formalization. * The paper offers the first detailed formalization of feint behaviors at both the action and strategy levels, addressing a gap in existing literature. * The integration of feint behaviors into common MARL frameworks and the use of diverse experimental scenarios enhance the practical applicability of the proposed methods. * The implementation scheme is designed to be adaptable across various MARL models, which increases its versatility and potential for widespread adoption. Weaknesses: * While the paper evaluates several MARL models, it would benefit from a broader comparison with more diverse baseline strategies and models to establish the relative performance gains more comprehensively, such as [1-2]. * The paper does not sufficiently address potential scalability issues when applying the proposed methods to larger and more complex game environments beyond the tested scenarios. * The focus is primarily on multiplayer games, and the applicability of the proposed methods to other domains with different characteristics is not explored in depth. * It is recommended to apply appropriate smoothing to the curves to enhance the readability of the figures. 
This is particularly important for Fig 4, where almost all the curves without feint are nearly overlapping. [1] Yu, Chao, et al. "The surprising effectiveness of ppo in cooperative multi-agent games." Advances in Neural Information Processing Systems 35 (2022): 24611-24624. [2] Yang, Tianpei, et al. "ASN: action semantics network for multiagent reinforcement learning." Autonomous Agents and Multi-Agent Systems 37.2 (2023): 45. Technical Quality: 3 Clarity: 3 Questions for Authors: * How does the performance of the proposed feint behavior generation method compare with other state-of-the-art techniques in terms of computational efficiency and scalability? * Can the proposed methods be extended or adapted to non-gaming domains, such as robotics or real-world strategic planning? If so, what modifications would be necessary? * How does the complexity of the Palindrome-directed templates affect the learning curve and training time of the MARL models? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The author mentioned "Limitation discussed in the Discussions," but there is no section called "Discussion" in the manuscript. I also couldn't find any discussion about limitations in other parts of the document. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## [Reviewer CkiB] ### - Apply appropriate smoothing to the curves for Figure 4 We thank the reviewer for this suggestion. We will add appropriate smoothing to the curves in Figure 4 for better visualization in our revision. ### - How does the complexity of the Palindrome-directed templates affect the learning curve and training time of the MARL models? We discussed the computational overheads in Appendix F.2. During training/gaming iterations, the Palindrome-directed Feint behaviors are dynamically generated, so the overhead they induce is already measured in the plots of Figure 13. We want to note that the available templates (i.e., those satisfying the physical constraints of Palindrome-directed templates) can be precomputed for game simulation once a set of reference action sequences is available (discussed in Appendix D.1). Thus, in training/gaming iterations, the choice of Feint behaviors given an intended high-reward behavior is well constrained to a reasonable number, and such behaviors can be directly retrieved from precomputed templates. There is no need for tedious searching over all possible action-sequence combinations (which is not feasible in complex game scenarios). ### - What are the computational efficiency and scalability to more complex games? We respectfully clarify that examining the computational efficiency and scalability to more complex games is NOT our purpose. Our work aims to provide a low-cost approach to inject Feint into existing games (from actions to strategies), and it demonstrates that our approach adds only negligible overheads to existing games in a variety of contexts. We have reported our evaluation of the overhead incurred by Feint behaviors in Appendix F.2, by measuring the overhead increment at each training epoch. Our Feint implementation introduces less than a 5% overhead increment in all MARL models under all game scenarios.
Our key design for this is to infer only one MARL model at each training step (explained in Appendix E). --- Rebuttal Comment 1.1: Title: Response Comment: The authors' response alleviated some of my concerns. Taking into account the comments of the other reviewers as well, I am keeping my current score.
Summary: This paper introduces the first comprehensive formalization of Feint behaviors in multi-player games. The authors present a novel approach to automatically generate Feint behaviors using Palindrome-directed templates and combine them with intended high-reward actions in a Dual-Behavior Model. The formalization addresses both action-level and strategy-level aspects of Feint behaviors, considering temporal, spatial, and collective impacts. The authors provide a unified implementation scheme to incorporate Feint behaviors into common Multi-Agent Reinforcement Learning (MARL) frameworks. They evaluate their approach using multiple MARL models in a custom boxing game scenario and a strategic real-game simulation. The results demonstrate that incorporating Feint behaviors can significantly increase game rewards and improve the diversity of multi-player games. The authors show that their method outperforms existing approaches in several scenarios, with minimal computational overhead. While the work presents a novel and potentially impactful approach to modeling deceptive behaviors in games, it is limited by its focus on specific game scenarios and lack of theoretical analysis. Nevertheless, this paper contributes a valuable framework for enhancing the realism and strategic depth of AI agents in multi-player games. Strengths: This paper demonstrates significant originality by providing the first comprehensive formalization of Feint behaviors in multi-player games. The authors address a notable gap in the literature, as previous works have only touched on Feint behaviors superficially or as proof-of-concept. The formalization at both action and strategy levels shows a depth of thinking that goes beyond existing approaches. The quality of the work is evident in the thorough development of the formalization, from the Palindrome-directed templates for generating Feint behaviors to the Dual-Behavior Model for combining them with high-reward actions. 
The authors have clearly put considerable effort into creating a robust framework that can be applied across different game scenarios. In terms of clarity, the paper is generally well-structured, guiding the reader through the complexities of Feint behavior formalization in a logical manner. The use of illustrative examples, particularly in the boxing game scenario, helps to ground the abstract concepts in concrete applications. The significance of this work lies in its potential to enhance the realism and strategic depth of AI agents in multi-player games. By incorporating Feint behaviors, the authors have shown significant improvements in game rewards and diversity, which could lead to more engaging and challenging game AI. Furthermore, the unified implementation scheme for common MARL frameworks suggests broad applicability of this approach. Weaknesses: While the paper presents a novel approach, there are several areas where it could be strengthened: 1. The primary experimental validation is conducted on a custom boxing game scenario. While this provides a good test case, it may not fully demonstrate the generalizability of the approach. The authors should consider including experiments from a wider range of game types (e.g., strategy games, team sports) to show the broad applicability of their formalization. 2. While the paper provides a detailed formalization, it lacks rigorous theoretical analysis or proofs of the properties of the proposed approach. For instance, the authors could provide theoretical bounds on the performance improvements or convergence guarantees for their method. 3. The paper primarily compares the performance of agents with and without Feint behaviors. However, it would be valuable to see comparisons against other existing methods for modeling deceptive or strategic behaviors in games. 4. The formalization and implementation details are quite complex, which could make it challenging for others to replicate or build upon this work. 
The authors could consider providing a simplified version or pseudocode of key algorithms to improve accessibility. 5. While the paper focuses on game simulations, it doesn't adequately address how this approach might be applied to or impact real-world game design or AI systems beyond simulations. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How well does your formalization of Feint behaviors generalize to other types of games beyond the boxing scenario? Could you provide examples or preliminary results from applying your approach to significantly different game genres (e.g., strategy games, team sports)? 2. Have you considered any theoretical analysis of your approach? For instance, can you provide any bounds on the expected performance improvements or convergence guarantees when incorporating Feint behaviors? 3. Your paper primarily compares agents with and without Feint behaviors. Have you considered comparing your approach to other methods for modeling deceptive or strategic behaviors in games? If so, which methods and why were they not included in the current paper? 4. The implementation of your approach seems quite complex. Could you provide a simplified version or pseudocode of the key algorithms to make it easier for others to understand and implement your method? 5. How do you envision your formalization of Feint behaviors being applied in real-world game design or AI systems beyond simulations? Are there any specific applications or domains where you think this approach could have significant impact? 6. Your Palindrome-directed templates for generating Feint behaviors are intriguing. Could you elaborate on how these templates were developed and whether there are any limitations to this approach for generating diverse Feint behaviors? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Title: Further clarification and apology for not responding individually during the initial rebuttal Comment: We sincerely apologize for not creating an individual response during the rebuttal period, as we assumed all relevant questions were addressed in the General Questions part shown to all reviewers. We thank the reviewer for all the insightful questions and suggestions. In case of any misunderstanding, we add additional responses here. We would like to elaborate on these two questions in case our response in the General Questions does not provide enough detail: ### Have you considered any theoretical analysis of your approach? For instance, can you provide any bounds on the expected performance improvements or convergence guarantees when incorporating Feint behaviors? We thank the reviewer for such a suggestion, and we think such a theoretical analysis would be inspiring. However, we believe that the analysis of performance improvements and guarantees is closely connected to specific gaming scenarios. At the current stage, we face a lack of sufficiently complex gaming scenarios and deceptive benchmarks. Given such limitations, we might not be able to draw generalizable conclusions. Nevertheless, we thank the reviewer for bringing up such an inspiring suggestion and we believe this is an important path for future explorations. ### The implementation of your approach seems quite complex. Could you provide a simplified version or pseudocode of the key algorithms to make it easier for others to understand and implement your method? We provided a graph to clarify our implementation (Figure 11). We will provide a more detailed explanation for each part of the implementation in the revision. Note that we consider this work a proof-of-concept; therefore, we do not claim to have optimized or simplified any specific implementation.
Rebuttal 1: Rebuttal: We thank all reviewers for their constructive feedback, and we address all concerns. We address the following general questions here, and we address individual questions separately under the corresponding reviews. ## General Questions ### [Reviewer: Wu2K, CkiB, R6PG] Lack of Discussion (and Identification) of Limitations We hereby clarify that the successful formation of Feint actions and strategies can be considered a concrete example indicating the potential unsafety of Machine Learning models. Therefore, we reserve the discussion (and identification of limitations) to ensure that the knowledge sharing would NOT cause any harm. We will add a section on the discussion of negative impacts, and on the identification of the limitations of our approach, in the revision. ### [Reviewer: Wu2K, R6PG, 8F3g] Can the Palindrome-directed Feint formalization cover all types of Feint behaviors? How can such a formalization be adapted to other Feint behaviors? Currently, we only consider the usage of Palindrome for the purpose of Feint formation. As for its generality, we believe it can be applied to other contexts (one can consider Feint behaviors as a three-dimensional example from a higher-order perspective). One potential approach to generalizing our method is to form similar symmetries of actions under a topological formalization, so that these shrunk actions can be inserted using the temporal variations. Note that we consider this work only a proof-of-concept; therefore, we do not claim credit for other concrete examples. ### [Reviewer: Wu2K, CkiB] How to have broader comparisons with other deceptive strategies and models? We have addressed the generality of our approach above. As for the comparisons, we assume this is a case-by-case comparison; therefore, based on current knowledge, we are unable to provide concrete elaborations.
One hypothesis from the authors is that, given that any action may be formed in the manner explained above, the tolerance of temporal variations can be an interesting metric for further investigation (in the context of the questions asked by these two reviewers). ### [Reviewer Wu2K, CkiB] What is the broad applicability/scalability of the Feint formalization in a wide range of games, and possibly in non-gaming domains, such as robotics or real-world strategic planning? We consider Feint as a set of ACTIVE actions from agents. The generalization of our approach to non-gaming domains shall also take the differentiation between active and reactive actions into account. Therefore, this shall vary across different contexts. As we explained above (in terms of the generalization): though actions can be formed under our approach, we believe an outstanding point would be the temporal variations.
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Vector Quantization Prompting for Continual Learning
Accept (poster)
Summary: This paper presents VQ-Prompt, a prompt-based continual learning method using Vector Quantization (VQ) to enhance task knowledge representation and overcome catastrophic forgetting. VQ-Prompt incorporates VQ into the end-to-end training of discrete prompts, optimizing the prompt selection process with the task loss and achieving effective task knowledge abstraction. Extensive experiments demonstrate that VQ-Prompt outperforms state-of-the-art methods on various benchmarks, particularly in class-incremental settings. Strengths: 1. This paper is well-written and easy to understand. 2. Introducing VQ into prompt-based CL is intuitive and enables end-to-end training of the prompts. 3. Experiments have demonstrated the effectiveness of the proposed method. Weaknesses: 1. There is no constraint when calculating the similarity score \alpha and prompt key K, which may corrupt prompt learning, i.e., for most test samples from different tasks, the selected prompts are similar, and a large fraction of the prompts is useless. 2. Lack of ablation study on the fine-tuned classifier. Fine-tuning the classifier is a trick to improve classifier performance, which can also be applied to other methods. It is unfair to directly compare with other methods. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In Fig. 3(a), temperature $\tau=1$ achieves the best performance; what about temperatures larger than 1? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the valuable and insightful comments. **W1. There is no constraint when calculating the similarity score \alpha and prompt key K, which may lead to corrupt prompt learning, i.e., for most test samples from different tasks, the selected prompts are similar, and large parts of prompts are useless.** Thanks for the insightful comment. We have observed that at certain layers, several prompts are more frequently selected for test samples from different tasks, while others are less frequently used. To alleviate this, one possible strategy is to introduce constraints for prompt selection, such as excluding prompts that have already been frequently used by previously learned tasks, to enhance the diversity and utility of the prompts. We will explore this as future work. **W2. Lack of ablation study on the fine-tuned classifier. Fine-tune classifier is a trick to improve the classifier performance, which can also be applied to other methods. It is unfair to directly compare with other methods.** In Section 5.3, we show the performance of “VQ-Prompt-S”, a simplified version of VQ-Prompt that does not use representation statistics to mitigate the negative effect of classifier bias developed during continual learning. VQ-Prompt-S achieves an FAA value of 78.05 (single run) on 10-task ImageNet-R, which is superior to the second-best method, EvoPrompt, with an FAA of 77.16. To address your concerns, we trained another method, L2P++, with this component. The results are shown in the following table. As can be observed, L2P++ with representation statistics (“L2P++ V2”) performs better than the original L2P++ in the “5-task”, “10-task”, and “20-task” settings on ImageNet-R, but remains inferior to our method. Further, our VQ-Prompt-S outperforms other methods such as EvoPrompt (AAAI24), demonstrating the effectiveness of our approach.
The results show that while representation statistics can contribute to performance improvements, they are not the sole determinant of the final performance.

| | 5-task FAA | 5-task CAA | 10-task FAA | 10-task CAA | 20-task FAA | 20-task CAA |
|-|-|-|-|-|-|-|
| L2P++ | 70.83 $\pm$ 0.58 | 78.34 $\pm$ 0.47 | 69.29 $\pm$ 0.73 | 78.30 $\pm$ 0.69 | 65.89 $\pm$ 1.30 | 77.15 $\pm$ 0.65 |
| L2P++ V2 | 74.11 $\pm$ 0.08 | 78.44 $\pm$ 0.63 | 72.93 $\pm$ 0.27 | 78.63 $\pm$ 0.80 | 70.99 $\pm$ 0.26 | 77.65 $\pm$ 0.79 |
| EvoPrompt | 77.16 $\pm$ 0.18 | 82.22 $\pm$ 0.54 | 76.83 $\pm$ 0.08 | 82.09 $\pm$ 0.68 | 74.41 $\pm$ 0.23 | 80.96 $\pm$ 1.42 |
| VQ-Prompt-S | 78.52 $\pm$ 0.34 | 82.64 $\pm$ 0.68 | 78.00 $\pm$ 0.39 | 82.83 $\pm$ 0.69 | 76.19 $\pm$ 0.26 | 81.68 $\pm$ 1.02 |
| VQ-Prompt | 79.32 $\pm$ 0.29 | 82.96 $\pm$ 0.50 | 78.71 $\pm$ 0.22 | 83.24 $\pm$ 0.68 | 78.10 $\pm$ 0.22 | 82.70 $\pm$ 1.16 |

**Q1. In Fig. 3(a), temperature $\tau=1$ achieves the best performance, then what about the temperature larger than 1?** Following the suggestion, we conducted additional experiments regarding temperatures larger than 1. The results are shown in the following table. As can be observed, $\tau=5$ achieves a result comparable to $\tau=1$. However, as $\tau$ increases, the performance decreases. These findings indicate that a temperature of 1 yields the best performance for our method.

| $\tau=1$ | $\tau=5$ | $\tau=10$ | $\tau=50$ | $\tau=100$ |
|-|-|-|-|-|
| 77.15 | 77.12 | 76.89 | 76.13 | 76.06 |

--- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: I thank the authors' answers to my question. After going through all the other reviews and the given rebuttal, I will keep my score at 5 because I am still concerned that unconstrained prompt selection may select similar prompts, leading to wasted prompts.
Summary: The representations learned by large pre-trained models have led to many Continual Learning approaches based on these models. Specifically, prompt-based approaches train a small set of learnable parameters (prompts) to guide a fixed pre-trained model for a particular task. One key component in these approaches is the selection of relevant prompts, which helps to specialise the guidance depending on the needs of the corresponding input. Most previous works select the relevant prompt based on the similarity of the input and a key in the representation space of the same visual pre-trained model. The key is commonly connected to the prompt via a key-value data structure, which makes the optimisation sequence untraceable and infeasible to optimise with the task loss. In this paper, the authors propose Vector Quantization Prompting, which incorporates vector quantisation into an end-to-end training process that includes the keys and prompts. By using a nearest-neighbour (NN) look-up over a weighted sum of all the prompts and a straight-through estimator to approximate the gradient for the prompts and keys, the authors were able to optimise not only the prompts but also the keys when selecting relevant prompts, which, as shown in the results, can help achieve better performance on diverse benchmarks. Strengths: - The authors motivate the proposal by pointing out a clear disadvantage of current prompt-based methods: the unoptimised selection of prompts. This paper presents a creative and simple approach to tackling this issue, which also performs better, as shown in the results. - The paper is well-written and structured to help readers understand the problem and the authors' proposed solution. Although the solutions lack some intuitions and explanations concerning the reasons for the comparisons and ablations, this could be because of the lack of space. Weaknesses: - One of the authors' motivations is learning discrete prompts, which they even mention as a contribution.
However, even after reading the explanation between lines 50 and 61, the contribution of having discrete prompts is unclear. I understand the motivation of having concrete concepts represented as discrete vectors for human understanding, but a continuous vector can easily represent the same concept while still yielding distinct and even linearly separable categories. - Furthermore, the implementation of discrete prompts in the proposal must be clarified. While I assume it occurs in the NN look-up or the prompt formation, there is a lack of ablation on this point. For instance, how is prompt usage distributed (are they uniformly distributed, or are some very specialised)? Are there specialised prompts? Is there a discernible relationship between a 'concept' and a prompt? Technical Quality: 4 Clarity: 4 Questions for Authors: - In line 195, it is mentioned that this selection allows more updates in relevant prompts without disrupting less relevant ones. This idea is very interesting and crucial in CL. However, some questions remain: - Concerning this point, how different is a traditional prompt-based approach from the proposal? In both cases, you only update the “relevant” prompt, but the meaning of relevant changes. Or is there something else? - The look-up table selects the most relevant prompt; however, we should be able to compose concepts to learn more abstract ideas. Can this proposal be extended to this? - There is a second training phase when applying the component described in Section 4.3. How many epochs does this new phase have? Could it be an unfair advantage over other methods? For example, how does L2P behave with this component? - The experiments done in 5.3 concerning the temperature of alpha suggest that a lower value leads to a sharper distribution, which can lead to a more discrete selection; however, a higher value achieves better performance. How can we read this, as it seems contrary to the need for discrete prompts?
Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The authors present some limitations and concerns regarding the proposal. Another limitation that can be added is the intrinsic limitation in the pre-trained models concerning the limitations of the pre-trained data distribution and the computational cost of running these models. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the valuable and insightful comments. **W1. One of the authors' motivations is learning discrete prompts, which they even mentioned as a contribution. However, even after reading the explanation between lines 50 and 61, the contribution of having discrete prompts is unclear. I understand the motivation of having concrete concepts represented as discrete vectors for human understanding, but a continual vector can easily represent the same concept and also be distinct categories and even linearly separable.** We would like to clarify that while we refer to them as discrete, the values of these discrete prompts are indeed float (continuous) during optimization. They are distinct from continuous prompts in their conceptual structure and function. Continuous prompts typically refer to input-conditioned prompts where each input sample has a unique prompt, even if the samples are instances of the same concept. This approach can lead to redundancies in concept representation. In contrast, discrete prompts use a limited set of prompts to represent concepts, which can serve as a more abstract representation of knowledge and align more closely with human cognitive structures. This form of knowledge representation can enhance the model's ability to learn and manage task-specific knowledge efficiently. **W2. The implementation of discrete prompts in the proposal must be clarified. While I assume it occurs in the NN look-up or the prompt formation, there is a lack of ablation on this point. For instance, how is prompt usage distributed (Are they uniformly distributed, or are some very specialised)? Are there specialised prompts? Is there a discernible relationship between a 'concept' and a prompt?** Regarding prompt usage distribution, we have observed some prompts are more frequently selected for samples from different tasks while others are less frequently used. 
Since the prompts are designed to learn task-specific knowledge, each task might not have a universal ‘concept’. However, different tasks might share some features that can be captured by certain prompts. We will further clarify these observations and provide a more detailed analysis in the revised paper. **Q1. In line 195, it is mentioned that this selection allows more updates in relevant prompts without disrupting less relevant ones. This idea is very interesting and crucial in CL. However, some questions remain:** - **How different is a traditional prompt-based approach from the proposal? In both cases, you only update the “relevant” prompt, but the meaning of relevant changes. Or is there something else?** The discussion in line 195 aims to explain how our design can enable CL. How to effectively select and update relevant prompt(s) is crucial for achieving CL, and the meaning of “relevant” in our context shares the same insight as traditional prompt-based methods. However, the primary difference lies in whether the prompt selection can be optimized with the task loss. - **The look-up table selects the most relevant prompt; however, we should be able to compose concepts to learn more abstract ideas. Can this proposal be extended to this?** One possible way is to design a two-level prompt pool, with each level responsible for a different degree of abstraction. The model can first identify relevant prompts from the pool of lower abstraction. Next, it learns the combination weights to form a more abstract prompt. This newly composed prompt is then matched with a prompt from the higher abstraction pool, and the matched one is used for further processing. **Q2. There is a second training phase when applying the component described in Section 4.3. How many epochs does this new phase have? Could it be an unfair advantage over other methods? For example, how does L2P behave with this component?** Ten epochs. 
This phase stabilizes task knowledge learning by leveraging representation statistics of old data, which can contribute to performance improvements but is not the sole determinant of the final performance. Following your suggestion, we trained the L2P++ with this component. As can be observed in the following table, L2P++ with representation statistics (“L2P++ V2”) performs better than the original L2P++ in the three settings on ImageNet-R, but remains inferior to our method. Additionally, we list the performance of VQ-Prompt-S, a simplified version of VQ-Prompt that does not use representation statistics. VQ-Prompt-S still outperforms other methods such as EvoPrompt, demonstrating the effectiveness of our approach.

| | 5-task FAA | 5-task CAA | 10-task FAA | 10-task CAA | 20-task FAA | 20-task CAA |
|-|-|-|-|-|-|-|
|L2P++|70.83$\pm$0.58|78.34$\pm$0.47|69.29$\pm$0.73|78.30$\pm$0.69|65.89$\pm$1.30|77.15$\pm$0.65|
|L2P++ V2|74.11$\pm$0.08|78.44$\pm$0.63|72.93$\pm$0.27|78.63$\pm$0.80|70.99$\pm$0.26|77.65$\pm$0.79|
|EvoPrompt|77.16$\pm$0.18|82.22$\pm$0.54|76.83$\pm$0.08|82.09$\pm$0.68|74.41$\pm$0.23|80.96$\pm$1.42|
|VQ-Prompt-S|78.52$\pm$0.34|82.64$\pm$0.68|78.00$\pm$0.39|82.83$\pm$0.69|76.19$\pm$0.26|81.68$\pm$1.02|

**Q3. The experiments done in 5.3 concerning the temperature of alpha suggest that a lower value leads to a sharper distribution, which can lead to a more discrete selection; however, a higher value achieves better performance. How can we read this, as it seems contrary to the need for discrete prompts?** This can be understood by considering the balance between discrete selection and robust learning. While a lower temperature leads to a sharper distribution and more discrete selections, it can lead to overly aggressive prompt choices early in training when the model is less fully trained. This can negatively impact the learning process and overall performance, but it does not negate the value of discrete prompts themselves.
**Another limitation.** Thanks for the insightful suggestion. We will include a discussion of the intrinsic limitations of pre-trained models in the revised paper. --- Rebuttal Comment 1.1: Comment: I thank the author's answers. For now, I will keep my score waiting for comments from the other reviewers.
Summary: This paper introduces Vector Quantization Prompting (VQ-Prompt), a novel prompt-based method designed to mitigate catastrophic forgetting in the sequential learning scenario of Class Incremental Learning (CIL). VQ-Prompt utilizes Vector Quantization (VQ) to facilitate end-to-end learning with a set of discrete prompts. Initially, it computes the queries for the input images and determines similarity scores between the query and keys, similar to the Learning to Prompt (L2P) approach; this involves using a pre-trained image model to generate the queries. Subsequently, it aggregates prompts based on these similarity scores, resulting in what is called a continuous prompt. This aggregated prompt is then quantized by selecting the nearest neighbor (NN) from the prompt pool. The rest of the classification pipeline remains the same as in L2P. To ensure gradient propagation through the key and all prompts, VQ-Prompt employs the straight-through estimator along with the similarity scores. Additionally, it incorporates representation statistics to address classification bias in the classifier. The authors demonstrate that VQ-Prompt outperforms state-of-the-art (SOTA) baselines across three datasets: ImageNet-R (5, 10, and 20 tasks), CIFAR-100 (10 tasks), and CUB-200 (10 tasks). Strengths: This paper presents the following strengths: 1. The paper is well-written and organized. It is easy to follow. 2. The problem studied is highly relevant and valuable. 3. The paper proposes a method that enables end-to-end training for prompt-learning architectures in CIL and shows that this can also be beneficial in achieving state-of-the-art results on three well-known datasets. Weaknesses: 1. Considering Prompt Learning for Continual Learning (CL) assumes that there is a good and generalizable pretraining that could be transferable (tuned) to the different downstream tasks.
Moreover, considering the way the queries are generated, it also presumes that this pre-training can accurately select prompts for all tasks. I would like to see results on more challenging, e.g., out-of-domain, tasks. Although the authors evaluate on different scenarios, two of them, ImageNet-R and CIFAR-100, are closely related to ImageNet. For instance, ImageNet-R is derived from ImageNet-1K, and its classes overlap with ImageNet-1K. 2. Additionally, it is worth noting that the authors use a larger pre-training dataset for the experiments in Table 3, where VQ-Prompt achieves nearly the same performance as HiDe-Prompt. This raises the question of whether this is related to the initialization issue mentioned earlier. I would like to see these results with a different initialization to understand this relationship better. 3. There is no consistency in the evaluation of state-of-the-art (SOTA) methods across tasks (Table 1, Table 2, Table 3), making comparison and analysis challenging. For instance, HiDe-Prompt shows a significant improvement over CODA-P and almost matches the performance of VQ-Prompt. So why is it not evaluated on the other tasks? I would like to see how HiDe-Prompt performs on the other tasks presented in Table 1 and Table 2. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the weaknesses section; I left some questions for you there. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors mention a limitation of this method: scaling up the prompt size does not increase performance. However, this is not necessarily a limitation. It could indicate that the model does not need many parameters to learn the task because the task is relatively easy given its initial knowledge. (Please see point 1 of the weaknesses.) Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
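To make the prompt-selection pipeline summarized in this review concrete, below is a minimal NumPy sketch of VQ-style prompt selection. This is an illustration of the described steps, not the authors' code: the function name, the softmax scoring, and the Euclidean nearest-neighbor lookup are assumptions. In an autodiff framework, the straight-through estimator would pass the quantized prompt forward while copying its gradient onto the continuous prompt, i.e. `p_c + stop_gradient(p_q - p_c)`.

```python
import numpy as np

def quantize_prompt(query, keys, prompts):
    """Illustrative VQ-style prompt selection (not the paper's code).

    1) score each key in the pool against the image query,
    2) aggregate prompts by softmax scores into a continuous prompt,
    3) quantize by snapping to the nearest prompt in the pool.
    """
    scores = keys @ query                      # similarity to each key
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over keys
    continuous = weights @ prompts             # aggregated continuous prompt
    nn = np.argmin(np.linalg.norm(prompts - continuous, axis=1))
    return prompts[nn], nn                     # quantized prompt and its index
```

With a query aligned to the first key and a well-separated pool, the continuous prompt stays close to the first pool entry and quantization selects it.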
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the valuable and insightful comments. **W1. Considering Prompt Learning for Continual Learning (CL) assumes that there is a good and generalizable pretraining that could be transferable (tuned) to the different downstream tasks. Moreover, considering the way that the queries are generated, it also presumes that this pretraining can accurately select prompts for all tasks. I would like to see results on more challenging tasks like out-of-domain. Although the authors perform the evaluation in different scenarios, two of them, ImageNet-R and CIFAR-100, are closely related to ImageNet. For instance, ImageNet-R is derived from ImageNet-1K, and also its classes overlap with ImageNet-1K.** Please note that ImageNet-R is widely recognized as a critical benchmark for CL, especially for methods utilizing backbones pre-trained on ImageNet, as discussed in DualPrompt. This is because ImageNet-R contains artistic renditions of ImageNet categories that are excluded by the collection rules of the original ImageNet. CIFAR-100 has also been widely used in previous prompt-based CL methods. Following your suggestion, we conducted experiments on two new datasets, namely ImageNet-A [1] and VTAB [2]. ImageNet-A contains adversarial images that fool current ImageNet pre-trained classifiers, while VTAB includes 19 datasets with diverse classes that do not overlap with ImageNet-1K. For ImageNet-A, we split the 200 classes into 20 tasks. For VTAB, we sample five 10-class datasets from it to construct the cross-domain CIL setting. As shown below, our VQ-Prompt outperforms other SOTA methods such as HiDe-Prompt on these challenging datasets on the FAA metric. ||ImageNet-A|VTAB| |-|-|-| |HiDe-Prompt|51.67|86.38| |VQ-Prompt|52.96|90.46| [1] Natural adversarial examples. CVPR21 [2] A large-scale study of representation learning with the visual task adaptation benchmark. arXiv19 **W2.
Additionally, it is worth noting that the authors use a larger pretraining dataset for the experiments in Table 3, where VQ-Prompt achieves nearly the same performance as HiDe-Prompt. This raises the question of whether this is related to the initialization issue mentioned earlier. I would like to see these results with a different initialization to understand this relationship better.** It is important to clarify that the results in Table 3 for both VQ-Prompt and HiDe-Prompt use the SAME pre-training dataset, i.e., ImageNet-21K, as denoted in the caption. Our method achieves superior performance compared to HiDe-Prompt, particularly in the CAA metric. Regarding the initialization with different dataset sizes, besides the results of pre-training on ImageNet-21K shown in Table 3, we present additional comparisons with HiDe-Prompt on ImageNet-R with backbones pre-trained on ImageNet-1K. The results for HiDe-Prompt are obtained by running its official repository. As shown in the table below, our VQ-Prompt consistently outperforms HiDe-Prompt in the 5-task, 10-task, and 20-task settings on ImageNet-R. ||5-task FAA|5-task CAA|10-task FAA|10-task CAA|20-task FAA|20-task CAA| |-|-|-|-|-|-|-| |CODA-P|76.51$\pm$0.38|82.04$\pm$0.54|75.45$\pm$0.56|81.59$\pm$0.82|72.37$\pm$1.19|79.88$\pm$1.06| |HiDe-Prompt|76.29$\pm$0.10|78.77$\pm$0.11|76.74$\pm$0.18|78.76$\pm$0.11|76.46$\pm$0.06|78.76$\pm$0.11| |VQ-Prompt|79.32$\pm$0.29|82.96$\pm$0.50|78.71$\pm$0.22|83.24$\pm$0.68|78.10$\pm$0.22|82.70$\pm$1.16| Moreover, we have conducted experiments with different initializations originating from various self-supervised pre-training paradigms, as shown in Section 5.2 and Table 4. The findings demonstrate that VQ-Prompt consistently achieves competitive performance across different initializations, highlighting its robustness and adaptability. **W3. 
There is no consistency in the evaluation of state-of-the-art (SOTA) methods across all tasks (Table 1, Table 2, Table 3), making comparison and analysis challenging. For instance, HiDe-Prompt shows a significant improvement over CODA-P and almost matches the performance of VQ-Prompt. So, why is it not evaluated on the other tasks? I would like to see how HiDe-Prompt performs on the other tasks presented in Table 1 and Table 2.** We would like to clarify that results in Tables 1, 2 and 3 are taken from previous publications, which utilize backbones pre-trained on different datasets. For example, HiDe-Prompt in Table 3 uses weights pre-trained on ImageNet-21K, whereas methods in Tables 1 and 2 are pre-trained on ImageNet-1K. This difference in pre-training datasets impedes direct comparisons across these tables. Following your suggestion, we ran the official code of HiDe-Prompt and obtained its performance with a backbone pre-trained on ImageNet-1K. The results on ImageNet-R are shown in the **Table in the response to W2**. As can be observed, HiDe-Prompt achieves inferior results compared to our method on ImageNet-R. Unfortunately, we could not provide results for CIFAR-100 because we failed to obtain reasonable results with the settings in the paper (bs=128 and lr=0.005). We found that the actual learning rate changes with the batch size. We are exploring other training settings. **The authors mention a limitation of this method: scaling up the prompt size does not increase performance. However, this is not necessarily a limitation. This could indicate that the model does not need significant parameters to learn the task because it is relatively easy due to its initial knowledge. (Please see point 1 of the weaknesses).** We agree with the reviewer's perspective that this could also be seen as a strength of our method. 
Our model does not require many parameters to learn tasks due to its effective end-to-end knowledge learning, achieved by integrating VQ into the prompt-based framework. We will revise our manuscript to provide a more in-depth discussion regarding the scalability of our approach. --- Rebuttal Comment 1.1: Title: Rebuttal Response Comment: I really appreciate the authors' effort to address all my concerns. The new results provided by the authors show that the model can work in scenarios that are challenging for the pre-trained knowledge. Therefore, I am considering increasing my score. Best, Reviewer 7MCa
Summary: Prompt-based continual learning has emerged to address catastrophic forgetting in sequential task learning by using a pre-trained Vision Transformer (ViT) enhanced with learnable "prompts." These prompts contain task-specific knowledge and guide the model in generating task-relevant outputs. During inference, the most suitable prompt is selected based on the input image. However, current methods face challenges due to non-differentiable indexing, leading to suboptimal prompt selection and performance issues. Discrete prompts offer better task knowledge representation and prevent interference among different tasks. This paper introduces Vector Quantization Prompting (VQ-Prompt) for continual learning, which optimizes prompts using task loss while maintaining their discrete properties. The method generates a continuous prompt and replaces it with the nearest match from a prompt pool, using gradient estimation to handle non-differentiability and additional regularization to refine the prompt pool. Representation statistics are used to mitigate classification bias towards previous tasks, improving continual learning performance. Strengths: This paper develops a SoTA prompt-based continual learning method that improves prompt design with a widely used idea of vector quantization. The proposed methodology is sound and rational in design and outperforms other recent strong baselines. The paper is easy to follow and provides a meaningful analysis to better understand the key components of the proposed approach. Weaknesses: The performance improvements are meaningful, but not significant - particularly on specific tasks, such as CIFAR-100 and CUB-200 benchmarks. The technical contribution of the proposed method is not solid. Indeed I didn't find unique insightful ideas/analyses/novelty that I can get only from this paper. 
Most technical components used in this work are somewhat general in the vector quantization (e.g., NN look-up and straight-through gradient estimation) and continual learning literature (e.g., post-adjustment of representations based on statistics), and the proposed method is more like a good combination of the vector quantization idea with prompt design during continual learning. The question of "Why is the vector quantization idea more beneficial for prompt-based continual learning against other baselines?" is not sufficiently discussed. It achieves competitive performance with a smaller prompt length than other continuous-prompt methods; however, the paper does not provide any analysis/evidence as to why this happens, but only gives typical ablation studies. In Table 4, the proposed method seems to have pros and cons for different self-supervised pre-training paradigms. It shows better cumulative accuracy (CAA) for on-par final accuracy (FAA); in my understanding, this means the proposed method shows a higher degree of forgetting compared to baselines, while showing better adaptation performance to new tasks. Technical Quality: 3 Clarity: 3 Questions for Authors: - Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors addressed the limitations and broader impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
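The FAA/CAA trade-off raised in the last weakness hinges on how the two metrics aggregate per-task accuracy. Below is a hedged sketch of the usual definitions, where `acc[t, j]` is the accuracy on task `j` measured after training on task `t`; the function name and the exact conventions are assumptions and may differ slightly from the papers under review.

```python
import numpy as np

def faa_caa(acc):
    """Sketch of Final / Cumulative Average Accuracy (assumed definitions).

    acc[t, j]: accuracy on task j measured after training on task t (j <= t).
    FAA averages accuracy over all tasks after the final task; CAA averages
    the per-stage average accuracies over the whole task sequence.
    """
    T = acc.shape[0]
    avg_after = np.array([acc[t, : t + 1].mean() for t in range(T)])
    return avg_after[-1], avg_after.mean()  # (FAA, CAA)
```

Under these definitions, a method can score a higher CAA at the same FAA by being more accurate early in the task sequence, which is consistent with the reviewer's reading of Table 4.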
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the valuable and insightful comments. **W1. The performance improvements are meaningful, but not significant - particularly on specific tasks, such as CIFAR-100 and CUB-200 benchmarks.** We respectfully disagree with the assessment that the improvements are not significant. As demonstrated in Table 1, our method achieves a relative improvement of 2.80%, 2.45%, and 4.96% over the second-best method, EvoPrompt, on the 5-task, 10-task, and 20-task settings of ImageNet-R using the FAA metric. ImageNet-R is widely recognized as a critical benchmark for continual learning, especially for methods utilizing backbones pre-trained on the ImageNet dataset, as discussed in DualPrompt. Additionally, while our improvements on CIFAR-100 (Table 2) and CUB-200 (Table 3) are not as large as those on ImageNet-R, they are still considerable. On CUB-200, although we slightly trail behind SLCA in the CAA metric, it is important to note that SLCA does not freeze the backbone and employs a higher number of trainable parameters, allowing for better adaptation to the downstream task of fine-grained classification. The differences in performance improvements might be attributed to the inherent characteristics of the datasets. CIFAR-100 and CUB-200 have data distributions that are closer to the original ImageNet distribution compared to ImageNet-R, which contains artistic renditions of ImageNet categories that are discouraged by the collecting rules of the original ImageNet. This closer alignment with ImageNet may reduce the relative improvement margins on CIFAR-100 and CUB-200. **W2. The technical contribution of the proposed method is not solid. Indeed I didn't find unique insightful ideas/analyses/novelty that I can get only from this paper. 
Most technical components used in this work are somewhat general in vector quantization (e.g., NN look-up and straight-through gradient estimation) and continual learning literature (e.g.,post-adjustment of representation based on statistics). And this proposed method is more like a good combination of vector quantization idea for prompt design during continual learning.** Thanks for the comment. Our method is motivated by identifying a clear disadvantage in current prompt-based methods, i.e., the unoptimized selection of prompts. We propose a simple yet effective approach to address this issue by integrating vector quantization with prompt-based continual learning, which has not been explored before. While our technical components are based on established principles in vector quantization and continual learning, the innovation lies in the unique application and combination of these techniques to solve the specific problem of prompt selection. By incorporating vector quantization, we optimize the prompt selection process with task loss to achieve a more effective abstraction of task knowledge, which is crucial for continual learning. Experimental results demonstrate the effectiveness of our approach. Thus, while we build upon existing techniques, our work provides a fresh perspective and a practical solution to a challenge in prompt-based continual learning. **W3. The question of "Why is the vector quantization idea more beneficial for prompt-based continual learning against other baselines" is not sufficiently discussed. It achieves competitive performance with a smaller prompt length than other continuous-prompt methods. However, the paper does not provide any analysis/evidence as to why this happens, but only gives typical ablation studies.** Thanks for the insightful comment. The use of vector quantization (VQ) in our method provides several key benefits for prompt-based continual learning. 
First, VQ allows us to encode task knowledge into discrete prompts, which provide a more compact representation than continuous prompts, as discussed between lines 50 and 61 of the manuscript. This discrete nature helps capture essential task-specific features with the necessary abstraction, compared with continuous representations. Second, by integrating VQ into the prompt-based framework, we enable the optimization of prompt selection with the task loss. This end-to-end optimization ensures that the selected prompts are highly relevant to the task at hand, thereby enhancing the prompts' learning of task knowledge. In summary, we can use a smaller prompt length to achieve competitive performance due to effective end-to-end knowledge learning. We will extend our analysis in the revised manuscript by providing detailed explanations. **W4. In Table 4, the proposed method seems to have pros and cons for different self-supervised pre-training paradigms. It shows better cumulative accuracy (CAA) for on-par final accuracy (FAA); in my understanding, this means the proposed method shows a higher degree of forgetting compared to baselines, while showing better adaptation performance to new tasks.** Thanks for the detailed observation. The higher CAA suggests that VQ-Prompt is more efficient in leveraging past knowledge to aid current tasks, even though it might exhibit a higher degree of forgetting compared to some baselines. This trade-off between adaptation to new tasks and forgetting of previous ones is a common challenge in continual learning. With self-supervised pre-training paradigms, our approach tends to prioritize adaptability to new tasks to ensure that the model remains relevant and effective in dynamic environments. In our revised manuscript, we will provide additional analysis to elucidate this trade-off and further highlight the strengths and limitations of VQ-Prompt in various self-supervised pre-training scenarios.
NeurIPS_2024_submissions_huggingface
2024
Learning Transferable Features for Implicit Neural Representations
Accept (poster)
Summary: This paper introduces STRAINER, a novel training framework for Implicit Neural Representations (INRs). Unlike traditional INRs, which are trained on a single signal, STRAINER aims to learn transferable features that can be effectively utilized for fitting new signals from a similar distribution. The method involves sharing initial encoder layers across multiple INRs while using independent decoder layers. This setup enables STRAINER to achieve faster convergence and better reconstruction quality compared to baseline models. The paper evaluates STRAINER on various tasks, including image fitting, super-resolution, and denoising, demonstrating its effectiveness in both in-domain and out-of-domain scenarios. Detailed analyses are provided to understand the transferability of the learned features and their implications for INR training dynamics. Strengths: Originality: The paper presents a novel approach to improving the transferability of INRs by sharing initial encoder layers. This is a significant departure from traditional methods that train INRs on a single signal without focusing on feature transferability. Quality: The empirical evaluations are thorough, covering multiple datasets and tasks such as image fitting, super-resolution, and denoising. The experiments convincingly demonstrate the advantages of STRAINER in terms of reconstruction quality and convergence speed. Detailed analysis of training dynamics and feature transferability adds depth to the understanding of STRAINER's performance. Clarity: The paper is well-structured, with clear explanations of the methodology, experimental setup, and results. The inclusion of visualizations and tables enhances the readability and comprehension of the findings. Significance: STRAINER addresses a critical limitation in the current INR landscape by enabling feature transferability, which has broad implications for various applications, including medical imaging and inverse problems. 
This makes the work highly significant for the field. Weaknesses: Limited Theoretical Insights: While the empirical results are strong, the paper could benefit from more theoretical insights into why the shared encoder layers facilitate better feature transferability. A deeper theoretical framework could strengthen the contributions. Stability Issues: The paper mentions occasional instability in the form of PSNR drops during test signal fitting. Although STRAINER recovers quickly, addressing this issue more comprehensively would improve the robustness of the method. Comparison with Other Models: The comparison with other advanced models, such as those using CNNs and Transformers for transfer learning, is not fully explored. Including more comparisons could provide a clearer picture of STRAINER's relative performance. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Can the authors provide more theoretical insights or an intuitive explanation for why sharing initial encoder layers results in better feature transferability? 2. How does STRAINER perform when the distribution of training and test signals significantly differs? Are there specific characteristics of the signals that influence the transferability of features? 3. Could the authors elaborate on the stability issues observed during test signal fitting? Are there specific conditions or types of signals where these issues are more pronounced? 4. How does STRAINER compare with other transfer learning techniques that use CNNs or Transformers? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors have adequately addressed the limitations of their work, discussing the occasional instability observed during test signal fitting and the need for further characterization of the transferred features. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
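The shared-encoder / instance-specific-decoder split described in this review's summary can be sketched in a few lines of NumPy. This is a forward pass only, with assumed layer sizes, a single affine decoder head per signal, and a SIREN-style sine nonlinearity — an illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sine_layer(x, W, b, w0=30.0):
    return np.sin(w0 * (x @ W + b))          # SIREN-style sine activation

# Shared encoder: first few layers, reused across every training signal.
encoder = [(rng.normal(size=(2, 64)) * 0.1, np.zeros(64)),
           (rng.normal(size=(64, 64)) * 0.1, np.zeros(64))]

# Instance-specific decoders: one lightweight affine head per signal.
decoders = [(rng.normal(size=(64, 3)) * 0.1, np.zeros(3)) for _ in range(10)]

def strainer_forward(coords, instance):
    h = coords
    for W, b in encoder:                     # instance-agnostic features
        h = sine_layer(h, W, b)
    W, b = decoders[instance]                # instance-specific mapping
    return h @ W + b                         # predicted RGB per coordinate

rgb = strainer_forward(np.zeros((5, 2)), instance=0)  # 5 (x, y) coords -> 5 RGB values
```

During pre-training, the encoder parameters would receive gradients from all signals jointly while each decoder fits only its own signal; at test time the learned encoder serves as the initialization for a new signal.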
Rebuttal 1: Rebuttal: We thank the reviewer for the careful and thorough evaluation of our work. We appreciate the reviewer's kind comments on STRAINER's originality in learning transferable representations for INRs, the extensive evaluation of STRAINER on datasets, and the detailed analysis provided by STRAINER on feature transferability and its training dynamics. We address the comments and suggestions below. __Theoretical insights: Why the shared encoder layers facilitate better learning.__ Suppose we have an INR consisting of $L$ fully connected layers with a non-linearity $\sigma$ after each layer. Assume that $\sigma$ is a piecewise-linear approximation of the sine function. For any piecewise-linear non-linearity, the INR can be considered a continuous piecewise-affine operator, i.e., the INR subdivides the input space into pieces or regions $\omega \in \Omega_L$, where $\Omega_L$ is the input-space partition induced by all $L$ layers. Each region/piece is mapped to the output in an affine manner. The subdivision $\Omega_L$ happens in a layer-wise fashion, e.g., layer $\ell$ contributes $\partial \Omega_\ell$ to the overall subdivision [13]. An important outcome of this layer-wise subdivision is that shallower layers learn coarser partitions of the input domain, and therefore learn lower-frequency features [55]. Deeper layers, however, can be more localized and therefore learn higher-frequency or less symmetric features. The motivation behind STRAINER is to learn, in the shallower layers, a joint partition that can be further subdivided by the deeper layers into an optimal input-space partition, resulting in a low-error continuous piecewise-affine operator. __Unstable test-time optimization for STRAINER.__ We attribute the instability of STRAINER at test-time fitting to the underlying SIREN model.
In SP Figure 7 of the supplementary, reconstruction-quality (PSNR) curves of SIREN and STRAINER plotted across time reflect the periodic instability. However, this can be fixed by using a learning-rate (LR) scheduler. As shown in RT Figure 1(b), we apply an exponential multiplicative LR scheduler that decays the learning rate by a factor of 0.4, resulting in smooth and stable learning for STRAINER-10 (shown in green). __Training distribution of images.__ We conduct experiments on in-domain (ID) and out-of-domain (OD) image fitting, where the shared encoder is trained on CelebA-HQ [16] and tested on CelebA-HQ, AFHQ [5], and OASIS-MRI [12,21]. We observe superior performance of STRAINER compared to baselines (see RT Table 1) for in-domain image fitting. We also find that STRAINER captures highly transferable features. For OD image fitting on AFHQ, we find that STRAINER reconstructs OD AFHQ almost comparably to ID fitting on AFHQ, differing by a single digit in PSNR (MP Tables 1, 2). For OASIS-MRI, we find that STRAINER trained on CelebA-HQ surprisingly outperforms even ID fitting on OASIS-MRI. To further assess the effect of the training distribution on STRAINER, we train the shared encoder on Flowers [45] and Stanford Cars [46], as they have different spatial distributions of color and high frequencies compared to AFHQ and OASIS-MRI. We find that STRAINER still learns transferable features and achieves PSNR comparable to the shared encoder trained on CelebA-HQ [16]. As shown in RT Table 1, in-domain fitting on AFHQ [5] has the highest reconstruction PSNR, followed by STRAINER trained on CelebA-HQ, Flowers, and Stanford Cars (in decreasing order, presented in RT Table 1). The results suggest that STRAINER trained on natural images yields highly transferable representations that in turn generalize well across natural images. We propose this as a direction for future work.
__Comparison with other transfer learning techniques.__ We compare STRAINER with other meta-learning and transformer baselines, namely Meta-Learned 5K [37], TransINR [47], and IPC [48] (with and without test-time optimization), for image fitting. We evaluate STRAINER and the baselines on CelebA-HQ (in-domain), AFHQ (out of domain), and OASIS-MRI (out of domain). As seen in RT Table 1, STRAINER significantly outperforms IPC [48], TransINR [47], and the other baselines by ~3-5+ dB in PSNR. While pretrained TransINR and IPC yield good PSNR at iteration 0, STRAINER catches up within little time and then consistently outperforms them up until the end of 2000 iterations. We refer the reviewer to a more detailed summary of baseline results in our response to Reviewer FYQ8, "Comparison of baselines" and "Metalearning baseline for Kodak". Additionally, please also refer to our response to Reviewer HDgr, "New Out-of-Distribution generalization experiments", for a detailed evaluation of out-of-domain generalizability. Due to limited time, our experiments have been carefully chosen to address the concerns of all reviewers and are indicative of STRAINER learning highly transferable features for tasks such as image representation, with minimal training time and as few as 10 images. A comparison of training statistics and model footprints is given in RT Table 3. We are happy to engage more and provide more findings during the discussion period. --- Rebuttal Comment 1.1: Comment: I think the authors have addressed the issues I raised. --- Rebuttal 2: Comment: We sincerely thank the reviewer for their time, effort, and positive feedback in evaluating our work! We request the reviewer to kindly update their score if our rebuttal experiments and analysis have satisfactorily addressed their comments and questions.
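The layer-wise subdivision argument in the rebuttal above (shallow layers induce coarse partitions that deeper layers refine) can be checked numerically with a ReLU network, the simplest piecewise-linear nonlinearity. This is a toy illustration under assumed widths, not tied to the paper's models.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(1, 8)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 8)), rng.normal(size=8)

x = np.linspace(-2.0, 2.0, 2000)[:, None]          # dense 1-D input grid
pre1 = x @ W1 + b1                                  # layer-1 pre-activations
pre2 = np.maximum(pre1, 0.0) @ W2 + b2              # layer-2 pre-activations

# The sign pattern of the pre-activations identifies the affine piece an
# input falls in; counting distinct patterns counts regions on the grid.
pat1 = pre1 > 0
pat2 = np.concatenate([pat1, pre2 > 0], axis=1)

regions_layer1 = len({tuple(r) for r in pat1})
regions_layer2 = len({tuple(r) for r in pat2})
assert regions_layer2 >= regions_layer1            # deeper layers only refine
```

Each layer-2 pattern extends a layer-1 pattern, so the second layer can only split existing regions further, mirroring the coarse-to-fine subdivision the rebuttal describes.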
Summary: This paper proposes STRAINER, where implicit neural representations (INRs) are trained for a class of similar objects, with the INRs for all objects having a shared, instance-agnostic encoder and instance-specific decoders. The paper also examines INR training dynamics. STRAINER is evaluated on 2D image regression and fitting MRI images. Strengths: Proposes the novel idea of sharing the first $K$ layers of the INR among all INR instances for a class of objects, while having instance-specific decoders for each instance, with some improvements in performance over SIREN/meta-learning baselines. The authors include many experimental details so the experiments should be easy to reproduce. The paper is well-written and easy to understand. Weaknesses: The paper only compares against one meta-learning baseline, ignoring the large amount of related literature (Trans-INR [1], Instance Pattern Composer (IPC) [2], PONP [3], Locality-aware generalizable implicit neural representation [4], etc.) that also looks at generalizable INRs. In particular, the results of this paper seem incongruent with the results of Instance Pattern Composers, which shows that modulation is most effective in the second layer of an INR (i.e., the second layer should be instance-specific while all other layers should be instance-agnostic). The paper has limited experimental evaluation, limited only to image reconstruction and MRI reconstruction. The paper did not investigate the more complicated tasks that were also investigated by the baselines (e.g. LearnIt), such as novel view synthesis or CT reconstruction. Results on some tasks are also only comparable to the baseline (e.g. superresolution), and some tasks do not include all the baselines (e.g. the Kodak task does not have the LearnIt baseline). No ablation study is done on some key hyperparameters, such as the number of shared encoder layers $N$. 
In light of other works on generalizable INR, the contribution of this work is limited since it does not compare to many of the other works. References 1. Chen, Yinbo, and Xiaolong Wang. "Transformers as meta-learners for implicit neural representations." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022. 2. Kim, Chiheon, et al. "Generalizable implicit neural representations via instance pattern composers." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. 3. Gu, Jeffrey, Kuan-Chieh Wang, and Serena Yeung. "Generalizable Neural Fields as Partially Observed Neural Processes." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. 4. Lee, Doyup, et al. "Locality-aware generalizable implicit neural representation." Advances in Neural Information Processing Systems 36 (2024). Technical Quality: 1 Clarity: 3 Questions for Authors: 1. While in a different setting, the IPC paper finds that the optimal instance-specific layer is the second layer of an INR. What accounts for the discrepancy between this paper's findings (the instance-specific layers should be the last few layers) and the findings of the IPC paper? 2. Why is the LearnIt baseline not used for the Kodak task? Confidence: 4 Soundness: 1 Presentation: 3 Contribution: 1 Limitations: Limitations are discussed in the paper. Negative societal impact does not seem to be a concern for this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the careful and thorough evaluation of our work. We appreciate the kind comments on the novelty of our method and its performance over SIREN/meta-learning baselines. We welcome the reviewer's suggestion of comparing with more baselines, are thankful for the references provided, and will add them in Section 2, Background. __Comparison of baselines:__ Prompted by the primary concern of comparisons to baselines, we conduct a thorough image-fitting comparison of STRAINER with IPC [48] and TransINR [47], with and without test-time optimization, as detailed below. As seen in RT Table 1, STRAINER significantly outperforms IPC [48], TransINR [47], and other baselines by __~3-5+ dB__ in PSNR. We attribute the superior performance of STRAINER to the better input-space partitioning learned by STRAINER as a consequence of sharing the initial layers, coupled with the instance-specific decoder, which provides an added degree of freedom allowing STRAINER to better fit an unseen image with different morphology. While pretrained TransINR and IPC yield good PSNR at iteration 0, STRAINER catches up within little time and then consistently outperforms them up until the end of 2000 iterations. We also note that STRAINER features are more transferable for out-of-domain image fitting, as shown for AFHQ and OASIS-MRI, even when the STRAINER encoder has been pre-trained on domains such as Flowers [45] and Stanford Cars [46]. In-domain fitting on AFHQ has the best reconstruction quality, followed by STRAINER trained on CelebA-HQ, Flowers, and Stanford Cars (in decreasing order, presented in RT Table 1). __More details on our baselines.__ We use the publicly available implementation of IPC [48], which also includes the baseline for TransINR; both use FFNets [38]. While STRAINER uses SIREN [31], we remark that FFNets and SIREN offer comparable representation capabilities for image fitting.
We train the TransINR[47] and IPC[48] baselines on 14,000 images from CelebA-HQ for 300 epochs[48]. To verify successful training of the models, we ensure that the PSNR on the test set is in near agreement with that reported in their respective papers, barring changes due to the number of data samples. We also ensure that all baseline and STRAINER MLPs have 6 layers and the same number of parameters. All models are evaluated on our test images for 2000 iterations. Lastly, for the OASIS-MRI dataset, we run all baselines using 3-channel MRI images and report metrics in RT Table 1. __STRAINER and IPC.__ In MP Figure 6.iv, we visualize the layerwise subdivisions/partitions of the input space of INRs. We observe that partitions evolve from coarse to fine as we go deeper into the INR. Initial layers provide more global (low-frequency), transferable features. The deeper layers of the INR produce a more complex subdivision of the input space, giving rise to more local features corresponding to high-frequency detail in the image. As mentioned in MP Sec. 3.1, we motivate sharing layers in STRAINER to learn representations that give rise to partitions that generalize well across samples. IPC[48] proposes that the second layer in an INR be instance-specific while keeping all the remaining layers instance-agnostic. Modulating the second layer's weights would produce non-local changes, resulting in suboptimal reconstruction. Further, IPC's second-layer weights obtained from a Transformer model may also affect the location of the optimal instance-specific layer. We further adapt IPC's design of an instance-specific second layer in STRAINER and compare it to other STRAINER models. As shown in RT Figure 1(b), we see the best reconstruction from STRAINER-10 (ours), then STRAINER with an instance-specific second layer, and then SIREN, which validates our above intuition on input-space division for STRAINER and IPC. 
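Since the comparisons above and below are reported in PSNR (dB), a minimal reference implementation may be helpful; this is the standard definition, not the authors' code, and it assumes images scaled to [0, 1]:

```python
# PSNR in dB: standard definition, assuming pixel values in [0, 1].
import numpy as np

def psnr(x, y, max_val=1.0):
    mse = np.mean((np.asarray(x, dtype=float) - np.asarray(y, dtype=float)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a constant error of 0.1 gives MSE = 0.01, i.e. 20 dB.
a = np.zeros((8, 8))
b = np.full((8, 8), 0.1)
print(round(psnr(a, b), 2))  # 20.0
```

Under this metric, a "3-5+ dB" gap corresponds to roughly a 2x-3x reduction in root-mean-square reconstruction error.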
__Meta-learning baseline for Kodak:__ STRAINER learns higher-quality reconstructions, as indicated by image metrics (PSNR, SSIM[54], and LPIPS[53]), compared to the LearnIt baseline, as shown in RT Table 2. We train all models on CelebA-HQ images of the same resolution as the Kodak[1] images. STRAINER performs better than SIREN and Meta-Learned 5K[37], with +2 dB in PSNR for a network width of 256 and ~+3 dB for a network width of 512. STRAINER (width=256) performs similarly to SIREN (width=512), further attesting to the representation power of STRAINER's learned features. We also remark that the training time for STRAINER's shared encoder at high resolution is negligible compared to meta-learning models for high-resolution image fitting, highlighting that STRAINER can be easily adapted to high-resolution and out-of-domain images. __Ablation study on key hyperparameters:__ We refer the reviewer to SP Figure 7. We show an ablation study on the number of layers shared in the STRAINER encoder and its ability to reconstruct the signal. We observe a progressive increase in reconstruction quality (PSNR) as we increase the number of shared layers. __Evaluating STRAINER on complex (3D) signals.__ Please refer to our response to Reviewer HDgr, “Evaluating STRAINER on 3D signals”. Due to limited time, our experiments have been carefully chosen to address the concerns of all reviewers and are indicative of STRAINER learning highly transferable features for tasks such as image representation, with minimal training time and as few as 10 images. A comparison of training statistics and model footprints is given in RT Table 3. We are happy to engage more and provide more findings during the discussion period. --- Rebuttal Comment 1.1: Comment: I appreciate the authors’ detailed response to the points raised by the reviewers. 
Most of my main concerns, such as the comparisons to baselines like IPC, have been addressed by the authors, except the limitation that the method is mostly limited to signal fitting and is either inapplicable (e.g. NeRF) or has mixed performance (e.g. super-resolution, denoising) on inverse problems. Therefore, I would like to raise my rating to ‘borderline accept’. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for their time and effort in evaluating our work. Indeed, adapting STRAINER to 3D applications such as NeRF is an interesting future direction for us. We greatly appreciate the positive assessment and feedback on our work!
Summary: The research focuses on learning transferable features using a SIREN model, where encoder and decoder sub-networks are utilized to optimize weights for encoding and decoding. The paper evaluates the main claims made in the abstract and introduction, confirming that the goals are clearly stated and the method effectively addresses them, supported by results. The study discusses fitting unseen signals using optimized encoder weights and highlights the fast convergence of STRAINER to low and high frequencies during training. Experiments involve using the SIREN MLP model with sinusoid nonlinearities, along with different versions of STRAINER compared to baselines, all with equal parameters for fair comparison. Datasets such as CelebA-HQ, AFHQ, and OASIS-MRI are used for experimentation, with the shared encoder of STRAINER trained on 10 images from each dataset for shared initialization in subsequent tests. The research also explores image fitting tasks in-domain and out-of-domain, demonstrating the effectiveness of STRAINER compared to other models, specifically showcasing superior performance in various scenarios like super-resolution and denoising, emphasizing the importance of learned transferable features. Strengths: - The paper introduces the concept of STRAINER, which demonstrates fast learning of signals at test time by capturing high frequencies efficiently. - It highlights the adaptability of STRAINER's initialization for fitting new signals, showcasing better performance compared to Meta-learned 5K in learning input space partitioning. - Improved in-domain performance with STRAINER compared to meta-learning methods for INRs, providing better input space subdivision that aids in faster convergence on inverse problems like super-resolution and denoising. Weaknesses: - The paper missed mentioning a couple of important papers on meta-learning INRs, say Functa [A] and Spatial Functa [B]. 
The latter in particular achieves good PSNR as well as classification accuracy, although the methods and the focus are different. [A] E Dupont et al. Your data point is a function and you can treat it like one. [B] M Bauer et al. Scaling Functa to ImageNet Classification and Generation. - The method is only tested on image datasets but not 3D shapes or radiance fields, where INRs are more commonly applied. Technical Quality: 3 Clarity: 3 Questions for Authors: - Besides the improvement in PSNR, is the generalization ability expected to help with completion of the signal (i.e. inpainting)? - It is mentioned that the encoder trained on CelebA-HQ works even better than in-domain pretraining on the AFHQ (cats) and OASIS-MRI datasets. Have you tried out-of-distribution generalization trained on other datasets as the source domain? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are discussed in Section 5.1. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the careful and thorough evaluation of our work. We appreciate the kind comments on STRAINER being able to learn fast at test time, its efficient recovery of high-frequency components, and its ability to learn a more transferable input-space subdivision. We address the comments and suggestions below. __Citing additional papers on meta-learning INR.__ We thank the reviewer for the additional references. We find them relevant to our literature review and will include them in *Section 2, Background*. __Evaluating STRAINER on 3D signals.__ STRAINER is a general-purpose transfer learning framework which can be used to initialize INRs for regressing 3D data such as occupancy maps, radiance fields, or video. To demonstrate the effectiveness of STRAINER on 3D data, we performed the following OOD generalization experiment. We pre-train STRAINER on 10 randomly selected ‘Chair’ objects from the ShapeNet[44] dataset. At test time, we fit the ‘Thai Statue’ 3D object[52]. STRAINER achieves a 12.3% relative improvement in IOU compared to random initialization for a SIREN architecture – in 150 iterations, STRAINER-10 obtains an IOU of 0.91 compared to an IOU of 0.81 without STRAINER-10 initialization. We present visualizations of the reconstructed Thai Statue in RT Figure 1(c). Upon qualitative evaluation, we see that STRAINER-10 captures ridges and edges better and faster than SIREN. We will add the results to the main text in Sec. 4 and compare with the LearnIt[37] and IPC[48] baselines. Fitting signals (2D images, 3D shapes, video) with STRAINER is straightforward and intuitive. However, extending STRAINER to inverse problems in 3D, such as learning Neural Radiance Fields from limited observations or 3D Gaussian Splatting, is non-trivial. We consider this an exciting future direction and thank the reviewer for their kind suggestions. 
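For transparency, the 12.3% relative improvement in IOU quoted above follows directly from the two reported IOU values:

```python
# Sanity check of the reported relative IOU improvement:
# STRAINER-10 initialization reaches IOU 0.91 vs 0.81 for random init.
iou_strainer, iou_random = 0.91, 0.81
rel_improvement = (iou_strainer - iou_random) / iou_random * 100
print(round(rel_improvement, 1))  # 12.3 (%)
```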
__New Out-of-Distribution generalization experiments.__ Following the reviewer's recommendations, we have trained STRAINER's shared encoder on the Flowers[45] and Stanford Cars[46] datasets and report reconstruction quality (PSNR) when testing on the AFHQ[5] dataset in RT Table 1. We specifically chose Flowers[45] and Stanford Cars[46] as they have a different spatial distribution of color and high frequencies compared to AFHQ and OASIS-MRI, which allows us to evaluate out-of-domain image fitting more thoroughly. We find that STRAINER is able to learn transferable features and achieves PSNR comparable to the shared encoder trained on CelebA-HQ[16]. As shown in RT Table 1, in-domain fitting for AFHQ has the highest reconstruction PSNR, followed by STRAINER trained on CelebA-HQ, Flowers, and Stanford Cars (in decreasing order). The results suggest that STRAINER is capable of capturing transferable representations that generalize well across natural images. We propose this as a direction for future work. We also highlight RT Table 2 (and MP Table 3), where we train on CelebA-HQ but evaluate out-of-domain on high-resolution Kodak images[1] from the airplane and statue categories; STRAINER substantially outperforms the baseline models or is very comparable. Further, we also compare STRAINER with recent work on generalizable INRs [47, 48]. We show in RT Table 1 that STRAINER outperforms meta-learning as well as transformer baselines such as TransINR[47] and IPC[48] for out-of-domain fitting on the AFHQ and OASIS-MRI datasets, even when STRAINER is trained on unrelated source domains such as Stanford Cars. Due to limited time, our experiments have been carefully chosen to address the concerns of all reviewers and are indicative of STRAINER learning highly transferable features for tasks such as image representation, with minimal training time and as few as 10 images. 
A comparison of training statistics and model footprints is given in RT Table 3. We are happy to engage more and provide more findings during the discussion period. __Generalization ability of STRAINER for inpainting.__ We thank the reviewer for raising this important point. Successful recovery for inverse problems such as single-image inpainting relies on the inductive bias of the model. STRAINER's rich representation, encoded in the shared initial layers of an INR, captures a prior over the data, allowing it to converge rapidly. However, since it is based on SIREN, its inductive bias is the same as SIREN's, which may result in similar performance. We expect results similar to denoising and super-resolution, as reported in MP Table 5, depending on the fidelity of the available signal. Further, for a non-trivial problem such as inpainting, conventional INRs may require support from added regularization terms if they lack strong inductive biases that favor inpainting, as shown by INRR [49]. Additionally, convolutional architectures such as the Deep Image Prior [50] leverage self-similarity and are successful at inpainting. While INRs do not exhibit a locality bias, models such as Wire[51] have a strong inductive bias due to the Gabor wavelet, which favors solving inverse problems. We are curious to assess whether STRAINER can capture any favorable properties that benefit inpainting, and will explore this in future efforts to understand STRAINER more deeply. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I wonder if Spatial Functa should be added as a baseline in the experiments in rebuttal PDF Table 1. In terms of PSNR this method is also competitive. --- Reply to Comment 1.1.1: Comment: Functa[A] proposes to use neural fields (e.g. INRs) as data points themselves for deep learning tasks such as classification and generation, but is limited in its ability to represent complex datasets such as ImageNet. 
Spatial Functa[B] addresses this limitation by using a spatially arranged latent representation of neural fields. At their core, both papers use a separate _model_ to modulate the weights of an INR that represents an instance of the data, which is later used for downstream deep learning tasks. STRAINER (our work) provides a framework for learning powerful and transferable features for INRs, capturing shared low-frequency structure across images by sharing the initial _encoder_ layers of an INR and having signal-specific _decoder_ layers. Our work addresses feature transferability in INRs - enabling INRs to fit unseen in-domain and out-of-domain signals faster and with better quality. As you previously mentioned, and we agree, Functa[A] and Spatial Functa[B] have a different focus from our work (STRAINER). Further, since the code for Spatial Functa is not publicly available, and due to the limited time remaining in the discussion period, we are unlikely to be able to run Spatial Functa as an additional baseline. Similar to Spatial Functa, we find TransINR[47] and IPC[48] to be reasonable comparisons, since they too have separate _models_ (transformer networks) that modulate the weights of the INR. We report our comparison in the rebuttal PDF, RT Table 1. We show that while TransINR[47] and IPC[48] are trained using 14,000 images for 1 day, STRAINER (our work) outperforms them significantly ($\approx$ 5-7+ dB) for in-domain and out-of-domain image fitting with just 10 training images and 24 seconds of training time. We also show a detailed comparison of training data size, training time, and number of learned parameters in RT Table 3, and note that STRAINER uses orders of magnitude less training data and training time. We show extensive comparisons with other baselines in RT Table 1 and emphasize STRAINER's superior performance. We also show STRAINER on other modalities such as 3D occupancy maps. 
We request the reviewers to update their scores if our rebuttal experiments have addressed their questions and comments satisfactorily.
Rebuttal 1: Rebuttal: We are thankful to the reviewers for their careful and thorough evaluation of our work. We first summarize the key contributions of our work and then provide the additional experiments and results requested by the reviewers. __Summary of paper:__ STRAINER provides a framework for learning powerful and transferable features for implicit neural representations (INRs) by sharing a set of initial encoder layers across multiple INRs trained on a small set of prototypical signals (e.g. images, 3D occupancy). At test time, for fitting an unseen signal, a STRAINER INR is initialized with the learned encoder and a randomly initialized signal-specific decoder. Our work addresses feature transferability in INRs - enabling INRs to fit unseen in-domain and out-of-domain signals faster and with better quality. We empirically show that STRAINER learns a powerful representation from just 10 images and 24 seconds of training time (measured on an Nvidia A100) and significantly outperforms models such as Siren and meta-learning-based learned initializations[37] for image fitting - making it highly efficient for learning transferable features in data-scarce scenarios. Current data-driven learned-initialization methods such as meta-learning or hypernetworks rely on large models, big datasets, and long and potentially unstable training regimes[40]. Further, we provide detailed visualizations of STRAINER's learned representations, training dynamics, and input-space partitioning to further validate that STRAINER learns high-frequency details faster and leads to better convergence. Using an approximation of SplineCAM[13], we visualize the input-space partitioning, which evidences that STRAINER learns more transferable features. 
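The sharing scheme summarized above can be sketched as follows. This is an illustrative NumPy mock-up, not the authors' implementation: the layer sizes, number of shared layers, and the omega_0 frequency value are assumptions.

```python
# Sketch of STRAINER-style sharing for SIREN INRs: one encoder (the first
# few sine layers) is shared across several training images, each of which
# gets its own decoder head. All sizes here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
OMEGA0 = 30.0  # typical SIREN frequency scaling

def siren_weight(in_dim, out_dim):
    # SIREN-style uniform init, scaled for sine activations
    bound = np.sqrt(6.0 / in_dim) / OMEGA0
    return rng.uniform(-bound, bound, size=(in_dim, out_dim))

def forward(x, encoder, decoder):
    h = x
    for W in encoder:
        h = np.sin(OMEGA0 * (h @ W))  # sine nonlinearity per layer
    return h @ decoder                # linear signal-specific head

# Shared encoder: 5 layers mapping 2-D coordinates -> 64-D features
encoder = [siren_weight(2, 64)] + [siren_weight(64, 64) for _ in range(4)]
# One decoder head per training image (3 images here, RGB output)
decoders = [siren_weight(64, 3) for _ in range(3)]

coords = rng.uniform(-1, 1, size=(100, 2))
outputs = [forward(coords, encoder, D) for D in decoders]

# Test time: reuse the learned encoder with a fresh random decoder
test_decoder = siren_weight(64, 3)
test_out = forward(coords, encoder, test_decoder)
print(test_out.shape)  # (100, 3)
```

In an actual training loop, the shared encoder would accumulate gradients from all per-image reconstruction losses, while each decoder is updated only by its own image.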
We also explore the nature of the prior captured by STRAINER, which leads to rapid and better convergence for inverse problems such as super-resolution and denoising, and showcase its applicability for enabling INRs to learn powerful representations in medical imaging as well. __Summary of Reviews.__ We thank the reviewers for their careful and thorough evaluation of our work, and for providing relevant references. Our reviewers requested that we - Conduct a detailed evaluation against baselines from recent INR literature such as TransINR[47] and IPC[48]. - Showcase STRAINER on tasks beyond image fitting, such as 3D data and inpainting. - Offer theoretical insights into how STRAINER learns transferable features and address the difference between STRAINER and IPC for generalization. - Address the distribution of training data, the out-of-domain generalizability of STRAINER, and the stability of STRAINER while fitting a new signal. __Summary of additional experiments and analysis__ - We run multiple baselines with recent transformer-based models such as TransINR[47] and IPC[48] for in-domain (ID) and out-of-domain (OD) image fitting and show that STRAINER outperforms all baselines across tasks. - We run OD experiments on 3-channel OASIS-MRI images (compared to the previously reported single-channel MRI images) to facilitate fair comparison across the newer TransINR and IPC baselines. - We show additional experiments where the STRAINER encoder is trained on different source datasets, Flowers[45] and Stanford Cars[46], showcasing the out-of-domain generalizability of STRAINER. - We provide a more detailed theoretical understanding of the transferable features learned by STRAINER and address the difference between STRAINER and IPC for generalization. - We expand STRAINER beyond 2D images and inverse problems, applying it to 3D occupancy maps and showing that STRAINER learns 12.3% better within a limited number of iterations. 
For conciseness and reading clarity, we define the following acronyms to refer to figures and tables from their respective sources. - RT: the rebuttal PDF attached to the author rebuttal. - MP: main paper. - SP: supplementary material / appendix attached to the main paper. __References__ [44] A. Chang et al., ShapeNet: An Information-Rich 3D Model Repository, CoRR 2015. [45] Kaggle, Flowers Dataset. [46] Kaggle, Stanford Cars Dataset. [47] Yinbo Chen et al., Transformers as Meta-Learners for Implicit Neural Representations, ECCV 2022. [48] Chiheon Kim, Doyup Lee et al., Generalizable Implicit Neural Representations via Instance Pattern Composers, CVPR 2023. [49] Zhemin Li et al., Regularize Implicit Neural Representation by Itself, CVPR 2023. [50] Dmitry Ulyanov et al., Deep Image Prior, CVPR 2018. [51] Vishwanath Saragadam et al., WIRE: Wavelet Implicit Neural Representations, CVPR 2023. [52] Stanford 3D Scans Repository, Thai Statue. [53] Richard Zhang et al., The Unreasonable Effectiveness of Deep Features as a Perceptual Metric, CVPR 2018. [54] Zhou Wang et al., Image Quality Assessment: From Error Visibility to Structural Similarity, IEEE Trans. on Image Processing 2004. [55] Randall Balestriero et al., A Spline Theory of Deep Learning, PMLR 2018. Pdf: /pdf/c339e27749240427ae6676fa5ff36a25cb534f56.pdf
NeurIPS_2024_submissions_huggingface
2024
Many-Shot In-Context Learning
Accept (spotlight)
Summary: This paper explores the effectiveness of in-context learning with hundreds to thousands of examples, bringing the number of examples closer to the range one might use for supervised training methods. Experiments are performed on a large number of tasks and benchmarks using Gemini 1.5 as the LLM, in each case giving it a prompt containing a varying number of dataset examples and observing its performance as the number of examples increases. These robustly demonstrate the effectiveness of using more examples, at times beating fully-supervised models on the same data. In addition to using the ground truth examples, two additional prompting methods are evaluated: Unsupervised ICL, which only adds additional inputs to the prompts, and Reinforced ICL, which adds inputs along with machine-generated responses filtered by correctness of result; both techniques limit the amount of "ground truth" responses required for adding prompt examples, and are also found to perform well for all tasks. Strengths: * These experiments provide a very good overview of the effects on performance of including these numbers of examples in prompts, surveying the effects on a wide set of tasks. * Additional studies, particularly sec 4, are interesting, finding some behaviors that parallel those seen in supervised training, in the context of ICL with substantive benchmark tasks. * Unsupervised ICL and Reinforced ICL are useful techniques to limit the amount of gt responses, and are shown to be effective in these evaluations Weaknesses: * Only Gemini 1.5 was explored; though this was discussed by the authors in the discussion and limitations section, I think it's a significant weakness as it limits findings of many-shot ICL to describing the behavior of this particular model, not the typical range of performance behaviors for different systems with this approach, and whether these are automatic simply by extending context, or if there are model differences that can impact this technique. 
* I didn't see anything on the computational costs or runtime differences varying K, which would provide a fuller picture of the behavior of many-shot ICL beyond just its end performance. * It's a lot of information to cram into the allotted space, and many sections seem terse, with the connections between experiments and how they contribute to the overall understanding often left hanging. Technical Quality: 3 Clarity: 3 Questions for Authors: While reading this paper I ended up with lots of comments and questions that I've listed below. Aside from profiling computation increases as mentioned above, there are a lot of particulars around the individual experiments that I had questions about, as well as questions on what might be causes and effects in addition to raw performance numbers. - Including more examples has two effects: one of increasing the information and distinct samples used, but another of increasing the context length. When viewed analogously to a training loop, a supervised optimization loop will repeat examples between epochs if there are fewer samples than steps. It would be interesting to start to separate these two effects, perhaps by extending the context length by repeating the same examples instead of adding new examples. For example, when going from 20 to 200, repeating the same 20 examples 10 times in the prompt. Is the resulting performance closer to using 20 or 200, or in between, and for which tasks? - The supervised finetuning comparison in 4.3 is only on the machine translation task; while this is a good experiment that was on my mind from the start, the fact that it was evaluated on only one task limits the conclusion. - The Reinforced ICL description could have more details, particularly since it is highlighted as one of the main contributions. 
The description "we select rationales that obtain the correct final answer" makes sense as a general filtering procedure, but I'm not sure how many rationales are generated or selected per problem (though I assume only one is used for each k-shot prompt), and it's not clear what happens if no generated rationales also contain the correct answer --- are these inputs thrown away entirely, and if so does this also have a beneficial or detrimental filtering effect on the inputs used? - sec 2.3 planning logistics: I don't agree that the figure demonstrates significant increase for many-shot as stated in this section. Near-maximum performance of just under 35% success is achieved by 10 examples, with a possible uptick at 800, as described in the caption. But this uptick is about as large as the downtick at 20, and is towards the end of the plot without enough points to establish a clear trend. It's unclear to me if this is due to the larger number of examples or other effects (e.g. which examples are included, as 800 will have a higher chance of including the most useful examples, or random variation). - Related: I'm not sure what error bars indicate; other figures' captions say it is stdev, but I don't see this described in the text. - sec 3.1 Fig 5: I'm a little unclear on what K=4 corresponds to in the ICL Ground Truth case. Since the Unsupervised ICL prompt also includes 4 gt examples at the end, does the ICL Ground Truth prompt contain 4 examples in addition to the final 4 (for 8 total), or only these final 4? That is, are these 4 at the end of the prompt counted in the K or not, and is this the same between all three prompt methods? If these final 4 are included in K, then the three methods should coincide at K=4. If they are not included, then ICL Ground Truth should have 8 total gt examples (K=4 + the four at the end of the prompt) -- is this the case? 
I also wasn't able to find example prompts for reinforced or gt ICL for tasks corresponding to the ones exploring unsupervised prompts. - fig 6: reinforced ICL at K=250 is missing from this figure - sec 3.3 l.194 "to reduce the impact of false positives, ..." --- why is this important? does it obscure trends in the results by having too much noise from chance, for example? - sec 4.1 Fig 8: really interesting: the shape of these curves as a function of K is basically what one would expect to see for a supervised model preinitialized with the original task labels, as a function of the training step number. Not sure if this would be worth mentioning in the text, maybe also showing a supervised training run exhibiting similar behavior for qualitative comparison? - sec 4.3 comparing to SFT: this section lacks details on how supervised fine tuning was implemented: in particular, what was the base model, which weights/adapters are trained or frozen, and how many epochs over the training data were performed? And why were these choices made in particular? There is also a description of trade-offs between training and inference costs of the two approaches, but no estimates on what the computation resources for each are. - sec 4.4 on NLL: these are some interesting behaviors, though I agree it's unclear what to make of them exactly, but I appreciate presenting the observations. As for why GT ICL may be lower NLL than Reinforced ICL: could this be that Reinforced ICL specifically rejects some high-prob model generated responses that don't end up with the right answers, raising its NLL from this filtering step? whereas in contrast, if the gt or similar same-source examples were present in the pretraining data, they would be expected to have relatively low NLL due to being drawn from that datasource? 
- xsum weird dates after K=50: xsum was taken from webarchive articles, basically mapping html body text to title --- I wonder if somehow the model learned the task was to recover the webarchive page title and got confused with other header data including the webarchive last-updated time, from the pretraining data? the 2016 timestamps are roughly around the document snapshot times in the xsum urls, e.g. if webarchive processing included writing a header with the timestamps, then title, then rest of the page, for example. this is also compatible with the discussion on finding the relevant parts of pretraining for the task, in the Unsupervised ICL intro at sec 3 l.144. - xsum could also use unsupervised ICL -- what is its performance just adding articles without summaries? For that matter, machine translation could also use unsupervised ICL just with the English phrases. I'd imagine it wouldn't help at all for MT but it's not impossible, and evaluating it might still be interesting just to verify this. In general using all techniques for all tasks would help round out the study (only reinforced ICL wouldn't make sense for tasks where there is no succinct way to do correctness filtering). Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: With the possibility of including more data, it is now more likely to include noisy or incorrect data (either accidentally or adversarially). This might also be mentioned and discussed in the limitations section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the detailed comments and questions, which we address below; addressing them will strengthen our work. In the **[rebuttal pdf](https://openreview.net/attachment?id=4BVPccQZEN&name=pdf)**, we have added results on *runtime differences, many-shot performance of 1.5 Flash and frontier LLMs, an ablation of the impact of new information vs context length, re-evaluated results on Logistics, and an analysis of hallucination on xsum*. Our detailed response follows: > **Only Gemini 1.5 was explored .. are there model differences that impact many-shot** Indeed, our work serves as an existence proof for the huge potential of many-shot ICL. Nevertheless, we provided preliminary results for GPT-4-Turbo and Claude-3-Opus in **Figure A.2**, indicating that different models benefit from many-shot ICL to varying degrees. We also added **Figure 1** in the rebuttal pdf to evaluate many-shot ICL performance for Gemini 1.5 Flash, a smaller LLM than 1.5 Pro, and show that it can match or surpass Claude-3-Opus and GPT-4-Turbo with enough shots, despite having worse few-shot performance. We'll move this to the main paper. Recently, follow-ups [1, 2, 3] have demonstrated many-shot ICL with other open-weights and closed-source models on different tasks. Our work contributes to this growing body of evidence, and contributes several analyses of the phenomenon. [1] Many-Shot ICL in Multimodal Foundation Models. Jiang et al, 2024. [2] Many-Shot ICL for Molecular Inverse Design. Moayedpour et al, 2024. [3] ICL with Long-Context Models: An In-Depth Exploration. Bertsch et al, 2024. > **Runtime differences varying K to provide a fuller picture of many-shot ICL .. profiling computation increases** Great suggestion! To show runtime differences, we added **Figure 2** in the rebuttal pdf showing per-generation runtime, averaged across the test set and multiple seeds, for many-shot ICL on summarization (500-shot) and sequential parity prediction (8192-shot). 
With KV caching enabled (the default for long-context servers), runtime increases linearly with a large number of shots, as opposed to quadratically for self-attention: doubling the number of shots nearly doubles the runtime. However, for a small number of shots, runtime is nearly constant. Explanation: when computing the next token, we still have to attend to the fixed many-shot prompt, even if the KV is cached. When the number of generated tokens is much smaller than the many-shot prompt length, generating each new token still costs time linear in the prompt length, which explains the observed runtime for a large number of shots. We hypothesize that up to a token length of 32K, the entire KV cache fits into TPU HBM, which roughly means that next tokens are computed with a constant memory load. > **xsum weird dates after K=50 .. if somehow the model learned the task was to recover the webarchive page title** Our analysis suggests that this hypothesis is likely to be true! Specifically, we extracted the hallucinated years from XSum summaries and plotted their histogram density in **Figure 3**. Remarkably, more than 95% of these dates indeed lie within the range 2014-2017, suggesting that the model might indeed be retrieving additional information about the webarchive last-updated time. > **Including more distinct examples increases information, but also context length .. separate these effects by extending context length by repeating same examples** We ran this experiment on low-resource MT by repeating 25 examples several times to create many-shot prompts with up to 1000 examples (shuffled ordering) and added the results in **Figure 4** in the rebuttal pdf. The performance with repeated examples stays nearly the same and significantly lags behind many-shot performance with distinct examples. On this task, the benefit of many-shot ICL mainly stems from adding new information, as opposed to increasing context length. 
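The repetition ablation described above can be sketched as follows; the prompt template and example format here are assumptions for illustration, not the exact prompts used in the experiments:

```python
# Sketch of the repetition ablation: build a many-shot prompt either from
# N distinct (source, target) pairs, or by tiling a small pool of examples
# to the same total number of shots and shuffling their order.
import random

def build_prompt(examples, total_shots, seed=0):
    pool = list(examples)
    reps = -(-total_shots // len(pool))       # ceil division
    shots = (pool * reps)[:total_shots]       # tile, then truncate
    random.Random(seed).shuffle(shots)        # shuffled ordering
    return "\n\n".join(f"Source: {s}\nTarget: {t}" for s, t in shots)

distinct = [(f"src{i}", f"tgt{i}") for i in range(1000)]
small_pool = distinct[:25]

p_distinct = build_prompt(distinct, 1000)     # 1000 distinct examples
p_repeated = build_prompt(small_pool, 1000)   # 25 examples repeated 40x
# Both prompts contain the same number of shots (same context length),
# but differ in the amount of distinct information.
print(p_distinct.count("Source:"), p_repeated.count("Source:"))  # 1000 1000
```

Comparing model performance on these two prompt types isolates the effect of added information from the effect of a longer context.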
> **Planning logistics: Is there a significant increase for many-shot?** To be certain, we re-evaluated many-shot on Logistics with the latest public version of Gemini 1.5 Pro, and added the result in **Figure 5** in the rebuttal pdf. Many-shot accuracy improves uniformly for this version – interestingly, few-shot performance already starts quite high, around 40%, improves to 62.8% with 400 shots, and plateaus at 63.8% with 800 shots. > **Supervised fine tuning: base model, and how many epochs? And why were these choices made? Estimates on the computation resources?** We performed “full” fine-tuning (no adapters) on the same base model that was used for many-shot ICL (Gemini 1.5 Pro). We performed 5 epochs of training, picking the intermediate checkpoint with the lowest validation loss (often from the first few epochs). These choices were made to ensure that we obtain quite strong results for SFT. We’ll add these details in Sec 4.3. Since Gemini 1.5 Pro is closed-source, we used the [Vertex API for SFT](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini-use-supervised-tuning) and cannot provide estimates of computation resources. > **Supervised finetuning comparison only on machine translation task .. limited conclusion** Correct, our results only demonstrate that many-shot ICL **can be** competitive with fine-tuning on **some** tasks. The high dollar cost of “fine-tuning” limited our experimentation to 4 runs (2 tasks x 2 data sizes). We also performed a comparison to SFT on parity prediction, where we find that many-shot ICL requires 20x fewer samples than fine-tuning GPT-2 to reach the same performance on this synthetic task (Appendix A.13). A more thorough comparison would be interesting for future work. > **what error bars indicate .. say it is stdev** Yes, error bars indicate the stdev of test performance across multiple random seeds, where K-shot prompts are sampled randomly for each seed (Lines 68-70). We’ll update the text to clarify this. 
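The error-bar computation just described can be sketched as follows (a toy illustration; `evaluate` is a hypothetical stand-in for running the model on a K-shot prompt sampled with that seed, returning a synthetic score here):

```python
import random
import statistics

def evaluate(seed, train_pool, k):
    """Stand-in for one evaluation run: sample a K-shot prompt with this
    seed, then score the model on the test set (synthetic score here)."""
    rng = random.Random(seed)
    shots = rng.sample(train_pool, k)   # K-shot prompt differs per seed
    return 0.7 + 0.05 * rng.random()    # placeholder for test accuracy

train_pool = list(range(500))
scores = [evaluate(seed, train_pool, k=32) for seed in range(5)]
mean = statistics.mean(scores)    # point plotted in the figure
stdev = statistics.stdev(scores)  # error bar: sample stdev across seeds
```

The plotted point is the mean across seeds and the error bar is the sample standard deviation, so the bar reflects sensitivity to which K examples were drawn.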
------- *We hope most of the reviewer's concerns have been addressed and if so, they would reconsider their assessment.* --- Rebuttal 2: Title: [Continued] Clarifications about Reinforced and Unsupervised ICL, and remaining questions Comment: > **Details about Reinforced ICL: (1) how many rationales , (2) selected per problem, and (3) what happens when no generated rationales, and is it detrimental?** - (1) The number of rationales generated was based on the problems available for ICL. MATH has 7.5K problems and we generated one rationale per problem, resulting in correct rationales for about 3.5K problems (but we see plateauing at about 500 shots). For tasks with a much smaller number of inputs (250 for GPQA, 150 for BBH), we generated 10 rationales per problem at a temperature of 1, to maximize the chance of having at least one correct rationale per problem. - (2) For each problem, we randomly pick one of the correct rationales for the K-shot prompts. - (3) Inputs without any correct rationales are thrown away entirely; a limitation of Reinforced ICL is that it cannot use such inputs. Moreover, as these inputs correspond to the harder problems the model cannot currently solve, we might be throwing away valuable information. As shown in Figure A.17, doing another iteration of Reinforced ICL using only these “harder” inputs can further improve many-shot performance. > **l.194 "to reduce impact of false positives" --- why is this important?** Typically, we can only evaluate final-answer correctness and cannot verify the CoT rationale. As such, BBH tasks with binary choices can result in rationales that obtain the right answer “by chance” with wrong reasoning (“false positives”) – our manual inspection indicated that model-generated rationales on such tasks were of poor quality. This is an inherent limitation of methods that rely on model-generated rationales, including the Reinforced Self-Training method, which inspired Reinforced ICL. 
**We discussed this limitation in L134-137**, and will clarify it in the revision. > **fig 6: reinforced ICL at K=250 missing** We generated rationales with correct answers for only 129 problems (this was mentioned in L180-181; we will further clarify). > **Unable to find prompts for reinforced or gt ICL for unsupervised ICL tasks** Figure A.9 shows the zero-shot GT prompt for GPQA and Figure A.11 shows the 4-shot GT prompt for MATH and GSM8K. Reinforced ICL prompts contain the same problems as GT prompts but use model-generated solutions – we’ll add an example to show the solution differences. > **Sec 4.4: why GT ICL may be lower NLL than Reinforced ICL .. rejects some high-prob model generated responses .. raising NLL from filtering step?** Certainly, the filtering step in Reinforced ICL can result in solutions to which the base model assigns high NLL. Another hypothesis is that model-generated solutions can look very different from human-written ones, resulting in higher NLL on GT solutions. Notably, Reinforced ICL has lower NLL than GT ICL on model-generated solutions on test problems. > **sec 3.1 Fig 5: what K=4 corresponds to in ICL GT. Are these 4 at the end of the prompt counted in the K, and is this same for all three prompt methods?** The 4-shot GT ICL prompt uses only 4 examples, the same examples used by the 4-shot formatting preamble of Unsupervised ICL (Figure A.11). However, the Unsupervised ICL prompt also uses an instruction preamble (“You will be provided Problems similar to the ones below:”), leading to differences in results from the 4-shot GT ICL prompt. Reinforced ICL uses the same problems but model-generated solutions instead of GT solutions, resulting in different performance from GT ICL, even in the 4-shot setting. > **Machine translation could use unsupervised ICL with the English phrases .. it wouldn't help at all for MT .. and evaluating it just to verify this. 
Also, unsupervised ICL on xsum?** As expected, Unsupervised ICL on low-resource MT doesn’t help, as shown in Figure A.16. This was mentioned in L146-147 about the limitations of unsupervised ICL. We also ran Unsupervised ICL on XSum but only observed a maximum ROUGE-L score of about 23.95 (with a 250-shot unsupervised prompt), which is slightly higher than just using the 1-shot prompt with an article and summary. Looking at the generated summaries, they were more verbose than the abstractive target summaries. This might be mitigated to some extent with a better zero-shot instruction, but that would be unfair as no such instruction was used for many-shot ICL with ground-truth examples. > **sec 4.1 Fig 8: the shape of these curves as a function of K is what one would expect for a model preinitialized with the original task labels, as a function of training step** Agreed, this is a nice connection and we’ll mention it in the text. > **With more shots .. more likely to include noisy or incorrect data** If many-shot examples in the prompt contain biases (e.g., stereotypes, unfair representations), the model can possibly amplify these biases. Moreover, many-shot ICL can be used to override safety-training biases, manipulating LLMs to behave in unintended or harmful ways. We’ll include this in the discussion.
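The Reinforced ICL recipe discussed in this thread — sample rationales per problem, keep those whose extracted final answer matches the ground truth, drop problems with no correct rationale, then pick one correct rationale per problem for the prompt — can be sketched as below. `sample_rationale` and `final_answer` are hypothetical stand-ins for model generation and answer extraction:

```python
import random

def reinforced_icl_pool(problems, sample_rationale, final_answer, n_samples):
    """For each (question, answer) pair, sample n_samples rationales and
    keep only those whose extracted final answer matches; problems with no
    correct rationale are dropped entirely (the limitation noted above)."""
    pool = {}
    for question, answer in problems:
        correct = [r for r in (sample_rationale(question) for _ in range(n_samples))
                   if final_answer(r) == answer]
        if correct:
            pool[question] = correct
    return pool

# Toy stand-ins: the "model" always answers 2*q, so the third problem is dropped.
problems = [(1, 2), (2, 4), (3, 7)]
sample_rationale = lambda q: f"Doubling gives {2 * q}."
final_answer = lambda r: int(r.rstrip(".").split()[-1])
pool = reinforced_icl_pool(problems, sample_rationale, final_answer, n_samples=10)
rng = random.Random(0)
shot = rng.choice(pool[1])  # one correct rationale per problem goes into the prompt
```

As with the false-positive discussion above, this filter only checks the final answer, so a rationale with wrong reasoning but a lucky answer would survive it.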
Summary: In this work, the Authors investigate the performance of large language models on in-context learning (ICL) tasks when provided with a large number - in the order of hundreds or thousands - of examples (many-shot ICL regime), enabled by recent increases in context window sizes. The Authors demonstrate significant improvements across various tasks when moving from few-shots to many-shots. They also introduce two methods, called Reinforced ICL and Unsupervised ICL, to mitigate the need for human-generated examples. The paper analyzes how many-shot ICL affects model behavior, including overcoming pre-training biases and learning high-dimensional functions. Strengths: The paper examines many-shot ICL across a wide range of tasks including translation, summarization, planning, and mathematical problem-solving. This broad scope illustrates very well the benefits of many-shots. The Authors introduce novel methods (Reinforced ICL and Unsupervised ICL) to address limitations of many-shot ICL, namely, the need for large amounts of human-generated examples, and show their superiority. In-depth analyses are conducted of how many-shot ICL affects model behavior. For example, the Authors demonstrate that many-shot ICL can overcome pre-training biases (as shown, for example, in Figure 10, where performance on flipped and abstract labels approaches that of default labels with increasing shots). The paper presents sound evidence for the benefits of many-shot ICL. For instance, Figure 1 shows a consistent performance improvement across various tasks. By comparing many-shot ICL to fine-tuning, it is shown that comparable performance is reached in some cases, suggesting that many-shot ICL could be a viable alternative to fine-tuning. Weaknesses: As the Authors recognize, the study is limited to a single model (Gemini 1.5 Pro). 
While the Authors do include some results with GPT-4-Turbo and Claude-3-Opus, a more comprehensive comparison across different models would strengthen the generalizability of the findings. While the paper provides extensive empirical results, it lacks a theoretical framework to explain why many-shot ICL works so well. A theoretical analysis could provide insights into the mechanisms behind the observed improvements. Another point in which a theoretical analysis would be highly desirable regards the following case. The Authors note that performance can in some cases degrade with more examples (e.g., in the case of MATH), but don't fully explain this phenomenon. They indeed state: "Our analysis found that negative log-likelihood trends are insufficient to explain this degradation, and future work should investigate new directions to shed light on the matter and improving many-shot ICL capabilities." However, this point is correctly raised while discussing limitations. Finally, a more explicit discussion of the potential drawbacks and risks associated with many-shot ICL would be helpful to raise awareness. Technical Quality: 4 Clarity: 4 Questions for Authors: I would like to ask the Authors what they think about the possible application of many-shots ICL in alignment problems. The ability to include much larger contexts could - if I am not mistaken - be leveraged for improving alignment to specific ethical standards or legal frameworks (this seems plausible if one takes into account your findings about overcoming pre-training biases). By providing a large number of examples that demonstrate the desired ethical reasoning or decision-making process, it might be possible to steer the model's behavior more effectively than with few-shot prompting or fine-tuning. Do you think this is a potential application of many-shots ICL or can you already see limitations? 
For example, it occurs to me that, since sometimes the performance can degrade with too many examples, and the ordering of examples can affect results, applying many-shots ICL to ethical alignment would require understanding how to structure and present examples effectively within the context window. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: While the paper addresses the technical limitations correctly, the potential downsides or risks associated with many-shot ICL are not discussed. I think that a brief remark on potential misuses would be appropriate. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed comments and questions. Our detailed response follows: > **While authors include some results with GPT-4-Turbo and Claude-3-Opus, a more comprehensive comparison across models would strengthen the generalizability** While we do not fully address this limitation, in the rebuttal pdf we’ve added results for Gemini 1.5 Flash in **Figure 1**, a smaller LLM than Gemini 1.5 Pro. We find that even 1.5 Flash can match or surpass Claude and GPT with many-shot ICL, despite having worse few-shot performance. This demonstrates that even small LLMs with long contexts might be capable of many-shot ICL. > **While the paper provides extensive empirical results, it lacks a theoretical framework to explain why many-shot ICL works so well** Related to this, an ICML'24 paper [1] argues that ICL operates under two possible modes, task recognition and task learning – activating the task learning mode requires a large enough number of shots (though they only empirically studied up to 128 shots), which is highly task dependent. It is possible that the success of many-shot ICL is partially explained by activating this task learning mode. We’ll include this in the discussion. > **Theoretical Analysis for why performance degrades** A very recent submission under review [2] argues that the performance drop in many-shot ICL might be due to more demonstrations diverting the model's attention from the query, hindering its understanding of the query. Another reason might be that really long many-shot prompts might be highly out-of-distribution (OOD), as LLMs might not have seen such prompts during pre-training or post-training (current mixtures might be optimized for needle-in-a-haystack tests). Furthermore, LLM pre-training has a maximum sequence length, followed by continued pre-training for context lengthening; for example, LLaMa 3.1 uses 8k and 128k while Apple FM uses 8k and 32k. 
We’ll update the discussion to include these hypotheses. > **An explicit discussion of drawbacks and risks with many-shot ICL** We’ll update the discussion to include the following drawbacks and risks. - Computational Cost: Many-shot ICL can be computationally expensive, especially with a large number of examples. This can be mitigated with context caching and KV caching. - Bias Amplification: If the many-shot examples in the prompt contain biases (e.g., stereotypes, unfair representations), the model can amplify these biases, which raises ethical concerns. - Lack of Transparency: The inner workings of ICL are not well understood. This makes it difficult to pinpoint exactly why a model generates a specific output with many-shot ICL, which is problematic for assuring the safety and alignment of LLMs [3]. - Jailbreaking: Many-shot ICL can be used to override safety-training biases, manipulating LLMs to behave in unintended or harmful ways. > **Is improving ethical alignment a potential application of many-shot?** We agree that inference-time steering of LLMs with many-shot ICL has more flexibility compared to fine-tuning (e.g., different alignment criteria for different use cases), could also allow for faster adaptation to changing criteria, and enables fast "patching" for newly discovered legal or ethical issues. As such, many-shot ICL for ethical alignment seems to be a very promising application. > **Since sometimes the performance can degrade with too many examples, and the ordering of examples can affect results, applying many-shots ICL to ethical alignment could be challenging.** In our work, sensitivity to example ordering seems to be highly task dependent – we do not see much impact of ordering on tasks like low-resource MT or summarization, but a larger impact on other tasks. 
Nevertheless, a simple approach might be to tune the ordering itself based on a held-out validation set, and off-the-shelf libraries such as [DSPy](https://github.com/stanfordnlp/dspy) can be easily used for such purposes. Regarding the optimal number of examples, this is likely a more critical parameter and would also matter for ethical alignment. As a rule of thumb, we swept across the number of shots on a logarithmic scale. Interestingly, our results on overriding pretraining biases show that performance only plateaued rather than degraded when adding too many shots. ----- [1] Dual Operating Modes of In-Context Learning. Lin and Lee, 2024. [2] Focused Large Language Models are Stable Many-Shot Learners. Anonymous, 2024. [3] Foundational Challenges in Assuring Alignment and Safety of LLMs. Anwar et al, 2024.
Summary: Owing to the significant increases in context window lengths, the paper analyzes the efficacy of the Gemini 1.5 Pro LLM in the many-shot in-context learning (ICL) setting, where hundreds to thousands of exemplars can be provided to the model at inference time. ICL has generally been restricted to the few-shot learning setting, where only a small number of demonstrations are provided to the LLM. As this expansion to the many-shot setting can pose issues relating to large-scale data collection, the authors propose two simple approaches: (1) Reinforced ICL, which switches human-written rationales for demonstrations with chain-of-thought model-generated rationales, and (2) Unsupervised ICL, where rationales are not provided in the ICL task. The authors conduct extensive experiments across a number of problem domains, ranging from summarization and machine translation to logistics planning, question answering, and algorithmic reasoning, among many others, showcasing the performance benefits obtained by using a larger number of exemplars in ICL. Strengths: - I believe the paper is a significant contribution to the field of ICL, as it analyzes the efficacy of LLMs in the not yet studied many-shot regime. The authors conduct a number of extensive experiments spanning a diverse set of tasks and benchmarks, showcasing the benefits of many-shot ICL. The biggest takeaway from the paper would be that many-shot ICL could be a suitable alternative to supervised fine-tuning, which would tune the entire set of model weights, albeit at the cost of increased inference time (which can be reduced via KV caching). - The paper is very well-written and I appreciate the large scale of experiments conducted on Google Gemini 1.5 Pro. - The two annotation-free many-shot ICL strategies (Reinforced ICL and Unsupervised ICL) proposed are simple, but clearly demonstrate improved performance on a wide variety of tasks. 
- Findings relating to overcoming pre-training biases, learning higher-order functions, and many-shot ICL vs supervised fine-tuning are also important results that strengthen the contributions of this work. Weaknesses: - As this is an empirical analysis paper which spans many different tasks and benchmarks, can the authors confirm that for all the tasks, the full test splits were used for evaluation, in line with community standards and past work? If there are any exceptions, these should be listed. For instance, measuring summarization performance on XSum/XLSum for only 150 test articles seems far less in size than the actual test set for this dataset (~11k articles). Were the articles randomly sampled? - While the ablations are carried out with respect to the number of ICL exemplars provided to the LLM, I am somewhat unsure of what role the model size plays here. As shown in the cited Wei et al paper (https://arxiv.org/pdf/2303.03846), larger LMs might do in-context learning differently and can learn input-output mappings better than smaller LMs, which, for example, might not be able to reject pretraining biases. Thus, it is not clear if some of the performance improvements are just a direct result of using Gemini 1.5 Pro (which is a model on the larger end of the size spectrum). Conversely, would a smaller LLM with a larger context also benefit from many-shot ICL (possibly not with as many exemplars as Gemini 1.5 Pro, however)? Do the authors have any thoughts on this and can they draw a distinction between gains attained due to an increase in the size of the model versus the many-shot ICL setting? - Do the authors have any intuition for why at times performance reduces as more exemplars are added, somewhat akin to overfitting? I noted that the authors discussed this as an open question in the paper but I think it would benefit readers if some more insight could be provided. Technical Quality: 3 Clarity: 4 Questions for Authors: Please see the weaknesses listed above. 
Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Yes, the limitations have been addressed. More details can be provided in a revision. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed comments, which we try to address below. We have added a many-shot ICL comparison of Gemini 1.5 Flash with frontier LLMs to understand the role of model size in the **[rebuttal pdf](https://openreview.net/attachment?id=4BVPccQZEN&name=pdf)**. > **unsure of what role the model size plays here .. would a smaller LLM with a larger context also benefit from many-shot ICL? Gains from increase in the model size versus many-shot ICL** To understand the role of model size, we’ve evaluated the many-shot performance of Gemini 1.5 Flash, a smaller long-context LLM than Gemini 1.5 Pro. We used low-resource MT to compare against LLMs at the larger end of the size spectrum, namely 1.5 Pro, Claude-3-Opus and GPT-4. Our results suggest that even smaller LLMs can benefit from many-shot ICL and, with enough shots, outperform LLMs with stronger few-shot performance. We report these results in **Figure 1** in the rebuttal pdf. On the English → Bemba task, we find that 1.5 Flash matches Claude-3-Opus and outperforms GPT-4 with 997 shots, despite having much worse few-shot performance than Claude and GPT. On English → Tamil MT, 1.5 Flash performs comparably to 1.5 Pro and Claude in terms of few-shot performance. However, 1.5 Flash outperforms Claude-3 in terms of many-shot performance, while lagging behind 1.5 Pro. > **Were the test splits in line with community standards? If there are any exceptions, these should be listed .. XSum/XLSum for only 150 test articles seems small. Were the articles randomly sampled?** Yes, for almost all of the tasks, we used the standard test sets for evaluation (e.g., MATH500 and the GPQA Diamond split). The only exceptions were summarization and low-resource MT, for which we randomly subsampled 150 examples from the entire test set to reduce the cost of evaluating many-shot ICL across multiple seeds. We'll update the text to specify this clearly. 
Note that for XSum, we actually used GEM-XSum [1], which is a cleaner version of XSum with 1.2K test articles. > **Do the authors have any intuition for why at times performance reduces as more exemplars are added?** A very recent submission under review [2] argues that the performance drop in many-shot ICL might be due to more demonstrations diverting the model's attention from the query, hindering its understanding of the query. Another reason might be that *really long* many-shot prompts might be highly out-of-distribution (OOD), as LLMs might not have seen such prompts during pre-training or post-training (current mixtures might be optimized for needle-in-a-haystack tests). Furthermore, LLM pre-training has a maximum sequence length, followed by continued pre-training for context lengthening; for example, LLaMa 3.1 uses 8k and 128k while Apple FM uses 8k and 32k. We’ll update the discussion to include these hypotheses. [1] The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics. Gehrmann et al., 2021. [2] Focused Large Language Models are Stable Many-Shot Learners. Anonymous, 2024. ------- *We hope most of the reviewer's concerns have been addressed and if so, they would reconsider their assessment* --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their rebuttal and additional experiments. I believe these augment the work further and should be included in the revision. Finally, based on the merits I had listed in my original review, I believe the paper's contributions still currently constitute a "technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility". I will hence keep my score.
Summary: This work conducts a comprehensive study on in-context learning (ICL). The experiments range from few-shot to many-shot scenarios, with up to 2048 in-context examples. It was observed that as the number of examples increases, the performance generally improves, even matching the performance of fine-tuned models. One limitation of many-shot in-context learning noted by the authors is the difficulty in obtaining high-quality in-context example pairs. To address this issue, the authors proposed Reinforced and Unsupervised ICL, which achieved results comparable to those using ground truth examples. Additionally, the authors explored many-shot ICL in the context of pre-training bias (distribution shift settings) and high-dimensional numerical settings, providing explanations for the results. Strengths: The experiments are comprehensive, covering a wide range of tasks and comparisons. The paper is well-written with a clear structure. The problem itself is interesting and has broad applications. Weaknesses: The experiments are limited to a single model, Gemini 1.5 Pro, as mentioned by the authors. This may narrow the scope of the results, as different models have varying pre-training data and biases. The performance of the LLM under ICL shows significant variation (as seen in figure 6). It would be beneficial to include more seeds or utilize better statistical metrics to represent model performance, beyond just average accuracy. Additionally, as the number of shots increases, so does the number of tokens, making the computation budget (RAM of the machine) a potential bottleneck for further exploration and application. Given that many in-context examples have varying lengths, future work could focus on how to select better in-context examples that capture the task's nature while also being concise in terms of token length. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. It's surprising that Reinforced ICL and Unsupervised ICL outperform the ground truth. 
Do you have any explanations for this? 2. The performance shows significant variations when different random seeds are used to select in-context examples. Could you explain why this happens? 3. Flipped labels and abstract labels seem to achieve the worst performance when K=8 or K=16. Is this a universal result across different model sizes, as related to [1][2]? 4. I am curious about what model is used for figure 11 in sec 4.3. Did you supervise fine-tune the same model and report its results on the dataset? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Please see weakness and questions for limitations. There is no potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and questions, which we address below. We have added new results on 1.5 Flash and the inference time of many-shot ICL in the **[rebuttal pdf](https://openreview.net/attachment?id=4BVPccQZEN&name=pdf)**. > **Experiments are mostly limited to 1.5 Pro .. different models have varying pre-training data and biases.** Indeed, our work serves as an existence proof for the huge potential of many-shot ICL. That said, we provided preliminary results for GPT-4-Turbo and Claude-3-Opus in Figure A.2, indicating that they can also benefit from many-shot ICL to varying degrees. We added **Figure 1** in the rebuttal pdf to report many-shot performance for Gemini 1.5 Flash, a smaller LLM than 1.5 Pro, and show that it can match or surpass Claude-3-Opus and GPT-4-Turbo with enough shots, despite having worse few-shot performance. We’ll move this to the main paper. Recently, follow-ups [1, 2, 3] have exhibited many-shot ICL with open-weights and closed-source models on different tasks. Our work contributes to this growing body of evidence, and adds several analyses of the phenomenon. > **ICL shows significant variation on GPQA (Fig 6) .. more seeds or better metrics** Our paper reports the standard deviation across seeds on most of the tasks. We’d emphasize that GPQA is an exception rather than the norm, as even noted in the Anthropic report, due to its extreme difficulty and small size (198 questions only). To show this variability, we directly report the individual performance on 5 seeds on both GPQA and MATH. > **As the number of shots increases, so does tokens, making computation budget a bottleneck** Indeed, many-shot ICL does increase inference computation time, but can allow for quick prototyping and experimentation using just an inference API. That said, it can be sped up with KV caching and context caching [7], which is the default for long-context servers. 
To empirically measure this inference time, we’ve added **Figure 2** in the rebuttal pdf showing per-output generation runtime for many-shot ICL, averaged across the test set and multiple seeds, on summarization (500-shot) and sequential parity prediction (8192-shot). With caching enabled, runtime increases linearly with a large number of shots, as opposed to quadratically for self-attention: doubling the number of shots nearly doubles the runtime. However, for a small number of shots (less than 32k tokens), we see that runtime is nearly constant. > **It's surprising that Reinforced and Unsupervised ICL outperform ground truth. Do you have any explanations?** - Reinforced ICL: The Reinforced Self-Training work [4] showed that fine-tuning using model-generated rationales can be more effective than human-generated ground-truth outputs. We show that a similar finding holds true for many-shot ICL. Since model-generated outputs utilize only the skills and knowledge the LLM already possesses, such outputs might be easier to learn from. - Unsupervised ICL: This is harder to explain as Unsupervised ICL does not always work well: it outperforms ground truth for MATH, but is substantially worse for low-resource MT (Appendix A.10). Our hypothesis is that it works well when the LLM already has all the required knowledge to solve a task. As such, ground-truth outputs might bias the model, and the model prefers to utilize its underlying knowledge by relying on inputs alone. > **The performance shows significant variations when different random seeds are used to select in-context examples. Could you explain why this happens?** The impact of the random seed for ICL example selection seems to be highly task dependent – on low-resource MT and summarization, it has a minimal effect, as can be noticed from the small error bars that show the standard deviation of mean performance across seeds. However, on other tasks, such as MATH and GPQA, it has a higher impact. 
Prior work has found the following factors impact few-shot performance when selecting examples: - If chosen examples are semantically similar to test examples, it leads to improved performance [5] - Increased diversity in chosen examples leads to improved performance [6] When we select different example subsets by setting different random seeds, we perturb these factors, leading to differences in downstream performance. Our general trends hold even after taking this variation in performance into consideration (shown by standard deviation bars). > **Flipped and abstract labels .. achieve worst performance when K=8 / K=16. Is this universal across different model sizes?** We do not know if this is a universal result across different model sizes. That said, we also evaluated Gemini 1.5 Flash on flipped labels and observed similar accuracy trends: 56% for K=4, 34% at K=8, 40% for K=16, 76% for K=32, and 86.5% for K=64. > **Did you supervise fine-tune the same model and report on the dataset?** Yes, we performed “full” fine-tuning on the same model (Gemini 1.5 Pro) on the same examples that were used for many-shot ICL. We performed 5 epochs of training, picking the intermediate checkpoint with the lowest validation loss to ensure strong SFT results. > **Future work could focus on how to select better ICL examples that capture the task's nature while being concise** Agreed, we'll add this to future work. ---- [1] Many-Shot In-Context Learning in Multimodal Foundation Models. Jiang et al, 2024. [2] Many-Shot In-Context Learning for Molecular Inverse Design. Moayedpour et al, 2024. [3] In-Context Learning with Long-Context Models: An In-Depth Exploration. Bertsch et al, 2024. [4] Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models. Singh et al, 2024. [5] What Makes Good In-Context Examples for GPT. Liu et al, 2021. [6] Selective Annotation Makes LMs Better Few-Shot Learners. Su et al, 2022. [7] Context caching. 
ai.google.dev/gemini-api/docs/caching ---- *We hope we have addressed most of the reviewer's concerns and, if so, that they would reconsider their assessment.*
Rebuttal 1: Rebuttal: We thank all the reviewers R1 (XEKv), R2 (mj1R), R3 (kkH9), and R4 (2oDD) for their valuable feedback! All reviewers are in favor of acceptance and found the paper to be **comprehensive and very well-written**, with a **significant contribution to ICL, broad scope and applications, novel annotation-free ICL methods, and interesting analysis**. The paper also generated substantial discussion of our observations as well as questions about future work, which we greatly appreciate. Here, we address the common concerns raised by the reviewers: ### **Experiments mostly limited to Gemini 1.5 Pro** Our work serves as an existence proof for the huge potential of many-shot ICL across a variety of tasks. That said, we provided preliminary results for **GPT-4-Turbo and Claude-3-Opus** in Figure A.2, indicating that they can also benefit from many-shot ICL to varying degrees. Several follow-up works have exhibited many-shot ICL with other open-weight and closed-source models on different tasks. Our work contributes to this growing body of evidence and adds several analyses of the phenomenon. We also added **Figure 1** in the rebuttal pdf reporting many-shot ICL performance for **Gemini 1.5 Flash**, a smaller LLM than 1.5 Pro, showing that it can match or surpass Claude-3-Opus and GPT-4-Turbo given enough shots, despite having worse few-shot performance. We’ll move this to the main paper. ### **Runtime and inference compute increase for many-shot** While many-shot ICL increases inference computation time, it allows for quick prototyping and experimentation using just an inference API. Moreover, it can be sped up with KV caching and context caching, which is the default for long-context serving. Finally, being able to spend additional inference-time compute to obtain better performance is a useful capability to have.
We added **Figure 2** in the rebuttal pdf showing per-output generation runtime, averaged across the test set and multiple seeds, for many-shot ICL on summarization (500-shot) and sequential parity prediction (8192-shot). With KV caching enabled (the default for long-context serving), runtime **increases linearly with a large number of shots, as opposed to quadratically** for self-attention: doubling the number of shots nearly doubles the runtime. However, for a small number of shots, runtime is nearly constant. We'll add this result to the paper. ### **Additional results in rebuttal pdf for R4** Other figures address the questions and concerns raised by R4 about the impact of context length vs. new information in many-shot ICL (**Figure 4**), hallucination on XSum (**Figure 3**), and re-evaluation on the Planning Logistics task with the latest Gemini API (**Figure 5**). ### **Details about SFT comparisons, empirical evaluations, and Reinforced ICL** We'll update the revision to include these details, which we discuss in the individual rebuttals below. Pdf: /pdf/14fe7bda56f90e487383cbe8d472f855a3dcb283.pdf
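The cached-vs-uncached scaling claim above can be sanity-checked with a toy cost model (a rough sketch under our own assumptions; the op counts and token numbers are illustrative, not measurements of the Gemini serving stack):

```python
def decode_cost(num_prompt_tokens, num_output_tokens, cached=True):
    """Approximate attention-op count for generating one output.

    With the prompt KV cached, only the per-token decode cost remains,
    which is linear in the prompt length; without caching, the quadratic
    prefill over the prompt dominates for long many-shot prompts.
    """
    cost = 0
    if not cached:
        cost += num_prompt_tokens ** 2  # quadratic self-attention prefill
    for t in range(num_output_tokens):
        cost += num_prompt_tokens + t   # each new token attends to all prior tokens
    return cost

# Doubling the shots (prompt length) with caching roughly doubles the cost...
linear_ratio = decode_cost(200_000, 64) / decode_cost(100_000, 64)
# ...while without caching it roughly quadruples.
quad_ratio = decode_cost(200_000, 64, cached=False) / decode_cost(100_000, 64, cached=False)
print(round(linear_ratio, 2), round(quad_ratio, 2))  # → 2.0 4.0
```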
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Neural expressiveness for beyond importance model compression
Reject
Summary: In this paper, the author proposes "Expressiveness," a metric that measures the dissimilarity of feature maps produced by different filters. Subsequently, the author introduces NEXP, a technique to prune filters based on their expressiveness. The proposed method is tested on tasks such as image classification and object detection. Strengths: 1. The overall structure of the paper is clean and easy to follow. 2. Although I am not very familiar with the structured pruning literature, the application of the concept of representation power of neural networks in this field appears novel. 3. The experiments are comprehensive, including pruning at initialization (PaI), pruning after training, and other tasks like object detection. Weaknesses: 1. The notation is very confusing. For example, in the section 'Generalization of concepts at a structural level,' $\ell$ is the index of a certain layer, but the upper-case K is the total number of layers, and the lower-case k is the index of a filter/channel in a certain layer. 2. The concept of expressiveness is not new in the context of pruning [1] and neural architecture search [2,3,4]. 3. The performance improvement by NEXP is not prominent. For example, in Tables 1 and 2, NEXP often shows a higher compression ratio but lower accuracy. This makes it unclear if NEXP offers a significant advantage over other methods. *** **Minor Mistakes:** Line 163: "where k the is" should be corrected. [1] Tanaka, Hidenori, et al. "Pruning neural networks without any data by iteratively conserving synaptic flow." Advances in Neural Information Processing Systems 33 (2020): 6377-6389. [2] Lin, Ming, et al. "Zen-NAS: A zero-shot NAS for high-performance image recognition." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021. [3] Wang, Haoxiang, et al. "Global convergence of MAML and theory-inspired neural architecture search for few-shot learning." 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. [4] Chen, Wuyang, Xinyu Gong, and Zhangyang Wang. "Neural architecture search on ImageNet in four GPU hours: A theoretically inspired perspective." arXiv preprint arXiv:2102.11535 (2021). Technical Quality: 2 Clarity: 2 Questions for Authors: In section D.1, how is NEXP applied to PaI methods? If I understand correctly, SNIP, GraSP, and SynFlow are all weight-pruning methods, whereas NEXP is a structured/filter-pruning method. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors have discussed limitations in the manuscript. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1.** We have adjusted the notation formatting in Section 3, and especially in "Generalization of concepts at a structural level", to improve clarity. Below is a detailed list of all changes made in the paper: W1.a. Changes focused on "Generalization of concepts at a structural level": - Lines 160-161: The notation $K$ has been replaced by an explicit set notation for the convolutional layers. **Updated Sentence:** For a CNN model with a set $\textcolor{blue}{C = \\{C^{1}, \dots, C^{l}, \dots, C^{|C|}\\}}$ of $\textcolor{blue}{|C|}$ convolutional layers, where $C^{l}$ is the $l$-th convolutional layer. - Lines 162-163: Based on the fact that the number of output activation maps equals the number of filters applied to a given input, an intrinsic property of the convolutional operation (i.e., $ |F^{l}| = |C^{l}| $), we extend the sentence in the following way: **Updated Sentence:** We denote filters (weight maps) and feature maps (activation maps) as $F_{k}^{l}$ and $C_{k}^{l}$, respectively, where $k$ $\textcolor{blue}{\text{is the index within a layer}}$ and $\textcolor{blue}{|F^{l}| = |C^{l}|}$. *(Thank you for your attention to detail regarding minor mistake 1.)* - Eqs. 4 and 5: $K^{(1)}$ and $K^{(l)}$, which represent the number of filters in layer 1 and a given layer $l$, respectively, have been replaced by the cardinality notation, i.e., $|C^{(1)}|$ and $|C^{(l)}|$. W1.b. To maintain a more coherent notation structure throughout the entire section, we apply the following changes to the remainder of Section 3: - Lines 172-175: The sentences have been polished to incorporate the updated notation for convolutional layers as introduced above. Specifically, Line 173: *"a CNN model with $K$ convolutional layers"* **has been updated to** *"a CNN model with $\textcolor{blue}{|C|}$ convolutional layers"*.
Line 174: $F^{|K|}$ **has been updated to** $F^{\textcolor{blue}{|C|}}$ and $F^{(l, K^{(l)})}$ **has been updated to** $F^{(l,\textcolor{blue}{|C^{(l)}|})}_{t_i}$. (Note: Subscripts are not displayed correctly due to the OpenReview LaTeX formatting environment.) Line 175: *"with $K^{(l)}$ being the amount of weight maps (filters) in a given layer $l$."* **has been updated to** *"with $\textcolor{blue}{|C^{(l)}|}$ being the $\textcolor{blue}{number}$ of weight maps (filters) in a given layer $l$."* - In the remainder of Section 3, we adopt the more consistent and clearer notation structure using $C$, as refined in W1.a and W1.b. Specifically, we replace the $K$ notations with the more appropriate $C$-based notations in lines 184, 189-190, 210-211, and in Eqs. 6 and 7. W1.c. See the W2 response in our rebuttal to reviewer 4Yxj. **W2.** Indeed, the concept of expressiveness, or, equivalently, the discriminative capacity of neuron activations, has been discussed in other papers, especially in the domain of Neural Architecture Search, as a prominent means of evaluating the capacity of neural networks. However, this is the first work to establish a concrete categorization of weight-based versus activation-based criteria in model compression decisions. Specifically, to the best of our knowledge, this work is the first to provide: - (a) a distinct motivation for exploring activation-based compression approaches (Section 1), as the current state of the literature is predominantly focused on weight-based methods, with recent research interest increasingly shifting towards activations [1];
- (b) a detailed categorization and examination of the motivations and limitations of the current literature on weight-based and discriminative pruning methods (Section 2); - (c) an in-depth mathematical conceptualization of neuron-activation properties and the benefits of **"information flow"** for model compression, in a model-agnostic format that can be incorporated into any activation-based approach (Section 3), while demonstrating the complementary benefits of weight-based (importance) and activation-based (expressiveness) pruning methods and highlighting their partial orthogonality (Section 4.2). Overall, the proposed concept of **"Neural Expressiveness"** in this paper expands on the prevalent yet underexplored notion of expressiveness in efficient deep learning and aims to forge a distinct paradigm, transitioning from the conventional weight-centered importance assessment to an emphasis on activations. **W3.** Overall, this work suggests that no universal solution outperforms all other compression methods across all setups and contexts, as subjective quality metrics (e.g., accuracy) are often difficult to replicate and may vary significantly across setups (see lines 274-277) [2]. Specifically, the efficiency of model compression methods can be directly assessed by metrics like FLOPs and parameters, while predictive quality is influenced by factors such as post-pruning fine-tuning configurations, pruning-process design parameters (B.2), etc. [2]. For that reason, the experimental section emphasizes the proposed method's applicability across various settings: (a) "one-shot" or "iterative" pruning (4.1 and B.2), (b) PaT and PaI (4.1, 4.3, and D.1), and (c) hybrid solutions (4.2). This provides a foundation for further optimization in specific tasks. Overall, NEXP with iterative pruning shows notable performance improvements for object detection (lines 287-315), as does one-shot pruning for PaI (4.3), while consistently improving compression across all tables and maintaining performance, with notable improvements in some regimes, e.g., Table 1 (right): +0.66, +0.10. Also kindly refer to Q2 in our rebuttal to reviewer 9m28.
**Q1.** It is applied similarly to PaT (4.1), with the only difference being that the network $\mathcal{N}$ in Alg. 1 is untrained (see Section A). The network is pruned in a one-shot manner (lines 672-674) using NEXP for given compression ratios $r = \tau$ (lines 672-677) and then trained as outlined in lines 679-681. [1]. Liu, Ziming, et al. "KAN: Kolmogorov-Arnold networks." arXiv preprint arXiv:2404.19756 (2024). [2]. ref [3] in paper --- Rebuttal Comment 1.1: Title: Thank you for your rebuttal. Comment: Thank you for your detailed rebuttal. After reviewing the authors' response and the reviews from other reviewers, my primary concern W3 remains unchanged. The experimental evaluation presented in the manuscript demonstrates that while NEXP can be applied to many scenarios, it is often not the best-performing method when compared to the baselines, or, in some cases, the comparisons do not show which method is superior. Put simply, NEXP may be a good approach; however, the evidence provided does not convincingly show that it offers a significant improvement over existing baselines. Therefore, I do not find sufficient justification to revise my initial rating. As a result, I will be maintaining my original rating. Thank you again for your efforts. --- Reply to Comment 1.1.1: Title: Addressing Concerns Regarding Performance Improvements (part a) Comment: Thank you for taking the time and effort to address our initial rebuttal (especially at this time of year). Your insightful comments and suggestions are greatly appreciated and have enabled us to further clarify the motivations of our work and improve the presentation quality of the paper. To further address your concerns regarding the performance improvement by NEXP (W3), we would like to reiterate some of the key points from the paper, with emphasis on the experimental section, for clarity.
The experimental setup and structure of this work are motivated by our understanding that the pruning criteria for each compression use case may vary significantly, e.g., availability of computational resources, cost of deployment, specifications of the target hardware device, availability of data, target quality and efficiency of the pruned model, and many more. Below, we present some specific (1 and 2) and general (3, 4, and 5) examples to highlight the intricacies of defining the (combinatorial) pruning constraints during the compression process and to demonstrate the effectiveness of NEXP beyond the predictive performance (quality) of the pruned models. 1. DCP employs 400 training epochs for fine-tuning post-pruning to recover predictive performance, compared to our method, which uses only 100 epochs (as outlined in lines 276-277). This may result in a significant increase in operational costs on cloud-based hardware infrastructures. Specifically, given that the pruned models generated by NEXP and DCP have similar compression sizes and inference speeds, DCP requires $4\\times$ the cost of NEXP to generate the pruned model. 2. HRANK utilises 500 input samples to estimate the average rank, while Network Trimming [3] requires over ten thousand input images to estimate the sparsity of feature maps, compared to our random selection of 64 samples for estimating NEXP. This results in an $\\approx8\\times$ reduction in dependency on input data compared to HRANK and several orders of magnitude less than [3]. 3. NEXP requires only forward passes, unlike gradient-based pruning methods that also require backward passes, which are intrinsically linked to the availability and quality of data. In contrast, NEXP is independent of both the quality and quantity of samples, as demonstrated in Appendix A and discussed in Section 3.3 (*"Dependency to Input Data"*) of the paper. 4.
NEXP maintains consistent performance without a significant drop in computational efficiency as models and tasks grow in complexity. In contrast, weight-based pruning methods incur increased computational overhead as model sizes increase, as discussed in W2 of our rebuttal to reviewer 9m28, primarily due to their dependence on the dimensions and cardinality of the layer weight matrices. Unlike these methods, NEXP's efficiency is not correlated with model size, demonstrating strong scalability as model complexity increases. 5. NEXP yields consistent estimations across the evolutionary (learning) stages of a neural network, making it a suitable criterion for pruning both untrained and trained networks, as demonstrated in Subsection 4.3 and discussed in Appendix A. To the best of our knowledge, our proposed approach is the first criterion designed for both Pruning at Initialization (PaI) and Pruning after Training (PaT). This stands in contrast to other fundamental pruning metrics in the literature, which are specifically designed to address only one of these challenges and do not extend across the full spectrum of the convergence process. Next, we also present three specific examples from the experimental section to highlight the performance improvement achieved by NEXP: 1. Object Detection with YOLOv8: Our work demonstrates the superiority of NEXP on the more complex task of object detection, yielding significant performance improvements across the entire pruning spectrum compared to all competing methods (as shown in Figs. 2 and 6). Specifically, in lines 302-315, we discuss the superiority of our method using an iterative pruning format to further highlight the intrinsic property of expressiveness in maintaining network elements that are more robust to information redistribution during re-training, in contrast to the 'important'-labeled structures identified by other methods.
The experimental sections are arranged in this order to address the potential limitations of employing one-shot pruning for NEXP, as demonstrated in the image classification experiments in the previous subsection. 2. NEXP at Initialization: NEXP consistently outperforms all other approaches in top-1 accuracy, particularly in parameter-compression regimes with up to $100\\times$ smaller networks. 3. Tables 1 and 2 for image classification: we kindly ask you to refer to part b of this response. [3]. Hu, Hengyuan, et al. "Network trimming: A data-driven neuron pruning approach towards efficient deep architectures." arXiv preprint arXiv:1607.03250 (2016). --- Rebuttal 2: Title: Addressing Concerns Regarding Performance Improvements (part b) Comment: 3. Thank you for acknowledging NEXP's superiority in consistently achieving higher compression ratios. While it is true that in some cases, as seen in Tables 1 and 2, the predictive quality of models generated by NEXP is not always superior to other methods, it is equally important to note the numerous instances where NEXP demonstrates notable performance improvements (e.g., Table 1, right: lines 1-3, 8-9; Table 1, left: lines 10-11; Table 2: lines 6-8). In this context, we believe that a fair comparison, which includes cases where our approach does not outperform alternative methods, offers a valuable basis for a more thorough discussion and for identifying ways to address potential limitations. As also discussed in point 1, we address the potential limitations in predictive performance by providing a detailed discussion of the advantages of using an iterative pruning format (as demonstrated in object detection, where NEXP shows significant performance improvements) compared to the one-shot pruning format employed for image classification in Tables 1 and 2. A more in-depth analysis of the pruning settings is also available in Appendix B ('Pruning Process: An In-Depth Analysis').
Overall, we concur with the notion that the pruned architecture, rather than the inherited 'important' weights, is more crucial to the efficiency of the final model, as discussed in Q2 of our rebuttal to reviewer 9m28 and outlined in [4]. However, under different experimental settings and evaluation criteria (e.g., Tables 1 and 2), 'important' weights may prove to be a superior metric for achieving higher predictive performance, especially when predictive accuracy is a more critical constraint than parameter reduction. To address this potential trade-off between importance-based and expressiveness-based methods along the efficiency-quality curve, we have included an assessment of the hybrid compression space (Subsection 4.2). This assessment highlights the partial orthogonality of the two compression criteria (i.e., importance of weights and expressiveness of neurons) and provides both an intuitive understanding and a solid algorithmic basis for hybrid optimizations, thus enabling a more comprehensive exploration of the trade-off between the quality of the pruned architecture and the inherited 'important' weights (*in our opinion, an open and intriguing challenge in the domain of model compression*). To the best of our knowledge, this is the first work to propose hybrid solutions for model compression. We are more than willing to address any further questions or concerns you might have to ensure that our submission meets the highest quality standards. [4]. Liu, Zhuang, et al. "Rethinking the value of network pruning." arXiv preprint arXiv:1810.05270 (2018).
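A minimal sketch of what such a hybrid criterion could look like (our own toy illustration; the linear blend, the `alpha` knob, and the min-max normalization are assumptions, not the paper's actual hybrid formulation):

```python
import numpy as np

def hybrid_prune_mask(weight_importance, expressiveness, alpha=0.5, ratio=0.5):
    """Blend a weight-based importance score with an activation-based
    expressiveness score (both per-filter), then mark the lowest-ranked
    fraction `ratio` of filters for pruning."""
    def minmax(v):
        v = np.asarray(v, dtype=float)
        return (v - v.min()) / (np.ptp(v) + 1e-12)
    combined = alpha * minmax(weight_importance) + (1 - alpha) * minmax(expressiveness)
    n_prune = int(len(combined) * ratio)
    keep = np.ones(len(combined), dtype=bool)
    keep[np.argsort(combined)[:n_prune]] = False
    return keep

# Filters 1 and 3 score low on one or both criteria and get pruned.
keep = hybrid_prune_mask([0.9, 0.1, 0.5, 0.7], [0.8, 0.2, 0.6, 0.1])
print(keep.tolist())  # → [True, False, True, False]
```

Setting `alpha` to 1.0 or 0.0 recovers a purely importance-based or purely expressiveness-based ranking, making the trade-off between the two criteria explicit.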
Summary: This paper works on weight pruning for CNNs. It proposes an evaluation metric, i.e., "expressiveness", to evaluate whether a neuron or group of neurons should be pruned or not. The metric focuses on the neurons' ability to redistribute informational resources. As the evaluation of expressiveness requires data samples, the paper includes studies on arbitrary data versus a limited set of the dataset's representative samples. The experiments are conducted for image classification tasks on ResNet architectures, and the object detection task on YOLOv8m. Strengths: Weight pruning is an effective manner of reducing the redundancies in DNNs. Instead of focusing on weight importance, this paper considers the expressiveness of neurons in the information flow within a network. The proposed evaluation metric can also be combined with existing strategies based on importance evaluation metrics as a hybrid pruning approach. Weaknesses: 1. Limited practicality of the approach. The method is mainly focused on removing redundant filters from CNNs. Though CNNs are one category of DNNs, recent works have shifted to more advanced model architectures such as Transformers and Mamba, which are mainly composed of FC layers instead of convolutions. The practicality of the approach is highly restricted relative to SOTA model architectures for image classification tasks. 2. Limited performance improvements. This is a major concern. The performance gain of the proposed method is not obvious compared with baselines. For instance, on CIFAR-10 VGG-16, SCP can reduce the parameters 15.28$\times$ with 93.85\% accuracy, while the proposed method can only reduce the parameters 5.62$\times$ with a slightly better accuracy of 93.87\%. HRank also provides better performance than the proposed method, with a higher accuracy of 93.96\% (0.09\% higher than NEXP) and a higher reduction of FLOPs (4.26$\times$ (HRank) vs. 4.01$\times$ (NEXP)). On DenseNet-40, HRank also shows better performance across all metrics. HRank vs.
NEXP: accuracy 95.05\% vs. 94.64\%, parameter reduction 3.31$\times$ vs. 3.12$\times$, FLOPs reduction 3.38$\times$ vs. 2.51$\times$. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Can this method be extended to model architectures beyond CNNs? How is the performance compared with other methods? 2. What is the acceleration performance of the proposed method? 3. For the data selection, in extreme cases, such as when the data samples all come from the same classification class, what will the performance be like? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Please refer to the weaknesses and questions. The major concern is that the method does not show better performance than the baselines, and it is also limited in the model architectures it applies to. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
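For reference, the ratio-style numbers quoted in this review (accuracy delta plus "×" reduction factors for parameters and FLOPs) can be computed with a trivial helper (the model sizes below are made up, for illustration only):

```python
def pruning_summary(base_acc, pruned_acc, base_params, pruned_params,
                    base_flops, pruned_flops):
    """Theoretical compression metrics in the style of the comparison tables:
    accuracy change (percentage points) and reduction factors."""
    return {
        "delta_acc": round(pruned_acc - base_acc, 2),
        "params_x": round(base_params / pruned_params, 2),
        "flops_x": round(base_flops / pruned_flops, 2),
    }

# Hypothetical pruned model: 0.4 pp accuracy drop, ~3.1x fewer params, 2.5x fewer FLOPs.
print(pruning_summary(93.5, 93.1, 1_000_000, 320_000, 4.0e8, 1.6e8))
```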
Rebuttal 1: Rebuttal: **W1 and Q1.** NEXP is designed to measure the redundancy of activation structures based on their expressiveness, where the finest granularity can be considered that of a single neuron, and thus it can be adjusted to any activation structural component, e.g., convolutional filters (as demonstrated in the paper). In Section 3, we selected the convolutional operation to illustrate the intrinsic properties of NEXP for structured activation patterns, recognizing it as the most widely used and fundamental operation in recent years (lines 159-161 and 193). However, our approach is designed to facilitate structural pruning across all neural networks. This is achieved by extending the intuitions and fundamentals presented in Section 3 and adjusting Eq.11 accordingly by substituting the filter ($F_{k}^{l}$) with the neural network element of choice. More specifically, some of the architectures that NEXP is currently compatible with for pruning by extending the latest version of DepGraph [1] include Large Language Models (LLMs), Segment Anything Model (SAM), Diffusion Models, Vision Transformers, ConvNext, Yolov7, Yolov8, Swin Transformers, BERT, FasterRCNN, SSD, ResNe(X)t, DenseNet, RegNet, and DeepLab. **W1.** The paper includes an evaluation and analytical discussion of the proposed approach (NEXP) on the SOTA YOLOv8 model architecture for object detection (lines 287-315 in Section 4.1), where NEXP demonstrates superior performance in terms of both compression efficiency and predictive quality for the pruned models. Additionally, NEXP has been employed to prune the FC layers in vision architectures. For instance, ResNet-56 contains an FC layer before the classification layer, which was pruned as part of the evaluation process reported in Table 1. 
In more detail, Table 1 in the 1-page rebuttal PDF provides an analytical comparison between the baseline and pruned ResNet structures using NEXP, focusing on the last block and the FC layer (for the pruned model of row 8 in Table 1 of the paper). **W2.** In Section 4.1, the results of competing pruning methods for image classification are sourced directly from the respective publications (please refer to the W3 response in our rebuttal to reviewer 9m28). To ensure a fair and consistent comparison of predictive quality across all pruning approaches (as shown in Tables 1, 2, and 6-9), this work presents both the predictive performance of the unpruned (baseline) models, labeled as Base (%), and that of the pruned models, labeled as ∆ (%), as reported by each study. Therefore, the acc-1 percentages in W2's comparisons mistakenly refer to the accuracy of the unpruned models rather than the accuracy of the pruned models produced by each method. To improve the clarity of the paper, we have extended lines 270-272 to include a description of the predictive-quality notation as follows: *“The results of competing pruning methods in this subsection are directly obtained from the respective publications. In Tables 1, 2, and 6-9, the predictive performance of the unpruned (baseline) models, labeled as Base (%), and of the pruned models, labeled as ∆ (%), are presented, along with their respective compression ratios, as reported by each study."*. Additionally, kindly refer to the W3 response in our rebuttal to reviewer pBA2 and the Q2 response in our rebuttal to reviewer 9m28. **Q2.** The term *'acceleration performance'* of the proposed method can be ambiguous, as it may refer to both the computational efficiency of the pruning process and the accelerated performance of the pruned networks. To address both aspects, we provide clarifications below. Please feel free to further clarify your statement in the comments section if we have misinterpreted it.
Q2.a. **Acceleration of NEXP computations:** Kindly refer to the W2 response in our rebuttal to reviewer 9m28. Q2.b. **Acceleration of pruned networks:** To effectively quantify the efficiency of reported solutions, this work emphasizes theoretical compression metrics, i.e., (a) the number of multiply-adds (referred to as FLOPs) required to perform inference with the pruned network and (b) the fraction of parameters pruned, and does not include evaluations of the practical runtime speedup of pruned models. We acknowledge the significance of such evaluations and the limitations of theoretical proxies [2]. While all parameters (which are emphasized in this work) can be treated equally when reducing the network's storage footprint, different parameters may have varying impacts when reducing the computational cost of inference. However, this work is structured to provide a consistent comparative framework based on more accessible and common theoretical metrics, ensuring ease of comparison with other approaches. For that reason, following the intuition of Blalock et al. [2], we provide a comprehensive detailing of the efficiency metrics in C.2, with further analytical discussions provided throughout the paper, e.g., lines 23-36, 3.4, Alg. 1, and Section 4 overall. **Q3.** Our initial intuition was that discriminative and activation-based approaches rely on sample quality. However, the sensitivity analysis of NEXP in Appendix A and Section 3.3 highlights that NEXP is better approximated using random samples rather than class-representative sampling via k-means. Although extreme cases were not evaluated, most experiments used small batches of random samples to estimate Neural Expressiveness (lines 659-661). In this context, the evaluation on ImageNet-1k (4.1), using 64 random samples across 1000 classes, is a strong indicator that NEXP's efficiency remains consistent in such scenarios. [1]. Fang, Gongfan, et al. "Depgraph: Towards any structural pruning."
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. [2]. Blalock, Davis, et al. "What is the state of neural network pruning?" Proceedings of Machine Learning and Systems 2 (2020): 129-146. --- Rebuttal Comment 1.1: Comment: I thank the authors for providing the response to my questions. After reading the rebuttal, I still have concerns regarding: 1) The applicability to other model architectures, especially more recent building blocks such as Transformer blocks, which contain more fully connected layers rather than convolutional layers. There is no evidence of how the method will perform on these models. 2) The performance improvements. Though the reported accuracy is the base accuracy of the dense model, the performance improvements compared to baselines are not obvious. For instance, in Table 1, ABC only degrades the accuracy by 0.03\% with 2.18$\times$ parameter reduction and 2.18$\times$ FLOPs reduction, whereas NEXP degrades the accuracy by 0.41\% with 2.11$\times$ FLOPs reduction and 2.87$\times$ parameter reduction. Thus, I will keep my current rating, and I thank the authors for their efforts. --- Rebuttal 2: Title: Addressing Reviewer ha2w's Concerns Regarding W1 and W2 Comment: We would first like to thank you for taking the time and effort to address our initial rebuttal (especially at this time of year). Your insightful comments and questions are greatly appreciated and have enabled us to further clarify the motivations of our work and improve the presentation quality of the paper. To further address your concerns regarding the applicability (W1) and performance improvements (W2) of our proposed method, we provide more detailed discussions below. **W1 (extended).** Indeed, the paper does not provide experimental evidence of how our proposed approach will perform on transformer blocks.
We have incorporated our proposed pruning approach as an extension of the latest DepGraph version (so its pruning compatibility aligns with that of the DepGraph framework), and we have provided theoretical 'guarantees' via the generalization of the foundational concepts and intuitions of Neural Expressiveness, i.e., estimating the contribution of a neuron or group of neurons based on their ability to extract features that maximally separate sub-spaces within the feature space, using the overlap of activations. Nevertheless, we believe that, in order to perform a robust analysis and provide concrete experimental evidence, the intricacies of the various operations handling computational workloads within different neural network architectures should be explored and discussed separately. For that reason, the format of this paper prioritizes providing a solid basis for both the theoretical and algorithmic properties of NEXP, while hinting at potential future directions. To motivate further research and analysis of NEXP in task-specific and model-specific optimizations, we kindly encourage readers to experiment with different settings of NEXP throughout the paper, and we provide specific guidelines to that end (e.g., lines 219-222, 234-236). **W2 (extended).** For an in-depth discussion of the performance improvements achieved by NEXP, as well as its effectiveness beyond the predictive performance (quality) of the pruned models, we kindly refer you to the responses titled *'Addressing Concerns Regarding Performance Improvements (part a)'* and *'Addressing Concerns Regarding Performance Improvements (part b)'* in reviewer pBA2's comments section. W2.a. Addressing the Example on ABC: The ABCPruner can be categorized as part of the subset of evolutionary-search-based algorithms within the domain of model compression [3].
In contrast to most pruning methods, which focus on removing redundant network elements—whether weights, neurons, or structures of weights and/or neurons— Neural Architecture Search (NAS) involves searching through a vast space of possible architectures to find the optimal one, often requiring the training of many candidate models from scratch. Specifically, NAS-Evolutionary-Based pruning approaches adopt evolutionary algorithms to explore and identify optimal sparse subnetworks; in this context, ABCPruner employs the artificial bee colony algorithm to efficiently discover the optimal pruned structure. While NAS-based algorithms often do not constitute a fair comparison to pruning methods due to their more intensive and costly explorations (when compared to the more lightweight and straightforward intuition of elimination in pruning approaches), we have included ABC as a more generic compression baseline to further emphasize NEXP's consistent ability to achieve higher compression ratios beyond those of native pruning methods. We are more than willing to address any further questions or concerns you might have to ensure that our submission meets the highest quality standards. [3]. He, Yang, and Lingao Xiao. "Structured pruning for deep convolutional neural networks: A survey." IEEE transactions on pattern analysis and machine intelligence (2023).
Summary: This paper proposes a new structured pruning approach, NEXP. It works by computing the dissimilarity score of feature activations across samples and removing the filters with smaller variances. Experimental results on several models and datasets demonstrate the effectiveness of the proposed approach. Strengths: 1. The paper is well-written with clear motivation, and many of the important technical details are included. 2. The authors have presented the experimental results well, and the additional discussion provides deeper insights into the effectiveness of the proposed methods. 3. The authors evaluated models beyond image classification, i.e., the proposed method works well on YOLOv8 object detectors. Weaknesses: 1. The conclusion section is too short and fails to characterize the main contribution of this paper. What does it mean as to “when” and “how” to prune? The authors should elaborate on these further. 2. I think this is not a scalable approach for computing the pruning metrics. The pruning metric proposed in the paper requires computing an N by N matrix for each filter in the network, where N is the number of samples. This could grow computationally infeasible for large networks and batch sizes. The authors also fail to discuss this aspect of pruning efficiency in the paper. 3. In terms of experiments, I am not sure why the authors compare the methods under different compression ratios. If the pruned models of each method have different parameter counts, it can be hard to compare the accuracy numbers. 4. I feel a lot of the content in Section 3 is not necessary and could go into a separate preliminary section. Section 3.1 and the early parts of Section 3.2 take up a lot of space while not providing the motivation and insights for the later introduced methods. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. The authors mostly focus on convolutional neural networks.
However, Vision Transformers are becoming the de facto architecture for many computer vision tasks. Does the proposed approach apply to Vision Transformers as well? 2. Figure 3 seems to suggest that the proposed pruning metric is uniformly superior to the importance-based weight metric. Why is this the case? It seems unintuitive to me that a pruning metric that discards the weight information could work better than magnitude-based approaches. 3. Have the authors evaluated the practical runtime speedup of pruned models? This would make the results more comprehensive. 4. What does Figure 1 depict exactly? What does each subfigure represent, e.g., the rows and columns? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1.** Kindly refer to the response in W1 for the rebuttal to reviewer 4Yxj's comments. **W2.** The proposed pruning metric, as demonstrated in 3.2 and illustrated in eq.11, requires a total of ${\frac{N(N-1)}{2}}$ combinations, unlike the $N^{2}$ combinations in an $N$ by $N$ matrix, as highlighted and discussed in lines 225-233 of Subsection 3.2. We acknowledge and agree that computational efficiency is a crucial aspect of pruning approaches. Accordingly, our submitted paper includes a dedicated subsection (B.4) on the **'Acceleration of NEXP computations'**, as referenced in lines 264-265. To improve the clarity of App. B's contents in the main text, we have refined the sentence in lines 264-265 as follows: A more in-depth analysis of Alg.1 along with more details on the implementation options, $\textcolor{blue}{\text{i.e., B.1 Global vs. local-scope pruning, B.2 One-shot vs. iterative pruning, B.3 Detailed description of all algorithmic steps, and B.4 Acceleration of NEXP computations,}}$ are presented in Appendix B. Below, we also provide a comparison between the average results of 100 pruning iterations using $l1$ magnitude pruning (which is considered a lightweight approach) and NEXP as implemented in the DepGraph pruning framework for CIFAR-10 and ImageNet, following the experimental protocol outlined in the paper. We present the average duration of a pruning step along with its standard deviation (STD) in seconds: **CIFAR-10 - ResNet56** | Method | Average Step | STD | |----------|----------|----------| | $l1$ | 0.0815 (s) | 0.0177 (s) | | NEXP | 0.8415 (s) | 0.0674 (s) | **ImageNet - ResNet50** | Method | Average Step | STD | |----------|----------|----------| | $l1$ | 0.3789 (s) | 0.1498 (s) | | NEXP | 0.4720 (s) | 0.1336 (s) | Additionally, the paper demonstrates that NEXP exhibits resilience to a limited number of samples, as evidenced by the significant correlations outlined in A, particularly in A.2.
Consequently, NEXP can sustain consistent measurements without a significant decrease in computational efficiency as the models and tasks become more complex. Conversely, weight-based pruning methods, such as $l1$, exhibit increased computational overhead as model sizes grow, as highlighted in the aforementioned tables. This increase is most often caused by their dependency on the dimensions and/or cardinality of the layer weight matrices. In contrast, NEXP does not correlate with model size and thus demonstrates prominent scalability in computational efficiency as model complexity increases. **W3.** Overall, we concur with the pruning evaluation framework articulated by Blalock, Davis, et al. [1], *"Pruning imposes a tradeoff between model efficiency and quality, with pruning increasing the former while (typically) decreasing the latter. This means that a pruning method is best characterized not by a single model it has pruned, but by a family of models corresponding to different points on the efficiency-quality curve."*. For the experimental results of object detection (lines 287-315), the assessment of the hybrid optimization space (4.2), and the evaluation of NEXP at initialization (4.3), different points on the efficiency-quality curve for all methods are sampled based on a consistent compression format with equal compression ratios. Each point on the curve is generated either based on a target ($\tau$) FLOPs compression ratio (see Algorithm 1) for object detection and hybrid optimization or on a target ($\tau$) params compression ratio (4.3, see lines 675-678). However, for the comparison on image classification, the results of all adversarial pruning methods are directly sourced from their respective publications and not replicated locally. Thus, to avoid excessive result tables, we have defined different FLOPs compression regimes of interest to ensure a fair comparison.
To improve the clarity of the paper, we have extended lines 270-272 to include the following: *"The results of adversarial pruning methods in this subsection are directly obtained from the respective publications."* **W4.** The insights of most adversarial methods are discussed in Related Work, specifically in lines: - 113-115: L1, GAL, LAMP - 117-120: NISP, ThiNet, GAL, LAMP - 121-123: DCP - 131-132: HRank An outline of their motivations is also provided in lines 270-272 and C.2. We consider these sections essential as they establish the mathematical foundation for conceptualizing and differentiating between importance and expressiveness, and highlight the motivation behind NEXP (see response in W2 for the rebuttal to reviewer pBA2's comments). However, in case of acceptance, space from the extra page will be reserved to provide further insights on the later introduced methods. **Q2.** Fig. 3 suggests that NEXP is consistently superior to l1 in compression efficiency, aligning with the findings of previous sections that NEXP consistently achieves higher parameter compression ratios for given target (τ) FLOPs compared to weight-based methods. As outlined in [2], the pruned architecture, rather than inherited 'important' weights, is more crucial to the efficiency of the final model. Interestingly, hybrid optimizations can facilitate strategies that will further explore the trade-off between the quality of the pruned architecture and the inherited 'important' weights. **Q4.** A detailed description of Fig. 1 is in lines 247-251 (3.3). Appendix A provides a comprehensive discussion of the findings related to Fig.1, while Fig. 4 presents an extended version, including Expressiveness at Initialization. **Q1, Q3.** Kindly refer to the responses in Q1 and Q2.b (respectively) for the rebuttal to reviewer ha2w's comments. [1]. Blalock, Davis, et al. "What is the state of neural network pruning?." Proceedings of machine learning and systems 2 (2020): 129-146. [2]. Liu, Zhuang, et al.
"Rethinking the value of network pruning." arXiv preprint arXiv:1810.05270 (2018). --- Rebuttal Comment 1.1: Title: Rebuttal Comment: I would like to thank the authors for the rebuttal. My concerns are mostly addressed. Thus I increase my score. I would suggest that the authors include the speed evaluation of the pruning method in the updated version. --- Rebuttal 2: Title: Reply to Reviewer 9m28 Comment: Thank you for taking the time and effort to address our initial rebuttal (especially at this time of year). Your insightful comments and suggestions have given us the opportunity to further clarify the intuition and effectiveness of our proposed approach, while also improving the quality of the paper. A discussion on the **Computational Scalability** of NEXP, as well as a comparison (both theoretical and experimental) against weight-based methods, has been included in the updated version of the paper. We greatly appreciate your increased score.
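To make the combination count discussed in W2 above concrete, here is a minimal sketch of an activation-dissimilarity score computed over the $\frac{N(N-1)}{2}$ unordered sample pairs. The function name and the Euclidean-distance choice are illustrative assumptions, not the exact NEXP metric of eq. 11:

```python
import numpy as np

def pairwise_dissimilarity_score(activations: np.ndarray) -> float:
    """Score one filter by the mean dissimilarity of its activations over
    all N*(N-1)/2 unordered sample pairs (upper triangle only), mirroring
    the combination count discussed in the rebuttal.
    `activations` has shape (N, D): N samples, D flattened features."""
    n = activations.shape[0]
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):  # upper triangle: N(N-1)/2 pairs, not N^2
            total += np.linalg.norm(activations[i] - activations[j])
            pairs += 1
    return total / pairs

# A filter whose activations barely vary across samples scores near zero
flat = np.ones((4, 8))                       # identical responses -> score 0
varied = np.arange(32, dtype=float).reshape(4, 8)
print(pairwise_dissimilarity_score(flat))    # 0.0
print(pairwise_dissimilarity_score(varied))  # > 0
```

Iterating only over the upper triangle is what keeps the cost at $N(N-1)/2$ rather than $N^2$ distance evaluations; a filter with near-identical responses across samples would be a pruning candidate under this kind of criterion.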
Summary: This paper aims to address the network pruning problem. Specifically, it proposes to use a new importance measurement, called expressiveness, to decide the pruning process. It jointly considers the model state when computing the proposed measurement. In addition, it can also be combined with typical importance-based pruning methods to improve model efficiency. Strengths: 1. Network pruning is a valuable research direction to study, especially in the current large-scale era where efficiency matters a lot. 2. The proposed new metric to measure network importance is interesting. 3. Empirical results show the method's superiority. Weaknesses: 1. Adding more discussion to the conclusion section would help improve the paper's readability. 2. Since this paper proposes a new pruning metric, it would be better to show more visualization and network behavior analysis to illustrate the intuition. 3. The compared baseline models are relatively old; adding more recent publications would help support this paper. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the weaknesses section above. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1.** The submitted version of the paper features a brief conclusion section to adhere to the 9-page limit of the NeurIPS format, prioritizing space for other sections. We agree that an extended discussion in the conclusion is vital for the paper's overall remarks and readability. Below, we provide the full version of the conclusions, which will be incorporated into the paper if accepted, as an additional content page will be permitted for the camera-ready version. **Conclusions and Future Directions:** *In this work, we have introduced "Neural Expressiveness" as a new criterion for estimating the contribution of a neuron based on its ability to extract features that maximally separate sub-spaces within the feature space, using the overlap of activations. We demonstrated that neural expressiveness can be approximated with limited arbitrary data and that it is capable of yielding consistent estimations across the evolutionary (learning) states of a neural network. We also provided a theoretical background and mathematical conceptualization on the differentiators, along with an experimental study on the complementary nature between expressiveness and previous weight-based (importance) methods, hinting at potential future directions. Finally, after extensive experimentation, we showcased the efficacy of neural expressiveness, consistently delivering significant compression ratios across various scheduling, fine-tuning, and evaluation setups, with minor deviations in performance, and outperforming state-of-the-art solutions in both PaT and PaI.
In our future NEXP steps, based on the intrinsic properties of expressiveness presented in this work, we aim to investigate optimization solutions for exploring the hybrid compression solution space, as well as solutions for bringing model compression closer to the initialization state, addressing questions such as, 'Is neural network training essentially about learning new knowledge from scratch, or is it about revealing the knowledge that the model already possesses?' (as quoted from Wang et al. [1]), and thus minimizing the need for extensive training iterations.* **W2.** We have included **Figure r1** in the 1-page rebuttal PDF, which has been incorporated in Subsection 3.1 (*"Weights and Activations: Importance vs Expressiveness"*) to facilitate a better understanding of the notations and intuitions in Section 3. This figure visually illustrates the introduced notations, clarifying the intuition behind the proposed method's motivation and differentiation from the intuition of existing pruning approaches. (This was also motivated by reviewer pBA2 W1's comment.) W2.a. A two-fold sensitivity analysis of NEXP is provided in Appendix A. This analysis examines the dependency of the proposed method to (i) the mini-batch data ($X$, as outlined in Alg. 1), using two input sampling strategies—random sampling and class-representative sampling via k-means—and (ii) the information state ($W_{t_i}$), specifically comparing expressiveness at initialization against expressiveness after training, when weights have converged, for various neural networks (i.e., ResNet-56, MobileNet-v2, DenseNet-40, and VGG19). **W3.** Thank you for your attention to detail. We have extended the paper to incorporate the following recently published pruning approaches [2, 3]. Below, we provide an outlook on the updated result tables for different target ($\tau$) theoretical speed-up ratios (#FLOPs ↓), showcasing a comparison between the newly included methods and our proposed approach (NEXP). 
**CIFAR-10 - ResNet-56** | Method | Base (%) | ∆ (%) | #Params ↓ | #FLOPs ↓ | |----------|----------|----------|----------|----------| | NUCLEAR [2] | 93.59 | -0.07 | 1.16$\times$ | 1.45$\times$ | | NEXP (Ours) | 93.36 | +0.05 | **1.69$\times$** | 1.53$\times$ | |-------------|----------|-------|---------------|---------------| | NUCLEAR [2] | 93.59 | -0.30 | 1.81$\times$ | 1.76$\times$ | | DTP [3] | 93.36 | +0.12 | 2.01$\times$ | 1.99$\times$ | | NEXP (Ours) | 93.36 | -0.41 | **2.87$\times$** | 2.11$\times$ | |-------------|----------|-------|---------------|---------------| | NEXP (Ours) | 93.36 | -1.58 | **4.3$\times$** | 2.50$\times$ | | NUCLEAR [2] | 93.59 | -1.94 | 2.83$\times$ | 2.77$\times$ | | DTP [3] | 93.36 | -0.90 | 3.39$\times$ | 3.59$\times$ | |-------------|----------|-------|---------------|---------------| | NEXP (Ours) | 93.36 | -5.12 | **21.5$\times$** | 5.00$\times$ | | DTP [3] | 93.36 | -3.52 | 10.41$\times$ | 11.41$\times$ | | DTP [3] | 93.36 | -7.18 | 20.79$\times$ | 19.31$\times$ | **CIFAR-10 - DenseNet40** | Method | Base (%) | ∆ (%) | #Params ↓ | #FLOPs ↓ | |----------|----------|----------|----------|----------| | FROBENIUS [2] | 94.82 | -0.13 | 1.76$\times$ | 1.67$\times$ | | *(new line)* NEXP (Ours) | 94.64 | -0.17 | **1.91$\times$** | 1.70$\times$ | **CIFAR-10 - MobileNet-v2** | Method | Base (%) | ∆ (%) | #Params ↓ | #FLOPs ↓ | |----------|----------|----------|----------|----------| | NEXP (Ours) | 94.32 | **+0.13** | 2.25$\times$ | 2.11$\times$ | | DTP [3] | 93.70 | -1.77 | 2.50$\times$ | - | In general, the selection of reported adversarial compression results in this study was constrained by the frequent absence of size compression ratio reporting (i.e., parameter reduction) in many pruning studies. [1]. Huan Wang, Can Qin, Yue Bai, Yulun Zhang, and Yun Fu. Recent advances on neural network pruning at initialization. In IJCAI, 2022. [2]. Sun, Xinglong, and Humphrey Shi.
"Towards Better Structured Pruning Saliency by Reorganizing Convolution." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2024. [3]. Li, Yunqiang, et al. "Differentiable transportation pruning." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their thoughtful feedback and their comprehensive suggestions. *(Each review has been addressed separately)* Pdf: /pdf/5f82d053074bca98ce8afaacc8ce6c9a27145e3d.pdf
NeurIPS_2024_submissions_huggingface
2024
DiffLight: A Partial Rewards Conditioned Diffusion Model for Traffic Signal Control with Missing Data
Accept (spotlight)
Summary: This paper proposes a conditional diffusion model named DiffLight, which is able to unify traffic data imputation and decision-making for TSC when data is incomplete. Specifically, it proposes a partial reward conditional diffusion method to avoid the negative effects brought by the padded values of missing data. Additionally, a novel spatial-temporal transformer architecture and a diffusion communication mechanism are designed to capture spatiotemporal dependencies between intersections as well as enhance cooperative control among them. Extensive experiments on five datasets demonstrate the effectiveness of DiffLight. Besides, the source code provided in this paper is accessible and available for use. Strengths: S1: This paper studies a significant problem of traffic signal control with missing data, which is often overlooked by researchers but is prevalent in real urban traffic scenarios. So, the solutions proposed in this paper will greatly benefit the practical use of TSC. Besides, the paper is well-written and easy to follow. S2: The idea of unifying the problems of traffic data imputation and traffic signal control decision-making in one model is interesting, wise, and reasonable, but has never been studied before. The design of partial reward conditional diffusion (PRCD) with classifier-free guidance is novel, which makes it feasible to model the complex distribution of traffic data, especially in scenarios with missing traffic data. Additionally, the paper provides theoretical proof of PRCD's feasibility to demonstrate how PRCD avoids the negative impacts of padding. S3: In the noise model of DiffLight, the paper designs the spatial-temporal transformer architecture (STFormer) to model the spatiotemporal dependencies between multiple intersections in traffic signal control.
Furthermore, the paper introduces a diffusion communication mechanism, which assists in capturing the spatiotemporal dependencies between multiple intersections by disseminating the observational information generated by the STFormer during the backward process. This mechanism enables effective traffic signal control in difficult scenarios, such as simultaneous data loss at the current intersection and neighboring intersections. Weaknesses: W1: The authors should further explain how the Diffusion Communication Mechanism (DCM) enables information communication among multiple intersections when data is missing at both the current intersection and neighboring intersections. This part is not clear enough. W2: This paper only considers traffic signal control under two typical types of missing data patterns. While it is undeniable that random missing and Kriging missing are the most common patterns in real life, future research could explore traffic signal control under a wider variety of missing data patterns, including mixed missing data scenarios. W3: There are some symbols in the paper that are not correctly defined. For example: - At line 146, \(\tau\) is defined as the observation trajectory, but at line 152, it states \(\tau_{i}\) represents the neighboring intersection. I think here \(\tau_{i}\) should denote the observation trajectory of the neighboring intersection. - In the experimental section, at lines 220 and 221, the evaluation metric (Average Travel Time, ATT) is defined. Is there a more objective formula definition and explanation for it? Technical Quality: 4 Clarity: 3 Questions for Authors: Could you provide a specific example to explain the design of the DCM mechanism? Confidence: 5 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are delighted that the reviewer found our motivation interesting and reasonable. Thank you for your positive and insightful comments. We respond to each of the points below: > [W1 & Q1] Explanation of Diffusion Communication Mechanism Thank you for your comments. We gave a brief introduction in **Section 3.2**. We apologize for our unclear expression and appreciate the opportunity to clarify this. The Diffusion Communication Mechanism (DCM) is designed to enhance information communication among multiple intersections. In DCM, the states of the neighboring intersection $x^k(\tau^i)$ diffused by the diffusion model at diffusion step $k$ would be used to calculate $x^0(\tau^i)$, and then $x^0(\tau^i)$ would be sent to the current intersection. The agent at the current intersection would take $x^0(\tau^i)$ as the condition to generate the states of the current intersection $x^{k-1}(\tau)$ at diffusion step $k-1$. > [W2] Missing patterns of data Thank you for highlighting the complex missing patterns of data in the real world. Undoubtedly, the missing patterns of data in the real world are far more complex than the random and Kriging missing patterns mentioned. We completely agree with this. In future work, we will follow your suggestion to further explore traffic signal control methods under mixed data-missing scenarios. > [W3] Symbols and metrics Thank you for your detailed inspection of the symbol definitions. We will carefully correct these incorrect symbol definitions to improve the paper. Additionally, regarding the explanation of average travel time (ATT), it is the most commonly used metric in the field of traffic signal control. This metric represents the average travel time of all vehicles from entering the road network to leaving the intersection, and it effectively reflects the performance of traffic signal control.
The calculation formula is as follows: $$ \text{ATT}=\frac{1}{N} \sum_{i=1}^{N} \left ( t_{i}^{l}-t_{i}^{e} \right ), $$ where $N$ is the total number of vehicles entering the road network, $t_{i}^{e}$ and $t_{i}^{l}$ are the entering time and leaving time for the $i$-th vehicle respectively. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed responses. I'm generally fine with the results. Thus, I will keep my score. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to review and for your thoughtful feedback. We will take your suggestions into consideration in our future work.
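The ATT definition given in the rebuttal above translates directly into code; a minimal sketch (the function name is ours, not from the paper):

```python
def average_travel_time(enter_times, leave_times):
    """ATT as defined in the rebuttal: the mean of (leaving time - entering
    time) over all N vehicles that entered the road network."""
    assert len(enter_times) == len(leave_times)
    return sum(l - e for e, l in zip(enter_times, leave_times)) / len(enter_times)

# Three vehicles entering at t = 0, 10, 20 s and leaving at t = 120, 100, 80 s
print(average_travel_time([0, 10, 20], [120, 100, 80]))  # 90.0
```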
Summary: The paper introduces "DiffLight," a novel approach combining traffic data imputation and decision-making for Traffic Signal Control (TSC) under scenarios with missing data using a diffusion model framework. It employs partial rewards conditioned diffusion and a spatial-temporal transformer architecture to address challenges associated with incomplete data. The proposed model is tested extensively on existing datasets, demonstrating its efficacy in handling different missing data scenarios. Strengths: 1. Originality: The approach of using a conditional diffusion model to simultaneously perform traffic data imputation and decision-making in TSC is straightforward. The integration of partial rewards conditioned diffusion is particularly novel and helps mitigate the impact of data padding issues. 2. Clarity: The paper is clearly written, with a logical structure that carefully explains the methodology and experimental setup, making it accessible and understandable. Weaknesses: 1. There is a significant gap in the comparative analysis. The paper does not benchmark against relevant state-of-the-art methods like "MissLight" or other baseline diffusion models, which are crucial for establishing the efficacy of the proposed approach. 2. MissLight used $D_{HZ}$ and $D_{NY}$, but this paper uses a new dataset; it is not clear why. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Can you explain why MissLight is neglected? If you have offline data, it is easy to employ an offline version of MissLight as well. 2. Why not compare on the existing dataset used in MissLight? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The paper discussed limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
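For context on the classifier-free guidance mentioned in the summary above: generically, it combines a conditioned and an unconditioned noise prediction with a guidance weight. A minimal sketch of that combination (names and the exact weighting convention are assumptions, not necessarily the authors' formulation):

```python
import numpy as np

def cfg_noise(eps_cond, eps_uncond, w):
    """Generic classifier-free guidance: push the conditional noise
    prediction away from the unconditional one by guidance weight w.
    w = 0 recovers the purely conditional prediction."""
    return (1.0 + w) * np.asarray(eps_cond) - w * np.asarray(eps_uncond)

eps_c = np.array([1.0, 2.0])  # noise predicted with the (partial) reward condition
eps_u = np.array([0.5, 0.5])  # noise predicted without conditioning
print(cfg_noise(eps_c, eps_u, 0.0))  # [1. 2.]
print(cfg_noise(eps_c, eps_u, 2.0))  # [2. 5.]
```

In a PRCD-style setting, the conditional prediction would only be available where rewards are observed, which is what distinguishes it from this generic form.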
Rebuttal 1: Rebuttal: We highly appreciate your high-quality and valuable suggestions. We provide the point-by-point response as follows: > [W1 & Q1] Comparative analysis Thank you for emphasizing the gap in the comparative analysis. We apologize for our unclear expression. Actually, all the baselines were implemented with reference to the SDQN-SDQN (transferred) method in MissLight [1]. We replaced the DQN algorithm with different algorithms in the offline setting and imputed the states with the SFM model. To further evaluate the efficacy of the proposed approach, we employ an offline version of SDQN-SDQN (model-based) in MissLight and conduct experiments on Hangzhou$_1$ and Jinan$_1$. To distinguish the baseline of CQL in Section 4.2, it is named CQL (model-based) while CQL in Section 4.2 is named CQL (transferred). **Random Missing:** |Dataset|Rate|CQL (transferred)|CQL (model-based)|DiffLight| |-|-|-|-|-| |$\mathcal{D}_{\text{HZ}}^1$|10.00%|363.50|376.85|285.96| ||30.00%|368.53|381.92|293.10| ||50.00%|383.67|388.51|303.91| |$\mathcal{D}_{\text{JN}}^1$|10.00%|299.07|303.46|273.17| ||30.00%|310.80|324.15|280.32| ||50.00%|322.25|361.40|288.01| **Kriging Missing:** |Dataset|Rate|CQL (transferred)|CQL (model-based)|DiffLight| |-|-|-|-|-| |$\mathcal{D}_{\text{HZ}}^1$|6.25%|317.69|389.66|291.80| ||12.50%|317.94|397.13|297.18| ||18.75%|319.18|449.80|299.96| ||25.00%|328.83|463.25|301.08| |$\mathcal{D}_{\text{JN}}^1$|8.33%|302.35|374.20|280.83| ||16.67%|343.16|347.88|295.53| ||25.00%|398.66|400.55|334.12| In the new experiment, DiffLight achieves competitive performance compared with CQL (transferred) and CQL (model-based). The possible reason why DiffLight has better performance is that CQL (model-based) suffers from error accumulation caused by the reward imputation model while DiffLight can directly make decisions with Partial Rewards Conditioned Diffusion (PRCD). 
Besides, Diffuser [2] and Decision Diffuser [3] are the most representative works in the field of diffusion models for RL, which are included as baselines in much of the literature [9, 10, 11]. Thus, we choose them as our baselines rather than other diffusion-based methods. > [W2&Q2] Additional datasets Thank you for highlighting the discussion on the selection of datasets. Actually, the Hangzhou, Jinan and New York datasets are all commonly used in recent literature [1, 4, 5, 6, 7, 8]. We conducted extensive experiments on the Hangzhou and Jinan datasets with different data-missing scenarios. To further evaluate the efficacy and validate the performance of our approach, we conduct experiments on the **New York** dataset, which includes 48 intersections. **Random Missing:** |Dataset|Rate|BC|CQL|TD3+BC|DT|Diffuser|DD|DiffLight| |-|-|-|-|-|-|-|-|-| |$\mathcal{D}_{\text{NY}}$|10%|187.14|200.77|349.54|394.17|209.37|185.98|182.89| ||30%|226.23|254.73|540.18|605.81|241.32|229.44|244.93| ||50%|453.90|446.29|820.19|837.97|453.97|455.07|266.82| **Kriging Missing:** |Dataset|Rate|BC|CQL|TD3+BC|DT|Diffuser|DD|DiffLight| |-|-|-|-|-|-|-|-|-| |$\mathcal{D}_{\text{NY}}$|6.25%|515.40|242.15|496.41|894.76|741.99|765.64|197.22| ||12.50%|1304.52|470.69|859.98|930.78|951.49|1213.08|315.05| ||18.75%|1360.71|1154.71|989.99|1197.74|1034.02|929.25|350.66| ||25.00%|1442.31|1089.39|1108.67|1445.37|846.18|1393.23|454.56| In the new experiment on the New York dataset, DiffLight achieves the best performance in most scenarios, demonstrating its ability to deal with complex traffic scenarios and control traffic signals in a larger-scale traffic network. In contrast, the performance of most baselines drops rapidly, due to the cumulative effect of errors in state imputation and decision-making at more intersections. Reference: [1] Mei, Hao, et al. "Reinforcement learning approaches for traffic signal control under missing data."
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence. 2023. [2] Janner, Michael, et al. "Planning with Diffusion for Flexible Behavior Synthesis." *International Conference on Machine Learning*. PMLR, 2022. [3] Ajay, Anurag, et al. "Is Conditional Generative Modeling all you need for Decision Making?." *The Eleventh International Conference on Learning Representations*. [4] Ye, Yutong, et al. "InitLight: initial model generation for traffic signal control using adversarial inverse reinforcement learning." Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence. 2023. [5] Wei, Hua, et al. "Colight: Learning network-level cooperation for traffic signal control." Proceedings of the 28th ACM international conference on information and knowledge management. 2019. [6] Zang, Xinshi, et al. "Metalight: Value-based meta-reinforcement learning for traffic signal control." Proceedings of the AAAI conference on artificial intelligence. Vol. 34. No. 01. 2020. [7] Yu, Zhengxu, et al. "MaCAR: Urban traffic light control via active multi-agent communication and action rectification." Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial Intelligence. 2021. [8] Zhang, Liang, et al. "Expression might be enough: representing pressure and demand for reinforcement learning based traffic signal control." International Conference on Machine Learning. PMLR, 2022. [9] Li, Wenhao, et al. "Hierarchical diffusion for offline decision making." *International Conference on Machine Learning*. PMLR, 2023. [10] Venkatraman, Siddarth, et al. "Reasoning with Latent Diffusion in Offline Reinforcement Learning." *The Twelfth International Conference on Learning Representations*. [11] Zhu, Zhengbang, et al. "Madiff: Offline multi-agent learning with diffusion models." *arXiv preprint arXiv:2305.17330* (2023). 
---

Rebuttal Comment 1.1:

Comment: Thanks for the clarification on the comparisons to existing baselines. These new experimental results help us understand what actually helps under the missing data setting. From what I see, the offline RL method (CQL) definitely helps. I'm raising my score +1 and suggest you add the above results later in this paper. As for the ablation study, it seems like we still miss some ablation on the Inverse Dynamics part. I understand it was originally part of the Decision Diffuser but I'm still curious to see its performance.

---

Reply to Comment 1.1.1:

Comment: We are truly grateful to the reviewer for the invaluable insights and detailed feedback. We will add the content of our discussion to the subsequent version. To evaluate the contribution of the inverse dynamics model, we remove it and extend the output dimension of the noise model to generate both observations and actions.

**Random Missing:**

| Dataset | Rate | w/o inverse dynamics | w/ inverse dynamics |
| --------------------------- | ------ | -------------------- | ------------------- |
| $\mathcal{D}_{\text{HZ}}^1$ | 50.00% | 572.61 | 303.91 |
| $\mathcal{D}_{\text{JN}}^1$ | 50.00% | 301.21 | 288.01 |

**Kriging Missing:**

| Dataset | Rate | w/o inverse dynamics | w/ inverse dynamics |
| --------------------------- | ------ | -------------------- | ------------------- |
| $\mathcal{D}_{\text{HZ}}^1$ | 25.00% | 386.92 | 301.08 |
| $\mathcal{D}_{\text{JN}}^1$ | 25.00% | 395.46 | 334.12 |

In this ablation, the variant with the inverse dynamics model achieves consistently better performance.
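For readers unfamiliar with the component being ablated, a minimal sketch may help. This is our own illustration, not the authors' implementation: in the Decision Diffuser style, an inverse dynamics model infers the action taken between two consecutive observations, $a_t = f(o_t, o_{t+1})$. The two-layer MLP, all dimensions, and the 8-phase action space below are illustrative assumptions.

```python
# Hedged sketch of an inverse dynamics model (our illustration, not the
# authors' code): predict the signal-phase action from two consecutive
# observations, a_t = f(o_t, o_{t+1}). All sizes are assumed.
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, HIDDEN, N_PHASES = 12, 32, 8  # assumed dimensions

# A tiny two-layer MLP over the concatenated observation pair.
W1 = rng.normal(0, 0.1, (2 * OBS_DIM, HIDDEN)); b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (HIDDEN, N_PHASES));    b2 = np.zeros(N_PHASES)

def inverse_dynamics(o_t, o_next):
    """Return logits over the candidate signal phases."""
    x = np.concatenate([o_t, o_next])
    h = np.maximum(x @ W1 + b1, 0.0)  # ReLU
    return h @ W2 + b2

logits = inverse_dynamics(rng.normal(size=OBS_DIM), rng.normal(size=OBS_DIM))
phase = int(np.argmax(logits))  # greedy phase selection
```

Removing this module, as in the ablation, would instead require the diffusion model itself to denoise an action channel alongside the observations.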
Summary: The paper proposes a conditional diffusion framework to address the traffic signal control (TSC) problem under conditions of missing data. It utilizes Partial Rewards Conditioned Diffusion (PRCD) with classifier-free guidance for both data imputation and decision making. The authors employ the DDIM sampling method and a spatial-temporal transformer noise predictor model to reconstruct the observation trajectory. The conditioning information is derived from partially observable rewards. A Diffusion Communication Mechanism (DCM) is introduced to exchange observation information between neighboring intersections. Finally, an inverse dynamics model is used to generate actions. Extensive experiments are conducted on datasets from multiple cities, with different missing patterns. The performance of the proposed approach is compared against multiple baseline methods.

Strengths:
1. Traffic signal control is a significant practical issue, and data missing is common in real-world applications. Solving this problem has substantial practical implications.
2. The use of diffusion models to unify data imputation and decision-making tasks in TSC is novel.

Weaknesses:
1. The selected datasets typically have few intersections and lack large-scale (e.g. thousands of intersections) datasets to validate scalability and cooperative performance.
2. The comparison benchmarks and metrics are limited, lacking some methods, such as RL-based methods and GNN-based methods. Additionally, there is a lack of comparison for certain metrics, such as queue length and throughput.
3. This paper contains many notations, formulas, and conclusions that are directly quoted without specific contextual explanations, making it confusing on a first read.

Technical Quality: 3

Clarity: 2

Questions for Authors: What boundary conditions ensure the effectiveness of this method?
Apart from assumptions such as missing data being independent of existing data, are there constraints such as limits on missing proportions and conflicts with neighboring traffic?

Confidence: 4

Soundness: 3

Presentation: 2

Contribution: 3

Limitations: The authors discussed the limitations of the work in the Appendix. It is suggested that they put this part in the main body of the paper.

Flag For Ethics Review: ['No ethics review needed.']

Rating: 6

Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: We thank the reviewer for their thoughtful and insightful comments. We respond to each of the points as follows:

> [W1] Selection for datasets

Thank you for your comment. We appreciate the opportunity to elaborate on this. We would like to point out that DiffLight is designed to address the traffic signal control problem under data-missing scenarios, instead of focusing on scalability. Meanwhile, the selected datasets are commonly used in most of the literature [1, 2, 3, 4, 5, 6]. Moreover, due to limited computing resources and time, it is difficult to conduct experiments on large-scale datasets with thousands of intersections. However, there is no denying that scalability and cooperative performance play an essential role in applications. To further validate the scalability and cooperative performance of our approach, we conduct experiments on a larger dataset, **New York**, which includes 48 intersections.

**Random Missing:**

|Dataset|Rate|BC|CQL|TD3+BC|DT|Diffuser|DD|DiffLight|
|-|-|-|-|-|-|-|-|-|
|$\mathcal{D}_{\text{NY}}$|10%|187.14|200.77|349.54|394.17|209.37|185.98|182.89|
||30%|226.23|254.73|540.18|605.81|241.32|229.44|244.93|
||50%|453.90|446.29|820.19|837.97|453.97|455.07|266.82|

**Kriging Missing:**

|Dataset|Rate|BC|CQL|TD3+BC|DT|Diffuser|DD|DiffLight|
|-|-|-|-|-|-|-|-|-|
|$\mathcal{D}_{\text{NY}}$|6.25%|515.40|242.15|496.41|894.76|741.99|765.64|197.22|
||12.50%|1304.52|470.69|859.98|930.78|951.49|1213.08|315.05|
||18.75%|1360.71|1154.71|989.99|1197.74|1034.02|929.25|350.66|
||25.00%|1442.31|1089.39|1108.67|1445.37|846.18|1393.23|454.56|

In the new experiment on the New York dataset, DiffLight achieves the best performance in most scenarios, demonstrating its ability to deal with complex traffic scenarios and control traffic signals in a larger-scale traffic network.
In contrast, the performance of most baselines drops rapidly due to the cumulative effect of errors in state imputation and decision-making at more intersections.

> [W2] Benchmarks and metrics

Thank you for pointing out the lack of benchmarks and metrics. We apologize for our oversight and appreciate the opportunity to make a further explanation.

1. **Concerning comparison benchmarks**, we have included RL-based methods in experiments, including CQL and TD3+BC. To our knowledge, there is no GNN-based method proposed for the traffic signal control problem in the offline setting. Therefore, we choose CoLight [3], a GAT-based method widely used in the TSC task, as an additional baseline, replace its DQN algorithm with the CQL algorithm, and conduct experiments on Hangzhou$_1$ and Jinan$_1$. However, due to the gap between offline RL and online RL, CoLight in the offline setting does not converge stably.
2. **Concerning metrics**, average travel time, queue length, and throughput do reflect the performance of methods in different aspects. However, average travel time is widely used in most of the literature [1, 2, 3, 4, 6] and is the more important metric for improving traffic efficiency. We apologize for our oversight and will take queue length and throughput into consideration in future work in order to evaluate method performance more comprehensively.

> [W3] Lack of explanations for notations, formulas and conclusions

Thank you for your feedback. We will revisit our paper carefully and incorporate specific contextual explanations for these notations, formulas, and conclusions.

> [Q1] Boundary conditions of the method

Thank you for highlighting the boundary conditions of the method. These are indeed important for the application of our approach, and we are pleased to make the following discussion:

1.
**Limits on missing proportions:** To further explore limits on missing proportions, we conduct experiments on the selected datasets in random missing with missing rates of 70% and 90%.

|Dataset|Rate|DiffLight|
|-|-|-|
|$\mathcal{D}_{\text{HZ}}^1$|70.00%|326.29|
||90.00%|878.31|
|$\mathcal{D}_{\text{HZ}}^2$|70.00%|343.48|
||90.00%|430.38|
|$\mathcal{D}_{\text{JN}}^1$|70.00%|310.74|
||90.00%|437.19|
|$\mathcal{D}_{\text{JN}}^2$|70.00%|295.07|
||90.00%|587.42|
|$\mathcal{D}_{\text{JN}}^3$|70.00%|289.01|
||90.00%|668.41|

In the new experiment, DiffLight maintains acceptable performance at a missing rate of 70%. When the missing rate rises to 90%, the performance of DiffLight drops rapidly, which shows that the limit for the missing rate is around 70%.

2. **Conflicts with neighboring traffic:** In the traffic network, all agents at the intersections make decisions at the same time, with observations of the current intersection and neighboring intersections as input. Thus, there is no conflict in the order of decision-making. However, under data-missing scenarios, especially when the current intersection and neighboring intersections are all unobservable, agents' input could be unavailable, leading to conflicts which could reduce the performance of the method. We design the Diffusion Communication Mechanism (DCM) to alleviate this problem and conduct experiments in **Appendix G.3**.

Reference:

[1] Mei, Hao, et al. "Reinforcement learning approaches for traffic signal control under missing data." IJCAI. 2023.

[2] Ye, Yutong, et al. "InitLight: initial model generation for traffic signal control using adversarial inverse reinforcement learning." IJCAI. 2023.

[3] Wei, Hua, et al. "Colight: Learning network-level cooperation for traffic signal control." CIKM. 2019.

[4] Zang, Xinshi, et al. "Metalight: Value-based meta-reinforcement learning for traffic signal control." AAAI. 2020.

[5] Yu, Zhengxu, et al.
"MaCAR: Urban traffic light control via active multi-agent communication and action rectification." IJCAI. 2021.

[6] Zhang, Liang, et al. "Expression might be enough: representing pressure and demand for reinforcement learning based traffic signal control." ICML. PMLR, 2022.

---

Rebuttal Comment 1.1:

Comment: Thank you for the responses. I'm generally fine with the results. One question about the new results: why are the missing rates for the two experiments (random vs Kriging) totally different? I will keep my score.

---

Reply to Comment 1.1.1:

Comment: We sincerely thank the reviewer for the feedback and appreciate the opportunity to clarify the question. We would like to point out that the challenges posed by random missing and Kriging missing are inherently different. In Kriging missing, data collected at unobservable intersections is absent all the time. Therefore, decisions made by agents at unobservable intersections can only be made based on observations of neighboring intersections. In contrast, decisions can be made based on observations of current and neighboring intersections in random missing. Thus, compared with random missing scenarios, Kriging missing scenarios are more complex. Moreover, experiments in the literature [1, 2, 3, 4] on Kriging missing are often conducted with missing rates ranging from 0% to 25%, while experiments in the literature [5, 6, 7] on random missing are often conducted with missing rates ranging from 0% to 50% or more. Furthermore, in practice, the performance of many baselines and the proposed method drops dramatically in Kriging missing with a missing rate of 25%, making it challenging to conduct further experiments with higher missing rates.

Reference:

[1] Wu, Yuankai, et al. "Inductive graph neural networks for spatiotemporal kriging." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 5. 2021.

[2] Zheng, Chuanpan, et al.
"Increase: Inductive graph representation learning for spatio-temporal kriging." Proceedings of the ACM Web Conference 2023. 2023.

[3] Mei, Hao, et al. "Uncertainty-aware Traffic Prediction under Missing Data." 2023 IEEE International Conference on Data Mining (ICDM). IEEE, 2023.

[4] Mei, Hao, et al. "Reinforcement learning approaches for traffic signal control under missing data." Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence. 2023.

[5] Tashiro, Yusuke, et al. "Csdi: Conditional score-based diffusion models for probabilistic time series imputation." Advances in Neural Information Processing Systems 34 (2021): 24804-24816.

[6] Liu, Mingzhe, et al. "Pristi: A conditional diffusion framework for spatiotemporal imputation." 2023 IEEE 39th International Conference on Data Engineering (ICDE). IEEE, 2023.

[7] Kollovieh, Marcel, et al. "Predict, refine, synthesize: Self-guiding diffusion models for probabilistic time series forecasting." Advances in Neural Information Processing Systems 36 (2023).
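The distinction between the two missing patterns discussed in the reply above can be made concrete as mask generation over an (intersections × timesteps) observation grid. This is our own reading of the rebuttal, not the authors' code; the grid sizes below are illustrative assumptions.

```python
# Hedged sketch (our reading, not the authors' code): random missing drops
# individual (intersection, timestep) entries independently, while Kriging
# missing blanks out entire intersections for the whole horizon.
import numpy as np

def random_missing_mask(n_inter, n_steps, rate, rng):
    """1 = observed, 0 = missing; entries are dropped i.i.d. at `rate`."""
    return (rng.random((n_inter, n_steps)) >= rate).astype(int)

def kriging_missing_mask(n_inter, n_steps, rate, rng):
    """A `rate` fraction of intersections is unobservable at every step."""
    n_missing = int(round(rate * n_inter))
    missing = rng.choice(n_inter, size=n_missing, replace=False)
    mask = np.ones((n_inter, n_steps), dtype=int)
    mask[missing, :] = 0
    return mask

rng = np.random.default_rng(0)
rm = random_missing_mask(16, 100, 0.5, rng)    # e.g. 50% random missing
km = kriging_missing_mask(16, 100, 0.25, rng)  # e.g. 25% Kriging missing
```

Under Kriging missing, a masked intersection never contributes observations, which is why agents there must rely entirely on neighbors, whereas random missing leaves partial observations at every intersection.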
Summary: This paper focuses on traffic signal control under the condition of missing data, presenting DiffLight, a conditional diffusion model that integrates traffic data imputation and decision-making tasks. It introduces a partial rewards conditioned diffusion method to handle missing rewards (PRCD), employs a spatial-temporal transformer for noise modeling to capture intersection dependencies (STFormer), and proposes a diffusion communication mechanism for coordinated intersection control (DCM). Experiments demonstrate the effectiveness of DiffLight over various baselines. Further, the three components of DiffLight are studied and shown to be individually effective.

Strengths: The paper is overall well written and presented clearly. A rigorous study of the suggested method is performed experimentally, confirming the contribution of each part of the DiffLight framework. Furthermore, experiments showing the method can generalize well between differing levels of missing data are useful, as this is likely the case in a real-world scenario. This work may be relevant to the broader RL community as it uniquely handles missing data in diffusion-based RL.

Weaknesses: [W1] A direct discussion or comparison to [14] would be appropriate as it addresses a substantially similar problem. While it is primarily an online method, it does have an offline stage, drawing the work closer to the proposed method. However, by comparing the presented results in this paper and [14], it seems unlikely [14] would perform better.

Technical Quality: 4

Clarity: 3

Questions for Authors:

**Minor comments**:

Line 50 "In addition, traffic data imputation for TSC with missing data is necessary." - somewhat repetitive

Line 121 "Diffuer"

Line 196 "in both current intersection" - in both "the" current intersection

Section 3.2 might be revised for clarity.

Consider including a requirements or setup file declaring the dependency versions used for the experiments in the paper.
I had no issue reproducing some of the results; however, dependencies may change substantially in the future.

Confidence: 3

Soundness: 4

Presentation: 3

Contribution: 3

Limitations: Any potential negative societal impacts have been addressed.

Flag For Ethics Review: ['No ethics review needed.']

Rating: 7

Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: We appreciate the reviewer recognizing the importance of our work and thank them for the detailed comments. Please find the point-by-point responses to the reviewer's comments below.

> [W1] Discussion on MissLight

Thank you for highlighting the importance of a direct discussion between DiffLight and MissLight [1]. We apologize for our unclear expression and appreciate the opportunity to have a further discussion. We made a brief comparison in the Introduction (lines 40-44). To better clarify the core differences between our approach and MissLight, we compare them from the following two aspects:

1. **Model training:** MissLight is an online method with a state imputation model and a reward imputation model, which means that interaction with the environment is necessary. In the online setting, if the method is trained in the physical environment, safety problems must be taken into consideration. If the method is trained in a simulated environment, the gap between the physical and simulated environments could degrade performance once the method is deployed in the physical environment. In contrast, our approach, DiffLight, is an offline method based on the diffusion model. In the offline setting, our method is trained on the collected dataset without interaction with the environment, which avoids the problems mentioned above.
2. **Model composition:** MissLight is a two-stage method. In the first stage, state imputation and reward imputation models are used to fill in the missing data. In the second stage, the DQN algorithm is employed to complete the training process based on the imputed data. This approach suffers from the problem of error accumulation during the training process.
However, our proposed DiffLight model, which incorporates both a diffusion model and an inverse dynamics model, can simultaneously train on missing data and collaboratively achieve traffic signal control with missing data.

Additionally, it should be noted that MissLight, as an online method, cannot be compared with DiffLight directly. To better compare with MissLight, we implement the SDQN-SDQN (model-based) variant from MissLight and replace the DQN algorithm with the CQL algorithm to adapt it to the offline setting. Note that all the baselines were implemented with reference to the SDQN-SDQN (transferred) method in MissLight: we replaced the DQN algorithm with different algorithms in the offline setting and imputed the states with the SFM model. To distinguish it from the CQL baseline implemented in Section 4.2, the new baseline is named CQL (model-based), while the CQL in Section 4.2 is named CQL (transferred).

**Random Missing:**

|Dataset|Rate|CQL (transferred)|CQL (model-based)|DiffLight|
|-|-|-|-|-|
|$\mathcal{D}_{\text{HZ}}^1$|10.00%|363.50|376.85|285.96|
||30.00%|368.53|381.92|293.10|
||50.00%|383.67|388.51|303.91|
|$\mathcal{D}_{\text{JN}}^1$|10.00%|299.07|303.46|273.17|
||30.00%|310.80|324.15|280.32|
||50.00%|322.25|361.40|288.01|

**Kriging Missing:**

|Dataset|Rate|CQL (transferred)|CQL (model-based)|DiffLight|
|-|-|-|-|-|
|$\mathcal{D}_{\text{HZ}}^1$|6.25%|317.69|389.66|291.80|
||12.50%|317.94|397.13|297.18|
||18.75%|319.18|449.80|299.96|
||25.00%|328.83|463.25|301.08|
|$\mathcal{D}_{\text{JN}}^1$|8.33%|302.35|374.20|280.83|
||16.67%|343.16|347.88|295.53|
||25.00%|398.66|400.55|334.12|

DiffLight achieves competitive performance compared with CQL (transferred) and CQL (model-based). A possible reason why DiffLight performs better is that CQL (model-based) suffers from error accumulation caused by the reward imputation model, while DiffLight can directly make decisions with Partial Rewards Conditioned Diffusion (PRCD).
> [Q1] Word and grammatical errors

Thank you for your feedback. We will correct the word and grammatical errors you mentioned and check for other possible errors.

> [Q2] Code reproducibility

Thank you for your suggestion. We have added a requirements file to our code at the anonymous link. To further enhance reproducibility, we will also include a README file in the code. We apologize for our oversight regarding readability and reproducibility.

Reference:

[1] Mei, Hao, et al. "Reinforcement learning approaches for traffic signal control under missing data." Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence. 2023.

---

Rebuttal Comment 1.1:

Comment: Thank you for addressing my concerns! I will keep my score as is.

---

Reply to Comment 1.1.1:

Comment: Thank you for your response. Your suggestion is invaluable and we will follow your guidance to incorporate specific contextual explanations in the subsequent version.
NeurIPS_2024_submissions_huggingface
2024
Efficient Lifelong Model Evaluation in an Era of Rapid Progress
Accept (poster)
Summary: The paper presents the idea of Lifelong Benchmarks as a way to deal with the problem of model overfitting, both at the individual model level and at the community level. The authors present a framework, Sort & Search (S&S), as a way to deal with the ever increasing benchmarking cost: not only does each benchmark have a myriad of samples on which ML models are tested, but there is an ever increasing number of benchmarks on which to evaluate models, thus exponentially increasing the cost of evaluating model performance. S&S addresses the problems of adding N new samples and M new models to the benchmark, which require evaluating the existing models on the N new samples, and evaluating the M new models on all existing samples, respectively. S&S reduces the complexity of these problems by finding the smallest subsets N' and M' such that the evaluation for the remaining N-N' samples and M-M' models can be extrapolated from the evaluation of N' and M'.

Strengths:

**Originality:** The idea of creating lifelong benchmarks that grow over time as new models and samples become available is interesting and addresses the relevant problem of overfitting. It's a bit unclear, however, where these new samples and models come from and how the datasets actually grow. If a researcher/engineer has to manually update the benchmark/dataset, is it actually that different from simply creating yet another dataset on which to test models?

**Quality:** The submission seems to be technically sound and the claims supported. The methods used are appropriate. The authors seem to have conducted quite an extensive evaluation of the proposed approach; however, a restructuring of the evaluation section could be helpful in highlighting all the key results. The authors seem to have struggled with including all evaluation results in the main manuscript, which resulted in a very condensed evaluation section with very little discussion and explanation of the results.
**Clarity:** Overall organisation could be improved, particularly in section 3. It is challenging to follow all the reasoning behind the several steps of the framework. It would be really helpful to have a figure highlighting the steps or depicting the framework. Figure 1, for example, helps a lot with understanding the matrices and the model evaluation. Something similar to that is missing for the framework as a whole. The methodology description of section 3.1 assumes the reader will consult the supplemental material to look at the listings and algorithms, but the main manuscript should be self-contained. Section 4.2 seems to be out of place, as it describes a design decision of the framework, and not something specific to the experiments. The baselines and experimental settings could be introduced more concisely at the start of the experimental evaluation section. It would also be nice to have a list/description of the research questions/attributes that are explored in the evaluation section (e.g. RQ1: what is the cost efficiency of S&S, or something like this).

**Significance:** Reducing model evaluation cost is a relevant problem that can have significant impact given not only the increasing size of ML models, but also the widespread training and testing of ML models.

Weaknesses: See above for strengths and weaknesses.

Technical Quality: 3

Clarity: 2

Questions for Authors:

**a)** Is there an underlying assumption that if a new model M' does well on a "hard" sample s, then it will do well on all samples sorted as easier than s?

**b)** How do you know/compute matrix Y, the optimal ranked accuracy prediction matrix?

**c)** Did you come up with DP-Search or did you find it in the literature and apply it to your use-case? I couldn't understand if the algorithm was also a contribution.
**d)** Could you explain what you mean by "Now, having optimized the ranking, we have a P∗ from the data matrix indicating the sample ordering based on difficulty." (Lines 170-171)

**e)** Line 191: "We want to find the best n′ observations" -- best with respect to what?

**f)** You present S&S as a framework. If I wanted to leverage different performance metrics to evaluate the models, how would I go about updating the framework to use the new metric?

**Comments:**

- [line 131] What does EM stand for?

**[minor comment / observation]**

- [line 110] Typo on operation 2 -- I believe it should be insert_M

Confidence: 3

Soundness: 3

Presentation: 2

Contribution: 3

Limitations: The authors could discuss more limitations of the proposed approach. For instance, for what types of tasks does it work? (e.g. only classification or also regression problems?)

Flag For Ethics Review: ['No ethics review needed.']

Rating: 5

Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: We sincerely thank the reviewer for their thorough evaluation and positive, encouraging feedback. We seek to address the reviewer's concerns and questions below:

> S1 It's a bit unclear, however, where these new samples and models come from and how the datasets actually grow. If a researcher/engineer has to manually update the benchmark/dataset, is it actually that different from simply creating yet another dataset on which to test models?

Our inspiration comes from the ChatBot Arena initiative by LMSys, where users test models with inputs from their use cases and provide feedback, which helps grow the benchmark. Lifelong benchmarks differ from the current paradigm of creating new datasets as they aggregate data from older "solved" benchmarks to compare a wide range of models in a fair manner, rather than discarding old datasets, as new datasets are challenging only for the latest models.

---

> S3, point 1: Overall organisation could be improved, particularly in section 3. It is challenging to follow all the reasoning behind the several steps of the framework. It would be really helpful to have a figure highlighting the steps or depicting the framework. Figure 1, for example, helps a lot with understanding the matrices and the model evaluation.

We agree and have added a figure in the common response PDF for reference. We will add this in the revised draft, and believe this will significantly improve the clarity of the sort and search method (section 3). Thank you for highlighting this!

---

> S3, point 2: The methodology description of section 3.1 assumes the reader will consult the supplemental material to look at the listings and algorithms, but the main manuscript should be self-contained.

We plan to include the framework figure to improve the clarity of Section 3, and space permitting we will also include the algorithm in the main paper.
---

> S3, point 3: Section 4.2 seems to be out of place, as it describes a design decision of the framework, and not something specific to the experiments.

We agree and have shifted this to Section 3. Thank you for pointing this out!

---

> S3, point 4: The baselines and experimental settings could be introduced more concisely at the start of the experimental evaluation section. It would also be nice to have a list/description of the research questions/attributes that are explored in the evaluation section (e.g. RQ1: what is the cost efficiency of S&S, or something like this)

We will add an overview of our results at the start of the experimental evaluation section in the revised draft.

---

> Q a) Is there an underlying assumption that if a new model M' does well on a "hard" sample s, then it will do well on all samples sorted as easier than s?

Yes, this is the assumption underlying our algorithm. We test the generalizability of the permutation matrix $\mathbf{P}^*$ towards new, incoming models empirically and in detail in Appendix D: How Consistently Do Models Follow Global Ranking?, where we find that model predictions are correlated, i.e. if a model does well on a "hard" sample s, it does perform well on most samples sorted as easier than s.

---

> Q b) How do you know/compute matrix Y, the optimal ranked accuracy prediction matrix?

This is explained in detail in l.187 (i) How to get the optimal ranked accuracy prediction matrix. To summarize: we use DP-Search to compute the matrix Y.

---

> Q c) Did you come up with DP-Search or did you find it in the literature and apply it to your use-case? I couldn't understand if the algorithm was also a contribution.

Dynamic programming is a classical set of algorithms (like greedy algorithms). Our contribution is in formulating the solution to the search as a dynamic programming problem and proving that it is the optimal solution for searching.
On the broader point, our main contributions are as follows:

- Providing Sort & Search, a novel efficient model evaluation framework for the unexplored setting of lifelong benchmarks
- Showing our simple framework is far more scalable and allows saving 1000x evaluation cost
- A novel decomposition of errors in Sort & Search into largely independent sub-components (aleatoric and epistemic errors)
- Proving theoretically and empirically that our solution for the search sub-component reaches the optimal solution (in Figure 4)

---

> Q d) Could you explain what you mean by "Now, having optimized the ranking, we have a P∗ from the data matrix indicating the sample ordering based on difficulty." (Lines 170-171)

To clarify, Section 3.1.2 ended with the theorem. The last statement is a summary of Section 3.1, which we should clarify by adding a "Summary:" to the start and correcting this to "Now, Ranking by Sort provides a $\mathbf{P}^*$ from the input data matrix; this permutation matrix orders the samples from easy to hard."

---

> Q e) Line 191: "We want to find the best n′ observations" -- best with respect to what?

We corrected this to "We want to find the $n′$ most informative observations from the $n$ samples" to clarify best and w.r.t. what.

---

> Comment 1) [line 131] What does EM stand for?

EM stands for expectation maximization. Sorry for the confusion, we will correct this and write the full term.

---

> Observation 1) [line 110] Typo on operation 2 -- I believe it should be insert_M

Thank you for pointing this out, we will correct this right away!

---

> Limitations 1) The authors could discuss more limitations of the proposed approach. For instance, for what types of tasks does it work? (e.g. only classification or also regression problems?)

Thank you for the question. We discuss the extensibility of our setup in the comment below; it is not in itself a limitation of our framework. We present limitations in detail in Appendix I: Limitations & Open Problems.
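The assumption confirmed in Q a) is what makes cheap extrapolation possible. As a hedged sketch (our own simplification, not the authors' DP-Search implementation): once samples are sorted easy-to-hard by $\mathbf{P}^*$, a model's correctness row is approximated by a step function 1…10…0, so a new model needs only n′ sampled evaluations to fit the step's cutoff. All sizes below are illustrative.

```python
# Hedged simplification of the search step (ours, not the authors' DP-Search):
# with samples sorted easy-to-hard, approximate a new model's correctness row
# by a step function and fit its cutoff from n' sampled evaluations.
import numpy as np

def fit_cutoff(sampled_idx, sampled_correct, n):
    """Choose the cutoff k (predict correct for sorted index < k)
    that best matches the n' sampled observations."""
    idx = np.asarray(sampled_idx)
    obs = np.asarray(sampled_correct)
    best_k, best_err = 0, np.inf
    for k in range(n + 1):
        err = np.abs((idx < k).astype(int) - obs).sum()
        if err < best_err:
            best_k, best_err = k, err
    return best_k

n, true_k = 1000, 700                 # model solves the 700 easiest samples
rng = np.random.default_rng(0)
idx = rng.choice(n, size=50, replace=False)   # n' = 50 sampled evaluations
obs = (idx < true_k).astype(int)              # noiseless, for illustration
k_hat = fit_cutoff(idx, obs, n)
row_hat = (np.arange(n) < k_hat).astype(int)  # extrapolated full row
```

In this toy setting the epistemic error (from sampling only n′ points) shrinks as n′ grows, while any deviation of real models from a perfect step function would contribute the aleatoric error discussed in the paper.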
We hope we have addressed the major concerns of the reviewer, and are happy to answer any further concerns. We look forward to a fruitful reviewer-author discussion phase.

---

Rebuttal Comment 1.1:

Comment: Thank you for the clarifications.

**Q b)** *"To summarize: We use DP-search to compute the matrix Y"*: Does DP-search give you the global optimum? I mean, if you were to test all models on all samples (and thus knew matrix A unequivocally), would the global optimum be the same as the one computed with DP-search?

---

Rebuttal 2:

Title: Additional Clarification for Q f)

Comment:

> Q f) You present S&S as a framework. If I wanted to leverage different performance metrics to evaluate the models, how would I go about updating the framework to use the new metric?

Thank you for raising this important point. All our framework requires is an A matrix constructed using any binary metric, with rows representing samples and columns representing evaluated models. In this sense, it is metric-agnostic. We discuss various applications of our framework which use a wide range of metrics:

- **Language Models:** Our framework can be directly applied to multiple-choice language model evaluations where the metric is exact match or near-exact match, a binary metric perfectly suitable to our framework.
- **Dense Prediction Tasks or Multi-label Classification:** For pixel-wise prediction tasks or multi-label classification, our framework can be extended by flattening the predictions of each sample. That is, every sample contributes an array of binary values to the A matrix instead of a single value. The extension to the search algorithm is simple: if it samples a point, all associated values are sampled and annotated.
- **Tasks with Real-valued Predictions:** For tasks such as regression or BLEU score evaluations, our framework can operate after applying a thresholding operation. This converts predictions into binary values (above or below the threshold).
While this allows the framework to remain valid, it limits the predictions obtained to the binary threshold. - A way to extend this would be having multiple thresholds that enable quantized searching over output values, but this is beyond the current scope of the work. The other applications above are more straightforward applications of our framework. We hope this clarifies the adaptability of our framework. --- Rebuttal 3: Title: Reply Comment: > Q b) "To summarize: We use DP-search to compute the matrix Y": Does DP-search give you the global optimum? I mean, if you were to test all models on all samples (and thus knew matrix A unequivocally), would the global optimum be the same as the one computed with DP-search? Yes, that is correct. DP-Search returns the optimal $\mathbf{Y}^\*$ for the optimization equation in lines 167-169 if we knew matrix $\mathbf{A}$ unequivocally. We use precisely this optimal $\mathbf{Y}^\*$ to compute the aleatoric and epistemic errors discussed in lines 328-332, with results shown in Figure 4. Empirically, we observe that the epistemic error quickly reduces to nearly zero within just 1000 samples. --- Rebuttal Comment 3.1: Comment: Thank you for the clarifications.
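As context for the DP-Search exchange above: with the samples arranged in the sorted order, fitting the prefix-of-ones vector $\mathbf{Y}^\*$ the reply describes reduces to scanning over threshold positions. The sketch below is illustrative only, assuming the prefix-of-ones form described in the thread; the function name is hypothetical and this is not the paper's DP implementation.

```python
def best_prefix(a):
    """Fit [1]*k + [0]*(n-k) to a binary vector `a` (assumed already in sorted
    order), returning the prefix length k with minimum Hamming distance, plus
    that distance. A single O(n) scan suffices under the prefix assumption."""
    n = len(a)
    err = sum(a)                  # k = 0: every 1 in `a` is a disagreement
    best_k, best_err = 0, err
    for k in range(1, n + 1):
        # extending the prefix over a[k-1]: a 1 is now matched, a 0 becomes an error
        err += 1 - 2 * a[k - 1]
        if err < best_err:
            best_k, best_err = k, err
    return best_k, best_err
```

For `a = [1, 1, 0, 1, 0, 0, 0]` this returns `(2, 1)`: the prefix `[1, 1]` disagrees with `a` on a single entry (the stray later 1); `k = 4` ties but the scan keeps the smallest such `k`.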
Summary: To prevent models from overfitting to the standardized benchmark itself, new samples can be added to the test set, resulting in a Lifelong Benchmark. However, when a new sample is added, all existing models must be evaluated on the added sample. When a new model is added, it must be evaluated on all existing samples. This results in very high evaluation costs. The authors propose a framework termed Sort & Search leveraging dynamic programming. First, the samples are ranked w.r.t. their difficulty and sorted accordingly via alternating minimization. When a new model or sample is added, it is only evaluated against a sampled subset and the rest is extrapolated, saving computational cost. Strengths: - The paper discusses lifelong benchmarking, which is important for mitigating the overfitting of new methodologies and models to the benchmark itself. - Computation costs for new evaluations can be effectively reduced (with a trade-off of evaluation error). Weaknesses: - For Figures 2, 3, 4, the plots display MAE values larger than $0.1$. In Section 4.6, the paper states that this aleatoric error is irreducible. I am not sure if an MAE of this magnitude is practically tolerable. - The method assumes that the obtained order of difficulty generalizes to new incoming models (Section 3.2), which might not be the case in the real world. Can any observations be provided on the robustness of the proposed method when this assumption is violated? - When consecutively adding new models or data serially, are the matrices $\textbf{P}$ and $\textbf{A}$ recomputed after each addition? If so, does the error accumulate for consecutive additions? Technical Quality: 3 Clarity: 2 Questions for Authors: - For Figures 2, 3, 4, could the MAE get lower if more computing is utilized? Could the plots be extended further towards $10^{0}$x compute saved (full evaluation)? 
(if the experiments are expensive, please do not run them and skip the plot extension part) - It is relatively difficult to grasp the overall outline of the framework only from the writing, could an outline figure be provided? (if not capable, it is OK) Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors include a limitations section in their manuscript. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thorough evaluation and their detailed feedback. We seek to address the reviewer's concerns and suggestions below: > W1 For Figures 2, 3, 4, the plots display MAE values larger than 0.1. In Section 4.6, the paper states that this aleatoric error is irreducible. I am not sure if an MAE of this magnitude is practically tolerable. While MAE serves as a useful proxy metric for algorithm development, it is not a strict requirement for practical applications. In particular, for many use-cases it is the ranking of the models, rather than their absolute metrics, that is of primary importance for informing downstream decisions about which model to use. To illustrate a practical application, we examine whether Sort&Search preserves the ranking of models at high sampling efficiency. Specifically, we conducted an experiment changing the evaluation metric from MAE to the Spearman correlation between the rankings of 25,250 models under Sort & Search and the rankings obtained after full-sample evaluation on Lifelong-CIFAR10. The results, presented in the attached PDF, show a consistently high correlation of 0.5. We believe this demonstrates the framework's applicability for practical use-cases. Furthermore, we provide interesting, concrete avenues for improving the sorting algorithm in point (2) of Appendix I – Limitations and Open Problems of our work. --- > W2 The method assumes that the obtained order of difficulty generalizes to new incoming models (Section 3.2), which might not be the case in the real world. Can any observations be provided on the robustness of the proposed method when this assumption is violated? Thank you for raising this point. In Appendix D, "How Consistently Do Models Follow Global Ranking?", we show that the underlying hypothesis that the sorted order of difficulty generalizes to new incoming models holds fairly robustly. 
We will additionally discuss, in our revised draft, the sensitivity of our sorting algorithms to other factors such as noisy samples and labeling errors, which are important robustness considerations. --- > W3 When consecutively adding new models or data serially, are the matrices P and A recomputed after each addition? If so, does the error accumulate for consecutive additions? Thank you for the great question! We conducted experiments adding new models serially and using the Sort & Search predictions as ground truth for further additions. The results are presented in the attached PDF. We observe that the *errors do not accumulate with consecutive additions*; exactly the same model order is preserved. We provide an intuitive sketch for why this is the case: Consider a sum vector $s_t$ which, when sorted, gives us an order vector $P_t$ at time $t$. We can prove that sequential updates to the sum array, when made with the predicted vector $y_{t+1}$, necessarily preserve the same $P_t$, i.e. $P_t = P_{t+1}$ for all $t$. *Proof Sketch:* The core intuition behind the proof is that the vector $y_{t+1}$ has the form $[11\dots1\,00\dots0]$ in the order $P_t$, i.e. it preserves the order $P_t$. Incrementing the sum array with an order-preserving $y_{t+1}$ preserves the order (if ties are broken in the manner of the old ordering). Why? If an element $s_i > s_j$ at time $t$, then necessarily $s_i > s_j$ at time $t+1$ for all elements $i, j$, because $y$ is sorted, i.e. if $j > i$ in $y$ then $y_i \geq y_j$. We shall formalize this and include these results alongside point (1) in Appendix I – Limitations and Open Problems. --- > Q1 For Figures 2, 3, 4, could the MAE get lower if more computing is utilized? Could the plots be extended further towards $10^{0}$x compute saved (full evaluation)? (if the experiments are expensive, please do not run them and skip the plot extension part) We have extended Figures 2 and 3 in the common PDF to show the results of a full evaluation. 
Our observations indicate that the MAE in these figures cannot be reduced further, demonstrating that *additional sampling does not decrease the MAE*. --- > Q2 It is relatively difficult to grasp the overall outline of the framework only from the writing, could an outline figure be provided? (if not capable, it is OK) We have provided an outline figure in the common PDF. Thank you for pointing this out; we believe this figure helps clarify the Sort & Search framework. --- We hope we have addressed the major concerns of the reviewer, and are happy to answer any further questions/concerns. We look forward to a fruitful reviewer-author discussion phase. --- Rebuttal 2: Comment: Thank you for the detailed response. For the author responses to W1 and W2: W1: As the authors have said, MAE serves as a useful proxy metric for algorithm development. I believe that an efficient testing strategy such as the one proposed by the authors would mainly be utilized for quick probing tests during the development phase of an algorithm, for trial & error. For this purpose, approximating the amount of score improvement could be more meaningful than rankings. W2: My subjective projection on this matter is that for contemporary foundation models such as LLMs, a new version of an LLM could be trained with different proprietary datasets that boost certain abilities of the model. For instance, if a lifelong dataset measures coding abilities, a novel programming language can be introduced, and certain new LLMs may be prepared for this new language, and vice versa. However, recognizing the authors' response and the current scope of discourse, I think that it is not necessary to address such aspects of the problem in a single research stride. I will raise my score.
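The order-preservation claim in the W3 proof sketch above can be checked numerically. The following is a small self-contained sketch with illustrative variable names (not the authors' code): it increments a sorted sum vector with every possible prefix-of-ones vector and verifies that the ordering is unchanged.

```python
import random

def order(s):
    """Sample indices sorted easiest-first (descending sum). Python's sort is
    stable, so ties resolve in the manner of the old ordering, which is exactly
    the tie-breaking condition the proof sketch requires."""
    return sorted(range(len(s)), key=lambda i: -s[i])

random.seed(0)
n = 10
# Sum vector s_t, already arranged in its sorted (descending) order P_t.
s = sorted((random.randint(0, 50) for _ in range(n)), reverse=True)
p_t = order(s)

# y_{t+1} is [1...1 0...0] in the order P_t; try every prefix length k.
for k in range(n + 1):
    y = [1] * k + [0] * (n - k)
    s_next = [si + yi for si, yi in zip(s, y)]
    assert order(s_next) == p_t  # P_{t+1} == P_t for every k
```

The assertion never fires: incrementing a prefix of a descending vector keeps it descending, matching the claim that $P_t = P_{t+1}$ for all $t$.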
Summary: The paper presents a novel approach to addressing overfitting in standardized machine learning benchmarks by introducing Lifelong Benchmarks, which are designed to expand continuously, thereby providing a more dynamic evaluation environment. The authors propose the Sort & Search (S&S) framework for efficient model evaluation on these large-scale benchmarks. The S&S framework reuses previously evaluated models to selectively rank and sub-sample test samples, significantly reducing computational costs. Empirical evaluations on their Lifelong-CIFAR10 and Lifelong-ImageNet benchmarks show that S&S can reduce inference costs with low approximation error. This work contributes to the field by offering a solution for lifelong evaluation, enhancing both model evaluation efficiency and data sample insertion. Strengths: The paper addresses an important area in the field, offering significant contributions through its novel approach. Specifically, the introduction of the framework called Sort & Search to efficiently evaluate models stands out as a key contribution. It is generally well-written, with well-defined theorems and definitions. The research provides comprehensive experimental evidence to support its claims. Weaknesses: The paper would benefit from explicitly stating the assumptions about the data samples and models within the main text. I have included specific questions regarding the data samples in the questions section to help clarify these points. Technical Quality: 3 Clarity: 4 Questions for Authors: Q1. Handling Multi-Use/Nested Cases of Benchmarks: The COCO dataset, among others, presents a scenario where each data point is associated with multiple labels, such as categories, super-categories, etc. In the context of lifelong benchmarks, how are these multi-use or nested cases managed? Specifically, would there be multiple versions of COCO to handle each use case separately or would there be just one Lifelong-COCO that handles all annotations? 
How would these different approaches impact model evaluation and data insertion steps? Q2. Differentiating Difficult Data Samples from Mismatched Data Samples: When evaluating data samples, how do you distinguish between ‘good difficult’ samples and ‘mismatched’ samples that might be irrelevant or noisy? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Yes, the authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > W1 The paper would benefit from explicitly stating the assumptions about the data samples and models within the main text. I have included specific questions regarding the data samples in the questions section to help clarify these points. We thank the reviewer for this point. We have sought to answer the questions raised below for clarity. > Q1 Handling Multi-Use/Nested Cases of Benchmarks: The COCO dataset, among others, presents a scenario where each data point is associated with multiple labels, such as categories, super-categories, etc. In the context of lifelong benchmarks, how are these multi-use or nested cases managed? Specifically, would there be multiple versions of COCO to handle each use case separately or would there be just one Lifelong-COCO that handles all annotations? How would these different approaches impact model evaluation and data insertion steps? Thank you for raising this important point. Our framework is domain-agnostic. What we require is an A matrix constructed using any binary metric, with rows representing samples and columns representing evaluated models. We summarize how our framework can be applied to various domains below: - **Dense Prediction Tasks or Multi-label Classification:** For pixel-wise prediction tasks or multi-label classification, our framework can be extended by flattening the predictions of each sample. That is, every sample contributes an array of binary values to the A matrix instead of a single value. The extension to the search algorithm is simple: if it samples a point, all associated values are sampled and annotated. - **Language Models:** Our framework can be directly applied to multiple-choice language model evaluations where the metric is exact match or near-exact match, a binary metric perfectly suitable to our framework. - **Tasks with Real-valued Predictions:** For tasks such as regression or BLEU score evaluations, our framework can operate after applying a thresholding operation. 
This converts predictions into binary values (above or below the threshold). While this allows the framework to remain valid, it limits the predictions obtained to the binary threshold. - A way to extend this would be having multiple thresholds that enable quantized searching over output values, but this is beyond the current scope of the work. The other applications above are more straightforward applications of our framework. We hope this clarifies the adaptability of our framework to various tasks and domains. --- > Q2 Differentiating Difficult Data Samples from Mismatched Data Samples: When evaluating data samples, how do you distinguish between ‘good difficult’ samples and ‘mismatched’ samples that might be irrelevant or noisy? We currently assume that labels are correct, and are unable to identify label noise. One could extend this to noisy samples by trading off sample efficiency, à la error-correcting codes. Alternatively, we can apply a cleaning/verification process to the input labels by using frameworks like CleanLab and exclude outlier samples for better ranking estimation. --- We hope we have addressed the major concerns of the reviewer, and are happy to answer any further questions/concerns. We look forward to a fruitful reviewer-author discussion phase. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I echo Reviewer oB2P's opinion and will maintain my current score for the same reasons.
Summary: The paper aims to improve the efficiency of evaluating large-scale lifelong benchmarks with the rapidly growing number of machine learning models and samples. To address this issue, the authors propose the Sort & Search framework, which avoids the need to evaluate all samples upon adding new models (or evaluating all models upon inserting new samples). Instead, the proposed Sort & Search framework ranks the samples and selectively applies evaluation to a subset of test samples or models by leveraging previously benchmarked results. The experimental results demonstrate significant improvements in benchmark efficiency, achieving approximately 1000x computational reduction. Strengths: + The paper is well-written, with clear examples and theoretical proofs. + The paper targets a practical and pressing problem; the experimental results demonstrate significant compute reduction. Weaknesses: - The evidence supporting the effectiveness of the Sort algorithm is missing. Specifically, it would be beneficial to compare the proposed method with a sample-only approach to assess if the Mean Squared Error (MSE) will converge to a similar or a larger value. For example, can we skip sorting and directly perform random sampling to get the samples? If “random sampling-only” with a sufficiently large subset (e.g., n’ ~10^3, with moderate compute cost) yields a comparable MSE, the necessity of the sorting algorithm is questionable. - The reported improvement in MSE is minimal, with only a 0.01 reduction compared to the baseline in Figure 2(c) (e.g., from ~0.14 to 0.13). It is unclear whether such a small improvement justifies the proposed method. What is a reasonable target for MSE, and how challenging is it to achieve a 0.01 improvement? - The proposed framework saves computational resources at the expense of increased storage overhead; however, this storage overhead is not discussed in the paper. 
- The framework appears to be applicable only to classification models, limiting its scope. Can it be extended to other tasks such as object detection or segmentation? - Minor issue: In Appendix H, the text is overlapped and needs formatting correction. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What are the MSE results of applying uniform/random sampling directly to the sample pool without using the sort algorithm? 2. What is a reasonable target for MSE, and how difficult is it to improve MSE by 0.01? 3. What is the storage overhead introduced by the proposed framework? 4. Can the framework be extended to other tasks such as object detection or segmentation? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thorough evaluation and positive feedback. We’re pleased that the reviewer recognised our work’s originality, noting it "targets a practical and pressing problem", "experimental results demonstrate significant compute reduction" and "with clear examples and theoretical proofs." We seek to address the reviewer’s concerns and questions below: > W1/Q1 What are the MSE results of applying uniform/random sampling directly to the sample pool without using the sort algorithm? We agree with the reviewer that evidence supporting the effectiveness compared to random sampling is important. We *provide this in our paper*. It is labeled as “CopyNearest&Expand” – we randomly sample n’ points and search using kNN (which replaces the search aspect). We shall revise the naming of this evaluation in the revision to make its role as an important baseline clearer. We show in Figures 2(c) and 3(b) that our Sort & Search algorithm outperforms this baseline by significant margins, as discussed in the next question. --- > W2/Q2 What is a reasonable target for MSE, and how difficult is it to improve MSE by 0.01? Improving the performance by 0.01 MAE requires altering the results on 17,000 samples, which is a large margin. This difficulty is even greater at low MAE levels, as shown in the overall figure PDF. The Sort and Search method achieved an MAE of 0.14 with 80 samples, while reaching an MAE of 0.13 required 1,000 samples, i.e. more than ten times the number of samples to reduce the MAE by 0.01. --- > W3/Q3 The proposed framework saves computational resources at the expense of increased storage overhead; however, the storage overhead is not discussed in the paper. What is the storage overhead introduced by the proposed framework? Thank you for this excellent question! Sort & Search requires storing only two 1D arrays, one which maintains the sort-sum and one used for constructing the current search output. 
The storage overhead is hence minimal, being 0.0166% of the input data or <100MB in absolute terms. This is indeed a key strength of Sort & Search compared to recent methods including CopyNearest&Expand, and we will emphasize it in our work. **Details:** Sorting compresses the entire A matrix into a single vector, which can be updated online with a simple sum operation. Searching involves receiving a new vector, selecting n' points, and applying the DP-Search algorithm. These involve storing 3 1D arrays, with an additional 1-2 1D arrays required temporarily for evaluation procedures. --- > W4/Q4 The framework appears to be applicable only to classification models, limiting its scope. Can it be extended to other tasks such as object detection or segmentation? Thank you for raising this important point. Our framework is domain-agnostic. What we require is an A matrix constructed using any binary metric, with rows representing samples and columns representing evaluated models. We summarize how our framework can be applied to various domains below: - **Language Models**: Our framework can be directly applied to multiple-choice language model evaluations where the metric is exact match or near-exact match, a binary metric perfectly suitable to our framework. - **Dense Prediction Tasks or Multi-label Classification**: For pixel-wise prediction tasks or multi-label classification, our framework can be extended by flattening the predictions of each sample. That is, every sample contributes an array of binary values to the A matrix instead of a single value. The extension to the search algorithm is simple: if it samples a point, all associated values are sampled and annotated. - **Tasks with Real-valued Predictions**: For tasks such as regression or BLEU score evaluations, our framework can operate after applying a thresholding operation. 
This converts predictions into binary values (above or below the threshold). While this allows the framework to remain valid, it limits the predictions obtained to the binary threshold. - A way to extend this would be having multiple thresholds that enable quantized searching over output values, but this is beyond the current scope of the work. The other applications above are more straightforward applications of our framework. We hope this clarifies the adaptability of our framework to various tasks and domains. --- > W5 Minor issue: In Appendix H, the text is overlapped and needs formatting correction. Thank you for bringing this to our notice; we missed this and have corrected it! --- We hope we have addressed the major concerns of the reviewer, and are happy to answer any further questions/concerns. We look forward to a fruitful reviewer-author discussion phase. --- Rebuttal Comment 1.1: Title: Gentle Nudge Comment: We would really appreciate it if you could have a look at our replies and let us know if you had any further questions/comments. We highly value your feedback.
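The storage answer (W3) above describes collapsing the A matrix into a single sort-sum vector that is updated online. A minimal sketch of that bookkeeping, with hypothetical names and assuming only the description given in the reply (not the released implementation):

```python
def init_sum(a_columns):
    """Collapse evaluated model columns (each a list of 0/1 per sample)
    into one sum vector -- the only sort state that needs to be stored."""
    n = len(a_columns[0])
    s = [0] * n
    for col in a_columns:
        for i, v in enumerate(col):
            s[i] += v
    return s

def add_model(s, y_new):
    """Online update: fold a new model's binary results into the sum vector."""
    return [si + yi for si, yi in zip(s, y_new)]

def difficulty_order(s):
    """Samples sorted easiest-first (highest count of correct predictions)."""
    return sorted(range(len(s)), key=lambda i: -s[i])

s = init_sum([[1, 1, 0], [1, 0, 0]])  # two models, three samples -> [2, 1, 0]
s = add_model(s, [1, 1, 1])           # -> [3, 2, 1]
assert difficulty_order(s) == [0, 1, 2]
```

Only `s` (and a small constant number of temporaries) persists between insertions, which is consistent with the sub-100MB overhead quoted above.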
Rebuttal 1: Rebuttal: We thank the reviewers for finding our work to be ***important in the era of large models*** (Reviewers jvk8, T9R1, bEaJ, tnhy), to have ***strong mathematical formulations and theoretical results*** (Reviewers oB2P, bEaJ, T9R1), and to be ***tackling an important, pressing problem with sound empirical results*** (Reviewers jvk8, T9R1, bEaJ). We provide detailed responses to each individual reviewer independently, and summarize here the most important common points, additional experiments, and visualizations provided in the rebuttal: - We have added a **detailed visualization of the Sort & Search method** in the uploaded PDF. We hope that this figure improves the clarity of our paper, and we will make sure to add it to Section 3 of the revised version. - We emphasise that **our framework is domain- and task-agnostic**. We summarize below how our Sort & Search framework can be extended to tasks beyond classification. - **Language Models**: Our framework can be directly applied to multiple-choice language model evaluations where the metric is exact match or near-exact match, a binary metric perfectly suitable to our framework. - **Dense Prediction Tasks or Multi-label Classification**: For pixel-wise prediction tasks or multi-label classification, our framework can be extended by flattening the predictions of each sample. That is, every sample contributes an array of binary values to the A matrix instead of a single value. The extension to the search algorithm is simple: if it samples a point, all associated values are sampled and annotated. - **Tasks with Real-valued Predictions**: For tasks such as regression or BLEU score evaluations, our framework can operate after applying a thresholding operation. This converts predictions into binary values (above or below the threshold). While this allows the framework to remain valid, it limits the predictions obtained to the binary threshold. 
A way to extend this would be having multiple thresholds that enable quantized searching over output values, but this is beyond the current scope of the work. The other applications above are more straightforward applications of our framework. We hope this clarifies the adaptability of our framework to various tasks and domains, including language modeling. - We extend our results from Figure 2 to n’={64,000, 128,000, 256,000, 512,000} and add these plots in the common PDF. We observe that the **absolute error (MAE) does not further reduce as we increase the sampling budget**, corroborating the point in Sec. 4.6 that **additional sampling does not decrease the MAE**. - We conduct an additional experiment to **serially add new models using the Sort & Search** predictions as ground truth for further additions. We observe that the **errors do not accumulate with consecutive serial applications of our Sort & Search framework**. We provide a simple, intuitive proof of why this is the case. This further **showcases the robustness of our method in being applicable without introducing cascading errors**. Pdf: /pdf/4b42be2d4257ef15684793bd4ed748a8383f20a1.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: The authors propose *lifelong benchmarks*, a solution to the high cost and saturation problems of the current evaluation paradigm. They try to predict which samples will be harder to classify to select a subset that can efficiently serve as a proxy for estimating model performance on the full set, and also use these correlates to determine the importance of new samples for evaluating the existing models. "Sort & Search" is their proposed method to bring a 1000x reduction in inference cost by finding these representative samples. Each time a new model is added, the "insert" function is called, intended to efficiently find samples to test this new model on to update the cache of sample-level correctness scores, which can be averaged to return a new score for all models in the benchmark. Using existing information of sample-level difficulty from the performance of initial models, the samples are sorted, using permutation matrix P of the samples that have been compared together across models and prediction matrix Y of which sample/model pairs are correctly/incorrectly scored. By iteratively optimizing Y with constant P and P with constant Y with their DP search algorithm, they can efficiently order the samples by difficulty. Assuming this ordering will generalize to future models, they can employ uniform or random sampling over the ordering of samples, and they can optimize the selection to pick a set of samples that minimizes the error between the full evaluation over all samples and the smaller set, wrt MAE. To efficiently insert samples, they just have to evaluate them on a set of models to estimate their difficulty. Strengths: Strong mathematical formulation and convincing demonstration of the result. Weaknesses: Simple classification tasks aren't the domain we're most concerned about evaluating models on efficiently. It is unclear how this would extend to harder domains such as LM evaluation. 
Technical Quality: 4 Clarity: 3 Questions for Authors: None Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Limitations addressed my weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed feedback. We seek to address the reviewer’s concerns and suggestions below: > W1 Simple classification tasks aren't the domain we're most concerned about evaluating models on efficiently. It is unclear how this would extend to harder domains such as LM evaluation. Thank you for raising this important point. Our framework is domain-agnostic. What we require is an A matrix constructed using any binary metric, with rows representing samples and columns representing evaluated models. We summarize how our framework can be applied to various domains below: - **Language Models**: Our framework can be directly applied to multiple-choice language model evaluations where the metric is exact match or near-exact match, a binary metric perfectly suitable to our framework. - **Dense Prediction Tasks or Multi-label Classification**: For pixel-wise prediction tasks or multi-label classification, our framework can be extended by flattening the predictions of each sample. That is, every sample contributes an array of binary values to the A matrix instead of a single value. The extension to the search algorithm is simple: if it samples a point, all associated values are sampled and annotated. - **Tasks with Real-valued Predictions**: For tasks such as regression or BLEU score evaluations, our framework can operate after applying a thresholding operation. This converts predictions into binary values (above or below the threshold). While this allows the framework to remain valid, it limits the predictions obtained to the binary threshold. - A way to extend this would be having multiple thresholds that enable quantized searching over output values, but this is beyond the current scope of the work. The other applications above are more straightforward applications of our framework. We hope this clarifies the adaptability of our framework to various tasks and domains, including language modeling. 
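The three extensions above share one requirement: each sample must contribute binary entries to the A matrix. A small sketch of how one model's column(s) could be binarized in each regime (all names and values below are illustrative, not from the paper):

```python
# Multiple-choice LM evaluation: exact match gives one binary value per sample.
mc_preds, mc_gold = ["B", "C", "A", "D"], ["B", "A", "A", "D"]
col_exact = [int(p == g) for p, g in zip(mc_preds, mc_gold)]         # [1, 0, 1, 1]

# Multi-label / dense prediction: flatten each sample's labels, so every
# sample contributes an array of binary entries instead of a single value.
seg_preds = [[1, 0, 1], [0, 0, 1]]
seg_gold  = [[1, 1, 1], [0, 0, 1]]
col_flat = [int(p == g) for ps, gs in zip(seg_preds, seg_gold)
            for p, g in zip(ps, gs)]                                 # [1, 0, 1, 1, 1, 1]

# Real-valued metric (e.g. a BLEU-like score): binarize by thresholding.
scores, threshold = [0.41, 0.12, 0.77], 0.3
col_thresh = [int(s >= threshold) for s in scores]                   # [1, 0, 1]
```

Stacking such columns across models yields the binary A matrix the rebuttal describes; the threshold value here is arbitrary, which is exactly the limitation the reply acknowledges.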
--- We hope we have addressed the major concerns of the reviewer, and are happy to answer any further questions/concerns. We look forward to a fruitful reviewer-author discussion phase. --- Rebuttal Comment 1.1: Comment: Thank you for your response. Yes, I agree that it is easy to see how this technique could be applied to those consequential problems; that's why I gave a pretty high score to begin with. While I think the paper would have been much more impactful if the technique were demonstrated on these tasks, I think the current version as is is already a solid paper that should be accepted. I will keep my score.
MultiOOD: Scaling Out-of-Distribution Detection for Multiple Modalities
Accept (spotlight)
Summary: This paper first proposes a MultiOOD benchmark for multimodal OOD detection. It then further proposes methods including A2D and NP-Mix for better tackling the multimodal OOD detection task. Strengths: 1. I appreciate the formulation of the multimodal OOD detection benchmark. 2. This work presents extensive experiments and achieves good results. Weaknesses: (See the questions for more details.) Technical Quality: 2 Clarity: 3 Questions for Authors: Overall, I am now hesitating around the borderline for this paper and below are my concerns: 1. Generally, I believe that the relationship between this work and other multi-modality works can be very close. I thus suggest the authors also discuss other multi-modality works in their related work. I will also elaborate on this more in the concerns below. 2. As for the benchmark, I believe that it is important for the evaluation to be conducted in a more statistically significant manner. In other words, from my perspective, the authors may consider at least evaluating methods on different ID/OOD category splits and reporting the standard deviation. 3. W.r.t. A2D in Sec 4.2, from my perspective, it seems that sometimes the orders of non-ground-truth classes matter. For example, while for a cat input both fox and truck are non-ground-truth classes, it seems natural for fox to have a higher softmax score than truck. Thus, I am curious: what if the discrepancy is maximized in an order-preserving manner? I would appreciate it if this could be performed as an ablation study to see whether this alternative to A2D can be more effective. 4. W.r.t. Sec 4.3, the authors claim that they get inspiration from [16]. I thus hope to see a more detailed comparison between the proposed method and the method in [16]. Besides, it also seems worth comparing with Learning Placeholders for Open-Set Recognition, CVPR 2021. If I am not wrong, it seems that the proposed NP-Mix is somehow similar to its data placeholder. 5. 
As for the evaluation, as mentioned in the first concern, this work is closely related to those multi-modality ones. Thus, to better validate the efficacy of A2D and NP-Mix, besides comparing with only uni-modal OOD methods, it is also suggested to create baselines that combine uni-modal OOD methods with existing typical multi-modality methods outside the OOD area. This would allow the proposed method to be better evaluated. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: This work has discussed its potential limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your insightful reviews, and we appreciate your valuable suggestions! We address your concerns and questions as follows: >**Q1**: Discuss other multi-modality works in the related work. **A1**: Thanks for your insightful suggestion! As shown in **Table 10** in our main paper, we indeed compared A2D and NP-Mix with other multimodal self-supervised training tasks, including Contrastive Loss, Relative Norm Alignment, Cross-modal Distillation, and Cross-modal Translation. A2D and NP-Mix show substantial superiority over the other multimodal tasks. We will add a subsection on Multimodal Learning in the related work to discuss these works. ___ >**Q2**: Evaluate methods on different ID/OOD category splits. **A2**: Thanks for your valuable comment! As shown in **Figure 13** in our main paper, we indeed evaluated our method on **five different ID/OOD category splits**. Training with our A2D and NP-Mix is statistically stable and surpasses the baselines significantly under different dataset splits. We also evaluated our method on **three different random seeds**, as shown in **Figure 12** in our main paper. Our method significantly surpasses the baselines under different random seeds. ___ >**Q3**: W.r.t. A2D, the orders of non-ground-truth classes matter. What happens if the discrepancy is maximized in an order-preserved manner? **A3**: Thanks for your interesting idea! There are two small issues with the idea. One issue is the conflicting objective with A2D training. A2D aims to enlarge the prediction distance between different modalities for all non-ground-truth classes. If we preserve the order of predictions for different modalities (for example, by making the most similar non-ground-truth class have the second-largest prediction for both video and optical flow), the prediction distance between them will decrease as a result. The second issue concerns the implementation.
For each class, we need to manually select its most similar classes to define the order, which is complicated for datasets with a large number of classes. We implement this idea on EPIC-Kitchens 4/4 using video and optical flow and use Energy as the OOD score. EPIC-Kitchens 4/4 has four ID classes: ‘put’, ‘take’, ‘open’, ‘close’. We define 'put' and 'take' as paired classes (most similar) and ‘open’ and ‘close’ as another paired class. Given an input sample, we preserve its prediction order for both modalities by making the paired classes have the second-largest prediction. As shown below, A2D with order-preserved prediction is better than w/o A2D, but still lags behind the original A2D by a large gap. Besides, because of the conflicting objective with A2D, the prediction discrepancy $l_{OOD}-l_{ID}$ for order-preserved A2D is significantly reduced compared with the original A2D. ||FPR95$\downarrow$|AUROC$\uparrow$|$l_{OOD}-l_{ID}$| |-|-|-|-| |w/o A2D|76.68|68.29|0.2696| |A2D (order-preserved)|73.13|69.39|0.2866| |A2D|**66.98**|**72.45**|0.3987| ___ >**Q4**: Detailed comparison between the proposed method and the method in [16]. It also seems worth comparing with Learning Placeholders for Open-Set Recognition, CVPR 2021. **A4**: Thanks for your useful suggestion! The major difference between [16] and our NP-Mix is the availability of real outliers during training. In [16], a few labeled outliers are available, and more synthesized outliers are generated based on the labeled outliers and unlabeled data. However, in our case, we assume only ID data is available during training, and we want to generate synthesized outliers using only ID data. Therefore, [16] can't be used in our case directly. We make a small modification to it and implement it as a baseline.
Instead of randomly selecting one sample from real outliers and another sample from its nearest neighbor in unlabeled data for Mixup, we randomly select a sample from one ID class and another sample from its nearest neighbor in other ID classes for Mixup. We also include PROSER [a] as a baseline based on your suggestion. PROSER [a] conducts manifold mixup with pairs from different classes without considering neighborhood information, which introduces the manifold intrusion problem, as shown in Figure 2 in the attached PDF in the global response. Instead, our NP-Mix explores broader feature spaces by leveraging the information from nearest-neighbor classes without injecting noisy synthesized outliers, and achieves the best performance. ||FPR95$\downarrow$|AUROC$\uparrow$| |-|-|-| |A2D+NNG-Mix [16]|38.78|88.07| |PROSER|44.01|87.52| |A2D+NP-Mix (ours)|**36.38**|**88.91**| [a] Zhou, et al. Learning placeholders for open-set recognition. In: CVPR, 2021 ___ >**Q5**: Create baselines that combine uni-modal OOD methods and existing typical multi-modality methods outside the OOD area. **A5**: Thanks for your suggestions! We added evaluations on HMDB51 25/26 for the ensemble of multiple unimodal OOD methods for each modality to demonstrate the importance of studying MultiOOD. Due to space limits, we put the detailed results in the **global response** at the top of this page. The ensemble of multiple unimodal OOD methods always brings performance improvements, but still has a large gap compared with our multimodal solution (A2D+NP-Mix), further demonstrating the importance of studying MultiOOD. For baselines on typical multimodal methods outside the OOD area, we already compared A2D and NP-Mix with four multimodal self-supervised training tasks in Table 10 in our main paper. Here, we further include two multimodal baselines, Gradient Blending [b] and SimMMDG [c]. A2D and NP-Mix show substantial superiority over other multimodal methods.
||FPR95$\downarrow$|AUROC$\uparrow$| |-|-|-| |Gradient Blending|42.92|87.28| |SimMMDG|42.05|87.91| |A2D+NP-Mix (ours)|**36.38**|**88.91**| [b] Wang, et al. What makes training multi-modal classification networks hard? In: CVPR, 2020 [c] Dong, et al. SimMMDG: A simple and effective framework for multi-modal domain generalization. In: NeurIPS, 2023 --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. Most of my concerns have been well addressed, and I thus increase my rating from 4 to 6. --- Reply to Comment 1.1.1: Title: Thanks for recognizing our work and raising your rating to 6! Comment: We are glad to hear that we have addressed most of your concerns and that you have raised your rating to 6! Thanks for spending a significant amount of time on our submission and giving lots of valuable and insightful suggestions, which make our paper even stronger! We will also include all added experiments and points in the final paper for better clarification.
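For readers following the A2D discussion in Q3/A3 above, the per-sample prediction discrepancy between two modalities over the non-ground-truth classes can be sketched as follows. This is a minimal NumPy illustration using an L1 distance (the paper studies several distance functions); the function names are ours, not the authors' released code:

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-subtraction for stability."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def a2d_discrepancy(logits_a, logits_b, labels):
    """Per-sample L1 distance between two modalities' softmax predictions,
    taken over the non-ground-truth classes only."""
    p_a, p_b = softmax(logits_a), softmax(logits_b)
    n, c = p_a.shape
    mask = np.ones((n, c), dtype=bool)
    mask[np.arange(n), labels] = False      # exclude the ground-truth class
    diff = np.abs(p_a - p_b)
    return diff[mask].reshape(n, c - 1).sum(axis=-1)
```

Training would then maximize this quantity (e.g. by adding its negative to the loss) while a standard cross-entropy term keeps both modalities agreeing on the ground-truth class.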
Summary: This paper introduces MultiOOD, a new benchmark for multimodal out-of-distribution (OOD) detection. The authors propose two new techniques: 1) Agree-to-Disagree (A2D), which encourages discrepancy between modality predictions during training, and 2) NP-Mix, a novel outlier synthesis method. Extensive experiments on the MultiOOD benchmark demonstrate significant improvements over existing unimodal OOD detection methods. Strengths: Originality: -First benchmark for multimodal OOD detection (MultiOOD). -A2D algorithm leveraging modality prediction discrepancy. -Outlier synthesis method (NP-Mix). Quality: -Strong performance improvements over baselines. -Thoughtful analysis of the modality prediction discrepancy phenomenon. -Comprehensive experiments on diverse datasets. Clarity: -Well-organized structure. -Clear explanations of key concepts and methods. Weaknesses: 1) Lacks important baselines (e.g. an ensemble of multiple single-modal OOD methods for each modality). This also relates to the question of "why should we study MultiOOD?" The importance of studying MultiOOD and its practical impact on real-world applications could be further demonstrated. 2) Limited theoretical analysis of the proposed methods. A2D and NP-Mix need more theoretical analysis. 3) The experiments are only conducted on the task of action recognition. In this case, the title "MultiOOD" seems to be over-claiming. Please specify the task in the title and the main text. In addition, it only focuses on specific modalities (video, optical flow, audio). This makes the scope of the paper somewhat limited. 5) Some figures (e.g. Fig. 4) could be improved for clarity, with consistent color-coding and the same x-axis scales. Technical Quality: 3 Clarity: 3 Questions for Authors: 1) Why can "α > 1" ensure that the synthesized outliers reside in the intermediary space between two prototypes, rather than near the ID data? Please give more explanation.
2) How sensitive are the A2D and NP-Mix methods to hyperparameter choices? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors acknowledge some limitations, such as the focus on specific modalities (video, optical flow, audio) and action recognition tasks. However, they could further discuss potential limitations in scaling to a larger number of modalities or very different types of data. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your insightful reviews, and we appreciate your valuable suggestions! Please find the responses to your questions below: >**Q1**: Lacks important baselines (e.g. an ensemble of multiple single-modal OOD methods for each modality). This also relates to the question of "why should we study MultiOOD?" **A1**: Thanks for your suggestions! We added evaluations on HMDB51 25/26 for the ensemble of multiple unimodal OOD methods for each modality to demonstrate the importance of studying MultiOOD. Due to space limits, we put the detailed results in the **global response** at the top of this page. The ensemble of multiple unimodal OOD methods always brings performance improvements, but still has a large gap compared with our multimodal solution (A2D+NP-Mix), further demonstrating the importance of studying MultiOOD. ___ >**Q2**: Limited theoretical analysis of proposed methods. A2D and NP-Mix need more theoretical analysis. **A2**: Thanks for bringing up this point. Our A2D is based on the empirical observation of Modality Prediction Discrepancy on our MultiOOD benchmark. This discrepancy can be attributed to the unavailability of semantic information on OOD data during model training, which prompts each modality to generate conjectures based on its unique characteristics when encountering OOD data during testing. We demonstrate that such discrepancy is highly correlated with the ultimate OOD performance. We also show through extensive experiments that A2D can amplify such discrepancy and enhance the efficacy of OOD detection. For NP-Mix, we give an additional analysis of decision boundaries, as shown in Figure 2 in the attached PDF in the global response. Vanilla Mixup causes manifold intrusion due to the injection of noisy synthesized outliers. Instead, our NP-Mix only generates synthesized outliers in the intermediary space between two classes, thus avoiding manifold intrusion and helping the network learn better decision boundaries.
In summary, Multimodal OOD Detection is at a very early stage. Our MultiOOD benchmark aims to fill this gap, and our proposed A2D+NP-Mix solution is mostly based on empirical observations from extensive experiments. More interesting solutions and theoretical analyses are expected in future work. ___ >**Q3**: The experiments are only conducted on the task of action recognition. The title "MultiOOD" seems to be over-claiming. In addition, it only focuses on specific modalities (video, optical flow, audio). This makes the scope of the paper somewhat limited. **A3**: Thanks for your insightful comments! To further increase the scope of our paper and demonstrate the versatility of the proposed A2D training, we add another task of 3D Semantic Segmentation using LiDAR point cloud and RGB images. We evaluate on the SemanticKITTI [a] dataset and set all vehicle classes as OOD classes. During training, we set the labels of OOD classes to void and ignore them. During inference, we aim to segment the known classes with high Intersection over Union (IoU) scores, and detect OOD classes as unknown. We adopt three metrics for evaluation, including FPR95, AUROC, and $mIOU_c$ (mean Intersection over Union for known classes). We use ResNet-34 and SalsaNext [b] as the backbones of the camera stream and LiDAR stream. We compare our A2D with basic LiDAR-only and Late Fusion, as well as two multimodal 3D semantic segmentation baselines, PMF [c] and XMUDA [d]. Our A2D also shows strong performance under this new task (**3D Semantic Segmentation**) with different combinations of modalities (**LiDAR point cloud and RGB images**). We will integrate these new benchmark results in our paper to further increase its scope. ||FPR95$\downarrow$|AUROC$\uparrow$|$mIOU_c\uparrow$| |-|-|-|-| |LiDAR-only|57.78|84.76|59.81| |Late Fusion|53.43|86.98|61.43| |PMF|51.57|88.13|61.38| |XMUDA|55.49|89.99|61.45| |A2D (ours)|**49.02**|**91.12**|**61.98**| [a] Behley, et al.
SemanticKITTI: A dataset for semantic scene understanding of LiDAR sequences. In: ICCV, 2019 [b] Cortinhal, et al. SalsaNext: Fast, uncertainty-aware semantic segmentation of LiDAR point clouds. In: ISVC, 2020 [c] Zhuang, et al. Perception-aware multi-sensor fusion for 3D LiDAR semantic segmentation. In: ICCV, 2021 [d] Jaritz, et al. xMUDA: Cross-modal unsupervised domain adaptation for 3D semantic segmentation. In: CVPR, 2020 ___ >**Q4**: Some figures (e.g. Fig. 4) could be improved for clarity, with consistent color-coding and the same x-axis scales. **A4**: Thanks for your suggestion! We have improved the figures and will update them in the final version of the paper. ___ >**Q5**: Why can "$\alpha$ > 1" ensure that the synthesized outliers reside in the intermediary space between two prototypes, rather than near the ID data? **A5**: Given a $\lambda$ sampled from the distribution Beta($\alpha$, $\alpha$), as shown in Figure 1 in the attached PDF in the global response, when $\alpha$ < 1, $\lambda$ has a very high probability of being close to 0 or 1. As a result, the synthesized data Z = $\lambda$ * Z1 + ($1-\lambda$) * Z2 will be close to Z1 or Z2. Instead, when $\alpha$ > 1, $\lambda$ is most likely to fall near 0.5. As a result, the synthesized data Z = $\lambda$ * Z1 + ($1-\lambda$) * Z2 will be in the intermediary space between Z1 and Z2. ___ >**Q6**: How sensitive are the A2D and NP-Mix methods to hyperparameter choices? **A6**: For A2D, we analyzed the choices of different distance functions, as shown in Table 5 in our main paper. A2D training exhibits robustness across various distance functions. Regardless of the specific distance metric employed, substantial improvements are consistently observed compared to the baseline approach without A2D training. We investigated the parameter sensitivity of NP-Mix on the Nearest Neighbor parameter N and the Mixup parameter $\alpha$, as shown in Figure 10 and Figure 11 in our main paper.
NP-Mix demonstrates robustness across different parameter settings and yields substantial enhancements in OOD performance in all cases. --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal. Comment: Thanks for the rebuttal. Most of my concerns are well addressed. The reviewer suggests adding this information to the revision to make the paper stronger. I will increase my rating to weak accept. --- Reply to Comment 1.1.1: Title: Thanks for recognizing our work and raising your rating to weak accept! Comment: We are glad to hear that we have addressed most of your concerns and that you have raised your rating to weak accept! Thanks for spending a significant amount of time on our submission and giving lots of valuable and insightful suggestions, which make our paper even stronger! We will also include all added experiments and points in the final paper for better clarification.
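The Beta($\alpha$, $\alpha$) behavior invoked in A5 above is easy to verify numerically. The following is an illustrative NumPy check (ours, not the authors' code): the average distance of $\lambda$ from 0.5 is large for $\alpha < 1$ and small for $\alpha > 1$.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_dist_from_half(alpha, n=100_000):
    """Average |lambda - 0.5| for lambda ~ Beta(alpha, alpha)."""
    lam = rng.beta(alpha, alpha, size=n)
    return float(np.abs(lam - 0.5).mean())

# alpha < 1: mass piles up near 0 and 1, so Z stays close to Z1 or Z2;
# alpha > 1: mass concentrates near 0.5, so Z lands between the prototypes.
spread_small_alpha = mean_dist_from_half(0.2)   # large spread
spread_large_alpha = mean_dist_from_half(4.0)   # small spread
```

With these settings `spread_small_alpha` comes out well above `spread_large_alpha`, matching the density plots referenced in the rebuttal PDF.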
Summary: The paper introduces a novel OOD detection benchmark for multimodal data (called MultiOOD), covering diverse dataset sizes and modalities. Based on this benchmark, the authors first demonstrate the Modality Prediction Discrepancy phenomenon, whereby the discrepancies of softmax predictions are negligible for in-distribution data (across different modalities) and significant for OOD data. Based on these observations, the authors introduce a novel Agree-to-Disagree (A2D) algorithm, which aims to enhance such discrepancies during training. Additionally, the authors propose a new outlier synthesis algorithm, NP-Mix, that explores broader feature spaces and complements A2D to strengthen the OOD detection performance. Experimental validation is extensive and confirms the improvements in OOD detection due to the newly proposed algorithms. Strengths: - the paper has a significant scientific contribution, by introducing the first multi-modal benchmark for OOD detection - besides this, it contributes to the improvement of OOD detection by introducing two new algorithms: the A2D algorithm and a new outlier synthesis algorithm, NP-Mix, that explores broader feature spaces and complements A2D - the paper is clearly written and well-documented - the review of the state of the art is comprehensive and covers most of the relevant work - experimental validation is extensive and underlines the improvements introduced by the A2D and NP-Mix algorithms for OOD detection Weaknesses: - Some more details are required regarding some aspects. Technical Quality: 4 Clarity: 4 Questions for Authors: Here are my concerns: - In the video modality, what features do you extract to characterize the stream? Do you extract per-frame features, or per-video features? - Same question for the audio modality. - You use 5 datasets, all of them having the video and optical flow modalities, but the audio modality is missing from two of them.
In this case, what protocol do you adopt? Do you ignore them and perform the audio analysis only on the remaining three? - Is your approach robust when one modality is missing? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The identified limitations are: (i) the performance on the Near-OOD benchmark with a large number of classes can be further improved; and (ii) the ID/OOD discrepancy could also be further improved. The paper does not present any negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your insightful reviews and great support of our paper! We provide the responses to your questions as follows: >**Q1**: In the video modality, what features do you extract to characterize the stream? Do you extract per-frame features, or per-video features? **A1**: Thanks for your valuable question! We use the SlowFast [a] network as the backbone to extract per-video features. The SlowFast model involves a Slow pathway operating at a low frame rate to capture spatial semantics and a Fast pathway operating at a high frame rate to capture motion at fine temporal resolution. [a] Feichtenhofer, et al. Slowfast networks for video recognition. In: ICCV, 2019 ___ >**Q2**: Same question for the audio modality. **A2**: We also extract per-video features for the audio modality. Each 10-second audio waveform is converted into one spectrogram and then input to the ResNet-18 audio encoder. Audio clips shorter than 10 seconds are padded to 10 seconds. ___ >**Q3**: You use 5 datasets, all of them having the video and optical flow modalities, but the audio modality is missing from two of them. In this case, what protocol do you adopt? Do you ignore them and perform the audio analysis only on the remaining three? **A3**: All datasets have the video and optical flow modalities. Therefore, we create four Multimodal Near-OOD benchmarks and two Multimodal Far-OOD benchmarks using video and optical flow, as shown in Figure 2 in the paper. Only EPIC-Kitchens, HAC, and Kinetics have the audio modality. We create two challenging Multimodal Near-OOD benchmarks using EPIC-Kitchens and Kinetics with different combinations of modalities (video-audio, flow-audio, video-flow-audio), as shown in Table 9 in the paper. ___ >**Q4**: Is your approach robust when one modality is missing? **A4**: Thanks for your interesting comment! In our framework, we also train a classifier for each modality to get predictions from each modality separately.
By default, we use the predictions obtained from the combined embeddings of all modalities to calculate the OOD score. However, when one modality is missing, we can use the predictions from the remaining modality to calculate the OOD score. We added the evaluations on HMDB51 25/26 under this challenging condition as below. We use Energy as the OOD score. | | FPR95$\downarrow$ | AUROC$\uparrow$ | |---------|----------|----------| | Video-only | 64.05 | 83.14 | | Flow-only | 71.46 | 75.51 | | A2D+NP-Mix (Video) | 47.49 | 86.48 | | A2D+NP-Mix (Flow) | 66.01 | 77.23 | | A2D+NP-Mix | 36.38 | 88.91 | When one modality is missing, the performance drops a little, especially when the video is missing (A2D+NP-Mix (Flow)). However, compared with training on one modality alone (Video-only and Flow-only), training with A2D and NP-Mix can also bring significant improvements for each modality when another modality is missing. For example, A2D+NP-Mix (Video), the case when optical flow is missing, yields a 16.56\% relative improvement on FPR95 compared with Video-only. This underscores the importance of multimodal training for Multimodal OOD Detection. --- Rebuttal Comment 1.1: Title: Acknowledgement of Rebuttal Comment: I want to thank the authors for addressing all my concerns. --- Reply to Comment 1.1.1: Comment: We are glad to hear that we have addressed all your concerns. Thanks again for your insightful reviews and great support of our paper! We will also include all added experiments and points in the final paper for better clarification.
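For reference, the Energy score used throughout the tables above is the standard negative-free-energy OOD score, $T \log \sum_i \exp(f_i(x)/T)$, computed from whichever classifier's logits are available (fused or single-modality). A minimal NumPy sketch (ours, not the authors' code), with the usual log-sum-exp stabilization:

```python
import numpy as np

def energy_score(logits, T=1.0):
    """Negative free energy of the logits; HIGHER values indicate samples
    that look more in-distribution under this convention."""
    z = np.asarray(logits, dtype=float) / T
    m = z.max(axis=-1, keepdims=True)                  # numerical stabilization
    return T * (m.squeeze(-1) + np.log(np.exp(z - m).sum(axis=-1)))
```

A confidently classified sample (one dominant logit) receives a higher score than a flat, uncertain one, which is what the FPR95/AUROC thresholds above exploit.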
null
null
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for their encouraging and insightful comments. We have carefully read through them and provided global and individual responses, respectively. In global responses here, we first add evaluations for a general question on the **ensemble of multiple unimodal OOD methods**. Then, we add our new findings of A2D on a new task (**3D Semantic Segmentation**) with different combinations of modalities (**LiDAR point cloud and RGB images**). We also attach a PDF with figures to better illustrate answers for Reviewer gP94's questions on "Why $\alpha$ > 1 can ensure the synthesized outliers reside in the intermediary space between two prototypes, rather than near the ID data?" and "more detailed analysis on NP-Mix". >**Ensemble of multiple unimodal OOD methods** Based on the suggestions from Reviewer gP94 and Reviewer z2YR, we added evaluations on HMDB51 25/26 for the ensemble of multiple unimodal OOD methods for each modality to demonstrate the importance of studying MultiOOD. We first evaluate the ensemble of different OOD scores on a single modality. Specifically, we choose three different OOD scores for ensemble: probability space (MSP), logit space (Energy), and feature space (Mahalanobis). For all three scores, we normalize their values between 0 and 1 and calculate the ensemble score as score = $\alpha$ * score\_1 + ($1-\alpha$) * score\_2. For $\alpha$, we do a grid search from 0.1 to 0.9 with a 0.1 interval and report the best performance. As shown below, combining MSP or Energy with Mahalanobis can bring significant improvement, especially for video. However, there is still a large gap compared with our best multimodal OOD detection solution (A2D+NP-Mix), demonstrating the importance of studying MultiOOD. 
||FPR95$\downarrow$|AUROC$\uparrow$| |-|-|-| |Video-only (MSP)|60.78|84.39| |Flow-only (MSP)|70.37|72.97| |Video-only (Energy)|64.05|83.14| |Flow-only (Energy)|71.46|75.51| |Video-only (Mahalanobis)|51.20|81.25| |Flow-only (Mahalanobis)|89.98|59.38| |Video-only (MSP+Energy)|63.18|84.19| |Video-only (MSP+Mahalanobis)|41.61|86.44| |Video-only (Energy+Mahalanobis)|44.44|86.62| |Flow-only (MSP+Energy)|70.15|73.40| |Flow-only (MSP+Mahalanobis)|66.88|73.59| |Flow-only (Energy+Mahalanobis)|65.36|74.69| |A2D+NP-Mix (ours best)|**33.77**|**90.05**| We then evaluate the ensemble of various OOD scores on different modalities and calculate the ensemble score as score = $\alpha$ * score\_video + ($1-\alpha$) * score\_flow. In this case, we use the Video-only and Flow-only models as above. For $\alpha$, we also do a grid search from 0.1 to 0.9 with a 0.1 interval and report the one with the best performance. As shown below, combining more modalities always brings performance improvements, but still has a large gap compared with our A2D+NP-Mix, further demonstrating the importance of studying MultiOOD. ||FPR95$\downarrow$|AUROC$\uparrow$| |-|-|-| |Video (MSP) + Flow (MSP)|50.98|85.40| |Video (Energy) + Flow (Energy)|49.89|85.38| |Video (Mahalanobis) + Flow (Mahalanobis)|52.07|81.27| |Video (MSP) + Flow (Energy)|46.62|86.25| |Video (Energy) + Flow (MSP)|50.98|83.69| |Video (MSP) + Flow (Mahalanobis)|57.30|84.68| |Video (Mahalanobis) + Flow (MSP)|49.02|82.92| |Video (Mahalanobis) + Flow (Energy)|47.71|83.51| |Video (Energy) + Flow (Mahalanobis)|59.91|81.96| |A2D+NP-Mix (ours best)|**33.77**|**90.05**| ___ >**Scope of the paper beyond action recognition using video, optical flow, and audio** Based on the suggestions from Reviewer gP94, we add another task of 3D Semantic Segmentation using LiDAR point cloud and RGB images, to further increase the scope of our paper and demonstrate the versatility of the proposed A2D training. 
We evaluate on the SemanticKITTI [a] dataset and set all vehicle classes as OOD classes. During training, we set the labels of OOD classes to void and ignore them. During inference, we aim to segment the known classes with high Intersection over Union (IoU) scores, and detect OOD classes as unknown. We adopt three metrics for evaluation, including FPR95, AUROC, and $mIOU_c$ (mean Intersection over Union for known classes). We use ResNet-34 and SalsaNext [b] as the backbones of the camera stream and LiDAR stream. We compare our A2D with basic LiDAR-only and Late Fusion, as well as two multimodal 3D semantic segmentation baselines, PMF [c] and XMUDA [d]. Our A2D also shows strong performance under this new task (**3D Semantic Segmentation**) with different combinations of modalities (**LiDAR point cloud and RGB images**). We will integrate these new benchmark results in our paper to further increase its scope. ||FPR95$\downarrow$|AUROC$\uparrow$|$mIOU_c\uparrow$| |-|-|-|-| |LiDAR-only|57.78|84.76|59.81| |Late Fusion|53.43|86.98|61.43| |PMF|51.57|88.13|61.38| |XMUDA|55.49|89.99|61.45| |A2D (ours)|**49.02**|**91.12**|**61.98**| [a] Behley, et al. SemanticKITTI: A dataset for semantic scene understanding of LiDAR sequences. In: ICCV, 2019 [b] Cortinhal, et al. SalsaNext: Fast, uncertainty-aware semantic segmentation of LiDAR point clouds. In: ISVC, 2020 [c] Zhuang, et al. Perception-aware multi-sensor fusion for 3D LiDAR semantic segmentation. In: ICCV, 2021 [d] Jaritz, et al. xMUDA: Cross-modal unsupervised domain adaptation for 3D semantic segmentation. In: CVPR, 2020 Pdf: /pdf/a279f4675f84d0093c1a0b6d73dfce8ae4e55aac.pdf
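The ensemble protocol described in the global response (min-max normalize each score to [0, 1], then grid-search $\alpha$ from 0.1 to 0.9 for score = $\alpha$ * score_1 + ($1-\alpha$) * score_2) can be sketched as follows. Selecting by AUROC and all function names are our assumptions for illustration, not the authors' released code:

```python
import numpy as np

def auroc(id_scores, ood_scores):
    """Probability that a random ID sample outscores a random OOD sample."""
    gt = (id_scores[:, None] > ood_scores[None, :]).mean()
    eq = (id_scores[:, None] == ood_scores[None, :]).mean()
    return gt + 0.5 * eq

def minmax(s):
    """Normalize scores to [0, 1], as done before ensembling."""
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def best_ensemble(s1_id, s1_ood, s2_id, s2_ood):
    """Grid-search alpha in {0.1, ..., 0.9} for the combined score."""
    s1 = minmax(np.concatenate([s1_id, s1_ood]))
    s2 = minmax(np.concatenate([s2_id, s2_ood]))
    n = len(s1_id)
    best_alpha, best_auroc = None, -1.0
    for alpha in np.arange(0.1, 1.0, 0.1):
        ens = alpha * s1 + (1 - alpha) * s2
        au = auroc(ens[:n], ens[n:])
        if au > best_auroc:
            best_alpha, best_auroc = round(float(alpha), 1), au
    return best_alpha, best_auroc
```

The same helper works for both variants in the tables: combining two score types on one modality, or one score type across two modalities.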
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Semi-Open 3D Object Retrieval via Hierarchical Equilibrium on Hypergraph
Accept (poster)
Summary: This paper extends the open-set 3D object retrieval problem to a semi-open situation where hierarchical semantic labels are considered. The authors leverage the multi-level category information with a proposed hypergraph-based Hierarchical Equilibrium Representation (HERT) framework. This framework consists of a Hierarchical Retrace Embedding module for encapsulating hierarchical semantic information and a Structured Equilibrium Tuning module for learning generalizable features according to the constructed superposed hypergraph from both local coherent and global entangled correlations. Strengths: The proposed semi-open 3D object retrieval setting is reasonable and more practical for real-world applications. The idea of introducing a hierarchical semantic graph is suitable for this task, and the proposed framework is technically sound. Experimental results demonstrate the effectiveness of the proposed framework, and the ablation study comprehensively shows the functionality of the coarse label. Weaknesses: 1. The mathematical symbols within this article are chaotic, as the following examples show: 1.1. The notations of basic features in Line 152 ($\mathcal{F}$) and Line 155 ($f$) are inconsistent. Does the aggregation function $\mathcal{X}$ in Line 156 have the same meaning as $\mathcal{T}$ in Line 155? 1.2. The description in Lines 157-163 does not match the illustrated pipeline in Figure 2, which shows that $\mathcal{A}_r$ takes $e_i$ as a side input (with a concatenation operation? This is also confusing. I think $e_i$ is concatenated with $c_i$ before being input to $\mathcal{A}_r$). How exactly can the Retrace Encoding $e_i$ be obtained? Where are the reconstruction features $\hat{m}_i$ mentioned in Line 163 used? Is $\hat{m}_i$ actually the $f_i$ in Figure 2? However, $f$ has been utilized to represent the basic feature in Line 155. I suggest adding more essential symbols and legends in Figure 2 to align with the descriptions.
Similar things happen in Appendix C, Algorithm 1. 1.3. Besides, it is not recommended to over-define some ''spaces'' that are not utilized in the rest of the article, such as ''retrace space $\mathbb{S}_r$'' (Line 159 retracte -> retrace? Line 160 $\mathbb{S}_f$ -> $\mathbb{S}_r$?) and ''mixed space $\mathbb{S}_x$''. These may introduce typos and are unhelpful in understanding the paper. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. In Section 3 of the problem setup, the traditional 3D Object Retrieval setting contains no multi-level labels, right? Bringing in this concept together with the proposed multi-level labels without explanation is unsuitable and may lead to confusion. 2. The authors construct four datasets for evaluating their framework in their proposed task. However, as listed in Appendix B, each dataset is equipped with only 3 coarse categories according to the shape of the objects, which raises my concerns about the efficiency of the proposed hierarchical semantic graph in more complex situations. Are there any other possible coarse categories to be included in real scenes? 3. As the proposed framework is trained in a two-stage scheme, it would be better to include a comparison of model complexity and training/inference times. 4. Will the proposed datasets be released? Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors have mentioned the potential limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Response for Reviewer vaz9** We sincerely thank you for the valuable comments and advice, which provided important guidance for the presentation of this paper and clarified the direction for future work. 1. **About the writing and symbols (Answer for Weakness 1)**: We apologize for these typos. We will conduct a thorough review and revision of the entire paper to ensure the clarity and rigor of the writing. 1.1 *(For Weakness 1.1)* The typos $\mathbf{F}^k$ in line 152 and $\mathcal{T}$ in line 155 should be correctly written as $f^k$ and $\mathcal{X}$, which denote the basic features of the $k$-th modality and the aggregation function, respectively. 1.2 *(For Weakness 1.2)* We have restructured the data flow in the HRE module for each object: $\{ f^k_i \}^M_{k=1} \rightarrow \mathcal{A}_m \rightarrow \mathcal{X} \rightarrow c_i$, $(c_i+e_i) \rightarrow \Psi \rightarrow r_i \rightarrow \Phi \rightarrow \hat{m}_i$, where $e_i$ is an encoding of the coarse label, a learnable vector of the same dimension as $c_i$. Specifically, the retrace encoding is implemented as an nn.Parameter for each coarse category; objects within the same coarse label share the same retrace encoding. During computation, it is element-wise added to $c_i$ and then input to $\mathcal{A}_r$. $\hat{m}_i$ is used solely for loss calculation in the HRE module, using coarse labels for supervision to guide the accuracy of the retrace embedding representation. It does not participate in the computations of the SET module. Based on your suggestions, we have revised the pipeline diagram and added more symbols, as shown in Fig. R2 of the rebuttal PDF. 1.3 *(For Weakness 1.3)* We removed the presentation of spaces in lines 157-163 to enhance the readability of the paper.
Specifically, we revise this paragraph as follows: *...then $\mathcal{A}_r$ compresses the unified embedding $c_i$ aligned with $e_i$ into the retrace embedding $r_i$ and performs the reverse reconstruction $\hat{m}_i$ for supervision...* 2. **About the traditional 3D object retrieval method (Answer for Question 1)**: Traditional 3D object retrieval methods, whether closed-set [1-3] or open-set [4][5], consider only single-layer labels of objects. Besides, traditional open-set methods strictly assume no overlap between the training and testing sets [5][6]. However, in practical real-world scenarios, objects are typically described by multiple hierarchical labels, and the training set and testing set often share a partial space of coarse labels. As shown in Tab. R1 of the rebuttal PDF, we expand the number of label levels in the semi-open learning task, where testing categories are unseen at one level but seen at other levels. The label spaces are disjoint at only one level and have some overlap at other levels. [1] Gao Y, et al. 3-D object retrieval and recognition with hypergraph analysis[J]. IEEE TIP, 2012. [2] He X, et al. Triplet-center loss for multi-view 3d object retrieval[C]. IEEE CVPR, 2018. [3] Collins J, et al. Abo: Dataset and benchmarks for real-world 3d object understanding[C]. IEEE CVPR, 2022. [4] Zhou Z. Open-environment machine learning[J]. National Science Review, 2022. [5] Feng Y, et al. Hypergraph-based multi-modal representation for open-set 3d object retrieval[J]. IEEE TPAMI, 2023. [6] Parmar J, et al. Open-world machine learning: applications, challenges, and opportunities[J]. ACM Computing Surveys, 2023. 3. **About the efficiency of graphs (Answer for Question 2)**: This paper is an early exploration of semi-open learning. Therefore, we selected three typical categories for the geometry-based coarse label, which is significantly different from the semantic-based fine category.
These three coarse categories and two levels of hierarchical labels are representative of a semi-open environment, helping us focus on exploring the new semi-open learning task and designing a novel collaborative learning paradigm based on hierarchical correlations. Specifically, when the number of coarse categories increases, a natural implementation is to construct a hypergraph structure with more hyperedges to capture more complex correlations. This involves addressing challenges such as the complex network representation associated with multiple labels while balancing complexity, efficiency, and performance. To tackle these challenges, we have preliminarily experimented with a hypergraph-based isomorphism computation method for structure compression and complexity reduction, inspired by [7]. Besides, we have also developed a hypergraph-based dynamic system approach to manage the dynamic increase in label categories and levels, inspired by [8]. [7] Feng Y, et al. Hypergraph isomorphism computation[J]. IEEE TPAMI, 2024. [8] Yan J, et al. Hypergraph dynamic system[C]. ICLR, 2024. 4. **About the computational requirements comparison (Answer for Question 3)**: Our experiments are conducted on a computing server with one Tesla V100-32G GPU and one Intel(R) Xeon(R) Silver 4210 CPU @ 2.20GHz. We provide a detailed comparison of model parameters, training time, and inference time for the two stages in Tab. R2 of the rebuttal PDF. 5. **About the open access to datasets (Answer for Question 4)**: Thanks for your interest in our work. We are well prepared and will release the datasets, code, configurations, and pre-trained models immediately after the anonymous review period of NeurIPS 24. We also look forward to engaging and collaborating with more researchers on both theoretical and applied studies of semi-open learning across different fields. Additionally, we are willing to share our experiences on this (OpenReview) or other open-source platforms.
Thank you again for your valuable suggestions, especially your professional advice on future work in semi-open learning. --- Rebuttal Comment 1.1: Comment: I'm glad to see the authors' efforts in their rebuttal. Considering most of my concerns have been addressed, I'd like to increase my score to 8. Good luck. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your positive feedback and professional comments on our work. Your valuable suggestions have been crucial in improving the quality of our paper. We will carefully revise the manuscript according to your review comments and ensure the rigor of the experimental results and references.
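As an editorial sketch of the retrace encoding described in answer 1.2 above (one learnable nn.Parameter vector per coarse category, shared by all objects with that coarse label and added element-wise to the unified embedding $c_i$), the following minimal PyTorch module illustrates the mechanism. The class name, initialization scale, and tensor shapes are assumptions for illustration, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class RetraceEncoding(nn.Module):
    """Sketch of the retrace encoding from answer 1.2: one learnable
    nn.Parameter vector per coarse category; objects sharing a coarse
    label share the same encoding, which is added element-wise to c_i."""

    def __init__(self, num_coarse: int, dim: int):
        super().__init__()
        # one learnable encoding e per coarse category (small random init assumed)
        self.encodings = nn.Parameter(torch.randn(num_coarse, dim) * 0.02)

    def forward(self, c: torch.Tensor, coarse_labels: torch.Tensor) -> torch.Tensor:
        # look up each object's coarse-label encoding and add it to c_i
        return c + self.encodings[coarse_labels]
```

The resulting $(c_i + e_i)$ would then be fed to the retrace auto-encoder $\mathcal{A}_r$, as in the data flow described in the rebuttal.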
Summary: The paper introduces a novel framework called the Hypergraph-Based Hierarchical Equilibrium Representation (HERT) for semi-open 3D object retrieval. The proposed framework addresses the practical scenario of semi-open environments where the training and testing sets share a partial label space for coarse categories but are completely disjoint for fine categories. The HERT framework comprises two main modules: Hierarchical Retrace Embedding (HRE) and Structured Equilibrium Tuning (SET). The authors also generate four semi-open datasets to benchmark their approach and demonstrate its effectiveness through extensive experiments. Strengths: - **Novel Framework**: The introduction of the HERT framework for semi-open 3D object retrieval is a novel contribution that fills a gap in the current literature. &nbsp; - **Hierarchical Approach**: The use of hierarchical labels to better capture the multi-level semantics of 3D objects is innovative and aligns well with real-world scenarios. &nbsp; - **Comprehensive Experiments**: The authors conducted extensive experiments on four newly generated datasets, providing strong empirical evidence of the effectiveness of their approach. &nbsp; - **Clear Problem Definition**: The paper clearly defines the semi-open environment and distinguishes it from traditional open-set and closed-set scenarios. Weaknesses: - **Complexity**: The proposed framework is quite complex, which might make it difficult for practitioners to implement and extend. &nbsp; - **Lack of Baseline Comparisons**: While the paper compares HERT against state-of-the-art methods, more diverse baseline comparisons, including simpler methods, could provide a clearer picture of the improvements. &nbsp; - **Citation**: Please add citations for the methods you compared against in the tables (Tab. 1, 2, 3). Technical Quality: 3 Clarity: 2 Questions for Authors: - How does the HERT framework perform in scenarios with more than three levels of hierarchical labels? 
&nbsp; - What are the computational requirements for training and deploying the HERT framework, especially in terms of time and resources? &nbsp; - Can the proposed method be extended to other domains beyond 3D object retrieval, such as text or image retrieval? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors have provided discussions about the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Response for Reviewer oLzP** We sincerely thank you for the valuable comments and advice, which provided important guidance for the presentation of this paper and clarified the direction for future work. 1. **About the framework (Answer for Weakness 1)**: Based on your suggestions, we have restructured the presentation of the proposed framework. Specifically, **the proposed framework consists of two sequentially connected modules**: HRE and SET. a) The HRE module takes basic features of different modalities as input. This module employs two sets of auto-encoders sequentially to achieve modality fusion and category space retrace, and generates unified embeddings and retrace embeddings for each object. b) The SET module takes the two types of embeddings from the last module as input, utilizing structure-aware feature smoothing and distillation through hypergraph convolution and memory bank reconstruction, respectively. Finally, this module generates the final features for similar object matching based on feature distance, thereby enabling retrieval. 2. **About more comparisons (Answer for Weakness 2)**: Inspired by your suggestions, we added three simpler methods as additional baselines to make our experiments more comprehensive. We provide the additional experimental results in Tab. R3 of the rebuttal PDF. From the results, we can observe that the low performance of simpler methods like MLP and GCN demonstrates the complexity of the semi-open environment and the necessity of research in semi-open learning. The significant improvement achieved by our method also proves its effectiveness. We will include these results and analyses in the revised version of the paper. 3. **About citation and writing (Answer for Weakness 3)**: We have added citations for the compared methods as shown in Tab. R3 of the rebuttal PDF, and we have made similar revisions for the tables in the paper. 4.
**About the level of labels (Answer for Question 1)**: This paper is an early exploration of semi-open learning. Therefore, we selected the two typical levels of labels based on different criteria: geometry-based coarse shape category and semantic-based fine category, which is representative of a semi-open environment. This two-layer framework is also a typical implementation of the collaborative learning paradigm based on hierarchical correlations. When the levels of hierarchical labels increase, a natural implementation is to construct more Retrace Auto-Encoders, and each auto-encoder is designed to retrace one level of the category. This involves addressing challenges such as domain adaptation associated with multiple levels while balancing complexity, efficiency, and performance. Specifically, we have preliminarily experimented with a hypergraph-based isomorphism computation method to address the increase in parameters brought by higher levels, inspired by [1]. Additionally, we have developed a hypergraph-based dynamic system approach to manage the increasing number of labels at each layer inspired by [2]. [1] Feng Y, et al. Hypergraph isomorphism computation[J]. IEEE TPAMI, 2023. [2] Yan J, et al. Hypergraph dynamic system[C]. ICLR, 2024. 5. **About the computational requirements (Answer for Question 2)**: Our experiments are conducted on a computing server with one Tesla V100-32G GPU and one Intel(R) Xeon(R) Silver 4210 CPU @ 2.20GHz. We provide a detailed comparison of model parameters, training time, and inference time for the two stages in Tab. R2 of the rebuttal PDF. 6. **About the data extension (Answer for Question 3)**: As shown in #line 152-153 and Appendix C Algorithm 1, the proposed HERT framework is a feature-driven framework and exclusively relies on the input of basic features, rather than utilizing raw data through the end-to-end approach. 
This feature-driven representation approach preserves extensibility to other common multimedia data such as text, audio, video, and 3D. We believe this paper can provide a general theoretical foundation and methodological reference for the application of multimedia retrieval in practical real-world scenarios. We will release the datasets, code, configurations, and pre-trained models immediately after the anonymous review period of NeurIPS 24. We also look forward to engaging and collaborating with more researchers on both theoretical and applied studies of semi-open learning across different fields. Additionally, we are willing to share our experiences on this (OpenReview) or other open-source platforms. Thank you again for your valuable suggestions, especially your professional advice on future work in semi-open learning. --- Rebuttal Comment 1.1: Title: Further reply Comment: Thanks to the authors for answering my questions and for their efforts. Most of my concerns are addressed well. I raise my score to 6. Good luck :) --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your positive feedback and professional comments on our work. Your valuable suggestions have been crucial in improving the quality of our paper. We will carefully revise the manuscript according to your review comments and ensure the rigor of the experimental results and references.
Summary: This paper introduces a more practical Semi-Open Environment setting for open-set 3D object retrieval with hierarchical labels, in which the training and testing sets share a partial label space for coarse categories but are completely disjoint for fine categories. A novel framework, HERT, is proposed for this task. The HRE module is designed to overcome the global disequilibrium of unseen categories. Besides, the SET module is designed to utilize more equilibrium correlations among objects and generalize to unseen categories. Furthermore, four semi-open 3DOR datasets are generated with multi-level labels for benchmarking. The proposed method achieves good performance. Strengths: This paper is easy to read. This paper targets an interesting problem, open-set 3D object retrieval. The proposed method is also interesting. Weaknesses: The proposed method is a little simple, making the technical contribution unclear. Maybe the writing should be improved to highlight the technical insights. This paper introduces a new task. It would be better to discuss the difference between this new task and existing ones in the technical aspect. It would also help if some promising research directions for this new task could be provided. Technical Quality: 3 Clarity: 3 Questions for Authors: Please clarify the technical insights. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The writing can be improved. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Response for Reviewer YDem** We sincerely thank you for the valuable comments and advice, which provided important guidance for the presentation of this paper and clarified the direction for future work. 1. **About the technical contribution (Answer for Weakness 1 and the Questions)**: a) **A new general task for real-world practical machine learning**. We demonstrated the limitations of naive open-set learning tasks and methods through experiments and proposed a more practical semi-open learning task. Additionally, we constructed four semi-open datasets for benchmarking. b) **An early-explored paradigm and framework for semi-open learning**. Specifically, we proposed the Hypergraph-Based Hierarchical Equilibrium Representation (HERT) framework, including the Hierarchical Retrace Embedding (HRE) and the Structured Equilibrium Tuning (SET) modules, which are designed to overcome the distribution disequilibrium and confusion of unseen categories in semi-open 3D object retrieval. c) **A flexible high-order structure for semi-open learning**. We propose a superposed hypergraph structure to capture high-order correlations among objects, under the guidance of local coherent correlations and global entangled correlations from hierarchical category information. d) **Extensive experiments and analysis**. Experimental results on the four datasets demonstrate that our method can outperform state-of-the-art retrieval methods in the semi-open environment. 2. **About the task comparison (Answer for Weakness 2)**: We discuss the difference between this new task (semi-open learning) and existing ones (open-set learning) in Tab. R1 of the rebuttal PDF. Existing open-set learning methods consider only the single-layer labels of objects and strictly assume no overlap between the training and testing sets [1][2].
However, in practical real-world scenarios, objects are typically described by multiple hierarchical labels, and the training set and testing set often share a partial space of coarse labels. We expand the number of label levels in the semi-open learning task, where testing categories are unseen at one level but seen at other levels. The label spaces are disjoint at only one level and have some overlap at other levels. [1] Zhou Z. Open-environment machine learning[J]. National Science Review, 2022. [2] Parmar J, et al. Open-world machine learning: applications, challenges, and opportunities[J]. ACM Computing Surveys, 2023. Thank you again for your valuable suggestions, especially your professional advice on presentation and future work in semi-open learning.
Summary: This paper introduces a Semi-Open Environment setting for open-set 3D object retrieval, addressing the limitation of existing methods that only consider single-layer labels and assume no overlap between training and testing sets. The authors propose the Hypergraph-Based Hierarchical Equilibrium Representation (HERT) framework, which includes the Hierarchical Retrace Embedding (HRE) module to balance representations across multi-level categories and the Structured Equilibrium Tuning (SET) module to handle feature overlap and class confusion through high-order correlations in a superposed hypergraph. They also create four semi-open 3D object retrieval datasets with hierarchical labels to benchmark their approach. Experimental results show that their method effectively generates and generalizes hierarchical embeddings of 3D objects, outperforming current state-of-the-art retrieval methods in semi-open environments. Strengths: I generally appreciate the study angle around the fine-grained structure across different 3D object categories, which could bring more insights for related 3D research. Overall, the proposed architecture is composed of reasonable components along with reasonable loss functions. Motivated by the potentially contradictory optimization in open-set learning, they propose the semi-open 3D object retrieval task. To examine the performance, they also design 4 datasets based on existing 3D object datasets. The proposed framework outperforms all the baselines. Weaknesses: I have several major questions and concerns: - What do the exact coarse labels mean in Figure 1? I failed to see the relationship among solids of revolution, rectangular-cubic, and helicopters. I doubt the meaningfulness of the proposed multi-level labels for 3D objects: the coarse level is too coarse to serve as an intermediate level. Can the authors show qualitative visualizations of several 3D objects that share the same coarse-level label but have different fine-grained labels?
I'd like to check the results with random samples from SO-ESB, SO-NTU, SO-MN40, and SO-ABO, respectively. Generally, for each dataset, randomly sampling 5 sets of objects with different fine-grained-level labels but the same coarse-level label would be very helpful. - The proposed HERT framework is a bit ad hoc: if we have multiple levels (more than 2 levels) shared across a large number of 3D objects, we will need more levels of auto-encoders given the current design logic. - As mentioned in lines 448-449, some of the 3D objects are thrown away due to the improper design of coarse labels. I fail to see how the use of basic geometric shapes fits today's 3D vision research. For example, Objaverse(-XL) has a lot of complex objects consisting of multiple different basic geometric shapes. Those "complex" objects may be of interest for retrieval tasks. --- Here are some minor points: - How do you determine the coarse label if one object actually contains multiple separate simple shapes? - 3DOR is first mentioned in #line15 without its full form. This is unfriendly to readers who are not familiar with this direction. - In Figure 6 (b), there are some overlapping fine-grained objects. Could you please show some? For example, in the left brown ones, they are mixed with yellow and light-purple dots; in the top pink group, there is a green dot. Technical Quality: 3 Clarity: 2 Questions for Authors: Please address the concerns raised above. To be specific, I'd like to see a total of 25 randomly sampled hierarchy examples from the proposed SO-ESB, SO-NTU, SO-MN40, and SO-ABO. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The current version discusses limitations in #line 311~314. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Response for Reviewer nF63** We sincerely thank you for the valuable comments and advice, which provided important guidance for the presentation of this paper and clarified the direction for future work. 1. **About coarse labels and datasets (Answer for Weakness Major 1, 3, Minor Q1, and the Questions)**: The coarse labels in Figure 1 denote the basic shape of the object as a whole. As shown in Fig. R1 of the rebuttal PDF, we provide examples from the four datasets, each with three coarse classes and five fine classes. We annotate the coarse labels according to the geometry-based shape of each object as a whole (\#line 242-243, \#line 447-449), while ignoring the part assembly relationships within an object in this paper. This paper is an early exploration of semi-open learning. Therefore, we selected two typical levels of labels based on different criteria: the geometry-based coarse shape category and the semantic-based fine category, which are representative of a semi-open environment. As shown on the right side of Fig. R1, the objects that were removed during dataset construction are those with multiple separated parts that cannot be considered as a whole. We believe that the use of these corner-case samples would intertwine other issues such as graphics and foundation models, and would not reflect the key problem of contradicted hierarchical labels in semi-open learning. Therefore, we decided to exclude them from this early exploration and will address these more complex cases with multiple levels of labels in future work. Additionally, we provide the splits of some datasets in Answers 5-7. 2. **About multiple levels (Answer for Weakness Major Q2)**: As an early exploration of semi-open learning, we believe this paper should focus on exploring the new semi-open learning task and designing a novel collaborative learning paradigm based on hierarchical correlations.
Based on this paradigm, the HERT framework is implemented with two layers in this paper, aiming to use the two most representative layers to validate the necessity and performance of machine learning research in typical semi-open environments. However, one of our future work directions is to extend the HERT framework to encompass more intertwined factors in complex semi-open environments. This involves addressing challenges such as domain adaptation associated with multiple levels while balancing complexity, efficiency, and performance. Specifically, we have preliminarily experimented with a hypergraph-based isomorphism computation method to address the increase in parameters brought by higher levels, inspired by [1]. Additionally, we have developed a hypergraph-based dynamic system approach to manage the increasing number of labels at each layer, inspired by [2]. [1] Feng Y, et al. Hypergraph isomorphism computation[J]. IEEE TPAMI, 2023. [2] Yan J, et al. Hypergraph dynamic system[C]. ICLR, 2024. 3. **About failure cases (Answer for Weakness Minor Q3)**: We provide the visualization of failure cases in Fig. R3 of the rebuttal PDF. In these failure cases, the query objects (bench, TV stand, bookshelf) and the incorrectly matched target objects (mantel, laptop) share a certain similarity in their shapes and belong to the same coarse category (Rectangular-Cubic Prism). Although the significant performance improvement of the HERT framework demonstrates the necessity of research in semi-open learning and the effectiveness of our method, these corner cases also indicate the necessity of utilizing finer-level information such as the part assembly of objects. This issue is the same as the multiple-levels issue mentioned in Weakness Major Q2. However, this paper focuses more on the fundamental differences brought by semi-open hierarchical labels. Therefore, we only consider two layers of labels in this study.
As mentioned in Answer 2 above, we are currently conducting research to address these more complex environments. Thank you for your keen observations and academic insights. 4. **About the writing (For Weakness Minor Q2)**: Thanks for your thorough review and suggestions. We will conduct a comprehensive review of the entire paper, especially focusing on the use of abbreviations. 5. **Splits of SO-NTU**:

5.1 Train
- Rectangular-Cubic Prism: headstone, table square
- Solids of Revolution: ball, ballon, cannon, watch
- Miscellaneous: book, plant with pot, cold weapon stick, frame, gun pistol, plane delta wing, plant leaf

5.2 Test
- Rectangular-Cubic Prism: Bed, truck, chair common, sofa, man, chair, computer, container, tank
- Solids of Revolution: fish, bottle, knife, cup, helmet, hydrant, insect fly, insect polypod, missle, orchestral, pen, plane backswept wing, plane forwardswept wing, tree, weed, ring, table round, hammer, screwdriver, wheel, zeppelin
- Miscellaneous: dinosaur, dog, duck, tetrapods, bird, car common, chair swivel, chess, chip, clock, cold weapon long, sword, cycle bike, cycle moto, door, gun musket, gun submachine, human stand, floorlamp, table lamp, giant, helicopter, straight wing, flower, galleon, ship modern

6. **Splits of SO-MN40**:

6.1 Train
- Rectangular-Cubic Prism: table, night stand, sink, monitor
- Solids of Revolution: Glass box, flower pot
- Miscellaneous: Keyboard, airplane

6.2 Test
- Rectangular-Cubic Prism: Mantel, tv stand, desk, sofa, bed, bookshelf, chair, bathtub, wardrobe, dresser, radio, piano, bench, xbox, range hood
- Solids of Revolution: Bottle, bowl, cup, stool, vase, cone, tent
- Miscellaneous: Toilet, curtain, car, guitar, stairs, door, person, laptop, plant, lamp

7.
**Splits of SO-ABO**:

7.1 Train
- Rectangular-Cubic Prism: table
- Solids of Revolution: tent
- Miscellaneous: Mirror, Plant or flower pot

7.2 Test
- Rectangular-Cubic Prism: chair, cart, shelf, cabinet, dresser, bed, bench, ladder, sofa
- Solids of Revolution: exercise weight, container or basket, vase, ottoman, pillow
- Miscellaneous: picture frame or painting, lamp, fan

Thank you again for your valuable suggestions, especially your professional advice on future work in semi-open learning. --- Rebuttal 2: Comment: Dear Reviewer, We would greatly appreciate any updates or feedback you might have regarding our responses to your initial comments. Your insights are valuable to us as we work to improve our paper. If you need any additional information or clarification from our side, please don't hesitate to let us know. Thank you for your time and consideration. --- Rebuttal Comment 2.1: Title: Reviewer response to rebuttal Comment: I greatly appreciate the efforts made by the authors. The additional qualitative results and failure cases will help readers better understand the manuscript. I have carefully reviewed all of your responses and the rebuttal. Since the authors emphasized the contribution of the new semi-open learning task, **I would respectfully ask the AC to evaluate this contribution, while my comments will primarily focus on the technical contributions and any factual errors**: - I still believe the proposed coarse-label partition is too coarse to be practical. - For example, in SO-ABO, an ottoman is categorized under Solids of Revolution; however, many ottomans are cube-shaped. - Plants with pots are currently categorized under Miscellaneous, but many pots have a Solids of Revolution-like shape. - Several basic object categories, such as airplanes, chairs, and toilets, are partitioned under Miscellaneous. As the authors did not respond to the Objaverse cases, I suspect a lot of objects in Objaverse would be categorized into Misc, which makes no sense.
- The proposed HERT framework appears to be ad hoc in its current design of two-level coarse labels. The authors mentioned that they (1) experimented with a hypergraph-based isomorphism computation method to address the increase in parameters associated with higher levels and (2) developed a hypergraph-based dynamic system approach to manage the increasing number of labels at each layer; however, these details are not included in the rebuttal files. --- Rebuttal 3: Title: Part I/II of the Response to Reviewer nF63 Comment: Thanks again for your valuable suggestions. We will respond to your comments separately in the following two text boxes due to space limitations. 1. **About *ottoman* in SO-ABO and *plants with pots* in SO-NTU** - For the *ottoman*: Since the shapes of the ottomans are typically composed of curved surfaces and are closer to ellipsoids, we removed the few cube-shaped samples from the ottoman category in the ABO dataset when constructing the SO-ABO dataset, and kept only the ellipsoid-like samples for the experiment. - For the *plants with pots*: The samples in this category are not pots but rather a variety of plants with diverse and unusual shapes (such as Lavender, Snake Plant, Jasmine, Spider Plant, Aloe Vera, etc.), along with their pots, which also have different shapes. Therefore, we classified these objects under Miscellaneous in this paper. **We will provide more examples and explanations, especially for these categories, in the revised version.** 2. **About the Objaverse and Miscellaneous Category** Thank you for the reminder, and we apologize for omitting the necessary emphasis and analysis for Objaverse in our first round of responses. Objaverse is an outstanding work, and we believe that it is one of the most important and practical datasets in the 3D vision field in recent years. This dataset provides a richer quantity of objects, finer categories, and diverse domains, with a greater diversity of 3D shapes.
It is an essential resource for further exploring more complex and practical semi-open learning. **We will provide a detailed analysis comparing our datasets with Objaverse, a discussion of Objaverse's irreplaceable role in semi-open learning, and the necessary references [1] in the revised version of this paper.** [1] Matt Deitke, Ruoshi Liu, Matthew Wallingford, Huong Ngo, Oscar Michel, Aditya Kusupati, Alan Fan, Christian Laforte, Vikram Voleti, Samir Yitzhak Gadre, Eli VanderBilt, Aniruddha Kembhavi, Carl Vondrick, Georgia Gkioxari, Kiana Ehsani, Ludwig Schmidt, Ali Farhadi. Objaverse-xl: A universe of 10m+ 3d objects[C]. Annual Conference on Neural Information Processing Systems (NeurIPS), 2023. However, our work represents an early exploration of the semi-open environment. To explore this objectively existing environment, we have made a preliminary construction of the four datasets (SO-ESB, SO-NTU, SO-MN40, and SO-ABO). Within the context of these datasets, we used coarse labels such as Rectangular-Cubic Prism and Solids of Revolution to describe two categories of objects with typical coarse shapes. For other objects in the datasets that lack typical shape features, we temporarily classified them under the Miscellaneous category. The experimental results demonstrate that even with these simple coarse labels, the semi-open environment exists objectively and our method achieves significant improvements. We believe that it is necessary in this paper to focus on modeling this new environment setting, analyzing the essential key challenges while filtering out the effects of other atypical or random noise. We acknowledge that conducting a comprehensive study of an entirely new field may be difficult within the scope of a conference paper. Inspired by your comment, we will focus our future work on more hierarchical levels of labels in semi-open learning.
We plan to dedicate more effort to exploring Objaverse and other complex datasets, investigating various shape, domain, and semantic labels beyond the shape-based coarse labels proposed in this paper. This will allow us to better adapt to more complex objects and advance the practical applications of semi-open learning. We are well prepared and will release the datasets, code, configurations, and pre-trained models immediately after the anonymous review period of NeurIPS 24. We also look forward to engaging and collaborating with more researchers on both theoretical and applied studies of semi-open learning across different fields. Looking forward to academic discussions with you after the anonymous period of NeurIPS 24, if possible! We are willing to share all our experiences, datasets, and code of this work. --- Rebuttal 4: Title: Part II/II of the Response to Reviewer nF63 Comment: 3. **About the Framework** As mentioned in the answer above, the two-layer HERT is designed based on the existing typical semi-open assumption, serving as an early exploration setting of semi-open learning. The hypergraph-based isomorphism computation and dynamic system are extended modules of the initial HERT framework for practical applications; we will provide more detailed experimental results and analysis in the revised version of this paper. Here, we provide some results and analysis: - **For the isomorphism computation** In order to handle more levels of labels, we simplified the hypergraph construction process using isomorphism computation following [2]. Specifically, we detect and merge hyperedges with similar structures or edge embeddings within the proposed HERT framework, thereby reducing the complexity of the hypergraph. To evaluate the efficiency of this approach, we conduct comparison experiments between frameworks with and without isomorphism computation on the SO-MN40 dataset.
As shown in the table below, retrieval accuracy (mAP) improved as the number of layers increased, but both training and inference times also increased. Isomorphism computation significantly improved the efficiency of training and inference, with more notable gains in efficiency and accuracy as the number of layers increased.

**Table R4: Ablation studies of isomorphism computation on the SO-MN40 dataset**

| Number of layers | 2 | 3 | 4 |
| :--------------- |:----:|:----:|:----:|
| mAP *w/o* IC |0.6336|0.6441|0.6583|
| mAP with IC |0.6359|0.6483|0.6674|
| mAP improvement |0.36% |0.65% |1.38% |
| Training time (s) *w/o* IC |91.16 |95.79 |99.73 |
| Training time (s) with IC |89.31 |92.49 |94.97 |
| Training efficiency improvement |2.51% |3.44% |4.77% |
| Inference time (ms) *w/o* IC |18.42 |19.51 |20.76 |
| Inference time (ms) with IC |17.37 |17.74 |18.31 |
| Inference efficiency improvement |5.70% |9.07% |11.80%|

*w/o* denotes without; *IC* denotes the isomorphism computation module.

- **For the dynamic system** To handle the increasing number of labels, we employed a hypergraph-based dynamic system to incrementally construct the hypergraph following [3]. Specifically, we constructed hyperedges for new vertices of new samples and new labels and updated the hypergraph structure within the proposed HERT framework. To evaluate the efficiency of this approach, we conducted comparison experiments between frameworks with and without the dynamic system on the SO-MN40 dataset. As shown in the table below, retrieval accuracy (mAP) improved as the number of coarse categories increased, while an increase in the number of fine categories slightly decreased retrieval accuracy. Both increases in coarse and fine categories led to longer training and inference times. The dynamic system significantly improved the efficiency of training and inference, with more notable gains in efficiency and accuracy as the number of categories increased.
**Table R5: Ablation studies of the dynamic system on the SO-MN40 dataset**

|Number of categories|Original HERT|Coarse: 3->4|Coarse: 3->5|Fine: 32->40|Fine: 32->49|
| :--------------- |:----:|:----:|:----:|:----:|:----:|
|mAP *w/o* DS |0.6336|0.6395|0.6427|0.6295|0.6253|
|mAP with DS |- |0.6431|0.6478|0.6331|0.6327|
|mAP improvement |- |0.56% |0.79% |0.57% |1.18% |
|Training time (s) *w/o* DS |91.60 |93.83 |95.31 |93.98 |95.57 |
|Training time (s) with DS |- |91.45 |91.61 |91.37 |91.59 |
|Training efficiency improvement |- |2.53% |3.88% |2.78% |4.17% |
|Inference time (ms) *w/o* DS |18.42 |19.74 |20.01 |19.93 |21.36 |
|Inference time (ms) with DS |- |18.45 |18.39 |18.57 |18.43 |
|Inference efficiency improvement |- |6.50% |8.10% |6.82% |13.72%|

*w/o* denotes without; *DS* denotes the dynamic system module; Original HERT means the original framework designed for 3 coarse categories and 32 fine categories. '->' denotes the increase in category number.

- **Conclusion** The experimental results above demonstrate that isomorphism computation and the dynamic system can effectively enhance the efficiency of the HERT framework, and they have the potential to advance the practical application of semi-open learning methods. We will provide more detailed results and analysis in the revised version of this paper. Thank you again for your valuable suggestions, especially your professional advice on practical applications of semi-open learning.
Rebuttal 1: Rebuttal: We thank all reviewers for your insightful feedback and for your valuable time and effort. We address all the questions and weaknesses raised by each reviewer in the rebuttal sections below. The attached PDF contains our additional experimental results and figures. Pdf: /pdf/25cd80128441175bfbfbd46dc280f2b4739803d2.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Generative Modelling of Structurally Constrained Graphs
Accept (poster)
Summary: The authors proposed ConStruct, a graph generative framework that enables hard-constraining graph topological properties that hold upon edge deletion throughout the entire sampling trajectory. Specifically, the authors model the forward (data-to-prior) process using an edge-absorbing noise model, and they predict the reverse edge-insertion process using a GNN projector. Strengths: **S1.** The proposed method is the first constrained discrete graph generative framework. The hard-constrained graph generation task is challenging and meaningful. **S2.** The authors provided a theoretical guarantee for the generation quality of ConStruct. **S3.** The authors improve the sampling efficiency using the incremental constraint satisfaction algorithm in the spirit of curriculum learning. Weaknesses: **W1.** ConStruct is only applicable to edge-deletion invariant properties (Definition 3.1). Thus, it cannot be applied to hard-constrain more complicated and general graph properties, e.g., chemical properties for molecules. However, this limitation does NOT weaken the contribution of this paper and should be considered a future research direction of hard-constrained graph generation. **W2.** The projector module (Figure 2 and Algorithm 3) introduces intractability into likelihood estimation, making it hard to estimate the likelihood of the graph instances generated by ConStruct. **W3.** ConStruct seems to be an autoregressive method, which means that the sampling complexity is proportional to the graph size. In contrast, a diffusion-based method generates the whole graph and keeps refining it along the generative trajectory. Thus, it seems that ConStruct takes more generative steps for large graphs (w.r.t. the number of nodes and edges) when compared to diffusion-based models. I recommend the authors assess the time complexity and efficiency of this model compared to diffusion-based methods.
**W4.** It seems that the authors did not compare their method against the two mentioned (Page 3, lines 88-89) discrete-diffusion graph generative models, EDGE [1] and GraphARM [2]. By the way, I recommend the authors cite the latest or published versions rather than the arXiv versions. [1] Xiaohui Chen, Jiaxing He, Xu Han, and Li-Ping Liu. 2023. Efficient and degree-guided graph generation via discrete diffusion modeling. In Proceedings of the 40th International Conference on Machine Learning (ICML'23), Vol. 202. JMLR.org, Article 181, 4585–4610. [2] Kong, L., Cui, J., Sun, H., Zhuang, Y., Prakash, B.A., & Zhang, C. (2023). Autoregressive Diffusion Model for Graph Generation. *International Conference on Machine Learning*. Technical Quality: 3 Clarity: 4 Questions for Authors: **Q1.** Please clarify the definition of $\Delta^b$ in Page 5, line 171. **Q2.** During the reverse sampling process, how do you guarantee the existence of a feasible $G^{t-1}$ among the candidates induced by discarding some newly added edges from $G^t$ to $\widehat{G}^{t-1}$ (or equivalently, that the intersection of $\mathcal{C}$ and $\mathcal{G}^{t-1}$ in Theorem 1 is not empty)? If there is no feasible candidate, will the generative procedure be prematurely exited? **Q3.** Following **W2**, I notice that there is a seeming discrepancy between the training and sampling processes. The effect of the projector is absent during the training process. I hope the authors can add some analysis and clarification on why the edge predictor trained without the projector can still provide satisfactory samples. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: The authors adequately discussed the limitations of the proposed method. They discuss in detail the performance limitations of their approach in Appendix H.1. They also point out potential further improvements and extensions to their approach in Section 5.
Finally, they discuss computational efficiency and scalability of the proposed method in Section 3.5 and Appendices D.2 and D.3. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging the importance of the topic under consideration and for the insightful comments. We address the reviewer's concerns in the following points: **W1**: We agree with the reviewer and also envision the extension to joint node-edge constraints, e.g. valency constraints in molecular generation, as an exciting future direction. We expand on this topic in item 3 of the global rebuttal. **W2**: We remark that, despite the lack of a tractable evidence lower bound for the likelihood, the proposed projector pushes ConStruct to remarkable empirical performance, outperforming unconstrained models. **W3**: Even though ConStruct allows for constrained generation in a similar manner to autoregressive models, it is actually a diffusion-based framework. We thoroughly analyse its complexity and efficiency in section 3.5 and appendix C. Based on this, we provide a more explicit comparison in terms of complexity to the underlying discrete diffusion methods in item 1 of the global rebuttal. From this comparison, we are able to conclude that ConStruct is a scalable method. **W4**: For EDGE, we do compare for planar and tree datasets (Table 1). We did not test it in the digital pathology setting because we only picked "the methods that, besides DiGress, attain non-zero V.U.N. for the planar dataset in Table 1, which we consider as a proxy for performance in the digital pathology datasets due to the structural similarities between the datasets" (ll. 310-312, pg. 9). In fact, the digital pathology dataset is even more challenging since it contains attributed planar graphs. For GraphARM, unfortunately the authors do not report results on any of the used datasets, nor do they provide their code (to the best of our knowledge), which makes meaningful comparisons difficult. We appreciate the reviewer's suggestion to update citations and have addressed this.
**Q1**: $\Delta^k$ denotes the $k$-simplex: $$ \left\lbrace \left(\lambda_0, \lambda_1, \ldots, \lambda_k\right) \in \mathbb{R}^{k+1} \mid \lambda_i \geq 0 \text{ for all } i, \text{ } \sum_{i=0}^k \lambda_i = 1 \right\rbrace. $$ We added this clarification to the notation paragraph and we corrected its usage in l.171 accordingly (to $\Delta^{b-1}$). **Q2**: If all the edges newly proposed by the diffusion model (inserted from $G^t$ to $\hat{G}^{t-1}$) lead to constraint violation, then the graph remains the same as in the previous step, $G^{t-1}=G^t$ (so the intersection of $\mathcal{C}$ and $\mathcal{G}^{t-1}$ is trivially non-empty). We do not exit the reverse process early in such a case because the denoising network takes as input both $G^t$ and the timestep $t$. Therefore, even if for a given timestep $t$ all the proposed edges are rejected, the model's predicted probabilities can change in the next reverse step, since the model now has a different input (still the same graph $G^{t-1}$($=G^t$) but the timestep is $t-1$). Note that the case where the diffusion model does not propose any new edge (as we do not lower- or upper-bound the number of edge insertions at each step) falls within that case as well. Additionally, we remark that the diffusion model outputs a probability distribution over graphs, from which we sample the proposed graph. As a result, the proposed edges at each reverse step are not deterministic, even with the same inputs (thus outputs) to the model. **Q3**: In training, as the edge-absorbing noise model only removes edges, the training noisy graphs necessarily remain within the domain of graphs that satisfy the desired edge-deletion invariant property. Therefore, the denoising network is only trained for graphs that satisfy the desired property. In sampling, even though the posterior term of the edge-absorbing noise model ensures that the reverse is an edge-insertion process (this is a direct implication from using eq. 6 in eq.
5), it does not guarantee that the successive graphs remain within the domain of graphs which satisfy the desired property. In fact, since the denoising neural network is not a perfect edge predictor and the edges are sampled independently, sometimes we obtain intermediate graphs that are outside of such domain, inducing out-of-distribution predictions for the denoising neural network at the next step. To counter this, the projector is used to push the graph generative trajectory back to the domain where the model was actually trained. Note that the projector is only used to correct the trajectory when it gets out of the desired domain and does not interfere whenever the trajectory is within domain. Therefore, the projector is actually reducing the discrepancy between training and sampling processes and promoting in-distribution prediction for the denoising neural network, resulting in improved performance both in terms of sample quality and validity. These results are aligned with the intuition provided in other constrained diffusion models for continuous state-spaces that also empirically validated the benefit of promoting the match of distributions between training and sampling [1,2,3]. [1] - Lou, Aaron, and Stefano Ermon. "Reflected diffusion models." ICML, 2023. [2] - Fishman, Nic, et al. "Metropolis Sampling for Constrained Diffusion Models." NeurIPS, 2024. [3] - Liu, Guan-Horng, et al. "Mirror diffusion models for constrained and watermarked generation." NeurIPS, 2024. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed responses. I hope the authors will thoroughly scrutinize the manuscript to ensure the notation system is correct and self-contained. I raise my score to 7 because my Q2 is fully addressed. The setting is new and important, and the proposed method is sound. The logic flow of the paper is smooth. The theoretical analysis is adequate.
I cannot give a higher rating because this paper is restricted to 'edge-deletion-invariant constraints', which is not general enough. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their time and the score update. We remain open to any further clarifications.
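The projector behaviour discussed in Q2 and Q3 above can be illustrated with a minimal sketch: newly proposed edges are tried one by one in random order, an insertion is kept only if the edge-deletion-invariant property still holds, and if every proposal is rejected the graph is returned unchanged (so the reverse process never exits early). The names below are our own, and acyclicity (checked with a union-find) stands in for the general property check of the paper's Algorithm 3; this is an illustrative sketch, not the authors' implementation.

```python
import random

def project(current_edges, proposed_edges, satisfies, seed=0):
    """Greedy projector sketch: keep a proposed edge only if the
    constrained property still holds after inserting it."""
    candidates = list(proposed_edges)
    random.Random(seed).shuffle(candidates)  # random insertion order
    edges = list(current_edges)
    for e in candidates:
        if satisfies(edges + [e]):
            edges.append(e)
    return edges  # unchanged if every proposal was rejected

def is_forest(edges):
    """Acyclicity via union-find: an edge joining two vertices already in
    the same component would close a cycle."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

# (2, 0) would close the cycle 0-1-2 and is rejected; (2, 3) is kept.
kept = project([(0, 1), (1, 2)], [(2, 0), (2, 3)], is_forest)
print(kept)  # [(0, 1), (1, 2), (2, 3)]
```

Because rejected edges are simply dropped, the result here is the same regardless of the shuffled order, which matches the rebuttal's point that the graph is left untouched whenever no proposal is admissible.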
Summary: The paper presents a novel diffusion model, ConStruct, to generate graphs that follow certain pre-specified properties. ConStruct involves an edge-absorbing forward process and a projected edge-addition reverse process to sample graphs that satisfy pre-specified constraints. The novelty of their method comes from a simple random-sampling-based projection algorithm that samples constrained graphs at each diffusion step. Experimental results show that ConStruct can generate more realistic synthetic graphs that have a pre-defined constraint for the whole domain. They also show its applicability to real-world pathology graphs by leveraging the fact that they tend to be planar. Strengths: - The paper is well-written and easy to read. - Experimental results are comprehensive for the synthetic graph datasets and use all representative metrics. - The paper also provides results on a real-world pathology dataset to highlight how structurally constraining the diffusion model is useful once we can establish some structure on the real-world graphs. In particular, they consider the planarity of the breast cancer pathology cell graphs. - ConStruct provides efficient ways to solve the otherwise NP-hard problem of projecting onto hard constraints such as acyclicity and planarity. The sampling time of ConStruct is on par with its unconstrained counterparts, which is quite impressive. Weaknesses: - It is not clear how applicable such domain-level constraints will be for real-world graphs that do not have a well-defined constraint. While the motivating example of pathology cell graphs is appreciated, it is difficult to see how it generalizes. - The idea is similar to PRODIGY, which can be applied to any diffusion model and constraint, including discrete models in the latest version [1] (which can be ignored but is worthwhile to mention).
Upon ignoring the minor requirement of an edge-absorbing forward process, ConStruct can be seen as an approximation of the projection operator where the distance is calculated as a GED instead of the Euclidean distance between the adjacency matrices. Thus, the novelty of the proposed method is limited. - Furthermore, the authors claim that PRODIGY distorts the underlying diffusion process even though PRODIGY takes a fractional step to the closest noisy graph that satisfies the given constraint. On the other hand, ConStruct finds a random graph that satisfies the constraint and takes a full step in that direction. Thus, it seems intuitively that ConStruct distorts the process more than PRODIGY, as opposed to the authors' claims. - The major novelty of the method comes from the proposed projection algorithm that tries to circumvent the NP-hardness by iteratively adding a random edge if it satisfies the constraint, which can be seen as a simple randomized greedy algorithm, which is quite well-studied in the discrete optimization literature. Theorem 1 in this vein is a bit trivial since it is expected that the optimal graph will belong to a randomly-edited set but it is not clear how easily it will be sampled for an arbitrary constraint. In the absence of this analysis, the proposed projection algorithm is not suitable for application. - While the authors compare against SPECTRE, they do not provide an elaborate discussion with this important related work that proposed this problem of including structurally-constrained generative models. - The proposed framework is limited to edge-deletion invariant (or more formally, downward-closed) constraints while the PRODIGY framework can theoretically handle a larger range of constraints including box constraints. - ConStruct is limited to non-attributed structural constraints due to the edit distance formulation and thus, cannot be applied to important molecular constraints. 
[1] Sharma, Kartik, Srijan Kumar, and Rakshit Trivedi. "Diffuse, Sample, Project: Plug-And-Play Controllable Graph Generation." Forty-first International Conference on Machine Learning. 2024 Technical Quality: 3 Clarity: 3 Questions for Authors: See above weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors do not adequately discuss the limitations of their work and the potential negative impacts of their work. It will be really useful to elaborate on the limitations of their work, e.g., limited to structural constraints of a certain kind. Furthermore, since they are using a pathology dataset of breast cancer studies, the authors must discuss the potential negative societal impacts of their analysis, particularly when the "planar" generated graphs are used as a data augmentation tool for the downstream application for cancer detection or such. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the detailed comments and pertinent questions. We address the raised concerns below: **W1**: We acknowledge that our model is tailored for the task of constrained generation, which requires the explicit and unambiguous definition of the constraint, as reflected in the paper's title. Despite not being universal, we believe our framework is quite general. It proves effective for various properties in structurally constrained synthetic graphs and digital pathology, outperforming unconstrained models, and can also be used to guide unconstrained models (e.g., generating acyclic molecules, App. G.2). Additionally, we (non-exhaustively) enumerate several real-world applications beyond digital pathology where essential constraints are well-defined and edge-deletion invariant (ll. 144-147, pg. 4). **W2**: Both works indeed address similar tasks. However, they differ fundamentally: PRODIGY offers efficient guidance of pre-trained models by relaxing adjacency matrices to continuous spaces (implicitly imposing an order between states) and finding low-overhead projections there. Yet, it does not guarantee constraint satisfaction and faces a trade-off between performance and constraint satisfaction due to mismatched train/sample distributions. In contrast, ConStruct, while also suitable for guided generation (App. G.2), is designed for constrained generation. It treats adjacency matrices as discrete, where there are no efficient solutions for projections into arbitrary subclasses of graphs (e.g., maximum planar subgraph is NP-hard). This results from the lack of inherent ordering between states and the combinatorial nature of the domains of valid graphs. Nonetheless, ConStruct ensures constraint satisfaction with matched distributions, thereby improving performance over unconstrained models, and maintains efficiency through incremental algorithms and blocking edge hash tables.
Thus, ConStruct is not an approximation of continuous-space projections and we believe it to be a full contribution in itself. **W3**: Our mention of the potentially compromised smoothness of the diffusion process refers to the thresholding step required by PRODIGY to convert the continuous adjacency matrix back to a discrete one, either at the end of the diffusion process (continuous diff.) or at each reverse step (discrete diff., a variant not available at submission time). This step can disrupt smoothness as it is sensitive to the implicit ordering between states, may not yield the most appropriate discrete matrix for the original continuous relaxation, and does not guarantee constraint satisfaction. Despite notably reducing the distortion of the diffusion process, PRODIGY's fractional constraint enforcement does not address this drawback. In contrast, ConStruct operates directly on discrete state-spaces and avoids this limitation. ConStruct adjusts a graph from the previous reverse step by inserting the property-preserving newly proposed edges, which are typically very few, promoting the smoothness of the process. Although edges are chosen in random order in the default implementation, Section H.3 empirically shows that this approach is not inferior to methods that order edge insertion based on the likelihoods predicted by the diffusion model. **W4**: Theorem 1 provides useful guarantees about the proposed projector and helps in understanding the problem's complexity. We agree that analyzing how often the projector retrieves the optimal graph is valuable; indeed, Theorem 2 shows that for the acyclicity constraint, the proposed projector *always* retrieves the optimal graph. Table 3 provides counter-examples demonstrating that this is not true for planarity, maximum degree, and lobster components.
Since ConStruct generates new edges based on the diffusion model and not randomly, characterizing this subset to obtain more refined results (e.g., probabilistic bounds) for arbitrary properties is far from trivial. Finally, we believe that ConStruct is indeed well-suited for application as it shows significant performance improvements, high efficiency, and guaranteed property satisfaction. **W5**: We did not initially consider SPECTRE as closely related because it is GAN-based and it addresses unconstrained generation by producing eigenvectors and eigenvalues of the Laplacian and generating graphs conditioned on these. We did not find steps handling constraints, apart from the auxiliary eigenvector refinement matrices construction (must belong to the special orthogonal group). We would be happy to discuss SPECTRE’s relevance if the reviewer clarifies why it is particularly pertinent. **W6**: While ConStruct does not cover all PRODIGY constraints (e.g., molecular properties, see W7), it does handle a meaningful subset, including all of those tested on non-attributed datasets (edge count, triangle count, maximum degree). Crucially, ConStruct also handles combinatorial constraints (e.g., planarity, acyclicity), which PRODIGY cannot. **W7**: ConStruct does not handle node feature-dependent constraints found in molecular generation. However, molecular generation is dominated by autoregressive models due to their ability to perform validity checks at each step with minimal overhead [1]. The possibility of node ordering in molecules (via canonical smiles) explains this success. In contrast, ConStruct is designed for settings where node ordering is not suitable, such as digital pathology, focusing on purely structural constraints. 
While integrating node-dependent constraints into ConStruct is an exciting prospect, we believe that our focus on structural constraints does not diminish our contribution in a fundamentally distinct setting, as highlighted by Reviewer FDH7 in W1. **Limitations**: We appreciate the reviewer's suggestion and have elaborated on the specified points in the final manuscript (items 3 and 4 of global rebuttal). Overall, we believe we have addressed the limitations of our work adequately, a view also supported by Reviewer FDH7 --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal! However, I don't agree with the authors' views on certain topics and hope we can discuss and reach a conclusion. **Motivation:** The major problem I have with this is that the structural constraint of the underlying graphs must be known beforehand. As noted, this helps in the case of synthetic graphs which are specifically formed of such a constraint and maybe some generalizations can happen for specific use-cases as identified by authors for pathology graphs and planarity. However, the major benefit of graph generation is that it can approximate a distribution just from data. What this paper proposes is to include an additional constraint that identifies the underlying distribution. This is good if the proposed method is motivated as a way to simulate synthetic NP-hard structural distributions, which takes time to generate otherwise. However, in the absence of such a clear motivation and an expectation to generalize to real-world graphs, it is not clear how the users would be able to identify structural constraints of arbitrary graph distributions before training. **Comparison with SPECTRE:** SPECTRE trains generative model by explicitly conditioning the eigenspectrum of the training data. This explicit conditioning on the underlying distribution is similar to the idea of explicitly constraining on the structural constraint of planarity and acyclicity that the current paper looks at. 
For example, if we have graphs with 2 connected components, it can be inferred from the number of zero eigenvalues of the Laplacian in the training graphs. However, this discussion is absent in the current work but is extremely important. **Comparison with PRODIGY:** This paper also requires further discussion that the authors have not acknowledged. The difference is not in continuous vs discrete or combinatorial vs not. PRODIGY constraints are also inherently combinatorial, as opposed to what the authors are claiming. They have particularly considered P-space matroid constraints while ConStruct focuses on NP-hard constraints. However, more importantly, the difference is that ConStruct is a discrete diffusion model trained to generate graphs given a structural constraint on the distribution, while PRODIGY aims to do plug-and-play controllable graph generation to satisfy arbitrary constraints. The authors have been inconsistent in claiming (with the guidance comment) that they may be addressing the same problem as PRODIGY; if so, such an experimental validation would be essential to show. **Contribution of the method:** The theorems that assist the proposed method are not quite generalizable. Theorem 1 barely shows that a satisfiable constrained graph can be sampled, which is also possible through simple random sampling. Theorem 2, on the other hand, is specific to a single constraint. I agree that any such theoretical result will be extremely hard to prove for a general graph constraint. However, in that case, the authors should not overclaim their contributions in terms of constraint satisfaction with discrete graphs. The experiments are strong enough to validate the method's usefulness. **Attributed settings:** I am not diminishing the contributions but I would like to keep my opinion that this is a major weakness in using the method in real-world settings. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the detailed reply to our rebuttal.
We address the raised concerns below: **Motivation**: We agree with the reviewer that the flexibility provided by unconstrained graph generation of mimicking any data distribution is extremely convenient. However, in many real-world scenarios, this flexibility alone is not sufficient to yield satisfactory performance without incorporating additional priors into the generative model. This approach is particularly relevant in data-scarce settings — such as due to high costs, ethical/privacy concerns, or a lack of high-quality annotated data — or where instances that do not adhere to specified constraints are either infeasible or lack physical meaning. Constrained generation is a valuable method for incorporating such priors by hard-constraining the hypothesis space to valid instances (in our case, graphs), thereby reducing the search space and potentially enhancing the efficacy of the learning process. We note, nevertheless, that just enforcing such a constraint does not identify the underlying distribution; instead, there is still the need to learn the underlying distribution within the constrained domain. As outlined in our rebuttal (W1), constrained generation indeed requires an explicit and unambiguous definition of the constraints. We understand the reviewer's concerns about identifying structural constraints *a priori*, but we do not view this as a limitation. Instead, constrained generation is a practical choice for settings where practitioners have established explicit constraints through expertise or problem exposure and wish to leverage them for a more effective learning process. Typical sources of such constraints include the physics of the problem, application domain constraints, or detailed knowledge about the data acquisition process. We highlight several real-world examples (some mentioned in the paper) where such prior knowledge is available and can be effectively applied using ConStruct, showcasing its versatility.
- *Planarity*: design of road networks [1], chip design [2], biochemistry [3] or digital pathology [4]. - *Acyclicity*: evolutionary biology [5] or epidemiology [6]. Additionally, if we consider the extension of discrete diffusion to directed graphs, e.g. [7], for which ConStruct is still applicable, there are several domains where the generation of directed acyclic graphs is critical: neural architecture search or Bayesian network structure learning [8], causal discovery [9], etc. - *Maximum Degree*: design of contact networks [10, 11]. **Comparison to SPECTRE**: Even though we still see some relevant differences between ConStruct and SPECTRE (such as: 1) ConStruct enforces constraints by default, while SPECTRE requires learning such dependencies even with explicit structural information; or 2) SPECTRE uses spectral properties to inform the model about global graph structure, whereas ConStruct can still be used to constrain generation for local-level properties, e.g., maximum degree), we agree with the reviewer that the latter is the first method, to the best of our knowledge, that recognizes the importance of explicitly incorporating structural information (other than locality biases from common GNNs) as powerful priors for the expressiveness of one-shot graph generative models. Therefore, we will add this discussion to our final version of the manuscript. We thank the reviewer for the constructive feedback. --- Rebuttal 2: Comment: [1] - Mercado, Rocío, et al. "Graph networks for molecular design." Machine Learning: Science and Technology 2021
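The edge-deletion invariance (Definition 3.1) underlying all of the constraints enumerated above can be checked mechanically on small examples. The sketch below is our own illustration (function names are ours, not the paper's code): it contrasts a maximum-degree bound, which is edge-deletion invariant because removing edges can only lower degrees, with connectivity, which is not, because removing an edge can disconnect the graph.

```python
def max_degree_at_most(k):
    """Edge-deletion invariant property: deleting edges only lowers degrees."""
    def prop(edges):
        deg = {}
        for u, v in edges:
            deg[u] = deg.get(u, 0) + 1
            deg[v] = deg.get(v, 0) + 1
        return all(d <= k for d in deg.values())
    return prop

def connected(nodes):
    """NOT edge-deletion invariant: deleting an edge can disconnect."""
    def prop(edges):
        adj = {n: set() for n in nodes}
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)
        seen, stack = set(), [next(iter(nodes))]
        while stack:  # DFS from an arbitrary node
            n = stack.pop()
            if n not in seen:
                seen.add(n)
                stack.extend(adj[n] - seen)
        return seen == set(nodes)
    return prop

def deletion_invariant_on(prop, edges):
    """Definition 3.1 checked on one graph: if the property holds, it must
    still hold after deleting any single edge."""
    if not prop(edges):
        return True  # vacuously satisfied for this graph
    return all(prop([e for e in edges if e != drop]) for drop in edges)

star = [(0, 1), (0, 2), (0, 3)]
path = [(0, 1), (1, 2)]
print(deletion_invariant_on(max_degree_at_most(3), star))  # True
print(deletion_invariant_on(connected({0, 1, 2}), path))   # False
```

This is exactly why ConStruct's edge-insertion reverse process can safely never revisit a constraint once satisfied in the forward (deletion) direction, while a property like connectivity would need a different mechanism.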
Summary: This paper presents ConStruct, a framework incorporating structural constraints into graph generation models using a discrete graph diffusion process. By introducing an edge-absorbing noise model and a projector operator, ConStruct ensures generated graphs meet specific properties like planarity or acyclicity, crucial for real-world applications. The framework significantly improves the validity of generated graphs, demonstrated through experiments on synthetic benchmarks and real-world datasets such as digital pathology, achieving up to a 71.1 percentage point increase in graph validity. It mainly focuses on addressing the challenge of integrating domain knowledge into graph generative models, enhancing their practical deployment. Strengths: 1. Maintaining the structural constraint of the generated graph is an important but challenging problem for diffusion-based graph generative models. 2. The design of the edge-absorbing noise model and projector operator is reasonable and technically sound. Especially the efficiency consideration of incremental validity checks. 3. The experimental results are good compared with recent SOTA baselines, including diffusion-based methods. The extensive results provided in appendix also well support the advantage of proposed method. 4. The presented evaluation on Digital Pathology Graph dataset is interesting, and the released dataset seems useful for future research. Weaknesses: 1. As explained by the authors, the proposed method can only deal with a specific type of structural constraint, i.e., edge-deletion invariance, but the discussion on the possible impact of this limitation is not well clarified. For example, what other normally-seen constraint 2. Lack of complexity analysis. Instead, only the runtime measurement is provided. 3. This method might not be able to deal with large graphs. Technical Quality: 3 Clarity: 3 Questions for Authors: Please answer my listed weakness above. 
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s constructive feedback and requests for clarification. We have addressed the raised questions below: **W1**: Examples of other purely structural constraints that ConStruct does not cover by default are edge-insertion invariant properties, *i.e.*, properties that hold upon edge insertion. This type of constraint can be useful, for instance, in the molecular generation domain, where we could impose constraints like "the generated graph should have at least one cycle" (we explore the inverse setting, acyclic molecular generation, in Appendix G.2). We note, nevertheless, that edge-insertion invariant properties could easily be captured by our framework through two simple modifications: designing the transition matrices with the absorbing state at the existing-edge state (instead of the no-edge state) and a projector that progressively removes edges (instead of inserting them) while preserving the desired property. This simple inversion of ConStruct would allow it to cover edge-insertion invariant properties. In item 3 of the global rebuttal, we discuss other meaningful constraints, even if not purely structural, that ConStruct does not currently cover but could be extended to as future work. We propose to explicitly add this discussion to the paper. Additionally, an example of a structural constraint that is completely outside the scope of ConStruct arises with stochastic graph models. Since the validity of these types of graphs is determined via statistical tests (thus, not deterministically), it is difficult to constrain generation in that setting. An example occurs in the SBM (Stochastic Block Model) graph generation benchmark dataset, where validity is checked using a Wald test [1]. To the best of our knowledge, constrained diffusion towards such graph structural properties remains an open problem. 
**W2**: Since the projector is the only component incurring overhead, ConStruct only affects the complexity of the sampling algorithm, leaving the efficiency of the training algorithm intact. As described in Section 3.5, through the use of a blocking edge hash table and incremental property satisfaction algorithms, we minimize the additional complexity imposed by the projector. The complexity analysis for the blocking edge hash table is provided directly in Section 3.5. The complexity analysis for all the edge-deletion invariant properties explored in the paper (planarity, acyclicity, lobster components, and maximum degree) can be found in Section 3.5 (planarity) and in Appendix C (all of them). We also provide a more explicit comparison between the underlying discrete diffusion algorithm and the overhead incurred by the projector in item 1 of the global rebuttal. We remark that, due to the combinatorial nature of edge-deletion invariant properties, each property satisfaction algorithm is specific to the property at hand. Therefore, there is no general efficient property satisfaction algorithm for all edge-deletion invariant properties, and we have to address each property on a case-by-case basis. **W3**: Item 1 of the global rebuttal details the minor overhead imposed by the projector within the diffusion framework. It also addresses the method’s (high) scalability relative to the underlying discrete diffusion framework. In particular, we demonstrate that as the generated graph size increases, the overhead of ConStruct becomes increasingly negligible compared to the underlying discrete diffusion framework. Therefore, as long as the underlying discrete diffusion framework scales, ConStruct remains a viable option. [1] - Martinkus, Karolis, et al. "SPECTRE: Spectral conditioning helps to overcome the expressivity limits of one-shot graph generators." ICML, 2022.
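The incremental property satisfaction and blocking-edge ideas discussed above can be sketched for the acyclicity case (a minimal illustration with hypothetical names, not the authors' code): a union-find structure answers each "would this edge close a cycle?" query in near-constant amortized time, consistent with the $V = O(1)$ cost per proposed edge, while the blocked-edge set ensures each candidate edge is tested at most once, so $N \leq n^2$.

```python
class AcyclicityProjector:
    """Sketch of an incremental edge projector for the acyclicity
    constraint (hypothetical names; the actual implementation may differ).
    Candidate edges are accepted one at a time; any edge whose insertion
    would close a cycle is permanently blocked."""

    def __init__(self, n_nodes):
        self.parent = list(range(n_nodes))
        self.blocked = set()  # plays the role of the blocking edge hash table

    def _find(self, u):
        while self.parent[u] != u:
            self.parent[u] = self.parent[self.parent[u]]  # path halving
            u = self.parent[u]
        return u

    def try_add(self, u, v):
        edge = (min(u, v), max(u, v))
        if edge in self.blocked:          # each edge is tested at most once
            return False
        ru, rv = self._find(u), self._find(v)
        if ru == rv:                      # u and v already connected: cycle
            self.blocked.add(edge)
            return False
        self.parent[ru] = rv              # safe: insert the edge
        return True
```

Calling `try_add` for each edge proposed by a reverse diffusion step either inserts the edge or blocks it, so the intermediate graph stays acyclic at every step.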
null
null
Rebuttal 1: Rebuttal: # Global Rebuttal We kindly thank all the reviewers for their time and valuable feedback on our work. As a brief overview, our paper presents ConStruct, the first graph constrained diffusion framework to fully run in the discrete setting. ConStruct guarantees the satisfaction of edge-deletion invariant constraints through the application of a specific edge-absorbing noise model and a new projector operator. These components ensure matched training and sampling distributions, thereby improving performance over unconstrained models in synthetic datasets and real-world applications. ___ In response to the reviewers' comments, we have updated our paper to include a series of clarifications and minor corrections to enhance the understanding of our work. We enumerate these updates below: 1) **Complexity Comparison**: in response to reviewers wnSX and FDH7, we provide a more explicit comparison between the underlying diffusion model and the overhead incurred by the projector in terms of complexity: "At each reverse step (out of a total of $T \approx 1000$ steps), the denoising network makes predictions for all nodes and pairs of nodes. This results in $O(n^2)$ predictions per step. Thus, the complexity of the sampling algorithm of the underlying discrete diffusion model is $O(n^2 T)$. In addition, the complexity overhead imposed by the projector is $O(N V)$. Here, $V$ represents the complexity of the property satisfaction algorithm, and $N$ is the number of times this algorithm is applied. So, in total, we have $O(n^2T + NV)$. Our analysis in Appendix C shows that incremental property satisfaction algorithms have notably low complexity. For instance, in cases like acyclicity, lobster components, and maximum degree, we have $V=O(|E_\text{added}|)$. Since the projector adds one edge at a time, we have $V=O(1)$. 
Additionally, since the blocking edge hash table limits us to perform at most one property satisfaction check per newly proposed edge (either we have never tested it or it is already blocked), $N$ corresponds to the total number of different edges proposed by the diffusion model across the whole reverse process. Thus, we necessarily have $N \leq n^2$. For these reasons, we directly find $O(N V) \ll O(n^2 T)$, highlighting the minimal overhead imposed by the projector compared to the discrete diffusion model. This explains the low runtime overhead observed for ConStruct, as detailed in Section D.3 (9\% for graphs of the tested size). A reasonable assumption is that the model inserts $N = O(|E|)$ edges throughout the reverse process. This is for example true if the model is well trained and predicts the correct graph. Besides, most families of graphs are sparse, meaning that $\frac{|E|}{n^2} \to 0$ as $n\to \infty$. For example, planar and tree graphs have to satisfy $|E| / n^2 = O(1/n)$. Therefore, we can conclude that asymptotically $O(n^2T + NV) = O(n^2T)$, *i.e.*, the projector overhead becomes increasingly negligible relative to the diffusion algorithm itself as the graph size increases. This further highlights the scalability of our proposed method." 2) **Notation and Citations**: in response to reviewer FDH7, we add the definition of $\Delta^k$ as the $k$-simplex to the notation paragraph of the paper and we updated our citations of GraphARM and EDGE to their published version; 3) **Limitations on Constraints**: in response to all reviewers, we provide more information regarding the limitation of ConStruct to edge-deletion invariant constraints: "In our work, we cover edge-deletion invariant properties. However, ConStruct can be easily extended to also handle edge-insertion invariant properties (*i.e.*, properties that hold upon edge insertion). 
This extension is particularly useful in domains where constraints such as having at least $n$ cycles in a graph can be important. To achieve this, we can simply "invert" the proposed framework: design the transition matrices with the absorbing state at an existing edge state (instead of the no-edge state) and a projector that removes edges progressively (instead of adding them) while conserving the desired property. In the particular context of molecular generation, Appendix G illustrates that, while purely structural constraints can guide the generation of molecules with specific structural properties (e.g., acyclicity), for general properties shared by all molecules (e.g., planarity) they are too loose. In contrast, autoregressive models thrive in such settings due to the possibility of molecular node ordering (e.g., via canonical SMILES) and the efficient incorporation of *joint node-edge* constraints. Therefore, even though it constitutes a fundamentally different setting from the one considered in this paper, incorporating joint node-edge constraints into ConStruct represents an exciting future direction for our work." 4) **Impact on Digital Pathology**: in response to reviewer ZwhZ, we extend the already existing impact statement (Appendix J) with a more detailed stance on the potential impact of our work for the particular setting of digital pathology: "For the particular case of the digital pathology setting, while the generated graphs are able to mimic clinically relevant structures, they remain too small to have any direct clinical impact. Pathologists use whole-slide images for informed decisions, whose corresponding cell graphs typically comprise a total number of nodes 3 to 4 orders of magnitude above the graphs generated at this stage." If deemed appropriate by the reviewers, we are also considering moving Appendix J to the main body of the paper. 
___ We reiterate our appreciation for the valuable feedback provided by the reviewers and hope that the updated version of the paper, along with the individual replies to each reviewer, have addressed the main concerns raised. We remain open to any further discussion.
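The asymptotic comparison in item 1 of the global rebuttal can be checked with quick arithmetic (a sketch under the sparse-graph assumption stated there; $T$ and the graph sizes are hypothetical values chosen only for illustration, with planarity's $|E| \leq 3n - 6$ bound standing in for $N = O(|E|)$ and $V = O(1)$):

```python
# Hypothetical numbers illustrating O(n^2 T) diffusion cost vs. O(N V)
# projector overhead under the sparse-graph assumption (V = O(1)).
T = 1000                                  # reverse diffusion steps
ratios = []
for n in (50, 200, 800):                  # generated graph sizes
    diffusion_ops = n * n * T             # O(n^2 T) denoiser predictions
    projector_ops = 3 * n - 6             # planar graphs: N = O(|E|) <= 3n - 6
    ratios.append(projector_ops / diffusion_ops)
# the relative overhead shrinks roughly like 3 / (n * T) as graphs grow
```

This mirrors the conclusion above: the projector's share of the work vanishes as the graph size increases.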
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Optimal Transport-based Labor-free Text Prompt Modeling for Sketch Re-identification
Accept (poster)
Summary: This article extends the image-based person and vehicle re-ID task to the sketch domain. First, the authors introduce a novel sketch-based re-ID framework named OLTM, which utilizes text information to achieve modal alignment. Additionally, for sketch-based re-ID, the authors apply a VQA model to generate textual attributes of persons, thus avoiding costly annotation efforts. Furthermore, the authors introduce a novel triplet alignment loss to enhance distance calculation. Strengths: 1. The authors use a VQA model to generate attribute descriptions of pedestrians, effectively reducing the cost of manual labeling. 2. The method utilizes optimal transport theory and text prompts to improve model performance and designs a new triplet alignment loss. 3. The experimental results are convincing compared to the state of the art. Weaknesses: 1. Compared to CNN-based methods, using CLIP as a backbone may incur high computational complexity. 2. Some minor issues: the symbols for K, Q, and V in Eqs. 4 and 5 should be bolded. 3. The names in the paper and the supplementary material are inconsistent, i.e., HTL/TRL. 4. Figure 2 in the supplementary material is rather blurry. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Why is the ViLT-based VQA model used instead of others, like a BERT-based model? 2. Is there an error in Eq. 7? After checking the supplementary materials, my understanding is that the optimal transport matrix P* is a weight matrix, not a loss value. 3. Some details are not clear to me. It is unclear whether the process of obtaining text attributes occurs during model training or during pre-processing. According to the framework in Figure 2, this process seems to be executed during model training, which could result in complex model calls. 4. The experiments don't mention whether the method is compared with existing image-based re-identification methods. 
Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1: Computational complexity.** **A:** Please refer to **[Response to nGMK:W1, edWJ:W6, beT2:W1]** in **Common Response**. **W2, W3, W4: Minor issues.** **A:** Thank you for your thorough review and for highlighting these minor issues. We have revised the manuscript to address the mentioned issues. **Q1: Replaceability of VQA model.** **A:** Please refer to **[Response to edWJ:W7, beT2:Q1]** in **Common Response**. **Q2: $\boldsymbol{P}^\ast$.** **A:** Thank you for bringing up this question. Indeed, $\boldsymbol{P}^\ast$ represents a weight matrix. Here is the corrected formula: \begin{array}{c} \mathcal{L}\_{tal}(R\_i, S\_i) = [m - D(R\_i, S\_i) + D(R\_i, \hat{S}\_h)]\_{+} + [m - D(R\_i, S\_i) + D(\hat{R}\_h, S\_i)]\_{+}, \\\\ D(R\_i, S\_i) = \gamma E(R\_i, S\_i) + (1 - \gamma)(1 - \boldsymbol{P}^{\ast}\_{i,i})E(R\_i, S\_i) \end{array} *where, $\boldsymbol{P}^{\ast}$ is the optimal transport matrix with the least amount of cost, and $\boldsymbol{P}^{\ast}\_{i,i}$ represents the assignment weight of $(R\_i, S\_i)$ obtained after balancing the overall distribution.* **Q3: Details of text attribute acquisition.** **A:** Thank you for raising this question. The text attribute acquisition occurs **during the data processing stage** and does not lead to complex model calls during training. We have enhanced clarity in our manuscript to prevent such ambiguities. **Q4: Experimental comparisons.** **A:** Thank you for your constructive feedback. Based on task relevance, our adopted baselines are all image-based re-identification methods, categorized into visible-infrared and sketch-based methods. The former include DDAG [54], CM-NAS [55], CAJ [56], MMN [57], DART [58], DCLNet [59], DSCNet [60], and DEEN [61]. The latter comprises BDG [6] and UNIReID [7]. We have already given detailed descriptions of these experimental methods in the manuscript. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the reply. The authors answered most of my questions. 
Therefore, I maintain my positive rating. --- Reply to Comment 1.1.1: Comment: Thank you for your time and efforts. We are encouraged that your concerns have been addressed, and we greatly appreciate your positive feedback on our work.
Summary: This paper focuses on sketch-based person ReID. Specifically, a labor-free text prompt method with OT is proposed, which achieves a large performance improvement on two public databases. Overall, I think the idea of this paper is interesting and the experiments show its superiority. Strengths: 1. The proposed method is interesting. For instance, the authors introduce text without any labor cost to handle modal-gap removal. Also, OT is used to obtain local text representations of interest. 2. The performance on two datasets achieves remarkable improvement. Weaknesses: 1. Some grammar errors should be corrected. 2. More detailed analysis of OT and TPR should be provided. 3. From the paper, I don't quite understand the settings of the parameters. In the proposed method, what is the meaning of $\alpha$ and $\beta$? What is the relationship between them and the cost matrix $C$? Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In the related works, there is a subsection on optimal transport. What is the relationship of [26]/[27] and OT? 2. On Line 143 “represents” should be modified to “represent”. On page 6, P* should be bold, to be consistent with the above. 3. On line 185, the authors argue that the proposed strategy would not introduce any noise. So, how is this ensured? 4. In Eq. (7), it is better to give more description about how to calculate P*. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1, Q2: Minor issues.** **A:** Thanks for your nice comment. We have identified all errors and corrected them in the manuscript. **W2, W3, Q4: Optimal transport.** **A:** Thanks for your constructive comment. Due to space constraints, a detailed description of optimal transport (OT) and matrix $\boldsymbol{P^\ast}$ can be found in the supplementary materials. Essentially, the OT problem resembles a "supplier-demander" issue: $\boldsymbol{\alpha}$ denotes the quantity of goods each supplier can provide, $\boldsymbol{\beta}$ denotes the quantity of goods each demander requires, and the cost matrix $\boldsymbol{C}$ indicates the expense incurred when each supplier supplies one unit of goods to each demander. The objective is to find the assignment solution that minimizes costs, yielding the optimal transport matrix $\boldsymbol{P^\ast}$. We add the description about how to calculate the $\boldsymbol{P^\ast}$: *For an input positive pair $(R\_i, S\_i)$ in a mini-batch $x$, the feature representations obtained through model inference are denoted as $f(R\_i)$ and $f(S\_i)$. If the sets of all sample features from different modalities in $x$ are treated as two discrete distributions, their alignment can be considered an optimal transport problem. The cost matrix $\boldsymbol{\hat{C}}$ is derived from pairwise feature similarities: $\boldsymbol{\hat{C}}\_{i,j}=[f(R\_i)^{\top}f(S\_j)]\_+$. We aim to acquire the optimal transport matrix $\boldsymbol{P}^{\ast}$ with the least amount of cost, where $\boldsymbol{P}^{\ast}\_{i,j}$ represents the assignment weight of $(R\_i, S_j)$ obtained after balancing the overall distribution. 
TAL can be represented based on triplet loss as the weighted sum of the original distance and the optimal assignment distance, dynamically updated at a certain rate $\gamma$:* \begin{array}{c} \mathcal{L}\_{tal}(R\_i, S\_i) = [m-D(R\_i, S\_i)+D(R\_i, \hat{S}\_h)]\_{+} + [m-D(R\_i, S\_i)+D(\hat{R}\_h, S\_i)]\_{+}, \\\\ D(R\_i, S\_i) = \gamma E(R\_i, S\_i) + (1-\gamma)(1-\boldsymbol{P}^{\ast}\_{i,i})E(R\_i, S\_i) \end{array} *where $\hat{R}\_h=argmax\_{R\_j\neq{R\_i}}D(R\_j,S\_i)$ and $\hat{S}\_h=argmax\_{S\_j\neq{S\_i}}D(R\_i,S\_j)$ are the most similar negatives in $x$ for $(R\_i, S\_i)$, $[x]\_{+}=\max(x,0)$, and $E(R\_i, S\_i)=\Vert{f(R\_i)-f(S\_i)}\Vert\_2$ denotes the Euclidean distance between feature representations.* **Q1: The relationship with References [26]/[27].** **A:** Both papers [26-27] utilize optimal transport to address inter-modal distribution alignment in re-ID tasks. Ling et al. [26] select an optimal transport strategy and assign high weights to pairs with smaller intra-identity variation. Zhang et al. [27] first attempt to find an optimal transport matrix to re-weight the distances of different local parts in re-ID. These two papers inspired us to apply optimal transport theory for exploring more discriminative feature representations and establishing more reasonable standards for measuring sample distances in re-ID. **Q3: Noise.** **A:** Thank you for pointing out the ambiguity in this sentence. Compared to fixed handcrafted prompts, our dynamic knowledge learning mechanism does not rely on expert knowledge. This strategy can mitigate **additional noise caused by inaccurate sentence templates**, since our prompts include adaptable learnable components constrained by various losses. 
We have revised it in the manuscript: *"This integration introduces a dynamic knowledge learning mechanism that reduces noise introduction compared to handcrafted prompts, while enhancing flexible interaction across modalities and boosting the transferability of text embeddings."* --- Rebuttal Comment 1.1: Comment: Thank you for the author's reply. The detailed reply and analysis have dispelled my doubts about OT, TPR, and parameter settings. --- Reply to Comment 1.1.1: Comment: Thanks for your response and updating the rating. Your feedback is deeply valued and helps improve our paper.
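The "supplier-demander" balancing described in this rebuttal can be sketched with a standard entropic-regularized Sinkhorn solver (a generic illustration under our own assumptions; the paper's exact OT solver and hyperparameters are not specified in this exchange): given marginals $\boldsymbol{\alpha}$, $\boldsymbol{\beta}$ and cost matrix $\boldsymbol{C}$, it returns the transport plan $\boldsymbol{P}^{\ast}$.

```python
import numpy as np

def sinkhorn(alpha, beta, C, eps=0.05, n_iters=200):
    """Minimal entropic-regularized OT solver (a generic sketch, not the
    paper's implementation). alpha: supply marginal (n,), beta: demand
    marginal (m,), C: cost matrix (n, m). Returns the transport plan P*."""
    K = np.exp(-C / eps)            # Gibbs kernel
    u = np.ones_like(alpha)
    for _ in range(n_iters):
        v = beta / (K.T @ u)        # scale columns to match beta
        u = alpha / (K @ v)         # scale rows to match alpha
    return u[:, None] * K * v[None, :]
```

With uniform marginals $\boldsymbol{\alpha} = \boldsymbol{\beta} = \mathbf{1}/B$ over a mini-batch and a cost matrix derived from pairwise feature similarities, the diagonal entries $\boldsymbol{P}^{\ast}_{i,i}$ can then reweight the positive-pair distances as in the TAL equations above.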
Summary: This paper proposes an optimal transport-based labor-free text prompt network for sketch re-identification. The authors address two primary challenges: the expense of text annotation and cross-modal interaction, leveraging generated text attributes for multi-granularity modal alignment. The experimental results on two datasets achieve significant improvements. Strengths: This paper solves a practical problem and proposes a framework setting corresponding to the sketch re-identification task. The experimental results have effectively verified the effectiveness of this setting. If all the data and protocols are available, I believe it will be valuable for the research community. Weaknesses: The paper is interesting and the topic authors discussed is very promising, but I feel there are some doubts: 1. The author suggests that the classic triplet loss may lead to inaccurate estimates of sample distances and potentially result in suboptimal local minima. The proposed Triplet Assignment Loss aims to address this issue. How can this viewpoint be substantiated from experiments? 2. Comparisons with the most related method [1] are missing in TABLE 1 and TABLE 2. 3. Do the transformer blocks in feature enhancement module for different modalities share weights? 4. The description of pedestrians comes from VQA's answers to 9 questions. These questions typically cover details like color and gender. However, sketches used in testing often lack such explicit information. How does the model then prioritize these aspects? 5. The experimental results in Table 1 include "multiple-query". However, the paper does not provide a description of how this is achieved. 6. The paper compares the structural and performance differences between OLTM and other methods. However, it does not discuss whether the additional structural design increases the computational burden. Including relevant cost comparison experiments would greatly enhance the study. 7. Is the VQA model irreplaceable? 
I did not come across a statement like "VQA model is optional" in the paper. [1] Li H, Ye M, Zhang M, et al. All in One Framework for Multimodal Re-identification in the Wild[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 17459-17469. Technical Quality: 4 Clarity: 3 Questions for Authors: Major concerns: see Weakness. Minor concerns: a) In 4.1, the inference stage uses Rcls and Scls instead of r and s; the bolding of the formula needs to be checked. b) Add detailed descriptions of the cross-style retrieval experiment in the supplementary materials. Confidence: 5 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The author mentions that their proposed framework effectively tackles challenges like occlusion and perspective but still encounters misjudgments with highly similar but distinct individuals. I consider this a critical issue for collective resolution in advancing re-identification tasks. My positive evaluation of the research remains unchanged. I will, however, follow the authors' work and look forward to potential solutions for these challenges. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1: Triplet Assignment Loss.** **A1:** Thank you for your inquiry. We have included visual analysis in **Figure 1 of PDF**, which illustrates the convergence curve and sample distances during training. **Figure 1(a)** shows that conventional triplet loss converges prematurely. In contrast, our proposed triplet assignment loss exhibits higher volatility, reducing the risk of suboptimal local minima. Additionally, **Figure 1(b)** shows that a specific sketch sample (red box in the top left image) may have similar Euclidean distances to multiple RGB samples. The triplet assignment loss comprehensively considers the distribution of all samples (red box in the lower right image), offering broader possibilities for selecting the most relevant ones. **W2: Reference.** **A2:** We appreciate the reviewer’s recommendation. Inspired by this valuable work, we have added this reference in Sec. 2 of the manuscript: "... fine-grained interaction. **Furthermore, Li et al. [1] first attempt to conduct in-depth research on zero-shot multimodal ReID through a large foundational model.** In this paper, we ...". However, since the source code of this reference is not available and it mainly focuses on zero-shot learning, we cannot compare with it in our experiments. **W3: Weights sharing.** **A3:** Thank you for your question. As stated in Sec. 4.3, the Transformer blocks for different modalities do not share weights in the feature enhancement module. The cross-modal interaction component in front of this module shares weights, facilitating the extraction of common features between modalities. Consequently, the feature enhancement module enables each modality to autonomously refine its representation. This strategy ensures precise adjustments according to specific needs and enhances overall performance. **W4: The availability of text attributes.** **A4:** Thank you for raising this point. 
To verify the effectiveness of different text attributes, we have provided additional ablation experiments in Table 5 below. The results show a significant **decrease in model performance** after discarding several hard-distinguished attributes (e.g., color and gender) in sketches. As shown in **Figure 3 of PDF**, sketches convey gender-related information through factors like body shape, and the contrast between light and dark areas effectively highlights specific color details. The TPR module injects detailed information into modal interactions during training. This enables the model to focus on these nuances autonomously, even without TPR during inference.

*Table 5: The experiment results on various text attributes. "Gender", "Shirt" and "Pants" indicate whether the attributes include gender, shirt color and pants color.*

| Gender | Shirt | Pants | mAP | Rank@1 | Rank@5 | Rank@10 |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| ✓ | ✓ | ✓ | 62.55 | 69.48 | 90.36 | 95.18 |
| - | ✓ | ✓ | 62.28 | 68.07 | 89.96 | 95.18 |
| - | - | ✓ | 61.64 | 67.67 | 89.76 | 94.98 |
| - | - | - | 61.35 | 67.07 | 89.16 | 94.78 |

**W5: Multi-query.** **A5:** Thank you for your question regarding the experimental setup. Similar to [2], "multi-query" involves combining multiple sketches of the same ID during both training and inference. Our paper employs a straightforward fusion method by averaging the image features from multiple sketches. Table 6 below provides a comparative analysis of various fusion strategies. The results demonstrate that the basic and simple fusion method achieves the best experimental performance. 
*Table 6: Performance comparison of different multi-query experimental methods.*

| Methods | mAP | Rank@1 | Rank@5 | Rank@10 |
| ---- | ---- | ---- | ---- | ---- |
| Simple Fusion | 62.55 | 69.48 | 90.36 | 95.18 |
| Average Pooling | 60.95 | 66.27 | 88.15 | 94.38 |
| Non-local Attention | 60.98 | 65.66 | 90.16 | 94.98 |

**W6: Computational complexity.** **A6:** Please refer to **[Response to nGMK:W1, edWJ:W6, beT2:W1]** in **Common Response**. **W7: Replaceability of VQA model.** **A7:** Please refer to **[Response to edWJ:W7, beT2:Q1]** in **Common Response**. **Q1: Minor concerns.** **A1:** We apologize for this typing mistake. We have revised it in the manuscript and added the following note to the supplementary material: *"This section provides a detailed explanation of the cross-style retrieval. Given the extensive Market-Sketch-1K dataset, where each person is sketched by six different artists, notable variations exist across these sketches. Consequently, we devise this experiment to assess our model's resilience to diverse artistic styles. Experimental setups involve sketches labeled as S1 to S6, each originating from different artists. Models are trained on sketches by specific artists and tested on sketches by others. And, "single query" denotes separate queries for sketches by different artists of the same individual, while "multi query" indicates queries combining multiple sketches of the same person."* [1] Li H, Ye M, Zhang M, et al. "All in One Framework for Multimodal Re-identification in the Wild." *CVPR* 2024. [2] Lin K, Wang Z, Wang Z, et al. "Beyond Domain Gap: Exploiting Subjectivity in Sketch-Based Person Retrieval." *ACM MM* 2023. --- Rebuttal Comment 1.1: Comment: This is a good job, and the author has also answered my questions well. --- Reply to Comment 1.1.1: Comment: Thank you for recognizing our work and raising your score. Your suggestions significantly help improve the quality of our paper.
Summary: This article proposes a framework for pedestrian re-identification on sketch images based on optimal transport theory, which utilizes a pre-trained visual question answering model to address the current issues of high text annotation costs and a focus on only global feature representations. The framework uses a visual question answering model to automatically generate text prompts, and extracts coarse-grained features through interaction between the text encoder and the visual encoder. By combining clustering methods based on optimal transport theory with a multi-head attention mechanism, fine-grained features are extracted, achieving hierarchical and multi-granularity alignment and obtaining a relatively complete feature representation. In addition, a triplet loss function is designed that prioritizes regions with maximum local feature similarity, providing a more reasonable measure of local feature distance. Experiments on two public datasets show that the re-identification and generalization abilities of this framework are improved compared to previous methods. Strengths: (1) Reduced text annotation cost. Due to the significant modality differences between sketch images and real images, it is almost inevitable to add an intermediate modality to assist information exchange, and text is a very practical intermediate modality. Given the high cost of manual annotation, introducing a visual question answering model to supply intermediate modality information is highly effective. (2) Hierarchical and multi-granularity alignment helps to mine richer sample information. 
Traditional text prompts mainly focus on global features, while this model uses clustering methods based on optimal transport theory and multi-head attention mechanisms to process fine-grained information after extracting coarse-grained features, obtaining valuable local information. (3) The triplet loss function accounts for similarity differences between local features. The traditional triplet loss assigns the same weight to the differences between positive and negative samples, which may lead to inaccurate estimation of local feature distances. The framework proposed in this paper takes into account the distance differences between regions corresponding to fine-grained features, providing a more reasonable measurement. Weaknesses: (1) The computational cost is relatively high. Due to the introduction of a visual question answering model and multiple attention mechanisms, the computational resources required by the model are significantly higher than for previous models. (2) The model is prone to overfitting. During experiments, it was found that due to the small sample size of sketch datasets and the rich granularity and depth of the model's feature extraction, overfitting easily occurs, which also affects the model's generalization ability. (3) The accuracy of text prompts is strongly affected by image quality. The text prompts designed in this article include some fixed prompts and some learnable units. Since the accuracy of the text information directly affects the quality of feature extraction, the performance of pre-trained visual question answering models on the sketch modality is easily affected by images of different styles. Technical Quality: 2 Clarity: 3 Questions for Authors: (1) How is the number of learnable text prompt units added to a fixed text prompt vector determined? 
(2) Does the visual question answering model have the ability to describe feature information with genuinely fine-grained recognition power? (3) Does the experimental section compare against models with handcrafted text prompts? Would this method still have an advantage in re-identification performance over models whose manual text prompts already describe fine-grained features? (4) Regarding the use of optimal transport theory to remove irrelevant information: could recognition accuracy increase if the text description incorporated background information? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: (1) The performance of VQA models on sketch images will limit their feature-extraction ability, so it is necessary to evaluate whether the VQA model can provide fine-grained local feature descriptions for sketch images; (2) Misjudgments may still occur when re-identifying two individuals with similar appearances but different identities, so the text prompts may need to draw on more information, such as extracting content useful for the decision from the discarded background information. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
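For readers unfamiliar with the optimal-transport machinery this review refers to, a minimal, generic Sinkhorn sketch for entropically regularized transport is given below. This is not the paper's implementation; the cost matrix, marginals, and regularization strength are purely illustrative of how local features could be softly assigned to prototypes:

```python
import numpy as np

def sinkhorn(cost, a, b, eps=0.2, n_iters=500):
    """Entropically regularized OT plan between marginals a and b.

    Returns a plan P whose row sums equal a (exact after the final row
    scaling) and whose column sums converge to b.
    """
    cost = cost / cost.max()          # scale costs to [0, 1] for stability
    K = np.exp(-cost / eps)           # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)             # scale columns toward marginal b
        u = a / (K @ v)               # scale rows toward marginal a
    return u[:, None] * K * v[None, :]

# Toy use: softly assign 4 local features to 2 prototypes.
rng = np.random.default_rng(0)
feats, protos = rng.normal(size=(4, 8)), rng.normal(size=(2, 8))
cost = ((feats[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
P = sinkhorn(cost, np.full(4, 0.25), np.full(2, 0.5))
```

The resulting plan `P` concentrates mass on low-cost feature-prototype pairs while respecting both marginals, which is the basic mechanism behind OT-based clustering of local features.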
Rebuttal 1: Rebuttal: **W1: Computational complexity.** **A1:** Please refer to **[Response to nGMK:W1, edWJ:W6, beT2:W1]** in **Common Response.** **W2: Overfitting.** **A2:** We sincerely appreciate your valuable comments regarding overfitting, as this is also a concern for us. Consistent with the approaches in [1] and [2], we validate our model on two publicly available benchmark datasets, PKU-Sketch and Market-Sketch-1K, to ensure fair comparisons. To mitigate overfitting, we apply various data augmentation techniques, including random cropping, rotation, and style augmentation. Furthermore, to validate the model's robustness and generalization, we conduct supplementary evaluation experiments on two large-scale datasets (SYSU-MM01 and RegDB) for visible-infrared person re-ID, as shown in **Table 1 of PDF**. The results demonstrate that our OLTM achieves comparable performance in the visible-infrared domain. **W3: Image quality.** **A3:** Thank you for your insightful comments. 1) It should be clarified that the VQA model is only applied to RGB images, not to sketches. 2) In fact, image quality affects not only the attributes generated by the VQA model but also the performance of all visual tasks. Therefore, in addition to several data augmentation operations (e.g., random rotation, flipping, and style augmentation), we have also implemented the following strategies: 1) In contrast to the conventional method of directly generating a complete descriptive sentence, we use a "divide-and-conquer" strategy. By employing multiple questions to capture specific details (e.g., hair and shirt color), we effectively reduce potential noise in the text prompts. 2) Fixed attribute tokens are dynamically embedded into learnable prompts. In addition, applying various loss functions guides the model to prioritize reliable and valuable information. 
3) To further refine fine-grained text embeddings, a "bin" mechanism is added to categorize potentially ambiguous content during Dynamic Consensus Acquisition (as analyzed in Sec. 4.2). **Q1: The number of learnable text prompts.** **A1**: Thank you for your question. As analyzed in Section 4.2, we extract $k$ text attributes $\\{att_1, att_2,..., att_k \\}$ using the CLIP tokenizer to derive text tokens $a_i=Tokenizer(att_i)$, where each $a_i$ contains a fixed number $n$ of tokens. Subsequently, $a_i$ preserves its valid tokens and is fed into the CLIP token encoder to obtain the fixed attribute tokens $a_i \in{\mathcal{R}^{m\times{c}}}$. Here each word can be represented by $m$ valid tokens of dimension $c$ [3]. These fixed parts are uniformly embedded into $l$ learnable prompts of dimension $c$, and finally we obtain the final text description $q$. In our implementation, the value of $k$ is $9$, and we set $n=77$ based on CLIP's fixed configuration. Thus the number of learnable text prompts is $n-k\times{m}$. **Q2: The fine-grained recognition ability of the VQA model.** **A2:** Thank you for your valuable comment. The visual question answering (VQA) model is able to describe fine-grained recognition information for the following reasons: 1) The VQA model generates detailed attributes about various aspects of the pedestrian target (e.g., hair, backpack, hat), rather than relying on complete descriptive sentences. 2) We provide a visualization comparison, as shown in **Figure 4 of PDF**. This comparison demonstrates that using text attributes can effectively guide the model's attention. **Q3: The performance comparison with handcrafted prompts.** **A3:** Thank you for pointing out the incompleteness in our ablation study. Our method remains competitive compared to the handcrafted-prompt mechanism [4], as shown in Table 4 below. 
The performance improvements can be attributed to the following reasons: 1) We introduce a dynamic learnable prompt mechanism without requiring additional expert knowledge. This enhances the model's adaptability and robustness. 2) VQA is more flexible than handcrafted prompts; it can generate more detailed information (e.g., glasses, hats) than what is included in handcrafted prompts. *Table 4: Comparison of prompt setting methods. "Handcrafted" and "VQA" denote manually annotated and VQA-generated text attributes, respectively. "Template" represents the sentence template defined by experts. "Prompt" denotes the learnable text prompts.* | Handcrafted | VQA | Template | Prompt | mAP | R@1 | R@5 | R@10 | |-|-|-|-|-|-|-|-| | ✓ | - | ✓ | - | 61.46 | 68.07 | 89.96 | **96.79** | | ✓ | - | - | ✓ | 61.81 | 67.47 | **90.56** | 95.78 | | - | ✓ | ✓ | - | 61.76 | 65.46 | 90.16 | 96.18 | | - | ✓ | - | ✓ | **62.55** | **69.48** | 90.36 | 95.18 | **Q4: Background information.** **A4:** Thanks for your constructive advice. This motivates us to investigate the feasibility of incorporating background information into text descriptions. Accordingly, we conduct an additional background evaluation on the Market-Sketch-1K dataset based on our initial studies. Specifically, we formulate the question **'What is the background of this image?'** to extract textual attributes about the background. The extracted background details are illustrated in **Figure 2 of PDF**. However, introducing background information results in **a decrease of 2.81 in Rank-1 and 1.62 in mAP**. This decline can be attributed to the absence of corresponding background information in the sketches, which potentially interferes with the model's learning process. [1]Pang L, et al. "Cross-domain adversarial feature learning for sketch re-identification" *ACMMM* 2018. [2]Lin K, et al. "Beyond Domain Gap: Exploiting Subjectivity in Sketch-Based Person Retrieval" *ACMMM* 2023. [3]Radford A, et al. 
"Learning transferable visual models from natural language supervision" *ICML* 2021. [4]Lin Y, et al. "Improving person re-identification by attribute and identity learning." *Pattern Recognition* 2019. --- Rebuttal Comment 1.1: Comment: We sincerely want to know if our response has addressed your concerns. If you are willing, we would be eager to continue the discussion to better understand your thoughts.
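The token bookkeeping in A1 of the rebuttal above can be made concrete with a small sketch. The rebuttal fixes $n=77$ (CLIP's context length) and $k=9$, but does not state $m$, so the values of $m$ below are purely illustrative:

```python
def learnable_prompt_count(n=77, k=9, m=2):
    """Learnable prompt tokens remaining in an n-token context after
    embedding k fixed attribute tokens of m valid tokens each (n - k*m).

    n=77 and k=9 follow the rebuttal; m is an illustrative assumption.
    """
    assert k * m <= n, "fixed attribute tokens must fit in the context"
    return n - k * m

print(learnable_prompt_count())      # 77 - 9*2 = 59
print(learnable_prompt_count(m=3))   # 77 - 9*3 = 50
```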
Rebuttal 1: Rebuttal: We appreciate the reviewers' valuable time in providing constructive feedback. We have thoroughly reviewed the comments and made the necessary responses and corrections. If our responses do not fully address the reviewers' questions, we are open to further discussion. We are honored that the reviewers have recognized the practicality and value of our task **(edWJ, Bynq)**, as well as the key contributions of this paper: - Utilization of a visual question answering model to acquire text attributes, thereby reducing manual annotation costs **(nGMK, Bynq, beT2)**; - Implementation of hierarchical and multi-granularity alignment **(nGMK, edWJ)**; - Incorporation of optimal transport theory and text prompts **(nGMK, Bynq, beT2)**; - Introduction of a novel triplet alignment loss **(nGMK, beT2)**; - Effective validation of experimental results **(Bynq, edWJ, beT2)**. Considering that several reviewers have expressed similar concerns about our method, we provide a comprehensive answer for all reviewers here. **[Response to nGMK:W1, edWJ:W6, beT2:W1]: Computational complexity.** Thank you for your valuable comment. Our OLTM achieves a trade-off between performance enhancement and computational complexity. To this end, we select several methods for comparing parameters, floating-point operations (FLOPs), and frames per second (FPS), as shown in Table 2 below. The results show that OLTM achieves remarkable performance while maintaining reasonable computational costs. The reasons are: 1) only the TCA module is required for inference; 2) the VQA model is used only during data processing. *Table 2: The number of parameters, FLOPs, and FPS of different methods. 
VI denotes visible-infrared person re-identification.* | **Methods** | **Field** | **Backbone** | **Reference** | **Paras(M)** | **FLOPs(G)** | **FPS** | |-------------|-----------|--------------|---------------|--------------|--------------|---------| | DDAG[53] | VI | ResNet50 | ECCV'2020 | 95.6 | 5.2 | 13.2 | | CM-NAS[54] | VI | ResNet50 | ICCV'2021 | 24.5 | 5.2 | 14.8| | CAJ[55] | VI | ResNet50 | ICCV'2021 | 71.6 | 5.2 | 13.1 | | DEEN[56] | VI | ResNet50 | CVPR'2023 | 89.3 | 13.8 | 7.7 | | CCSC[9] | Sketch | ViT | MM'2022 | 203.9 | 383.8 | - | | BDG[6] | Sketch | ResNet50 | MM'2023 | 222.9 | 5.2 | 14.9 | | UNIReID[7] | Unified | CLIP | CVPR'2023 | 149.6 | 9.3 | 8.2 | | OLTM (Ours) | Sketch | CLIP | - | 181.9 | 6.2 | 11.7 | **[Response to edWJ:W7, beT2:Q1]: Replaceability of VQA model.** Thank you for bringing this issue to our attention. The VQA model is inherently substitutable. Essentially, any visual-language model which is capable of generating target attribute information from images can serve as an alternative. We also use other VQA models to demonstrate their substitutability, as shown in the Table 3 below. *Table 3: Performance comparison of different VQA models.* | **Methods** | **mAP** | **Rank@1** | **Rank@5** | **Rank@10** | |-------------|---------|---------|---------|----------| | Vilt | 62.55 | 69.48 | 90.36 | 95.18 | | Blip[1] | 62.63 | 67.87 | 91.37 | 96.79 | | GIT[2] | 62.33 | 68.47 | 90.76 | 96.59 | **Note: A PDF file** is attached in the common response. It contains **all the new figures** we used in the rebuttal phase. [1]Li J, Li D, Xiong C, et al. "Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation" *ICML* 2022. [2]Wang J, Yang Z, Hu X, et al. "Git: A generative image-to-text transformer for vision and language" *arXiv preprint* 2022. Pdf: /pdf/27184412205d2e44d1cc58f99b51f2534f155100.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Learning to Embed Distributions via Maximum Kernel Entropy
Accept (poster)
Summary: This paper focuses on distributional regression (classification), which carries out supervised learning over a collection of datasets, where one instance (subject to one label) is a dataset that can be considered a distinct empirical distribution. The paper proposes a novel learning objective, maximizing quantum entropy, to find a suitable embedding function f_{\theta} mapping all datasets onto a sphere. The proposed quantum entropy is a lower bound on the distributional variance, and optimizing this objective encourages reduced within-distribution variance and increased between-distribution variance for the given kernels, which intuitively leads to good separability among distributions. Empirical simulations show the potential of this objective for distributional supervised learning. Strengths: 1. The paper is well-written and well-organized; the mathematical notation is on point and easy to follow. I enjoyed reading the paper. 2. The theoretical results are concise and straightforward, providing enough motivation and context for the proposed framework. Weaknesses: 1. My major concern is the validity of the operation described in lines 302-303, where the unsupervised pretraining is done with the whole dataset rather than a subset. This seems to be a slight lack of rigor, and I hope the authors could address how MDKE would perform when the pretraining is done only on the training set. 2. The second weakness concerns the performance of MDKE in actual regression; for more detail, please refer to the questions section. At this point, the potential use cases of MDKE seem rather limited (distributional classification). Technical Quality: 3 Clarity: 3 Questions for Authors: As I am not an expert in distributional regression, I am unsure how important regression analysis is in this area. However, I do not see any empirical evaluation on anything that is technically regression (as opposed to classification). 
Moreover, the motivation behind the quantum entropy, as suggested in Figure 2a, is more aligned with classification (separability) than regression. Maybe the author could make a remark on this particular subject (regression), or provide a simple set of simulations on actual regression? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you very much for your thoughtful review. Regarding the topic of unsupervised learning datasets, the Leukemia experiment conducts unsupervised kernel learning on the entire set of available distributions without accessing the corresponding labels. In the experiments described in Appendix D, unsupervised kernel learning is performed on subsets of the data. To address your question about classification versus regression: as it is stated in the abstract, the primary goal of this work was to study **discriminative tasks**. While we believe that minimizing latent variance ultimately facilitates solving regression problems as well, the experimental verification of this hypothesis was not within the scope of the current work. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for the response. I have no more questions as all my previous questions were rather minor. I will keep my score of 6.
Summary: The authors consider the distribution regression problem. They propose to learn an embedding function (i.e., embedding support data points onto a sphere) and leverage the kernel embedding (into an RKHS) and kernel functions (within the RKHS). The authors propose an algorithmic approach to learn such embeddings via maximum kernel entropy in an unsupervised fashion. Empirically, the authors illustrate the advantages of the learned kernel with an SVM on Tissue and Leukemia classification. Strengths: + The authors propose an interesting framework to learn a kernel based on kernel embedding (into an RKHS), parameterized by an embedding function (onto a sphere), in an unsupervised fashion. + The proposed method is well motivated. The paper is well-organized and easy to follow, with a detailed description of the proposed framework. + The interpretation and illustration of the main concepts of the proposed method is also a plus. + The proposed learned kernel with SVM performs well in experiments. Weaknesses: + The experiments seem quite weak. The authors only evaluate on two small datasets (with N <= 50 distributions?) and on a classification task only (?). It would be better to evaluate on larger datasets (especially in our current era)! It would only be a plus to illustrate the method on a real distribution regression problem. + It would be better to report time consumption together with the performance numbers. + Another point is that the embedding onto a sphere is not well motivated (e.g., why a sphere and not other spaces? What about low-dimensional embeddings, or other criteria for the embedding?) Technical Quality: 3 Clarity: 3 Questions for Authors: The submission is well organized and easy to follow. The authors also provide detailed interpretations of the proposed approach. The proposed learned-kernel approach is interesting and has good empirical results for the distribution regression problem. However, the experiments seem quite weak. 
+ The authors only evaluate on two small datasets (with N <= 50 distributions?) and on a classification task only (?). It would be better to evaluate on several larger datasets (especially in our current era)! Additionally, it is quite unusual to learn an embedding from only 16 distributions (for Tissue) and 20 distributions (for Leukemia). Could the authors clarify why only a few input distributions suffice to learn such an embedding? Or does only the number of support points in the distributions matter here? + It would only be a plus to illustrate the method on a real distribution regression problem. + It would be better to report time consumption together with the performance numbers, and to compare with recent kernel approaches for distributions, e.g., those based on optimal transport geometry. + It would also be better to discuss the parameterization of the embedding and the proposed algorithm to learn it. It seems that the learning problem is non-convex? + Please give a proof for the results in lines 157-158. What is the gradient of L_MDKE w.r.t. \theta? Please describe S_2(\Sigma_D) w.r.t. K_D, and does the result in Eq. (13) hold only for the covariance operator \Sigma_D or also for its estimator \hat\Sigma_D? Please clarify these points. Some other concerns are as follows: + The kernel hyperparameter (in Section B.1) is important and interesting. Could you elaborate on it rigorously for the experiments with few distributions? + Could the authors comment on the choice of a sphere for the embedding? Why not other spaces? It would also be better to use a trivial baseline, e.g., learning the embedding with an autoencoder separately and then applying the kernel embedding (into the RKHS) and kernel functions (within the RKHS), to illustrate the importance of the learned embedding. + It may be interesting to link distribution regression with optimal transport (OT) geometry beyond the sliced-Wasserstein kernel [35], e.g., the tree-Wasserstein kernel (Le et al., NeurIPS'19), the Sobolev transport kernel (Le et al., AISTATS'22), and other geometries on distributions? 
Could the authors comment on it? + It seems the proposed approach may also be related to the Log-Hilbert-Schmidt metric between positive definite operators (Ha Quang et al., NIPS'14), especially in the application setup? Could the authors comment on it? Briefly, the proposed method is interesting, and the submission would be stronger if the authors took further care with the experiments, e.g., with larger datasets and a real distribution regression task. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the authors have a discussion on the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you very much for your thoughtful review. Before addressing your specific questions, we want to emphasize that the leukemia diagnosis dataset used in the main part of our paper is a real dataset. It was collected during clinical studies, is sufficiently diverse, and represents the typical medical complexities involved in diagnosis. It has been published to facilitate research in early cancer diagnosis and treatment. From an ML perspective, this dataset showcases typical problems in such tasks: large sample size (millions of vectors), non-i.i.d. sampling due to a small number of subjects, and challenges in normalizing vector representations because of the nature of the measurements (e.g., different biomarkers). Appendix D includes experiments with more common ML modalities to demonstrate the applicability of our framework. To address your questions: * Regarding the question of using only a few input distributions: specifically for the Leukemia dataset, the input sample space is continuous, and the encoder function is a non-linear map (i.e., a NN). This complexity makes it difficult to estimate the number of distributions required to learn an encoder that sufficiently separates parts of the support space for downstream tasks. While such a quantitative statement would be beneficial for practical applications, we currently lack these estimates, even for much simpler setups. * Regarding time consumption, as we report in the paper, the computational resources required for our experiments are not significant. * The non-convexity of the objective is discussed alongside our choice of optimization algorithm in lines 157-163. * The learning objective involves the logarithm of the squared Frobenius norm of the kernel Gram matrix, and the gradient can be easily obtained using any autodiff software. 
Given that the encoder function is a neural network, deriving the gradient manually may be tedious, and we believe it does not provide additional theoretical value for the discussion in our paper. The computational form of $S_2(\Sigma_D)$ w.r.t. $K_D$ is given in Eq. 14 on page 5. Eq. 13 is stated in terms of the true operator $\Sigma_D$; the rest relies on the fact that $\hat{\Sigma}_D$ is a consistent estimator of $\Sigma_D$ (with a proof of this fact given in [1]). * The hyperparameter selection procedure we applied is defined and analyzed in [2]. * Regarding the choice of a hypersphere as the latent space: as stated in lines 141-143, the latent space is a design choice. Other compact spaces with sufficient symmetries and the existence of proper kernels (see Assumption 3.1) would also be valid. The hypersphere is a simple and commonly used latent encoding space in ML, possessing the required properties. * While we acknowledge that a specialized kernel suited to the task's geometry might improve solution quality, the study aims to learn the kernel from the dataset (or, more broadly, to identify what makes a kernel "good" for solving discriminative tasks on distributions). * Currently, we do not see an apparent connection to the Log-Hilbert-Schmidt metric. Note that in our work we do not use covariance operators to define the kernel. The operator embedding is only defined as a learning objective; once learning is complete, the resulting kernel does not depend on the covariance operators. [1] Bach, Francis. "Information theory with kernel methods." IEEE Transactions on Information Theory 69.2 (2022): 752-775. [2] G. Blanchard, G. Lee, and C. Scott. Generalizing from several related classification tasks to a new unlabeled sample. Advances in Neural Information Processing Systems, 24, 2011. --- Rebuttal Comment 1.1: Comment: I thank the authors for the explanation in the rebuttal. 
I have some quick questions as follows: **(1) Time consumption** - It is not clear to me why mini-batch SGD is efficient for the proposed non-convex learning problem. There is no analysis of it (e.g., convergence, effects of initialization, stopping condition, step size, etc.). Could you elaborate on why the computational time is not significant? - I agree that in Section 5 the authors show the performance advantages of the proposed method. However, other aspects remain unclear, e.g., the algorithm for solving the non-convex learning problem. **(2) Data size is too small** - I agree that the used datasets are real-world. - However, their size is just too small! The applications use kernel SVM (i.e., non-linear classification); how many samples do the authors use for training such classifiers (a 70/30 split from a very small dataset with fewer than 50 samples?) Could the authors elaborate more rigorously? **(3) Gradient formulation (for mini-batch SGD)** - The gradient is essentially the most important part of an SGD approach. - How does the usage of autodiff affect the convergence and learning procedure? - Is it possible to derive the formulation of the gradient? --- Reply to Comment 1.1.1: Comment: Thanks for engaging on the technical details. We really appreciate and enjoy discussing this with you: **(1) Time consumption** Mini-batch SGD is a common approach for tackling non-convex problems, typically able to escape local minima due to the stochasticity of the gradient estimates. Since it is usually not feasible to predict the performance of an SGD solution for a specific problem at hand, we propose it as an efficient method for solving the described optimization objective based on empirical evaluations. Note that the objective is convex w.r.t. the kernel Gram matrix; the use of a neural network as the encoder makes it non-convex w.r.t. the parametrization of the NN. 
However, in our experience, SGD remains a suitable choice, often yielding satisfactory results in such scenarios. While we obviously recognize the value of quantitative statistical guarantees, our implementation of SGD aligns with standard practices in the deep-learning literature, and we expect its properties to be largely inherited from studies of mini-batch SGD convergence in deep NNs. Regarding computational time, our process is relatively efficient, typically requiring only hours on a single machine. This negates the need for extensive parallelization across multiple GPUs or machines, underscoring our method's flexibility and accessibility. **(2) Data size is too small** We tried to explain the dataset as well as we could, but, as with any dataset, real insight comes only from playing with it. This is actually not a very small dataset: it has millions of samples! Each distribution has 10^5-10^6 samples on which the optimization is performed. What keeps the dataset compact is the dimension of each sample, ranging from 20 to 50 depending on the selected criteria. While the dataset's dimensionality is smaller than what you might find in image datasets, the number of samples (which we refer to as the "distribution part") is quite extensive. The apparent simplicity of the downstream classification task, which we use to properly assess the quality of the learned distribution kernel, is the result of specific modeling and parameterization choices. We hope it demonstrates the power of the techniques applied. We use 70/30 splits for training; details are provided in the opening paragraphs of Section 5. **(3) Gradient formulation (for mini-batch SGD)** We agree with the Reviewer that the gradient is the most important building block of the learning procedure. We do not expect any substantial improvement from replacing autodiff with a theoretically derived custom gradient iteration. 
Moreover, by establishing a general framework, we aim to keep it flexible: any differentiable encoder can be incorporated without changes to the outer training loop, leveraging the power of modern autodiff software. We acknowledge that we might be wrong in this intuition, and we are curious to know whether the Reviewer suspects a different outcome. However, for this reason, while it is possible to manually derive a custom gradient, we did not devote much time to it.
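To illustrate the quantity discussed in this exchange: the second-order (Rényi-2) entropy of a trace-normalized Gram matrix is the negative logarithm of its squared Frobenius norm. The sketch below is illustrative only (not the authors' code; the example matrices are assumptions), showing the two extremes of the objective:

```python
import numpy as np

def renyi2_entropy(K):
    """S_2 = -log ||K_bar||_F^2, where K_bar = K / tr(K).

    For a PSD Gram matrix this equals -log(sum of squared normalized
    eigenvalues): log(n) for a uniform spectrum (maximal spread), 0 for
    a rank-one matrix (all mass in one direction).
    """
    K_bar = K / np.trace(K)
    return -np.log(np.sum(K_bar ** 2))

n = 8
# Identity Gram matrix: n equally weighted directions -> entropy log(n).
print(renyi2_entropy(np.eye(n)))     # ~2.079 = log 8
# Rank-one Gram matrix: all mass on one direction -> entropy 0.
v = np.ones((n, 1))
print(renyi2_entropy(v @ v.T))       # 0.0
```

Since the function is a smooth composition of matrix operations, its gradient with respect to encoder parameters is indeed readily available from any autodiff framework, as the authors note.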
Summary: This work studies the distribution regression problem. The inputs are distributions, and the goal is to learn embeddings for these inputs. The authors propose the following method: first map the distributions to their kernel mean embeddings with respect to an embedding kernel $k_{emb}$, and then conduct kernel regression with a distribution kernel $K_{distr}$. The authors propose to learn $k_{emb}$ by maximizing the quantum entropy. The effect of this is to minimize the variance within each distribution while maximizing the spread of the distributions. Strengths: **Note:** I reviewed this paper at ICML 2024. This review is also based on the differences between the two versions. 1. This paper studies an important problem, which I believe has potential applications in many domains. The paper is well written and easy to read. The intuition behind the proposed method is clearly demonstrated. The proposed approach, as far as I know, is new. Thus, this paper contributes valuable insights and methodologies to the literature. 2. The geometric interpretation, that is, minimizing the variance and maximizing the spread, makes sense and is similar to related methods such as contrastive learning (see Wang & Isola, 2020). 3. The paper is well-structured, offering a comprehensive and clear exposition of the background, methodology, and theoretical aspects of the proposed approach. This clarity enhances the paper's accessibility to readers who might not be intimately familiar with the nuances of kernel methods or distributional regression, thereby broadening its potential impact. Weaknesses: Several weaknesses discussed in the ICML review are not fully addressed in the new version: 1. The major weakness is the experimental part, which only considers the flow cytometry task. While this might be an important application (with which I am not familiar), the usefulness of the proposed method in applications is still questionable. 
Experiments on images and text are moved to Appendix D, whereas they were in the main body in the previous version. I would prefer them to be in the main body. As I pointed out last time: "One thing that the authors could do to increase the impact of this work is to apply this method to more real tasks. For instance, the authors mentioned voting behavior prediction and dark matter halo mass learning in the introduction, but these data sets are not used in the experiments." 2. Other weaknesses in the empirical part pointed out by other reviewers: - Analysis of Important Variables: The experimental setup lacks a thorough analysis of crucial variables and has no ablation studies. The impact of hyperparameter selection on the method's performance should be studied and discussed in the paper. - Computational Cost and Scalability: The authors acknowledge this point in the limitations. - Reproducibility: The authors promise to release their code in the main text. 3. One reviewer asked last time why not directly optimize $V_H$, which is an upper bound of the proposed objective as proved in Eqn. (17). The authors explained that directly optimizing $V_H$ yields lower performance, but did not provide any experimental results. I am not satisfied with this explanation. I suggest the authors add a direct comparison in the paper. This could be an important ablation study. **Conclusion:** Overall, I am still in favor of accepting this paper. The paper indeed has many issues, but it is a preliminary study of an important problem with some interesting insights, which I believe could inspire future work. My current rating is weak accept, and I am willing to raise it to accept based on the authors' response. Technical Quality: 2 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Limitations are discussed Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
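The two-stage construction summarized in this review (kernel mean embeddings under a base kernel $k_{emb}$, then a kernel between distributions) can be sketched generically. The RBF base kernel and the linear distribution kernel below are standard choices used for illustration, not necessarily the paper's:

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    """Base kernel k_emb evaluated between all pairs of rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def distribution_kernel(D1, D2, gamma=0.5):
    """Linear kernel between the kernel mean embeddings of two sample sets:
    <mu_1, mu_2>_H = E_{x~D1, y~D2}[k_emb(x, y)], estimated by the mean
    over all cross pairs of samples."""
    return rbf(D1, D2, gamma).mean()

rng = np.random.default_rng(0)
A = rng.normal(0.0, 1.0, size=(50, 3))   # samples from distribution 1
B = rng.normal(0.0, 1.0, size=(60, 3))   # same law as A
C = rng.normal(5.0, 1.0, size=(60, 3))   # clearly different law

# Same-law pairs embed closer than different-law pairs.
print(distribution_kernel(A, B) > distribution_kernel(A, C))  # True
```

The resulting Gram matrix over a collection of sample sets can then be plugged into any kernel machine (e.g., an SVM), which is the downstream setup these reviews discuss.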
Rebuttal 1: Rebuttal: Dear Reviewer, We are grateful for your meticulous review and constructive feedback. Since our initial submission to ICML, the manuscript has undergone significant improvements, particularly in the experimental section, prompted by critiques akin to those presented by the Reviewer. We concur that incorporating additional experiments within the main text could be advantageous. However, we opted to relegate the image and text modalities to the Appendix to mitigate potential confusion. The feedback we received indicated a prevalent misunderstanding among readers: they often speculated about the relevance of supervised methods, which are standard for the datasets in question (e.g., MNIST), to our research framework. However, our framework is unsupervised, and nearly all of the suggested methods are not comparable to it. Therefore, we chose to highlight an experiment where the application of a distributional regression model is more evident. The leukemia diagnosis dataset, derived from clinical trials, best represents the complexities inherent in medical diagnosis and illustrates typical machine learning challenges, such as large sample sizes and non-i.i.d. sampling. Regarding the other points of critique, we recognize that scalability is a challenge for the practical deployment of kernel methods. We intend to conduct a more thorough analysis of the hyperparameter selection process. Nevertheless, we have shown that our method remains viable for small to medium-sized datasets, where scalability issues are less pronounced. The principal contribution of our paper is the establishment of a theoretically sound objective for unsupervised learning of a data-dependent distributional kernel. This contribution is innovative both as a theoretical construct and in its implications for the application of kernel-based methods to discriminative tasks involving distributions. 
We anticipate that this novel perspective will stimulate further interest and research in the field. We welcome efforts by other research groups to adapt and scale our methods for use with larger datasets. A brief comment about using $\mathbb{V}_\mathcal{H}$ as an optimization objective: we will incorporate an ablation study in the experimental section. The outcomes do not surpass those of a random encoder, which is expected given the theoretical considerations discussed in the paper. --- Rebuttal Comment 1.1: Title: Response Comment: I thank the authors for the response. I don't have any further questions at this point. I will discuss with my fellow reviewers and the AC, and notify the authors if I change my rating.
Summary: The authors propose a method for learning kernel embeddings for distributions via maximizing a Renyi entropy objective. They relate their objective to the distributional variance of the embeddings, and explain why this can lead to good embeddings for downstream tasks. Empirical evaluations show good results on a flow cytometry dataset, but results on image and text classification are weaker. Strengths: - This paper is very clearly written, with good explanations of kernel distribution embeddings and covariance operators, and the two-stage embedding scheme used in the estimation. There is also good motivation for the use of entropy maximization as an objective for learning good feature embeddings. - The authors provide theoretical analysis of the relation between second-order Renyi entropy and distribution variance, and how maximizing the entropy helps increase the variance of the feature embeddings. Weaknesses: - The weakest part of this paper is the empirical evaluation. There is only one dataset provided in the main paper, on flow cytometry, on which the proposed method gives improvements over existing methods. There are small-scale experiments on image and text classification in the Appendix. However, even though the results of the learned embeddings are better than randomly initialized embeddings, they are not as good as direct SGD optimization using a cross-entropy loss on MNIST or 20 Newsgroups. This casts doubt on the effectiveness of the entropy maximization objective for feature learning in text and image problems. - Another weakness of the current method is scalability. From Equation 14 the objective scales quadratically with the number of distributions. This makes the computation expensive, and indeed most of the experiments described in this paper are rather small-scale. Technical Quality: 4 Clarity: 3 Questions for Authors: - How is the sampling done in the mini-batch SGD? 
There are two levels in the two-stage kernel embedding process, one with the individual samples and one with the distributions. How do the authors pick the SGD samples to ensure good optimization progress? And what are the required batch sizes relative to the training dataset size? Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you very much for your thoughtful review. Upon reading the Reviewer's comments, we were confused about what exactly the Reviewer means when referring to embeddings learned by SGD via Cross-Entropy (CE). Our methodology is inherently unsupervised, whereas CE intrinsically necessitates label information. Consequently, juxtaposing our approach with SGD optimization of the CE loss does not constitute a congruent comparison, given that our optimization goal is unsupervised—learning a data-dependent kernel between distributions devoid of label access—unlike the supervised nature of CE. We invite any additional insights from the Reviewer should there be an alternate perspective. On the other hand, we agree that our method’s scalability to larger datasets warrants further work. However, we posit that scalability challenges do not preclude the applicability of our method to practical datasets. Our methodology has demonstrated superior performance in real-world contexts, such as clinical studies. The leukemia diagnosis dataset featured in our study, derived from clinical trials, is representative of the complexity typically encountered in medical diagnoses and has been made available to support research in early cancer detection and therapy. This dataset exemplifies common challenges in the same class of ML applications: a large sample size (in millions of vectors), non-i.i.d. sampling due to a small number of subjects, and challenges in normalizing vector representations because of the nature of the measurements (e.g., different biomarkers). It is our stance that the utility of methods like ours should not be contingent solely on their suitability for the largest available datasets. Our approach is equally applicable to small and medium-sized datasets, as evidenced by our findings. To address specific questions about the sampling procedure, both levels of sampling are performed uniformly to obtain a batch of i.i.d. 
samples of a given size. We do not have quantitative guarantees for the optimal batch size, and the best hyperparameters were determined using a standard grid search. In addition, in the experiments conducted for this study, our empirical observations suggest that batch size is not a critical determinant of the downstream performance. --- Rebuttal Comment 1.1: Comment: Yes, when I say SGD optimization with CE loss I do mean supervised learning with labels. In computer vision there is a line of work on unsupervised representation learning with contrastive methods such as SimCLR or MoCo, and the performance of these methods is on par with supervised learning on ImageNet. So it is not completely unfair to compare the performance of unsupervised representation learning with their supervised counterparts. The same is true in NLP, where unsupervised representation learning (although via pretraining on large amounts of text) beats supervised learning with limited data. It could be difficult to require a general unsupervised learning method like the authors' proposal to do better than domain-specific approaches used in vision or NLP, but it also limits the applicability of the proposed algorithm on these types of data since there are strong alternatives. --- Reply to Comment 1.1.1: Comment: Thank you for your valuable feedback. Your clarification has greatly helped us understand your perspective. We recognize that your emphasis on state-of-the-art (SOTA) performance is aimed at advancing current methodologies. However, we believe this focus may overlook the foundational nature of our contribution. Our work seeks to establish a new approach in unsupervised kernel learning, which we believe holds significant potential for future research and development. If we understand correctly, you are requesting a comparison between unsupervised and self-supervised approaches (e.g., SimCLR, MoCo) within a domain-free setup like ours. 
This is indeed a fundamental question, but one that other subfields (such as vision and language) have addressed at more mature stages. Our contribution is the first to propose unsupervised learning of full distribution embeddings. While we appreciate your feedback and find it very helpful, we feel that rejecting our contribution based on these remarks may be asking too much from a single paper. Here are some specific comments on your remarks: 1. **Domain-Specific vs. Domain-Free Approaches**: Our algorithm is designed to be domain-free, which we believe offers a significant advantage over domain-specific self-supervised methods. This flexibility allows our approach to be applied across various domains without requiring domain-specific adjustments. 2. **State-of-the-Art (SOTA) Focus**: While we acknowledge the importance of achieving SOTA results, our primary goal in this paper is to introduce a novel framework for unsupervised learning of whole distribution embeddings. We believe that establishing this new framework is a crucial first step that can pave the way for future improvements and comparisons. Thank you once again for your insightful comments. We look forward to addressing them in our future work.
null
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper proposes a new unsupervised way to learn kernels for distribution regression through entropy maximization. In addition, they also propose a geometric interpretation which is very interesting. The learnt kernels are general and have been shown in some experimental settings to perform better than standard unlearnt kernels. Strengths: - The paper proposes a novel way to learn kernels through entropy maximization in the setting of distribution regression. - The paper is very well written and easy to follow for someone who has worked with KMEs before. - The paper shows promising results compared to existing methods that do not train the kernel (through entropy) Weaknesses: - The experimental section does not compare to other kernel distribution regression methods. Could the authors confirm why [1] was omitted in the comparisons? - In addition, other methods that also learn kernels, albeit not with entropy, should be cited [2, 3, 4, 5] - How were the hyperparameters picked for the fixed kernels such as the RBF kernel? Can you please lay out the whole process as well as the corresponding values for each of the hyperparameters? [1] Bayesian approaches to distribution regression [2] Learning Deep Features in Instrumental Variable Regression [3] Noise Contrastive Meta-Learning for Conditional Density Estimation using Kernel Mean Embeddings [4] Meta Learning for Causal Direction [5] Deep proxy causal learning and its application to confounded bandit policy evaluation Technical Quality: 3 Clarity: 4 Questions for Authors: see above Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you very much for your thoughtful review. * We appreciate the insightful comments from the Reviewer. We will incorporate the Bayesian approaches they highlighted into our extended discussion in Appendix C. We have dedicated this appendix to a thorough review of related literature. The absence of a comparison with the work of [1] in our experimental section is due to the novel nature of our task, which is the unsupervised learning of distribution embeddings. To our knowledge, [1] and similar studies leverage labels in a supervised manner, which contrasts with our unsupervised approach. A direct comparison would necessitate significant modifications to these methods, constituting a separate contribution. * In response to the references suggested by the reviewer, we will enrich our manuscript with additional citations and a succinct discussion of [2, 3, 4, 5]. * We delineated our hyperparameter selection process in detail in Section B.1. To enhance clarity, we will insert a second cross-reference to this section. This procedure builds upon and elaborates on the methodology established in [6]. [1] Bayesian approaches to distribution regression [2] Learning Deep Features in Instrumental Variable Regression [3] Noise Contrastive Meta-Learning for Conditional Density Estimation using Kernel Mean Embeddings [4] Meta Learning for Causal Direction [5] Deep proxy causal learning and its application to confounded bandit policy evaluation [6] G. Blanchard, G. Lee, and C. Scott. Generalizing from several related classification tasks to a new unlabeled sample. Advances in Neural Information Processing Systems, 24, 2011. --- Rebuttal Comment 1.1: Title: response Comment: I thank the authors for the clarifications and I will keep my score of 6.
null
null
null
null
null
null
Testing Semantic Importance via Betting
Accept (poster)
Summary: The paper presents a method to test feature importance when a model makes its decision. The proposed method is based on hypothesis testing. The features concerned in this work are human-interpretable ones. For example, when CLIP makes a "cat" prediction for an image, the features tested are: "whiskers", "pointy ears", etc. The paper claims several novel contributions: (1) the method does not depend on the existence of a dense feature dictionary -- and in principle can take features as input from the users (2) the method keeps the original predictor, unlike existing methods that train a separate predictor. Strengths: 1. Conceptually very important and interesting, especially on the notion of sample-specific vs. global vs. global conditional 2. *If* it really works on input features from users, it is a very important contribution Weaknesses: Although the proposed method's efficacy and claims are very interesting, I find several major concerns: 1. the method part is very hard to understand -- especially the equations. I find it hard to understand what the equations mean, and how rejecting the hypothesis connects to the conclusion that the prediction depends on the feature; the writing in the method section could use a lot more explanation. 2. The evaluation is not convincing. The result presented claims that the feature importance outputted by the algorithm agrees with intuition -- this is very subjective. The evaluation could be significantly strengthened by human evaluation. Technical Quality: 2 Clarity: 3 Questions for Authors: Adding an actual "feature from user input" evaluation can significantly strengthen the claim. + see weakness Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 4 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Thank you for your comments! We thank the reviewer for their encouraging comments. Here, we address each weakness point individually, and we are looking forward to discussing more. > *if* it really works on input features from users, it is a very important contribution. We do want to stress that our method *does* work with any set of concepts, and in particular it is the only method that can provide local semantic explanations for the predictions of a black-box model. So, we do appreciate the reviewer's comment that this is a very important contribution! --- ### **Clarity of presentation** Could the reviewer expand on their points of confusion? We would be more than happy to clarify the methods and equations in the revised version of the manuscript. --- ### **Alignment with human intuition** We agree with the reviewer that a user study needs to be performed to claim alignment with human intuition. As we state in the general response at the top of our rebuttal, we will soften such claims in the revised version of the manuscript. We note that our contributions *can* readily work with any set of concepts, including user-defined ones, and that no alternatives currently exist. **We have included several additional experiments to strengthen the experimental section of our submission and validate the ranks of importance obtained with our methods.** In particular, we have included the AwA2 and CUB datasets, which have ground-truth annotations of which concepts are present in images coming from certain classes or specific images, respectively. We refer the reviewer to our general response at the top of our rebuttal for a detailed description of the additional experiments and their results. To summarize our findings: - c-SKIT has better semantic importance detection performance compared to PCBM on AwA2. - x-SKIT has good alignment with ground-truth annotations across all models ($\approx 0.85$ average $f_1$ score) on CUB. 
--- Rebuttal Comment 1.1: Title: thank you for your response! Comment: Yes, I agree with the authors that the new results solidify the claims. I have raised my score --- Reply to Comment 1.1.1: Title: Thank you for your response! Comment: We sincerely thank the reviewer for engaging in discussion and their consideration of our rebuttal. We are glad to hear our additional experimental results solidified the claims in our manuscript.
Summary: Recently, there has been a lot of interest in understanding the inner workings of deep neural networks. Most existing works learn semantic concepts that are inherently understandable to the user. Often, each semantic concept comes with an associated score, and in many cases, it is hard to interpret the scores. Even though some works address this issue, they only work for input features and don't easily apply to semantic concepts. The work aims to address this shortcoming and formalizes the notion of statistical importance for both local and global semantic concepts. Strengths: - The work formalizes the notions of global, local, and global conditional importance. - Most existing methods assume the presence of a large bank of concepts; the proposed method allows the stakeholders to specify the concepts they want to evaluate directly. This flexibility will enable explanations with diverse semantics for the model prediction on an example, as opposed to a single explanation. - In practice, most of the methods rely on the weights of a linear model over the concepts to convey the importance of the semantic concepts. In contrast, since the proposed method utilizes statistical significance, it can guarantee false positive rates. - Unlike existing methods, the proposed technique doesn't rely on training a surrogate linear model and can study the semantic importance of any given model. Weaknesses: The paper is generally sound regarding the contribution, but the authors should consider the questions below regarding the experiment. - First, I encourage the authors to evaluate their technique on diverse tasks and datasets. For instance, they can evaluate their method on datasets like AwA2 and CUB, which already have concept annotations. In addition, the authors could assess their methods on domains like NLP. - Why only consider a single backbone? I encourage the authors to include results from diverse backbones, strengthening the paper. 
- One of the paper's main contributions is the ease with which an end-user can understand the semantic concepts and their importance, but that aspect still needs to be evaluated. The authors should design & conduct a user study to assess it. - In addition, the authors can validate the concept's usefulness by intervening on the concepts and measuring the changes in model predictions. Another way to validate the concepts would be to measure their predictive ability; ideally, just using the important concepts to predict shouldn't hinder the model's performance. Technical Quality: 3 Clarity: 3 Questions for Authors: Refer to Weaknesses. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the authors discuss the limitations of the proposed technique in detail. They have sufficiently addressed it, and the contributions outweigh the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Thank you for your comments and questions! We now address each point raised by the reviewer individually, and we are looking forward to clarifying any outstanding questions. --- ### **Diverse datasets and models** We thank the reviewer for their suggestions, which have significantly strengthened our experimental results and uncovered interesting findings. **As presented in the general response, we have now included additional experiments on both AwA2 and CUB-200-2011, comparing across 8 different models: CLIP:RN50, CLIP:ViT-B/32, CLIP:ViT-L/14, OpenClip:ViT-B-32, OpenClip:ViT-L-14, FLAVA, ALIGN, and BLIP.** In particular: - Since AwA2 includes class-level annotations, we have included global conditional importance results with c-SKIT in comparison with PCBM. - Since CUB includes image-level annotations, we have included local conditional importance results with x-SKIT. We describe all findings and experiments in the general response at the top of our rebuttal. To summarize, we find our c-SKIT outperforms PCBM both in terms of semantic importance detection and transferability across all models on both Imagenette and AwA2. Furthermore, x-SKIT importance ranks align well with ground-truth annotations ($\approx 0.85$ average $f_1$ score) on CUB. We consider NLP and language generation applications of our framework as important future directions. For example, it is still not completely clear how the ideas of concept bottleneck models apply to encoders such as BERT. How do text encoders represent semantic information with standard pretraining techniques such as masked language modeling? Only very recent work [1] has started addressing these fundamental questions. Going beyond text encoders, how should one phrase questions of semantic importance for contrastive encoder-decoder architectures such as CoCa [2] or autoregressive models such as GPT? 
The structured nature of these models raises very important questions that currently remain unanswered, thank you for raising this point. We will include these discussion points in the revised version of the manuscript. --- ### **User study to evaluate alignment with human intuition** As stated in the general response, we agree with the reviewer that a user study would be necessary to claim alignment with human intuition, and we will rephrase those claims in the revised version of the manuscript. However, we remark that our tests *can* be readily applied to any set of concepts, including user-defined ones. We believe the design of a robust user study deserves its own investigation, and, in this submission, we focus on introducing the methods that will enable such a study. We stress that, currently, no alternative method *can* work with a few user-defined concepts. For this reason, we envision our method to enable the design of such studies. --- ### **Validation of concept usefulness** We thank the reviewer for their suggestions. We validate important concepts in our additional experiments on AwA2 and CUB using the ground-truth annotations. We refer the reviewer to our general response and the attached pdf for numerical results and comparison with PCBM. Furthermore, we would like to mention that our c-SKIT and x-SKIT tests precisely work by resampling concepts and measuring their effect on distributions of the output of the predictor. However, we remark that we use *observational* conditional distributions (i.e., $Z_j \mid Z_{-j}$) and not *interventional* distributions (i.e., $Z_j \mid \text{do}(Z_{-j} = \cdot)$). Characterizing the connections between our framework and causal inference is an interesting future line of research, thank you for raising this point. 
Lastly, we kindly push back on the suggestion of validating concepts by training predictors, as our focus is to study which concepts are important for a fixed black-box model, and, in the general case, these may be different from the ones with the highest predictive ability. This is a fundamental distinction between our approach and alternatives like PCBM. Intuitively, consider ImageNet classification. It might be the case that we want to test one concept only. But of course, even if that individual concept were truly used by the model, it would not be enough to train a good classifier from scratch. We can already see this behavior in the AwA2 experiments: restricting to 10 concepts, which we know from ground-truth annotations should be important, significantly reduces predictive power. We refer the reviewer to the "average" line in Table 1 of the attached pdf, where PCBM shows a drop in performance of around 4%. --- [1] Tan et al. "Interpreting pretrained language models via concept bottlenecks." (2024) [2] Yu et al. "Coca: Contrastive captioners are image-text foundation models." (2022) --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response, and addressing my concerns, I will update my scores. --- Reply to Comment 1.1.1: Title: Thank you for your response! Comment: We are glad to hear our response addressed the reviewer's concerns! We sincerely thank the reviewer for their comments and their consideration of our rebuttal, we will include all additional results in the revised version of the manuscript.
Summary: The paper defines statistical importance of semantic concepts for black-box models such as CLIP via conditional independence. This is motivated by the fact that users would be interested to know how they should interpret two concepts with different importance scores, and if the difference in such two concepts has any statistical significance. The paper discusses that earlier work considers input features only, and not directly applicable to semantic concepts, which are represented in the internal layers of models. The paper utilizes recent advances in sequential kernelized independence testing to develop statistical tests that can produce a rank of importance for each semantic concept. The proposed method is experimentally verified using CLIP and a subset of ImageNet data. Strengths: - How to measure statistical significance between different semantic concepts' importance is well-motivated (e.g. "how should users interpret two concepts with different importance scores?", "Does their difference in importance carry any statistical meaning?" are valid questions that users would be interested to explore) - Proposed two novel procedures to test for semantic importance: c-SKIT for global conditional importance, and x-SKIT for local conditional importance - Validated their method on zero-shot ImageNet classification with CLIP. Weaknesses: While the proposed solution for discovering feature/concept importance of black-box models is written as a general framework, it's not clear how applicable this method is to problems in practice, because the real-data experiment is only performed on CLIP. For example, users in practice would probably be interested in seeing how much the rankings produced by the proposed method agree across CLIP and other vision-language models. Technical Quality: 3 Clarity: 3 Questions for Authors: CLIP has an issue with compositional understanding (e.g. Hsieh et al. 
"SugarCrepe", Sec 5.3), since the embedding space that is learned through CLIP's contrastive loss is incentivised to only match up to the set of concepts between images and texts, rather than learning the relational / compositional structure between concepts in the image / text. I'm wondering if the ranking order discovered by this method also suffers from this issue with CLIP's representations. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - Experiments on real datasets are limited, and it would be interesting to know how transferable the results on CLIP are to other vision-language models. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Thank you for your comments and questions! Here, we address each point individually, and we are looking forward to discussing with the reviewer. --- ### **Limited experiments and transferability across different vision-language models** This is a great question, and we thank the reviewer for raising this point. These additional experiments have strengthened the experimental results of our submission, and uncovered interesting aspects of transferability. **In particular, we have extended our experimental results to include 2 additional datasets (AwA2 and CUB) and 8 different models (both CLIP- and non-CLIP-based): CLIP:RN50, CLIP:ViT-B/32, CLIP:ViT-L/14, OpenClip:ViT-B-32, OpenClip:ViT-L-14, FLAVA, ALIGN, and BLIP.** We evaluate agreement between pairs of models in terms of ranking of concepts, and whether they are classified as important or not. To compare ranks, we use a weighted version of Kendall's tau (see [1]) which assigns higher penalties to swaps at higher positions. That is, for example, a 1 -> 5 swap is worse than a 4 -> 5 swap. This reflects that higher positions should matter more and be more stable. To compare importance, we threshold rejection rates at level $\alpha$ and compute the accuracy between the binarized vectors. We briefly summarize here the findings of our experiments, which are described in the general response: - Ranks obtained with c-SKIT are more transferable than PCBM on both Imagenette and AwA2. - Both CLIP- and non-CLIP-based models are generally aligned in terms of ranks and importance, especially on local explanations. --- ### **Compositional understanding of CLIP's embedding space** This is a very interesting point, which deserves further investigation outside of the current submission. 
For example, following the SugarCrepe example of a photo with *"a girl in white facing a man in black"*, one could test whether the prediction of *"girl"* depends on the concepts *"white"* and *"facing"*, while the prediction of *"man"* depends on *"black"*. We stress that this study would not be possible without the tools presented in this submission. We would also note that our results on global (marginal) importance find that concepts are almost always important (i.e., their rejection rates are above $\alpha$). As suggested by the reviewer, this finding may support the claim that CLIP's semantic space is entangled and overlapping. These aspects have also been considered by previous works (for example, MERU [2]), which focus on whether using an embedding space different from the unit sphere (e.g., hyperbolic space) induces better hierarchical representations. In a similar fashion to the above, one could also devise an experiment to retrieve and test semantic image traversals as in [2, Figure 5]. These example studies highlight that the framework presented in this submission has a broad reach beyond explainability and will support other research efforts, thank you for raising this point. --- **References** [1] Vigna. "A weighted correlation index for rankings with ties." (2015) [2] Desai et al. "Hyperbolic image-text representations." (2023) --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for your detailed response. Additional experiments using different V&L models addressed my concern. It is nice to see that ranks and importance are transferable across both CLIP- and non-CLIP-based models. I increased my score to reflect these changes. --- Reply to Comment 1.1.1: Title: Thank you for your response! Comment: We sincerely thank the reviewer for their consideration of our rebuttal. We agree that the findings on transferability would be valuable to readers, and we will include them in the revised version of the manuscript.
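As a side illustration of the weighted rank comparison used in the rebuttal above: the following is a minimal sketch of a Vigna-style weighted Kendall's tau with a hyperbolic additive weigher. The function name and the exact weigher are our own illustrative choices (not the authors' code), and ties are assumed away for simplicity:

```python
from itertools import combinations

def weighted_kendall_tau(a, b):
    """Weighted Kendall's tau with a hyperbolic additive weigher.

    A pair (i, j) carries weight 1/(r_i + 1) + 1/(r_j + 1), where r_i is the
    0-based rank of item i under scores `a` (highest score gets rank 0), so
    disagreements near the top of the ranking are penalized more.
    Assumes no ties in either score vector.
    """
    order = sorted(range(len(a)), key=lambda i: -a[i])
    rank = {item: r for r, item in enumerate(order)}
    num = den = 0.0
    for i, j in combinations(range(len(a)), 2):
        w = 1.0 / (rank[i] + 1) + 1.0 / (rank[j] + 1)
        # +1 if the pair is ordered the same way in both score vectors, -1 otherwise
        sign = 1.0 if (a[i] - a[j]) * (b[i] - b[j]) > 0 else -1.0
        num += sign * w
        den += w
    return num / den

# A swap at the top of the ranking hurts more than one at the bottom:
base = [5, 4, 3, 2, 1]
tau_top = weighted_kendall_tau(base, [4, 5, 3, 2, 1])     # swaps ranks 1 and 2
tau_bottom = weighted_kendall_tau(base, [5, 4, 3, 1, 2])  # swaps ranks 4 and 5
print(tau_top, tau_bottom)
```

With this weigher, `tau_top < tau_bottom`, matching the rebuttal's point that a 1 -> 5 swap should be penalized more than a 4 -> 5 swap.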
Summary: The paper discusses the need for precise statistical guarantees in feature importance, especially for semantic concepts, to ensure transparency and avoid unintended consequences. It introduces a framework using conditional independence for testing semantic importance and demonstrates its effectiveness on synthetic datasets and image classification tasks using models like CLIP. It uses principles of testing by betting (or sequential testing), which are based on e-values, and MMD as a test statistic. The paper finds the importance of concepts via conditional independence. The authors introduce two novel procedures: conditional randomization SKIT (c-SKIT) for global conditional importance and explanation randomization SKIT (x-SKIT) for local conditional importance. Strengths: Offers rigorous definitions and tests for global and local semantic importance. It emphasizes the importance of interpretable features in black-box models. The paper is well-written and smoothly explains its argumentation. Weaknesses: The paper offers little comparison to other state-of-the-art techniques for feature importance. Technical Quality: 3 Clarity: 3 Questions for Authors: The practical implementation assumes a small set of concepts. Do you find it to be enough? What are the computational limits of the proposed method? How do they scale with increasing data size and complexity? For what data modalities is the given choice of kernel functional? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The practical implementation assumes a small set of concepts. For certain tests, accurate generative models for the conditional distributions are required, which can be difficult to train. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Thank you for your comments and questions! We address each point individually, and we are looking forward to discussing with the reviewer to answer any outstanding questions. --- ### **Comparison with SOTA** We thank the reviewer for this comment, which has significantly strengthened our experimental results. We have described several additional experiments and comparisons with PCBM (which is the state-of-the-art for semantic explanations) in the general rebuttal. To summarize, we find that: - c-SKIT provides ranks that are more transferable across different vision-language models on both Imagenette and AwA2. - c-SKIT provides better semantic importance detection in terms of $f_1$ score on AwA2. For local semantic explanations, we have included additional experiments on CUB-200-2011, which indicate x-SKIT is well-aligned with ground-truth ($\approx 0.85$ average $f_1$ score). We note that, currently, there are no alternative methods that can produce local semantic explanations, which is why we could not compare. For the sake of completeness, for global conditional importance, we tried comparing with LaBo (Yang et al., 2023), which, intuitively, adds a softmax activation to the weights of a PCBM classifier. This approach, however, fails to learn good predictors in our few-concepts setting ($\approx 20\\%$ classification accuracy on the AwA2 dataset compared to $\approx 95\\%$ for PCBM). --- ### **Small number of concepts** We are not sure we fully understand the question. Could the reviewer expand on what they mean by *``enough''*? Our proposed framework does not modify the original black-box predictor, whose accuracy does not depend on the number of concepts. So, defining a small set of concepts does not affect the performance of the predictor. On the other hand, it is true that the number of concepts may thwart the ability to build effective samplers to instantiate our c-SKIT and x-SKIT tests.
For example, in the case of limited data, it may not be feasible to train a conditional sampler on a large set of concepts. In our experiments, we focus on $\approx 20$ concepts because previous work [1] has shown that humans prefer succinct explanations. We found that this assumption allows us to use non-parametric samplers which are fast, cheap, and do not require prior training. We will clarify this in the revised version of the manuscript, thank you. --- ### **What are the computational limits? How do the tests scale with increasing data size and complexity?** We thank the reviewer for these questions. First, the main computational limit is the need for conditional samplers. In certain domains, these models may be expensive both to train and run, as is the case for diffusion or language models. In our experiments, we strived to use methods that are effective but do not require prohibitive computational resources. Second, the computational complexity depends on the specific test and the sampler. In particular: - SKIT: Following [2, Appendix F.2], the test runs in $O(\tau^2)$, where $\tau$ is the (random) stopping time of the test. This is because at each step $t$, computing the MMD requires summing over the previous $t-1$ terms. - c-SKIT and x-SKIT: both tests incur an extra factor of $T_n$, which represents the cost of the sampler on $n$ data points. Finally, we note that the runtime does not depend on the number of concepts because different concepts can be tested simultaneously. We will include this in the revised version of the paper. --- ### **For what data modalities is the given choice of kernel functional?** This is a great point! Our framework assumes the black-box predictor can be divided into an encoder and a classifier, and these need to be appropriate for the data modality at hand. Once inputs are mapped to an embedding space, one needs to decide on which kernel to use to test for semantic importance.
In general, when using kernel methods to test for a null hypothesis, we would like the kernel to be *characteristic* for the alternative (see [2]). That is, we want the kernel to be expressive enough to distinguish two distributions coming from the alternative. Over compact domains, this can be achieved by using *universal* (see [3]) kernels---such as the RBF kernel. In practice, this means that: - for x-SKIT, the RBF kernel is appropriate whenever the classifier is a real-valued function (e.g., linear classifier with sigmoid activation). For discrete predictors (e.g., decision trees), other kernels may be necessary. - for c-SKIT and SKIT, the RBF kernel is appropriate whenever both the predictor and the concept bottleneck layer are real-valued functions (e.g., linear classifier with sigmoid activation and SVMs, respectively). We remark that our tests are defined for any choice of kernel, and they can be instantiated directly for the desired data modality. Finally, we refer the reviewer to Fig. E.3 in the Appendix, where we precisely compare c-SKIT with a linear and RBF kernel on a synthetic dataset. This experiment shows that a linear kernel may fail to detect important concepts because it does not satisfy the universality property. --- **References** [1] Ramaswamy et al. "Overlooked factors in concept-based explanations: Dataset choice, concept learnability, and human capability." (2023) [2] Podkopaev et al. "Sequential Kernelized Independence Testing." (2023) [3] Sriperumbudur et al. "Universality, Characteristic Kernels and RKHS Embedding of Measures." (2011)
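To make the kernel discussion above concrete, here is a minimal NumPy sketch (our own illustration, not the authors' SKIT implementation, which is sequential and based on betting) of the batch unbiased MMD$^2$ statistic with an RBF kernel. Swapping `gram` for a linear kernel is one way to see why universality matters for detecting distributional differences.

```python
import numpy as np

def mmd2_unbiased(X, Y, bw=1.0):
    """Unbiased estimate of squared MMD between samples X (n x d) and
    Y (m x d) under an RBF (Gaussian) kernel with bandwidth bw."""
    def gram(A, B):
        # squared Euclidean distances, clipped at 0 against round-off
        d2 = (np.sum(A**2, axis=1)[:, None]
              + np.sum(B**2, axis=1)[None, :]
              - 2.0 * A @ B.T)
        return np.exp(-np.maximum(d2, 0.0) / (2.0 * bw**2))
    n, m = len(X), len(Y)
    Kxx, Kyy, Kxy = gram(X, X), gram(Y, Y), gram(X, Y)
    # drop diagonal terms for the unbiased estimator
    return ((Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))
            + (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
            - 2.0 * Kxy.mean())

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 1))
Y = rng.standard_normal((200, 1))  # same distribution -> MMD^2 near 0
Z = Y + 2.0                        # shifted distribution -> MMD^2 clearly positive
```

Under the null (X vs. Y) the statistic fluctuates around zero, while under the shift (X vs. Z) it is bounded away from zero, which is the signal the sequential tests accumulate evidence on.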
Rebuttal 1: Rebuttal: ## Thank you for your comments! We sincerely thank all reviewers for their valuable comments and suggestions, which have strengthened our experimental results and the presentation of our contributions. **Following all reviewer comments, we have significantly extended our experiments on real world data: we now include additional results on both the AwA2 and CUB datasets, and we compare across 8 different vision-language models.** In this general response, we address common questions raised by several reviewers, and briefly summarize additional results presented in the rebuttal pdf. We address comments from each reviewer in their individual response, and we are looking forward to discussing with the reviewers to answer any outstanding questions. --- ## Imagenette results on different models **As suggested by RmYwv, RnqVJ, and Rgo3R, we have extended our analysis to include 8 different models: CLIP:RN50, CLIP:ViT-B/32, CLIP:ViT-L/14, OpenClip:ViT-B-32, OpenClip:ViT-L-14, FLAVA, ALIGN, and BLIP.** We evaluate agreement between all pairs of models with the following metrics: - *Comparison of ranks.* We use the method of [1] to compare ranks. A value of -1 means reverse order, and +1 means perfect alignment. - *Comparison of importance.* We threshold rejection rates at level $\alpha$ to classify concepts as important or not. Importance agreement is the accuracy between the binarized vectors. We summarize the results included in the pdf: - Fig. 1, average global importance agreement equal to 0.57 (random baseline = 0.00). - Fig. 2, comparison of c-SKIT and PCBM. **We find c-SKIT ranks to have higher agreement (0.64) compared to PCBM (0.52), which indicates c-SKIT is more transferable than PCBM.** - Fig. 3, average local conditional importance agreement is 0.73 and rank agreement is 0.62.
**These results indicate that different vision-language models share certain local semantic dependence structures.** --- ## Global conditional importance on AwA2 As suggested by RnqVJ, we include global conditional importance results on the AwA2 dataset. We use c-SKIT on the top-10 best classified classes for a fair comparison across models. For each class, we test 20 concepts: 10 present, and 10 absent according to the ground-truth annotations. We compute rank agreement and report $f_1$ scores between the ground-truth annotations and the top-10 concepts according to c-SKIT rejection times and PCBM absolute weights. We remark that the coefficients of a linear classifier are a heuristic notion of global conditional independence, whereas our tests provide precise statistical guarantees. We summarize the results included in the pdf: - Fig. 4, comparison of ranks obtained with c-SKIT and PCBM. **Similarly to above, we find c-SKIT ranks to be more transferable than PCBM's (0.54 vs 0.44 agreement)**. - Table 1, $f_1$ scores for c-SKIT and PCBM. **c-SKIT consistently outperforms PCBM across all models (0.55 vs 0.48 average $f_1$ score). We stress this improvement in semantic importance detection does not reduce classification accuracy (99.50% vs 95.10%)**. --- ## Local importance on CUB As suggested by RnqVJ, we include local importance results on the CUB-200-2011 dataset. We use x-SKIT on 2 test images from the top-10 best classified classes for a fair comparison across models. For each image, we test 14 concepts: 7 present, and 7 absent according to the ground-truth annotations. We threshold rejection rates at level $\alpha$ to classify concepts as important or not. - Fig. 5, rank and importance agreement. **We find an average importance agreement of 0.97 and an average rank agreement of 0.86.** - Table 2, $f_1$ scores as a function of size of conditioning set, $s$. **We find an average $f_1$ score of $0.84, 0.86, 0.83$ for $s \in \\{1,2,4\\}$.
OpenClip:ViT-L/14 has the highest $f_1$ scores across all values of $s$, with a maximum of 0.89 for $s=2$.** These results suggest models align well with the ground-truth annotations. - Fig. 6, example image with local importance ranks across different models. --- ## User study to evaluate alignment with human intuition We agree with RnqVJ and RFuBo that a user study would be necessary to claim alignment of explanations with human intuition. **We will soften these claims in the revised version of the paper in order to highlight that the scope of this work is to introduce a statistically-rigorous method that *can* work with any set of user-defined concepts**. Finally, we would like to remark that, currently, there is no alternative to our framework for designing such a study. In fact, we envision many studies to leverage the precise statistical guarantees provided by our methods. --- ## FDR control For the sake of completeness, we have also addressed FDR control limitations, as we mentioned in the submitted manuscript. We have extended our results to report important concepts with FDR control at level $\alpha$. We will include this in the revised version of the manuscript. --- **Details** AwA2 classes: giant panda, tiger, giraffe, zebra, lion, squirrel, sheep, horse, elephant, dalmatian. CUB classes: White Pelican, Brown Pelican, Mallard, Horned Puffin, Vermilion Flycatcher, Northern Flicker, Cardinal, Blue Jay, Cape Glossy Starling, Frigatebird. **References** [1] Vigna. "A weighted correlation index for rankings with ties." (2015) Pdf: /pdf/1159a71aa61a612e0f817b0e740822cd1b4d259a.pdf
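The "importance agreement" metric described above (threshold rejection rates at level $\alpha$, then compare the binarized vectors across two models) is simple enough to sketch directly; `importance_agreement` below is a hypothetical helper of ours mirroring that description, not the authors' code.

```python
import numpy as np

def importance_agreement(rates_a, rates_b, alpha=0.05):
    """Binarize per-concept rejection rates at level alpha and return the
    fraction of concepts on which the two models agree (accuracy between
    the binarized importance vectors)."""
    a = np.asarray(rates_a) > alpha
    b = np.asarray(rates_b) > alpha
    return float(np.mean(a == b))

# e.g., three concepts tested under two models: they agree on the first two
print(importance_agreement([0.90, 0.01, 0.50], [0.80, 0.02, 0.01]))  # 2/3
```

A value of 1.0 means the two models mark exactly the same concepts as important at level $\alpha$.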
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Mixed Dynamics In Linear Networks: Unifying the Lazy and Active Regimes
Accept (poster)
Summary: In this paper, the authors study the training dynamics of a two-layer linear network $A = W_1 W_2$ and show that for a wide range of network width $w$ and initialization scale $\sigma^2$, the dynamics of $A$ can be approximated by the self-consistent equation: $\dot{A} \approx -\sqrt{ A A^T + \sigma^4 w I } \nabla C(A) - \nabla C(A) \sqrt{ A^T A + \sigma^4 w I }$, which can be viewed as a mixture of the lazy and balanced dynamics. With this observation, the authors then analyze the behavior of gradient descent with different choices of $w$ and $\sigma^2$ for the task of recovering a low-rank matrix from noisy observations. In particular, they identify two possible regimes for $w$ and $\sigma^2$: the pure lazy regime, where the network fails to recover the ground-truth matrix, and a mixed/active regime, where the network first aligns with the ground-truth following the lazy dynamics and then switches to the balanced (and aligned) dynamics and eventually recovers the ground-truth. Strengths: This paper unifies the lazy and balanced regimes of the training dynamics of two-layer linear networks. The equation (cf. (1)) obtained by the authors that approximately characterize the dynamics is surprisingly simple and suggests a way to classify the training behaviors that are more fine-grained than the lazy vs balanced regimes. The proof of Theorem 1 (the theorem on the validity of the approximation (1)) also looks interesting. Instead of controlling the error growth rate like in the proof of many similar results, it relies on the fact that $W_1W_1^T - W_2^T W_2$ is invariant under GF to obtain an equation and the stability of the solutions to that equation. With the new equation, the authors also prove a global convergence result for balanced initialization without assuming alignment at initialization. 
This is a novel result and demonstrates the usefulness of the new characterization, as it leverages the initial short lazy regime to obtain an approximately aligned state. Weaknesses: 1. The presentation of the paper can still be improved. In particular, some parts of the paper seem to be rushed (the appendix in particular) and contain many typos (e.g. the second equation in line 212, line 761, and line 773). 2. As mentioned by the authors in Section~2.1, in the low-rank matrix recovery setting, the width of the network is assumed to be much larger than the ambient dimension (instead of the rank). What will happen if the network width is much larger than the rank but much smaller than the ambient dimension? It seems that the network will be directly in the active regime and we can no longer rely on the initial lazy stage to align the network. Is this true in theory/practice? Technical Quality: 4 Clarity: 3 Questions for Authors: 1. Could you provide some intuition on the proof of Theorem~1? It seems to me to be some clever algebraic manipulations that cannot be easily explained intuitively. 2. See item 2 of Weakness. Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: See Weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
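The self-consistent equation summarized in this review can be sanity-checked numerically. Below is a small sketch of ours (not the paper's code): we integrate the equation with forward Euler for the quadratic cost $C(A)=\tfrac12\|A-A^*\|_F^2$, so $\nabla C(A)=A-A^*$, taking the $\sigma^4 w I$ regularization term verbatim from the summary; the flow drives $A$ to the target $A^*$.

```python
import numpy as np

def psd_sqrt(M):
    # symmetric PSD square root via eigendecomposition
    w_, V = np.linalg.eigh((M + M.T) / 2.0)
    return (V * np.sqrt(np.maximum(w_, 0.0))) @ V.T

rng = np.random.default_rng(1)
d, sigma2, w = 4, 0.1, 100.0          # toy sizes; here sigma^4 * w = 1
A_star = rng.standard_normal((d, d))  # target matrix
A = np.zeros((d, d))
dt = 1e-3
reg = sigma2**2 * w * np.eye(d)       # the sigma^4 w I term from the summary
for _ in range(5000):                 # forward-Euler integration
    G = A - A_star                    # grad C(A) for C(A) = 1/2 ||A - A*||_F^2
    A = A - dt * (psd_sqrt(A @ A.T + reg) @ G + G @ psd_sqrt(A.T @ A + reg))
print(np.linalg.norm(A - A_star))     # shrinks toward 0
```

Because the square-root factors have eigenvalues at least $\sqrt{\sigma^4 w}$, the linearized flow around $A^*$ is contracting, which is why the residual decays.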
Rebuttal 1: Rebuttal: Thanks for the thoughtful review. Regarding the weaknesses you mention: 1. We will improve the readability of the proofs, thanks to the errors/typos that you and the other reviewers have found. 2. We agree that this intermediate width regime is of particular interest, and it is probably the regime where one wants to be in practice. We have thought about it, but it seems that it would require different techniques (in particular the invariant approach might not work anymore). While your intuition that the dynamics would be purely balanced could be correct, there is another possible interpretation: when the task has a low rank structure, then only the dynamics along the low-dimensional span of the true matrix matter, while the parameters orthogonal to it remain essentially constant, at least until the optimal early stopping time; thus the dynamics could possibly be approximated by a smaller network with the same hidden layer size but with input and output of size equal to the rank of the true matrix $A^{*}$. We hope to be able to extend our analysis to this setup to answer which intuition is correct. Regarding your question: 1. We will improve the proof and add an intuition section in the Appendix to explain the strategy and how we rule out the other solutions. Depending on how much room we have, we will also explain this in the main text. Another way to understand Theorem 1 is to show an equivalence to another dynamics as described in our answer to Reviewer r8Jt; we will also add this simple derivation to help with the intuition. --- Rebuttal Comment 1.1: Comment: Thank you for your clarifications. I will maintain my score.
Summary: This paper derives a formula for the training dynamics of two-layer linear networks, encompassing the lazy regime, the active regime, and the mixed regime. Strengths: - In contrast to previous works, the authors reveal the existence of mixed dynamics for training two-layer linear networks, which combine the advantages of the lazy regime and the active regime: it converges from any random initialization, and has a low rank bias. - The authors prove an almost complete phase diagram of training behavior on the task of recovering a low-rank matrix from noisy observations. Weaknesses: The main findings heavily rely on prior insights into lazy training and feature learning. Moreover, the focus on two-layer linear networks, which are simple and lack nonlinearity, limits the broader impact of this paper. Technical Quality: 2 Clarity: 3 Questions for Authors: - Do the mixed dynamics have a stronger or weaker implicit bias compared to the active regime (such as the low-rank bias or sparsity bias)? Which regime predominates in practical scenarios? - Can the theory recover other non-trivial training dynamics of linear networks, such as the saddle-to-saddle dynamics [1]? - How can the main insights from this study be generalized to nonlinear models, such as two-layer ReLU nets? - Minor: There appears to be a missing '-' in the equation between line 55 and line 56. [1] Pesme \& Flammarion. Saddle-to-Saddle Dynamics in Diagonal Linear Networks. (NeurIPS 2023) Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The analysis in this study focuses on 2-layer linear networks, which is limited. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the thoughtful review. Regarding your questions: - Note that we view the mixed dynamics as part of the active regime; what we show is that the short lazy dynamic period that always appears at the beginning plays an important role, so it is useful to think of the active regime as a mix of lazy and active. The balanced dynamics that start after the short lazy period then lead to a low-rank bias. - We are actually describing the saddle-to-saddle dynamics, and improving significantly on prior results, since we prove it in the fully-connected, non-diagonal case which is much harder. Furthermore we can determine exactly how small the variance $\sigma^{2}$ needs to be to get the saddle-to-saddle dynamics, in comparison to previous works that required an infinitely small initialization (https://arxiv.org/abs/2012.09839 or https://arxiv.org/abs/2106.15933) and could not really handle the second saddle, making it difficult to determine when one would leave each subsequent saddle. We solve all of these issues. We will add a discussion of the relation to the saddle-to-saddle dynamics. - There are a few high level ideas that could translate to nonlinear nets: (1) there could exist mixed regimes in nonlinear networks, where e.g. some neurons are active while others are still lazy in some sense; (2) the transition from lazy to active could shift depending on the sparsity of the task, which could explain why we are not yet able to fully describe the extent of the lazy regime as a function of initialization variance, width, and number of datapoints and how it depends on the task at hand. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' detailed responses to my questions. The authors have addressed my concerns, and I have raised my score.
Summary: This paper considers the GD dynamics for two-layer linear networks. The authors introduce an approximated dynamics which interpolates between the lazy and the balanced regime. The authors also showed the phase diagram based on the above dynamics for the low-rank matrix factorization problem. Strengths: 1. This paper provides an approximated GD dynamics for two-layer linear networks, including different initialization schemes. I believe this is novel in the literature and is an interesting result. Weaknesses: 1. Regarding the (pure) lazy training part, it is said in Line 124 "When $w$ is very large, we end up in the lazy regime where the parameters move enough up to a time $t$ to change $A\_{θ(t)}$, but not enough to change $C_1, C_2$". Could you explain more on this or point out some references? In particular why $C_i(t) \approx C_i(0)$ for any $t$? 2. The statement of Theorem 1 needs to be written more clearly: (1) what is $\delta$ in the statement? Do you mean that the upper bounds hold for any $0<\delta<1$? Then how do the upper bounds depend on $\delta$? (2) what is $C_1$ in the RHS of the bound? From the proof I think you mean $C_1(t)$, not $C_1(0)$? Then how does $C_1(t)$ depend on $t$; in particular, does it blow up in $t$? 3. I also have a few doubts on the utility of Theorem 1: it is not clear to me whether the RHS of the upper bound in Theorem 1 vanishes in $d$? In particular, consider the settings in section 2.1 where $\sigma^2 = d^{\gamma\_{\sigma^2}},w = d^{\gamma\_{w}}$, then the first term $\mathcal{O}( \sigma^2 w)$ is not vanishing in $d$ unless $\gamma\_{\sigma^2} + \gamma\_{w}<0$. For the second term $ O( \sqrt{d/w} ||C_1||\_{op} )$, it is not clear to me how $ ||C_1(t)||\_{op}$ depends on $d$ for any $t$. Thus, could you address more on this point? In conclusion, so far the utility of the main theorems is not clear to me; I will raise my score if my questions are addressed.
Technical Quality: 2 Clarity: 2 Questions for Authors: Please refer to the strengths and weaknesses part. **Minor points and typos:** 1. Equation (1) and the equation below Line 204 seem to differ by a factor of $\eta$. 2. In Line 284, it should be $2 \gamma_{\sigma^2} + \gamma_w < 0$. 3. Line 939, where is Lemma G.2? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: No potential negative societal impact Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the thoughtful review. Regarding the weaknesses you raise: 1. Proving this directly is non-trivial and would require some work; the intuition is that one needs to take a learning rate of order $\sigma^{-1}$ to get finite size updates to $A_{\theta}$, but the change to the weight matrices $W_{1},W_{2}$ is then of order $1$, which is small relative to their size at initialization (of order $\sigma$). Note however that this fact also follows from our first theorem: if $\sigma$ is sufficiently large then $C_{1}=\sqrt{A_{\theta(t)}^{T}A_{\theta(t)}+w^{2}\sigma^{4}I}\approx\sigma^{2}wI$ for all times $t$. 2. (1) Yes, any $\delta$ can be chosen; actually we will remove $\delta$ from the Theorem and simply say "with high probability". (2) It should indeed be $C_{1}(t)$. Note that $C_{1}$ might indeed blow up in time, but we are mainly interested in having a small error relative to the size of $C_{1}(t)$ (the learning rate $\eta$ has to be chosen of order $\left\Vert C_{1}\right\Vert _{op}^{-1}$ so things only need to be small in comparison to $\left\Vert C_{1}\right\Vert _{op}$). The fact that in Theorem 2 we are able to control the dynamics up until convergence using Theorem 1 shows that the approximation of Theorem 1 is good enough to be used in a practical setting. We will add a discussion of these aspects after Theorem 1, because we agree that it can be difficult to determine when such an approximation is "good enough" since everything can vary significantly in size as $d\to\infty$ and throughout training. 3. Again, the RHS does not always go to zero in $d$, but it always becomes infinitely smaller than $\left\Vert C_{1}(t)\right\Vert _{op}$, which is sufficient. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. Now I have a better understanding of the theoretical contribution of this work, and I believe it is an interesting result. Thus I raise my score to 6.
Summary: The paper studies gradient descent dynamics in two-layer linear networks. It is shown that in the wide-hidden-layer regime with standard random initialization there is a simple self-consistent differential equation describing the network output (Theorem 1). This equation generalizes both the lazy training regime and the standard solvable balanced regime. As an application, a phase diagram of the model with respect to different scalings of the hidden layer size and the weight variance is discussed (section 2.1). Strengths: The paper is generally nicely and thoughtfully written. The main theoretical result described in the paper (the self-consistent evolution equation) is interesting and apparently original. Thanks to the simplicity of this equation, it is likely to be useful in future research. This paper uncovers a simple and natural scenario in which lazy training transitions into active training in an analytically tractable fashion. This scenario can help understand feature learning in more complex models. Weaknesses: I find the exposition of the main result (Theorem 1) and surrounding ideas not very clear. The provided sketch of proof is not very convincing, since the derived cubic equations can generally have solutions $C_1,C_2$ other than those indicated. I also couldn't get through the proof in the appendix, particularly Proposition C.4. Some typos there: line 689: "*Let* $L = ||A^*-A ||^2$ *be the loss function*" - not used in the proof line 761: "*Recall that $\tfrac{dw}{dt}=$*" - unfinished sentence line 773: "*To show that $∥v_i^TdC_1∥$ and $∥u_i^T dC_2∥$*" - unfinished statement Technical Quality: 3 Clarity: 3 Questions for Authors: It seems that a linear network combining the lazy and active regimes can actually be constructed more directly, within the class of balanced models, by simply considering a non-homogeneous balance condition $W_1W_1^T=W_2^TW_2-b^2I$.
This condition is realized, in particular, when the initial $W_1(t=0)=0$ while the initial $W_2$ is isometric up to rescaling: $W_2^TW_2(t=0)=b^2I$. The balance condition is again invariant and, arguing as in the paper, we can obtain the closed-form dynamics for $A=W_2W_1$. Namely, by multiplying the invariant by the matrices $W_1, W_2$ or their transposes, we get the equations $C_1^2+b^2C_1-A^TA=0$ and $C_2^2-b^2C_2-AA^T=0$. Finding the roots leads to the self-consistent GF equation $$\tfrac{dA}{dt}=-\tfrac{\eta}{2}[(\sqrt{4AA^T+b^4}+b^2)\nabla C+\nabla C(\sqrt{4A^TA+b^4}-b^2)]=-\tfrac{\eta}{2}[\sqrt{4AA^T+b^4}\nabla C+\nabla C\sqrt{4A^TA+b^4}].$$ This dynamics also starts in a lazy regime, because initially $W_1=0$ so that $W_2(t)\approx const$ and learning is linear and occurs only through $W_1$. As the deviation of $W_1$ from 0 increases, the dynamics becomes active. My impression from the proof of Theorem 1 is that it is essentially a reduction of the wide network model to the algebraically solvable "balanced-type" model, but with two channels corresponding to the two approximately orthogonal projectors $P_1, P_2$ appearing in Theorem 1. By restricting to the corresponding subspaces, the matrices $W_1, W_2, A$ are decomposed into two components, say $W_1=Y_1+Z_1, W_2=Y_2+Z_2$ and $A=A_Y+A_Z\equiv Y_2Y_1+Z_2Z_1$. The initial conditions in one channel are like in the example above, $Y_1(t=0)\approx 0, Y_2^TY_2(t=0)=\sigma^2 wI$, while in the other channel they are reversed, $Z_2(t=0)\approx 0, Z_1Z_1^T(t=0)=\sigma^2 wI$. However, both channels are described by the same equation with the same initial condition, so yield the same solution $A_Y=A_Z.$ Then the equation for the total $A=A_Y+A_Z$ presented in the paper can be obtained from the single-channel equation above simply by replacing $A$ by $A/2$.
I think that the paper would be easier to understand and more convincing if such a two-channel model was explicitly described, and its solution (either this one or the one performed in the paper) explained more carefully. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The model considered in the paper (a two-layer linear network with a wide hidden layer) is very simple and fairly artificial. However, the effects exposed in the paper may be relevant for more complex and realistic models. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the thoughtful review. Regarding the weaknesses you mention: It is indeed true that there are many solutions to the set of equations we obtain, and most of the work in the proof is to prove that one approaches the `right' solution. This part of the argument is quite technical and cannot be easily sketched out in the main text. We will also fix the typos. Regarding your questions: The construction you propose is very interesting, and it can actually be slightly modified to recover the mixed dynamics exactly: if one initializes with weights $W_{1}=( I_{d} \;\; 0)^{T}$ and $W_{2}=(0 \;\; I_{d})$ then the following three properties are satisfied at initialization and for all subsequent times $$ A_{\theta}C_{1} =C_{2}A_{\theta}$$ $$ C_{1}^{2} =A_{\theta}^{T}A_{\theta}+I_{d}$$ $$ C_{2}^{2} =A_{\theta}A_{\theta}^{T}+I_{d}$$ since it is true at initialization and the derivatives on both sides of the first equation match thanks to properties 2 and 3, while the derivatives of both sides of 2,3 match thanks to equation 1. This initialization/derivation also seems to agree with your `two channels' intuition. We added your derivation of the mixed-regime GF equation at the end of Appendix C.
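The three invariants quoted in this reply can be checked numerically. The sketch below is ours (not the paper's code) and assumes the identifications $C_1=W_1^TW_1$ and $C_2=W_2W_2^T$, which match the claimed properties at initialization; small-step gradient descent on $C(A)=\tfrac12\|A-A^*\|_F^2$ with $A=W_2W_1$ then preserves them up to $O(\text{lr})$ discretization error.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
A_star = rng.standard_normal((d, d))
# special initialization from the reply: W1 = (I_d; 0), W2 = (0, I_d)
W1 = np.vstack([np.eye(d), np.zeros((d, d))])   # shape (2d, d)
W2 = np.hstack([np.zeros((d, d)), np.eye(d)])   # shape (d, 2d)

lr = 5e-4
for _ in range(10000):
    E = W2 @ W1 - A_star            # gradient of C with respect to A
    g1, g2 = W2.T @ E, E @ W1.T     # gradients w.r.t. W1 and W2
    W1, W2 = W1 - lr * g1, W2 - lr * g2   # simultaneous GD step

A, C1, C2 = W2 @ W1, W1.T @ W1, W2 @ W2.T
I = np.eye(d)
# the three invariants: A C1 = C2 A, C1^2 = A^T A + I, C2^2 = A A^T + I
print(np.linalg.norm(A @ C1 - C2 @ A),
      np.linalg.norm(C1 @ C1 - (A.T @ A + I)),
      np.linalg.norm(C2 @ C2 - (A @ A.T + I)))   # all remain small
```

The residuals stay at the scale of the step-size error while the loss itself converges, consistent with the exact conservation claimed under gradient flow.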
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Attention boosted Individualized Regression
Accept (poster)
Summary: The authors propose an individualised matrix regression method, where the individualised part is shown to be related to self-attention. The method is presented nicely, with theoretical and empirical results that indicate the usefulness of the method. Strengths: The paper is well-written and easy to follow. The method is clear, the connection to attention is clear, the theoretical results appear correct, and the empirical results (on both simulated and real data) are convincing. Weaknesses: - The introduction mentions personalised interpretations, but that is never illustrated in the paper (though seems to be alluded to in the appendix?) - Proposition 1 seems to assume that the function g is an element-wise function (or otherwise a particular function, such that it can be transposed), but it is mentioned as being any general function. - The subscripts K and Q appear to have been swapped in the definition (I) of W. See line 176 also. - The convergence is geometric, but while one term disappears, there appears to still be a constant term left. So the question would be how large that constant is? How tight is the achieved bound in the limit as t -> \infty? - It would be better to report mean and _standard error_ (standard deviation of the mean) instead of mean and standard deviation of the 100 repetitions. This makes it easier to compare the results between methods. - Equation 18 is presented without any constraints on W, but the pseudo-code normalises the weight matrix (Equation 31), and the proofs assume unit vectors. Are these the same? If not, do the convergence proofs still hold? This should be clarified. - I'm missing a discussion on the computational complexity and run-time (and compared to other methods). Technical Quality: 3 Clarity: 4 Questions for Authors: - When directly referring to references, use the Firstauthorlastname et al.~\cite{ref} format. Should be possible to use \citet{ref} for this.
- Make equations part of sentences instead of something particular presented following a colon. Also, when equations are at the end of a sentence, end it with a full stop. - Equation 1: The d_1,d_2 subscript is missing from R. - Line 161: The function \rho is not explained properly/clearly. Is this a notational error? - It appears you are using the Frobenius matrix inner product, but it is never defined. Same with the vector inner product: it seems like you are using the Euclidean inner product (dot product), but this is never defined. - Line 206: What do you mean by: "... has the potential to achieve fewer parameters and faster training."? - In the theorems: Define W in the initial distance. - Explain what you mean by "the truth" on line 228. - Some spelling errors and typos, e.g. lines 239, 269, 498, and 535. - References: RNNs should be upper case in Ref 12, "Lasso" in Ref 25, and reference 24 has some problems with the spacing (copy from PDF?). - Label the first axis in Figure 3. - Lines 424-437: Should be "Python", "Matlab", and "PyTorch", since they are names. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: - See my previous comment about personalised interpretation. - The discussion section is _very_ short. I understand the problem with space constraints, but do elaborate properly on strengths, weaknesses, limitations, and future work. - You say that crowdsourcing data from human subjects and IRB approval is not applicable, but these are images from human subjects collected collaboratively in the ADNI project. You would actually likely need IRB approval to perform this research, since you do research on data from humans, but it seems that doing research on public medical data is a gray area. Flag For Ethics Review: ['No ethics review needed.', 'Ethics review needed: Research involving human subjects'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your review. # Main questions >**Q1**: Personalised interpretations **A1**: Thanks for your feedback. For real data, we show in Figure 2 the individualized coefficients of different samples and their significant internal relations and frame out the corresponding regions in the original samples. Take sample 1 for example. The block (4, 5) has the strongest relations, and is related to both (6, 4) and (6, 5), indicating the important relations between the corpus callosum and hippocampus. We also find that after separating heterogeneous effects, the homogeneous effects highlight the hippocampus region, which is widely known to be associated with Alzheimer's disease. We will add the discussion in the revised version. >**Q2**: The function $g$ in Proposition 1. **A2**: Thank you for your question. Sorry for the confusion in Proposition 1. Yes, we mostly focus on element-wise $g$ functions, including commonly used activation functions in attention mechanisms, such as the row-wise softmax function, scaling functions and more. We will clarify it in the revised version. >**Q3**: The convergence is geometric, but while one term disappears, there appears to still be a constant term left. So the question would be how large that constant is? How tight is the achieved bound in the limit as t -> \infty? **A3**: Many thanks for the good question. When the error term is sub-Gaussian, it can be proved that after $t\ge t_0 + \log(\log(n)/n)/(2\log(\kappa))$ iterations, the distance errors can be bounded by $\mathcal{O}(\sqrt{\log(n) / n})$ with high probability, where $t_0$ is a constant and $\kappa$ is the contraction parameter. We will remark on this specifically in the revised version. >**Q4**: Equation 18 is presented without any constraints on W, but the pseudo-code normalises the weight matrix (Equation 31), and the proofs assume unit vectors. Are these the same? If not, do the convergence proofs still hold? This should be clarified. 
**A4**: Many thanks for your question. We apologize for the oversight. Equation 18 should also require a norm constraint on W due to identifiability considerations. Therefore, this is consistent with the proof (where a unit vector assumption is imposed). We will update Equation 18 in the revisions. >**Q5**: Computational complexity and run-time. **A5**: Thanks for your feedback. As the algorithm actually alternately solves linear models, the complexity is $O(D_1^3D_2^3)$ where $D_1$, $D_2$ are the size of the images. On the other hand, the optimization problem is non-convex, but it is bi-convex. Theorem 5.2 suggests a linear convergence rate of the alternating minimization algorithm (AMA) although the problem is non-convex. In contrast, gradient descent (GD) algorithms can achieve a linear convergence rate only if the objective function is strongly convex; for general non-convex functions, linear convergence cannot be achieved. This suggests the advantage of AMA over GD. We will add the discussion on the computational complexity and report the run-time compared to other methods in the revised version. Thanks again for your advice. >**Q6**: Line 161: The function \rho is not explained properly/clearly. Is this a notational error? **A6**: Thanks for your question. Here $\rho$ stands for the linear function considered in the reference. Specifically, for a matrix input $\boldsymbol{Y}\in \mathbb{R}^{n\times p}$, define $\rho(\cdot)$ as $\rho(\boldsymbol{Y}) = \boldsymbol{Y} / n$. We will make it clear when revising. >**Q7**: Line 206: What do you mean by: "... has the potential to achieve fewer parameters and faster training."? **A7**: This sentence introduces the above reference titled ``Simplifying transformer blocks'', in which the proposed simplified transformers enjoy faster training speed using fewer parameters. This further suggests the advantage of simplified models. >**Q8**: Explain what you mean by "the truth" on line 228. 
**A8**: The truth means the true counterparts of $\boldsymbol{W}^{(t)}$ and $\boldsymbol{D}^{(t)}$, i.e. $\boldsymbol{W}$ and $\boldsymbol{D}$ in the true model. >**Q9**: References: RNNs should be upper case in Ref 12, "Lasso" in Ref 25, and reference 24 has some problems with the spacing (copy from PDF?). **A9**: The information on references was downloaded from Google Scholar directly, where there may be some issues. Thank you for pointing them out and we will make adjustments in the update. ## Minor revisions * The subscripts K and Q appear to have been swapped in the definition (I) of W. See line 176 also. * It would be better to report mean and standard error (standard deviation of the mean) instead of mean and standard deviation of the 100 repetitions. * When directly referring to references, use the Firstauthorlastname et al.~\cite{ref} format. Should be possible to use \citet{ref} for this. * Make equations part of sentences instead of something particular presented following a colon. Also, when equations are at the end of a sentence, end it with a full stop. * Equation 1: The d_1,d_2 subscript is missing from R. * Define the Frobenius inner product and Euclidean inner product used. * In the theorems: Define W in the initial distance. * Some spelling errors and typos, e.g. lines 239, 269, 498, and 535. * Label the first axis in Figure 3. * Lines 424-437: Should be "Python", "Matlab", and "Pytorch", since they are names. * Crowdsourcing data from human subjects and IRB approval. Many thanks for your careful reading. We will make adjustments according to the above suggestions and update in the revised version. Thank you again. --- Rebuttal Comment 1.1: Title: Thank you for the updates and explanations Comment: I appreciate the authors' efforts to explain and improve the paper. The rebuttal addresses most of my concerns.
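The iteration bound quoted in A3 can be sanity-checked numerically. A minimal sketch, where the values of $n$ and $\kappa$ are purely illustrative (not from the paper): after $t - t_0 = \log(\log(n)/n)/(2\log\kappa)$ further iterations, a geometric contraction $\kappa^{t-t_0}$ reaches exactly the statistical scale $\sqrt{\log(n)/n}$:

```python
from math import log, sqrt

def extra_iters(n, kappa):
    # Iterations beyond t_0 needed for the contraction factor
    # kappa^(t - t_0) to fall to the statistical-error scale
    # sqrt(log(n)/n), matching the bound quoted in A3.
    return log(log(n) / n) / (2 * log(kappa))

n, kappa = 10_000, 0.5   # illustrative sample size and contraction parameter
t_extra = extra_iters(n, kappa)
```

Note that `log(log(n)/n)` and `2*log(kappa)` are both negative for `n > e` and a contraction `kappa < 1`, so `t_extra` is a positive number of iterations.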
Summary: The paper introduces a method for self-attention-based individualized regression and derives its relation to transformers. The method is evaluated in a simulation setting and on an Alzheimer's brain MRI dataset. Strengths: - Interesting theoretical treatment of individualized regression and its connection to transformers - Well written Weaknesses: - Only applied to tiny datasets with image size 48 x 48 - Only two experiments Technical Quality: 3 Clarity: 3 Questions for Authors: - It would be great if the authors could comment on the scalability of the method - Is individualized regression applicable to multiple instance learning problems? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See questions Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your review. >**Q1**: Only applied to tiny datasets with image size 48 x 48 and only two experiments. **A1**: Thank you for your feedback. The MRI scans are preprocessed to be of size $113\times 137\times 113$ and we further resize the extracted slices to the size $48\times 48$ for computational efficiency. Besides, we have added a 5-fold cross-validation in the real study to test the significance of the difference, which shows that the advantage of the proposed method is significant, with results shown in the following table.

| Methods | AIR | LRMR | TRLasso | DKN | ViT |
| :--------: | :--------: | :--------: | :--------: | :--------: | :--------: |
| Fold 1 | 3.183 | 3.712 | 3.359 | 3.297 | 3.321 |
| Fold 2 | 3.193 | 3.744 | 3.325 | 3.283 | 3.349 |
| Fold 3 | 3.127 | 3.699 | 3.258 | 3.284 | 3.215 |
| Fold 4 | 3.129 | 3.700 | 3.232 | 3.226 | 3.282 |
| Fold 5 | 3.093 | 3.717 | 3.286 | 3.214 | 3.243 |
| Mean (S.D.) | 3.145 (0.042) | 3.715 (0.018) | 3.292 (0.051) | 3.261 (0.038) | 3.282 (0.055) |
| P-value | - | 0.000 | 0.001 | 0.002 | 0.002 |

In the coming days, we will attempt to apply our method to new real data to further validate its performance. >**Q2**: It would be great if the authors could comment on the scalability of the method. **A2**: Thanks for your suggestion. Two aspects of scalability are usually of concern. * Computational efficiency: As the algorithm actually alternately solves linear models, the complexity is $O(D_1^3D_2^3)$ where $D_1$, $D_2$ are the size of the images. As for the convergence rate, if we further suppose the noise $\epsilon_i$ is sub-Gaussian, it can be proved that after $t\ge t_0 + \log(\log(n)/n)/(2\log(\kappa))$ iterations, the distance errors can be bounded by $\mathcal{O}(\sqrt{\log(n) / n})$ with high probability, where $t_0$ is a constant and $\kappa$ is the contraction parameter. 
In summary, as the size of the dataset increases, a larger image size brings more computational burden, which can be mitigated in practice by parallel processing and related techniques. On the other hand, a larger sample size leads to fewer iterations and smaller errors. * Generalization: First, we propose to combine the homogeneous and heterogeneous parts to make our model adaptive to more types of data. Second, we propose to model the internal relation matrix by a function $g$, which can introduce nonlinearity to make the model more flexible. However, as mentioned in the paper, the ability of the model to handle general data is limited, depending on the gap between the model and real cases, which is a common issue of model-based methods. Thanks for your suggestion again and we will add the discussion in the revised version. >**Q3**: Is individualized regression applicable to multiple instance learning problems? **A3**: Thanks for the interesting question. In multiple instance learning problems, the training data is organized into bags, where each bag contains multiple instances. In the context of MIL, individualized regression can be applied by considering each bag as an individual data point and tailoring regression models to these bags. Our model can also be applied to multiple instance learning problems, in the sense that patches of an image are instances in a bag and internal relations determine bag-specific coefficients. --- Rebuttal Comment 1.1: Comment: Thank you for your response!
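The alternating scheme described in A5 and A2 (alternately solving linear problems in a bi-convex least-squares objective) can be sketched on a toy bilinear model. This is a hedged illustration only: the model $y_i = w^\top X_i d + \epsilon_i$, the dimensions, and all names below are illustrative stand-ins, not the paper's actual AIR formulation.

```python
import numpy as np

# Toy bi-convex problem: y_i = w^T X_i d + noise. Fixing w makes the
# objective a linear least-squares problem in d, and vice versa, so
# alternating minimization solves a linear model at each step.
rng = np.random.default_rng(0)
n, d1, d2 = 200, 5, 4
w_true, d_true = rng.normal(size=d1), rng.normal(size=d2)
X = rng.normal(size=(n, d1, d2))
y = np.einsum("i,nij,j->n", w_true, X, d_true) + 0.01 * rng.normal(size=n)

w = rng.normal(size=d1)
d = rng.normal(size=d2)
for _ in range(50):
    # With w fixed, the design for d is A[n, j] = sum_i w_i X[n, i, j].
    A = np.einsum("i,nij->nj", w, X)
    d = np.linalg.lstsq(A, y, rcond=None)[0]
    # With d fixed, the design for w is B[n, i] = sum_j X[n, i, j] d_j.
    B = np.einsum("nij,j->ni", X, d)
    w = np.linalg.lstsq(B, y, rcond=None)[0]

resid = y - np.einsum("i,nij,j->n", w, X, d)
```

Each inner step is a closed-form linear solve, which is the structural reason the rebuttal can appeal to bi-convexity and a linear convergence rate for AMA despite the overall objective being non-convex.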
Summary: This paper proposed an individualized regression method and applied it to medical image analysis. The method can handle matrix-valued data and does not require additional information on sample similarity. The authors also analyzed its relationship to the attention technique. Finally, the proposed method was evaluated on simulation and real data sets, and obtained improved performance. Strengths: 1. A novel individualized regression method addressing the one-model-fits-all issue. 2. The method does not require additional information on sample similarity. Weaknesses: 1. The method can only work for matrix-valued data, such as image data. 2. The real study is too simple and insufficient. 3. Section 2 should be rewritten to make it clearer since some technical details of the proposed method are hard to understand. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Section 2 should be rewritten to make it clearer since some technical details of the proposed method are hard to understand. 2. Too many details of the method are unclear, such as the inverse operation of R, p1 * p2, and d1 * d2. 3. How to determine the size of the blocks? How to determine the number of factors? 4. Please double check Eq. (14). Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors did not state the limitations of the proposed method. No conclusion of this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your review. >**Q1**: The method can only work for matrix-valued data, such as image data. **A1**: Thanks for your comment. Matrix-valued data, particularly images, are pervasive in many practical applications, and our method aims to provide a novel solution for these scenarios. Still, as the amount of tensor data increases, extending the method to tensor-valued data is a valuable direction to expand its applications. We will consider how to generalize our model in future research, including how to deal with higher dimensions, how to incorporate internal relations into the individualized coefficients, and more. >**Q2**: The real study is too simple and insufficient. **A2**: Thank you for your feedback. We have added a 5-fold cross-validation in the real study to test the significance of the difference. It shows that the advantage of the proposed method is significant, with results in the following table.

| Methods | AIR | LRMR | TRLasso | DKN | ViT |
| :--------: | :--------: | :--------: | :--------: | :--------: | :--------: |
| Fold 1 | 3.183 | 3.712 | 3.359 | 3.297 | 3.321 |
| Fold 2 | 3.193 | 3.744 | 3.325 | 3.283 | 3.349 |
| Fold 3 | 3.127 | 3.699 | 3.258 | 3.284 | 3.215 |
| Fold 4 | 3.129 | 3.700 | 3.232 | 3.226 | 3.282 |
| Fold 5 | 3.093 | 3.717 | 3.286 | 3.214 | 3.243 |
| Mean (S.D.) | 3.145 (0.042) | 3.715 (0.018) | 3.292 (0.051) | 3.261 (0.038) | 3.282 (0.055) |
| P-value | - | 0.000 | 0.001 | 0.002 | 0.002 |

In the coming days, we will attempt to apply our method to new real data to further validate its performance. >**Q3**: Section 2 should be rewritten to make it clearer since some technical details of the proposed method are hard to understand. **A3**: Thank you for your feedback. We will revise Section 2 to enhance clarity, particularly providing more technical details of the proposed method to make it clearer. 
>**Q4**: Too many details of the method are unclear, such as the inverse operation of R, p1 * p2, and d1 * d2. **A4**: Thanks for pointing it out. The inverse operation of $\mathcal{R}$ is used to recover the reshaped images and corresponding coefficients to their original shape. Besides, $p_1\times p_2$ is the number of blocks and $d_1\times d_2$ is the size of the blocks. We have scrutinized the details and will make them clearer in the revised version. >**Q5**: How to determine the size of the blocks? How to determine the number of factors? **A5**: Thanks for your question. The division of images before implementation of our model is similar to that of the Vision Transformer, where $16\times 16$ is a common patch size. In our paper, due to the relatively small size of the images, the size of the blocks should be smaller. We note that our method performs robustly for moderately small block sizes, so we determine the block size by cross-validation among $4\times 4$, $6\times 6$ and $8\times 8$. Besides, when the row-wise internal relations are of interest, such as EEG in which each row represents a channel, no division is involved and the method can be directly applied. >**Q6**: Please double check Eq. (14). **A6**: Thanks for pointing it out. There is a typo: the first $W$ should be transposed. This is tantamount to assuming $W^T$ in the model, which has no effect on the existing results. Thank you again. > **Q7**: The authors did not state the limitations of the proposed method. No conclusion of this paper. **A7**: Thanks for your comments. We conclude in the “Discussion” section and will add a paragraph about limitations there, stating that: “On the other hand, we realize that the AIR framework also has limitations. First, the AIR model is designed for data with heterogeneous internal relationships, and its capability to handle more general data is more or less restricted. 
When there are minimal heterogeneous effects, its performance will be similar to an ordinary linear model. Second, as discussed earlier, our framework could be viewed as a simplified version of the Vision Transformer; however, such simplifications may also reduce its approximation power for more complex scenarios. Furthermore, this paper primarily investigates the linear form of AIR. Although the linear form performs well in the cases of interest, exploring the generalization of the model in future work is still worthwhile.” --- Rebuttal Comment 1.1: Comment: I thank the authors for their responses. However, some concerns remain unsolved, such as the number of factors, the new real data, and the block sizes being determined in a purely data-driven manner. So, my score remains unchanged. --- Reply to Comment 1.1.1: Comment: Thanks for your response. Given the size of images $D_1, D_2$, the numbers of factors $p_1, p_2$ are determined as long as the block sizes $d_1, d_2$ are determined, because they need to satisfy $(p_1, p_2) = (D_1/d_1, D_2/d_2)$. Due to the short period of revision, we did not implement new real data but will consider that hereafter. However, the new implementation with cross-validation demonstrates the robustness of our approach. Thank you again.
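The relation between image size, block size, and number of factors discussed in the thread above can be made concrete with a small sketch of a block-reshaping operator. This is a hypothetical implementation of an operator in the spirit of $\mathcal{R}(\cdot)$ described in the rebuttals (divide a $D_1\times D_2$ image into a $p_1\times p_2$ grid of $d_1\times d_2$ blocks, vectorize each block, and stack them); the function names and layout conventions are assumptions, not the paper's code.

```python
import numpy as np

def reshape_op(X, d1, d2):
    # Divide a D1 x D2 image into a (p1, p2) grid of d1 x d2 blocks,
    # vectorize each block row-major, and stack as a (p1*p2, d1*d2) matrix.
    D1, D2 = X.shape
    p1, p2 = D1 // d1, D2 // d2          # numbers of factors per axis
    blocks = X.reshape(p1, d1, p2, d2).transpose(0, 2, 1, 3)
    return blocks.reshape(p1 * p2, d1 * d2)

def reshape_op_inv(Y, D1, D2, d1, d2):
    # Inverse operation: recover the original D1 x D2 image layout.
    p1, p2 = D1 // d1, D2 // d2
    blocks = Y.reshape(p1, p2, d1, d2).transpose(0, 2, 1, 3)
    return blocks.reshape(D1, D2)
```

For the paper's $48\times 48$ images with $6\times 6$ blocks this gives $(p_1, p_2) = (8, 8)$, i.e. $64$ blocks of $36$ pixels each, consistent with $(p_1, p_2) = (D_1/d_1, D_2/d_2)$.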
Summary: The paper proposes an approach for regression where common model coefficients can be modulated by sample-specific data. In particular, here the approach is applied to images (or matrices), where sample-specific data is derived from patch similarities (measured through rotation correlation), reflecting intra-image homogeneity. The model parameters (matrices of coefficients) are learned through penalised least-squares, with energy-minimising Frobenius norms on the unknown coefficient matrices to make the problem more well-posed. The authors then draw correspondences between their model and self-attention under some assumptions. They then show analytically convergence rates and error bounds for their model and the AD-style optimisation algorithm. Finally, simulation results and empirical results on brain imaging data show lower prediction errors compared to related methods. Strengths: The paper provides an interesting connection between varying-coefficient models and self-attention under relatively mild conditions. The ablation study in appendix B.1 is interesting and helps highlight the contribution of individual coefficients in different cases. The method proposed has applications for images and other matrix-valued data. The figures are very helpful in conveying the aspects of the coefficient matrices. Weaknesses: Claims of superiority are not supported by hypothesis tests between the proposed method and the other 4 methods. (post-rebuttal: this is now OK) For real data analysis, it is unclear how many subjects were selected, and with which diagnosis. Extracting 10 slices per subject is fine, but are they evaluated as independent or are results provided per-subject? (post-rebuttal: OK) In addition, predicting cognitive scores from brain imaging is a very well studied task (in particular the MMSE-ADNI combination), not only cross-sectionally but also longitudinally. See e.g. 
10.1016/j.neuroimage.2011.09.069, 10.1016/j.neuroimage.2014.03.036, 10.1109/PRNI.2015.28, or the review in 10.1016/j.jalz.2016.11.007. Here, the choice of the metric 'improvement compared to sd of scores in test set' obscures performance with respect to these and other previous work. I would suggest providing MSE and MAE in the original MMSE scale for more clarity. (post-rebuttal: OK) Technical Quality: 2 Clarity: 3 Questions for Authors: Is the performance of the method proposed significantly different from the other methods, both for the simulation and the real data case? (post-rebuttal: OK) How is the lack of independence between slices from the same subject addressed in testing, in particular for cross-validation and error metric computation? (post-rebuttal: OK) Confidence: 2 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: Limitations are not in a separate section and consist of one sentence. (post-rebuttal: OK) There is no real negative social impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your review. >**Q1**: Is the performance of the method proposed significantly different from the other methods, both for the simulation and the real data case? (Claims of superiority are not supported by hypothesis tests between the proposed method and the other 4 methods.) **A1**: Thanks for your question. For the simulation, where the results are from 100 repetitions, we use a z-test showing that the superiority of the proposed method over the other methods is significant. As for the real brain imaging analysis, we did not conduct significance tests in our original experiments because of the chronological division considered. But to have a rough understanding of the robustness of the proposed method, we retest the performances of these methods by 5-fold cross-validation. The results of the 5 folds in the real study are shown below.

| Methods | AIR | LRMR | TRLasso | DKN | ViT |
| :--------: | :--------: | :--------: | :--------: | :--------: | :--------: |
| Fold 1 | 3.183 | 3.712 | 3.359 | 3.297 | 3.321 |
| Fold 2 | 3.193 | 3.744 | 3.325 | 3.283 | 3.349 |
| Fold 3 | 3.127 | 3.699 | 3.258 | 3.284 | 3.215 |
| Fold 4 | 3.129 | 3.700 | 3.232 | 3.226 | 3.282 |
| Fold 5 | 3.093 | 3.717 | 3.286 | 3.214 | 3.243 |
| Mean (S.D.) | 3.145 (0.042) | 3.715 (0.018) | 3.292 (0.051) | 3.261 (0.038) | 3.282 (0.055) |

Based on the mean and S.D. of the obtained RMSE, we may conduct a (rather rough) t-test and report the obtained p-values below.

| Tests | AIR vs LRMR | AIR vs TRLasso | AIR vs DKN | AIR vs ViT |
| :--------: | :--------: | :--------: | :--------: | :--------: |
| P-value | 0.000 | 0.001 | 0.002 | 0.002 |

These results suggest that the proposed method is significantly better than the others. We will add the results of the significance tests in the revised version. >**Q2**: The choice of the metric 'improvement compared to sd of scores in test set' obscures performance with respect to these and other previous work. 
I would suggest providing MSE and MAE in the original MMSE scale for more clarity. **A2**: Thanks for your suggestion. We have reorganized the table with RMSE and added the results of hypothesis tests from the 5-fold cross-validation for a reference of significance. Please see the table in A1. >**Q3**: For real data analysis, it is unclear how many subjects were selected, and with which diagnosis. How is the lack of independence between slices from the same subject addressed in testing, in particular for cross-validation and error metric computation? **A3**: Thanks for your questions. For the training set, the 7270 images are obtained from 727 subjects in the ADNI&GO phases, where 229 are normal, 310 are with MCI (mild cognitive impairment) and 188 are with AD. For the test set, the 3320 images are obtained from 332 subjects in the ADNI2 phase, where 140 are normal, 91 are with MCI (mild cognitive impairment) and 101 are with AD. We will explain this in the appendix. Extracting 10 slices can be viewed as a kind of augmentation of the dataset and we take the obtained images as independent samples, which is a common practice in data augmentation. Despite the dependency, treating them as independent can be useful for training models, as it effectively increases the diversity and size of the dataset. As for the dependency issue when testing, in this revision, we conducted another experiment with 1 middle slice per subject for validation and testing (total of 332 subjects), and 10 slices per subject for training. It shows that the result is very close to the previous one, which also demonstrates the robustness of our method. >**Q4**: Limitations are not in a separate section and consist of one sentence. **A4**: Thanks for your suggestion. We will add a paragraph about limitations stating that: “On the other hand, we realize that the AIR framework also has limitations. 
First, the AIR model is designed for data with heterogeneous internal relationships, and its capability to handle more general data is more or less restricted. When there are minimal heterogeneous effects, its performance will be similar to an ordinary linear model. Second, as discussed earlier, our framework could be viewed as a simplified version of the Vision Transformer; however, such simplifications may also reduce its approximation power for more complex scenarios. Furthermore, this paper primarily investigates the linear form of AIR. Although the linear form performs well in the cases of interest, exploring the generalization of the model in future work is still worthwhile.” --- Rebuttal Comment 1.1: Comment: Thank you for the improvements, results are clearer now. I could not find the hypothesis test results in the updated paper, please include (in appendix if needed). Also, if using a t-test, it should be a paired t-test not just a two-sample t-test since the split in folds is the same across methods. Nevertheless I re-ran paired t-tests on the data provided here in table A1 and the claim of superiority seems to hold, so I am upgrading my score. Note that caption of table 2 in paper seems incorrect, these should be improvement in RMSE, not sd. --- Reply to Comment 1.1.1: Comment: Many thanks for your reply. According to the rules of the conference we cannot update the manuscript during this period. We will consider the results of the paired t-tests and add them in the appendix. We will also make the other adjustments aforementioned in the revised version. Thank you again.
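The reviewer's re-run of paired t-tests on the fold-wise RMSEs from Table A1 can be reproduced with a short sketch. The fold values are taken directly from the table above; the 1% critical value (4.604 for 4 degrees of freedom) is a standard t-table entry, and the per-method p-values themselves are not recomputed here:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(a, b):
    # Paired t statistic over fold-wise differences (n - 1 = 4 d.o.f.).
    d = [x - y for x, y in zip(a, b)]
    return mean(d) / (stdev(d) / sqrt(len(d)))

air = [3.183, 3.193, 3.127, 3.129, 3.093]
baselines = {
    "LRMR":    [3.712, 3.744, 3.699, 3.700, 3.717],
    "TRLasso": [3.359, 3.325, 3.258, 3.232, 3.286],
    "DKN":     [3.297, 3.283, 3.284, 3.226, 3.214],
    "ViT":     [3.321, 3.349, 3.215, 3.282, 3.243],
}
# Two-sided 1% critical value for t with 4 degrees of freedom is 4.604,
# so |t| > 4.604 implies p < 0.01 for each comparison.
t_stats = {name: paired_t(air, rmse) for name, rmse in baselines.items()}
```

The pairing matters, as the reviewer notes: because all methods share the same fold split, the fold-to-fold variation cancels in the differences, giving a sharper test than an unpaired two-sample t-test on the means and standard deviations.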
NeurIPS_2024_submissions_huggingface
2024
Summary: The authors present an interesting approach to individualization of regression for heterogeneous data. They first set up an individualized model with additive homogeneous and heterogeneous components containing matrix-valued coefficient matrices to be learned. They nicely establish the equivalence of their model under mild assumptions with scaled dot-product attention and linear attention approaches and provide a straightforward alternating minimization scheme. They provide both theoretical analysis (two theorems showing geometric decay of optimization error and prediction error under RIP) and experimental results (one synthetic and one real-world application) for their approach. Strengths: The paper has a number of strengths. It is well written and relevant to current needs in precision medicine. Although it is not clear to me that it is necessary, the relation with attention is interesting. The authors' theory holds under realistic assumptions, and they support their findings with a synthetic and a real world example. Weaknesses: #### Major Weaknesses I may have missed the anonymized link, but the authors have not provided any code for review, which I find problematic given the applied nature of the paper and the fact that implementation seems straightforward. The individualized coefficients shown for the ADNI data are very rough due to blocking in D_i^ori. Is there some way to ameliorate this in future work? Technical Quality: 3 Clarity: 3 Questions for Authors: #### Major questions: Why is there no anonymized link to code? Why are the individualized coefficients so blocky for ADNI, and what could be done to mitigate this undesirable effect? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: A more detailed limitations section should be added in the supplement. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your review. >**Q1**: Why is there no anonymized link to code? **A1**: The code was uploaded with the submission to the “Supplementary Material” part as a zip file. Due to the request for anonymity, it is not public at the moment. We will provide a public link in the camera-ready version. >**Q2**: Why are the individualized coefficients so blocky for ADNI, and what could be done to mitigate this undesirable effect? **A2**: We would like to clarify that the blocky effect is essentially caused by the reshaping operation (from $X_i^{\text{ori}}$ to $X_i$). The reshaping operator $\mathcal{R}(\cdot)$ divides original images into blocks, vectorizes and stacks them for preprocessing. The internal relation matrix $A_i$ aggregates blocks in $D^{\text{ori}}$, and then we obtain the individualized coefficients $D_i^{\text{ori}}$. This is the reason that $D_i^{\text{ori}}$ appears blocky. On the other hand, if we consider internal correlations among smaller blocks, the blocky effect can be mitigated significantly. >**Q3**: A more detailed limitations section. **A3**: Thanks for your suggestion. We will add a paragraph about limitations in the “Discussion” section, stating that: “On the other hand, we realize that the AIR framework also has limitations. First, the AIR model is designed for data with heterogeneous internal relationships, and its capability to handle more general data is more or less restricted. When there are minimal heterogeneous effects, its performance will be similar to an ordinary linear model. Second, as discussed earlier, our framework could be viewed as a simplified version of the Vision Transformer; however, such simplifications may also reduce its approximation power for more complex scenarios. Furthermore, this paper primarily investigates the linear form of AIR. 
Although the linear form performs well in the cases of interest, exploring the generalization of the model in future work is still worthwhile.” --- Rebuttal Comment 1.1: Title: Thank you. Comment: I thank the authors for their responses. My score remains unchanged.
Global Convergence in Training Large-Scale Transformers
Accept (poster)
Summary: The paper considers the theoretical mean-field limit of Transformers where width and depth go to infinity and studies the approximation error and convergence properties of gradient flow with weight decay. Both a residual self-attention layer and a residual feedforward layer are approximated by an ODE which averages the two encoders, whose solution models distribution of the parameters of both blocks throughout the depth of the Transformer. It is shown that the dynamics of the discretized model approximates the Wasserstein gradient flow under some regularity assumptions. Moreover, it is shown under partial homogeneity and universal kernel assumptions that if the Wasserstein flow weakly converges to a stationary distribution, then the discrete model must also converge with arbitrarily small risk. Strengths: * The paper is a nontrivial extension of existing mean-field analyses of two-layer neural networks and ResNets to Transformers, which is challenging due to the existence of both feedforward and attention layers, the latter of which is new in the literature. * The assumptions for the networks and input data are quite general and can encompass e.g. the softmax mechanism and in-context learning settings. * The analyses and techniques are sufficiently novel and detailed and serve as an initial characterization of the dynamics of large scale Transformers, in particular the global convergence analysis of Section 4. Weaknesses: * The obtained error bounds (Theorem 3.1) are so large that they likely have little practical relevance beyond the initial moments of training. In particular, the hidden constants are in the worst case super-super-exponential with respect to the time horizon. Specifically, the bound is exponential in $\phi_T(N,D,C\times B_\tau)$ where $B_\tau$ is exponential in $R_\tau$, which in turn is exponential in $\tau$. 
While bounds exponentially diverging in time are common in the literature (which is a priori expected from an ODE discretization argument) as mentioned in the paper, the dependency in Theorem 3.1 is much worse and some effort needs to be made to at least justify this. For example, one exponentiation seems to be removable if e.g. $\phi_T$ is bounded. Currently the dependency in the hidden constants is not made obvious without going through the proofs in the Appendix, which I feel should be addressed more transparently. * The convergence analysis (Theorem 4.1) also suffers from the same problem. The analysis requires a time horizon $\tau_0$ large enough so that $W_2(\rho^{(\tau)}, \rho_\infty)$ is exponentially small (w.r.t. $R_\infty$) compared to the desired error $\epsilon$. This horizon is then fed into the approximation bound of Theorem 3.1, again yielding very large constants unless convergence is achieved in the very early stages of training. (This issue seems less critical since it only affects the $C_1$ term, although the $C_2\lambda$ term is still super-exponential in $R_\infty$.) * The recurring rate $L^{-1}+\sqrt{\log L/M}$ is not shown to be tight, leading me to further question the utility of the provided bounds. Why is the scaling for $L,M$ different? How does the rate compare with the analysis for ResNets or other ODE systems with 'width' & 'depth' dimensions? * A nitpick: the terms 'universal constant' or 'universal bound' are used quite frequently; however, they should be reserved for constants that do not depend on *any* problem parameters. Technical Quality: 4 Clarity: 2 Questions for Authors: * Can the dependency on time horizon be alleviated with stronger model assumptions? * Is the $L^{-1}+\sqrt{\log L/M}$ rate tight/expected? 
(see Weakness) * The widths of the feedforward layer and the attention layer are both set equal to $M$; however, the number of heads cannot typically be expected to be very large compared to the width of feedforward layers. Does the analysis generalize to when they are different? * Some recent papers [1,2] have also studied mean-field limits of Transformers from different perspectives; it would be nice to add a comparison in the related works section (although the latter is a contemporary work). [1] https://openreview.net/forum?id=xm2lU7tteQ [2] https://arxiv.org/abs/2405.15712 Confidence: 3 Soundness: 4 Presentation: 2 Contribution: 2 Limitations: See Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments and suggestions. They are extremely helpful in improving our work and presentation. Below, we address your concerns point by point. **Q1**: The error bounds in Theorem 3.1 are so large they may be impractical beyond initial training moments, with hidden constants possibly being super-super-exponential over time. The hidden constants' dependency is unclear without reviewing the Appendix proofs and should be addressed more transparently. Additionally, can stronger model assumptions alleviate the time horizon dependency? One exponentiation might be removable if $\phi_T$ is bounded. **A1**: We thank the reviewer for this insightful suggestion and careful clarification on this point. We agree that the hidden constants in Theorem 3.1 have sensitive dependencies on the time horizon, which is partially due to the structural complexities of the Transformers. Since Assumptions 2-3 are mild, allowing additional parameter norm factors in the bound, it is inevitable that the constant dependency would accumulate, thus easily causing super-exponential constants. Our goal is to take the first theoretical step in analyzing Transformers using mean-field tools. We can always make $L$ and $M$ sufficiently large to ensure vanishing approximation error. Though the result becomes asymptotic, our work lays the groundwork for future mean-field theory for Transformers. Yes, we could alleviate the dependency with stronger model assumptions, as $\phi_T$ may not depend on the Transformer output magnitude. We have noticed a lot of interest in identifying the optimal choice of $\phi_T$, i.e., the Lipschitz constant of the Jacobian matrix of the self-attention term. This is a particularly challenging frontier question. For instance, [R1] suggests that $\phi_T$ can be bounded by $\sqrt{N/D} + \mathrm{poly}(\|T\|_F)$, where $\mathrm{poly}(\cdot)$ denotes a polynomial function. 
In the context of $l_2$ self-attention, [R2] finds $\phi_T$ to be $\sqrt{N \log N / D}$, which notably does not depend on $\|T\|_F$, and [R3] demonstrates that for $l_1$ distance metrics in attention layers, $\phi_T$ could be $\sqrt{D \log N}$. We will include this illustration if our submission is accepted. [R1] Dasoulas, G., Scaman, K., & Virmaux, A. (2021). Lipschitz normalization for self-attention layers with application to graph neural networks. ICML. [R2] Kim, H., Papamakarios, G., & Mnih, A. (2021). The Lipschitz constant of self-attention. ICML. [R3] Vuckovic, J., Baratin, A., & Combes, R. T. D. (2020). A mathematical theory of attention. arXiv preprint arXiv:2007.02876. **Q2**: The convergence analysis in Theorem 4.1 also has issues with large constants. It requires a large time horizon $\tau_0$ to make $W_2(\rho^{(\tau)},\rho_\infty)$ exponentially small relative to the error $\epsilon$. This leads to large constants in Theorem 3.1's approximation bound unless early-stage convergence occurs. While this mainly affects the $C_1$ term, $C_2$ remains super-exponential in $R_\infty$. **A2**: We thank the reviewer for the crucial clarification on this point. We agree that $C_1$ heavily depends on the choice of $\tau_0$, and $\tau_0$ heavily depends on the prefixed $\epsilon$. Nevertheless, we can always make $L$ and $M$ sufficiently large (independent of the $\tau_0$ choice) to ensure the risk is asymptotically bounded. We will focus more on refining these constants in future work, after these initial steps towards extending mean-field analysis to the realm of Transformers. **Q3**: The rate $L^{-1} + \sqrt{\log L/M}$ is not shown to be tight. Why is the scaling for $L$ and $M$ different? How does this rate compare to the analysis for ResNets or other ODE systems with 'width' and 'depth' dimensions?
**A3**: The different scaling for $L$ and $M$ also appears in the analysis for ResNets (see [20] Theorem 9), where $L$ depends linearly and $M$ depends quadratically on $\epsilon$. We think such rates are indeed expected. Intuitively, the different scaling comes from different sources of approximation error: $L^{-1}$ arises from the discretization of the ODE into $L$ small steps, and $M^{-1/2}$ comes from the implicit use of Hoeffding's inequality considering the average of the outputs of $M$ nodes/heads. Furthermore, since our result in Theorem 3.1 considers the maximum difference across the entire time interval, we add an additional $\log L$ term as we apply a union bound. **Q4**: The terms 'universal constant' or 'universal bound' are used quite frequently; however, they should be reserved for constants that do not depend on any problem parameters. **A4**: We thank the reviewer for the careful reading. We will fix this issue in the revised version. **Q5**: The widths of the feedforward layer and the attention layer are both set equal to $M$; however, the number of heads cannot typically be expected to be very large compared to the width of feedforward layers. Does the analysis generalize to when they are different? **A5**: We thank the reviewer for the great question. The results can be easily extended when the widths of the feedforward layer ($M_1$) and the attention layer ($M_2$) both go to infinity, and we conjecture that the main results still hold by replacing $M$ with $\min\{M_1, M_2\}$. However, by the nature of the mean-field analysis, we do need $M_2$ to be sufficiently large to achieve the theoretical convergence. The shift from discrete parameters to parameter distributions requires a large number of heads to provide a close approximation for the "average". **Q6**: Two recent papers referenced by the reviewer have also studied mean-field limits of Transformers from different perspectives. It would be nice to add a comparison.
**A6**: We thank the reviewer for pointing out the related works. We will cite them and add discussions in the revision. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed reply. I will maintain my score as I still feel the technical issue of Q1 ends up weakening the relevance of the paper, however I personally do think the approach and ideas are very interesting.
Summary: The authors theoretically investigate the mean-field limit of residual Transformer models. As depth and width go to infinity, thanks to the residual structure, the forward pass can be modeled as an ordinary differential equation, and the training gradient flow converges to a Wasserstein gradient flow. The authors show well-posedness of this PDE, provide non-asymptotic time-uniform bounds on quantities of interest, and show global convergence to the global minimum of the empirical risk as $L, M/\log(L) \to +\infty$ and $\lambda \to 0$. Strengths: The tackled problem is extremely relevant and the derived results are sound. Given the size of the work, the authors did a good job overall in organizing the contents. Weaknesses: 1. The content is intense and the presentation doesn't help a novel reader. I believe polishing of the manuscript would improve the quality. More concretely, I would add a small paragraph in the main manuscript summarizing the main ideas behind the novelties. In the appendix instead it is really easy to get lost in a long proof, losing track of the main goal. I suggest including a small paragraph at the beginning of each section with the main steps required for the final proof, to guide the reader and make even the technical part more accessible (something along the lines of what the authors did in D.3 would be perfect). 2. While this did not and will not impact my score, for manuscripts of this size I believe at least a really small experimental section (even in the appendix) to test the claims would make the results more sound. Technical Quality: 4 Clarity: 2 Questions for Authors: On the technical side: 1. While I understand the need of assumption i) in Theorem 4.1, it is not obvious to me how stringent it is. Already for an extremely simple problem like $Q(\rho) = \int \rho \log(\rho) + \lambda \int x^2 d\rho$ the minimizer is a Gaussian, which has unbounded support.
Given the last example, the heuristic justification given by the authors seems to not be sufficient. Do the authors see a way to transfer this condition to a condition on the growth rate of $R(\Theta)$ at infinity? 2. The immediate extension of the deterministic case would be to consider the full Fokker-Planck equation, as it may emerge as a model of stochastic optimization. Already in the simplest case, the PDE under consideration would be $\partial_\tau \rho^{(\tau)} = \mathrm{div}(\rho^{(\tau)} \nabla U) + \sigma \Delta \rho^{(\tau)}$. The stationary distribution in this case would be $\rho \propto e^{-U/\sigma}$, which has unbounded support for $U$ defined on all of parameter space. This suggests that already in the simplest case (just pure isotropic diffusion added), the extension of the results may not be straightforward. I would appreciate the authors' comment on this, and I would like to know if they see an easy way to weaken that hypothesis, as it's something one has no control on. 3. While mentioned in the proofs, it is not clear in the main manuscript what $\delta$ is. For example, in Theorem 4.1 the statement is "with high probability", but it is not mentioned with respect to what. I would add a few words to the theorems/lemmas that hold with high probability to specify with respect to what. 4. It sounds strange to me that Theorem 3.1 gives a bound on a compact set, while Theorem 4.1 on an unbounded time horizon, with what appears to be exactly the same constant. Is the constant in Theorem 4.1 really the same as the one in 3.1, or does it include other terms? Confidence: 4 Soundness: 4 Presentation: 2 Contribution: 4 Limitations: The authors correctly assessed the limitations in the manuscript, I have no further suggestions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for your supportive comments, which are greatly helpful for us to improve our work. We address your questions as follows. **Q1**: The content is intense and the reviewer suggests polishing the manuscript. Adding a summary of the main ideas in the manuscript and brief introductory paragraphs in each appendix section would be helpful. **A1**: We thank the reviewer for this great suggestion. We agree that it would be helpful to briefly summarize the main ideas in the manuscript and the main proof ideas in the appendix. (If we can efficiently utilize the additional page for the camera-ready version, we may even add a brief proof sketch if space permits.) We will revise the paper following your suggestions. **Q2**: The reviewer believes that at least a really small experimental section (even in the appendix) to test the claims would make the results more sound. **A2**: In practice, we observe that using vision Transformers to fit simple datasets such as MNIST can easily achieve near-zero training loss, indicating convergence to a global minimum. This observation reinforces our confidence that our results are valid and that our Transformer structure can minimize the training loss to zero as $L$ and $M$ increase. We will also conduct experiments investigating the relation between discrete Transformers and their continuous limits and include the results in the revision. **Q3**: Assumption i) in Theorem 4.1 is ambiguous as to how stringent it is, without sufficient heuristic justification. A related example: the minimizer of $Q(\rho)=\int \rho\log(\rho) + \lambda \int x^2\,d\rho$ is a Gaussian that has unbounded support. Do the authors see a way to transfer this condition to the growth rate at infinity? The immediate extension of the deterministic case would be to consider the full Fokker-Planck equation, whose stationary distribution has unbounded support. **A3**: We thank the reviewer for the insightful comments about the extra entropy and diffusion terms.
Since these two questions are related, we have combined them into one and provided a comprehensive response. Yes, several papers [13,24,45] in the mean-field analysis literature study the same terms you mentioned, considering noisy gradient descent (NGD). By introducing an additional random Gaussian noise term $N(0,\lambda_2)$ into the gradient flow/descent formula, the PDE evolution of $\rho^{(\tau)}(\theta,w)$ becomes a diffusion process with an additional term $\nabla^2\rho^{(\tau)}(\theta,w)$ based on the Fokker-Planck equation. Consequently, the stationary point of the PDE is the local minimum of the regularized risk function with a regularization term $\int \rho\log(\rho) + \lambda \int x^2\,d\rho$. In this case, we note that the minimizer of $\tilde{Q}(\rho)$ will not have compact support. In comparison, (noiseless) gradient flow minimizes the risk function with $\lambda \int x^2\,d\rho$ regularization, and it remains possible that the minimizer still has compact support. While the extension to NGD is very interesting, after careful consideration, we believe this extension does not fit within the framework and assumptions of our paper. The key reason is that our Assumptions 1-3 are much milder than the corresponding assumptions in [13,24,45], as the constant terms in our assumptions also depend on the parameter norms to accommodate Transformer structures. Given the dependence on parameter norms (e.g., $(1+||\theta||)$ in Assumption 2(ii)), we must ensure that the parameter distribution is always bounded at any time $s \in [0,\tau]$ to apply these assumptions. If we consider a simpler structure like residual networks with stronger assumptions, then the extension you mentioned could perfectly fit or even enhance the theoretical proof. Lastly, we hope to clarify the first assumption in Theorem 4.1 to alleviate concerns about its stringency.
Although this assumption is uncheckable from the given assumptions, the weight decay regularization can penalize the parameter norms, potentially leading to a $\rho^{(\tau)}$ with compact support. Additionally, this assumption is more likely to hold for simpler learning tasks: if the true label generation process is defined by a $\rho^*$ with compact support, then a bounded solution suffices to minimize the loss for such a simple learning task. **Q4**: It is not clear in the main manuscript what $\delta$ is. The reviewer would add a few words to the theorems/lemmas that hold with high probability to specify with respect to what in Theorem 4.1. **A4**: We thank the reviewer for the careful reading and the great suggestion. The probability is with respect to the parameter initialization $\Theta^{(0)}=\\{\theta^{(0)}\_{t,j},w^{(0)}\_{t,j}\\}_{t,j}$. We will clarify it in the revision. **Q5**: It sounds strange that Theorem 3.1 gives a bound on a compact set, while Theorem 4.1 on an unbounded time horizon, with what appears to be exactly the same constant. Is the constant in Theorem 4.1 really the same as the one in 3.1, or does it include other terms? **A5**: We thank the reviewer for the question and careful reading. The constant in Theorem 4.1 is indeed the same as that in Theorem 3.1, but with $\tau$ fixed at $\tau_0$. To clarify how we obtain the constant in Theorem 4.1, we first select a large time horizon $\tau_0$ and apply Theorem 3.1 with respect to this specific choice to bound $|\widehat{R}^{(\tau_0)} - R(\rho^{(\tau_0)})|$. The results follow if we can show $\sup_{\tau \geq \tau_0} R(\rho^{(\tau)})$ is asymptotically smaller than $\epsilon + \lambda$. This is why $C_1$ is dependent on $\tau_0$ in Theorem 4.1. Further illustration can be found in the proof steps of Theorem 4.1 (pages 34-36), where "$\tau$ being large" means exactly $\tau \geq \tau_0$. --- Rebuttal Comment 1.1: Comment: First of all, I would like to thank the authors for their thorough rebuttal.
I am satisfied with the answers and I would like to keep my score, conditionally on the authors addressing Q1 in the revised version.
Summary: This paper analyzes gradient flow on Transformer networks. It is shown that for wide and deep Transformers, gradient flow converges to the Wasserstein gradient flow and reaches a global minimum. Strengths: This is a well-written paper and it seems the results are strong and clean. However, I have very little background in this area and cannot verify the correctness of the statements. I hope the AC can find another qualified reviewer and delete this review if possible. Weaknesses: N/A Technical Quality: 3 Clarity: 3 Questions for Authors: N/A Confidence: 1 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your recognition of our paper. Your comments provide valuable guidance on our presentation. In this work, we aim to take the initial steps towards studying the theoretical optimization guarantees of Transformer models and, for the first time, prove the global convergence property via gradient descent-based methods.
Summary: This paper studies transformers in their mean field limit. They use a ResNet architecture with infinite depth. The residual blocks are made of two steps: one transformer step and one standard MLP. They also use infinite width for both steps. Then, they show that the gradient flow in this limit is well posed, by using the notion of Wasserstein gradient flows since the mean field model is defined on a space of measures. They also show consistency with the original discrete transformer model. Then, they study the global convergence properties of the gradient flow: they prove a sort of global convergence result, which requires some hypotheses. Strengths: The writing of the paper is quite clear. Proving mean field convergence results has been studied a lot in the literature on theoretical deep learning. The result of the authors is interesting because they deal with the transformer architecture, which ends up being a sum of two maps in the mean field representation. The authors can prove similar results in this context, which are new to the best of my knowledge of the literature. Weaknesses: One of the main weaknesses for a submission to NeurIPS is the format of the paper. The 9 pages of the main paper are devoted to a (well done) explanation of the main results. There are no concrete proof ideas in the main paper. More than 40 pages of technicalities are necessary to validate their result. The novelty of the results is the fact that the transformer architecture is taken into account in the mean-field limit. However, the stated results do not show to what extent this architecture helps in obtaining the results. In fact, the transformer architecture appears more as something that hinders the standard proof of global convergence. The paper does not shed light on the interest/peculiarity of the transformer architecture.
I would have appreciated if some simple experiments could be done to assess the range of validity of the results and in particular to test the importance of the transformer architecture. The hypothesis in Section 4 to obtain a global convergence result seems not checkable in practice. In my opinion, it is not a global convergence result but rather an "if" theorem. Although I know that this assumption has been put forward in other previous papers, it is still a very demanding hypothesis. Technical Quality: 2 Clarity: 3 Questions for Authors: What are the specific features of the transformer architecture that are necessary to make the results valid? In other words, can you extend your results to more general architectures than transformers? The justification of the hypothesis on the separability property in C5 is difficult to understand. Can you elaborate more on this assumption and try to report precisely on the progress of the literature on this assumption? I do think it is not fair to claim global convergence for this kind of "if" theorem with a hypothesis that is uncheckable. I suggest the authors rephrase their contributions. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable suggestions. In the following, we give point-by-point responses to your questions. **Q1**: One of the main weaknesses for a submission to NeurIPS is the format. The main paper is devoted to a (well done) explanation of main results, but insufficient explanations of the proof ideas are given in the main paper. **A1**: Thank you for your comment. Since our work is novel in the context of Transformer models, we believe comprehensive explanations of the main results are necessary to clearly convey the key theoretical messages to readers. Consequently, there is limited space for including the concrete ideas of proofs due to their highly technical nature. We have made efforts to address the lack of proof sketches. In the appendix, we included proof ideas for the most important results before presenting the rigorous proofs. For example, we discussed the idea of obtaining the adjoint ODE solution of $p_\rho$ in Appendix C.4, and the proof idea of Theorem 3.1 at the beginning of Appendix D. For the proofs of the main results, we also explicitly listed the goal of each proof step to enhance understanding. We agree that it would be helpful to explain the ideas of proof in the main paper. If this paper is accepted, we will revise the paper structure and also utilize the one additional page for the camera-ready version to add a proof sketch. **Q2**: The novelty of the results is to take the transformer architecture into account in the mean-field limit. However, the results do not show how this architecture helps in obtaining the results. Can you extend your results to more general architecture than transformers? In addition, some simple experiments could be helpful to assess the range of validity of the results and in particular testing the importance of the transformer architecture. **A2**: We thank the reviewer for the insightful question. 
The purpose of our paper is to enable mean-field analysis of Transformers, so that global convergence guarantees can be established for Transformers. The complexity of the Transformer structure makes achieving such convergence results particularly challenging, and existing mean-field tools cannot be directly applied. Therefore, our paper's contribution lies in overcoming these difficulties. While our focus is on Transformers, our result can indeed cover (or be easily extended to cover) more general architectures. Importantly, Assumption 4 only requires partial 1-homogeneity. This can potentially enable mean-field analysis of very general architectures and activation functions. Regarding experiments, in practice, we observe that Vision Transformers can fit simple datasets such as MNIST and achieve a near-zero training loss, indicating convergence to a global minimum. We will include experimental results on validating the mean-field limit of Transformers in the camera-ready version. **Q3**: Assumptions in Theorem 4.1 are not checkable in practice; it is rather an "if" theorem. Although these assumptions appear in other previous papers, they are still demanding. **A3**: We thank the reviewer for the question and for acknowledging this common issue in the literature. Indeed, many other papers in the mean-field literature make global convergence claims using similar assumptions. Although our paper retains these untestable assumptions to prove global convergence, we emphasize that we have already made significant contributions towards practical assumptions by: 1. Weakening these "if" assumptions compared to [41], as we do not require full homogeneity. 2. Weakening Assumptions 1-3 compared to [19, 20, 41], whose assumptions cannot be extended to Transformer structures.
Therefore, while validity concerns about the “if” Theorem still exist, we believe that weakening these assumptions to a milder version is a significant step towards the ultimate goal of achieving “checkable” assumptions. **Q4**: Discussions about the hypothesis on the separability property in Appendix C.5 are difficult to understand. Can you elaborate more, and report precisely on the progress of the literature on this assumption? **A4**: The first assumption in Theorem 4.1 requires that the parameter distribution $\rho^{(\tau)}$ is concentrated in a bounded region across all time. Though this assumption is uncheckable, the introduction of the regularization parameter $\lambda$ penalizes the parameter norms, which may lead to a $\rho^{(\tau)}$ with compact support. Additionally, this assumption is more likely to hold for simpler learning tasks: if the true label generation process is defined by a $\rho^*$ with compact support, then a bounded solution suffices to minimize the loss for such a simple learning task. The second assumption concerns the separation property. It requires that the support of the convergence point $\rho_\infty$ “separates” a small and a big sphere at any local point $\theta_0 (w_0)$ regarding the 1-homogeneous parameter part, and that the support always “spans the full disk” $\mathcal{K}$ used for universal approximation. A mild case that satisfies the “separation part” of this assumption is that the origin $0_{\text{dim}\theta+\text{dim}w}$ is an interior point of the support for some $t^*$. This condition is relatively mild and is generally satisfied, though no paper has rigorously shown it when considering deep networks where $L$ tends to infinity. The challenge arises more in verifying the “spanning full disk” part, i.e., that $\text{supp}(\rho_\infty(\cdot,t))$ extends to encompass the entire space $\mathcal{K}$.
While no paper can prove this property for the limit $\rho_\infty$, [20], under a similar theoretical setting, shows that if the initial parameter distribution $\rho_0(\cdot,t^*)$ spans $\mathcal{K}$, then $\rho^{(\tau)}(\cdot,t^*)$ spans $\mathcal{K}$, i.e., this expansive support property is maintained at any finite time. We will revise Appendix C.5 with clearer statements and more detailed discussions. --- Rebuttal Comment 1.1: Comment: I thank the authors for their answers. About Q2/A2: I do not think that an experiment on the MNIST dataset and just claiming global convergence would be a valuable addition to illustrate the claims in the paper. Studying the size of parameters to obtain this global convergence would be more interesting. In any case, having read the answers to Q3 and Q4, I reckon this work appears as an improvement over the current literature and could motivate some further progress. This motivates me to increase my score. --- Reply to Comment 1.1.1: Comment: Thank you for your suggestions on the experiments and for increasing your score! We will make sure to add experiments to study the size of parameters that can guarantee global convergence.
NeurIPS_2024_submissions_huggingface
2024
Summary: The paper presents a rigorous theoretical analysis of the global convergence properties of gradient flow in training Transformers in the setting of in-context learning. By constructing the mean-field limit, the authors show that as the model's width and depth increase to infinity, the gradient flow converges to a Wasserstein gradient flow. This convergence ensures that the training process reaches a global minimum when the weight decay regularization parameter is sufficiently small. The paper introduces techniques adapted to Transformers, leveraging partial homogeneity and local Lipschitz smoothness to demonstrate the close approximation between discrete and continuous models and to establish the global convergence of gradient flow. Strengths: 1. The paper studies the Transformer model in the context of in-context learning. Although the mean-field approach has been applied to infinite width and depth ResNet models before, the Transformer model is currently the most widely used and thus worthy of detailed study. This paper extends the analysis by incorporating two distinct encoders, filling the gap in understanding the Transformer model in infinite limits. 2. The paper introduces new techniques by extending previous approaches used for ResNet models. The authors refine the analysis by assuming only partial homogeneity and local Lipschitz smoothness, which is a key extension. The idea of considering the continuous dynamics as a function of the average behavior of f and h is new and interesting. 3. The final result has a mild requirement for the dependence of depth and width, where one would only need depth $L = \Theta(\epsilon^{-1})$ and Transformer width $M = \Theta(\epsilon^{-2} \log (\epsilon^{-1}))$ to achieve a loss as small as $\epsilon$. (However, the reviewer has some questions about the result, so this strength may still be questionable.) 4. The paper is well-organized and clearly written.
The authors effectively communicate the ideas, contributions, approaches, and limitations of their work, making it accessible and comprehensible to the reader. Weaknesses: ## Potential fallacy in the claim of Corollary 4.1: 1. The reviewer is convinced by most results presented in the paper, but has concerns about the claim in Corollary 4.1. The main issue arises from the fact that the authors treat many objects as constants, even though they clearly depend on some key quantities. In short, the reviewer thinks that $C_1$ in Theorem 4.1 depends significantly on $\epsilon$, which makes the claim in Corollary 4.1 questionable. The concern stems from the observation that $C$ in Theorem 3.1 may depend exponentially on $\tau$. Following the proof of Theorem 4.1, to achieve a loss as small as $\epsilon$, one would need $Q(\rho^{(\tau_0)}) \le \epsilon$. This introduces a non-trivial dependence of $\tau_0$ on $\epsilon$. Given that $C$ may depend exponentially on $\tau_0$, $C_1$ in Theorem 4.1 would thus depend exponentially on $\tau_0$, leading to a complex dependence on $\epsilon$. This dependence cannot be ignored, and treating $C_1$ as a simple constant to achieve Corollary 4.1 appears incorrect. (The reviewer believes this is how Corollary 4.1 is currently derived.) Based on the above, the reviewer thinks that 1. The current approach in proving Corollary 4.1 is flawed. 2. A naive application of Theorem 4.1 might only lead to asymptotic bounds for $L$ and $M$, instead of the linear and quadratic terms presented nicely in the current version of Corollary 4.1. ## Limitations of settings and results Though the reviewer agrees that studying and achieving a good understanding of the case where both width and depth go to infinity is valuable, there are some limitations in the results presented in the paper. 1. The main result, Theorem 4.1, depends on Assumptions 1-4. 
Although the reviewer considers Assumptions 1-4 to be reasonable and likely to hold under certain settings, the authors do not make it clear when these assumptions would hold. Consequently, the setting of the Transformer model might differ from classical Transformer models (e.g., it might only hold under the reparameterized case (2.3) instead of keeping all value, key, query, and output matrices). The reviewer will address this point more in the Questions section. 2. The main result, Theorem 4.1, relies heavily on a few additional assumptions that are not justified in the paper: 1. the Wasserstein gradient flow weakly converges to some distribution $\rho_{\infty}$ 2. the uniform boundedness of $\rho^{(\tau)}$ for large time $\tau$ 3. the separation property for $\alpha_1$ with the support expansion of $\alpha_2$ to $K$. The reviewer acknowledges that some of these assumptions are also adopted in other mean-field approaches and appreciates the authors' efforts to justify them intuitively in Section C.5. However, they are not examined carefully and may harm the validity of the statement. For example, for the second assumption, the reviewer agrees that a large regularization $\lambda$ can implicitly bound the norm of the solution. However, this might require a large $\lambda$, which could make the results less meaningful. Notice that the current bound shown in Proposition 3.2 implies that $R_{\tau}$ may depend exponentially on $\tau$, thus it is not clear that a small $\lambda$ can result in a bounded solution. And based on Theorem 4.1, a small $\lambda$ would be required to achieve a small loss. 3. The result does not provide a non-asymptotic bound for the required training time. In other words, there is no guarantee that $\hat{Q}(\Theta^{(\tau)})$ will be small for any specific $\tau$. 
This is because there is no control over the convergence rate of $\rho^{(\tau)}$ to $\rho^{\infty}$, which would require a stronger assumption beyond the current weak convergence assumption on $\rho^{(\tau)}$.

4. The loss and training dynamics are only considered for the population risk under $\mu$. There is no consideration of the effect of an empirical dataset. This is a clear limitation, but the reviewer can accept this as an initial step toward studying the Transformer model.

## Minor

1. Some parts of the statements are not well organized. For example, Assumption 2 is an assumption about norm bounds of the function $f$ and its gradient. The line "define ReLU'(x) = ..." has no direct connection with the assumption. The reviewer thinks that the authors are trying to argue that by using ReLU'(x) as the gradient of the ReLU function, $f$ defined in (2.3) would satisfy Assumption 2. If this is the case, the reviewer suggests the authors separate the statement into 1) what the assumption on $f$ is, and 2) when the assumption would hold. Another minor point is that the Fréchet derivative still acts differently from the gradient, so some careful treatment might be needed if the authors claim that the result in Theorem 4.1 applies to Transformers with ReLU activation.

Typos:
1. Line 111: it should be "h(Z, w) = W_2 \sigma_M(W_1 Z)" instead of $W_1 H$.
2. Line 156: in Assumption 2, $\Psi_j$ in the description should be $\Phi_P$.
3. Line 86: "~~the~~ our deep Transformer model".
4. Line 901: in the line that bounds $R(\Theta)$, there is no $C_{\lambda}$ in the first line, and the transition from the first line to the second line is currently shown as $1/2 \epsilon + ... \le 1/4 \epsilon + ...$

Technical Quality: 2 Clarity: 3

Questions for Authors:
1. Please address the above concern about Corollary 4.1 carefully. The answer may greatly affect the final rating from the reviewer.
2. Questions related to Assumptions 1-4:
   1.
For the purpose of confirmation, have you shown in which cases Assumptions 1-4 are met for the Transformer model? My current understanding is that they would be met when you reparameterize $f$ as in (2.3), but not when you keep the $W_V$, $W_K$, $W_Q$ and $W_O$ matrices (at least Assumption 2 won't be satisfied in this case). Is that correct? The reviewer is not against the reparameterization and still considers this a Transformer model, but wants to understand the difference and the claim of the paper more carefully.
   2. As the focus of the paper is on the Transformer model, the main results are based on the structure defined in (2.1), (2.2), (2.4) and Assumptions 1-4 on $f$ and $h$. The reviewer wonders whether it is possible to provide a separate corollary for a commonly used form that satisfies Assumptions 1-4, such as (2.3), making it clear through an example where minimal assumptions are used. In other words, are there technical difficulties in showing Assumptions 1-4 for (2.3)? Is the universal approximation capability the key obstacle?

Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3

Limitations: The authors have adequately addressed the limitations of the paper.

Flag For Ethics Review: ['No ethics review needed.']

Rating: 5

Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: We are grateful for your constructive and helpful comments and suggestions. We address your concerns as follows.

**Q1**: Corollary 4.1 only implies asymptotic bounds for $L$ and $M$, instead of the linear and quadratic terms in the current version of Corollary 4.1.

**A1**: Thank you for the crucial clarification on this point. You are correct, and we will revise Corollary 4.1 and remove the comment on how $L_0$ and $K_0$ depend on $\epsilon$. Our result only ensures that the risk can be bounded when $L$ and $M$ are sufficiently large. Our work does not aim to provide a tight bound on $L$ and $M$ with respect to $\tau_0$ or measure the exact reduction in risk. Instead, our goal is to take the first theoretical step toward demonstrating that large-scale Transformers can achieve global convergence via gradient descent-based optimization. Even though our result is asymptotic and does not involve an explicit rate, it is the first of its kind and lays the groundwork for future theoretical optimization guarantees for Transformers.

**Q2**: Assumptions 1-4 may rely heavily on the reparameterized form of the Transformer as in (2.3). The reviewer is not against the reparameterization and considers this still a Transformer model, but wants to understand the difference and the claim of the paper more carefully. Additionally, is it possible to provide a separate corollary for a commonly used form that satisfies Assumptions 1-4, such as (2.3)?

**A2**: Thank you for your suggestion. We indeed mainly focus on the reparameterized form of the Transformer in (2.3). However, given the theoretical nature of our work, we believe that the form of the Transformer in (2.3) covered in our paper is already quite close to practice, compared with most existing theoretical analyses of Transformers. We believe that Assumptions 1-4 are satisfied when we consider:
- The form of (2.3) for the attention layer, with $\sigma_A$ being the column-wise softmax.
- $W_2\sigma_M(W_1H)$ for the MLP layer with the ReLU activation or smooth activations with Lipschitz derivatives.

The first two assumptions are easy to verify by definition. Assumption 3 is straightforward to verify when all activation functions are smooth. For the ReLU activation, Assumption 3 can still hold when the data are generated from a smooth distribution: all the properties in Assumption 3 are in the form of expectations over the data distribution, and such an expectation can have a smoothing effect. For Assumption 4, the partial 1-homogeneity is clearly met in our paper given the partially linear structure of Transformers. Furthermore, since the universal kernel property applies to either the attention or the MLP layer, we can choose the MLP layer with the ReLU activation, whose universal approximation ability has been discussed in [12]. We will add more discussion in the revision.

**Q3**: Theorem 4.1 relies heavily on a few additional assumptions that are not examined carefully and may harm the validity of the statement. It is not clear that a small $\lambda$ can result in a bounded solution. And based on Theorem 4.1, a small $\lambda$ would be required to achieve a small loss.

**A3**: Thank you for acknowledging the common adoption of these assumptions. We agree that a small $\lambda$ may not always result in a bounded solution. We treat the assumption of “the uniform boundedness of $\rho^{(\tau)}$ for a long time $\tau$” as a data-related assumption. Our heuristic thinking is that if the true data distribution $\mu$ is simple, and the true label generation process $\rho^*$ is bounded, then a bounded solution suffices to minimize the loss for such a simple learning task. Likewise, the first and third assumptions are also assumptions on the learning task regarding $\mu$ and $\rho^*$. We believe they should be satisfied when the task is sufficiently simple.
However, like most other papers considering mean-field analysis across multiple network layers, we acknowledge that it is challenging to construct a general class that could be verified to meet these assumptions.

**Q4**: The result does not provide a non-asymptotic bound for the required training time.

**A4**: Thank you for your comment. Similar to our response to Q1, we believe that even establishing an asymptotic convergence bound is a novel and significant finding given the technical complexities of the Transformer models. Establishing more concrete convergence rates is an important direction for future work.

**Q5**: The loss and training dynamics are only considered for the population risk under $\mu$, but the reviewer can accept this as an initial step toward studying the Transformer model.

**A5**: We thank the reviewer for the careful reading and understanding. We consider the extension to finite-sample results with explicit generalization error as the next step of our research.

**Q6**: In Assumption 2, the line "define ReLU'(x) = ..." has no direct connection with the assumption. The authors should separate the statement into 1) what the assumption on $f$ is, and 2) when the assumption would hold. The Fréchet derivative still acts differently from the gradient, so some careful treatment might be needed.

**A6**: We thank the reviewer for the careful reading and the excellent suggestions. We indeed aimed to claim that by using $\text{ReLU}'(x)$ as the gradient of the ReLU function, $f$ defined in (2.3) would satisfy Assumption 2. We will revise the statement following your suggestions. We believe the formula for the Fréchet derivative holds under the ReLU activation (Equation (3.4) and Proposition 3.1), as the Fréchet derivative is more akin to the weak derivative and milder than the standard definition of the derivative. We will add a detailed discussion in the revision.

**Q7**: Some typos.
**A7**: We thank the reviewer again for pointing out the typos, especially with the careful reading of the proof. Line 901 indeed contains an error in the constant term. We will fix them all in the revision.

---

Rebuttal Comment 1.1:

Comment: The reviewer first thanks the authors for their detailed and thoughtful response. The reviewer also appreciates the authors' honesty and transparency in the rebuttal. Unfortunately, the reviewer does not feel that the concerns have been adequately addressed. As a result, the reviewer has decided to change the rating. The reviewer understands how frustrating this can be, especially given the thorough rebuttal provided by the authors in an effort to address all the questions raised. The reviewer struggled with the rating during the initial review phase and provided the rating with the hope that some of these concerns would be resolved. However, after careful and extensive consideration, the reviewer believes that the paper remains incomplete in its current form.

Before delving into specific concerns, the reviewer would like to acknowledge the strengths of the paper, as noted in the initial review. The paper is well-organized and clearly written, and it tackles a challenging and important problem: the theoretical understanding of training Transformer models. The reviewer recognizes that theoretical work, particularly in complex areas like Transformer networks, is inherently difficult. The reviewer is open to compromises, such as accepting reasonable assumptions that may be difficult to verify or acknowledging results with suboptimal dependence on problem parameters, including asymptotic results. However, based on the authors' response, the reviewer believes the following deficiencies are significant:

1. Corollary 4.1 is incorrect and requires major revision.
2. The dependence on training time $\tau$, the depth $L$ and the width $M$ is purely asymptotic.
3.
The paper lacks clear results for the Transformer network (e.g., in the form of (2.3)) or for functions with ReLU activation.

First of all, having a misleading or incorrect conclusion for one of the main claims of the paper is a serious issue, as it undermines the credibility of the rest of the work. This problem also leads to the asymptotic nature of the results, which significantly diminishes the importance of the paper's contributions from the reviewer's perspective. To clarify for those who may be interested, the reviewer believes that the logic behind Theorem 4.1 and Corollary 4.1 is as follows: a specific training time is required to ensure that the limiting dynamics reach a small error $Q(\rho^{(\tau_0)})$, and $L$ and $M$ must be chosen large enough so that the discrete training dynamics remain close to the limiting dynamics within the time $\tau_0$. The lack of control over $\tau_0$ to achieve a small loss leads to the asymptotic nature of the time (with the additional weak convergence assumption), depth, and width. These first two deficiencies alone prevent the reviewer from giving the paper a high score.

However, what is most concerning upon reevaluating the paper is the third point. The paper claims to provide convergence results for large-scale Transformers, yet no specific Transformer model is present in any of the main theorems. A Transformer should be defined in the form of (2.4) or at least in the form of (2.3), rather than as a concatenation of general functions $f$ and $h$. In the rebuttal, the authors used many "believe"s, including the "belief" that (2.3) would satisfy Assumptions 1-4 and the "belief" that the results apply to the ReLU function with its Fréchet derivative. While the reviewer agrees with these beliefs on an intuitive level, the absence of a theorem that includes at least the form in (2.3) or the ReLU function makes these "beliefs" shaky and the work incomplete.
It is somewhat surprising that the authors claim to achieve results for Transformers, yet provide no specific results related to the attention mechanism (2.4) (or its reparameterized form (2.3)) and the ReLU function. This presentation is therefore also misleading, making it difficult to connect the main results in the paper with a Transformer network. Given the rebuttal, it seems that Assumptions 1-4 for (2.3) and the ReLU activation for Proposition 3.1 are checkable, so including these components would form a more complete story for the convergence of Transformers in the asymptotic regime.

---

Rebuttal 2:

Comment: Given the above points, the reviewer considers the paper to be an interesting attempt at tackling important questions. The paper introduces valuable techniques for addressing problems related to Transformer networks and has the potential to become a strong contribution. However, in its current form, the paper is incomplete and would benefit greatly from revision and resubmission to a future venue. The reviewer suggests the following improvements:

1. Please rewrite Corollary 4.1 and other theorems, ensuring that the dependencies for all constants are clearly specified.
2. Prove that Assumptions 1-4 are satisfied for Transformer networks in the form of (2.3). It would be helpful if some parameters in the assumptions, including $K, K_T, K_P, \Phi_T, \Phi_P, \Phi_{PP}, \Phi_{TP}, \Phi_{TT}$, could be represented in more explicit forms with clear bounds.
3. If the authors intend to include the ReLU activation beyond smooth activations, they should demonstrate that the ReLU activation also satisfies Assumptions 1-4 and Proposition 3.1.
4. For the assumptions used in Theorem 4.1, even though it may be difficult to prove when these assumptions hold, it would be beneficial to provide heuristic examples to help readers understand when and why the assumptions are valid.
For instance, the example provided by Reviewer accV in question 1 (Q3 in the rebuttal) offers a good illustration that this boundedness assumption may not generally hold. Therefore, the authors need to make a stronger effort to justify the assumptions. In the current response to that question, phrases like "remains possible" are not convincing enough.

Again, the reviewer understands that theoretical work in this area is challenging and that compromises are often necessary. For example, the weak convergence assumption is difficult to avoid in the mean-field regime, and previous work on ResNet models [Ding et al. 2022] has also shared the same asymptotic nature as in this paper. This is why the reviewer initially leaned toward accepting the paper, hoping that some of these concerns could be addressed. However, as outlined above, the reviewer does consider the work to be incomplete and believes that the paper does not meet the standard for a conference paper due to its lack of direct application to Transformer models, its weak (asymptotic) results, and its reliance on assumptions that are difficult to verify. The reviewer apologizes for the change in rating, noting that this adjustment reflects a reassessment of the initial rating rather than a direct response to the rebuttal.

[Ding, Zhiyan, et al. "Overparameterization of deep ResNet: zero loss and mean-field analysis." Journal of Machine Learning Research 23.48 (2022): 1-65.]

---

Rebuttal Comment 2.1:

Comment: We appreciate your careful feedback. We understand the concerns you have raised regarding Corollary 4.1, the asymptotic nature of our results, and the applicability to Transformer models. However, we would like to argue that the weaknesses you have mentioned are essentially the weaknesses of almost all existing mean-field studies of deep neural networks. In fact, our paper contributes to pushing the common assumptions in the mean-field literature towards more practical scenarios.
Therefore, we respectfully disagree with your comment that our work is incomplete. Corollary 4.1 should not need major revision, as its mathematical formulation and technical proof are accurate. To ensure rigor, it suffices to delete the sentence in line 302, "Here, $L_0$ scales as $\Omega(\epsilon^{-1})$, and $K_0$ as $\Omega((1+\delta)\epsilon^{-2})$", and remove the paragraph around lines 304-306.

Regarding your concern that the dependence on the training time $\tau$, the depth $L$ and the width $M$ is asymptotic, we would like to emphasize again that our result remains, to the best of our knowledge, the state-of-the-art global convergence guarantee for large-scale Transformers. Moreover, we would also like to point out that our work has two main results: (i) connections between discrete Transformers and their continuous limit, and (ii) global convergence guarantees for Transformers. We feel that it is unfair to deny all of our contributions based solely on the fact that our global convergence guarantees for discrete Transformers require asymptotically large $\tau$, $L$ and $M$.

Regarding your concern about the applicability to Transformers, we honestly believe that the Transformer architecture we consider is already among the closest to practical Transformer architectures studied in theoretical works. We will add concrete examples of Transformer architectures and rigorously prove that Assumptions 1-4 hold for these architectures in the camera-ready version. As clarified, these assumptions easily hold for activation functions that are sufficiently smooth. We will also clarify the case of the ReLU activation function, noting that it may not generally satisfy these assumptions, but it is possible for Assumptions 1-4 to hold when the data follow certain favorable distributions.

Once again, we appreciate your detailed and comprehensive feedback. We sincerely hope that you can consider re-evaluating our contributions. Thank you for your time and attention.
Best regards,
Authors

---

Rebuttal 3:

Comment: Thank you very much for acknowledging that the asymptotic assumptions on $L$ and $M$ are common in the mean-field analysis literature. We also appreciate your suggestion on removing $C\_{\lambda}.$ We will follow your suggestion, choose $\lambda \le (2C\_2)^{-1}\epsilon,$ and adjust the proofs accordingly.

Regarding your concern about the concrete application to the Transformer architecture, we would like to clarify that verifying Assumptions 2-4 (please note that Assumption 1 is irrelevant to the Transformer model, and is only a fairly mild assumption on the data) for concrete examples of Transformer architectures with smooth activations is fairly intuitive, and the proof is mainly based on a series of tedious calculations. Below, we give a concrete proposition and its brief proof.

---

Consider $f(Z,\theta)=VZ \mathrm{softmax} (Z^T W Z ) $ $\qquad $ (Eq 1) with the collection of parameters $\theta = \mathrm{vec}[V,W],$ where $\mathrm{softmax}$ denotes the column-wise softmax function. Moreover, consider $h(Z,w)=W\_2\mathrm{HuberizedReLU}(W\_1Z)$ $\qquad $ (Eq 2) with the collection of parameters $w= \mathrm{vec}[W\_1,W\_2].$ Here, $\mathrm{HuberizedReLU}$ denotes the entry-wise HuberizedReLU activation function defined as

$\mathrm{HuberizedReLU}(z) = \left\\{ \begin{aligned} &0, &&\mathrm{if} \ z\leq 0;\\\\ &z^2/2, &&\mathrm{if} \ z\in [0,1];\\\\ &z - 1 / 2, &&\mathrm{if} \ z\geq 1. \end{aligned} \right. $

Then, we can consider a Transformer model defined by equations (2.1), (2.2), and (2.4) in the paper, where the functions $f$ and $h$ are specified above. We suppose that this Transformer model is applied to a learning task with data that satisfy Assumption 1. We have the following proposition.

**Proposition.** Consider the Transformer model defined by equations (2.1), (2.2) and (2.4), with $f(Z,\theta)$ and $h(Z,w)$ defined in (Eq 1) and (Eq 2) respectively. Then Assumptions 2-4 all hold.
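For concreteness, the piecewise HuberizedReLU in (Eq 2) can be written out in a few lines. This is an illustrative sketch of the definition above, not code from the paper or rebuttal:

```python
import numpy as np

def huberized_relu(z):
    """Entry-wise HuberizedReLU: 0 for z <= 0, z^2/2 on [0, 1], z - 1/2 for z >= 1."""
    z = np.asarray(z, dtype=float)
    return np.where(z <= 0, 0.0, np.where(z <= 1, 0.5 * z**2, z - 0.5))

# The three pieces meet continuously (with matching first derivative) at z = 0 and
# z = 1, and |HuberizedReLU(z)| <= |z| everywhere -- the property invoked later when
# verifying Assumption 2(i) for the MLP encoder h.
zs = np.linspace(-2.0, 2.0, 401)
assert np.all(np.abs(huberized_relu(zs)) <= np.abs(zs) + 1e-12)
```

Unlike plain ReLU, this activation is continuously differentiable, which is what makes the smoothness-based verification of Assumption 3 go through directly.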
In our following comments, we will present a proof sketch of the proposition above. Due to the large character count of equations, we split the proof into several separate replies. We apologize for the long comments. To simplify our response, we omit the detailed derivations for the function $h(Z,w)$, which corresponds to the MLP part, in our verification of Assumptions 2 and 3. We feel that the fact that $h(Z,w)$ satisfies Assumptions 2 and 3 is relatively more intuitive, especially given the proofs for $f(Z,\theta)$. We will also omit some calculation details to keep our response reasonably simple. We will make sure that all the omitted details are added to the revised version of our paper.

---

Rebuttal 4:

Comment: Proof. We first introduce some notations. We remind the readers that for a matrix $\mathbf{A}$, we denote by $\\| \mathbf{A} \\|\_2$ the spectral norm of $\mathbf{A}.$ We also denote by $\\| \cdot \\|\_{1-\mathrm{col}}$ the maximum $\ell\_1$-norm across all columns of a matrix. Denote $Z=(z\_1,\dots,z\_{N+1})\in\mathbb{R}^{D\times (N+1)}.$ Then the function $f$ can be rewritten as $f(Z,\theta)=VZ\mathrm{softmax}(Z^TWZ)=(f(Z,\theta)\_{:,i})\_{1\leq i\leq N+1}$, where $f(Z,\theta)\_{:,i}=\sum\_{j=1}^{N+1}P\_{ij}z\_j$ and $P\_{i,:}=\mathrm{softmax}(Z^TWz\_i).$

Next, we calculate the derivatives of $f(Z,\theta)\_{:,i}$ with respect to $Z$ and $\theta$ as follows.

For $Z$: the Jacobian $J\in\mathbb{R}^{(N+1)D\times(N+1)D}$ is $J=(J\_{ij})\_{1\leq i,j\leq N + 1},$ where $J\_{ij}=\frac{\partial f\_{:,i}}{\partial z\_j}\in\mathbb{R}^{D\times D}.$ After calculation, we obtain $J\_{ij}=ZQ\_i\big[E\_{ji}Z^TW+Z^TW^T\delta\_{ij}\big]+P\_{ij}I,$ where $Q\_i:=\mathrm{diag}(P\_{i:})-P^T\_{i:}P\_{i:},$ $E\_{ij}$ is the matrix with zeros everywhere except the $(i,j)$-th entry, and $\delta\_{ij}$ is the Kronecker delta.
For $\theta$: Define $A\_i=Z^TWz\_i.$ After calculation, we have $\nabla\_{\mathrm{vec}[V]} f(Z,\theta)\_{:,i}=\sum\_{j=1}^{N+1}P\_{ij}\mathrm{diag}\Big([B\_{kj}(z\_j)]\_{1\leq k\leq D}\Big),$ $\nabla\_{\mathrm{vec}[W]} f(Z,\theta)\_{:,i}=\sum\_{j=1}^{N+1}P\_{ij} \Big(\mathrm{diag}\Big([z_l^T B\_{kj}( z\_j)]\_{1\leq k\leq D}\Big)[\frac{\partial\mathrm{softmax}(A\_i)}{\partial A\_i}]\_l\Big)_{1\leq l \leq N+1} Vz\_j,$ where $B\_{kj}(z\_j)$ is the $D\times D$ matrix with zeros everywhere except $z\_j$ in the $k$-th row.

---

We then verify the assumptions one by one.

For Assumption 2 (i), we have $ \\| f(T,\theta) \\|\_{2-\mathrm{col}} = \\| VT \mathrm{softmax} (T^T W T ) \\|\_{2-\mathrm{col}} \leq \\| V\\|\_2 \cdot \\| T \\|\_2 \cdot \\| \mathrm{softmax} (T^T W T ) \\|\_{2-\mathrm{col}} \leq \\| \theta \\|\_2 \cdot \\| T \\|\_{2-\mathrm{col}} \cdot \\| \mathrm{softmax} (T^T W T ) \\|\_{1-\mathrm{col}} \leq \\| \theta \\|\_2 \cdot \\| T \\|\_{2-\mathrm{col}},$ where the second-to-last inequality follows from the fact that the $\ell\_2$-norm can be upper bounded by the $\ell\_1$-norm, and the last inequality follows from the fact that each column of the softmax output has an $\ell\_1$-norm equal to one. Therefore, the first condition in Assumption 2 with $K = 1$ is verified for the function $f$ in (Eq 1).

For $h$ in (Eq 2), we have $ \\| h(T,w) \\|\_{2-\mathrm{col}} = \\| W\_2\mathrm{HuberizedReLU}(W\_1T) \\|\_{2-\mathrm{col}} \leq \\| W\_2 \\|\_2 \cdot \\|\mathrm{HuberizedReLU}(W\_1T) \\|\_{2-\mathrm{col}} \leq \\| W\_2 \\|\_2 \cdot \\|W\_1T \\|\_{2-\mathrm{col}} \leq 2 \cdot \\| w \\|\_2^2 \cdot \\|T \\|\_{2-\mathrm{col}}$, where the second inequality follows from the property of HuberizedReLU that $|\mathrm{HuberizedReLU}(x)| \leq |x|.$ This demonstrates that Assumption 2 (i) with $K=1$ holds for $h$ in (Eq 2) as well.
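The chain of inequalities for Assumption 2(i) can be sanity-checked numerically on random data. The sketch below assumes the reparameterized attention $f(Z,\theta)=VZ\,\mathrm{softmax}(Z^TWZ)$ from (Eq 1) with hypothetical sizes; it is an illustration, not code from the rebuttal:

```python
import numpy as np

rng = np.random.default_rng(0)
D, N1 = 4, 6  # token dimension D and number of tokens N+1 (assumed sizes)
Z = rng.standard_normal((D, N1))
V = rng.standard_normal((D, D))
W = rng.standard_normal((D, D))

def colwise_softmax(A):
    E = np.exp(A - A.max(axis=0))  # numerically stabilized, column-wise
    return E / E.sum(axis=0)

P = colwise_softmax(Z.T @ W @ Z)  # every column is a probability vector
f = V @ Z @ P                     # reparameterized attention output

col2 = lambda M: np.linalg.norm(M, axis=0).max()  # max column l2-norm
specV = np.linalg.norm(V, 2)                      # spectral norm of V

# Each column of the softmax output has l1-norm one...
assert np.allclose(P.sum(axis=0), 1.0)
# ...so ||f||_{2-col} <= ||V||_2 * ||Z||_{2-col} * ||softmax||_{1-col}
#                      = ||V||_2 * ||Z||_{2-col}.
assert col2(f) <= specV * col2(Z) + 1e-10
```

Each output column is a convex combination of the input tokens mapped through $V$, which is exactly why the bound carries no dependence on the sequence length.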
For Assumption 2 (ii), we have $\\| \nabla\_{\mathrm{vec}[V]} f(T,\theta)\_{:,i} \\|\_{2} \leq \sum\_{j=1}^{N+1} P\_{ij} \\| \mathrm{diag}\Big([B\_{kj}(T\_j)]\_{1\leq k\leq D}\Big) \\|\_{2} \leq \sum\_{j=1}^{N+1} P\_{ij} \\| T\_j \\|\_{2} \leq \\|T\\|\_{2-\mathrm{col}}. $ Similarly, we have $\\| \nabla\_{\mathrm{vec}[W]} f(T,\theta)\_{:,i} \\|\_{2} \leq \sum\_{j=1}^{N+1} P\_{ij} \Big\\| \Big(\mathrm{diag}\Big([z_l^T B\_{kj}( z\_j)]\_{1\leq k\leq D}\Big)[\frac{\partial\mathrm{softmax}(A\_i)}{\partial A\_i}]\_l\Big)_{1\leq l \leq N+1} VT\_j \Big\\|\_{2} \leq \sum\_{j=1}^{N+1} P\_{ij} \sum\_{l=1}^{N+1}\\|T\_l^T T\_j \\|\_{2}\cdot (\frac{\partial\mathrm{softmax}(A\_i)}{\partial A\_i})\_l \cdot \\|V\\|\_{2} \cdot \\| T\_j \\|\_{2} \leq \\|T\\|\_{2-\mathrm{col}}^3 \\|\theta\\|\_{2}.$ Combining the two equations above gives Assumption 2 (ii) with $\phi\_P( \\|T\\|\_{2-\mathrm{col}} ) = \\|T\\|\_{2-\mathrm{col}} + \\|T\\|\_{2-\mathrm{col}}^3$.

For Assumption 2 (iii), we have $\\| J \\|\_{2}\leq \sqrt{\sum\_{1\leq i,j\leq N+1}\\|J_{ij}\\|^2\_2}\leq (N +1) \max\_{1\leq i,j\leq N+1} \\|J_{ij}\\|\_2.$ For any $1\leq i,j\leq N + 1,$ we have $\\|J_{ij}\\|\_2\leq P_{ij} + \\|T\\|\_2 \\|Q\_i\\|\_2 \\|E\_{ji}T^TW+T^TW^T\delta\_{ij}\\|\_2 \leq 1 + 2 \\|T\\|\_2^2 \\|W\\|\_2 \leq 1 + 2 \\|T\\|\_F^2 \\|\theta\\|\_2. $ Hence, we have $\\| J \\|\_{2}\leq 2N \\|T\\|\_F^2\cdot (1 + \\|\theta\\|\_2).$ The above demonstrates that for $f$, Assumption 2 (iii) holds with $\phi_T ( N, D, \\| T \\|\_F ) = 2N \\|T\\|\_F^2$.

We have verified Assumption 2 for the attention layer encoder $f$. The verification for $h$ is similar and easier. We omit the derivation for $h$ here to shorten our response, but we will make sure to include the full verification in the revised version of the paper.

---

Rebuttal 5:

Comment: Next, we verify Assumption 3.
Given that we are currently considering the example where the encoder employs a smooth univariate activation function, we can prove stronger results by removing the expectation $\mathbb{E}_{\mu}$.

(i) and (iii): Given the calculation of derivatives we have presented above, it suffices to show that $\mathrm{diag}\Big([B\_{kj}(z\_j)]\_{1\leq k\leq D}\Big)$ and $\mathrm{diag}\Big([Z^T B\_{kj}( z\_j)]\_{1\leq k\leq D}\Big)\frac{\partial\mathrm{softmax}(A\_i)}{\partial A\_i}Vz\_j$ are both locally Lipschitz continuous with respect to $Z$ and $\theta.$ Since each of $\mathrm{diag}\Big([B\_{kj}(z\_j)]\_{1\leq k\leq D}\Big),$ $\mathrm{diag}\Big([Z^T B\_{kj}( z\_j)]\_{1\leq k\leq D}\Big),$ $\frac{\partial\mathrm{softmax}(A\_i)}{\partial A\_i}$ and $Vz\_j$ is obviously bounded by an increasing function of $N,D,\\|\theta\\|, K_T,L_T,$ it suffices to show that each of them is locally Lipschitz continuous with respect to both $Z$ and $\theta.$ This is straightforward, as they are all sufficiently smooth.

(ii) and (iv): Because the norm of the difference of two Jacobian matrices $\\|J^1-J^2\\|\_2$ is bounded by $\sqrt{\sum\_{1\leq i,j\leq N+1} \\|J^1\_{ij}-J^2\_{ij}\\|\_2^2},$ it suffices to show that $J\_{ij}$ is locally Lipschitz continuous with respect to both $\theta$ and $Z.$ Again, each component of $J\_{ij}$ that depends on $Z$ or $\theta,$ i.e. $Z,Q_i,W,P\_{ij},$ is bounded by an increasing function of $N,D, K_P,L_T,K_T,\\|\theta\\|,$ and is locally Lipschitz continuous given sufficient smoothness. Hence, (ii) and (iv) also hold.

The proof for the HuberizedReLU MLP encoder is similar to the above and is not conceptually complicated, if not easier. We omit the details here, and save the space for a more detailed discussion of Assumption 4, as we expect that you may be more skeptical about the proof regarding the “universal kernel” assumption in Assumption 4.
---

For Assumption 4, we consider the pair $(g,\alpha) = (h,w),$ and the partition $\alpha = (\alpha\_1,\alpha\_2)$ with $\alpha\_1 = W\_2,$ $\alpha\_2 = W\_1$. We also let the compact set be $\mathcal{K} = \\{ W_1: \\| W_1 \\| \leq 1 \\}$. Then Assumption 4(i) on the partial $1$-homogeneity property straightforwardly holds: $h(T,W\_1,c\cdot W\_2) = c\cdot W\_2\mathrm{HuberizedReLU}(W\_1T)= c\cdot h(T,W\_1,W\_2).$

Regarding Assumption 4(ii) on the universal kernel property, we first note that according to the choice $(g,\alpha) = (h,w)$, this assumption is purely an assumption on the MLP part of the Transformer. Here we give the detailed proof as follows. First of all, according to classic universal approximation theory (see the Wikipedia page on the "universal approximation theorem" and [1,2,3] for more details), we know that two-layer fully-connected networks with non-polynomial activation functions and without any constraints on their parameters are universal approximators. Therefore, we know that the function class $ \mathrm{span} \\{ W\_2\mathrm{ReLU}^2(W\_1T): W_1 \in \mathbb{R}^{\mathrm{dim}(W_1)}, W_2 \in \mathbb{R}^{\mathrm{dim}(W_2)} \\} $ is dense in $\mathcal{C}(\\|T\\|\_{2-\mathrm{col}}\leq B,\mathbb{R}^{D\times(N+1)})$. Moreover, by the definition of HuberizedReLU, for any $B>0$ and any $\hat{W}_1$, $\hat{W}_2$, there exists a small constant $c$ such that $c\cdot \hat{W}_1 \in \mathcal{K}$, $c\cdot \\|\hat{W}_1\\| \leq B^{-1}$, and $ c^{-2} \cdot \hat{W}\_2\mathrm{HuberizedReLU}(c\cdot \hat{W}\_1T) = c^{-2} \cdot \hat{W}\_2\mathrm{ReLU}^2(c\cdot \hat{W}\_1T) = c^2\cdot c^{-2} \cdot \hat{W}\_2\mathrm{ReLU}^2(\hat{W}\_1T) = \hat{W}\_2\mathrm{ReLU}^2(\hat{W}\_1T)$, where the second equation follows from the positive $2$-homogeneity of the $\mathrm{ReLU}^2$ activation.
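The rescaling step above can be checked numerically. With the piecewise definition given earlier ($z^2/2$ on $[0,1]$), $c^{-2}\,\mathrm{HuberizedReLU}(c\,x)$ agrees with $\mathrm{ReLU}(x)^2/2$ once $c$ is small enough that $c\,x \le 1$ on the domain; the extra factor $1/2$ is a constant that can be absorbed into $\hat W_2$ in the span argument. An illustrative check, not code from the rebuttal:

```python
import numpy as np

def huberized_relu(z):
    """Entry-wise HuberizedReLU: 0 for z <= 0, z^2/2 on [0, 1], z - 1/2 for z >= 1."""
    z = np.asarray(z, dtype=float)
    return np.where(z <= 0, 0.0, np.where(z <= 1, 0.5 * z**2, z - 0.5))

x = np.linspace(-3.0, 3.0, 121)
c = 0.1  # small enough that c*x <= 1 on this range, so only the quadratic branch fires

lhs = huberized_relu(c * x) / c**2       # rescaled HuberizedReLU network unit
rhs = 0.5 * np.maximum(x, 0.0) ** 2      # ReLU(x)^2 / 2

assert np.allclose(lhs, rhs)
```

So on any bounded input set, rescaled HuberizedReLU units reproduce squared-ReLU units, which is what transfers the density of the ReLU$^2$ class to the constrained HuberizedReLU class.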
This implies that $ \\{ W\_2\mathrm{ReLU}^2(W\_1T): W_1 \in \mathbb{R}^{\mathrm{dim}(W_1)}, W_2 \in \mathbb{R}^{\mathrm{dim}(W_2)} \\} \subseteq \\{W\_2\mathrm{HuberizedReLU}(W\_1T): (W_2, W_1) \in \mathbb{R}^{\mathrm{dim}(W_2)}\times \mathcal{K}\\} $. Therefore, we conclude that $ \mathrm{span} \\{W\_2\mathrm{HuberizedReLU}(W\_1T): (W_2, W_1) \in \mathbb{R}^{\mathrm{dim}(W_2)}\times \mathcal{K}\\}$ is dense in $\mathcal{C}(\\|T\\|\_{2-\mathrm{col}}\leq B,\mathbb{R}^{D\times(N+1)})$. This finishes the validation of Assumption 4.

---

[1] Funahashi, Ken-Ichi (1989). "On the approximate realization of continuous mappings by neural networks". Neural Networks.

[2] Cybenko, G. (1989). "Approximation by superpositions of a sigmoidal function". Mathematics of Control, Signals, and Systems.

[3] Pinkus, Allan (1999). "Approximation theory of the MLP model in neural networks". Acta Numerica.

---

Rebuttal Comment 5.1:

Comment: We hope that by presenting the details you have requested, we can convince you that our theory can be applied to fairly practical Transformer models, and that the volume of work required should not be an issue. We are committed to adding further explanations and verifications in our revised paper. We sincerely hope that you could take the discussion above into consideration and reevaluate our result. We appreciate your time and effort in reviewing our paper.
FAST: A Dual-tier Few-Shot Learning Paradigm for Whole Slide Image Classification
Accept (poster)
Summary: This paper proposes a few-shot learning approach for WSI (Whole-Slide Image) classification. This approach, built upon Tip-Adapter, leverages a cache branch to memorize the knowledge from few-shot instances and then retrieve label information from the cached knowledge. In addition, a prior branch, which utilizes the knowledge from CLIP and GPT4-V, is built to boost the predictive performance. The experiments on two WSI datasets show the superiority of the proposed method over the original Tip-Adapter and Tip-Adapter-F.

Strengths:
- Originality. This paper explores the few-shot setting in the context of WSI classification, which is under-studied in the field of computational pathology.
- Significance. This work shows that the proposed approach can obtain performance near that of fully supervised learning with only a few labeled instances. Overall, it is an interesting work worthy of investigation in computational pathology, given that labeling WSIs at the pixel level is extremely time-consuming and labor-intensive.
- Quality. This work presents a good experimental design. Its experiments are conducted from different angles to verify the effectiveness of the proposed algorithms.

Weaknesses: In summary, my main concerns lie in (1) writing quality, (2) limited technical contribution, and (3) missing experimental comparisons with important vision-language-based models in computational pathology. Based on these critical weaknesses, it may be hard to recommend accepting this paper to NeurIPS. The details are given below:
- This paper is overall rough and sub-par in writing, requiring substantial improvements in clarity. Some obvious flaws are as follows: i) "slice-label" in line 102, ii) "a efficient annotation strategy" in line 111, iii) undefined V in line 160, iv) some undefined notations in Section 3.2, and v) some citation errors such as [20]. The authors are encouraged to check these errors and improve their presentation for better academic communication.
- The technical novelty is quite limited. Most key designs of the proposed approach have been proposed in Tip-Adapter (Zhang et al., ECCV 2022). Compared to Tip-Adapter, the proposed approach does not present valuable or substantial technical contributions, since its two crucial components, few-shot knowledge retrieval in the cache branch and the prior branch, seem borrowed from Tip-Adapter. - The authors claim that their work differs significantly from Tip-Adapter (line 142) because the key-value cache model built by Tip-Adapter only allows the key to be learnable; in contrast, their approach allows both the key and the value to be learnable. The authors are encouraged to rephrase this sentence, as i) also allowing the value to be learnable should not be called a significant modification from my humble understanding; ii) Tip-Adapter actually has proposed to use a learnable value in the key-value cache model but it leads to collapse during training. Moreover, given that the learnable value leads to collapsed training in Tip-Adapter, could the authors explain to readers why their methods can avoid collapsed training? - Some important vision-language-based models in computational pathology are not cited and compared to the proposed methods, such as **PLIP** (Huang et al., Nature Medicine, 2023) and **CONCH** (Lu et al., Nature Medicine, March 2024). These models show exciting zero-shot performance in WSI classification combined with MI-Zero. The proposed approach should at least show better performance than their zero-shot performance. It could be crucial for justifying the value and significance of this work. Minor issues: - It is suggested to rewrite Section 3.2 to make sure that all notations (those in texts and figures) and the implementation of components are clear and well-explained. - The original Tip-Adapter is proposed for traditional single-instance settings. Its implementation for few-shot WSI classification is not clear. 
Technical Quality: 2 Clarity: 1 Questions for Authors: - Few-shot instances are randomly selected from the core set. Since only a few instances, e.g. 16, are selected from a very large instance pool and pathological patches (instances) often present heterogeneity, the final selected instance set could have a large variety and thus lead to unstable performances, calling into question the usability of the proposed method. This could also be observed from Fig 3 in the paper. So, the authors are encouraged to analyze and discuss the impact of randomly selected few-shot instances. - The original TCGA-RCC is not annotated at the pixel level. This work does a good job in terms of annotating the fine-grained region-of-interest of WSIs. However, if the annotation is not made public, this work would be difficult to follow and be very limited in research impact. The authors are encouraged to make their annotated dataset public. I would like to raise my score if the authors could resolve my concerns & questions above. ---------------------------------------------After Rebuttal------------------------------------------------- My main concerns have been addressed. I am happy to increase my score. Confidence: 5 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: There is no explicit limitation that should be included in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: $\textbf{Q1:}$ This paper is overall rough and sub-par in writing, requiring substantial improvements in clarity. $\textbf{R1:}$ Thank you very much for pointing out these problems. We have revised the above errors as follows: “slice-label” has been corrected to “slide-label”; “a efficient annotation strategy” has been corrected to “an efficient annotation strategy”; the symbol “V” has been corrected to “L”; and a “workshops” flag has been added to the citation information for reference [20]. We will conduct sentence-by-sentence proofreading of the paper before the official publication and will also have professionals polish the language. $\textbf{Q2 and Q3:}$ The technical novelty is quite limited. The authors are encouraged to rephrase this sentence. Moreover, could the authors explain to readers why their methods can avoid collapsed training? $\textbf{R2 and R3:}$ Thanks for your great suggestion on improving the quality of our manuscript. We believe that the second and third comments in the Weaknesses section are closely related, so we have combined them in our response. We consider the proposed method in this paper to be innovative for the following reasons. 1. New Paradigm: For the problem of few-shot classification in pathology images, we propose a novel Dual-tier Few-Shot Learning Paradigm, which not only improves classification accuracy but also reduces annotation costs. 2. Annotation Strategies and Classification Framework: We have introduced a dual-level WSI annotation strategy and a dual-branch classification framework. These two components must work together to achieve excellent WSI classification performance. 3. Addressing Training Issues: As noted by the reviewer, using learnable keys and values in Tip-Adapter can lead to training instability. Therefore, directly applying a learnable Tip-Adapter to WSI classification is not feasible. 
Our method ensures that the cache model does not suffer from instability due to its overall design, including the classification paradigm, annotation strategy, and classification framework. First, we obtain precise labels for some patches through the new annotation strategy. Second, in the cache model we construct, the labels of annotated patches are not optimized; only the labels of unannotated patches are optimized. The labels of annotated patches provide correct guidance for learning the labels of other patches, ensuring that the proposed cache model remains stable. We accept the reviewer’s suggestion “The authors are encouraged to rephrase this sentence.” To clearly state our contributions, we revised “To fully utilize all patches in WSIs, we build a cache model where key and value are learnable, enabling the CLIP to adapt well to the WSI classification task” to “To fully utilize all patches in WSIs, we built a cache model where keys and values are learnable. This effectively facilitates the learning of correct label information for a large number of unlabeled patches, significantly enhancing the performance of CLIP for WSI classification.” At the same time, we rephrased the extent of the differences between our method and Tip-Adapter, revising “Our work is inspired by Tip-Adapter, but it differs significantly from Tip-Adapter.” to “Our work is inspired by Tip-Adapter, but it differs from Tip-Adapter.” $\textbf{Q4:}$Some important vision-language-based models in computational pathology are not cited and compared to the proposed methods, such as PLIP and CONCH. $\textbf{R4:}$ Thanks for your great suggestion on improving the quality of our manuscript. We have added comparison experiments with PLIP and CONCH. We used the same experimental setting as Zero-shot CLIP, then employed the PLIP image encoder and text encoder to obtain Zero-shot PLIP, and used the CONCH image encoder and text encoder to obtain Zero-shot CONCH. 
The experimental results are shown in the following two tables: Table A1 presents the results of the comparison methods with FAST on the CAMELYON16 dataset, and Table A2 shows the results on the TCGA-RENAL dataset. First, from Table A2, we can see that the bag classification AUC of Zero-shot CONCH that we reproduced on the TCGA-RENAL dataset (referred to as TCGA RCC dataset in CONCH) is 91.94%, which significantly exceeds the 90.2% reported in the CONCH original paper. This indicates that our constructed prompt is very effective and that our Zero-shot CLIP serves as an excellent baseline model. This result also demonstrates that our comparative results across multiple datasets are very valid. From Table A1, it can be observed that our method outperforms PLIP and CONCH in all metrics on the CAMELYON16 dataset. From Table A2, it is evident that our method surpasses PLIP in all metrics on the TCGA-RENAL dataset. Compared to CONCH, FAST significantly outperforms CONCH in the average bag-level classification AUC. It is worth noting that our method uses a maximum of only 16 WSIs, while CONCH relies on 1.17 million pairs of pathology images based on CoCa, resulting in a significantly high training cost. Last but not least, our method is orthogonal to studies such as PLIP and CONCH, and they do not conflict with each other. For example, in the “Few-shot classification with task-specific supervised learning” section of CONCH, it is stated: “However, it may still be desirable to specialize the model with labeled training examples to maximize performance for a given task, ideally using as few labels as possible. In this section, we investigate the label efficiency when using the pretrained representation of the image encoder backbone of the visual-language foundation models for task-specific supervised classification.”. 
This indicates that PLIP, CONCH, and similar methods will also need to be combined with few-shot learning methods like FAST proposed in this paper in the future. Therefore, our method can be effectively combined with methods such as PLIP and CONCH to further enhance WSI classification performance. --- Rebuttal 2: Title: Part2: Rebuttal by Authors Comment: $\textbf{Q5:}$ It is suggested to rewrite Section 3.2 to make sure that all notations (those in texts and figures) and the implementation of components are clear and well-explained. $\textbf{R5:}$ Thanks for your great suggestion on improving the quality of our manuscript. Firstly, we have revised the corresponding errors according to the suggestions in $\textbf{Q1}$. Secondly, we have corrected line 180 from “$\tilde{y}^{\text{cache}} = f_{\text{train}} F_{\text{train}}^T Y_{\text{train}}^I$” to “$\tilde{y}^{\text{cache}} = f_{\text{train}} F_{\text{train}}^T {Y_{\text{train}}^{I}}^{\prime\prime}$”. We have also added the definition of $P_{\text{train}}$ as follows. $P_{\text{train}} = [\{ p_{(1, L+1)}, p_{(1, L+2)}, \ldots, p_{(1, K_1)}\}, \{ p_{(2, L+1)}, p_{(2, L+2)}, \ldots, p_{(2, K_2)} \}, \ldots, \{ p_{(i, L+1)}, p_{(i, L+2)}, \ldots, p_{(i, K_i)} \} ]$ represents the pseudo-labels of all unannotated instances in ${{X}_{\text{train}}}^{\prime}$, where $p$ is a learnable high-dimensional vector. $\textbf{Q6:}$ The original Tip-Adapter is proposed for traditional single-instance settings. Its implementation for few-shot WSI classification is not clear. $\textbf{R6:}$ We apologize for the lack of clarity. We will add the implementation details of Tip-Adapter for few-shot WSI classification tasks to the supplementary materials. Additionally, we will open-source the code to facilitate further research by other researchers. The implementation details are as follows. We conducted experiments according to the settings of the optimal model in the original Tip-Adapter paper. 
For aspects that cannot be adapted to the few-shot WSI classification task, we used the following approach. 1. We designed a set of text prompts specifically for pathology images, which proved superior in the CONCH comparison experiments we conducted. 2. We used all annotated patches to build the cache model. $\textbf{Q7:}$ Few-shot instances are randomly selected from the core set. Since only a few instances, e.g. 16, are selected from a very large instance pool and pathological patches (instances) often present heterogeneity, the final selected instance set could have a large variety and thus lead to unstable performances, calling into question the usability of the proposed method. This could also be observed from Fig 3 in the paper. So, the authors are encouraged to analyze and discuss the impact of randomly selected few-shot instances. $\textbf{R7:}$ We apologize for the confusion. Our constructed cache model includes not only a few labeled patches but also a large number of unlabeled patches, making our model relatively stable. Additionally, as shown in Figure 3, when the instance shot reaches 16, the variance becomes relatively small. $\textbf{Q8:}$ The original TCGA-RCC is not annotated at the pixel level. This work does a good job in terms of annotating the fine-grained region-of-interest of WSIs. However, if the annotation is not made public, this work would be difficult to follow and be very limited in research impact. The authors are encouraged to make their annotated dataset public. $\textbf{R8:}$ Thank you very much for your suggestions on our work. To promote progress and development in the community, we will release the relevant datasets for academic research. --- Rebuttal 3: Title: Reply to the Authors' Rebuttal Comment: Thanks for the authors' efforts and responses. 
After carefully reading the replies, I still have the following concerns: - **R2**: I am more concerned with the technical novelty since the proposed framework is incremental over the existing Tip-Adapter. By the way, I do acknowledge the novelty of the two-level few-shot learning paradigm, as I mentioned in the Strengths. - **R4**: Thanks for the experimental results. These results could be helpful to justify the value of this work. However, I cannot agree with the authors' claims made in R4. - "*Compared to CONCH, FAST significantly outperforms CONCH in the average bag-level classification AUC*". Is there any statistical test to verify the significant difference between the AUC of 0.9235 and 0.9141? Or, does this conclusion just come from a personal sense? I think in scientific research the conclusion given by the authors must be rigorous enough. - "*while CONCH relies on 1.17 million pairs of pathology images based on CoCa, resulting in a significantly high training cost.*". CONCH is a foundational model for pathology, just like CLIP. Here, discussing its efficacy and comparing it with the proposed FAST is not appropriate, because i) foundation models generally rely on massive data and large-scale pretraining, and ii) FAST also stands on the shoulder of such foundation models like GPT and CLIP, right? - Additionally, I mentioned a comparison with CONCH. It was intended to encourage the authors to justify the significance and value of this work, not to question the validity of the experiments. Concretely, for example, if the foundational model, CONCH, could achieve an accuracy of 90% in zero-shot settings yet the FAST framework with CONCH only obtains 90.5% in few-shot settings, the improvement would be too marginal to demonstrate the value of the proposed few-shot FAST. - "*Therefore, our method can be effectively combined with methods such as PLIP and CONCH to further enhance WSI classification performance.*". 
I failed to find the experiments on FAST + CONCH and see the improvements, so I think this claim, *i.e.*, it can further enhance WSI classification performance, may not be valid from my point of view. - **R6**: I cannot figure out the authors' implementation for Tip-Adapter in few-shot WSI classification, after carefully reading the authors' instructions. Could the authors please explain more? Thanks. - **R7**: The authors mention that the model is relatively stable because the cache model includes a few labeled patches and a large number of unlabeled patches. It seems not obvious to me, since I just don't understand the logic behind the cause and consequence provided by the authors. --- Rebuttal Comment 3.1: Title: Response to the Remaining Concerns from Reviewer x5Ho Comment: Thank you very much for your rapid response, which is crucial for improving the quality of our manuscript. $\textbf{Response to R2:}$ First, we sincerely appreciate the reviewer’s recognition of our innovation in the dual-tier few-shot learning paradigm, which is meaningful for WSI classification. Additionally, when Tip-Adapter is applied to WSI classification, it faces challenges such as the inability to fully utilize WSI data and the huge size of WSIs. For the former, directly adopting the methods from Tip-Adapter to fully utilize WSI data would lead to instability in training, as mentioned in response to an earlier question. To address this, we proposed a cache model where both labels and features are learnable, with the labels of annotated patches being frozen while those of unannotated patches remain learnable. This effectively alleviates the issue of Tip-Adapter being unable to fully utilize WSI data. For the latter, we introduced a core set construction method that effectively addresses the challenge of training on entire WSIs caused by their huge size. Overall, our model design is inspired by Tip-Adapter, but it is not entirely identical to Tip-Adapter. 
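The frozen-versus-learnable label scheme described in this response can be illustrated with a minimal sketch. All shapes, variable names, and the plain gradient step below are our illustrative assumptions, not the paper's actual implementation; the point is only that masking the gradient on annotated rows keeps their labels fixed while unannotated pseudo-labels remain free to move.

```python
import numpy as np

# Illustrative cache: 6 patch label vectors over 3 classes.
# Rows 0-1 play the role of annotated patches; the rest are unannotated.
rng = np.random.default_rng(0)
num_patches, num_classes = 6, 3
labels = rng.normal(size=(num_patches, num_classes))   # cached label vectors (values)
annotated = np.array([True, True, False, False, False, False])

# One illustrative gradient step: the gradient w.r.t. annotated rows is
# masked out, so their (trusted) labels stay frozen while unannotated
# pseudo-labels are optimized.
grad = rng.normal(size=labels.shape)   # stand-in for d(loss)/d(labels)
grad[annotated] = 0.0                  # freeze annotated labels
labels_updated = labels - 0.1 * grad

assert np.array_equal(labels_updated[annotated], labels[annotated])
```

In a real training loop the same effect is usually achieved by excluding the annotated rows from the optimizer's parameter set; the mask above is just the simplest way to show the invariant.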
$\textbf{Response to R4:}$ We are very grateful for the problems pointed out by the reviewer, as they have been extremely helpful in improving the quality of our manuscript. The last three questions are closely related, so we will answer them together in response 2. 1. In the final version of the paper, we will revise “Compared to CONCH, FAST significantly outperforms CONCH in the average bag-level classification AUC” to “Compared to CONCH, FAST outperformed CONCH by 0.0094 in the average bag-level classification AUC.” 2. First, we would like to provide a brief explanation of PLIP and CONCH. PLIP is a version of CLIP fine-tuned on large-scale pathological data, while CONCH is trained based on the large-scale vision-language model CoCa. Secondly, we apologize for any inconvenience caused. Below, we present the experimental results of our method combined with CONCH. As shown in Table A5, when the bag shot is 1 and the instance shot is 16, FAST-CONCH improves the instance-level AUC by 0.1227 and the bag-level AUC by 0.1485 compared to FAST-CLIP. When the bag shot is 16 and the instance shot is 16, FAST-CONCH improves the instance-level AUC by 0.0615 and the bag-level AUC by 0.1373 compared to FAST-CLIP. CONCH achieved a bag-level classification AUC of 0.7113 on the CAMELYON16 dataset. In comparison, FAST-CONCH improved this by 0.2457. 
$\textbf{Table A5: Results of using CONCH on CAMELYON16 dataset.}$

| Bag Shot | Instance Shot | Methods | Instance-level AUC | Bag-level AUC |
| :---: | :---: | :---: | :---: | :---: |
| 0 | 0 | CONCH | $0.8929$ | $0.7113$ |
| 1 | 16 | FAST-CLIP | $0.8400 \pm 0.0335$ | $0.6933 \pm 0.0846$ |
| 1 | 16 | FAST-CONCH | $0.9627 \pm 0.0132$ | $0.8418 \pm 0.0734$ |
| 2 | 16 | FAST-CLIP | $0.8584 \pm 0.0380$ | $0.7595 \pm 0.0391$ |
| 2 | 16 | FAST-CONCH | $0.9667 \pm 0.0115$ | $0.8399 \pm 0.0556$ |
| 4 | 16 | FAST-CLIP | $0.8864 \pm 0.0563$ | $0.7359 \pm 0.0853$ |
| 4 | 16 | FAST-CONCH | $0.9763 \pm 0.0036$ | $0.9326 \pm 0.0175$ |
| 8 | 16 | FAST-CLIP | $0.9060 \pm 0.0074$ | $0.7742 \pm 0.0249$ |
| 8 | 16 | FAST-CONCH | $0.9792 \pm 0.0024$ | $0.9507 \pm 0.0058$ |
| 16 | 16 | FAST-CLIP | $0.9151 \pm 0.0200$ | $0.8197 \pm 0.0474$ |
| 16 | 16 | FAST-CONCH | $0.9766 \pm 0.0036$ | $0.9570 \pm 0.0053$ |

$\textbf{Response to R6:}$ In WSI classification, we followed the settings from the original Tip-Adapter paper. For aspects that were not directly applicable to the WSI classification task, we made the following adjustments: 1. We designed a specific set of text prompts customized for the WSI classification task. 2. We used all annotated patches to construct the cache model. Apart from these differences, the settings are consistent with those in the Tip-Adapter paper. $\textbf{Response to R7:}$ We sincerely apologize for any confusion we may have caused. We randomly selected a small number of instances from the core set for annotation. The remaining instances in the core set were not annotated but instead were all used in the training of the cached model. During training, the unannotated patches gradually learn the labels used for classification from the annotated patches. Since all patches in the core set are eventually used for model training, our model is relatively stable. The variance observed by the reviewer in Figure 3 comes from the random selection of bags. 
When the number of bags is 1 or 2, the variance is indeed large. We believe this is reasonable, as it is challenging to fit all the data with just one WSI. When the number of bags is greater than or equal to 4, the variance decreases considerably. Therefore, in practical applications, we recommend selecting 4 bags or more. --- Rebuttal 4: Comment: I would like to thank the authors for their efforts. Much of the time during the rebuttal, I felt the communication was inefficient, as many of the responses failed to capture the true meaning of my questions and led to undesirable Q&A. Yet, overall, most of my concerns have been resolved. Thanks for the authors' experiments provided in the rebuttal. I believe the proposed framework could facilitate the study of few-shot WSI analysis. In view of these, I am happy to increase my score. The authors are encouraged to include the important suggestions & questions into the final version of the paper. --- Rebuttal Comment 4.1: Comment: Thank you very much for your suggestions, which have been extremely helpful in improving the quality of our manuscript. We also sincerely appreciate your recognition of our work and the higher score. We will incorporate the aforementioned content in the camera-ready version of the paper.
Summary: To address the challenges of expensive fine-grained annotation and data scarcity encountered in the clinical application of deep learning-based WSI classification methods, this paper proposes a novel and efficient dual-tier few-shot learning paradigm named FAST. Under this new paradigm, the authors introduce a dual-level annotation strategy that includes bag-level few-shot and instance-level few-shot, modeling the WSI classification problem as a new few-shot classification problem. Building on this, the authors further propose a classification framework composed of a learnable image cache branch and a CLIP prior knowledge branch, fully leveraging the value of the limited data and labels. Extensive experimental results show significant improvement over other few-shot methods in both binary and multi-class classification tasks. Interestingly, the proposed method FAST achieves performance close to fully supervised methods with only 0.22% of the annotation cost. This showcases its efficiency and great potential for practical applications. Strengths: 1. The paper is well written and easy to read. The authors intuitively demonstrate their methods and contributions through numerous figures and tables. Extensive comparative and ablation experiments illustrate the efficiency and generality of the proposed method. To ensure fairness and prevent randomness, the authors conducted multiple random experiments in their study. The results show significantly better performance in both bag-level and instance-level classification compared to other methods. 2. The proposed dual-level WSI annotation strategy is a highly innovative and suitable method for WSI data annotation. It addresses the issues of single-level annotation and provides patch-level supervisory information at a cost close to slide-level annotation. 
Compared to fully supervised methods, the proposed method has significantly lower annotation costs, astonishingly reaching as low as one-thousandth or even one-ten-thousandth. 3. This paper is inspired by Tip-Adapter and proposes a learnable cache branch where both labels and image features are learnable. The final classification framework further integrates a CLIP prior knowledge branch incorporating GPT-4V. Comparative experiments show that this method achieves performance close to fully supervised methods with only 0.2% of the data annotation cost, which is an exciting advancement. Ablation experiments also demonstrate the importance of the proposed components. Weaknesses: 1. The function $\phi(\cdot)$ in Figure 2 is not mentioned or explained in the paper, which may confuse readers. 2. The authors conducted extensive comparisons in terms of accuracy and annotation cost but lacked analysis of time and memory consumption. 3. In Section 3.2, the authors mention obtaining the optimal fusion weight $\alpha$ through grid search but lack specific details in the paper. Technical Quality: 3 Clarity: 3 Questions for Authors: Will the authors open source the relevant code and all model weights for this project? For other issues, please refer to the weaknesses. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: In the conclusion section, the authors acknowledge the limitations of their proposed method. I agree that such limitations exist and look forward to future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: $\textbf{Q1:}$ The function $\phi(\cdot)$ in Figure 2 is not mentioned or explained in the paper, which may confuse readers. $\textbf{R1:}$ We apologize for the omission. We have added a description of the function $\phi(\cdot)$ in Section 3.2, revising “The retrieval result is $(\dot{Q}\dot{K}^T)\dot{V}$” to “The retrieval result is $\phi(\dot{Q}\dot{K}^T)\dot{V}$, where $\phi(\cdot) = \text{softmax}(\cdot)$.” $\textbf{Q2:}$ The authors conducted extensive comparisons in terms of accuracy and annotation cost but lacked analysis of time and memory consumption. $\textbf{R2:}$ Thanks for your great suggestion on improving the quality of our manuscript. We conducted experiments on training time and memory consumption for the scenario with an instance shot of 16 and a bag shot of 16 using an NVIDIA RTX 3090. The results are shown in Table A3. Our method achieves performance close to fully supervised methods with a training time of only 0.21 hours and a memory usage of just 5.12 GB. In comparison, training large pathology models like CONCH requires 8 NVIDIA A100 GPUs, highlighting a significant advantage of our method.

$\textbf{Table A3: Analysis of time and memory consumption}$

| Metric | Time (h) | Memory (GB) |
| :---: | :---: | :---: |
| FAST | 0.21 | 5.12 |

$\textbf{Q3:}$ In Section 3.2, the authors mention obtaining the optimal fusion weight $\alpha$ through grid search but lack specific details in the paper. $\textbf{R3:}$ Thanks for your great suggestion on improving the quality of our manuscript. We have added the following description in Section 3.2. We divide the fusion weight $\alpha$ into equal intervals with a step size of 100, then sequentially calculate the classification accuracy for each fusion ratio, and finally select the fusion weight that yields the highest classification accuracy as the fusion weight $\alpha$ for this task. 
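The two mechanisms clarified in R1 and R3 — softmax-weighted cache retrieval and a grid search over the fusion weight — can be sketched together in a minimal form. All shapes, the 0-to-1 search range, and the convex-combination fusion convention below are our illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cache_retrieve(Q, K, V):
    """Retrieval result phi(Q K^T) V with phi = softmax, as stated in R1."""
    return softmax(Q @ K.T) @ V

def grid_search_alpha(logits_a, logits_b, labels, num_steps=100):
    """Pick the fusion weight alpha maximizing accuracy of the fused logits
    alpha * logits_a + (1 - alpha) * logits_b (illustrative convention)."""
    best_alpha, best_acc = 0.0, -1.0
    for alpha in np.linspace(0.0, 1.0, num_steps + 1):
        fused = alpha * logits_a + (1.0 - alpha) * logits_b
        acc = float((fused.argmax(axis=1) == labels).mean())
        if acc > best_acc:
            best_alpha, best_acc = alpha, acc
    return best_alpha, best_acc

# Retrieval: 2 query features against a 5-entry cache with 3-class label values.
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 8))
K = rng.normal(size=(5, 8))
V = softmax(rng.normal(size=(5, 3)))   # each value row sums to 1
y_cache = cache_retrieve(Q, K, V)      # each output row is a convex mix of V's rows

# Fusion: toy logits where branch A is reliable and branch B is not,
# so the grid search should favor alpha > 0.5.
labels = np.array([0, 1, 2, 0])
logits_a = np.eye(3)[labels] * 5.0
logits_b = np.roll(np.eye(3)[labels], 1, axis=1) * 5.0
alpha, acc = grid_search_alpha(logits_a, logits_b, labels)
```

Because the softmax weights and the value rows each sum to one, every retrieved row is itself a valid label distribution; the grid search is a plain exhaustive sweep, matching the "calculate accuracy for each fusion ratio, keep the best" description in R3.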
$\textbf{Q4:}$ Will the authors open source the relevant code and all model weights for this project? $\textbf{R4:}$ Thank you very much for your recognition of our work. We will open-source all related code and model weights to promote further development in WSI classification research. --- Rebuttal Comment 1.1: Title: Thank you for your rebuttal, which clearly addressed my concerns. Comment: Thank you for your rebuttal, which clearly addressed my concerns. I have read other reviewers' comments and the author's rebuttal, particularly the comparison with vision-language-based models in computational pathology. The experimental results and the authors’ responses clearly and effectively demonstrate the contribution of this paper. Overall, the authors' rebuttal resolves my concerns, and their answers to other reviewers' questions also seem reasonable to me. I think this paper is highly valuable for advancing computational pathology. Thus, I increase my rating to strong accept. --- Reply to Comment 1.1.1: Comment: Thank you very much for your suggestions, which have been extremely helpful in improving the quality of our manuscript. We also sincerely appreciate your recognition of our work and the higher score. We will incorporate the aforementioned content in the camera-ready version of the paper.
Summary: In this article, the authors propose a novel few-shot learning paradigm for WSI classification. This paradigm is based on two branches: the first is a learnable cache model that utilizes both labeled and unlabeled instance data, and the second, the Prior Branch, leverages the prior knowledge of a pre-trained CLIP model. By combining these two branches, efficient few-shot learning is achieved, and extensive experiments have been conducted on the CAMELYON16 dataset and the TCGA-RENAL dataset. Strengths: 1. Few-shot learning is inherently important in the field of WSI classification. The authors have proposed a new few-shot learning paradigm tailored for WSI classification and have achieved notable results. 2. The experiments on few-shot learning are quite comprehensive, thoroughly comparing the effects of different numbers of instances and bags. Weaknesses: 1. The study lacks experiments with V-L models specific to the pathology field. Since CLIP is not originally based on pathology images, the authors should include comparisons using PLIP [1] and CONCH [2]. 2. The few-shot learning method proposed by the authors operates at both the instance-level and bag-level. Therefore, the comparative methods should include both instance-based methods and bag-level methods (multi-instance learning). However, the fully supervised methods chosen for comparison are only instance-based. The authors should supplement their comparisons with bag-level methods based on multi-instance learning, such as R2T [3]. 3. Comparing the third and fourth rows in Table 3 of the paper reveals that adding the Prior Branch on top of existing components brings almost no improvement. However, it requires first processing through GPT and then the Text-encoder, significantly increasing the cost without enhancing performance. [1] PLIP: A visual–language foundation model for pathology image analysis using medical Twitter. 
Nature Medicine 2023 [2] CONCH: A Vision-Language Foundation Model for Computational Pathology. Nature Medicine 2024 [3] Feature Re-Embedding: Towards Foundation Model-Level Performance in Computational Pathology. CVPR 2024 Technical Quality: 3 Clarity: 3 Questions for Authors: What is the difference between the proposed work and previous few-shot WSI classification methods like [4]? [4] The rise of ai language pathologists: Exploring two-level prompt learning for few-shot weakly-supervised whole slide image classification. NeurIPS 2023 Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors only tested up to 16 instances and bags, but there is still a significant performance improvement from 8 to 16. I am curious at what data ratio in few-shot learning the model will begin to overfit. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: $\textbf{Q1:}$ The study lacks experiments with V-L models specific to the pathology field. Since CLIP is not originally based on pathology images, the authors should include comparisons using PLIP [1] and CONCH [2]. $\textbf{R1:}$ Thanks for your great suggestion on improving the quality of our manuscript. Due to the huge size of WSIs, extracting features for the entire dataset using foundational models such as CLIP, PLIP, and CONCH requires a significant amount of time. We have not yet completed the extraction of features encoded using PLIP and CONCH. Additionally, many function wrappers in PLIP and CONCH are different from those in CLIP. Therefore, we need more time to build models using PLIP and CONCH. Once we obtain the latest experimental results, we will include these results in our subsequent replies. We have now completed the extraction of all features for the test set and have conducted some zero-shot classification experiments. Please refer to Table A1 and Table A2 in the PDF file for the experimental results. $\textbf{Q2:}$ The few-shot learning method proposed by the authors operates at both the instance-level and bag-level. Therefore, the comparative methods should include both instance-based methods and bag-level methods (multi-instance learning). However, the fully supervised methods chosen for comparison are only instance-based. The authors should supplement their comparisons with bag-level methods based on multi-instance learning, such as R2T [3]. $\textbf{R2:}$ Thank you very much for your valuable suggestion. We did not compare bag-level methods based on multi-instance learning for the following reasons. First, instance-level fully supervised methods represent the upper bound of supervised learning classification results. Bag-level methods generally perform worse than instance-level fully supervised methods due to the lack of precise fine-grained labels. 
Second, we found it challenging to accurately reproduce the results of R2T within the limited time available. For these reasons, we did not include this comparison experiment. However, we have added an explanation in Section 4.2, “Comparing Methods and Evaluation Metrics,” about why we did not compare with “bag-level methods,” and we have also included references to relevant methods in the related work section. The specific content added is as follows. “Instance-level fully supervised methods represent the upper bound of supervised learning. Bag-level weakly supervised multi-instance learning methods, such as R2T, generally perform worse than instance-level fully supervised methods due to the lack of fine-grained labels. Therefore, this paper does not directly compare with multi-instance learning methods.” $\textbf{Q3:}$ Comparing the third and fourth rows in Table 3 of the paper reveals that adding the Prior Branch on top of existing components brings almost no improvement. However, it requires first processing through GPT and then the Text-encoder, significantly increasing the cost without enhancing performance. $\textbf{R3:}$ We apologize for any inconvenience brought to you. If we only compare the third and fourth rows of Table 3, it might indeed seem that way. However, we found that these results in Table 3 are due to the experiments being conducted under the 16-bag shot setting. To fully demonstrate the role of the prior branch, we conducted further experiments, and the related results and analysis can be found in the main text, Figure 4, and line 288. For convenience, we have included some key conclusions here: “When there are only 1 or 2 bags, the instance classification results of the prior branch are significantly higher than those of the cache branch. 
The instance and bag classification results that combine both the cache and prior branches also surpass those of using each branch separately, indicating that the prior branch performs better in extreme samples, and the information learned by the prior branch and the cache branch is complementary. Therefore, in extreme few-shot scenarios, FAST is dominated by the prior branch, but as the sample size gradually increases, FAST is dominated by the image branch.” $\textbf{Q4:}$ What is the difference between the proposed work and previous few-shot WSI classification methods like [4]? [4] The rise of ai language pathologists: Exploring two-level prompt learning for few-shot weakly-supervised whole slide image classification. NeurIPS 2023 $\textbf{R4:}$ Our method differs significantly from Top in the following ways. 1. Different Scenarios: Top is a few-shot learning method under slide-level labels. While it also reduces annotation costs, it lacks precise patch-level label information. In contrast, we propose a dual few-shot learning scenario tailored for WSIs, which not only provides patch-level label information but also significantly reduces annotation costs. 2. Different Technical Approaches: The focus of Top’s research is on designing better text prompt strategies to serve CLIP. In contrast, our method introduces new approaches from both the image cache branch and the text prior branch to fully utilize existing annotation information. $\textbf{Q5:}$ The authors only tested up to 16 instances and bags, but there is still a significant performance improvement from 8 to 16. I am curious at what data ratio in few-shot learning the model will begin to overfit. $\textbf{R5:}$ We conducted experiments with 64 instances in the supplementary materials, and the results are shown in Figure 6. From Figure 6, it can be observed that as the instance shot increases, the classification accuracy also gradually improves.
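The cache-and-prior-branch combination discussed above follows the general Tip-Adapter pattern that this line of work builds on. The sketch below is a generic illustration of that pattern with hypothetical shapes and fusion weights (`alpha`, `beta`), not FAST's actual implementation:

```python
import numpy as np

def fuse_branches(feat, cache_keys, cache_values, text_weights,
                  alpha=1.0, beta=5.5):
    """Tip-Adapter-style fusion of a cache branch and a text prior branch.

    feat:         (d,) L2-normalized query patch embedding
    cache_keys:   (n, d) L2-normalized few-shot patch embeddings
    cache_values: (n, c) one-hot labels of the cached patches
    text_weights: (c, d) L2-normalized class text embeddings
    alpha, beta:  hypothetical fusion / sharpness hyperparameters
    """
    # Cache branch: affinity-weighted vote over the few-shot labels.
    affinity = np.exp(-beta * (1.0 - cache_keys @ feat))  # (n,)
    cache_logits = affinity @ cache_values                # (c,)
    # Prior branch: zero-shot similarity to the class text embeddings.
    prior_logits = text_weights @ feat                    # (c,)
    return prior_logits + alpha * cache_logits
```

With few shots the prior (text) term dominates; as more labeled patches fill the cache, the cache term carries more weight, which matches the qualitative behavior the authors describe.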
When the number of shots reaches 64, the rate of accuracy increase slows down. Therefore, considering both training costs and performance benefits, we recommend using an instance shot of 16 or 64 in practical applications. --- Rebuttal 2: Title: Experiments with V-L models specific to the pathology field Comment: Thank you very much for your valuable suggestions. We have implemented FAST using the vision-language model CONCH from the pathology field and conducted experiments on the CAMELYON16 dataset. The experimental results are shown in Table A4. We define the method using CLIP as the feature extractor as FAST-CLIP, and the method using CONCH as the feature extractor as FAST-CONCH. As shown in Table A4, compared to using CLIP as the feature extractor, using CONCH significantly improves the classification performance of FAST. Notably, the bag-level classification AUC can reach 0.957. This indicates that our method not only integrates well with V-L models like CLIP in natural images for WSI classification but also enhances the classification performance of V-L models in pathology. 
$\textbf{Table A4: Results of using CONCH on CAMELYON16 dataset.}$

| Bag Shot | Instance Shot | Methods | Instance-level AUC | Bag-level AUC |
| :---: | :---: | :---: | :---: | :---: |
| 1 | 16 | FAST-CLIP | $0.8400 \pm 0.0335$ | $0.6933 \pm 0.0846$ |
| 1 | 16 | FAST-CONCH | $0.9627 \pm 0.0132$ | $0.8418 \pm 0.0734$ |
| 2 | 16 | FAST-CLIP | $0.8584 \pm 0.0380$ | $0.7595 \pm 0.0391$ |
| 2 | 16 | FAST-CONCH | $0.9667 \pm 0.0115$ | $0.8399 \pm 0.0556$ |
| 4 | 16 | FAST-CLIP | $0.8864 \pm 0.0563$ | $0.7359 \pm 0.0853$ |
| 4 | 16 | FAST-CONCH | $0.9763 \pm 0.0036$ | $0.9326 \pm 0.0175$ |
| 8 | 16 | FAST-CLIP | $0.9060 \pm 0.0074$ | $0.7742 \pm 0.0249$ |
| 8 | 16 | FAST-CONCH | $0.9792 \pm 0.0024$ | $0.9507 \pm 0.0058$ |
| 16 | 16 | FAST-CLIP | $0.9151 \pm 0.0200$ | $0.8197 \pm 0.0474$ |
| 16 | 16 | FAST-CONCH | $0.9766 \pm 0.0036$ | $0.9570 \pm 0.0053$ |

--- Rebuttal Comment 2.1: Comment: Thanks for the authors' response and further experiments. My questions are addressed, and I will raise the score. --- Reply to Comment 2.1.1: Comment: Thank you very much for your suggestions, which have greatly helped improve the quality of our manuscript. We sincerely appreciate your recognition of our work and the higher score. We will include the aforementioned content in the camera-ready version of the paper.
Summary: This paper investigates the issue of Whole Slide Image (WSI) classification, a study with practical value. It proposes a new working paradigm that improves on Tip-Adapter. Theoretically, this new paradigm can effectively address the problem and has strong scalability. Strengths: This study has practical significance, and the proposed method demonstrates strong scalability. The paper is clearly written and easy to understand, with comprehensive experiments. Weaknesses: 1. This paper lacks some important related work. The proposed method is based on Tip-Adapter. However, there are many improvements based on Tip-Adapter, such as [1-4]. I think the experiments should include comparisons with these related methods, or at the very least, mention and briefly analyze them. [1] Collaborative Consortium of Foundation Models for Open-World Few-Shot Learning. AAAI, 2024. [2] Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners. CVPR, 2023. [3] DeIL: Direct-and-Inverse CLIP for Open-World Few-Shot Learning. CVPR, 2024. [4] Not All Features Matter: Enhancing Few-shot CLIP with Adaptive Prior Refinement. ICCV, 2023. 2. In Figure 1, the blue and red boxes should be explained. Additionally, if space permits, I suggest that in future work, the authors could add more detailed descriptions in the caption. In this way, readers can understand the general idea of the method just by looking at the figure and caption, without having to spend effort finding the corresponding description in the main text. 3. For instance-shot, only the results of 16-shot are shown, without the results of 1-shot, 2-shot, etc. 4. Line 160 seems to have a typo; it should be y_1L instead of y_1V. Technical Quality: 4 Clarity: 4 Questions for Authors: see weaknesses Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The authors didn't present the limitations.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: $\textbf{Q1:}$ This paper lacks some important related work. The proposed method is based on Tip-Adapter. However, there are many improvements based on Tip-Adapter, such as [1-4]. I think the experiments should include comparisons with these related methods, or at the very least, mention and briefly analyze them. $\textbf{R1:}$ Thanks for your great suggestion on improving the quality of our manuscript. Our method is orthogonal to these four studies, and they do not conflict with each other. Additionally, the four studies mentioned above mainly focus on natural images and would face similar challenges as Tip-Adapter when directly applied to WSIs. Combining our method with these four methods has great potential to further improve WSI classification accuracy. For these reasons, we did not compare them directly through experiments, but we analyzed and summarized these four important studies in the related work section. The specific content added in the related work section is as follows. In the field of natural images, many subsequent works based on Tip-Adapter have also made significant contributions to the development of foundation model adaptation. For example, CaFo [2] effectively combines the different prior knowledge of various pre-trained models by cascading multiple foundation models. CO3 [1] goes a step further by considering both general and open-world scenarios, designing a text-guided fusion adapter to reduce the impact of noisy labels. Similarly, for open-world few-shot learning, DeIL [3] proposes filtering out less probable categories through inverse probability prediction, significantly improving performance. APE [4] proposes an adaptive prior refinement method that significantly enhances computational efficiency while ensuring high-precision classification performance. Due to the huge size and the lack of pixel-level annotations, these methods cannot effectively solve the classification problem of WSIs.
$\textbf{Q2:}$ In Figure 1, the blue and red boxes should be explained. Additionally, if space permits, I suggest that in future work, the authors could add more detailed descriptions in the caption. In this way, readers can understand the general idea of the method just by looking at the figure and caption, without having to spend effort finding the corresponding description in the main text. $\textbf{R2:}$ Thanks very much for pointing out the problem. We have added the following description below Figure 1. Figure 1: Different few-shot learning paradigms for WSI classification. (a) The instance few-shot method divides all WSIs into a series of patches, then selects a few samples at the patch level and annotates them at the patch level. The red box represents positive samples, and the blue box represents negative samples. (b) The bag few-shot method directly selects a few WSIs at the slide level and annotates them weakly at the slide level. (c) Our method first selects a few WSIs at the slide level, then annotates a few patches for each selected WSI. Compared to (a) and (b), our method significantly reduces annotation costs while providing patch-level supervision information. $\textbf{Q3:}$ For instance-shot, only the results of 16-shot are shown, without the results of 1-shot, 2-shot, etc. $\textbf{R3:}$ We apologize for any inconvenience brought to you. Due to space limitations in the main text, we did not present the experimental results for different shot settings. Instead, we included these results on page 22 of the supplementary materials. In Figure 6, we show the results for 4-shot, 16-shot, and 64-shot settings. From Figure 6, it can be observed that the classification accuracy gradually increases with the number of shots. When the number of shots reaches 64, the accuracy increase nearly converges. Therefore, we recommend using 16-shot or 64-shot in practical applications. We have also added experimental results for the 1-shot and 2-shot settings.
The results are shown in Figure A1, which we have uploaded in the PDF file. When the number of shots decreases to 1-shot or 2-shot, the accuracy decreases, with the lower limit being the result of Zero-shot CLIP. $\textbf{Q4:}$ Line 160 seems to have a typo; it should be $y_{1,L}$ instead of $y_{1,V}$. $\textbf{R4:}$ Thanks very much for pointing out the problem. The correct term here should indeed be $y_{1,L}$. We have corrected this typo in the manuscript. --- Rebuttal Comment 1.1: Title: Response to the Authors Comment: Thanks for the authors' responses. The authors have resolved my questions, and I agree to accept this paper. --- Reply to Comment 1.1.1: Comment: Thank you very much for your suggestions, which have been extremely helpful in improving the quality of our manuscript. We also sincerely appreciate your recognition of our work and the higher score. We will incorporate the aforementioned content in the camera-ready version of the paper.
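The Zero-shot CLIP lower bound mentioned above is conventionally computed by cosine similarity between image embeddings and class-prompt text embeddings, followed by an argmax over classes. A minimal sketch with placeholder embeddings (not CLIP outputs and not the authors' code):

```python
import numpy as np

def zero_shot_classify(image_embs, class_text_embs):
    """Zero-shot CLIP-style classification: L2-normalize both sets of
    embeddings, score by cosine similarity, and pick the best class."""
    img = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    txt = class_text_embs / np.linalg.norm(class_text_embs, axis=1, keepdims=True)
    logits = img @ txt.T  # (num_images, num_classes)
    return logits.argmax(axis=1)
```

In practice the text embeddings come from prompts such as "a photo of {class} tissue" encoded by the text encoder; the image embeddings come from the image encoder applied to each patch.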
Rebuttal 1: Rebuttal: We sincerely thank all reviewers for your valuable comments. These comments have greatly helped improve the quality of our manuscript. Next, we will reply to the questions raised by each reviewer individually. The figures and tables mentioned in our replies have all been uploaded in a single PDF file. Pdf: /pdf/3ca60791857400f0760a84fe340b36d18f1f1a09.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
UDPM: Upsampling Diffusion Probabilistic Models
Accept (poster)
Summary: This paper introduces a novel generative model called the Upsampling Diffusion Probabilistic Model (UDPM). UDPM aims to decrease the number of diffusion steps needed to generate high-quality images, resulting in significantly improved efficiency compared to previous methods. Strengths: 1. This paper is well-written, and the organization is great. 2. The motivation is clear enough. Weaknesses: 1. Some symbols are not fully explained when they are used for the first time. 2. The datasets might not be sufficient to validate the effectiveness of your method. 3. The compared methods are relatively out-of-date. 4. The comparison metric is only FID. 5. Some commas and labels in several equations are missing. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. In ColdDiffusion, blur and noise can be utilized to train a diffusion model. In your method, downsampling and noise are utilized at the same time to train a diffusion model. Please re-clarify your main contribution except for this. 2. Please re-clarify the details for your network to handle images with different resolutions. 3. Can you explain how you balance the weights of L{simple}, L{per}, and L{adv}? Please provide more ablation studies. 4. Please compare the generation speed with other methods that speed up DDPM. 5. Please provide more details about your network. 6. How did the authors ensure that 3 steps will obtain the best performance? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors have addressed the limitations and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the valuable comments. Our response is detailed below. **Weaknesses**: 1) Will be fixed in the revised version. We thank the reviewer for the effort. 2) Although the three datasets we examined UDPM on are diverse (with CIFAR10 and AFHQv2 being multi-class datasets), we additionally trained UDPM on the LSUN horses dataset at 128x128 resolution, as can be seen in the attached rebuttal PDF file. 3) The methods shown in the paper are state-of-the-art accelerated diffusion-based generative models. We will be glad to compare our method to other works we might have missed. There is a recent class of generative models derived from diffusion named consistency models [1], which can generate images with a single denoising step with an FID score of 8.70 on CIFAR10, which is still inferior to UDPM in both generation quality and efficiency. Note that we do not compare UDPM to distillation-based approaches, and specifically to the distillation version of the consistency models, as distillation is complementary to our technique. Thus, we compare only to direct training. Note also that even if distillation-based approaches are taken into account, they require at least a single denoising step, leading to much longer runtimes ($\sim300\\%$) than UDPM. 4) Following the reviewer's comment, we have the following inception score comparison: Denoising student: 8.36 TDPM: 8.65 UDPM: **9.01** This shows that UDPM outperforms current SOTA efficient diffusion models also in the inception score measure. In the revised version of the paper we will make sure to add this comparison to Table 1. 5) We will make sure to polish the paper in the revised version. **Questions**: 1) Indeed, in ColdDiffusion they propose multiple approaches for defining the forward diffusion process. However, ColdDiffusion does not address the existence of the reverse diffusion process in their formulation.
While in UDPM, Lemma 1 allows explicit access to the reverse process defined in equations (10-12). 2) UDPM needs a network architecture that upsamples its input while being aligned with other DDPM works to allow direct comparison. Therefore we use the popular SongUNet [2] used in many diffusion works, while increasing the number of output channels from $3$ to $3\times \gamma^2$, followed by a depth-to-space layer that upsamples the output by rearranging the pixels (https://pytorch.org/docs/stable/generated/torch.nn.PixelShuffle.html). This architecture makes minimal changes to the network, ensuring minimal additional latency over the baseline, which is reflected in the runtimes we measured. 3) The practical objective function we use is built from three different terms: $\ell_1$: Promotes high fidelity and agreement between the diffusion steps. $\ell_{per}$: Guides the network to more perceptually pleasing estimations. $\ell_{adv}$: A complementary term to $\ell_{per}$ that makes sure the reconstructed variable statistics match the true ones. We found the combination of the three terms above very crucial for getting sharp and detailed generations. For instance, when using only $\ell_1$ to train UDPM on CIFAR10 we got an FID score of $\sim 60$, and when we added $\ell_{per}$ it reached $\sim 30$. In the attached PDF file we show the effect of each loss term on the generation results. We will make sure this ablation study is presented in the revised version of the paper. 4) For the completeness of the comparison, below we report the average runtimes on CIFAR10 of UDPM with 3 steps, and DDPM with a single denoising step when a similar network is used. Benchmarked on a single NVIDIA RTX A6000 GPU and averaged over 100K image generations to eliminate any unwanted overhead: UDPM: 2008.21 FPS DDPM: 735.68 FPS Speedup = 2.73x The runtimes will be added to Table 1 in the revised version.
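The depth-to-space rearrangement described in point 2 (what `torch.nn.PixelShuffle` implements) can be sketched in NumPy as follows; `r` plays the role of $\gamma$, and this is an illustration of the operation rather than the authors' network code:

```python
import numpy as np

def depth_to_space(x, r):
    """NumPy sketch of torch.nn.PixelShuffle: rearrange a (C*r^2, H, W)
    tensor into (C, H*r, W*r), matching PixelShuffle's channel ordering:
    output[c, h*r+i, w*r+j] = input[c*r*r + i*r + j, h, w]."""
    cr2, h, w = x.shape
    c = cr2 // (r * r)
    x = x.reshape(c, r, r, h, w)    # split channels into (C, r, r)
    x = x.transpose(0, 3, 1, 4, 2)  # interleave: (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)
```

This is why widening the UNet output from $3$ to $3\times\gamma^2$ channels suffices: the extra channels are simply rearranged spatially, adding almost no compute on top of the baseline network.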
5) We use the popular SongUNet used in many diffusion works, particularly the implementation of EDM [3] (https://github.com/NVlabs/edm/). The specific hyperparameters used are detailed in Table 4 in the supplementary. 6) The number of steps is determined by the smallest noise resolution you want to start with. Then everything is fixed. From our experiments, we found that additional diffusion steps beyond 3 did not benefit the generation quality, hence we use 3 diffusion steps. We will make sure to clarify this in the revised version. [1] Consistency models by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. [2] Score-Based Generative Modeling through Stochastic Differential Equations [3] Elucidating the Design Space of Diffusion-Based Generative Models (EDM) Given the substantial improvements made in response to your feedback, we kindly request you to reconsider the score you initially assigned to our submission. We believe that the revised version of our paper now better aligns with the high standards of the NeurIPS conference. --- Rebuttal Comment 1.1: Title: Review response Comment: I've thoroughly reviewed the authors' responses and appreciate their thoughtful engagement. I will stay in touch for further discussion as we approach the final rating. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you very much for your thoughtful feedback and for acknowledging that we have addressed the concerns you raised in your reviews. We appreciate the time and effort you put into evaluating our work and the constructive comments you provided. If there are any additional questions or if further clarification is needed, please feel free to let us know. We are happy to provide any further information, and we hope you will consider re-evaluating your score, as for acceptance in NeurIPS the score needs to be around 6. Thank you once again for your valuable input. Best regards, The Authors
Summary: The paper discusses Gaussian diffusion modeling at different dimensionality by incorporating downsampling in the forward process. As a solution, the authors propose a new model called the Upsampling Diffusion Probabilistic Model (UDPM), which reduces the latent variable dimension before adding noise. The reverse process then gradually denoises and upsamples the latent variable to produce a final image, and tackles the computational inefficiency of previous diffusion models like DDPM. In the experiments, UDPM can generate images within 3 stages, which is computationally cheaper than a single step in DDPM, and achieves better results. Strengths: - The method is novel and explores a diffusion process across variance scale and dimensionality. The proposed solution is technically sound. - The proposed method is computationally efficient compared to previous diffusion models, which were designed on a fixed dimensionality and relied on a subsequent cascade of upsampling models to reach higher dimensions. - The paper is overall easy to read and well organized. Weaknesses: - The authors do not elaborate on how to determine the number of upsampling stages needed. Moreover, how do we choose the resolutions in training in order to balance performance and computation cost? The authors may need to provide heuristics, theoretical analysis, or empirical studies to guide readers on these choices. - The expression "steps $<1$" is not rigorous. Using NFEs (number of function evaluations) and GPU time (or FLOPs) at different resolution stages may be more informative to the readers. - Some important related works are missing and lack discussion. For example, Simple Diffusion studies the diffusion schedule in terms of the image dimensionality; LEGO diffusion and Matryoshka diffusion also discuss diffusion modeling with variable dimensionality, and the solutions are closely related. Emiel Hoogeboom, Jonathan Heek, and Tim Salimans.
"simple diffusion: End-to-end diffusion for high resolution images." In International Conference on Machine Learning, pp. 13213-13232. PMLR, 2023. Huangjie Zheng, Zhendong Wang, Jianbo Yuan, Guanghan Ning, Pengcheng He, Quanzeng You, Hongxia Yang, and Mingyuan Zhou. "Learning stackable and skippable LEGO bricks for efficient, reconfigurable, and variable-resolution diffusion modeling." In The Twelfth International Conference on Learning Representations. 2023. Jiatao Gu, Shuangfei Zhai, Yizhe Zhang, Joshua M. Susskind, and Navdeep Jaitly. "Matryoshka diffusion models." In The Twelfth International Conference on Learning Representations. 2023. --------------- Some minors: - Some references are not precisely cited. For example, Adir [1] and Wavelet SGM [12] are missing the conference/journal title; score-sde [36] was published in ICLR 2021, DDGAN [40] was published in ICLR 2022, TDPM [42] was published in ICLR 2023. Technical Quality: 3 Clarity: 2 Questions for Authors: Please see the Weaknesses. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors have discussed the limitations and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer’s valuable points. Our responses to the reviewer’s comments are given below. **Weaknesses**: 1) The number of steps is determined by the smallest noise resolution you want to start with. Then everything is fixed. From our experiments, we found that additional diffusion steps beyond 3 did not benefit the generation quality, hence we use 3 diffusion steps. We will make sure to clarify this in the revised version. 2) Because the size of the diffusion variables changes throughout the diffusion process in UDPM, it is not possible to directly compare it to the denoising steps of DDPM. Therefore, we compare the total computations required by each algorithm, from which we obtain that UDPM uses $\sim30\\%$ of the computations used in a single denoising step with the same network, or equivalently $\sim0.3$ the time of a **single** diffusion denoising step. Yet, for the completeness of the comparison, below we report the average runtimes on CIFAR10 of UDPM with 3 steps and DDPM with a single denoising step when a similar network is used. Benchmarked on a single NVIDIA RTX A6000 GPU and averaged over 100K image generations to eliminate any unwanted overhead: UDPM: 2008.21 FPS DDPM: 735.68 FPS $\Rightarrow$ Speedup = 2.73x The runtimes will be added to Table 1 in the revised version. 3) **Simple Diffusion**: This work investigates the noise scheduling and the network architecture, and their relation to the image's resolution. However, this approach complements our method, since it does not modify the diffusion structure itself and only optimizes the empirical setup. **LEGO diffusion**: This approach proposes a LEGO-bricks architecture for improving the training, efficiency, and resolution generalization of diffusion models. This approach is also complementary to UDPM since it can be used alongside our approach for further improvements.
**Matryoshka diffusion**: This method proposes to train a diffusion model on different resolutions simultaneously. The effectiveness of this approach shines when the desired generation resolution is relatively high, where it can generate images with resolutions similar to what Stable-Diffusion can produce but with no need for a separate image encoder. This approach however is very different from UDPM, since it does not modify the formulation of the diffusion model itself. All the missing references will be added to the revised version and discussed accordingly. We thank the reviewer for the effort. Given the substantial improvements made in response to your feedback, we kindly request you to reconsider the score you initially assigned to our submission. We believe that the revised version of our paper now better aligns with the high standards of the NeurIPS conference. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' efforts in addressing my concerns and questions. After reading the rebuttal, I will keep my positive recommendation, and I suggest the authors incorporate the discussions and additional content of the rebuttal into the final revision. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you very much for your thoughtful feedback and for acknowledging that we have addressed the concerns you raised in your reviews. We appreciate the time and effort you put into evaluating our work and the constructive comments you provided. If there are any additional questions or if further clarification is needed, please feel free to let us know. We are happy to provide any further information, and we hope you will consider re-evaluating your score as for acceptance in Neurips the score needs to be around 6. Thank you once again for your valuable input. Best regards, The Authors
Summary: This paper proposes a new training and sampling scheme for a diffusion model. The motivation is to enhance the effectiveness and interpretability of the diffusion model. Building upon the methods of DDPM, this paper introduces an upsampling operation into the Markov process, enabling the model to denoise and upsample simultaneously. Furthermore, through mathematical derivations, the reliability of this process is demonstrated. Experimental results ultimately show that in certain specific scenarios, the model outperforms existing alternatives. Strengths: + The paper is well-written with a clear organizational structure, making it easy to follow. + The paper proposes a new diffusion model framework, complete with mathematical derivations, resulting in a loss function analogous to that used in DDPM. + The discussion and comparison with related work, such as cold diffusion and soft diffusion, are clearly articulated, effectively highlighting the technical contributions of this paper. Weaknesses: - The motivation behind the study is not sufficiently clear, and the interpretability of the model has not been well demonstrated. - There is a lack of ablation studies on the loss function. The complexity of the loss, especially with adversarial training, may lead to instability during training. - The experiments were conducted only at a 64x64 resolution, leaving the scalability of the method unclear. Technical Quality: 2 Clarity: 2 Questions for Authors: - What is the computational logic behind steps less than 1 in Table 1? Could you please provide a clear explanation? - It appears that this method involves special design considerations for the network structure. What is the actual inference latency of this model compared to baselines? - Traditional diffusion models, such as EDM, achieve significantly better results with more sampling steps due to their scalability. How scalable is the proposed method, and how does its performance compare?
Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors have adequately addressed the limitations and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for the reviewer's insightful comments. Our response is provided below. **Weaknesses**: 1) It is well known that diffusion models severely suffer from heavy computations to produce pleasing-looking images due to two aspects: (i) the large number of diffusion steps and (ii) the large dimensions of the latent variables at each step. Compared to GANs, where a single inference step is used with a latent space much smaller than the generated image size, DDPMs have significant drawbacks. As a result, in this paper we narrow this gap significantly, reducing the computations using the same network by $\sim 65\\%$, while outperforming the fastest SOTA diffusion-based models. Additionally, it has been shown in many GAN papers that the latent space of the generative model is interpolatable and interpretable, which was not the case with DDPMs, where even a single-step DDPM has a latent space with the same dimensions as the input image, making the latent space very redundant. In UDPM however, the whole latent space is much smaller than the generated image, making the latent space smooth for interpolation (as shown in Figures 5, 7, and 8) and interpretable (as shown in Figures 6, 9, 10). For further understanding of the interpretability of the model, we present an ablation study where we generate an image, then fix two of its three noise maps, and perturb/rerandomize the third noise map 128 times to produce 128 different images. We then take these images and analyze them by examining the principal components of their covariance matrix to understand how each diffusion step affects the generated image (check the rebuttal PDF file). As can be seen in Figures 1 and 2 in the attached PDF file, the noisiest latent variable controls the semantics of the image (i.e. class, pose, background, etc.), while the initial and middle noise levels control the fine details of the generation.
As a result, one may obtain behavior similar to StyleGAN simply by modifying the last diffusion variable, as demonstrated in Figure 3. 2) Complexity: The reverse process in UDPM requires super-resolving the latent variable from the previous step, which necessitates a sophisticated loss term, as shown in the super-resolution literature [1, 2, 3]; we find this crucial in our case for obtaining sharp and detailed results. Stability: In contrast to the pure adversarial loss used in GANs, using it as a regularization term that guides the network toward sharp solutions is in fact very stable. The only part that needs tuning is the weight of each term, which in our experiments remained fixed to the values reported in the paper for all datasets. To demonstrate the effect of each loss term, Figure 4 in the attached rebuttal PDF shows how each term contributes to the generation results. This ablation will be added and discussed in the revised version. 3) As mentioned in the limitations section, training diffusion models requires heavy computational resources, particularly as the image resolution increases; therefore, due to our limited resources, we leave such exploration to future research. Yet, for completeness, we ran our approach on the LSUN-horses dataset at resolution 128x128 and reported the results in the attached rebuttal PDF file. **Questions**: 1) Because the size of the diffusion variables changes throughout the diffusion process in UDPM, it is not possible to directly compare it to the denoising steps of DDPM. Therefore, we compare the total computation required by each algorithm, from which we find that UDPM uses $\sim 30\%$ of the computation of a single denoising step with the same network, or equivalently $\sim 0.3\times$ the time of a **single** diffusion denoising step. 2) UDPM needs a network architecture that upsamples its input while remaining aligned with other DDPM works to allow direct comparison.
Therefore, we use the popular SongUNet [4] used in many diffusion works, increasing the number of output channels from $3$ to $3\times \gamma^2$ and following it with a depth-to-space layer that upsamples the output by rearranging the pixels (https://pytorch.org/docs/stable/generated/torch.nn.PixelShuffle.html). This architecture makes minimal changes to the network, ensuring minimal additional latency over the baseline, which is reflected in the run times we measure. 3) UDPM generalizes traditional denoising diffusion schemes; therefore, one can omit $\mathcal{H}$ in part of the diffusion steps (e.g., the last ones), so that some diffusion steps become denoising without upsampling. This would enable increasing the number of diffusion steps arbitrarily, similar to traditional diffusion. However, we did not examine such an approach, as we focused on efficiency, and we leave it for further research. [1] Photo-realistic single image super-resolution using a generative adversarial network [2] ESRGAN: Enhanced super-resolution generative adversarial networks [3] Real-ESRGAN: Training real-world blind super-resolution with pure synthetic data [4] Score-Based Generative Modeling through Stochastic Differential Equations Given the substantial improvements made in response to your feedback, we kindly request that you reconsider the score you initially assigned to our submission. We believe that the revised version of our paper now better aligns with the high standards of the NeurIPS conference. --- Rebuttal Comment 1.1: Comment: Dear Reviewer, Thank you for taking the time to review our work. As the discussion period is approaching its conclusion (on August 13, AOE), we kindly ask if you could review our detailed responses to your concerns. We would be happy to address any further questions you might have, and we hope you will consider re-evaluating your score, as acceptance at NeurIPS requires a score of around 6. Thank you again for your efforts.
Best regards, The Authors
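The depth-to-space rearrangement described in the reply above (the `torch.nn.PixelShuffle` operation linked there) can be sketched in plain NumPy. This is an illustrative re-implementation of the standard operation, not code from UDPM itself; the function name `depth_to_space` and the toy shapes are our own choices:

```python
import numpy as np

def depth_to_space(x, r):
    """Rearrange a (C*r*r, H, W) array into (C, H*r, W*r).

    Mirrors torch.nn.PixelShuffle semantics: input channel c*r*r + i*r + j
    fills the (i, j) offset inside each r-by-r block of output channel c.
    """
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)      # split channels into (C, i, j, H, W)
    x = x.transpose(0, 3, 1, 4, 2)    # reorder to (C, H, i, W, j)
    return x.reshape(c, h * r, w * r)

# A network emitting 3 * gamma^2 channels at resolution HxW thus yields a
# gamma-times upsampled 3-channel output; e.g. gamma = 2 on a 2x2 grid:
out = depth_to_space(np.arange(12 * 2 * 2, dtype=float).reshape(12, 2, 2), 2)
print(out.shape)  # (3, 4, 4)
```

This is why the change to SongUNet is cheap: only the final channel count grows, and the upsampling itself is a pure memory rearrangement with no extra learned parameters.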
Rebuttal 1: Rebuttal: In the following PDF file, we present the following ablation studies and results: 1) An additional demonstration of the interpretability of the model. 2) An ablation study on the contribution of each loss term. 3) Additional results on a more diverse dataset with higher resolution. We hope these significant improvements will make the reviewers reconsider their initial scores. Best regards Pdf: /pdf/2d8dc065eccab463c48d9a5c658489af89f03be6.pdf
NeurIPS_2024_submissions_huggingface
2024
Model Reconstruction Using Counterfactual Explanations: A Perspective From Polytope Theory
Accept (poster)
Summary: This paper presents a valuable theoretical analysis that leverages polytope theory to provide novel insights into using counterfactual explanations for model reconstruction, a significant contribution to the field of model reconstruction and the interpretability of black-box models. The key contributions are: Providing a geometric interpretation of how counterfactuals relate to the decision boundary using polytope geometry. Deriving bounds on the expected approximation error of the reconstructed model as a function of the number of counterfactual queries. Extending the analysis to handle cases where only approximate closest counterfactuals are available, using Lipschitz continuity. Proposing the Counterfactual Clamping Attack (CCA), which treats counterfactuals differently in the loss function to prevent decision boundary shift during training of the surrogate model. Demonstrating CCA's improved performance over baselines on multiple real-world datasets, including with one-sided counterfactuals. Strengths: Providing novel theoretical insights into the relationship between counterfactuals and the decision boundary geometry using polytope theory. Deriving mathematical bounds relating the approximation error to the number of counterfactual queries. Proposing CCA, which mitigates decision boundary shift, a key issue with prior counterfactual-based model extraction approaches. Extensive empirical evaluation validating CCA's effectiveness across datasets. Weaknesses: Limited discussion of the applicability of CCA to model families beyond neural networks, such as tree-based models (XGBoost, Random Forest), which are most commonly used for the problems discussed, like loan decisions. The loss function and counterfactual generation process may need adaptations. Lack of analysis of the computational complexity and scalability of CCA, especially for high-dimensional data or large datasets. Generating closest counterfactuals can be computationally expensive.
No exploration of the effect of counterfactual quality aspects like sparsity, actionability, or realism on CCA's performance. The paper does not analyze how enforcing quality constraints on counterfactuals impacts CCA's ability to accurately reconstruct the target model's decision boundary. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Have you explored the application of CCA to other models like tree ensembles, XGBoost, or others that might have non-continuous/non-differentiable decision boundaries? 2. Have you investigated the computational complexity and memory requirements of CCA, especially when scaling to high-dimensional data or large datasets, for example, in comparison to other standard model reconstruction methods for practical applications of the approach? 3. Any exploration of the sensitivity of CCA's performance to the quality (e.g., sparsity, realism) of the counterfactual instances used? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The theoretical analysis is limited to neural network architectures and assumes convex/concave decision boundaries. The authors should discuss the applicability to other model families like tree ensembles. While bounds on approximation error are derived, there is limited analysis of the computational complexity and scalability of the proposed Counterfactual Clamping Attack (CCA) to high-dimensional data or large datasets. The paper does not explore the effect of counterfactual quality criteria like sparsity and similarity on CCA's model extraction performance, which is important for real-world applications. The empirical evaluation is focused on classification tasks. More analysis on regression problems or tasks with complex decision boundaries is needed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for reading the paper and appreciate their detailed review. **_CCA for other machine learning models:_** The proposed CCA algorithm, being implemented through a modified loss function, is in its current form limited to neural networks. However, the initial theoretical development, specifically Theorem 3.2 and Corollary 3.11, is applicable to models with a convex decision region or models that are Lipschitz continuous, respectively. Extending Algorithm 1 to other models, as suggested by the reviewer, is an important path for future exploration. **_Computational complexity:_** In comparison to existing strategies for **model extraction specifically using counterfactuals** [Aïvodji et al., Wang et al.], our model training process in Algorithm 1 is standard, except for two factors: the loss function and the training dataset. Our loss function does not have any significant impact on the computational complexity or the memory requirement, since it involves typical mathematical operations such as addition and multiplication. On the other hand, the training set includes counterfactual explanations and regular datapoints. We note that **computing counterfactual explanations can be computationally intensive,** particularly when the data is high-dimensional, and the cost also varies with the generation method chosen. We assume this increased computational burden is on the API's side. Therefore, the reconstruction strategy is not directly affected by the computational complexity or memory requirements of computing the counterfactuals. We acknowledge, though, that it would indeed be **an interesting discussion** to consider the overall computational complexity of both parties, including both model training and the API's counterfactual generation complexity. [Aïvodji et al.] shows that model extraction using counterfactuals has better performance using fewer queries than traditional methods that do not use counterfactuals.
DualCFX [Wang et al.] further improves performance at the expense of requiring the counterfactual of counterfactuals. Our work eliminates the requirement of the counterfactuals of counterfactuals by specifically leveraging the fact that counterfactuals are close to the decision boundary. **The benefit of model extraction specifically using counterfactuals does come at the additional computational cost of generating counterfactuals in the first place.** The exact cost depends on the strategy being used. **_Sensitivity to the quality of counterfactuals:_** We considered the following counterfactual generating methods in the manuscript, including MCCF with L1 and L2 norms, DiCE actionable counterfactuals, 1-Nearest-Neighbor counterfactuals (counterfactuals from the data manifold) and DiCE counterfactuals with varying levels of sparsity (Section D.2.6 Table 5 and Fig. 16, 17). Now, we also include ROAR [Upadhyay et al.] and C-CHVAE [Pawelczyk et al.] and present the consolidated results in Rebuttal PDF Table 7. We have also included histograms of the prediction probabilities for counterfactuals generated by different methods. We would like to highlight the fact that our strategy does not take into account any specifics of the generating method. Instead, what affects the performance is the distribution of the counterfactuals around the decision boundary. Histograms in the Rebuttal PDF Fig. 23 provide some insights on how the counterfactuals generated using different methods are distributed. Firstly, observe that our strategy CCA does not require closest counterfactuals but is able to reconstruct models quite faithfully for many of the other counterfactual generating methods even if they are not the closest. Additionally, we also observe that the robust counterfactuals generated using the ROAR method have relatively higher prediction probabilities from the target model. 
As we may observe from these histograms, when the counterfactual distribution is concentrated around higher prediction probabilities (e.g.: in case of ROAR or sparse DiCE with posthoc_sparsity_param=0.2 – see manuscript section D.2.6. Fig. 16, 17), the advantage of CCA over the baseline diminishes. We will include a detailed discussion on these observations in the paper. [Wang et al.] Wang, Y., Qian, H., & Miao, C. (2022, June). Dualcf: Efficient model extraction attack from counterfactual explanations. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 1318-1329). [Aïvodji et al.] Aïvodji, U., Bolot, A., & Gambs, S. (2020). Model extraction from counterfactual explanations. arXiv preprint arXiv:2009.01884. [Upadhyay et al.] Upadhyay, S., Joshi, S., & Lakkaraju, H. (2021). Towards robust and reliable algorithmic recourse. Advances in Neural Information Processing Systems, 34, 16926-16937. [Pawelczyk et al.] Pawelczyk, M., Broelemann, K., & Kasneci, G. (2020, April). Learning model-agnostic counterfactual explanations for tabular data. In Proceedings of the web conference 2020 (pp. 3126-3132). --- Rebuttal Comment 1.1: Title: Author Response acknowledged Comment: I have read the authors response. --- Reply to Comment 1.1.1: Comment: Thank you very much for raising these important questions and the detailed review. We are ready to address any other questions the reviewer might have.
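As a rough illustration of the "clamping" idea discussed in this thread, a one-sided loss for counterfactual points might look as follows. This is a hedged sketch under our own assumptions (the exact CCA loss is defined in the paper and not reproduced in this thread; the function name `clamped_bce` is hypothetical): counterfactuals carry the positive label, but incur no penalty once the surrogate already places them on the positive side, so the fitted boundary is pulled toward, rather than past, the counterfactuals:

```python
import numpy as np

def clamped_bce(p, y, is_cf, eps=1e-7):
    """Binary cross-entropy with a one-sided 'clamp' for counterfactuals.

    p     : predicted probabilities of the positive class
    y     : labels (counterfactuals are labeled 1)
    is_cf : boolean mask marking counterfactual points
    """
    p = np.clip(p, eps, 1.0 - eps)
    bce = -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
    # Zero loss for counterfactuals already on the positive side (p >= 0.5),
    # so training does not keep pushing the boundary past them.
    return np.where(is_cf & (p >= 0.5), 0.0, bce)

p = np.array([0.9, 0.9, 0.3])
y = np.array([1.0, 1.0, 1.0])
is_cf = np.array([True, False, True])
loss = clamped_bce(p, y, is_cf)
# counterfactual at p=0.9 incurs zero loss; the ordinary point at p=0.9
# and the misclassified counterfactual at p=0.3 are still penalized
```

Treating the same counterfactual as a plain label-1 point (the baseline behavior) would instead keep shrinking its loss only as p approaches 1, which is what shifts the surrogate boundary beyond the target's.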
Summary: The paper proposes a model reconstruction methodology using one-sided counterfactual explanations. After generating counterfactuals (assumed to be the closest counterfactuals to the observations), the authors reconstruct the original model using a piecewise-linear approximation (a collection of hyperplanes). The key idea is that such hyperplanes can be identified from each observation and its closest counterfactual, whose joining line is perpendicular to the model's decision boundary. The paper discusses convex and non-convex decision regions, and demonstrates that the general non-convex case is challenging to approximate with a finite number of counterfactual explanations. Nevertheless, the authors show that given sufficiently many counterfactual explanations (queries), a ReLU neural network can still be reconstructed. The experimental results demonstrate improvement over one work from the literature (serving as the baseline), for both known and unknown architectures of the ReLU network. The paper is well written and organized. The idea, to my knowledge, is novel. The paper goes into good depth in its analysis, showing interesting insights. Strengths: The idea of using counterfactual explanations together with piecewise-linear approximation for model reconstruction is, to my knowledge, novel. The quality of the writing and presentation is fairly clear. Weaknesses: The key limitation of this paper is that all the theory holds only if the counterfactuals are the closest ones to the observations, which is difficult to guarantee with counterfactual explanation algorithms. One way to deal with this is to incorporate into the derived theory some tolerance on the counterfactual's quality: namely, if an algorithm is able to find counterfactuals within a given radius epsilon of the closest counterfactual, how does this affect model reconstruction?
The connection between the proposed theory and the effectiveness of the authors' model reconstruction methodology is not stated; namely, Algorithm 1 could stand alone without any of the derived theory. Theorem 3.6 relies on the assumption that each cell contains only a completely linear part of the model's decision boundary, with one closest counterfactual in each cell. In order for the first part (that only a linear part is contained in each cell) to hold for most of the cells, one has to make the edge of the cell ($\varepsilon$) really small, which requires a massive number of queries. Hence, the assumption required to make this theorem work is too strong. Technical Quality: 2 Clarity: 3 Questions for Authors: How does the "polytope theory" help the model reconstruction algorithm design? Namely, the foundation (Lemma 3.1), the fidelity expectation for the convex case (Theorem 3.2), and the reconstruction probability lower bound (Theorem 3.6) are proposed, but how can we use them to reconstruct the model better than existing counterfactual-explanation-based reconstruction methodologies? The results in Table 1 have large confidence intervals. For example, in the column Architecture unknown (model 1), D_{uni}, and row DCCC, the baseline is 95±2.2 and the proposed CCA is 95±11.8, etc. Does this imply that the proposed method is not very stable? What if we assume that sufficiently many queries can be made and then increase the number of queries from 400 to 4000? Will the performance of CCA come close to the baseline? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Yes, the authors have stated some limitations clearly and foresee potential non-positive impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to review this paper, and appreciate their acknowledgement of the novelty of our work. **_On the significance and limitations of the theoretical results:_** Deriving theoretical bounds for model reconstruction using counterfactuals under general non-convex decision boundaries is a challenging problem. Our work proposes a novel, non-trivial approach to this problem through the lens of polytopes, with assumptions made for mathematical tractability. We will further elaborate on these assumptions in our limitations. 1. **Closest counterfactuals:** We agree that assuming the availability of the closest counterfactuals in Theorems 3.2 and 3.6 limits their direct applicability to scenarios where the counterfactuals are reasonably close but not exactly the closest. However, Theorems 3.2 and 3.6 still provide valuable insights into how counterfactuals might facilitate model reconstruction; e.g., the empirical mean approximation error appears to be $o(n^{-2/(d-1)})$, which is the theoretical rate of decay derived by assuming closest counterfactuals and convex decision boundaries (as discussed in Section D.2.3 of the manuscript). 2. **Additional insights:** Moreover, Theorem 3.6 suggests that target models with simpler decision boundaries are easier to extract, and Corollary 3.8 specifically shows why model reconstruction is mathematically easier when two-sided counterfactuals are available. Corollary 3.8 is quite relevant in the context of the existing literature on model extraction, which typically assumes two-sided counterfactuals are available. For instance, an institution might give counterfactuals to rejected applicants but not necessarily to the accepted ones, leading to the one-sided counterfactual scenario. 3.
**Beyond closest counterfactuals:** In Section 3.3, we no longer assume the counterfactuals to be the closest, only that they lie on the decision boundary, and we provide guarantees under local Lipschitz assumptions (Theorem 3.10 and Corollary 3.11). We greatly appreciate the insightful suggestion to consider **a tolerance ball**, which can be incorporated directly into Corollary 3.11. We will also include this suggestion in future work, in conjunction with other relaxations such as probabilistically-Lipschitz assumptions [Khan et al.]. Algorithm 1 is motivated by the need to clamp the counterfactuals to the decision boundary as per our analysis in Theorem 3.10 and Corollary 3.11 (which will have further relaxations now using the tolerance ball). 4. **Grid size:** Approximating a non-convex polytope is challenging because, among other things, it cannot simply be expressed as an intersection of half-planes; this led us to explore ReLU networks, whose decision boundaries are polytopes allowing for some mathematical tractability if the boundary is locally linear over some cells of a grid. We agree that the edge cells (cells in which two or more high-dimensional facets are present) violate the assumption and hence the grid size needs to be reduced (though such edge cells might be few in comparison to the entire set of boundary cells). Nonetheless, we believe that our approach is a non-trivial step towards solving this challenging problem. **The number of linear regions of a ReLU network in practice has been observed to be far less than the theoretically achievable maximum** [Hanin and Rolnick]. Moreover, [Jordan et al.] suggests that the decision regions can be represented as polyhedral complexes, which are a specific type of union of convex polytopes. This may further reduce the number of high-dimensional edges that occur in practice, as opposed to the worst-case scenario.
Therefore, the required size of the grid might actually depend largely on the complexity of the classification problem. **_Variability of results in Table 1:_** The results in Table 1 have been averaged after training 100 different target models, generating queries and counterfactuals for them multiple times, and then training surrogate models for each case. As pointed out by the reviewer, the standard deviation is a bit high for one setup, but for most others it is fairly reasonable and comparable to existing methods of model extraction using counterfactuals. **_Significantly higher number of queries:_** For a very high number of queries, the performance depends on how the positive and negative queries are actually distributed. Sometimes, we observe that good fidelity is achieved for both CCA and the baseline, possibly because the positive and negative queries dominate the counterfactuals by a large margin. However, in a few cases, the fidelity of the baseline does not keep increasing with the number of queries and saturates, because it treats the counterfactuals as points with label 1 and hence suffers from the decision boundary shift issue (in this situation, ignoring the counterfactuals entirely or using CCA might be more helpful than treating them as label-1 instances). [Khan et al.] Khan, Z. Q., Hill, D., Masoomi, A., Bone, J. T., & Dy, J. (2024, April). Analyzing Explainer Robustness via Probabilistic Lipschitzness of Prediction Functions. In International Conference on Artificial Intelligence and Statistics (pp. 1378-1386). PMLR. [Hanin and Rolnick] Hanin, B., & Rolnick, D. (2019, May). Complexity of linear regions in deep networks. In International Conference on Machine Learning (pp. 2596-2604). PMLR. [Jordan et al.] Jordan, M., Lewis, J., & Dimakis, A. G. (2019). Provable certificates for adversarial examples: Fitting a ball in the union of polytopes. Advances in neural information processing systems, 32. --- Rebuttal Comment 1.1: Comment: Thank you for your response.
> The number of linear regions of a ReLU network in practice has been observed to be far less than the theoretically achievable maximum [Hanin and Rolnick] Do you mean that the "corners" in Figure 5 would be quite numerous, such that in reality we need many grid cells to properly use Theorem 3.6, or the opposite? It reads a bit confusingly to me. > Moreover, [Jordan et al.] suggests that the decision regions can be represented as polyhedral complexes, which are a specific type of unions of convex polytopes. This may further reduce the number of high-dimensional edges that would occur in practice, as opposed to the worst-case scenario. Therefore, the required size of the grid might actually depend largely on the complexity of the classification problem. I believe this is worth a discussion somewhere in the paper, especially regarding how the size of the grid could be influenced in practice. One of the major concerns about the proposed method, from my point of view, lies in the two strong assumptions (though they are not assumed simultaneously): 1. "Closeness" for the convex case 2. The number of grid cells required for the non-convex case Still, I am not fully convinced by your response (I believe your argument sentence by sentence, but it is not persuasive as a rebuttal for why these two assumptions are not too strong). > Algorithm 1 is motivated from the need to clamp the counterfactuals to the decision boundary as per our analysis in Theorem 3.10 and Corollary 3.11 (which will have further relaxations now using the tolerance ball). And yet, I don't think this fully answers my question. My original question was how the "polytope theory" helps the model reconstruction algorithm design, since "A Perspective From Polytope Theory" is part of your title. Namely, how do the bounds you have derived contribute to Algorithm 1? To me, Algorithm 1 seems quite independent of the other parts of the paper and could stand alone very well.
But I believe there are shining and inspiring points in this paper, even though with above weaknesses. So, I would like to keep my score. --- Reply to Comment 1.1.1: Title: Reply by Authors Comment: We appreciate that the reviewer finds that “there are shining and inspiring points in this paper.” We will definitely include a detailed discussion on the assumptions that we have to make for analytical tractability and the limitations pointed out by the Reviewer. **_Clarifications on the grid-size ($\epsilon$) assumption:_** In general, approximating a non-convex decision boundary becomes challenging since it is impossible to express it as an intersection of half-planes. However, for a ReLU network, the problem becomes analytically tractable because the input domain can be partitioned into **q** regions such that the model is linear in each region (leading to polytope decision boundaries with at most **q** linear pieces) [Chen et al., Hanin and Rolnick]. In fact, if the number of such partitioned regions is q and **one assumes that the adversary knows each of these partitions**, we can derive another result similar to our Theorem 3.6 by constructing only q inverse counterfactual regions and the probability will depend on only q (and not the grid-size). $$\mathbb{P}[\text{Reconstruction}] \geq 1-q(1-v^*)^n$$ where $v^* = \min_i v_i$ with $v_i$ being the volume of the $i^\text{th}$ inverse counterfactual region ($i=1,2,..,q$). However, instead of assuming that the adversary exactly knows the partitions of the ReLU network they are trying to reconstruct (which we feel would be a much stronger assumption for our problem), we assume there is a uniform grid such that the ReLU network’s decision boundary is linear across each small grid-cell that it passes through. We agree that the grid-size would need to be reduced in order for this to hold: otherwise, there would be more “edge” cells where two or more facets intersect (e.g., cells containing the corners in 2D). 
Nonetheless, it is still a weaker assumption than assuming the adversary exactly knows the partitions of the ReLU network. Furthermore, the grid-size would still be determined by the complexity of the ReLU network’s decision boundary (essentially goes back to the number of original partitions q). What we wanted to highlight is that this number of partitions for a ReLU network observed in practice is far less than the theoretically possible maximum [Hanin and Rolnick], holding promise that the grid size might also not need to be too small. Moreover, [Jordan et al.] suggests that the decision regions can be represented as polyhedral complexes, which are a specific type of unions of convex polytopes. This may further reduce the number of high-dimensional edges that would occur in practice, as opposed to the worst-case scenario, and consequently holds promise that the number of such “edge” cells (cells containing corners in 2D) would be fewer. Therefore, the required size of the grid might actually depend largely on the complexity of the classification problem. Relaxing the said assumptions is the main goal of our future work. In particular, we will study the possibility of allowing the adversary to know alternate partitions such that the ReLU network is linear over each region. **_On Algorithm 1:_** Algorithm 1 can stand on its own, without the basis of theorems 3.2 and 3.6 (Theorem 3.10 and Corollary 3.11 provide the main intuition for Algorithm 1). However we see theorems 3.2 and 3.6 as important parts of the journey where we start from a more constrained but mathematically-tractable setting and move to a less constrained but not-so-mathematically-tractable setting. Thank you very much for the detailed review and the valuable insights. [Chen et al.] Chen, K. L., Garudadri, H., & Rao, B. D. (2022). Improved bounds on neural complexity for representing piecewise linear functions. Advances in Neural Information Processing Systems, 35, 7167-7180. 
[Hanin and Rolnick] Hanin, B., & Rolnick, D. (2019, May). Complexity of linear regions in deep networks. In International Conference on Machine Learning (pp. 2596-2604). PMLR. [Jordan et al.] Jordan, M., Lewis, J., & Dimakis, A. G. (2019). Provable certificates for adversarial examples: Fitting a ball in the union of polytopes. Advances in neural information processing systems, 32.
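The lower bound $\mathbb{P}[\text{Reconstruction}] \geq 1-q(1-v^*)^n$ quoted in the authors' reply above can be explored numerically. The sketch below is ours, and the values of $q$ and $v^*$ are hypothetical, chosen only to show how the required number of queries $n$ scales with the number of linear pieces and the smallest inverse-counterfactual-region volume:

```python
import math

def reconstruction_lower_bound(q, v_star, n):
    # P[Reconstruction] >= 1 - q * (1 - v*)^n
    return 1.0 - q * (1.0 - v_star) ** n

def queries_needed(q, v_star, target):
    """Smallest n making the bound reach `target`:
    1 - q(1-v*)^n >= target  <=>  n >= log(q / (1-target)) / -log(1-v*)."""
    return math.ceil(math.log(q / (1.0 - target)) / -math.log(1.0 - v_star))

# Hypothetical example: q = 10 linear pieces, smallest region volume v* = 0.05.
n = queries_needed(10, 0.05, 0.95)
# The required n grows only logarithmically in q, but roughly as 1/v* as the
# smallest inverse counterfactual region shrinks.
```

This makes the reply's point concrete: the bound depends on the number of partitions $q$ rather than on the grid size, so a ReLU network with few linear regions in practice needs far fewer queries than the worst case suggests.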
Summary: The paper presents an approach to mitigate the decision boundary shift problem of counterfactual-based model reconstruction algorithms. The authors leverage the fact that counterfactuals differ from ordinary instances, since they lie relatively close to the decision boundary, to derive a novel loss function for model reconstruction that gives special treatment to counterfactuals. The theoretical foundations of this work are based on polytope theory, using which the authors derive a relationship between the error in model reconstruction and the number of required counterfactual samples. Experimental results show that the proposed approach surpasses the baseline on a number of model reconstruction benchmarks. Strengths: 1. The paper is very clearly written. The authors set a concise and understandable stage for the problem of model reconstruction using counterfactual explanations, explaining the limitation of decision boundary shift that existing models face due to treating counterfactuals the same as ordinary samples. They zero in on their approach, motivated by a sound theoretical basis that establishes a relationship between the error in model reconstruction and the number of required counterfactual examples using polytope theory. 2. The theoretical results are intuitive. The authors derive rates at which the success of model reconstruction changes in terms of the decision boundary complexity and the number of samples for linear models. Further, they provide a bound on the model reconstruction error based on the observation that deep ReLU networks can be represented as continuous piecewise-linear functions, whose decision boundaries form collections of polytopes. 3. The proposed approach provides state-of-the-art results on multiple benchmarks over the existing baseline for counterfactual-based model reconstruction. Weaknesses: 1.
Is there any dependency of the proposed method on the counterfactual generating method MCCF, or, in other words, on the quality/nature of the counterfactuals used? 2. A comparison of the proposed method with regular approaches (that treat counterfactuals and ordinary samples the same), like [Wang et al., 2022] using two-sided counterfactuals, could help solidify the contributions. I understand that it might sound like an unfair comparison, and one should not necessarily expect the proposed CCA to outperform models using two-sided counterfactuals. This is just to get an understanding of how close one can get (using a model like CCA) to such two-sided counterfactual-based approaches by using only one-sided counterfactuals. 3. Comparing with just a single baseline seems a bit insufficient. Perhaps a discussion of how this work relates to model inversion attacks (such as [a,b]) would add to the completeness of the paper. References: [a] Zhao et al., "Exploiting Explanations for Model Inversion Attacks", ICCV 2021. [b] Struppek et al., "Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks", ICML 2022. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the Weaknesses section. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations of this work have been adequately discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for the detailed review and the important suggestions. **_Dependency on counterfactual generating method:_** The performance of the proposed method does not depend on the specific counterfactual generating method, except for the proximity of the generated counterfactuals to the decision boundary. CCA does not take any specifics of the generating method into account. However, the proximity to the decision boundary depends largely on the generating method. Therefore, we have considered several counterfactual generating methods in the experiments (Section D.2.6, Table 5 in the manuscript). Furthermore, following the suggestions of Reviewer VLao, we have included two more counterfactual generating methods: **ROAR** [Upadhyay et al.] and **C-CHVAE** [Pawelczyk et al.] in the Rebuttal PDF (Table 7). In addition, we have included histograms of the predictions made by the target model on the generated counterfactuals and the query instances (see Rebuttal PDF Fig. 23). They provide insights into the distribution of counterfactuals around the decision boundary. As evident from these, the CCA performance generalizes to a wide range of counterfactual generating methods as well as different counterfactual distributions w.r.t. the decision boundary. Moreover, we have discussed the effect of counterfactual sparsity on CCA performance in Section D.2.6 of the manuscript. It has been observed that as the sparsity constraint gets stricter, a significant number of counterfactuals tend to lie further away from the decision boundary (Fig. 16). This in turn causes the gap between the baseline method and CCA to shrink (Fig. 17). We report some values in the table below for easy reference. An extreme effect of this nature can be observed in the case of the ROAR generating method (Rebuttal PDF Table 7). 
Query size=100, fidelity over **test** data (psp=posthoc_sparsity_param; sparsity increases with increasing psp)

|Model|psp=0.1|psp=0.2|
|:---|:---:|:---:|
|Base. M0|0.97|0.97|
|CCA M0|0.98|0.96|
|Base. M1|0.97|0.98|
|CCA M1|0.98|0.98|

Query size=100, fidelity over **uniform** data

|Model|psp=0.1|psp=0.2|
|:---|:---:|:---:|
|Base. M0|0.91|0.94|
|CCA M0|0.92|0.90|
|Base. M1|0.91|0.94|
|CCA M1|0.94|0.95|

**_Comparison with DualCFX_**: The related work [Wang et al.] is one of the few pioneering works studying the effects of counterfactuals on model extraction, and it proposes the interesting idea of using counterfactuals of counterfactuals to mitigate the decision boundary shift. Based on the reviewer’s suggestion, we have now implemented DualCFX and included it in the Rebuttal PDF. The primary focus of our work is on the one-sided scenario where an institution might be giving counterfactuals only to rejected applicants to help them get accepted, but not to the accepted ones. As the reviewer has correctly pointed out, a fair comparison cannot be achieved between CCA and the strategy proposed in [Wang et al.] in the scenario where only one-sided counterfactuals are available, since DualCFX requires counterfactuals from both sides. Therefore, in the two-sided scenario, we compare the performance of CCA with the DualCFX strategy proposed in [Wang et al.] under two settings: 1. only one-sided counterfactuals are available to CCA (named CCA1); 2. CCA has all the data that DualCFX has (named CCA2). We also include another baseline (following [Aïvodji et al.]) for the two-sided scenario where the models are trained only on query instances and counterfactuals, but not the counterfactuals of the counterfactuals. Results are presented in Table 6 of the Rebuttal PDF. 
Note that even for the same number of initial query instances, the total number of actual training instances changes with the strategy being used (CCA1 < Baseline < DualCFX = CCA2 – e.g., queries+CFs for the baseline but queries+CFs+CCFs for DualCFX). **_Related works in model inversion:_** We would like to thank the reviewer for suggesting two important related contributions. We will include a discussion of these works in a revised version of the manuscript. Model inversion is another form of extracting information about a black-box model under limited access to the model internals. In contrast to model extraction, where the goal is to replicate the model itself, in model inversion an adversary tries to extract the representative attributes of a certain class with respect to the target model. In this regard, [Zhao et al.] focuses on exploiting explanations for image classifiers, such as saliency maps, to improve model inversion attacks. [Struppek et al.] proposes various GAN-based methods to make model inversion attacks robust (for instance, to distributional shifts) in the domain of image classification. [Upadhyay et al.] Upadhyay, S., Joshi, S., & Lakkaraju, H. (2021). Towards robust and reliable algorithmic recourse. Advances in Neural Information Processing Systems, 34, 16926-16937. [Pawelczyk et al.] Pawelczyk, M., Broelemann, K., & Kasneci, G. (2020, April). Learning model-agnostic counterfactual explanations for tabular data. In Proceedings of the web conference 2020 (pp. 3126-3132). [Wang et al.] Wang, Y., Qian, H., & Miao, C. (2022, June). Dualcf: Efficient model extraction attack from counterfactual explanations. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 1318-1329). [Aïvodji et al.] Aïvodji, U., Bolot, A., & Gambs, S. (2020). Model extraction from counterfactual explanations. arXiv preprint arXiv:2009.01884. [Zhao et al.] Zhao, X., Zhang, W., Xiao, X., & Lim, B. (2021). 
Exploiting explanations for model inversion attacks. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 682-692). [Struppek et al.] Struppek, L., Hintersdorf, D., Correira, A. D. A., Adler, A., & Kersting, K. (2022, June). Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks. In International Conference on Machine Learning (pp. 20522-20545). PMLR. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response to my comments, which addresses all of my present concerns. I will carefully consider them to arrive at my final decision. --- Reply to Comment 1.1.1: Comment: Thank you very much for the insightful comments. We remain open to addressing any other comments and to providing any clarifications required.
Summary: This paper studies model reconstruction attacks that exploit the proximity of counterfactuals to the decision boundary. The authors aim to establish theoretical guarantees for such attacks. To this end, they characterize the number of queries required for the attacker to achieve a given error in model approximation using results from polytope theory (Theorem 2). The authors’ main result relies on the decision boundary being convex. To relax the convexity assumption, the paper additionally studies the case where the underlying network to be attacked uses ReLU activations and provides a probabilistic bound on the reconstruction rate. Finally, the authors propose a strategy for model extraction. The paper offers strengths in terms of proposing new tools to analyze model extraction attacks. The main theoretical results cover two model classes that are commonly used in the recourse literature (linear models, and NNs with ReLU activations). Overall, the paper provides a good starting point for future research in the area of model extraction attacks through counterfactual explanations, but further improvements would be necessary to generalize the analysis to other commonly used models such as tree-based classifiers. Strengths: - *New theoretical approach to study extraction attacks*: The paper introduces a fresh approach to studying model extraction attacks using counterfactual explanation algorithms, employing methodologies from polytope theory that I have not seen explored in this context before. - *New method*: The authors propose a new model extraction method. - *Clearly structured*: The paper is overall well written and clearly structured. Weaknesses: *Missing variety of recourse methods*: The paper would greatly benefit from a more comprehensive set of experiments that examine the viability of model reconstruction under various recourse settings. 
Specifically, incorporating experiments that generate counterfactuals with data manifold constraints [3,4] or consider robustness [1,2] would be highly valuable. Such experiments would clarify the conditions under which the proposed attacks are likely to succeed or fail, and could indicate potential defenses. For instance, if the attacks fail under certain counterfactual attack methods, this would highlight key characteristics necessary to ensure the safe use of recourse or counterfactual explanations. I strongly encourage the authors to include additional experiments addressing these aspects. Demonstrating the impact of these conditions would significantly strengthen the paper, and I would be happy to increase my score if such experiments are provided. --------- **References** [1] "Probabilistically Robust Recourse: Navigating the Trade-offs between Costs and Robustness in Algorithmic Recourse", ICLR, https://arxiv.org/abs/2203.06768 [2] "Towards robust and reliable algorithmic recourse.", NeurIPS, https://arxiv.org/abs/2102.13620 [3] "Learning model-agnostic counterfactual explanations for tabular data", WWW, https://arxiv.org/abs/1910.09398 [4] "Towards realistic individual recourse and actionable explanations in black-box decision making systems", arxiv, https://arxiv.org/abs/1907.09615 Technical Quality: 3 Clarity: 3 Questions for Authors: Could the authors provide insights into their attempts to estimate the Lipschitz constant in practical applications? Considering the inherent difficulty in obtaining low Lipschitz constants for neural networks of reasonable size, the practicality of the Lipschitz result may be questionable. Clarifying this aspect would help in understanding the real-world applicability of their theoretical findings. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors did not mention any limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their review and greatly appreciate the positive opinion about our work. **_On the variety of recourse methods:_** As per the suggestions of the reviewer, we have now implemented the **ROAR** [Upadhyay et al.] and **C-CHVAE** [Pawelczyk et al.] methods and will include them in our paper. We considered the following counterfactual generating methods in the manuscript: MCCF with L1 and L2 norms, DiCE actionable counterfactuals, 1-Nearest-Neighbor counterfactuals (counterfactuals from the data manifold), and DiCE counterfactuals with varying levels of sparsity (Section D.2.6 Table 5 and Fig. 16, 17). Now, we also include ROAR and C-CHVAE and present the consolidated results in **Rebuttal PDF** Table 7. We have also included histograms of the target prediction probabilities for counterfactuals generated by different methods. We would like to highlight the fact that our strategy does not take into account any specifics of the generating method. Instead, what affects the performance is the distribution of the counterfactuals around the decision boundary. Histograms in the Rebuttal PDF Fig. 23 provide some insights on how the counterfactuals generated using different methods are distributed. Firstly, observe that our strategy CCA does not require the closest counterfactuals but is able to reconstruct models quite faithfully for many of the other counterfactual generating methods even if they are not the closest. Additionally, we also observe that the robust counterfactuals generated using the ROAR method have relatively higher prediction probabilities from the target model. As we may observe from these histograms, when the counterfactual distribution is concentrated around higher prediction probabilities (e.g., in the case of ROAR in the Rebuttal PDF or sparse DiCE with posthoc_sparsity_param=0.2 in the manuscript in Section D.2.6, Fig. 16, 17), the advantage of CCA over the baseline diminishes. 
We will include a detailed discussion on these observations in the paper. **_On Lipschitz constants:_** In the experiments discussed in Section D.2.4 of the manuscript, we approximate the Lipschitz constant with the product of the spectral norms of the weight matrices (following [Gouk et al., 2021]). A recent ICLR paper [Khromov and Singh] studies the tightness of this upper bound, as well as a lower bound computed using the gradients around a subset of input instances. They point out that in practice, the actual Lipschitz constant lies much closer to the lower bound. They report the lower bound for a fully connected MLP with a layer width of 256 to be less than 40 [Khromov and Singh, Fig. 1]. It is also noteworthy that Theorem 3.10 and Corollary 3.11 require the Lipschitz behavior only in a local region around any given input instance, provided that the counterfactuals are well-spread across the decision boundary. This local Lipschitz behavior is a much looser constraint than global Lipschitzness. In essence, Lipschitzness attempts to capture the well-behavedness of the target model, in that it does not change very erratically. There may be scenarios where a model is reasonably well-behaved except in a few regions, which is captured by another body of work on probabilistic Lipschitzness. Such a probabilistic Lipschitz approach [Khan et al.] in conjunction with our result might generalize to a broader class of machine learning models, which can be a path for future exploration. [Upadhyay et al.] Upadhyay, S., Joshi, S., & Lakkaraju, H. (2021). Towards robust and reliable algorithmic recourse. Advances in Neural Information Processing Systems, 34, 16926-16937. [Pawelczyk et al.] Pawelczyk, M., Broelemann, K., & Kasneci, G. (2020, April). Learning model-agnostic counterfactual explanations for tabular data. In Proceedings of the web conference 2020 (pp. 3126-3132). [Gouk et al., 2021] Gouk, H., Frank, E., Pfahringer, B., & Cree, M. J. (2021). 
Regularisation of neural networks by enforcing lipschitz continuity. Machine Learning, 110, 393-416. [Khromov and Singh] Khromov, G., & Singh, S. P. (2024). Some Fundamental Aspects about Lipschitz Continuity of Neural Networks. In The Twelfth International Conference on Learning Representations. [Khan et al.] Khan, Z. Q., Hill, D., Masoomi, A., Bone, J. T., & Dy, J. (2024, April). Analyzing Explainer Robustness via Probabilistic Lipschitzness of Prediction Functions. In International Conference on Artificial Intelligence and Statistics (pp. 1378-1386). PMLR. --- Rebuttal 2: Title: Response to Rebuttal Comment: I very much appreciate the additional results provided by the authors. As the response has addressed all my concerns, I am increasing my score. I would expect the additional results to be discussed in the final version of the paper as well. --- Rebuttal Comment 2.1: Comment: Thank you very much for updating the score. We will definitely include the additional results and a discussion in the updated version of the manuscript. We greatly appreciate the insightful review.
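As a rough illustration of the spectral-norm approximation discussed in the Lipschitz rebuttal above (following [Gouk et al., 2021]), the upper bound for a ReLU MLP can be sketched as below. This is our own hedged sketch, not code from the paper; the function name and example matrices are hypothetical:

```python
import numpy as np

def lipschitz_upper_bound(weights):
    """Upper bound on the Lipschitz constant of a ReLU MLP:
    the product of the spectral norms of its weight matrices.
    ReLU is 1-Lipschitz, so the activations do not inflate the bound."""
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, 2)  # spectral norm = largest singular value
    return bound
```

For diagonal toy matrices the spectral norm is just the largest diagonal entry, which makes the product easy to check by hand; the bound is typically loose for trained networks, which is exactly the gap to the gradient-based lower bound discussed above.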
Rebuttal 1: Rebuttal: We thank all the reviewers for their insightful comments and suggestions. We are glad that our work has been recognized as an “in-depth analysis of a novel model reconstruction strategy” by Reviewer z1A7 and as a “good starting point for future works” by Reviewer VLao. Model reconstruction using counterfactuals has received limited attention when compared to the vast literature on model extraction strategies using other cues. Within the scope of model extraction using counterfactuals, the existing works focus on scenarios where the counterfactuals are available from both sides of the decision boundary. Moving beyond this requirement, our work proposes a novel extraction strategy that utilizes counterfactuals lying only in the accepted region. We arrive at this strategy by analyzing such attacks through the lens of polytope theory. Moreover, we provide a comprehensive set of experiments comparing existing strategies with ours. In addition to the individual responses/clarifications, we have conducted the following additional experiments as per the reviewer suggestions: 1. Two additional counterfactual generating methods (**ROAR** [Upadhyay et al.] and **C-CHVAE** [Pawelczyk et al.]) as suggested by Reviewer VLao 2. Comparing CCA with the **DualCFX** method proposed in [Wang et al.] as suggested by Reviewer Bnbn Results of these experiments are included in the attached **Rebuttal PDF**. [Upadhyay et al.] Upadhyay, S., Joshi, S., & Lakkaraju, H. (2021). Towards robust and reliable algorithmic recourse. Advances in Neural Information Processing Systems, 34, 16926-16937. [Pawelczyk et al.] Pawelczyk, M., Broelemann, K., & Kasneci, G. (2020, April). Learning model-agnostic counterfactual explanations for tabular data. In Proceedings of the web conference 2020 (pp. 3126-3132). [Wang et al.] Wang, Y., Qian, H., & Miao, C. (2022, June). Dualcf: Efficient model extraction attack from counterfactual explanations. 
In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 1318-1329). Pdf: /pdf/d0b597c88c410ca8844288074a3f84e2bc3453e9.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Ask, Attend, Attack: An Effective Decision-Based Black-Box Targeted Attack for Image-to-Text Models
Accept (poster)
Summary: The paper focuses on the vulnerability of image-to-text models to adversarial attacks, particularly in a decision-based black-box targeted attack scenario. The authors design a three-stage attack process: (i) Ask: Guides attackers to create target texts that fulfill specific semantic requirements. (ii) Attend: Identifies crucial regions of the image for the attack, reducing the attack's search space. (iii) Attack: Utilizes an evolutionary algorithm to attack these critical regions, aligning with the semantics from the Ask stage to achieve targeted attacks without semantic loss. Tests on transformer-based and CNN+RNN-based models demonstrate the success of the proposed AAA method in performing effective targeted attacks. Strengths: 1. A novel decision-based black-box targeted attack approach is proposed. 2. The idea of reducing the search space is interesting. Weaknesses: 1. The technical contribution is limited. Many concepts and ideas are directly adopted from existing work. There are many steps that use existing models such as CLIP models, which only work on some benchmark datasets. 2. The writing quality needs to be enhanced. For instance, the title should be "An Effective Decision-Based Black-Box Targeted Attack for Image-to-Text Models". Are there any typos in $S_{sem}$ and $S_{seg}$? Some technical details are unclear. For example, how can the authors guarantee that the target words remain close to the input space as many rounds of mutation proceed? Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How can the authors guarantee that the target words remain close to the input space as many rounds of mutation proceed? 2. Can the authors provide more explanation about why selecting a visual model is good as the surrogate model? Will the model architecture or parameters have impacts on the feature map? I guess for common datasets such as ImageNet and common architectures, these models could be similar. 
But what if different model architectures were trained on a more diverse dataset? As a black-box attack, it is hard to guarantee the consistency of feature map generation among different models. 3. Are the mean and median of heatmap A around [0.3, 0.4] for all types of data? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors mentioned the limitation of a high number of queries and low optimization efficiency. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: To Weakness 1: Our technical contribution lies in carefully crafting specific designs to improve search efficiency and make our framework applicable in more difficult scenarios. Each design and its technical contribution is summarized as follows. 1) Improving search efficiency from different perspectives: a) Reducing the search difficulty: Designing a target semantic dictionary as prior knowledge shortens the search path from image to target text as much as possible, improving the search efficiency of the Attack stage (line 137). b) Reducing the computation time: The Attack stage requires calculating the cosine similarity matrix of the entire population’s text in each iteration, which is time-consuming. The CLIP model’s text encoder can process large amounts of text in parallel. Therefore, we disable the visual encoder of CLIP and use the text encoder to compute the cosine similarity matrix between the target text and the output text of all chromosomes (Equation 13), speeding up the search. c) Reducing the search space: The large number of decision variables (pixels) leads to a vast search space and low search efficiency. We combine the Grad-CAM formula with the decision variables to design Equation 10, reducing the range of decision variables in unimportant areas (e.g., the background), significantly reducing the search space. 2) Dealing with different difficult scenarios: a) Black-box: Grad-CAM requires gradient information and is not suitable for black-box scenarios, nor can it handle the image-to-text task where the number of output text categories is infinite. Based on previous research (cited [28], CVPR 2021) and our experiments (Figure 4), we designed a surrogate model strategy to generate attention maps in black-box scenarios. And we designed a CLIP-based mapping formula (Equation 9) to map the output text to the category set of the surrogate model, enabling the generation of attention maps for image-to-text tasks with unlimited output categories. 
b) Perturbation imperceptibility: The perturbations of adversarial examples need to be imperceptible to the human eye. The Grad-CAM-based Equation 10 we designed can also enhance the stealthiness of adversarial examples. This is because a smaller search range for the decision variables means smaller perturbations at each pixel. To Weakness 2: Thank you for pointing out these typos. The term $S_{seg}$ in line 167 is a typo and should be corrected to $S_{sem}$. We have checked the rest of the paper and found no other typos. We will correct this in the final version. Your question about target words is addressed in Question 1 below. To Question 1: The mutation process in the Ask stage does not change the position of any words in the input space. Instead, it filters words related to the attacker’s specified semantics within a small hypersphere centered on the image (Equation 2). All target words are within a distance of radius $\eta$ from the image, ensuring that the target text (created based on the target semantic dictionary) is also positioned close to the image. This effectively reduces the search difficulty and improves the search efficiency in the Attack stage (Table 1 and Appendix B.3). To Question 2: DNNs with different architectures and parameters exhibit similar attention patterns for the same image, as concluded in previous adversarial attack work (cited [28], Figure 4, CVPR 2021) and validated by our qualitative and quantitative experiments on diverse CNN and Transformer models (Figure 4). The role of attention maps in our framework is to reduce the search space in unimportant regions (e.g., the background), enhancing the stealthiness of adversarial images and reducing the search difficulty in the Attack stage. Even if different surrogate models produce slightly different attention maps, the cost is limited to additional computation in unimportant regions. Since adversarial attacks are not time-sensitive tasks, this cost is acceptable. 
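The attention-weighted search range described above (smaller perturbation limits for low-attention pixels, in the spirit of Equation 10) amounts to a per-pixel clipping rule. A minimal sketch of that idea, with our own function name and toy values rather than the paper's actual implementation:

```python
import numpy as np

def clip_perturbation(delta, attention, eta):
    """Keep each pixel's perturbation within [-A(i)*eta, A(i)*eta],
    where A(i) in [0, 1] is that pixel's attention weight.
    Unimportant regions (low attention, e.g. background) get tighter
    limits, shrinking the search space and the visible perturbation."""
    limit = attention * eta  # per-pixel bound, elementwise
    return np.clip(delta, -limit, limit)
```

For example, with a budget of `eta=8`, a pixel with attention weight 0.5 is clipped to ±4, and a zero-attention background pixel is left unperturbed.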
To Question 3: Yes, this is a pattern we observed through statistical analysis of the heatmap value distribution. We have added the numerical distribution of all heatmaps as well as the median and mean values to the supplementary file in the rebuttal system. --- Rebuttal Comment 1.1: Comment: Hello, Is there any theoretical guarantee for question 1? What is the additional computation cost if dealing with different heatmaps? Does "DNN with different architecture and parameters exhibit similar attention patterns for the same image" work for other modalities? --- Rebuttal 2: Comment: # To Question 1: ## Is there any theoretical guarantee for question 1? In fact, the reason we can ensure that the chromosome (perturbed image) after multiple rounds of mutation in the Ask stage remains in close proximity to the clean image, thereby facilitating the search for the corresponding target word, is grounded in the following mathematical rationale: Let $x$ denote the clean image, $n$ the number of pixels in the image, $\eta$ the maximum perturbation size for each pixel, and $x_i$ represent the $i$-th pixel of the image. Define $\Vert x \Vert_{\infty} = \max(|x_1|, \cdots, |x_i|, \cdots, |x_n|)$ to represent the infinity norm of $x$. Let $x_{r1}$, $x_{r2}$, and $x_{r3}$ be the images obtained by adding different random perturbations not exceeding $\eta$ to each pixel of $x$. Let $F$ be the mutation scaling factor, and $v$ be the chromosome (perturbed image) obtained after mutation. 
The hypersphere $B(x, \eta)$ is centered at $x$ with a radius $\eta$, and is mathematically characterized as: $$B(x, \eta) = \\{z \mid \Vert x - z \Vert_{\infty} \leq \eta \\}$$ The distance between the mutated chromosome $v$ and the clean image $x$ is given by: $$\Vert v - x \Vert_{\infty} = \Vert x_{r1} + F \cdot (x_{r2} - x_{r3}) - x \Vert_{\infty} = \Vert x_{r1} - x + F \cdot (x_{r2} - x) + F \cdot (x - x_{r3}) \Vert_{\infty}$$ Employing the triangle inequality, we deduce: $$ \Vert v - x \Vert_{\infty} = \Vert x_{r1} - x + F \cdot (x_{r2} - x) + F \cdot (x - x_{r3}) \Vert_{\infty} \leq \Vert x_{r1} - x \Vert_{\infty} + F \cdot \Vert x_{r2} - x \Vert_{\infty} + F \cdot \Vert x - x_{r3} \Vert_{\infty}$$ Given the definition of the hypersphere $B(x, \eta)$, for any point $x'$ within $B(x, \eta)$, it follows that: $$\Vert x' - x \Vert_{\infty} = \max_i |x'_i - x_i| \leq \eta$$ Consequently: $$\Vert v - x \Vert_{\infty} \leq \eta + F \cdot \eta + F \cdot \eta = (1 + 2F) \cdot \eta$$ From this, the scaling factor $\frac{1}{1 + 2F}$ is derived. Let $v' = x + \frac{v - x}{1 + 2F}$, which yields: $$\Vert v' - x \Vert_{\infty} \leq \eta$$ By assigning $v'$ to $v$, we guarantee that the mutated chromosome $v$ will always be within the hypersphere $B(x, \eta)$ regardless of the number of iterations. Through these mathematical calculations, we can ensure that the new chromosome $v$ obtained after any number of mutation operations will be in close proximity to the input image (with a distance that does not exceed $\eta$). # To Question 2: ## What is the additional computation cost if dealing with different heat maps? The additional computational cost refers to the extra iterations required by the evolutionary algorithm to find a solution with the same fitness value. 
As shown in Figure 3(b) of the main paper, the experimental setup of $AAA$ (w/o Attend) used extremely poor attention heatmaps (failing to identify any unimportant regions), resulting in an additional 200 iterations to find a solution of the same quality as $AAA$. This indicates that the quality of the attention heatmap (its ability to identify unimportant regions) is related to the convergence speed of the evolutionary algorithm. The reason is that higher-quality attention heatmaps can effectively reduce the search space. # To Question 3: ## Does "DNN with different architecture and parameters exhibit similar attention patterns for the same image" work for other modalities? This conclusion holds within a single modality, but it does not hold across modalities. Observations on datasets from different modalities such as RGB, thermal imaging, and infrared reveal that different DNNs trained on the same modality have similar attention maps. However, for an image in another modality (thermal or infrared), these DNNs (trained on RGB) produce attention maps with differences. This may be because thermal and infrared images lack the color and texture features of RGB images, leading to varying degrees of inability to identify unimportant areas by the DNNs. It is worth noting that our work mainly focuses on the RGB modality, addressing the challenges of black-box attacks and targeted attacks, without involving cross-modality attacks, as image-to-text models are mostly based on the RGB modality.
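The rescaling argument in the response to Question 1 above can be sketched numerically. This is our own illustrative sketch (function name and toy dimensions are ours, not the paper's): if the three parent chromosomes lie in the $l_\infty$ ball $B(x, \eta)$, the mutated and rescaled chromosome provably stays in that ball.

```python
import numpy as np

def mutate_in_ball(x, x_r1, x_r2, x_r3, F):
    """Differential-evolution mutation v = x_r1 + F*(x_r2 - x_r3),
    rescaled by 1/(1+2F). By the triangle-inequality bound
    ||v - x||_inf <= (1+2F)*eta, the rescaled chromosome stays in
    B(x, eta) whenever the three parents do."""
    v = x_r1 + F * (x_r2 - x_r3)
    return x + (v - x) / (1.0 + 2.0 * F)
```

A quick check on random inputs confirms the max absolute deviation of the result from `x` never exceeds the radius used to draw the parents.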
Summary: The authors tackle the challenging problem of adversarial attacks on Image-to-Text Models, focusing specifically on the black-box attack scenario where an attacker has no access to the internal workings of the model, only its output. To address this challenge, they propose a novel framework called Ask, Attend, Attack (AAA), which leverages evolutionary algorithms to mount targeted attacks. The AAA framework consists of three stages: Ask, where the attacker generates target texts that satisfy specific semantic constraints; Attend, where crucial regions of the image are identified without access to the image encoder; and Attack, where an evolutionary algorithm is used to attack these identified regions with semantically related attacks that match the target texts from the Ask stage. This approach cleverly avoids the issue of semantic mismatch encountered in previous gray-box attacks. The authors demonstrate the effectiveness of their proposed framework through experiments on two distinct models. Strengths: 1. The authors aptly acknowledge the complexity of the domain they tackle in this paper, which underscores the significance and novelty of their contributions. 2. The authors provide a clear and comprehensive description of the three stages comprising their proposed AAA framework. Notably, each stage is carefully designed to address the semantic mismatch issue that has been previously observed in related research, offering a thoughtful and intentional approach to tackling this challenge. 3. The experimental outcomes presented in this manuscript are encouraging, as the proposed AAA framework demonstrates strong performance across the two models examined. Additionally, I commend the authors for including an ablation study, which provides valuable insights into the effectiveness of individual components and sheds light on their contributions to the overall framework's success. Weaknesses: 1. 
While I appreciate the effort invested in this paper, I do find some aspects lacking in terms of clear motivation. For instance, the example provided to justify the attack setting, where an attacker seeks to elicit political slogans and hate speech by introducing imperceptible perturbations to the image, seems somewhat weak compared to more established examples in the domain of image classification. Additionally, I would have liked a clearer explanation for the specific population definition chosen for the Ask stage; without further justification, it's difficult to fully understand the reasoning behind this design choice. 2. The authors effectively emphasize the practicality of the attack setting, which is indeed a significant strength of this paper. However, I do have some reservations about the feasibility of the proposed solution. Specifically, I'm concerned that the attacker would need to interact with the victim model numerous times to generate the target image, which seems impractical and may not be a realistic scenario in many real-world applications. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. I see that the authors chose the $l_1$ adversarial attack domain. Can AAA be generalized to the more common $l_{\infty}$ attack? 2. How are the different parameter values chosen? 3. In Table 1, the epsilon values chosen are 25 and 15. These seem rather big when compared to the image classification domain. How do the results look when epsilon is made even smaller? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: To Weakness 1: To underscore the significance of research into image-to-text black-box targeted attacks, we will add an example of societal harm and highlight a societal benefit: 1) Harm Example: Social media companies use image-to-text AI for content moderation of user-uploaded images. Black-box targeted attacks on image-to-text AI could enable attackers to alter the semantic content of images from illegal to legal, thus bypassing the moderation process and uploading the illegal images to social media. 2) Benefit: The introduction of image-to-text black-box targeted attacks can alert AI service providers to vulnerabilities in their AI products, providing a benchmark to improve the security of their AI products. In Section 3.3, the Ask stage's population consists of randomly perturbed images, where each perturbed image is a chromosome and each pixel is a gene. The goal of the Ask stage is to obtain prior knowledge (the target semantic dictionary) that reduces the search difficulty of the evolutionary algorithm (the Attack stage). The target semantic dictionary contains words that are 1) close to the image in the input space, and 2) related to the attacker's specified semantics. Equations 2, 3, and 4 describe a random search within a small hypersphere centered on the input image $x$ with radius $\eta$, ensuring all potential words are close to the input image. Equations 5, 6, and 7 store words related to the attacker's specified semantics from the output text of each chromosome in the target semantic dictionary, helping the attacker generate a target text that meets their requirements (target semantics) and can be efficiently searched. To Weakness 2: In Appendix C.1, we discuss the limitations of our framework in real-world applications, specifically the need for numerous interactions with the target model. This is due to our use of a basic differential evolution algorithm, which requires many iterations to converge to the optimal solution.
A potential improvement (Appendix C.2) is to combine our framework with the current state-of-the-art evolutionary algorithms, significantly reducing the number of interactions needed. To Question 1: In fact, our framework uses the $l_{inf}$ attack domain, not the $l_1$ domain. As shown in Equation 10, the value range for the i-th variable (pixel) of the clean image $x$ is [$-A(i) \cdot \eta$, $A(i) \cdot \eta$], where $A(i)$ is the contribution weight (0 to 1) and $\eta$ is the perturbation size parameter. This means each pixel has its own perturbation limit ($l_{inf}$ attack domain), adjusted by its contribution weight. For example, less important areas (like the background) have reduced perturbation limits. Since each pixel has a different perturbation limit, we represent the constraint in Equation 1 using the $l_1$ attack domain. To Question 2: The parameter values are determined through ablation studies. For example, Appendix B.2 determines the target semantic strategy, Appendix B.3 determines the word selection strategy, Appendix B.4 determines the population size, Appendix B.6 determines the evolutionary algorithm, Figure 4 in the main text determines the surrogate model, and Figure 5 in the main text determines the perturbation size, etc. To Question 3: In Section 4.2 (Qualitative Experiment of Different Perturbation Sizes), we have compared attacks with epsilon values of 25, 15, 10, and 5 (Figure 5). Our conclusions are: 1) As perturbation size decreases, both our method and existing methods become more stealthy, but their attack performance decreases. 2) As perturbation size increases, our black-box method achieves nearly 100% target attack performance, while existing gray-box methods have lower performance due to semantic loss issues. --- Rebuttal Comment 1.1: Title: Response to author comment Comment: I thank the authors for responding to my comments. However, my concern regarding Weakness 1 still remains. 
Due to this factor and after reading through the other reviews and responses I have decided to keep my scores the same. --- Rebuttal 2: Comment: Dear Reviewer 8Kf2, As the deadline for the discussion period is approaching, we would like to kindly request your feedback on our responses. We wish to express our deepest gratitude for the time and efforts you have dedicated to reviewing our work. We sincerely hope that our detailed responses have adequately addressed all the concerns and suggestions you raised. We fully understand that you may be occupied with other commitments, but we would greatly value any comments you can provide on our responses before the deadline. Thank you for your attention to this matter. We eagerly look forward to hearing from you soon. Sincerely, 11274 Authors
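The per-pixel perturbation limit described in the rebuttal above (each pixel $i$ constrained to $[-A(i) \cdot \eta,\ A(i) \cdot \eta]$) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the attention map below is a random stand-in for the contribution weights produced by the surrogate model:

```python
import numpy as np

def clip_perturbation(delta, attention, eta):
    """Clip a candidate perturbation so that pixel i stays within
    [-A(i) * eta, A(i) * eta], as described in the rebuttal."""
    limit = attention * eta              # per-pixel l_inf budget
    return np.clip(delta, -limit, limit)

rng = np.random.default_rng(0)
attention = rng.uniform(0.0, 1.0, size=(8, 8))   # hypothetical contribution weights A(i)
delta = rng.uniform(-50.0, 50.0, size=(8, 8))    # unconstrained candidate perturbation
clipped = clip_perturbation(delta, attention, eta=25.0)
```

Less important regions (small $A(i)$, e.g. background) end up with tighter budgets, which is the behavior the rebuttal describes.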
Summary: This paper proposes a new adversarial attack, termed the AAA (Ask, Attend, Attack) attack, against image-to-text models. In the Ask stage, the attacker iteratively generates candidates to produce individuals closer to the target semantics in the feature space of the target model. During the Attend stage, the attacker utilizes a surrogate model to generate a heatmap to guide the attack. Finally, in the Attack stage, an evolutionary algorithm is adopted to search for individuals whose output text is closer in feature distance to the target text. Strengths: * This paper presents the first black-box targeted adversarial attack towards image-to-text models. * The proposed attack is shown to be even more effective than existing gray-box attacks. * Comprehensive evaluations are conducted, including detailed ablation studies on each component. Weaknesses: * The presentation of Section 3.3 is confusing. Steps 1, 2, and 3 are randomly shuffling the input images. It is not well-motivated or clearly explained why the attacker needs to generate the perturbed images following these steps. * The Ask stage seems to be "optimizing" the target text so that the target text would be naturally closer to the input image, which, from my perspective, makes the attack less "targeted" as it is actually easing the attack. As shown in Table 1, the Ask stage heavily contributes to the effectiveness of the AAA attack. Technical Quality: 3 Clarity: 2 Questions for Authors: * In line 262, "AAA (w/o Ask) means the target text is not from the target semantic dictionary, but random words". Why didn't the authors use the target semantics as the target text? It is confusing that AAA (w/o Ask) adopts random words as the target text while achieving comparable performance to gray-box attacks. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: See weaknesses and questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: To Weakness 1: Section 3.3 (the Ask stage) aims to obtain prior knowledge for searching the target text. Prior knowledge shortens the search path from image to target text as much as possible, improving the search efficiency of the Attack stage (line 137). Specifically, we perform a random search within a small hypersphere centered on the image $x$ with radius $\eta$, ensuring all potential words are close to the image (Equations 2, 3, and 4). Then, we compile a target semantic dictionary from the words related to the attacker's desired semantics (Equations 5, 6, and 7). To Weakness 2: We agree that the Ask stage makes the attack easier, which is exactly our goal and is reasonable: 1) Technically Reasonable: Prior knowledge can reduce the search difficulty of the evolutionary algorithm. In addition, adversarial attacks often rely on prior information. For example, existing image-to-text gray-box attacks depend on the prior information of gradients, and black-box attacks for classification tasks rely on the prior information of similarity between surrogate and target models, while our prior information is the target semantic dictionary. It is worth noting that our prior information is more easily obtainable in reality, while their prior knowledge is difficult to acquire. 2) Fair Comparison: We applied the same prior knowledge (the target semantic dictionary) to both our method and the existing methods, meaning that all approaches in the comparative experiments utilized the identical target text, thereby guaranteeing a fair comparison. 3) Reasonable use scenarios: Attackers typically need to meet their target semantic requirements without fixating on specific words.
For example, if an attacker wants to map an illegal image to a legal text related to "dogs", and the target semantic dictionary (prior knowledge) includes the words "puppy" and "jogging" but not "little dog" and "running", the attacker can choose the sentence "The puppy is jogging" as the target text, avoiding "The little dog is running". The reason is that although both sentences meet the attacker's needs, the former is derived from the dictionary and is therefore easier to search for. To Question 1: Using the target semantics as the target text for AAA (w/o Ask) would be unfair. In our paper, AAA (w/o Ask) randomly creates a complete and coherent sentence from all English words as the target text, which is fair. Why the former would be unfair: The target text must be a sentence, not a word, because the output text of an image-to-text model is always a complete sentence. If the target semantics (a single word, such as "animal" or "photograph") is used as the target text to calculate the similarity with the output text (a complete sentence), it is difficult to achieve a high similarity score. Why the latter is fair: The sole difference between AAA and AAA (w/o Ask) lies in the range of words for the target text. For AAA, the range is "words within the target semantics dictionary," whereas for AAA (w/o Ask), it is "all English words." Our experimental setup for AAA (w/o Ask) fairly demonstrates the impact of word selection range on the attack. Additional comparative experiments with various word selection ranges are conducted in Appendix B.3, which can provide a better understanding. It is important to emphasize that when comparing our method AAA with existing gray-box methods, the target texts are identical sentences, ensuring the fairness of the experiment. --- Rebuttal 2: Comment: Dear Reviewer xQiz, As the deadline for the discussion period is approaching, we would like to kindly request your feedback on our responses.
We wish to express our deepest gratitude for the time and efforts you have dedicated to reviewing our work. We sincerely hope that our detailed responses have adequately addressed all the concerns and suggestions you raised. We fully understand that you may be occupied with other commitments, but we would greatly value any comments you can provide on our responses before the deadline. Thank you for your attention to this matter. We eagerly look forward to hearing from you soon. Sincerely, 11274 Authors
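As a sketch of the Ask stage described in the rebuttals above (perturbed images as chromosomes sampled inside a small ball around the input, output words filtered by the attacker's target semantics), the procedure might look as follows. The captioning model and word sets here are toy stand-ins, not the paper's setup:

```python
import numpy as np

def ask_stage(x, caption_model, target_words, eta, pop_size, rng):
    """Sketch of the Ask stage: sample perturbed images (chromosomes) inside
    an l_inf ball of radius eta around x, caption each one, and keep the
    output words related to the attacker's target semantics."""
    dictionary = set()
    for _ in range(pop_size):
        chromosome = x + rng.uniform(-eta, eta, size=x.shape)  # each pixel is a gene
        text = caption_model(chromosome)
        dictionary |= {w for w in text.split() if w in target_words}
    return dictionary

# Toy stand-ins for demonstration only.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 255.0, size=(4, 4))
captions = ["a dog runs", "a puppy jogging", "a cat sits"]
model = lambda img: captions[int(img.sum()) % len(captions)]
dictionary = ask_stage(x, model, target_words={"dog", "puppy"}, eta=25.0, pop_size=20, rng=rng)
```

The resulting dictionary is then the prior knowledge from which the attacker composes a target text that is easy for the Attack stage to search for.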
Rebuttal 1: Rebuttal: Dear Reviewers, thank you for recognizing our work as novel, important, and experimentally adequate. Your comments have greatly improved the quality and clarity of our manuscript. We will address your concerns one by one. Pdf: /pdf/132c92adf358b2e20ab9392c02b216be2b32cf2e.pdf
NeurIPS_2024_submissions_huggingface
2024
Online Learning with Sublinear Best-Action Queries
Accept (poster)
Summary: In this paper, the authors study the classical problem of online learning in the additional context of algorithms with predictions, or learning-augmented algorithms. In online learning, a learning algorithm must repeatedly select from a set of options, each associated with a loss function, and the goal is to minimize the regret, i.e., the difference between the (expected) loss of the algorithm and that of the best fixed option in hindsight. The loss function may be adversarially generated based on time and prior actions, which makes obtaining good regret traditionally hard. By utilizing predictions to give the learning algorithm some additional information about the unknown loss function, we may be able to obtain better regrets and break impossibility bounds. The authors incorporate 'best-action queries', which are queries that inform the algorithm of the best option at the current time step. Prior work considers different forms of predictions or queries, such as a vector correlated with the actual loss function, or the best option amongst a small subset of all the options. This work differs from prior works both in the form of the advice and in that the learning algorithm is only allowed to query at $k$ out of $T$ time steps. Within this advice model, the authors show that in either the full feedback model, in which the full loss function is revealed to the algorithm after every time step, or the label efficient feedback model, in which the algorithm is only given the full loss function if it issues a query in the current round, the query budget $k$ multiplicatively affects (decreases) regret. This is a surprising result since the regret per round is upper bounded by $1$, so $k$ rounds can only increment the regret additively instead of multiplicatively.
The algorithm the authors propose is a very simple and intuitive modification of the classical Hedge, or multiplicative weights, algorithm, except that in $k$ out of $T$ rounds, randomly sampled according to some procedure, the algorithm queries the best action and follows it instead of a weighted selection amongst all options. Strengths: The paper studies the intersection between two interesting fields, learning-augmented algorithms and online learning, and contributes positively to its literature. Traditionally, the field of learning-augmented algorithms places a focus on (classical) online algorithms, so whether external predictions can help online learning is a very natural question to ask as an extension. Over prior works in the same intersection, the authors additionally showcase the power of such predictions by showing that a small number of queries or predictions can significantly decrease the loss/regret of online learning algorithms, which is a very surprising and exciting discovery. Weaknesses: I find the authors' presentation somewhat rushed and unsatisfactory; it makes significant assumptions on the reader's background knowledge. Coming from a learning-augmented algorithm background, I find many online learning aspects of the paper not sufficiently defined or explained in a self-contained way, despite having some familiarity with the field. Some minor points of complaint are listed in the Questions section below. More importantly, a majority of the main corpus of the paper consists of technical proofs following one another, without much introduction or intuition in between to give readers a high-level guide to what is going on in the paper. It is very hard, without full familiarity with the relevant literature, to follow the authors' proofs and understand what part a claim or statement plays in the grand scheme of proving the main theorems.
As a result, while I believe the authors' claims are overall correct, I am unable to confidently confirm the technical soundness of the paper. Overall I still believe that this is an exciting paper, but its presentation can be improved significantly. Technical Quality: 3 Clarity: 2 Questions for Authors: The authors discuss the potential of using noisy, possibly erroneous predictions or queries in the conclusion. From my understanding, the learning-augmented algorithms community puts a lot of emphasis on utilizing machine-learned oracles that can possibly be accurate but have no rigorous guarantees to facilitate and *augment* classical algorithms to retain both *consistency* (good performance when predictions are accurate) and *robustness* (good performance even when predictions are arbitrarily bad). As a result, I would be very interested in seeing discussions and extensions in this noisy prediction regime in the future. As illustrated above, I believe that a lot of intuition, such as a high-level description of the proof strategy, and the usage and purpose of each component lemma or claim in colloquial terms, would be very helpful to the overall presentation in the main corpus, and can help guide readers to follow the flow of logic. Some minor details I would like to point out: - Line 35-36: "... the platform's task **consists in** deciding whether..." should be "consists of"? - Line 49: "...with full feedback, **an** in the **stochastich** multi-armed...". - The phrases "loss" and "feedback" are both used a lot, perhaps interchangeably, which is somewhat confusing. This is especially important in the label-efficient feedback section, in which it is not immediately clear what is "partial" about the "partial feedback" given to the algorithm. Is it just that the algorithm receives the loss function only when it makes a query, or is the received loss function itself partial?
- Line 102-103: The entire sentence within the parenthesis does not make sense to me, grammatically. - Line 120: The introduction contains multiple mentions of the classical Hedge algorithm, but never explains, in any level of detail, what the Hedge algorithm is. From their pseudocode of Algorithm 1, which is described as a modification of Hedge, I can extrapolate that Hedge is the classical multiplicative weight algorithm, but the authors did not even make this clear in the introduction. - Line 142: "We have the following theorem." Which theorem is this sentence exactly referring to? Theorem 2.2 is the natural candidate but its statement predates this sentence. - Line 162: I find it slightly strange that of the two algorithms presented in the paper, one is given a pseudocode block, while the other is only given a description. - Line 196: "...we construct two **of** randomized instances...". - Line 199-200: "(and $Z_t$ is an empty $n$-dimensional vector)". Is the scope of this sentence if a query is not issued at time $t$? Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors sufficiently discuss limitations and future directions in the conclusion section. There are no ethical concerns or limitations in their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the questions and suggestions, which we will implement in the final version of the paper. **Question.** The authors discuss the potential of using noisy ... prediction regime in the future. **Answer.** We thank the reviewer for this suggestion. In several applications, these external queries come from human reviewers, for whom we can assume that query responses are perfect (see, e.g., https://transparency.meta.com/en-gb/policies/improving/content-actioned-metric/, https://transparency.meta.com/en-gb/policies/improving/prioritizing-content-review/, https://support.google.com/adspolicy/answer/13584894). However, if the queries are generated by external learning algorithms (even if trained on this specific task), they may be erroneous. The suggested direction is an interesting open question, which we defer to future work. We believe that our paper's techniques for perfect best-action queries will be essential for analyzing such extensions of the model. **Question (cont'd).** As illustrated above, I believe ... follow the flow of logic. **Answer.** We appreciate the reviewer's feedback. We have added an overarching explanation of the proof strategy in Section 1.3. We detailed the technical challenges and how we addressed them, providing an explicit explanation of how the various components fit together and highlighting the advantages of uniform querying. We will expand the explanation in the technical sections, exploiting the extra page that is provided for the final version of the paper. **Minor details.** - Q - Line 49: "...with full feedback, an in the stochastich multi-armed...". The phrases "loss" and "feedback" are both used a lot, perhaps interchangeably, which is somewhat confusing. This is especially important in the label-efficient feedback section, in which it is not immediately clear what is "partial" about the "partial feedback" given to the algorithm.
Is it just that the algorithm receives the loss function only when it makes a query, or is the received loss function itself partial? - A - Yes, the algorithm receives the \textit{entire} loss vector when it makes the query, and before choosing the action. So, the loss vector itself is given as a whole (and, thus, not partial) but only when a query is issued. - Q - Line 120: The introduction contains multiple mentions of the classical Hedge algorithm, but never explains, in any level of detail, what the Hedge algorithm is. From their pseudocode of Algorithm 1, which is described as a modification of Hedge, I can extrapolate that Hedge is the classical multiplicative weight algorithm, but the authors did not even make this clear in the introduction. - A - Yes, it indeed is the classical Hedge algorithm. However, for ease of analysis and calculation, we subtract $\ell_t(i^*_t)$ in the update rule exponent. As the reviewer points out, however, the distribution from which we sample the next action is the same as in classical Hedge (the exponent cancels out in numerator vs. denominator). - Q - Line 142: "We have the following theorem." Which theorem is this sentence exactly referring to? Theorem 2.2 is the natural candidate but its statement predates this sentence. - A - We actually meant "We have the following algorithm." Thank you for pointing this out. - Q - Line 199-200: ``(and $Z_t$ is an empty $n$-dimensional vector)''. Is the scope of this sentence if a query is not issued at time $t$? - A - Thank you for pointing this out. We meant to write ``(and $Z_t$ is an empty $n$-dimensional vector otherwise)'', i.e., we do not observe anything if we do not query. We will edit it in the final version of the paper. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I maintain my rating.
Overall I agree with the other reviewers that the paper asks a very timely and interesting question about whether we can utilize reliable queries (and hopefully in the future, learning-augmentation with unreliable predictions) to facilitate online learning, and I very much hope to see the paper structured better with more intuition and self-contained proofs, and suggestions from me and other reviewers incorporated, in the camera-ready version, if accepted.
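The exponent-cancellation point made in the rebuttal above (subtracting $\ell_t(i^*_t)$ in the Hedge update leaves the sampling distribution unchanged) is easy to verify numerically. A minimal sketch, not code from the paper:

```python
import numpy as np

def hedge_probs(cum_losses, eta):
    """Classical Hedge: sample action i with probability
    proportional to exp(-eta * L_i)."""
    w = np.exp(-eta * cum_losses)
    return w / w.sum()

def shifted_hedge_probs(losses, eta):
    """Variant from the rebuttal: subtract the per-round best loss
    l_t(i*_t) inside the exponent. The shift is constant across actions
    in each round, so it cancels between numerator and denominator."""
    shifted = losses - losses.min(axis=1, keepdims=True)
    w = np.exp(-eta * shifted.sum(axis=0))
    return w / w.sum()

rng = np.random.default_rng(1)
losses = rng.uniform(0.0, 1.0, size=(50, 4))   # 50 rounds, 4 actions
p1 = hedge_probs(losses.sum(axis=0), eta=0.1)
p2 = shifted_hedge_probs(losses, eta=0.1)
```

Shifting every action's loss by the same per-round constant multiplies all weights by a common factor, which cancels in the normalization, so `p1` and `p2` coincide.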
Summary: The paper’s title concisely summarizes the topic. They study both the “full feedback” setting, where the learner observes the entire loss vector (what the loss would have been for each possible action) on each time step, and the “label-efficient feedback” setting, where the learner only observes the loss vector on time steps when a query is issued. The results are as follows: 1. An algorithm for the full feedback setting which achieves regret $O(\min(\sqrt{T}, T/k))$ with k best action queries. Interestingly, the queries are issued uniformly at random. 2. An algorithm for the label-efficient feedback setting which achieves regret $O(\min(T/\sqrt{k}, T^2/k^2))$. The queries are also issued mostly at random here (specifically, uniformly at random until the budget is exhausted). 3. Lower bounds for both of the above results which are asymptotically tight. Strengths: I find the model of sublinear best-action queries to be quite natural, and the authors also do a good job of motivating it. One example given in the paper which I found compelling was an algorithm for flagging harmful content which can escalate to a human review a limited number of times. I also found the results to be impressive. As the authors state, > Note, the total of the losses incurred by any algorithm in k rounds is at most k, which only affects the overall regret in an additive way; nevertheless, we prove that issuing k queries has an impact on the regret that is multiplicative in k. This seems like a potentially powerful insight to me. Weaknesses: I found the paper to be quite technically dense. Although the description of the results in the introduction was clear, the authors don’t provide much intuition for their results. I had a hard time identifying where (on a technical level) the multiplicative power of k comes from. Conveying technical intuition along with technical results is very beneficial for other researchers trying to make use of the insights from this paper. 
I suspect the impact of the current version of paper is limited by the presentation. For transparency, I will mention that I am not an expert in this sub-area, and it is possible that experts would not have the same complaint. However, to the extent that the intended audience of this paper is researchers in related but not identical fields, I think this weakness is significant. See also the “limitations” section below. Technical Quality: 3 Clarity: 2 Questions for Authors: Can you provide technical intuition for where the multiplicative power of the best-action comes from? Below are minor comments that the authors are free to use or discard. I’m including them in this section because I don’t know where else to put them, but I don’t expect responses. - Line 71: “In our paper we pinpoint the exact minimax regret rates for the problems studied.” It might be worth saying “asymptotically tight” instead of “exact” (although it’s pretty clear from context). - Lines 84 - 86 are quite compelling. - Line 93: Maybe mention that the learner still observes the entire loss vector, not just the best action. - Lines 132 - 133: “combining the Hedge algorithm [Chapter 2 in Cesa-Bianchi and Lugosi, 133 2006] to uniform queries” → “with uniform queries”? - Line 200: “and Z_t is an empty n-dimensional vector” → “and Z_t is an empty n-dimensional vector otherwise”? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: In Question 2 of the checklist, the authors state: > The paper has no significant limitation. Being theoretical in nature, there is a set of model assumptions I found this answer to be disappointing. Theoretical papers absolutely have limitations. Are the model assumptions realistic? If not, to what extent do the results crucially hinge on those assumptions? Do the algorithms have inefficiencies that would impede or preclude practical usage? Are the technical insights too complex to be easily understandable by other researchers? 
These are just example limitations; I’m not claiming that any of these particular examples apply to this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the questions and suggestions, which we will implement in the final version of the paper. **Question 1.** Can you provide technical intuition for where the multiplicative power of the best-action comes from? **Answer.** We are happy that the reviewer shares our enthusiasm for the (somewhat) surprising effectiveness of our query model. We provide here two high-level observations that may help a better understanding. We will add them to the final version of the paper: - The additive term $\frac{k}{T} \cdot L_T^{\min}$ (given in Observation 2.3, and in particular right after line 150) impacts the choice of the learning rate, which is allowed to be more aggressive, thus impacting the regret in a multiplicative way. To be more specific, in the usual Hedge performance analysis, we do not care about the negative term $\frac{-\eta L_T^{\min}}{1-\eta}$ (in the inequality given right after line 153). This negative term together with the additive $\frac{k}{T} \cdot L_T^{\min}$ term given by best-action queries allows us to set the optimal $\eta$ to be larger than the usual (order of) $1/\sqrt{T}$. In other words, the additive impact of the $\frac{k}{T} \cdot L_T^{\min}$ term permits a multiplicative gain in regret as the learning rate $\eta$ is modified and increased. - Consider the following natural instance, which provides a simple proof of the $\Omega(\sqrt{T})$ regret lower bound in the adversarial setting (without queries). The instance is composed of two arms, whose losses are i.i.d. Bernoulli random variables with parameter $0.5$. Any learning algorithm incurs expected loss $T/2$, while the best fixed arm in hindsight is expected to save an extra $\Theta(\sqrt{T})$ term (a simple corollary of the expected distance of a random walk), so the regret is $\Theta(\sqrt{T})$. Now, if the learner is given the power to issue $\approx \sqrt{T}$ queries, then its regret naturally drops from $\sqrt{T}$ to constant.
**Limitation 1.** Are the model assumptions realistic? If not, to what extent do the results crucially hinge on those assumptions? **Answer.** We thank the reviewer for pointing this out. The model is realistic in that the number of queries one can issue is limited. Moreover, if the external queries come from a human (which is true in many application settings), we can assume that these reviews are noiseless (see, e.g., https://transparency.meta.com/en-gb/policies/improving/content-actioned-metric/, https://transparency.meta.com/en-gb/policies/improving/prioritizing-content-review/, https://support.google.com/adspolicy/answer/13584894). This, of course, no longer holds when queries are generated by an external learning algorithm, where some degree of noise is inevitable. Our current model does not account for potentially faulty queries. Indeed, our algorithms ``blindly trust'' the query, and the bounds presented in our analysis rely on the assumption that the query is perfect. For example, the earlier intuitive explanation of the multiplicative power given by best-action queries no longer holds. However, we believe that our work, which provides a nearly complete understanding of the perfect-prediction case, is a natural starting point in this direction. We will expand on this in the final version of the paper, and we will update the checklist accordingly. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I am satisfied and am maintaining my positive rating.
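The two-arm random-walk intuition from the rebuttal above can be checked with a quick Monte Carlo estimate (an illustrative sketch, not an experiment from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
T, trials = 10_000, 500

# Two arms with i.i.d. Bernoulli(1/2) losses. Any algorithm's expected loss
# is T/2, while the best fixed arm in hindsight saves an extra Theta(sqrt(T)).
losses = rng.integers(0, 2, size=(trials, T, 2), dtype=np.int8)
best_in_hindsight = losses.sum(axis=1).min(axis=1).mean()
regret = T / 2 - best_in_hindsight   # unavoidable regret without queries
print(regret / np.sqrt(T))           # roughly a small constant
```

The estimated regret grows like $\sqrt{T}$, which is exactly the gap the rebuttal says $\approx \sqrt{T}$ best-action queries can remove.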
Summary: This paper considers an online learning with actions model where the learner is allowed to make $k$ "best action queries". Such a query at step $t \in [T]$ will return $i_t^\ast \in \arg\min_{i \in [n]} \ell_t(i)$; an action that minimizes the loss at step $t$. The loss values are bounded (in $[0, 1]$ w.l.o.g.) and may be generated by an oblivious adversary. The best action query is performed _before_ the learner makes the decision (arm choice). The regret of the learning algorithm is defined in the usual way. They first consider the full feedback setting, where _after_ making the arm choice, the learner observes the entire loss vector $(\ell_t(1),\ldots,\ell_t(n))$. In this setting, without best action queries, the Hedge algorithm gives $\tilde{O}(\sqrt{T})$ regret for bounded losses (and this is well known to be optimal). (i) In the full feedback setting (augmented with best action queries), the authors show that the Hedge algorithm with $k$ uniformly random best action queries gives an (expected) regret of $\tilde{O}\left(\min\{\sqrt{T}, T/k\}\right)$. This beats the standard regret bound when $k$ is $\omega(\sqrt{T})$. (ii) They also show a matching lower bound (up to $\log n$ factors): any full-feedback algorithm with $k$ best action queries (not necessarily random) will suffer $\Omega\left(\min\{\sqrt{T}, T/k\}\right)$ (expected) regret. They then consider a label-efficient feedback model w.r.t. the best action queries. Here the restriction in feedback is entirely in terms of observing the loss values (there is no outcome space) and linked to the best action queries themselves. The learner can make $k$ best action queries; if the learner chooses to make a query at time step $t$, it receives the full feedback after it makes the choice. Otherwise, it receives no feedback.
(i) In the label-efficient feedback setting (augmented with best action queries), the authors show that the label-efficient Hedge algorithm (which does not update the probabilities when there is no feedback) with random best action queries ($\leq k$ of them, but not quite uniformly random, unlike in full feedback) achieves $O\left(\min(T/\sqrt{k}, T^2/k^2)\right)$ regret. This beats the known regret bound for label-efficient Hedge (from Cesa-Bianchi and Lugosi 2006) when $k$ is $\omega(T^{2/3})$. (ii) They also show a matching lower bound on the regret (up to $\log n$ factors) of any algorithm with $k$-query label-efficient feedback. In the appendix, they show nearly identical bounds for full feedback and label-efficient feedback in the stochastic losses setting. Here the algorithms are Follow the Leader and Explore then Commit, and the best action queries are simply made in the first $k$ rounds. Strengths: * The model (best action queries) is theoretically interesting, and the results are quite nice in the sense that one can get polynomial improvements in the regret by incorporating _sublinearly many random_ best action queries into the standard (full feedback and label-efficient feedback) Hedge algorithms. * The paper is well-written. Almost all the proofs are cleanly presented in the main matter itself. They also survey and compare some of the existing work on augmenting online prediction algorithms with additional information. Weaknesses: * Even though the best action query model yields theoretically interesting results, it is a bit too strong compared to some of the existing work. Even if only sublinearly many queries are made, each query is supposed to give the _correct best action (among all)_ for that time step before making the decision, and the number of queries needed is still polynomial in $T$ (at least $T^{1/2}$) to get any interesting result w.r.t. the regret (in both full feedback and label-efficient feedback).
When the authors link best action queries to human expert advice in the introduction section, the assumption on the queries becomes somewhat impracticable. Technical Quality: 4 Clarity: 3 Questions for Authors: * In the equations after line 178, is the last equality an inequality ($\leq$) using $T\eta \leq \hat{k}$? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 2 Limitations: * There is no negative social impact. * The major practical limitation is the strong requirements of the query model. It would be much more feasible if (i) the queries returned an "approximate" best action among all, (ii) the queries returned the exact best action among a subset of actions or (iii) interesting results were obtainable with logarithmically many queries. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
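The stochastic-losses strategy summarized in this review (spend the best-action queries on the first $k$ rounds, then run Follow the Leader) is simple enough to sketch. The code below is an illustrative reconstruction with my own names and details, not the authors' implementation:

```python
import random

def ftl_with_queries(losses, k):
    """Spend best-action queries on the first k rounds (playing the revealed
    best action), then Follow the Leader on cumulative losses thereafter."""
    n = len(losses[0])
    cum = [0.0] * n  # cumulative loss of each action
    total = 0.0      # loss accumulated by the learner
    for t, l in enumerate(losses):
        if t < k:
            choice = min(range(n), key=lambda i: l[i])    # query: play the best action
        else:
            choice = min(range(n), key=lambda i: cum[i])  # follow the leader
        total += l[choice]
        for i in range(n):
            cum[i] += l[i]
    return total - min(cum)  # regret against the best fixed action
```

With $k = T$ every round is a query and the learner's regret is non-positive; the interesting regime in the results above is sublinear $k$.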
Rebuttal 1: Rebuttal: We thank the reviewer for the questions and suggestions, which we will implement in the final version of the paper. **Question.** In the equations after line 178, is the last equality an inequality using $T\eta \leq \hat k$? **Answer.** Yes, thank you for pointing this out. **Limitation.** The major practical limitation is the strong requirements of the query model. It would be much more feasible if (i) the queries returned an "approximate" best action among all, (ii) the queries returned the exact best action among a subset of actions, or (iii) interesting results were obtainable with logarithmically many queries. **Answer.** We thank the reviewer for this suggestion. In several applications (see, e.g., https://transparency.meta.com/en-gb/policies/improving/content-actioned-metric/, https://transparency.meta.com/en-gb/policies/improving/prioritizing-content-review/, https://support.google.com/adspolicy/answer/13584894), these external queries come from human reviewers, for whom we can assume that query responses are perfect. We agree that in some applications, e.g., when the queries are generated by external learning algorithms, some degree of noise may be inevitable. We believe that also addressing these applications is a compelling and interesting direction of research, and that our work, which provides a nearly complete understanding of the perfect-prediction case, may serve as a natural starting point in that direction.
Summary: This paper considers the standard prediction with expert advice setting of online learning, but with the twist that the learner may issue, up to $k$ times, a "best-action query" before making a prediction, in which case the identity of an expert incurring the smallest loss in the round is revealed to the learner (and of course they can choose that expert on that round). Two settings are investigated: the standard experts setup wherein the losses of all experts are revealed after each round, and the 'label-efficient prediction' (LEP) setting, where the losses are not revealed unless the learner issues a query for the same (which the authors identify with the best-action query, i.e., all losses are only revealed when the learner makes a best-action query). The paper shows that these $k \ll T$ best-action queries have an impressive effect, improving the regret in the experts setting to $O(\min(\sqrt{T}, T/k))$, and in the LEP setting to $O(\min(T/\sqrt{k}, T^2/k^2))$. Surprisingly, the method achieving this is just Hedge, but run with losses of the form $\ell_{t,i} - \min_i \ell_{t,i}$ instead of just $\ell_{t,i}$ (with the appropriate modification a la prior work on LEP for that setting). The approach to showing this uses the standard analysis of the Hedge algorithm, and uses the additive advantage of the best-action queries to allow the method to set an aggressively large learning rate (when $k$ is large enough) without suffering a strong penalty for the same, leading to improved regret bounds. The paper concludes by showing matching lower bounds for this setup, in a minimax sense. Strengths: I think that this paper is both pertinent and timely: the investigation of how augmented information can improve regret in classical online learning settings has been an interesting direction of research in recent years, and the paper makes a significant contribution to this body of work. I particularly find the improvement in the LEP setting to be remarkable.
The paper is, further, well written and easy to understand, with existing arguments being used in an interesting new way. Weaknesses: I don't see major weaknesses in the paper. I think the only change that is necessary is that the discussion of the related work in section C should be moved to the main paper, since this provides valuable context to the investigation presented within (but this should be easily accommodated with the extra page, if the paper is accepted). Perhaps another point of improvement lies in deepening the discussion of scenarios where the model being studied has pertinence. The moderation setup certainly is interesting and natural, but are there other situations where the authors see the relevance of best-action queries? Technical Quality: 4 Clarity: 4 Questions for Authors: Suggestion: One approach to modeling problems like using limited moderation effectively that I have seen is through the abstention model of learning, wherein the learner may "abstain" on a query, and receive a "best-response" by utilising extra resources. I think work on online abstention may thus be a useful point of contact with the literature that the paper misses, and might be worth including a discussion on. This includes both the KWIK model and the full-information model. Useful points of contact with this literature are below. Li, Littman, Walsh, and Strehl, Machine Learning 2011; Sayedi, Zadimoghaddam, and Blum, NIPS 10; Zhang and Chowdhuri, COLT 16; Cortes et al., ICML 18; Neu and Zhivotovskiy, COLT 20; Gangrade et al., NeurIPS 21. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: This is fine Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
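The shifted-loss Hedge mechanism highlighted in the summary above (exponential weights on $\ell_{t,i} - \min_i \ell_{t,i}$, with queried rounds played greedily) can be sketched as follows. This is an illustrative reconstruction; the sampling scheme, learning rate, and names are my assumptions, not the paper's:

```python
import math
import random

def hedge_with_queries(losses, k, eta):
    """Hedge on shifted losses l_t(i) - min_j l_t(j); on k uniformly random
    rounds a best-action query reveals the minimizer, which is played directly."""
    T, n = len(losses), len(losses[0])
    query_rounds = set(random.sample(range(T), k))
    w = [1.0] * n
    total = 0.0
    for t, l in enumerate(losses):
        best = min(range(n), key=lambda i: l[i])
        if t in query_rounds:
            choice = best  # the query hands us the best action for this round
        else:
            # sample an action proportionally to the exponential weights
            r, acc, choice = random.random() * sum(w), 0.0, n - 1
            for i in range(n):
                acc += w[i]
                if r <= acc:
                    choice = i
                    break
        total += l[choice]
        shift = l[best]
        for i in range(n):
            w[i] *= math.exp(-eta * (l[i] - shift))  # shifted losses lie in [0, 1]
    best_fixed = min(sum(l[i] for l in losses) for i in range(n))
    return total - best_fixed  # realized regret on this loss sequence
```

Note that the shift multiplies all weights by the same constant each round, so the Hedge distribution itself is unchanged; per the summary, the gain comes from the analysis tolerating a much larger $\eta$ once queries are present.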
Rebuttal 1: Rebuttal: We thank the reviewer for the questions and suggestions, which we will implement in the final version of the paper. **Weakness.** Perhaps another point of improvement ... best-action queries? **Answer.** One other application of our model is in fraud detection. In this context, an online learning algorithm aims to identify incoming points as either potential frauds or benign content. This is critically important for platforms like Booking.com, where customers defrauded by fictitious hotels or BnBs must be reimbursed. Proactively identifying potential fraud saves the platform substantial amounts of money. Specifically, Tax, Jan de Vries, de Jong, Dosoula, van den Akker, Smith, Thuong, and Bernardi (MLHat, 2021) provide evidence of a machine learning system designed to detect potentially fraudulent listings on Booking.com. This system is supplemented by expert human oversight, which can be called upon a limited number of times. These instances of expert involvement can be interpreted as best-action queries. **Suggestion.** One approach to modeling problems ... Gangrade et al., Neurips 21. **Answer.** We thank the reviewer for providing this point of contact. We will make sure to expand upon this connection in the related work section of the final version of the paper.
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Disentangling the Roles of Distinct Cell Classes with Cell-Type Dynamical Systems
Accept (spotlight)
Summary: The authors developed a model named Cell-Type Dynamical Systems (CTDS) to capture the electrical activity of excitatory (E) and inhibitory (I) neurons in rat frontal orienting fields (FOF) and anterior dorsal striatum (ADS) during an auditory decision-making task. Strengths: The experiments are well-designed and comprehensive. The results are well presented. The authors systematically explore the CTDS model's performance on E and I neurons' electrical activity, as well as on these neurons' activity within two brain regions. The model shows higher accuracy in predicting neural activity compared to standard LDS models. By separating latents for E and I neurons, the model provides deeper insights into the functional roles of different cell classes. Weaknesses: 1. Figure 2D is hard to interpret; what is the y-axis of the plot for eigenvalues? 2. For Figure 2E, why are the ADS E latents not plotted? 3. What is the dashed line in the ipsilateral bias plot in Figure 3CD? What does the star mean in the plot, especially in Figure 3D? Technical Quality: 3 Clarity: 3 Questions for Authors: The authors focus on two neuron cell types: excitatory (E) and inhibitory (I) neurons. Is it possible to extend the model to other neuron cell types? How does the CTDS model perform on neural recordings from other brain regions and tasks? Is the CTDS model feasible for real-time applications, such as closed-loop experiments where neural activity needs to be decoded and perturbed in real time? From Figure 2E, the authors state that 'both E and I latents in the FOF encoded the animal's choice, with their trajectories separating out for left and right choices in opposite directions'. Does this mean only the I latents of the ADS region participate in encoding the animal's choice? But from Figure 2D, these two regions are recurrently connected. Then, what is the function of the E neurons in ADS during the decision-making process?
Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Addressed limitation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive assessment of our work. We are glad that the reviewer found our work to be well presented, and our experiments to be comprehensive. Below are our detailed responses to the reviewer's questions: 1. **Clarifications on figures 2D / 3CD:** Thanks for bringing these to our attention, we will add detailed captions to clarify these figures. In Fig 2D, the y-axis represents the magnitude of the eigenvalues. In Fig 3CD, the dashed line shows 0% bias, which means that the animal makes perfect choices. In Fig 3D, the stars indicate that the experimentally observed ipsilateral biases during both early-half and later-half perturbations in the ADS are significantly above 0% (Yartsev et al. 2018). In Fig 3C, the observed bias is only significant during later-half FOF perturbations as indicated by the star, and not during the early-half perturbations. Furthermore, the difference between the two perturbation effects in FOF is significant (Hanks et al. 2015). 2. **E latents in ADS:** Thanks for pointing this out. ADS does not have E neurons, thus we thought it was not important to visualize E latents in ADS. We discuss this in sec A3 of the appendix, but will make sure to mention this in the main text to avoid confusion. 3. **Extension to other cell types:** Thanks also for this comment; we feel this is an important point about the generality of our model, and we have elaborated on it in more detail in our response to all reviewers. 4. **Performance on other brain regions and tasks:** We focused on one task in this work, and fitted our models to one dataset containing FOF and ADS neurons. However, we would like to point out that our results (Fig 3CD) are consistent with two other papers (Hanks et al. 2015 and Yartsev et al. 2018), which study the same task but using distinct datasets. Our focus here is to introduce the model, and comprehensively demonstrate its merits using the Poisson clicks task.
We hope future work will apply CTDS to other tasks and brain regions. 5. **Real-time decoding and perturbations:** CTDS is indeed capable of real-time decoding. Once the model has been fitted, during inference Kalman filtering can be used to infer underlying latent states and behavior. Thus, it is also possible to extend this framework to control the inputs being provided to the model in a closed loop while running real-time decoding. Thanks for this important comment, we will add this to the discussion section. --- Rebuttal Comment 1.1: Comment: I believe the authors have addressed all my concerns. I will keep the scores unchanged. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to help improve our work.
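On the real-time decoding point above: once a linear-Gaussian state-space model has been fitted, standard Kalman filtering infers the latent state online, one observation at a time. The sketch below is a generic illustration that ignores CTDS's sign constraints and task inputs; all parameter values are arbitrary assumptions:

```python
import numpy as np

def kalman_filter(ys, A, C, Q, R, x0, P0):
    """Return filtered latent means E[x_t | y_1..y_t] for each time step."""
    x, P = x0.copy(), P0.copy()
    means = []
    for y in ys:
        # predict through the latent dynamics
        x = A @ x
        P = A @ P @ A.T + Q
        # update with the new observation
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ (y - C @ x)
        P = (np.eye(len(x)) - K @ C) @ P
        means.append(x.copy())
    return np.array(means)

# tiny simulated system, purely for illustration
rng = np.random.default_rng(0)
A, C = 0.9 * np.eye(2), np.eye(2)
Q, R = 0.01 * np.eye(2), 0.1 * np.eye(2)
x = np.array([1.0, -1.0])
ys = []
for _ in range(50):
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    ys.append(C @ x + rng.multivariate_normal(np.zeros(2), R))
means = kalman_filter(np.array(ys), A, C, Q, R, np.zeros(2), np.eye(2))
```

Because each step uses only the current observation, the same loop can run inside a closed-loop experiment, with inputs or perturbations chosen as a function of the filtered state.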
Summary: This work proposed a dynamical model with cell-type-specific latents, focused in particular on excitatory (E) and inhibitory (I) neurons. They developed a cell-type dynamical system (CTDS), where E/I neurons can only have positive or negative effects, respectively. They apply this model to a decision-making task, and CTDS outperforms the standard LDS model in decoding animal choice. They also performed an in silico experiment with optogenetic stimulation to test the causal effects on behavior. In the end, they also demonstrate that CTDS could help with identifying cell types. Strengths: **Motivation** 1. Neurons of different cell types have very distinct roles in neural computation, yet this is often ignored in dynamical modeling of neural activity. This paper focuses on an important problem, models cell types via their positive/negative effects, and demonstrates the effectiveness of the proposed model. **Results** 1. They evaluated the proposed model on extensive applications, including decoding choices, studying causal effects from in silico experiments, and inferring cell types. 2. The results on studying causal effects are very impressive; given that the model is only trained on unperturbed data, this demonstrates that the learned model generalizes well to new scenarios. Weaknesses: **Evaluation** The proposed model is only compared with an over-simplified linear RNN model; adding nonlinearity to the model and including more powerful baselines would strengthen the evaluation. **Results** The CTDS does not outperform a simple LDS in several settings, i.e., Fig 2c, Fig 3c. **Applications** Knowing cell-type information in advance can be difficult in many experimental setups, and using anatomical information or spike-width histograms can introduce errors into the cell-type labels. Evaluating the robustness of the results given incorrect cell-type information would strengthen the paper.
**Related works** Relevant references on modeling cell-type-specific neural dynamics and inferring cell types are missing. [1] Learning Time-Invariant Representations for Individual Neurons from Population Dynamics. NeurIPS 2023. [2] Transcriptomic cell type structures in vivo neuronal activity across multiple timescales. Cell Reports 2023. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What might be the potential challenges in extending this framework to a nonlinear model? 2. How could this be extended to more fine-grained cell-type levels, e.g., subclasses (VIP, SST, PVALB, etc.)? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive assessment of our paper and constructive suggestions for improving it. We are glad that the reviewer found our experiments to be extensive, and our causal perturbation results to be impressive. However, we feel that the reviewer has misunderstood one of our key findings, as we explain below. We apologize for not making this point clearer, and would humbly request that the reviewer consider raising their score if indeed they are persuaded by our response (on that and several other issues that we describe below). 1. **Comparison between CTDS and a standard LDS (Fig 2C, 3C):** We would like to clarify the takeaway messages from both these figures, and want to emphasize that they do not suggest that LDS models outperform CTDS. Fig 2C shows a comparison between choice accuracy as decoded from CTDS and LDS models. Indeed, both models perform equally well, however CTDS provides greater interpretability by allowing us to characterize the information encoded by the different cell types. Furthermore, CTDS has higher test-LL which means it in fact captures neural activity __better__ than LDS. More importantly, in Fig 3C (left panel), we show the classification error when perturbing FOF during either half of a trial. Experimental perturbation results from rats show that this error is low during early-half perturbations, but high during late-half perturbations (see Fig 3C right panel). CTDS is indeed able to capture this phenomenon due to the distinction between E and I latents, while a regular LDS does __not__ recapitulate these findings. The fact that the “choice error” bar for the LDS model is lower than that of the CTDS model in Fig 3C does not mean that the LDS model is performing better — in fact, the opposite is true, since the point of this figure is to show that CTDS model captures the qualitative pattern of errors found in real behavior experiments (from Hanks et al. 
2015, shown in the right panel of Fig 3C) better than LDS. We apologize for the confusion on this point, and we will rewrite the figure caption and text to make this clear. This is a key contribution of our work, so we hope this alleviates the reviewer's concern. 2. **Comparisons to nonlinear RNNs, incorporating non-linearity:** Our rationale for showing an equivalence between CTDS and linear RNNs was to illustrate that CTDS acts as a bridge between mechanistic and descriptive models. That is, low-rank linear E-I RNNs can be exactly transformed to a CTDS model. We agree with the reviewer, however, that non-linear RNNs are more expressive. In future work we would certainly like to extend CTDS to incorporate nonlinearities, which we agree would make it both more expressive and more biologically plausible; we will certainly add this point to the Discussion. However, we feel that the idea of incorporating cell types into LDS models, along with the empirical results on perturbation experiments (and the mathematical connection to linear RNNs), nevertheless makes for a worthwhile contribution in its own right, which we hope the reviewer will agree with. 3. **Evaluating the robustness of the model using incorrect cell-type information:** This is a great question, and we thank the reviewer for bringing this up. As the reviewer points out, it may be difficult to obtain accurate cell-type information. Sec 5 of our paper, which focuses on inferring cell-type identities, gets at this question: we mask the identities of up to 50% of neurons in our dataset, and fit CTDS to these datasets while also inferring neuron identity. As shown in Fig 4, CTDS is able to infer neuron identity significantly above chance for up to 50% of neurons.
To answer the reviewer's question more directly, we have also attached a figure showing test log-likelihood of CTDS fitted to datasets with varying numbers of masked neurons, as compared to when we know all cell identities accurately (in the pdf attached with the author rebuttal). As we can see, while test LL expectedly falls off as the percentage of masked neurons increases, it is robust when masking up to 20% of neurons. We will add this new result to our revised manuscript. 4. **Relevant literature on inferring cell classes:** Thank you for pointing us to these papers, we will be sure to cite and discuss them in the updated version of the manuscript. 5. **Adding more fine-grained cell information:** This is an important suggestion. Please see the response to all reviewers for our comments on this. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their clarifications on the comparisons between LDS and CTDS; they address my concerns, and I have raised my score accordingly. --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging our response.
Summary: This work describes how a latent dynamical systems (LDS) model of neural activity can incorporate distinct excitatory (E) and inhibitory (I) latents. Doing so requires being careful to maintain sign constraints (Dale's law) in the transition matrix defining latent dynamics as well as the emission matrix defining how recorded neural activity is a projection of the underlying latents. This model can also be extended to a multi-area variant, enabling separate interactions between and within areas. The model, named a "cell-type dynamical system" (CTDS), can be fit to neural activity using an expectation-maximization inference procedure. The model is extensively evaluated, and shown to be better than a vanilla LDS. On Neuropixels data from a rodent auditory decision-making task, the model predicts neural activity and behavioral choice better. Simulated optogenetic perturbations on the model better predict behavioral effects of real perturbations. Finally, the model can be used to infer putative E/I identities of recorded neurons. Strengths: **Significance**: Latent dynamical systems are a widespread model used to explain neural activity, and this work takes the necessary step of incorporating E/I information to make these models more biologically realistic and closer to a mechanistic model. As such, this work tackles an important problem for the neuroscience community. The model is also simple and elegant, which should help its adoption. **Novelty**: The model seems novel enough, though sign constraints are a common feature of computational neuroscience models, so it would be surprising if such an approach hadn't been explored before. (Can the authors better contextualize this in the Introduction?) Nevertheless, even if that were the case, such convergence is appreciated.
**Technical Quality**: The work is technically solid, with an impressively thorough evaluation on simulated data / RNN theory, predictions for neural/behavioral data, perturbations for behavior effects, and inference of cell types. **Presentation Quality**: The writing, the mathematical notation, and the figures are all very well done. Weaknesses: 1. **Cell types**: The name "cell-type dynamical systems" seems to be overselling a bit. It could more accurately be called "sign-constrained latent dynamical systems" or "latent dynamical systems with Dale's Law". Of course, the notion of "cell types" (e.g. Pyramidal, SST, PV, VIP, Purkinje) includes the excitatory/inhibitory distinction, but this is just one of many functional properties (morphology, intrinsic dynamics, stereotyped connectivity, plasticity rules, etc.) that are widely accepted as important to the distinction of cell types. Crucially, while it's relatively straightforward to incorporate sign constraints into the LDS framework as demonstrated in this work, it is not obvious to me how to incorporate these other properties that characterize cell types. If the authors feel that CTDS can indeed easily capture such properties, I would be interested to see a discussion here and in the paper about how such extensions can be done. 2. **Limitations**: I didn't see limitations in the Discussion section as mentioned in the Checklist. Please include a separate subsection to discuss this explicitly. In particular, I'd be interested to understand: (a) How the dimensionalities of the latent population are chosen for different tasks, (b) How this framework can scale to more difficult and naturalistic tasks than 2AFC. 3. **Presentation tweaks**: A couple of minor revisions to improve the manuscript. Equations 1 and 3 should use a similar notation/definition for the error term. Figures should use the same hues/shades of red and blue to denote excitatory and inhibitory data.
Technical Quality: 4 Clarity: 4 Questions for Authors: See above. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: I didn't see limitations in the Discussion section as mentioned in the Checklist. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
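The Dale's-law sign constraints described in the summary above (non-negative outgoing weights from E latents, non-positive from I latents) are easy to state explicitly. A minimal sketch, with sizes and scales chosen arbitrarily for illustration rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def dale_dynamics(n_e, n_i, scale=0.2):
    """Random latent transition matrix obeying Dale's law: columns driven by
    E latents are non-negative, columns driven by I latents are non-positive."""
    n = n_e + n_i
    A = scale * np.abs(rng.standard_normal((n, n)))
    A[:, n_e:] *= -1.0  # flip the sign of the inhibitory latents' outgoing weights
    return A

def simulate_latents(A, T=100, noise=0.05):
    """Simulate x_t = A x_{t-1} + w_t, the latent dynamics of a linear system."""
    x = rng.standard_normal(A.shape[0])
    traj = []
    for _ in range(T):
        x = A @ x + noise * rng.standard_normal(A.shape[0])
        traj.append(x.copy())
    return np.array(traj)

A = dale_dynamics(n_e=3, n_i=2)
traj = simulate_latents(A)
```

Per the summary, the emission matrix carries analogous sign constraints in the full model; only the dynamics are shown here.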
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed and insightful review. We are grateful that the reviewer found our work to be novel, technically solid, and of significance to the neuroscience community. Below are our responses to the reviewer's comments and questions: 1. **Relevant literature on sign constraints in neuroscience models**: We agree with the reviewer that a large number of previous models have sought to incorporate Dale's law and/or sign-constrained weights. However, we would like to point out that past works have focused on incorporating Dale's law in recurrent neural network models (RNNs), but NOT (as far as we are aware) into latent dynamical system models. For example, [Fisher et al. 2013, Haber & Schneidman 2022] have used sign-constrained RNNs to study properties of neural circuits, and [O'Shea & Duncker et al., 2022] have used them to study perturbation experiments. We have cited these works in Sec 3 of our paper, but will also mention them in the introduction for better contextualization per the reviewer's suggestion. However, we are not aware of similar literature in the context of latent dynamical systems. Latent dynamical systems have typically focused on descriptive representations of neural activity, while sidestepping mechanistic properties of the brain. Our work thus bridges this gap by enforcing mechanistic constraints on linear dynamical systems. We thank the reviewer for bringing this up, and we will make this clear in the introduction of our revised manuscript. 2. **Extension to other cell types, model nomenclature:** We agree with the reviewer that here we only focus on E and I cell types, and have not discussed more fine-grained cell identities. However, we feel that our framework can easily incorporate multiple cell types within a single population or across multiple populations (as discussed in the example case of PV and SOM inhibitory neurons in the response to all reviewers).
We apologize for not making this generality clear in our original paper, and will revise both the Introduction and Discussion sections to discuss specific ways in which CTDS can be applied to multiple cell types (and not just E and I). 3. **Limitations section:** Thanks for pointing this out. We will add an explicit limitations section in the manuscript discussing the choice of latent dimension, and scaling to other tasks. Currently, we choose latent space dimensionality based on test log-likelihood as well as for ease of visualization / interpretation. However, our results qualitatively hold for a wide range of latent dimensions (we will add this in the appendix). In terms of applying the model to more naturalistic tasks beyond 2AFC, two main challenges would come up: (a) The stimulus could be high-dimensional: this would mean learning a correspondingly higher-dimensional B matrix in our current CTDS setup. Alternatively, we could also learn a lower-dimensional encoding of the stimulus, depending on the specifics of the task. (b) For naturalistic tasks spanning larger timescales, a linear model might not be sufficient. In such cases, our approach can be extended to incorporate sign constraints in a non-linear setup such as a switching linear dynamical system. 4. **Presentation tweaks**: Thank you for pointing these out, we have fixed these now. --- Rebuttal 2: Comment: Thanks for your comments and manuscript improvements. I still feel that the nomenclature of "cell-types" is overselling a bit. The authors' global response points to PV and SOM as distinct types with specific connectivity/signs that future work could model. While connectivity and sign are indeed part of distinguishing these cell types, other properties are also important (e.g. dynamics of fast-spiking in PV vs. low-threshold spiking in SOM).
As noted in my review, my concern was not that distinct connectivity/signs couldn't be represented in this framework (they can, as demonstrated in the model and multi-area variant). My concern was that other important functional properties that are widely accepted as important to the distinction of cell types (especially morphology and intrinsic dynamics) can't be trivially incorporated. So, the contribution here really is "sign-constrained latents". If renaming isn't desirable at this stage, then perhaps a discussion in the paper can highlight that this research direction is still wide open. Overall, I am excited that this work explores how to make LDS models more mechanistic by incorporating biological properties, a contribution the authors have reiterated in their response. I am happy to maintain my score to accept. --- Rebuttal Comment 2.1: Comment: Thank you for your response, and for your insightful suggestions. We will certainly add a discussion about the limitations of our current approach in terms of capturing properties of different cell types. We agree with the reviewer that there is room for future work in this space, at the same time we also think that our model provides a valuable framework to disentangle roles of different cell types. Thanks again for your encouragement!
null
null
Rebuttal 1: Rebuttal: We thank the reviewers for their positive assessment of our work, and for their detailed comments and suggestions. We are delighted that the reviewers found our work to be well-written, of significance to the community, and our experiments to be well-designed and technically solid. We thank reviewers yzj8 and upoR for placing our work above the acceptance threshold. We also thank reviewer KBww for their constructive feedback; we have done our best to address their concerns in our detailed rebuttal below, and would humbly request that they consider raising their score. We will discuss a general point mentioned by all reviewers first, and then address reviewer-specific comments. **Incorporating additional cell types beyond E and I**: As the reviewers have pointed out, our paper focuses primarily on populations with two cell classes: excitatory and inhibitory neurons. Indeed, neural populations can be divided into more fine-grained cell types, and we intend for our model to be applicable to these cell types. For example, for a population with two distinct classes of inhibitory neurons (e.g., PV and SOM), we could use the CTDS modeling framework to incorporate distinct latent variables for each of these two populations, and the dynamics matrix could be structured to assume particular forms of connectivity between these populations as determined by anatomy (e.g., that both PV and SOM cell types inhibit excitatory neurons, and that SOM neurons inhibit PV, but not vice versa). However, we realize that we did not make this point clear in our original submission, and we will clarify the generality of our framework in the Introduction and Discussion sections of the revised manuscript. Note that we have already considered a case with two different regions, where the projection from one region to the other is purely excitatory, whereas the other allows connections of both signs (once again, as suggested by anatomy).
We therefore prefer to keep the name “Cell Type Dynamical Systems” for our framework, although we are open to additional suggestions if the reviewers feel strongly. We hope that future work can use our approach to disentangle information encoded by a range of different cell types. Pdf: /pdf/09ed95fadfe2a3d3c66a86652d87b251e042d3ee.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
RL on Incorrect Synthetic Data Scales the Efficiency of LLM Math Reasoning by Eight-Fold
Accept (poster)
Summary: There have been previous attempts at doing finetuning on positive synthetic examples to improve LM reasoning, but the performance gains from these attempts are generally quite limited (and possibly even negative). In this paper, the authors take a different approach which also accounts for negative examples and "critical" intermediate steps, improving upon previous approaches. Strengths: I was really impressed by this paper; at least for me, it contains quite a lot of insights, and is substantially better than most papers on synthetic data that I've seen in the past. - I thought the idea of using negative data was fairly natural but also good, and I'm glad to see it implemented in practice - I found the proposed link to model collapse really interesting; this collapse allegedly can occur if a model generating synthetic data has memorized the training data using a spurious feature. - I really liked the idea of looking at "critical steps", and I thought the approach of relating this to credit assignment was impressive. More generally, I liked that the authors really made clear attempts to both quantify and explain their empirical observations. For example, I appreciated the section comparing the sample efficiency of self-generated and GPT-4/Gemini 1.5 Pro synthetic data, which does both. Weaknesses: The biggest concrete thing that I felt was missing was a discussion of computational costs. I think this is really important because I'd be interested in the scalability of approaches like the ones outlined in the paper. I think having some comparison of the overall FLOPs required for finetuning (including the FLOPs for synthetic data generation) would be very helpful. Another weakness is that in my understanding, the main approaches in the paper apply most directly to mathematics, since identifying critical steps involves checking results based on MC rollouts.
There’s still a question about whether or not similar approaches can be applied to LMs in domains where outputs are harder to “verify” – though to be clear, I think it’s entirely reasonable that the authors focus on math in the context of this paper. Lines 197-199: This compares the scaling exponent for data for the MATH and GSM8K datasets with the exponent from the Chinchilla paper (on MassiveText). How do we know that this implies improving coverage over samples? Other possible reasons for the different exponent come to mind – e.g. different functional forms for the scaling law in finetuning and pretraining, use of different datasets, etc. As a minor comment, I think “It is predicted that we will run out of high-quality internet data by 2026” should be modified to be about human-generated public text data in particular, and it looks like the median year from source [53] is 2028 rather than 2026. Technical Quality: 3 Clarity: 3 Questions for Authors: - Could the authors please provide some information about the computational costs of their approach? (perhaps comparing this for SFT, RFT, and per-step DPO) - Clarification for lines 197-199: How do we know that this implies improving coverage over samples, rather than the other reasons I mentioned? - Clarification for lines 203-204: Is it correct to interpret this as saying that “around 4% of sampled outputs from the SFT model are used for RFT”? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: I think the authors did a fair job discussing the limitations of their work. One minor weakness is that the paper is framed around when synthetic data improves LM reasoning, but the investigation itself is more narrowly focused on mathematics. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the feedback and for the positive assessment of our paper! We are grateful to you for the comments and kind words regarding the contribution of our paper. To address the questions, we have added a discussion of computational costs below and answer the other questions raised. **Please let us know if your questions are addressed, and if so, we would be grateful if you might be willing to increase the score further.** ___ ## Scalability of approaches; comparison of the overall FLOP required for fine-tuning Thanks for the question! We will add a more formalized version of the discussion below to the final version of the paper. To understand the scalability of SFT, RFT, and per-step DPO, we make the following comparisons. First, SFT exclusively uses synthetic prompts and responses generated from more capable models. Assuming inference FLOPs are $2 \times N \times D$, this cost scales with the parameter size ($N$) of the more capable model used to generate the data for every new synthetic question, whereas RFT and per-step DPO only require running more rollouts with a much smaller 7B model for fewer questions, not incurring the inference FLOPs of the more capable model. Thus in a FLOPs-matched comparison, RFT / per-step DPO should have an even bigger edge over SFT. Now we perform a comparison of FLOPs / compute costs for RFT and per-step DPO, for a fixed number of synthetic prompts. We can break the total FLOPs into two parts: **(a)** inference FLOPs needed to generate data from the SFT policy for running RFT and per-step DPO, and **(b)** training FLOPs for RFT and per-step DPO. **Regarding (b),** we train both RFT and per-step DPO for an equal number of steps, with an identical number of forward and backward passes through the model: more precisely, since the DPO loss utilizes two samples together, we run DPO with half the batch size to fit on the GPU memory. 
Put together, this should lead to an equal number of forward and backward passes for per-step DPO and RFT. The training FLOPs are typically given by $6 \times N \times D$, which should be the same for both RFT and per-step DPO. **Regarding (a),** we compare the number of samples that need to be drawn from the SFT policy for both RFT and per-step DPO. For RFT, to collect enough positives from $\pi_\mathrm{sft}$, we draw 100 samples per question and filter for positive ones. For DPO, if the accuracy of $\pi_\mathrm{sft}$ is $p$, then with high probability, identifying a single positive and negative sample for a prompt takes $\approx \max(1/p, 1/(1-p))$ samples. Now, for computing advantage estimates of each step in the negative response, we set the maximum number of steps per generation as 10 and sample 5 MC rollouts conditioned on each step. Thus, in total per-step DPO requires $\approx \max(1/p, 1/(1-p)) + 50$ samples per question, which is smaller than the $100$ for RFT, when we plug in $p \approx 0.4$ and $p \approx 0.7$ for MATH and GSM8K respectively (this is the accuracy of the SFT model in our experiments). **Thus, per-step DPO is more computationally efficient than RFT in our experiments.** ___ ## Clarification for lines 197-199: How do we know that this implies improving coverage over samples, rather than the other reasons I mentioned? Thanks for bringing this up! Indeed, the difference in scaling law exponents can come from a different functional form of the scaling law. We originally wrote this statement because the approx. exponent of the scaling law of Llama2-70B on GSM8K (derived from training on 1, ½, ¼ subsets of only real GSM8K training data) is -0.3 (following the result from Yuan et al. [68]) and -0.105 for DeepSeek-7B on MATH (see below for the result), whereas we observed -0.15 with synthetic data on GSM8K and -0.05 on MATH, which is substantially smaller in magnitude. 
Since the only difference is in the datasets (at least for MATH), we attributed this to the coverage of samples. That said, you are right that a comparison with pre-training scaling laws may not be correct here, so we will update this line to note the difference against training models on real math data, rather than contrasting it with pre-training scaling laws. | Training data fraction | 1 | 1/2 | 1/4 | 1/8 | 1/16 | 1/32 | |:----------------------:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:| | Test error | 0.624 | 0.671 | 0.742 | 0.788 | 0.832 | 0.905 | ___ ## The main approaches in the paper apply most directly to mathematics, since identifying critical steps involves checking results based on MC rollouts. We only study our approach on MATH and GSM8K due to computational constraints associated with running it on other domains, and are happy to change the title of the paper to include “math reasoning” instead of “reasoning” to clearly scope the contributions of the paper, if the reviewer thinks that would be more appropriate. That said, we do think that identifying critical steps using MC rollouts and per-step RL can also be used for coding problems, where each line or each function call can correspond to a step (e.g., see StepCoder (Ma et al. 2023), which runs a similar style of MC rollouts to collect data for training process reward models on coding problems). We will add a discussion of this in the paper. ___ ## Clarification for lines 203-204: Yes, your interpretation is correct in that only 4 out of the 100 sampled responses are used for training with RFT, following the procedure in Yuan et al. 2023. ___ ## Typo / minor concern: “It is predicted that we will run out of high-quality internet data by 2026 Thanks for catching the typo! We will update the statement to reflect 2028, the median year in that reference and use the term “human-generated public text data” instead of internet data to address the concern. 
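For reference, the $\approx -0.105$ exponent quoted above for DeepSeek-7B on MATH can be recovered from the test-error table with a simple log-log least-squares fit. A minimal sketch (the variable names are ours; the data fractions and test errors are taken from the table):

```python
import math

# Test error vs. fraction of real MATH training data (from the table above)
fractions = [1, 1/2, 1/4, 1/8, 1/16, 1/32]
test_error = [0.624, 0.671, 0.742, 0.788, 0.832, 0.905]

# Power-law scaling error ~ C * fraction^alpha means alpha is the
# slope of an ordinary least-squares line in log-log space.
xs = [math.log(f) for f in fractions]
ys = [math.log(e) for e in test_error]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
alpha = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
print(round(alpha, 3))  # ≈ -0.106, consistent with the -0.105 quoted above
```

Since the fit uses relative data fractions, rescaling by the absolute dataset size only shifts the intercept and leaves the exponent unchanged.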
--- Rebuttal Comment 1.1: Comment: Thanks for the really comprehensive response! I'll keep my official score at 8 - while I think the work is excellent, the description for a score of 9 requires the work to be "groundbreaking", which I think is a very high bar. However, I should emphasise that my main concerns have been addressed, and if it were possible I'd raise my score by 0.5 points to reflect this. --- Reply to Comment 1.1.1: Title: Thank you for your appreciation Comment: Thank you immensely for your kind appreciation and very positive assessment of our work! To further improve our paper, we will incorporate the clarifications and discussion we provide in our responses. -Authors
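The per-question sampling budgets in the computational-cost exchange above reduce to simple arithmetic. A minimal sketch (the helper name is ours; the SFT accuracies $p$, the 5 rollouts over at most 10 steps, and the 100-sample RFT budget are taken from the rebuttal):

```python
# Per-question sample budgets from the cost discussion above.
# per_step_dpo_samples is a hypothetical helper implementing the rebuttal's
# estimate: max(1/p, 1/(1-p)) draws to find one positive and one negative
# trace, plus mc_rollouts advantage-estimation samples for each of up to
# max_steps steps in the negative trace.

def per_step_dpo_samples(p, mc_rollouts=5, max_steps=10):
    return max(1 / p, 1 / (1 - p)) + mc_rollouts * max_steps

RFT_SAMPLES = 100  # RFT draws 100 samples per question and filters positives
for dataset, p in [("MATH", 0.4), ("GSM8K", 0.7)]:
    budget = per_step_dpo_samples(p)
    # ~52.5 for MATH and ~53.3 for GSM8K, both well under 100
    print(f"{dataset}: per-step DPO ~{budget:.1f} samples/question vs {RFT_SAMPLES} for RFT")
```

This is only the per-question inference budget; as the rebuttal notes, training FLOPs are matched across the methods.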
Summary: The paper investigates how to effectively leverage synthetic data to improve reasoning capability in the GSM8K and MATH datasets. The authors identify that sampling correct synthetic data from a fine-tuned model is more sample-efficient but comes with the risk of overfitting artifacts in the synthetic data. Instead, they propose a per-step DPO to leverage negative synthetic data to enhance sample efficiency. Strengths: 1. Synthetic data for reasoning is a highly important topic. 2. The idea is straightforward and effective. 3. The experiments are extensive, with comprehensive ablation studies. Weaknesses: 1. Missing Relevant References: The paper lacks references to relevant literature. Scaling laws on how synthetic data plateaus or performs slower than real data during pretraining have been observed and analyzed in several studies [1][2]. Additionally, the observation that self-generated data is more sample-efficient has been noted in research on images and machine translation [3][4][5]. 2. Unclear Per-Step DPO Definition: The per-step DPO is not fully defined. In Line 180, how is the “first pit” $\hat{y_c}$ determined? Does it involve looping over all intermediate steps from the beginning to find the smallest $c$? It would be helpful for the authors to provide an algorithmic outline in the appendix. In Theorem 6.1, the per-step DPO is trained with pairs (x, $[y_{1:i}, +y_{i+1}]$, $[y_{1:i}, -y_{i+1}]$), whereas Line 180 defines it as using $+y$ instead of $[y_{1:i}, +y_{i+1}]$. Which version is implemented in the experiments, and why? 3. Claims on Sample Efficiency: The improved performance on filtered $\mathcal{D}_{syn}$ from the fine-tuned model requires sampling 100 samples per question, and the per-step DPO samples require Monte Carlo rollouts for intermediate steps. This adds significant computational overhead during inference, much more than during the fine-tuning stage. 
A proper comparison of sample efficiency should include the inference compute spent generating these examples. Given the same computational resources, would DPO perform better as it does not require identifying the “first pit” while utilizing more synthetic data? 4. Explanation of Figure 7: The explanation of Figure 7 is unclear. How are the average Q-values calculated? What if a problem requires fewer than 8 steps in GSM8K? Will the average for step 8 be biased toward harder questions? ### Reference [1] Dohmatob, Elvis, et al. "A tale of tails: Model collapse as a change of scaling laws." arXiv preprint arXiv:2402.07043 (2024). [2] Fan, Lijie, et al. "Scaling laws of synthetic images for model training... for now." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. [3] Mobahi, Hossein, Mehrdad Farajtabar, and Peter Bartlett. "Self-distillation amplifies regularization in hilbert space." Advances in Neural Information Processing Systems 33 (2020): 3351-3361. [4] Gu, Jiatao, et al. "Non-autoregressive neural machine translation." arXiv preprint arXiv:1711.02281 (2017). [5] Zhou, Chunting, Graham Neubig, and Jiatao Gu. "Understanding knowledge distillation in non-autoregressive machine translation." arXiv preprint arXiv:1911.02727 (2019). Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Line 138: Why can we assume that the answers generated via this process are accurate? Is the model prompted with the correct solution? 2. Line 166: The sentence is not well-written. 3. DPO Suitability for Math: DPO is derived from the Bradley-Terry model, where preference follows a probabilistic distribution with softmax over reward values. For mathematical generation, however, correct answers should always be preferred over incorrect ones with probability 1. The framework, especially naive DPO, seems unsuitable for the math set, which might explain its weak performance. 
The KTO method, cited in the paper, proposes a different RLHF method that considers positive and negative distributions directly. How would the KTO method perform? Would the per-step version of KTO also outperform naive KTO? 4. Line 302: Why are the advantage values always non-positive? 5. Data Generation Protocols: It would be more self-contained to include critical data generation protocols in the paper rather than referencing “Training data for DPO is generated in the procedure outlined in [23]” in Appendix H at line 800. The reviewer will consider increasing the score if some of the weaknesses and questions are addressed. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the feedback. To address the concerns, we add new results applying per-step RL to KTO and RFT as well, and compare computational costs. We also clarify the per-step DPO algorithm, advantage functions and Fig 7. **Please let us know if these responses address your concerns, and if so, we would be grateful if you are willing to raise your score.** ___ ## [New exp.] Per-step DPO definition, KTO results (W2, Q3) As we understand, there are two concerns: - (1) does per-step DPO work well because DPO is a weak baseline, and would the benefits of per-step RL translate to other RL methods?; - (2) the exact formulation of per-step DPO. To answer (1), we show that per-step variants of other RL methods also improve over them. We add new results extending RFT and KTO to their corresponding per-step variants using both positive and negative data. We find that their respective per-step variants outperform both RFT (Fig 1 in PDF) and KTO (Fig 4 in PDF) on both GSM8K and MATH. For RFT, the per-step variant exactly optimizes a sampled estimate of Thm 6.1. In Fig 4 (in PDF), we find that while KTO is slightly worse than DPO, the per-step version of KTO consistently outperforms KTO. That said, we still emphasize that **our point is not to propose per-step DPO as an algorithm, but to show that our conceptual model of per-step advantages and negative data can enhance the efficiency of synthetic data.** We chose DPO as a base algorithm, but this is not the only choice. Regarding (2), we clarify that our implementation of per-step DPO is the one in L180. To avoid confusion, we present it as an algorithm box (Panel 5 in 1-page PDF). While this does deviate a little from Thm 6.1, it allowed us to build on Hwang et al. 2024’s codebase while still conforming to our conceptual model. In our implementation, we generated preference pairs using only negative responses. 
When we also consider low-advantage steps in positive data to construct preference pairs (closer to the formulation in Thm 6.1), we see a further improvement of 1.5-2x (Panel 5). We also note that **our formulation from Thm 6.1 has been successfully utilized by [a paper that appeared on arXiv after the NeurIPS deadline](https://arxiv.org/pdf/2406.18629), confirming the efficacy of our framework as well.** ___ ## Claims on sample efficiency, cost of per-step DPO vs RFT (W3) To clarify, when we say per-step DPO is more efficient than RFT/SFT, we mean that per-step DPO achieves better accuracy _for the same number of synthetic prompts_, vs SFT or RFT. We consider only the number of synthetic prompts here as these require querying proprietary models, which costs inference FLOPs, and hence is more expensive than sampling the SFT model. However, for a fixed set of synthetic problems, we can compare the efficiency of per-step DPO & RFT, in terms of the number of samples from the SFT policy used. For RFT, to collect enough positives from SFT, we draw 100 samples per prompt & filter correct ones. For DPO, if the accuracy of SFT is $p$, then with high probability, identifying *one* positive and negative sample takes $\max(1/p, 1/(1-p))$ samples / prompt. For computing advantages in the negative response, we set the maximum number of steps per generation as 10 and sample 5 MC rollouts from each step. Thus, in total per-step DPO requires $\max(1/p, 1/(1-p)) + 5 \times 10$ samples per prompt, which is smaller than $100$ for RFT, when we plug in $p \approx 0.4$ and $p \approx 0.7$ for MATH and GSM8K respectively. **Thus, per-step DPO is more computationally efficient than RFT in our results.** **Please see global response for more details.** ___ ## Explanation of Fig. 7 In Fig. 7, our main claim is that the per-step approach of using positive and negative data uniformly improves Q-values across all steps compared to the SFT policy. 
This should be expected since advantage-weighted RL optimizes the RL expected reward objective. On the other hand, standard DPO, which does not maximize the per-step RL objective, does not improve Q-values over the SFT policy at earlier steps. The avg. Q-value at step $T$ is averaged over only those responses with $\geq T$ steps. We agree that later steps may only appear in harder problems & may have lower Q-values, but our goal is to compare per-step RL with the SFT policy (gray dashed line) *for the same step* and not to compare Q-values across different steps. ___ ## Why are advantages non-positive? In RL, advantages under the optimal policy are non-positive, since the advantage is the difference between the Q-value of an action and the maximum Q-value at that state, which is at most zero. In contrast, our definition in Eq. 3 relies on the choice of $\tilde{\pi}$, so you are correct that Eq. 3 may not always be non-positive for arbitrary $\tilde{\pi}$. However, when setting $\tilde{\pi} = \mathrm{BoK}(\pi_\mathrm{sft})$, and when $K$ is large, w.h.p. we would expect Eq. 3 to be non-positive since $\tilde{\pi}$ should improve over the choice of step $\hat{y}_{i-1}$ prescribed by the SFT policy. This may not be true when $K$ is small or when sampling error is present, and we will clarify this in the paper. ___ ## Clarifications about Line 138. In general, we cannot guarantee that generated data will be correct, but we follow the recipe of Li et al. 2024 that asks the model to also verify its responses. Li et al. 2024 and Liu et al. 2023 follow the same data protocol and show that scaling up synthetic data this way can improve 7B models to beyond GPT-4 performance, indicating that data quality may not be a major concern. ___ ## Missing references. Thanks for pointing out these references; we will cite & discuss them. We believe our contributions go beyond these works: e.g., while [1, 2] find slower convergence with synthetic data, and [3, 4, 5] discuss efficiency of self-gen. 
data, none of them study how to use negative data or propose a conceptual framework to understand it. None of these focus on reasoning with modern LLMs. --- Rebuttal 2: Comment: I appreciate the response and the new experiments on KTO. However, I am not entirely convinced by the claim that "*per-step DPO is more computationally efficient than RFT in our results*", as the authors draw 100 samples per prompt; a variant of RFT that draws 10 samples per prompt while creating more prompts would introduce some variance, but could potentially surpass the original RFT with 100 samples per prompt. The new experiments on the overfitting of RFT, however, demonstrate that given abundant compute resources, per-step DPO clearly outperforms. Consequently, I have increased my score. --- Rebuttal 3: Title: Clarification on computational costs of per-step DPO and RFT Comment: Thank you for your response and for increasing the score! We are not sure if we fully follow your comment on the “computational efficiency” of RFT vs. per-step DPO, and might be misunderstanding your proposed variant of RFT and the precise notion of “variance” being referred to. Therefore we request you to kindly correct us if the following response does not address your comment. **Please let us know.** _We begin by reiterating that the goal of our work is not to propose “per-step DPO” as a novel algorithm_, but more to highlight our conceptual framework of positive and negative synthetic data, where credit assignment with negative data can address issues caused by overfitting on spurious steps in positive synthetic data. This general point is also evidenced in our other comparisons: per-step RFT is better than RFT, and per-step KTO is better than KTO. 
**Sampling new prompts is substantially more costly in our setting than sampling new responses.** We apologize if this was not clear, but our claim “per-step DPO is more computationally efficient than RFT in our results” is only meant to apply to the setting with the **same set of synthetic prompts**. This is because obtaining new synthetic prompts requires querying more-capable models (like GPT-4, Gemini 1.5 Pro) in all of our experiments, which incurs a computational cost many times that of the 7B models we study (these models are more capable than Llama2-70B, so it is safe to say that their sizes are at least 10x larger, which means at least 10x the computational cost for sampling more questions). While we agree that in some use cases, where sampling new prompts with a 7B model might be sufficient, there can be variants of RFT with more prompts (and fewer solutions per prompt) that could be more computationally efficient, this is not true in our setting, and we are not aware of any work that samples synthetic math questions (of the same quality as GPT/Gemini) with smaller-scale models. ___ **Empirical justification for the cost comparison given the same set of prompts:** Within the context of the same set of prompts, while more samples for RFT (e.g., 100) should reduce variance, they might lead to overfitting, as we have shown in Section 5. Our results in Figure 1c (1-page PDF) compare the best configuration of RFT (which uses 4 diverse and correct samples out of the 100 samples) with per-step DPO. Note that these 4 out of 100 samples are selected based on the techniques prescribed by Yuan et al. [68] to obtain the most competitive RFT performance. For comparison, in our experiments on the 8k-sized synthetic data, which has 8k prompts, we got the best performance from RFT by sampling 100 responses per prompt (a total of **8k * 100 = 800k**) and selecting 4 diverse and correct responses per prompt from them. 
On the other hand, for per-step DPO we only need one positive and one negative trace. Even accounting for the advantage estimation run on every step in the negative trace, we only needed to draw **578k samples** from the SFT policy in our experiments (indicating that our calculation in the previous response was already overestimating the _actual_ cost of per-step DPO). ___ **Theoretical justification of the cost comparison given the same set of prompts:** In general, when $p$ is the accuracy of the SFT policy, the number of samples needed to draw a single correct sample follows a geometric distribution with mean $1/p$. Note that we actually need $k$ correct samples, far more than a single correct sample per prompt, to ensure there is enough diversity of solutions per prompt in RFT data (from these $k$ correct ones we selected 4 correct and diverse ones, as done in Yuan et al. [68]). From tail bounds on the geometric distribution, to obtain at least $k$ positive samples per prompt, if we sample $\frac{k}{p} \cdot (1+\sqrt{\log(k/\delta)})$ samples from the SFT policy, then with probability at least $1- \delta$, we will have $k$ positive traces. On the other hand, per-step DPO requires only one positive and one negative sample, obtaining which requires at most $\frac{2}{p} \cdot (1+ \sqrt{\log(2/\delta)})$ samples with probability $\geq 1 - \delta$, when $p < 0.5$. For per-step DPO, we additionally require Monte Carlo advantage estimation, which requires sampling $M$ samples per step (with at most $L$ steps in the trace). Thus, the total number of samples (from the SFT policy) needed to run per-step DPO is at most $\frac{2}{p} \cdot (1+ \sqrt{\log(2/\delta)}) + M \times L$. Note that for RFT we need about $k/p$, and since we follow Yuan et al. [68], we get the best performance of RFT when $k \approx 40$ at least, which puts the $k/p$ term in RFT around **70 to 100** for MATH and GSM8K. 
On the other hand, the $M \times L$ term in per-step DPO is **at most $50$** for both of these datasets in the experiments. Thus, even in the worst case, per-step DPO (which achieves 8x scaling) requires fewer samples than the best-performing variant of RFT (which achieves 2x scaling).
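The geometric-distribution argument above is easy to check numerically. Below is a small Monte Carlo sketch (ours, not from the rebuttal) under the rebuttal's MATH numbers ($p = 0.4$, $k = 40$, $M = 5$, $L = 10$); it treats the searches for one positive and one negative trace as independent streams, matching the $\max(1/p, 1/(1-p))$ heuristic:

```python
import random

# Monte Carlo check of the per-prompt sample budgets argued above:
# RFT needs roughly k/p draws to collect k correct traces, while per-step
# DPO needs one positive and one negative trace plus M x L rollouts for
# advantage estimation on the negative trace.

def draws_until(successes_needed, success_prob, rng):
    """Number of draws from a Bernoulli(success_prob) stream until
    successes_needed successes are seen (negative-binomial sample)."""
    draws = successes = 0
    while successes < successes_needed:
        draws += 1
        successes += rng.random() < success_prob
    return draws

rng = random.Random(0)
p, k, M, L, trials = 0.4, 40, 5, 10, 2000

rft = sum(draws_until(k, p, rng) for _ in range(trials)) / trials
dpo = M * L + sum(max(draws_until(1, p, rng), draws_until(1, 1 - p, rng))
                  for _ in range(trials)) / trials
# rft comes out near k/p = 100, dpo near 53
print(f"RFT ~{rft:.0f} draws/prompt, per-step DPO ~{dpo:.0f} draws/prompt")
```

The simulated averages land close to the closed-form estimates ($k/p = 100$ for RFT versus roughly $3 + 50$ for per-step DPO), supporting the claim that even the worst-case per-step DPO budget is well below RFT's.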
Summary: This paper investigates the impact of synthetic data on improving the reasoning capabilities of large language models (LLMs). The authors conduct an empirical study followed by a theoretical formalization to understand when and how synthetic data helps or hurts model performance on reasoning tasks. Key findings include: (1) Finetuning on synthetic correct/positive problem-solution pairs offers modest gains, but sampling more correct solutions from the finetuned learner doubles sample efficiency. (2) Training solely on model-generated positives can amplify spurious correlations, leading to flat or inverse scaling trends as data increases. (3) Utilizing negative responses (incorrect model-generated responses) alongside positives, with appropriate credit assignment for intermediate steps, yields consistent gains over positive-only data. (4) The proposed per-step negative data approach is equivalent to advantage-weighted reinforcement learning (RL) and helps unlearn spurious correlations. The authors demonstrate that their per-step negative data approach improves sample efficiency by 8x compared to standard finetuning on positives. They also provide theoretical and empirical analyses to explain why credit assignment improves model generalization. Strengths: - The work reveals important nuances in using synthetic data, particularly the value of leveraging negative examples and the importance of credit assignment for intermediate steps. - The findings offer concrete guidance for improving LLM training with synthetic data, potentially addressing data scarcity issues in language model development. - The authors conduct extensive experiments, including scaling studies and comparisons with various baselines, strengthening the validity of their claims. Weaknesses: - While the focus on reasoning tasks is well-justified, it's unclear how well the findings generalize to other types of language tasks or domains. 
- The study primarily uses 7B parameter models (DeepSeek-Math-7B and LLama2-7B). It would be valuable to see if the results hold for larger or smaller model sizes. - The paper doesn't discuss the computational costs of their approach, which could be a significant factor in its practical application, especially for larger models or datasets. - While the authors compare their method to several baselines, it would be beneficial to see comparisons with more recent techniques in synthetic data generation or model finetuning. - The study doesn't address potential long-term consequences of training on synthetic data, such as potential drift or accumulation of errors over multiple generations of synthetic data. Technical Quality: 2 Clarity: 2 Questions for Authors: None. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the feedback. To address your concerns, we clarify and add several new results to show how per-step RL on negative data can fix issues caused by accumulation of errors from training on multiple generations of positive synthetic data, and also respond to the other concerns on choice of tasks/model size, baselines, and their computational costs, and will add this discussion to the paper. **Please let us know if your concerns are addressed, and if so, we would be grateful if you are willing to raise your score.** ___ ## [New exp.] Model collapse from training on synthetic data. **As we already discussed in the submission, our study of negative synthetic data in Sec 6 precisely addresses the concern of amplifying model biases when scaling positive synthetic data.** For this, we introduce a conceptual framework that uses negative data to identify model biases (spurious/incorrect steps) and down-weights them appropriately with step-level advantage-weighted RL. Details below: We discuss spurious correlations in Section 5, where we discuss how naively scaling data by sampling more responses for the same set of synthetic questions leads to a drop in performance (Fig 2c). We explain this finding with an experiment where we finetune on positive data with spurious steps (Fig 4). Thus, as we scale synthetic data more spurious steps creep into the training data and corrupt final performance. To further show the amplification of model biases, we add a new experiment (Fig 3a in 1-page PDF) where we train on a large amount of self-generated RFT data (128k samples) on 8k/16k questions. Note the sharp increase in test error of RFT compared to training on just 10k or 20k RFT data points (1 or 2 responses per prompt). 
We also add a new result in Fig 1 in the PDF to show that per-step advantage filtering of this model-generated positive and negative data allows us to address this issue of amplification of spurious steps, leading to improved test error as we scale up the number of model samples. This shows that our conceptual framework helps address model biases amplified by training on positive data alone. We will add these to the paper. ___ ## Computational cost For a fixed size of synthetic dataset, the training FLOPs for all methods (SFT, RFT, and all variants of DPO) are identical. The main difference lies in the inference compute spent on collecting model-generated responses for RFT/DPO. For DPO, if the accuracy of $\pi_\mathrm{sft}$ is $p$, then with high probability, identifying a single positive and negative sample for a prompt takes $\max(1/p, 1/(1-p))$ samples. For per-step DPO, we additionally spend $5$ samples per step to estimate advantage, for a maximum of 10 steps per question. Finally, for RFT we collect positive training data by sampling 100 responses per question. Thus, in total, DPO requires $\max(1/p, 1/(1-p))$, and per-step DPO requires $\max(1/p, 1/(1-p)) + 50$ samples per question, both of which are smaller than $100$ for RFT, when we plug in $p \approx 0.4$ and $p \approx 0.7$ for MATH and GSM8K respectively. **Thus, per-step DPO is more computationally efficient than RFT in our experiments.** ___ ## Comparisons with more recent techniques in synthetic data generation or model finetuning Our technique for synthetic data generation for math follows the pipeline established in Li et al. [28], Yu et al. [61], Liu et al. [30], and Wang et al. [54]. These studies already compare various ways of prompting for new questions, and thus we rely on their findings to generate synthetic problems, but we study scaling behaviors on this data. 
**Crucially, we also study scaling for model-generated positive data and negative data, which these prior works don’t study.** While SFT does not have many variants, we already compare w/ two different DPO baselines (Fig 5c), to underline the importance of per-step advantage computation and step-level preference pairs. We also add a new result with per-step RFT and find it improves the efficiency of RFT (Fig 1 in 1-page PDF) while addressing spurious correlations. We also add a new result showing the efficacy of per-step KTO over naive KTO alone, further strengthening our claims. If the reviewer could elaborate on specific baselines or comparisons they would like to see, we will include them in the final version. ___ ## Justifying the choice of tasks/domains and model size used for experiments. Faced with computational constraints, at the beginning of the project, we had the choice of performing an in-depth analysis on one domain (math) or a shallow analysis over many domains. We chose the former to be able to provide detailed insight for future work on RL and synthetic data for math capabilities, which is an active area as well as to provide a roadmap for running similar analyses in other areas. Within math reasoning, our study is of a similar scope to Li et al. [28], Hwang et al. [23], etc. Finally, MATH and GSM8K datasets are the default standard for several papers. We agree that extending our analysis to other domains is interesting, and will add this as an avenue for future work. We also clarify that our conceptual framework of using positive and negative data for advantage estimation, spurious correlations in RFT, and per-step RL does not assume anything about the underlying task beyond the access to a final answer checker. Thus, our framework should be helpful for guiding algorithm design in other domains. 
**We also analyze how credit assignment disincentivizes memorization using a star-graph problem (Sec 6.3): while this problem is very different from the math tasks we study, we still draw conclusions that explain both of these phenomena.** **Regarding model size,** our experiments on the didactic setup use GPT-2, so we already show that our analysis holds for smaller model sizes and for reasoning tasks beyond math (e.g., graph search). Due to computational constraints, we cannot run SFT/DPO beyond 7B. --- Rebuttal Comment 1.1: Comment: Dear Reviewer, We apologize for bothering you, but since there is only one day left in the discussion period, we wanted to check in with you to see if our rebuttal addresses all outstanding concerns and to have a chance to address any new ones. Thanks, Authors
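The per-step advantage filtering described in this thread (drop responses whose worst-step advantage falls in the bottom half, as in Fig 1 of the rebuttal PDF) can be sketched as follows; the function name and toy numbers are illustrative, not taken from the paper's code:

```python
from statistics import median

def filter_by_min_advantage(responses):
    """Keep responses whose worst (minimum) per-step advantage is at or
    above the median of worst-step advantages across all responses.

    `responses` is a list of lists: one advantage estimate per step.
    Returns indices of the responses that survive the filter."""
    min_adv = [min(steps) for steps in responses]
    threshold = median(min_adv)
    return [i for i, a in enumerate(min_adv) if a >= threshold]

# Toy example: response 1 contains a likely-spurious step (advantage -0.8).
responses = [
    [0.6, 0.4, 0.5],    # all steps useful
    [0.7, -0.8, 0.6],   # one low-advantage (spurious) step
    [0.3, 0.2, 0.4],    # moderately useful steps
]
kept = filter_by_min_advantage(responses)  # -> [0, 2]
```

Filtering on the minimum step advantage, rather than the mean, targets exactly the failure mode discussed here: a single spurious step can poison an otherwise correct trace.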
Summary: This paper studies the effect of synthetic data on improving the reasoning abilities of LLMs. The authors compare multiple approaches including SFT, RFT, DPO, per-step DPO, etc., and characterize the model performance w.r.t. different scales of synthetic data under each training regime. Several practical findings about improving sample efficiency are presented, including leveraging the finetuned model to sample positive examples and utilizing per-step negative examples to alleviate the spurious correlations that might appear in the synthetic positive data. Strengths: - The paper studies an important problem. - The conceptual model of negative sample construction covered in Section 6.1 is an interesting read, as the authors formally express the synthetic data problem in the context of RL. Weaknesses: - It seems that the authors wanted to cram lots of conclusions into a single paper. While I appreciate the incentive, the paper would make a much more academic read if more experimental results were clearly presented in tables. Currently, it seems that there are few quantitative results in the paper. This also led to many specific concerns (see below). - In Lines 242-245, it reads `But, we find that even when RFT data is restricted to one solution per question, LLM trained on it outperforms SFT consistently by > 1%. Since verification is cheap, we can sample more solutions and also benefit from coverage.` Where is the evidence for this claim? In this claim, does the size of D_{syn} equal that of D_{SFT}^{+}? - I am confused when I compare Lines 224-244 and Lines 245-263. In Lines 224-244, the authors suggest that RFT outperforms SFT in terms of sample efficiency. But in Lines 245-263, the authors suggest that RFT data contain spurious correlations, and when manually raising the percentage of spurious correlations in RFT data, SFT would outperform RFT. 
Especially, in Lines 248-249, `This is unlike scaling of problems and solutions in SFT data (in Figure 2(a,b)).` Since RFT might even suffer from negative scaling because of spurious correlations, I don't think that RFT could be claimed to be `more sample-efficient`. - Also, about the artificial spurious correlation amplification experiments in Lines 257-263: Does Figure 4 show the experimental results (there's no reference in the paragraph)? Does the `D_{π}^{+} spurious` correspond to the construction of `for each question in D_{syn} we sample “spurious steps” from π_{sft}`? Does this "100% spurious correlation injection" represent the authentic "spurious steps" ratio in RFT data (and what is the authentic "spurious steps" ratio in RFT data)? Does D_{syn} contain no spurious correlation steps? As the RFT data scaling leads to negative performance, would RFT data with a lower percentage of spurious steps lead to less severe degradation? If so, can we just use the advantage defined in Eq.(3) as a stronger filtering rule and apply it to RFT data? A more in-depth quantitative study is required here, so that the importance of the use of negative examples in Section 6 is better revealed. - Is the `per-step DPO` method used in the paper the one proposed in the cited [23] paper? If so, is it also true that no new method is proposed in Section 6? Note that it is fine to focus on analysis instead of proposing new techniques, but the authors should make this clearer in the paper. It would also be helpful if the authors briefly introduced the per-step DPO method in a background/methodology section. - Are the advantage defined in Eq.(3) and the per-step DPO method applicable to other types of reasoning tasks in addition to MATH and GSM8K? For example, ARC requires LLM reasoning (LLMs often conduct this task by writing code). Are the methods and the analysis applicable to ARC? 
- Lines 343-344: `As such, we needed to tune β in Equation 1 for DPO but could not fully avoid performance degradation.` Where is the evidence for this claim? Experimental results with a sweep of β would be supportive. - Similarly, Lines 377-386 also require more experimental evidence. It would be helpful if quantitative observations were added to claims like `When the initial advantage of a spurious step is incorrectly over-estimated, negative data algorithms up-weight the likelihood further. This only leads to further memorization.`, `leads to test-time model collapse`, and `On the other hand, when \tilde{π} = BoK(π_{sft}) for a higher value of K, the Monte-Carlo advantage estimator has a lower variance (and error). This discussion also justifies the choice of K=5`. - The paper lacks a Conclusion section, which I believe is because of the length limit of a submitted paper. But it is still better to add such a section and properly refactor the previous contents. I encourage the authors to address my concerns, and I would consider raising my score provided that the issues I raised are resolved properly. Others: Line 322: "instantation" --> "instantiation" Technical Quality: 2 Clarity: 2 Questions for Authors: See the Weaknesses part. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: The authors have addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review! To address the concerns, we add many results for RFT, spurious correlations, per-step RL, and advantage estimation, which we believe improve the quality of the paper. We will use the 1 extra page in the final to incorporate them, along with a conclusion, and the clarifications shown below. **Please let us know if your concerns are addressed, and if so, we would be grateful if you are willing to raise your score.** ## [New exp.] L242-245 RFT data w/ 1 sample / problem; RFT is 2x as efficient as SFT, L224-244 vs. 245-263. We compare SFT & RFT when the RFT data is the same size as SFT (128k). For each prompt, we only include one response for RFT. On MATH we see: **44.09 (SFT) vs 45.17 (RFT)** and on GSM8K: **80.18 (SFT) vs 81.27 (RFT)**. As stated in L242-245, we observe a >1% gain. Our definition of efficiency compares performance from training on a fixed set of synthetic problems, while allowing any number of responses per problem. This is because synthetic questions are obtained by querying proprietary models, which is expensive in terms of both FLOPs and cost, and may require human verification in practice. In contrast, sampling responses from the SFT policy only requires running inference against a 7B model. Please see the global response for a comparison of costs between SFT, RFT, and DPO. Figs 2a,b show that for an appropriate number of self-generated samples per question, RFT matches the performance of SFT with 2x more questions, hence the 2x efficiency. Of course, as we scale up RFT data, we also amplify spurious correlations as discussed in L245-263 (and per-step filtering can fix this), but for the right number of samples per prompt, **RFT does as well as SFT with 2x data**. Thus, **L224-244 & 245-263 are not contradictory**: they apply to different settings with different #samples per prompt. ___ ## [New exp.] 
Per-step RFT weighted by advantage We ran an experiment with advantage filtering on all the steps present in **both** positive & negative data from the SFT policy and cloned the filtered data. This “*per-step RFT*” outperforms standard RFT (Fig 1 in 1-page PDF), indicating that training on useful steps from negative data can improve over training on positive data alone. While per-step RFT is worse than per-step DPO, we believe this further suggests that training against the low-advantage steps (which per-step RFT merely filters out) can yield additional gains. ___ ## Fig 4; Artificial spurious correlations. We will update the paper to refer to Fig. 4 in L257-263. As you noted, in these experiments, we artificially injected an arbitrary step from an incorrect response produced by the SFT policy into a positive solution trace. While this artificial injection does not necessarily reflect the proportion of spurious steps in $D_\text{syn}$, it still provides a controlled proof-of-concept to illustrate that RFT can amplify spurious steps when present, and perform worse than SFT as a result. To show the presence of spurious steps in $D_\text{syn}$, we sample 128k RFT data (16 responses per prompt), and find RFT to degrade drastically (Fig 3b in 1-page PDF). In both these cases, per-step RFT does not fall prey to spurious steps and consistently improves over SFT (Figs 1a,b, 3c in 1-page PDF). ___ ## [New exp.] Per-step DPO: algorithm box, comparison w/ Hwang et al. To avoid confusion, we provide an algorithm box for our per-step DPO in Panel 5 (PDF). As you note, our main focus is to analyze the importance of negative data and not to propose a new algorithm. For our experiments, we use the method from Hwang et al. [23], as this was more convenient for experimentation. 
That said, we have now run another version of per-step DPO (Panel 5 in 1-page PDF) derived from the advantage-weighted RL objective (Thm 6.1) and find that it improves efficiency further by 1.5-2x on MATH, supporting the efficacy of our framework. ___ ## [New exp.] Tuning $\beta$ for DPO (L343-344) We run DPO and per-step DPO with different $\beta$ on 128k problems (same setting as Fig 5c of the paper). Even for the best value of $\beta=0.2$ at 128k, DPO performs worse than per-step DPO and also worse than DPO on 8k problems. | $\beta$ | DPO | per-step DPO | |:-------:|:--------:|:-----------------:| | 0.05 | 35.58 | 45.11 | | 0.1 | 36.18 | 47.64 | | 0.2 | 36.21 | 46.52 | | 0.3 | 35.17 | 45.89 | | 0.5 | 34.49 | 45.93 | ___ ## [New exp.] L377-386: Concrete evidence **When adv. estimation error is high…memorization is amplified**: To support this, we add a new result on the didactic star graph. We train on negative data with per-step DPO, but instead of using 5 samples for advantage estimation (as in original Fig 8), we only use 1 sample (fewer samples ⇒ larger error), and find that per-step DPO no longer learns the generalizable feature, and instead only reduces training loss by memorizing the “hard/critical” token, with accuracy of per-step DPO around 50%. This underscores the impact of advantage estimation error (Fig 2a in PDF). **For a fixed sample size, advantage estimation error drops as we increase $K$.**: Based on the above observation that higher estimation error leads to memorization, we suggest computing advantages under a higher $K$. To show this, we add a new result plotting the variance of the advantages given by two independent runs of advantage estimation for each value of $K$, averaged over all steps in the SFT data (Fig 2b in PDF). As claimed, we find that $K=5$ has an estimation error that is less than 25% of the estimation error for $K=1$, with larger $K$ doing better. ___ ## Applicability to ARC. 
We believe that per-step RL should carry over to any task where we can break the response into steps, including code, where each line or function call can correspond to a step (e.g., prior work StepCoder (Ma et al. 23) trains process reward models for code using a similar framework). --- Rebuttal Comment 1.1: Comment: I appreciate the authors for adding the experiments. I request the authors to add these experiments into the main paper to support the claims, and also to improve the presentation of the paper. I've raised the score. --- Reply to Comment 1.1.1: Comment: Thank you for increasing your score! We will definitely add these experiments and discussion to the main paper and improve the presentation per the discussion above. Since there is still one more day, we are also wondering if there is some other discussion or evidence we can provide in this period to further improve your evaluation of our paper. Please let us know. Thanks a lot!
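The claim in this thread that averaging $K$ rollouts per step shrinks advantage-estimation error can be checked with a quick simulation; modeling each rollout's outcome (correct final answer or not) as a Bernoulli draw is our simplifying assumption, not the paper's exact estimator:

```python
import random
from statistics import pvariance

def mc_success_rate(p_success, k, rng):
    """Monte-Carlo estimate of a step's success probability from k rollouts,
    the building block of the per-step advantage estimate."""
    return sum(rng.random() < p_success for _ in range(k)) / k

rng = random.Random(0)
p = 0.4  # assumed per-step success rate, roughly the MATH SFT accuracy
est_k1 = [mc_success_rate(p, 1, rng) for _ in range(5000)]
est_k5 = [mc_success_rate(p, 5, rng) for _ in range(5000)]

# Variance of the k-rollout mean is p(1-p)/k, so K=5 should cut it ~5x,
# matching the "<25% of the K=1 error" observation reported above.
var1, var5 = pvariance(est_k1), pvariance(est_k5)
```

With $p(1-p)/K$ as the theoretical variance, `var1` concentrates near 0.24 and `var5` near 0.048, i.e. a roughly 5x reduction.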
Rebuttal 1: Rebuttal: We thank all reviewers for their detailed feedback, and in particular would like to highlight the positive assessment of our work by Reviewer iuJa: **“contains quite a lot of insights, and is substantially better than most papers on synthetic data that I’ve seen in the past”**. To address the reviewers' concerns, we have added several new experiments, which we believe substantially improve the quality of our paper. The results are shown in the 1-page PDF, which we will incorporate using the extra page in the final version. We also discuss a common question on the computational cost of collecting positive and negative synthetic data and running per-step RL in this global response. **We hope that these responses address the reviewers' concerns and look forward to the discussion period.** ___ ## **List of new experiments in 1-page PDF** - **Figure 1:** We clone responses with high-advantage steps from positive and negative responses sampled from the SFT policy. We filter all responses where the minimum advantage across all steps is in the bottom 50th percentile. - **Figure 2:** (Top) We analyze the effects of running per-step DPO on our didactic star-graph problem when advantages are computed using a single rollout and as a result are likely incorrect. (Bottom) We compute the variance of advantage estimates as we increase K for $BoK(\pi_{\mathrm{sft}})$. - **Figure 3:** We scale up RFT to 128k data points by sampling answers from the SFT policy for only 8k/16k questions in $\mathcal{D}_\mathrm{syn}$ (left). The performance degradation for RFT is now severe, and similar to what we observe when running RFT on a dataset where we synthetically inject spurious steps (right). In the same figure, we also see that per-step RFT (described above) is able to filter responses with low-advantage, spurious steps and is able to almost match the performance of per-step DPO. 
- **Figure 4:** We compare KTO with per-step KTO and find that the benefits of our conceptual framework of step-level credit assignment are agnostic to the choice of the underlying RL algorithm (DPO/KTO). - **Panel 5:** We show an algorithm box for the per-step DPO algorithm used in our experiments. Note that in this algorithm we compute step-level advantages over only negative responses, consistent with Hwang et al. [23]. To more closely approximate the theoretically optimal version (mimicking advantage-weighted RL) in Thm 6.1, we run a modified version of this algorithm where we also compute advantages for steps in positive rollouts. This modified version performs even better than per-step DPO, validating the utility of our framework in Sec 6. ___ ## **Computational costs of SFT, RFT, per-step RL** To understand the computational costs of SFT, RFT, and per-step DPO, we make the following comparisons. First, SFT exclusively uses synthetic prompts and responses generated from more capable models. Assuming inference FLOPs are $2 \times N \times D$, this cost scales with the parameter size ($N$) of the more-capable model used to generate the data for every new synthetic question ($D$), whereas RFT and per-step DPO only require running more rollouts with a much smaller 7B model for fewer questions, not incurring the inference FLOPs of the more capable model. Thus, in a FLOPs-matched comparison, RFT / per-step DPO should have an even bigger edge over SFT. Now we perform a comparison of FLOPs / compute costs for RFT and per-step DPO, for a fixed number of synthetic prompts. We can break the total FLOPs into two parts: - **(a)** training FLOPs for RFT and per-step DPO. 
- **(b)** inference FLOPs needed to generate data from the SFT policy for running RFT and per-step DPO. **Regarding (a),** we train both RFT and per-step DPO for an equal number of steps, with an identical number of forward and backward passes through the model: more precisely, since the DPO loss utilizes two samples together, we run DPO with half the batch size to fit on the GPU memory. Put together, this should lead to an equal number of forward and backward passes for per-step DPO and RFT. The training FLOPs are typically given by $6 \times N \times D$, which should be the same for both RFT and per-step DPO. **Regarding (b),** we compare the number of samples that need to be drawn from the SFT policy for both RFT and per-step DPO. For RFT, to collect enough positives from $\pi_\mathrm{sft}$, we draw 100 samples per question and filter for positive ones. For DPO, if the accuracy of $\pi_\mathrm{sft}$ is $p$, then with high probability, identifying a single positive and negative sample for a prompt takes $\approx \max(1/p, 1/(1-p))$ samples. Now, for computing advantage estimates of each step in the negative response, we set the maximum number of steps per generation to 10 and sample 5 MC rollouts conditioned on each step. Thus, in total, per-step DPO requires $\approx \max(1/p, 1/(1-p)) + 50$ samples per question, which is smaller than the $100$ for RFT when we plug in $p \approx 0.4$ and $p \approx 0.7$ for MATH and GSM8K respectively (this is the accuracy of the SFT model in our experiments). **Thus, per-step DPO is more computationally efficient than RFT in our experiments.** Pdf: /pdf/9b7697fa542bd6fdc18efb4069c07b6cd5a3c556.pdf
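The per-question sample accounting above can be written out directly; a small sketch (the function names are illustrative):

```python
def dpo_samples(p):
    """Expected samples to find one positive and one negative response,
    dominated by the rarer outcome: max(1/p, 1/(1-p))."""
    return max(1 / p, 1 / (1 - p))

def per_step_dpo_samples(p, steps=10, rollouts=5):
    """Pair collection plus `rollouts` MC rollouts for each of up to
    `steps` steps used in advantage estimation."""
    return dpo_samples(p) + steps * rollouts

RFT_SAMPLES = 100  # RFT draws 100 samples per question and keeps positives

# MATH (p ~ 0.4) and GSM8K (p ~ 0.7): per-step DPO stays under the RFT budget.
math_cost = per_step_dpo_samples(0.4)   # max(2.5, 1.67) + 50 = 52.5
gsm8k_cost = per_step_dpo_samples(0.7)  # max(1.43, 3.33) + 50 ~ 53.3
```

Both costs are well under the 100-sample RFT budget, consistent with the efficiency claim in bold above.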
NeurIPS_2024_submissions_huggingface
2024
Score-based 3D molecule generation with neural fields
Accept (poster)
Summary: In this study, the authors propose FuncMol, a novel model for molecular generation. The proposed approach is based on continuous-space neural fields and involves the joint training of a molecular neural field along with molecular modulation codes. The implementation of a neural empirical Bayes technique allows the generation of new molecular structures during the inference phase. The performance of the proposed model is compared against established neural baseline models on the GEOM-DRUGS dataset, which comprises small drug-like molecules. Additionally, the model is examined on the CREMP dataset, containing macrocyclic peptides, to provide a comprehensive analysis of its capabilities. Strengths: To the best of my knowledge, the idea of using continuous-space neural fields for the generation of molecules is novel and previously unexplored. Moreover, in addition to training on the popular GEOM dataset, the authors also train the model on the CREMP dataset, which contributes to the advancement of the peptide generation domain. Weaknesses: While the approach presented in the paper is novel in the context of molecular generation, the experimental analysis is incomplete. Several baselines and thorough ablation studies to justify the selection of hyperparameters are absent, as is an in-depth examination of the model's limitations. Further details can be found in the Questions section. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. (major) The authors have incorporated several diffusion models as baselines, such as EDM and GeoLDM, to benchmark their model's performance on the GEOM-DRUGS dataset. However, they have missed the important MolDiff (a) baseline, which mitigates problems in generated spatial structures, including the atom-bond inconsistency problem, by introducing guidance from auxiliary bond predictors. 
Moreover, MolDiff enhances the evaluation framework by incorporating additional Jensen-Shannon divergence metrics to assess structural and spatial aspects more comprehensively. 2. (major) The section on macrocycle peptide generation does not take into account other relevant baselines, one of which is RINGER (b), whose authors released code for reproduction. 3. (moderate) The experimental setup in this study focuses only on unconditional molecule generation. In practice, generation of spatial molecular structures is mainly performed under given conditions, as in conformation generation (c) and pocket-conditioned generation (d), which respectively aim to generate spatial molecular structures given a molecular graph or a protein pocket. The paper would benefit from extending its analysis to demonstrate how the model could be adapted to integrate external conditions and discussing potential avenues for this adaptation. 4. (minor) The current model is not SE(3) equivariant, which implies that molecular rotations and translations could affect the modulation codes. The authors note this as a limitation in line 346 but do not specify whether any techniques were utilized, such as coordinate canonicalization, to mitigate this issue during model training. Further clarification on the methods used to address this challenge would be valuable. 5. (minor) One of the model bottlenecks is reconstructing atom positions from the continuous occupancy field, which necessitates calculations on a uniform discretization grid with cubic scaling. The authors select a discretization step of 0.25Å, followed by L-BFGS optimization for refining atom positions. However, the manuscript does not include an ablation study examining the impact of different discretization steps on the balance between model performance and computational speed, which would be critical for justifying the chosen discretization step. 
(a) MolDiff: Addressing the Atom-Bond Inconsistency Problem in 3D Molecule Diffusion Generation (b) RINGER: Rapid Conformer Generation for Macrocycles with Sequence-Conditioned Internal Coordinate Diffusion (c) Torsional Diffusion for Molecular Conformer Generation (d) 3D Equivariant Diffusion for Target-Aware Molecule Generation and Affinity Prediction Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: In the "Discussion" section, the authors address several model limitations; however, they do not explore the significant issue of integrating external conditions into the model. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
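The Jensen-Shannon divergence metrics mentioned in question 1 compare histograms of geometric features (bond lengths, angles, dihedrals) between generated and reference molecules; a minimal sketch for two discrete histograms (base-2 logs, so the value is bounded by 1 bit):

```python
from math import log2

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2) between two discrete
    distributions given as equal-length probability lists."""
    def kl(a, b):
        # Kullback-Leibler divergence; 0*log(0/y) terms are skipped.
        return sum(x * log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Identical histograms give 0; fully disjoint ones give the maximum, 1 bit.
same = js_divergence([0.5, 0.5], [0.5, 0.5])      # -> 0.0
disjoint = js_divergence([1.0, 0.0], [0.0, 1.0])  # -> 1.0
```

Unlike KL, the JS divergence is symmetric and finite even when the two histograms have non-overlapping support, which makes it convenient for comparing bond-length and angle distributions.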
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and questions. A general rebuttal is posted above. Below we address the reviewer's individual questions. **1. MolDiff Baseline.** MolDiff shows that incorporating bond information in point cloud-based approaches improves the quality of the generated samples. This contribution is orthogonal to ours and can potentially be incorporated into our generative model, e.g. via additional channels. This, however, is not the focus of our work, nor that of the baselines we considered. Here, we aim at proposing for the first time neural fields as a new representation for 3D molecules (a non-trivial task). For completeness (despite different training assumptions), we compare FuncMol to MolDiff. Since the weights for MolDiff with hydrogens were unavailable, we compared FuncMol using MolDiff’s metrics and the MolDiff performance reported in their Appendix D.1, Table 8. We observe that FuncMol achieves competitive results in most metrics despite not using bond information. | | MolDiff with H | FuncMol | |----------------|---------------|----------------| | Validity ↑ | 0.957 | 1.000 | | Connectivity ↑ | 0.772 | 0.739 | | Succ. Rate ↑ | 0.739 | 0.739 | | Novelty ↑ | 1.000 | 0.992 | | Uniqueness ↑ | 1.000 | 0.977 | | Diversity ↑ | 0.427 | 0.810 | | Sim. Val. ↑ | 0.695 | 0.554 | | QED ↑ | 0.688 | 0.715 | | SA ↑ | 0.806 | 0.815 | | Lipinski ↑ | 4.868 | 5.000 | | RMSD ↓ | 1.032 | 1.088 | | JS bond lengths ↓ | 0.414 | 0.529 | | JS bond angles ↓ | 0.182 | 0.217 | | JS dihedral angles ↓ | 0.244 | 0.232 | **2. RINGER Baseline.** To our knowledge, we are the first to report results on CREMP in the unconditional all-atom 3D molecule generation setting. This experiment is more for qualitative purposes as CREMP is very recent and not a standard benchmark for this task. 
RINGER [1] also considered CREMP but tackled conformer generation, a different problem from ours: it assumes knowledge of the molecular sequence/graph during training and sampling. This simplifies generation, since the model knows a priori the number of atoms, their types, the bonds between them and the approximate atom locations. RINGER only parametrizes angles and torsions, while FuncMol and its baselines perform all-atom generation. We tried to extend our baselines to CREMP but did not succeed, mainly due to the high memory consumption (e.g. VoxMol took 40 GPU-hours per epoch, while FuncMol took 2.7 GPU-hours). To encourage comparisons to FuncMol, we include some quantitative metrics used in [1] that measure the distance between test and generated distributions using KL divergence. The KL divergences for the bond angles are 0.1615 ($\theta_1$), 0.1345 ($\theta_2$) and 0.2197 ($\theta_3$); those for the dihedral angles are 0.1127 ($\phi$), 0.1178 ($\psi$) and 0.1813 ($\omega$). The percentage of valid generated peptides (for which we can extract the sequence of amino acids from their SMILES) is 82.7%. **3. Focus on unconditional generation, which is not as good as conditional generation.** We agree that conditional generation presents more practical value than unconditional generation. We note, however, that our approach is the first to introduce neural fields as a representation of 3D molecules. The focus of our submission is to show that (i) it is a feasible approach (this is non-trivial), (ii) it scales well with data and molecule size, and (iii) it achieves competitive results with fast sampling time. We will include a discussion about this topic in the appendix and we will leave extensions to conditional generation for future work (similar to how TargetDiff [1]/DiffSBDD [2] extend EDM or VoxBind [3] extends VoxMol). **4. The model is not SE(3) equivariant. 
How to leverage rotation/translation?** We do not leverage any SE(3) equivariance in the approach described in the original manuscript. We acknowledge that this is a good point and address it in the rebuttal by performing data augmentation during training (see main Rebuttal comment "Issues with memorization"). Data augmentation improves the quality of the representations as it helps learn a "more semantic" latent space z. This is reflected in the empirical results of Rebuttal Tables 1 and 2, where the uniqueness score significantly improves. **5. Ablation study missing: Impact of different discretization steps.** Please see the main Rebuttal, where we address this comment. Appendix D of the manuscript contains four other thorough ablation studies that justify other design choices in FuncMol. [1] Guan et al. "3D Equivariant Diffusion for Target-Aware Molecule Generation and Affinity Prediction". ICLR23 [2] Schneuing et al. "Structure-based Drug Design with Equivariant Diffusion Models". Arxiv22 [3] Pinheiro et al. "Structure-based drug design by denoising voxel grids" ICML24 --- Rebuttal 2: Comment: I would like to thank the authors for their answers to my inquiries, especially concerning the ablation study and the comparative analysis with the MolDiff model. I hope that all provided metrics and the new baselines will be included in the final version. Additionally, it would be beneficial to include results from the MolDiff model trained on a hydrogen-depleted graph. I raise my score.
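The rotation data augmentation discussed in point 4 amounts to applying a random 3D rotation to atom coordinates during training; a generic sketch using Rodrigues' rotation formula, not the authors' implementation:

```python
import math
import random

def random_rotation_matrix(rng):
    """Random 3D rotation from a random unit axis and angle
    (Rodrigues' formula); adequate for augmentation sketches."""
    while True:  # sample a random unit axis from an isotropic Gaussian
        v = [rng.gauss(0, 1) for _ in range(3)]
        n = math.sqrt(sum(x * x for x in v))
        if n > 1e-8:
            break
    kx, ky, kz = (x / n for x in v)
    t = rng.uniform(0, 2 * math.pi)
    c, s = math.cos(t), math.sin(t)
    C = 1 - c
    return [
        [c + kx * kx * C,      kx * ky * C - kz * s,  kx * kz * C + ky * s],
        [ky * kx * C + kz * s, c + ky * ky * C,       ky * kz * C - kx * s],
        [kz * kx * C - ky * s, kz * ky * C + kx * s,  c + kz * kz * C],
    ]

def rotate(coords, R):
    """Apply rotation matrix R to a list of 3D points."""
    return [[sum(R[i][j] * p[j] for j in range(3)) for i in range(3)]
            for p in coords]

rng = random.Random(0)
R = random_rotation_matrix(rng)
atoms = [[1.0, 0.0, 0.0], [0.0, 1.5, -0.5]]
rotated = rotate(atoms, R)
```

A rotation preserves all interatomic distances, so the augmented molecule is geometrically identical; only its pose, and hence its modulation code in a non-equivariant encoder, changes.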
Summary: This paper proposes a new representation for 3D molecules, and the representations are low-dimensional, compact, and scalable. Based on the representation, this paper proposes a new score-based generative model, FuncMol, which shows competitive results on GEOM-drugs and scales up to CREMP. Besides, FuncMol adopts the recently proposed walk-jump sampling and enjoys fast sampling. Strengths: 1. New representation. This paper proposes atom occupancy fields as Gaussian-like 3D shapes, enabling "density" estimation for any point in space. 2. Feasible decoding. Decoding a molecule (atom types and coordinates) directly from an embedding vector is extremely difficult, yet decoding a molecular field from an embedding vector seems more feasible. 3. Competitive results. Based on the proposed molecular neural field, this paper proposes a generative model and achieves competitive results with SOTA models, making this representation promising for downstream applications. 4. Fast sampling. Weaknesses: 1. From the perspective of 3D molecule generation, FuncMol does not outperform VoxMol and GeoLDM on most metrics. It could be a trade-off between generative ability and representation quality. 2. The authors claim FuncMol can scale up to larger molecules, yet the more fundamental question is whether we can have a scaling law for molecules like that for LLMs. Scaling laws are essential for molecular foundation models, and should have a much larger impact. 3. Molecule generation may not be a perfect scenario for this representation, which seems more like a generative pretraining framework. Besides, the reviewer is interested in the use of such representations in downstream applications, which are neither tested nor discussed. I will raise my score if the authors present more evidence for weakness 3. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Why can FuncMol not outperform GeoLDM and VoxMol in many metrics? 2. Why use walk-jump sampling? 
It seems unnecessary to "jump" before the last step. 3. How to guarantee the generalization of learned "codes"? The manifold of valid molecules may not be continuous, nor span the embedding space. Thus, the reviewer suspects that random initialization and Langevin MCMC may produce invalid molecules. I will raise my score if the questions are properly answered. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: See above Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and positive feedback. A general rebuttal is posted above. Below we address the specific issues raised by the reviewer. **“Why does FuncMol not outperform GeoLDM and VoxMol?"** It is challenging to explain why these models perform differently, as they make different choices w.r.t. data representation, NN architectures and generative modeling. FuncMol outperforms GeoLDM in most of the metrics of Rebuttal Table 2 and some metrics of Rebuttal Table 1 (c.f. main rebuttal). VoxMol outperforms FuncMol on GEOM-drugs (the performance gap is smaller when adding data augmentation to FuncMol). VoxMol uses 3D UNets, an expressive and well-established NN architecture tailored to discrete data, while FuncMol uses relatively new neural field architectures; neural fields tend to underperform against discrete-data architectures in many computer vision applications [1]. FuncMol, however, generates 3D molecules an order of magnitude faster than VoxMol and GeoLDM. Moreover, unlike them, FuncMol easily scales to larger molecules (such as macro-cyclic peptides). **Scaling laws for molecules.** We agree that coming up with scaling laws for 3D molecules is an interesting research direction. However, this lies beyond the scope of this work, especially when taking into account the large amount of computation resources required for this type of study. That being said, we observed that, for a fixed dataset, increasing the parameter count of our models (both the neural field and our denoiser) significantly improved our results. Moreover, increasing the effective dataset size, e.g. with data augmentation, leads to better performance. **Downstream applications: "Molecule generation may not be a perfect scenario for this representation".** We thank the reviewer for this comment that helped improve the paper. Generative modeling for molecules is an important task with many practical applications. 
It also serves as a testbed to demonstrate that neural fields provide a very expressive/efficient/scalable decoder for 3D molecules, unlike point clouds or voxel grids. At the reviewer’s request, we show that neural field representations can also be used in other downstream tasks, e.g. discriminative tasks related to property prediction. We consider some properties from QM9 ($\mu$ dipole moment, $u_0$ internal energy, $\alpha$ isotropic polarizability, $C_v$ heat capacity) and report the Spearman correlation performance (ranging from -1 to 1) in the linear probing setting (i.e. training a linear regression on top of frozen codes). We observe high ranking scores and correlations between predictions and ground-truth labels, as shown in Rebuttal Fig. 4 (cf. main rebuttal pdf) and the table below.

| | $\mu$ | $u_0$ | $\alpha$ | $C_v$ |
|---|---|---|---|---|
| Spearman $\rho$ ↑ | 0.632 | 0.939 | 0.968 | 0.950 |

**Why use walk-jump sampling?** We explain why we use walk-jump sampling/neural empirical Bayes on L40-41: "[it] enjoys many properties such as fast-mixing, simplicity for training and fast sampling speed" as it relies on a single noise level. It has been successfully applied in 3D molecule generation (e.g., VoxMol) and antibody sequence generation [2]. Despite choosing walk-jump sampling for our main results, we stress that our framework is agnostic to the generative model used; we reported results with diffusion models in Appendix C. Compared to diffusion, we found that walk-jump sampling on this task is simpler/faster to train, faster to sample from (fewer sampling steps are required) and achieves slightly better empirical results. **How to guarantee the generalization of learned "codes"? Random initialization and Langevin MCMC may produce invalid molecules.** We agree with the reviewer that the manifold of valid molecules is very complicated (low-dimensional in a large ambient space and potentially non-smooth).
However, the manifold of codes after _Gaussian smoothing_ is much simpler, especially with large noise. This allows us to perform Langevin MCMC from random initialization. This is precisely the intuition of neural empirical Bayes [3], the generative framework we used in this work. We observed empirically that Langevin MCMC on smoothed codes mixes well. For example, Rebuttal Fig. 5 (cf. main rebuttal pdf) shows two single MCMC chains initialized randomly with different seeds, where molecules are generated after every 200 "walk" steps. This phenomenon has also been observed on different data modalities, e.g., images [4], biological sequences [2] and voxelized molecules (VoxMol). [1] Dupont et al., "From data to functa: Your data point is a function and you can treat it like one", ICML 2022. [2] Frey et al., "Protein Discovery with Discrete Walk-Jump Sampling", ICLR 2024. [3] Saremi et al., "Neural empirical Bayes", JMLR 2019. [4] Saremi et al., "Multi-measurement Generative Models", ICLR 2022. --- Rebuttal Comment 1.1: Comment: I raised my score
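For readers unfamiliar with the walk-jump scheme discussed above, here is a minimal 1-D sketch. Everything in it is illustrative: the "dataset" is three scalar codes, and the score of the Gaussian-smoothed density is computed analytically instead of with a learned denoiser. Only the walk (Langevin MCMC on the smoothed density) and jump (empirical Bayes estimate $\hat{x}(y) = y + \sigma^2 \nabla \log p_\sigma(y)$) steps are faithful to the method as described.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "dataset" of 1-D codes; the smoothed density p_sigma is then an
# analytic Gaussian mixture centred on the data points.
data = np.array([-2.0, 0.0, 3.0])
sigma = 0.5

def score(y):
    # grad log p_sigma(y) for the mixture (1/N) * sum_i N(y; x_i, sigma^2)
    d = data - y
    w = np.exp(-0.5 * (d / sigma) ** 2)
    w = w / w.sum()
    return (w * d).sum() / sigma ** 2

# "Walk": Langevin MCMC on the smoothed density, from random init.
y = rng.normal()
step = 0.01
for _ in range(2000):
    y = y + step * score(y) + np.sqrt(2 * step) * rng.normal()

# "Jump": single-step empirical Bayes estimate of the clean code.
xhat = y + sigma ** 2 * score(y)
print(f"jump estimate: {xhat:.2f}")
```

The jump estimate is a convex combination of the data points, so it lands near one of the clean codes even though the walk explores the smoothed density.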
Summary: This paper introduces FuncMol, a method that leverages recent work on implicit neural representations for unconditional 3D molecule generation. The key idea is to (1) parametrize continuous atomic densities through a neural field network and molecule-specific modulation codes and (2) use a score-based generative model in modulation code space to efficiently generate new molecules. The authors demonstrate competitive performance on the GEOM-drugs dataset, showing that the generated molecules are roughly on par with existing techniques at significantly faster sampling speeds. Strengths: * Using implicit neural representations to represent 3D point clouds and atomic densities is an innovative approach to 3D molecular generation that could complement the current graph- and voxel-based state-of-the-art. * The score-based generative model and walk-jump sampling process in latent space afford significantly faster sampling speeds than existing approaches and enable the method to scale favorably to larger systems. * The paper is well-written and the authors clearly describe their method and the motivation behind it, as well as the experimental details used for training and evaluation. Weaknesses: * The approach seems to have some issues with overfitting/memorization, which the authors mitigate by applying dropout between the MFN's multiplicative blocks. However, the model still seems to generate significantly fewer unique molecules than the baselines. It would be good if the authors could comment on whether this is caused by e.g. limited capacity of the neural field, a degenerate modulation code space, or the walk-jump sampling scheme, and if there are any ways to address this. * The method performs strictly worse than existing approaches on the QM9 dataset. The authors argue that QM9 is not a suitable dataset to evaluate unconditional generative models, since it consists of an exhaustively enumerated set of small molecules.
However [2] and [3] only state that novelty is not a meaningful performance metric in this setting. The ability to capture this relatively well-behaved training distribution and generate stable and unique molecules should still be a reasonable test of generative modeling performance. * The paper only provides qualitative results for the CREMP dataset. Since the scalability of the proposed method to larger systems is listed as one of its main advantages, it would be helpful to include a quantitative comparison to other methods from [1]. Most of the metrics in Table 1 should be directly applicable to macro-cyclic peptides, in addition to e.g. the percentage of closed rings and canonical side chains. Similarly, it would be good to include the sampling times in this setting. Technical Quality: 3 Clarity: 3 Questions for Authors: * What is the rationale behind switching from the meta-learning approach in [4] to optimizing the modulation codes and neural field parameters jointly? Could this cause some of the issues with generating novel/unique molecules? * Tables 1 and 2 report metrics for 10000 samples per model. However, in lines 257-263, the authors mention that the model without dropout needs to generate twice as many samples as the model with dropout. Are the models generating more than 10000 samples and are non-novel molecules filtered out before reporting the results? If not, what is the percentage of novel molecules for each model? * [2] defines *atom stability* as the proportion of atoms that have the correct valency and *molecule stability* as the proportion of generated molecules for which all atoms are stable. How can the percentage of valid molecules be higher than the percentage of stable molecules, since RDKit’s sanitization filter checks if a molecule has correct valencies? --- [1] Grambow, Colin A., et al. "RINGER: Rapid Conformer Generation for Macrocycles with Sequence-Conditioned Internal Coordinate Diffusion." arXiv preprint arXiv:2305.19800 (2023). 
[2] Hoogeboom, Emiel, et al. "Equivariant diffusion for molecule generation in 3d." International conference on machine learning. PMLR, 2022. [3] Vignac, Clement, and Pascal Frossard. "Top-N: Equivariant Set and Graph Generation without Exchangeability." International Conference on Learning Representations. [4] Dupont, Emilien, et al. "From data to functa: Your data point is a function and you can treat it like one." International Conference on Machine Learning. PMLR, 2022. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors provide a brief discussion of the lack of equivariance as a limitation in Section 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. A general rebuttal is posted above. Below we address the reviewer's specific questions. **Issues with overfitting/memorization.** By further investigating this issue, we realized that memorization is likely caused by a degenerate latent space—as alluded to by the reviewer. Data augmentation helped mitigate this issue and learn a more "semantically meaningful" latent space. This is reflected in the results of Rebuttal Table 1 and 2 (see main Rebuttal "Issues with memorization"), where uniqueness is significantly improved and we no longer observe memorization. This is a consequence of the model overfitting less, which also allowed us to train denoisers with higher capacity. **Worse performance on QM9.** We agree that QM9 is a good task to test generative models. FuncMol and VoxMol perform comparably on QM9 and both are outperformed by point cloud approaches. FuncMol, however, has an order of magnitude faster sampling time. QM9 has limited relevance for drug discovery: the number of stable and unique molecules is limited as QM9 is an enumeration of all molecules up to 9 heavy atoms satisfying some constraint. Achieving a high performance on QM9 does not mean the model can handle more complex distributions. In fact, the best models we reported on QM9 (GeoLDM, EDM) perform the worst on GEOM-drugs. **Quantitative results on CREMP.** To our knowledge, we are the first to report results on CREMP in the unconditional all-atom 3D molecule generation setting. This experiment is more for qualitative purposes as CREMP is very recent and not a standard benchmark for this task. RINGER [1] also considered CREMP but tackled conformer generation, a different problem from ours: it assumes knowledge of the molecular sequence/graph during training and sampling. This simplifies generation, since the model knows a priori the number of atoms, their types, the bonds between them and the approximate atom locations. 
RINGER only parametrizes angles and torsions, while FuncMol and the baselines perform all-atom generation. We tried to extend our baselines to CREMP but did not succeed, mainly due to the high memory consumption (e.g. VoxMol took 40 GPU-hours per epoch, while FuncMol took 2.7 GPU-hours). To encourage comparisons to FuncMol, we include some quantitative metrics used in [1] that measure the distance between test and generated distributions using KL divergence. The KL divergences for the bond angles are 0.1615 ($\theta_1$), 0.1345 ($\theta_2$) and 0.2197 ($\theta_3$); the ones for the dihedral angles are 0.1127 ($\phi$), 0.1178 ($\psi$) and 0.1813 ($\omega$). The percentage of valid generated peptides (for which we can extract the sequence of amino acids from their SMILES) is 82.7%. **Sampling time on CREMP.** We reported the sampling time on L327: "our model takes around 1.4s to generate a molecule. [..] should VoxMol be trained successfully, it would take over a minute [per] molecule". **Why switch from the meta-learning approach in [6]?** Joint optimization, a.k.a. auto-decoding, has been applied in several works [2, 5] to learn signed-distance functions. Auto-decoding is attractive as it does not require the memory-expensive double-loop optimizations in meta-learning and is simpler to optimize. We attribute the novelty/uniqueness issue to the lack of data augmentation during training (see "Issues with memorization" in the main Rebuttal). **Filtering samples due to memorization.** As written on L261: "we only consider the novel molecules of FuncMol$_{drop=0}$ and exclude those pathological chains when benchmarking this model"; the total number of molecules that are evaluated is 10,000. The other reported FuncMol model (with dropout) considers both novel and non-novel molecules. Our initial novelty scores are 56.3\% for FuncMol$_{drop=0}$ and 77.3\% for FuncMol (with dropout). With data augmentation, the novelty score is over 99\%.
**How can the percentage of valid molecules be higher than the percentage of stable molecules?** We use the same metrics as previous work for an apples-to-apples comparison with other work. In fact, all baselines that report these two metrics have higher validity than mol stability (EDM, GeoLDM, VoxMol). _Validity_ is measured as the success rate of RDKit sanitization and is computed only on the largest fragment of the generated molecules (when they consist of disjoint fragments). _Atom_ and _molecule stability_ are defined as the reviewer described and were introduced in [4]. As explained by [3]: "[mol stability] is similar to validity but, in contrast to RDKit sanitization, they do not allow for adding implicit hydrogens to satisfy the valency constraints." [1] Grambow et al., "RINGER", arXiv 2023. [2] Park et al., "DeepSDF", CVPR 2019. [3] Vignac et al., "MiDi", ECML 2023. [4] Satorras et al., "E(n) equivariant normalizing flows", NeurIPS 2021. [5] Chou et al., "Diffusion-SDF", ICCV 2023. [6] Dupont et al., "From data to functa", ICML 2022. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the detailed response. The proposed improvements and reported performance gains strengthen the empirical section of the paper and address my main concerns. I will raise my score accordingly.
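To make the distinction between these metrics concrete, here is a toy sketch of atom and molecule stability as described above (all atoms having correct valency vs. per-atom valency). The allowed-valency table and the two hand-made "molecules" are invented for illustration; real implementations infer valences from bonds rather than taking them as given.

```python
# Toy allowed valences (illustrative subset, ignoring formal charges).
ALLOWED = {"C": {4}, "N": {3}, "O": {2}, "H": {1}}

# Each "molecule" is a list of (element, observed_valence) pairs.
molecules = [
    [("C", 4), ("O", 2), ("H", 1), ("H", 1)],   # all atoms stable
    [("C", 4), ("N", 2), ("H", 1)],             # N has wrong valency
]

def atom_stable(atom):
    elem, valence = atom
    return valence in ALLOWED[elem]

atoms = [a for mol in molecules for a in mol]
# Atom stability: fraction of atoms with a correct valency.
atom_stability = sum(map(atom_stable, atoms)) / len(atoms)
# Molecule stability: fraction of molecules whose atoms are ALL stable.
mol_stability = sum(all(map(atom_stable, m)) for m in molecules) / len(molecules)

print(atom_stability, mol_stability)  # 6/7 atoms stable; 1/2 molecules stable
```

This illustrates why molecule stability is the stricter metric: one unstable atom costs only 1/7 of atom stability here, but a full 1/2 of molecule stability.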
Summary: This study proposes a neural field model that treats molecular data as continuous atomic occupancy fields. The model learns a latent code that can be used to predict the atomic occupancy in discretized grids. The authors then perform score-based generative modeling using neural empirical Bayes and show higher generation quality than point cloud-based and voxel-based models. Strengths: This study utilizes a novel representation of 3D molecular structures. The proposed model does not require pre-defined atom numbers, which is a common limitation of point cloud representations, has higher expressivity than GNN-based methods, and also scales better than voxel grid representations. It could also scale to larger molecules such as cyclic peptides. Weaknesses: I only have some minor concerns and questions. See the "Questions" section. Technical Quality: 4 Clarity: 4 Questions for Authors: 1\. It would be interesting to see whether the latent space has any meaningful manifold. For example, does the latent code capture patterns such as basic molecular fragments or similarity between molecules, etc.? 2\. Also, though the sampling is fast, the peak finding and iterative refinement could be time-consuming. Could the authors comment on this? 3\. Some recent methods [1,2] explicitly model bonds, leading to high quality generations. How does the proposed method compare with these methods? Is it also possible to incorporate bond information into the molecular field modeling, e.g. as additional channels? [1] Peng, Xingang, et al. "Moldiff: Addressing the atom-bond inconsistency problem in 3d molecule diffusion generation." arXiv preprint arXiv:2305.07508 (2023). [2] Vignac, Clement, et al. "Midi: Mixed graph and 3d denoising diffusion for molecule generation." Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Cham: Springer Nature Switzerland, 2023. 
Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The authors have properly addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive and valuable feedback. A general rebuttal is posted above. Below we address the reviewer's concerns. **1. Meaningful representation of latent space.** See "Meaningful representation of latent space [yg8b]" in the main Rebuttal. Overall, the additional experiments show that the latent space is well structured. **2. Peak finding + iterative refinement can be time-consuming.** To clarify, the average sampling time reported in Table 2 is the time required to fully generate the molecules ("from noise to set of atom types/coordinates"). It includes four steps: (i) the walk-jump sampling steps to sample latent codes, (ii) rendering (going from the code to the voxel grid at resolution 0.25A), (iii) the peak finding and (iv) the iterative refinement. The sampling time for all steps together is an order of magnitude faster than that of the alternative baselines. The bottleneck in sampling time is the rendering phase (ii). The sampling time can be reduced at the cost of performance by rendering codes at a lower resolution (see "Ablation study" in the main Rebuttal). **3. Explicitly modeling bonds between atoms.** Some recent works, e.g. MolDiff and MiDi, show that incorporating extra information such as bonds and formal charges in point cloud-based approaches (e.g. EDM) improves the quality of the generated samples. These contributions are orthogonal to ours and can potentially be incorporated into our generative model, e.g. via additional channels as suggested by the reviewer. This is, however, not the focus of our work (nor that of the baselines we considered). Here, we aim to propose, for the first time, neural fields as a new representation for 3D molecules (a non-trivial task). For completeness (despite different training assumptions), we compare FuncMol to MolDiff. MolDiff only incorporates bond information into the diffusion process, making it a simple representative baseline for this class of model.
Since the weights for MolDiff with hydrogens were unavailable, we compared FuncMol using MolDiff’s metrics and the MolDiff performance reported in their Appendix D.1, Table 8. We observe that FuncMol achieves competitive results in most metrics despite not leveraging bond information during training.

| Metric | MolDiff with H | FuncMol |
|---|---|---|
| Validity ↑ | 0.957 | 1.000 |
| Connectivity ↑ | 0.772 | 0.739 |
| Succ. Rate ↑ | 0.739 | 0.739 |
| Novelty ↑ | 1.000 | 0.992 |
| Uniqueness ↑ | 1.000 | 0.977 |
| Diversity ↑ | 0.427 | 0.810 |
| Sim. Val. ↑ | 0.695 | 0.554 |
| QED ↑ | 0.688 | 0.715 |
| SA ↑ | 0.806 | 0.815 |
| Lipinski ↑ | 4.868 | 5.000 |
| RMSD ↓ | 1.032 | 1.088 |
| JS bond lengths ↓ | 0.414 | 0.529 |
| JS bond angles ↓ | 0.182 | 0.217 |
| JS dihedral angles ↓ | 0.244 | 0.232 |

--- Rebuttal Comment 1.1: Comment: I thank the authors for addressing my concerns and questions.
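The peak-finding step mentioned in this thread (reading atoms off a rendered occupancy grid) can be illustrated with a small sketch. The 26-neighbour strict-maximum rule and the threshold below are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def find_peaks(grid, threshold=0.5):
    """Return voxel indices that are strict local maxima above a threshold.

    A voxel counts as a peak if its occupancy exceeds all 26 neighbours;
    borders are handled by padding with -inf.
    """
    padded = np.pad(grid, 1, constant_values=-np.inf)
    is_peak = grid > threshold
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                if dx == dy == dz == 0:
                    continue
                neighbour = padded[1 + dx:1 + dx + grid.shape[0],
                                   1 + dy:1 + dy + grid.shape[1],
                                   1 + dz:1 + dz + grid.shape[2]]
                is_peak &= grid > neighbour
    return np.argwhere(is_peak)

grid = np.zeros((8, 8, 8))
grid[2, 2, 2] = 1.0   # one synthetic "atom"
grid[5, 6, 3] = 0.9   # another
print(find_peaks(grid))
```

In the real pipeline the detected peaks would then be refined iteratively, and finer grid resolutions make the maxima sharper at the cost of rendering time, which matches the resolution ablation discussed in the rebuttal.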
Rebuttal 1: Rebuttal: We thank the reviewers for the helpful comments. The reviewers agree that the paper has good soundness, presentation and contributions. They also acknowledge that our approach is novel, addresses limitations of point-cloud and voxel representations, is scalable, has faster sampling and is competitive. Next, we highlight the new experiments of the rebuttal (*see Rebuttal Figures in the attached pdf and Rebuttal Tables in the responses*). We address each review in its respective section. **Issues with memorization [R2Qo]. How to leverage data augmentation [NVMn]?** We trained a new model with data augmentation. Instead of using *auto-decoding* [1] to learn the modulation codes (cf. manuscript), we use an *auto-encoding* approach [2], which allows us to leverage data augmentation on the fly. In this setting, z is computed with a (trainable) 3D CNN encoder that takes as input a voxelized molecule (Rebuttal pdf Fig. 1). Once the encoder is learned, we use it to generate samples z with random data augmentation and train a denoiser as described in Section 4.1. This approach leads to a more structured latent space z as reflected in Rebuttal Tables 1 and 2. We no longer observe memorization and uniqueness is significantly improved.

**Rebuttal Table 1:**

| GEOM-drugs | stable mol %↑ | stable atom %↑ | valid %↑ | unique %↑ | valency W₁↓ | atom TV↓ | bond TV↓ | bond len W₁↓ | bond ang W₁↓ |
|---|---|---|---|---|---|---|---|---|---|
| **data** | 99.9 | 99.9 | 99.8 | 100.0 | .001 | .001 | .025 | .000 | 0.05 |
| **EDM** | 40.3 | 97.8 | 87.8 | 99.9 | .285 | .212 | .048 | .002 | 6.42 |
| **GeoLDM** | 57.9 | 98.7 | 100. | 100. | .197 | .099 | .024 | .009 | 2.96 |
| **VoxMol** | 75.0 | 98.1 | 93.4 | 99.6 | .254 | .033 | .024 | .002 | 0.64 |
| **FuncMol$_{autodec}$** | 60.6 | 98.2 | 100. | 86.6 | .244 | .079 | .044 | .003 | 2.05 |
| **FuncMol$_{autodec,drop}$** | 69.7 | 95.3 | 100. | 77.5 | .268 | .035 | .028 | .003 | 2.13 |
| **FuncMol$_{autoenc}$** | 72.6 | 99.1 | 100. | 95.6 | .250 | .107 | .046 | .003 | 2.31 |

**Rebuttal Table 2:**

| GEOM-drugs | single frag %↑ | median energy↓ | ring sz TV↓ | atms/mol TV↓ | QED ↑ | SA ↑ | logp ↑ | time s/mol.↓ |
|---|---|---|---|---|---|---|---|---|
| **data** | 100.0 | 54.5 | .011 | .000 | .658 | .832 | 2.95 | - |
| **EDM** | 42.2 | 951.3 | .976 | .604 | .472 | .514 | 1.11 | 9.35 |
| **GeoLDM** | 51.6 | 461.5 | .644 | .469 | .497 | .593 | 1.05 | 8.96 |
| **VoxMol** | 82.6 | 69.2 | .264 | .636 | .659 | .762 | 2.73 | 7.55 |
| **FuncMol$_{autodec}$** | 86.0 | 105.0 | .263 | .677 | .713 | .862 | 2.87 | 0.69 |
| **FuncMol$_{autodec,drop}$** | 80.2 | 96.4 | .324 | .970 | .677 | .788 | 2.87 | 0.53 |
| **FuncMol$_{autoenc}$** | 73.9 | 111.7 | .434 | .878 | .715 | .816 | 3.03 | 0.86 |

**Ablation study missing: impact of different discretization steps [NVMn].** As suggested by the reviewer, we include a new ablation study that measures the impact of resolution on sampling time and quality. We observe that finer resolutions improve performance but also increase sampling time. We chose resolution 0.25A as it provides a good trade-off between performance and speed. The table below shows the results as a function of resolution:

| resolution | stable mol %↑ | stable atom %↑ | valid %↑ | unique %↑ | valency W₁↓ | atom TV↓ | bond TV↓ | bond len W₁↓ | bond ang W₁↓ | time per mol (s)↓ |
|---|---|---|---|---|---|---|---|---|---|---|
| **0.167A** | 72.8 | 99.1 | 100. | 95.8 | .248 | .127 | .042 | .003 | 2.36 | 2.8 |
| **0.25A** | 72.6 | 99.1 | 100. | 95.6 | .250 | .127 | .046 | .003 | 2.41 | .86 |
| **0.5A** | 51.7 | 97.6 | 100. | 96.2 | .246 | .126 | .046 | .003 | 2.29 | .15 |

For reference, we had already included four ablation studies in Appendix D.4.

**Meaningful representation of latent space [yg8b]; downstream task and “generalization of learned codes” [eB4P].** We perform three experiments to qualitatively explore the learned manifold and show empirically that it is well structured.

* First, we pick several pairs of molecules and show the interpolation trajectory in latent modulation space. We project the interpolated codes back to the learned manifold of molecules via a noise/denoise operation. Rebuttal pdf Fig. 2 illustrates six trajectories, where we observe that molecules close in latent space share similar structure.
* Second, we show t-SNE plots to demonstrate that the modulation space z encodes molecular properties of QM9. For four different properties, we use t-SNE to embed 400 molecules divided equally between those with the highest and those with the lowest property values. Rebuttal pdf Fig. 3 shows that molecules with similar property values cluster together.
* Finally, we evaluate the latent codes on downstream tasks. We train a linear regression model on frozen latent codes (a.k.a. linear probing) to see how the learned modulations correlate with different properties. Rebuttal pdf Fig. 4 shows the scatter plots and Spearman correlation for four different properties. We observe that the codes are highly predictive of the considered properties, despite being trained in an unsupervised fashion.

[1] Park et al., "DeepSDF: Learning continuous signed distance functions for shape representation", CVPR 2019. [2] Mescheder et al., "Occupancy networks: Learning 3D reconstruction in function space", CVPR 2019. Pdf: /pdf/3c5b0347029e1e0fc04404db58e60269a9e134ec.pdf
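The linear-probing protocol used in the third experiment can be sketched end-to-end on synthetic data. The 16-dimensional "codes" and the linearly generated property below are stand-ins (the real codes come from the trained model), but the probe (least-squares regression on frozen codes) and the Spearman rank correlation are computed in the standard way.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: frozen 16-d "codes" and a property that is a
# noisy linear function of them, so linear probing should recover it.
Z = rng.normal(size=(200, 16))
w_true = rng.normal(size=16)
y = Z @ w_true + 0.1 * rng.normal(size=200)

Z_tr, y_tr, Z_te, y_te = Z[:150], y[:150], Z[150:], y[150:]

# Linear probe: least-squares regression on the frozen codes.
w, *_ = np.linalg.lstsq(Z_tr, y_tr, rcond=None)
pred = Z_te @ w

def spearman(a, b):
    # Spearman rho = Pearson correlation of the ranks (no ties here).
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

rho = spearman(pred, y_te)
print(round(rho, 3))
```

High rho on held-out data indicates the frozen codes are predictive of the property, which is the sense in which the rebuttal's Spearman scores should be read.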
NeurIPS_2024_submissions_huggingface
2024
Connectivity Shapes Implicit Regularization in Matrix Factorization Models for Matrix Completion
Accept (poster)
Summary: The paper studies implicit regularization in matrix completion, a problem that estimates missing entries from a partially observed matrix. In particular, in this paper, matrix completion is formulated as an optimization problem of the form: $\min_{A, B} f(A,B) = \|M - AB\|_S^2$ where $S$ is the subset of observed entries and the function $\|\cdot\|_S$ is the Frobenius norm of a matrix after masking entries outside $S$ to zero. Using this formulation, the authors study the implicit regularization of the gradient flow associated with $f$ whose initial condition is close to zero. Their main results characterize the influence of connectivity - a property defined based on $S$ - on the dynamics and solutions obtained by the gradient flow. The results are supported both theoretically and numerically. Strengths: - The results seem to be correct. Although I did not check all the details, there are certain lemmas/theorems that I checked carefully and I did not find any serious problem. - The presentation is excellent. Many definitions are followed by examples and/or remarks to provide intuitions/insightful discussion. The experiments are quite convincing. Weaknesses: - The main concern is the interest of the problem: why should we care about data connectivity (in the sense of this paper)? I think the motivation of this study needs to be further justified. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Major questions/suggestions: - I wonder under which regime of $n$ (the size of the matrix) and $m$ (the number of observed entries) do we have more than/exactly one connected component? I think this question is important to justify the motivation of this work. 2. Minor questions/suggestions: - The experiments in Section 4 need to be provided with more details: What is the algorithm used? If it is gradient descent, what is the learning rate? Or the solution is obtained by solving an ODE. 
- Throughout the introduction, the term "connectivity" is repeatedly used but its description or definition is not provided (it is delayed until Section 3). The authors might consider describing/defining this term earlier to improve the readability. - Line 612: The second inequality should be $\|w^T B\|\_F^2$ instead of $\|w^T B\|\_{S_x}^2$. Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: There are no particular limitations or potential negative impacts of this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: * **Point 1: Why should we care about data connectivity (in the sense of this paper)? I think the motivation of this study needs to be further justified.** * Reply: We appreciate the reviewer's question regarding the importance of data connectivity in our study. As the title of our paper suggests, "Connectivity Shapes Implicit Regularization in Matrix Factorization Models for Matrix Completion", connectivity plays a crucial role in understanding the implicit regularization behavior of matrix factorization models. This concept is closely related to recent research on nuclear norm minimization and rank minimization mentioned in the Introduction and Related Works. - From an experimental perspective, our initial motivation stemmed from extensive experiments exploring the conditions under which low-rank tendencies emerge or fail to emerge. Through a series of simple experiments and analysis of their dynamics, we discovered that the connectivity of the observed data played a pivotal role, as illustrated in the Introduction. - Theoretically, the ongoing debate about whether the implicit regularization of matrix factorization models tends towards low nuclear norm or low rank necessitates a clear definition of the conditions that determine these outcomes. Our experiments provided the initial motivation for developing a theoretical framework. In fact, connectivity is closely related to whether the dynamical system can be decoupled into several parts (please refer to Proposition A.4 in Appendix A), which inspired our subsequent theoretical investigations. This study thus bridges a crucial gap in our understanding of how data structure influences the behavior of matrix factorization models, providing insights that are valuable for both theoretical analysis and practical applications in matrix completion tasks.
* **Point 2: I wonder under which regime of $n$ (the size of the matrix) and $m$ (the number of observed entries) do we have more than/exactly one connected component? I think this question is important to justify the motivation of this work.** * Reply: We thank the reviewer for this insightful question about the relationship between matrix size, number of observations, and connectivity. To address this, we've added the following discussion in Appendix B: "Threshold function: For a square $n\times n$ matrix, the threshold for connectivity is $m = (n-1)^2 + 1$. At or below this threshold, disconnected patterns are possible; above it, a single connected component is guaranteed. The intuition is that for a square $n\times n$ matrix, if the upper-left $(n-1)\times (n-1)$ sub-block is fully observed, and the last $(n,n)$ position is observed, this represents the case with the maximum number of observations while still being disconnected. If one more entry is observed, the entire observation set becomes connected. However, if there are fewer than $m = (n-1)^2 + 1$ observations, both connected and disconnected situations may occur, depending on the specific sampling pattern." * **Point 3: The experiments in Section 4 need to be provided with more details: What is the algorithm used? If it is gradient descent, what is the learning rate? Or the solution is obtained by solving an ODE.** * Reply: We appreciate the reviewer's request for more experimental details. We have added the following information in Section 3 to briefly introduce the experimental setup, with further details provided in Appendix B.1: "The training dynamics follow the gradient flow of $R_S(\boldsymbol{\theta})$: $$ \frac{\mathrm{d} \boldsymbol{\theta}}{\mathrm{d} t}=-\nabla_{\boldsymbol{\theta}} R_S(\boldsymbol{\theta}), \quad \boldsymbol{\theta}(0)=\boldsymbol{\theta}_0 . $$ In all experiments, $\theta_0 \sim N(0, \sigma^2)$ is initialized from a Gaussian distribution with mean 0 and small variance $\sigma^2$.
We use gradient descent with a small learning rate to approximate the gradient flow dynamics (please refer to Appendix B.1 for the detailed experimental setup)." * **Point 4: Throughout the introduction, the term "connectivity" is repeatedly used but its description or definition is not provided (it is delayed until Section 3). The authors might consider describing/defining this term earlier to improve the readability.** * Reply: We thank the reviewer for this suggestion to improve readability. We've added a brief description of connectivity in the Introduction: "Data connectivity, in the context of this paper, refers to the way observed entries in the matrix are linked through shared rows or columns. A set of observations is considered connected if there's a path between any two observed entries via other observed entries in the same rows or columns. This concept plays a crucial role in determining the behavior of matrix factorization models, as we will demonstrate throughout this paper." - **Point 5: Line 612: The second inequality should be $\\|w^\top B\\|^2_F$ instead of $\\|w^\top B\\|^2_{S_x}$.** * Reply: We sincerely thank the reviewer for catching this typographical error. We have corrected the inequality to $\\|w^\top B\\|^2_F$ as suggested. This correction ensures the mathematical accuracy of our presentation. We greatly appreciate the reviewer's thorough examination of our work. Your insightful comments and suggestions have significantly contributed to improving the clarity, precision, and overall quality of our presentation. --- Rebuttal Comment 1.1: Comment: Thanks the authors for their responses. I am quite satisfied with their answers, except for the second point: the threshold $(n - 1)^2 + 1$ seems too large for a graph to be connected. The classical random graph model yields a much better result, see [this link](https://mathoverflow.net/questions/60075/connectivity-of-the-erd%C5%91s-r%C3%A9nyi-random-graph). This is, however, just for information. 
I keep my score as it is and recommend acceptance.
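The connectivity threshold discussed in the comment above is easy to probe empirically. A minimal Monte-Carlo sketch (the helper functions are ours, not from the paper) that estimates how often a uniformly random set of $m$ observed entries of an $n\times n$ matrix yields a connected bipartite observation graph:

```python
import random
from collections import deque

def is_connected(n, observed):
    """BFS on the bipartite graph: nodes 0..n-1 are rows, n..2n-1 are columns."""
    adj = {v: [] for v in range(2 * n)}
    for i, j in observed:
        adj[i].append(n + j)
        adj[n + j].append(i)
    seen, queue = {0}, deque([0])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == 2 * n

def connected_probability(n, m, trials=200, seed=0):
    """Estimate P(connected) when m of the n*n entries are observed uniformly."""
    rng = random.Random(seed)
    entries = [(i, j) for i in range(n) for j in range(n)]
    hits = sum(is_connected(n, rng.sample(entries, m)) for _ in range(trials))
    return hits / trials

n = 10
# Only n observations give n edges, too few to span all 2n graph nodes: never connected.
print(connected_probability(n, n))          # 0.0
# A single missing entry leaves a complete bipartite graph minus one edge: always connected.
print(connected_probability(n, n * n - 1))  # 1.0
# Connectivity typically kicks in around ~n log n observations, far below (n-1)^2 + 1.
print(connected_probability(n, 60))
```

This matches the reviewer's point: the Erdős–Rényi-style threshold is on the order of $n\log n$ edges, well below $(n-1)^2+1$.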
Summary: This study empirically reveals that the connectivity of observed data significantly influences the implicit bias and identifies a hierarchy of intrinsic invariant manifolds in the loss landscape, providing a preliminary framework for understanding the mechanisms behind implicit regularisation in matrix factorisation models. Strengths: 1. The paper is clearly written.\ 2. The well-presented figures illustrate experimental results and findings, making them easier for readers to understand.\ 3. Differences from prior similar works have been properly addressed. Weaknesses: Please refer to the "Questions". Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Have you verified your findings on high-dimensional matrices with experiments?\ 2. What is the intuition behind the augmented matrix? See $\textit{Line 110}$.\ 3. What exactly is the "small initialisation" defined in the experiments? See $\textit{Line 164}$.\ Minor suggestion(s):\ $\textbf{A.}$ Eq. 3 after $\textit{Line 117}$: You have mentioned $\theta(0) = \theta_0$, which indicates the initialisation (if I understand correctly). However, the initialisation method is mentioned much earlier, at $\textit{Lines 36-37}$. It would be easier to follow if you brought these together.\ $\textbf{B.}$ I expect a detailed example of connectivity and disconnectivity in the appendix (see $\textit{Lines 561-564}$ and Fig. A1), which will make it easier for a wider audience to understand. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: * **Point 1: Have you verified your findings on high dimensional matrix with experiments?** * Reply: We sincerely appreciate the reviewer's important question regarding the scalability of our findings. While our main results focus on smaller matrices for clarity of presentation, we have indeed conducted additional experiments on higher-dimensional matrices to verify the broader applicability of our findings. We've added a new subsection in the Appendix titled "B.7 High-Dimensional Experiments" with the following content: "To validate the scalability of our findings, we extended our experiments to higher-dimensional matrices. We conducted tests on randomly generated $20\times 20$ matrices, employing both connected (Fig. 2) and disconnected (Fig. 3) sampling patterns, while monitoring rank evolution during training. Our results consistently corroborated the main findings (please refer to high-dimensional experiments in the attached PDF): - (i) Connected observations converged to optimal low-rank solutions. - (ii) Disconnected observations yielded higher-rank solutions. - (iii) The Hierarchical Invariant Manifold Traversal (HIMT) process was observed in both connected and disconnected scenarios." * **Point 2: What's the intuition of the augmented matrix? See Line 110?** * Reply: We appreciate the request for clarification on the augmented matrix. The augmented matrix $W_{\text{aug}}$ is indeed a key construct in our analysis, and we recognize the need for a more intuitive explanation. We've added the following explanation after Line 110: "The augmented matrix $W_\text{aug}$ plays a crucial role in our subsequent analysis, particularly in characterizing the intrinsic invariant manifolds $\Omega_k$ of the optimization process. Specifically, it allows us to establish the relationship $\text{rank}(A) = \text{rank}(B^\top) = \text{rank}(W_{\text{aug}})$, which is important to understanding the invariance property under gradient flow." 
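As a complement to the reply to Point 1, the training recipe itself fits in a few lines. An illustrative, much smaller-scale sketch (the sizes, initialization variance, and learning rate below are our own choices, made so the example converges quickly, not the paper's settings) of small-initialization gradient descent on a connected completion problem:

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-1 ground truth; a *connected* observation pattern (only entry (2,2) unobserved).
M = np.outer([1.0, 2.0, 3.0], [1.0, 0.5, 2.0])
mask = np.ones((3, 3))
mask[2, 2] = 0.0

# Two-factor model W = A @ B, small Gaussian initialization.
A = rng.normal(0.0, 1e-3, (3, 3))
B = rng.normal(0.0, 1e-3, (3, 3))

lr = 0.02
for _ in range(30_000):
    R = (A @ B - M) * mask            # residual on the observed entries only
    gA, gB = R @ B.T, A.T @ R
    A -= lr * gA
    B -= lr * gB

W = A @ B
s = np.linalg.svd(W, compute_uv=False)
print(np.round(s, 4))   # one dominant singular value: a (numerically) rank-1 solution
print(W[2, 2])          # approaches the rank-1 completion value M[2,2] = 6 as init -> 0
```

In line with the connected-case findings, the unobserved entry is filled in consistently with the unique rank-1 completion.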
* **Point 3: What exactly "small initialization" you defined in the experiments? See Line 164.** * Reply: We thank the reviewer for pointing out the need for a clearer definition of "small initialization". To address this, we've revised the description in Line 164 as follows: "For a small random initialization (Gaussian distribution with mean 0 and variance $10^{-16}$), the loss curves exhibit a steady, stepwise decline" * **Point 4: You have mentioned $\theta(0) = \theta_0$, which indicates the initialization (if I understand correctly). However, the initialization method was mentioned far from here at Line 36-37. It could be easier to follow if you can optimise this.** * Reply: We appreciate the reviewer's suggestion to improve the flow and clarity of our presentation regarding initialization. We have revised the relevant section as follows: "The training dynamics follow the gradient flow of $R_S(\boldsymbol{\theta})$: $$ \frac{\mathrm{d} \boldsymbol{\theta}}{\mathrm{d} t}=-\nabla_{\boldsymbol{\theta}} R_S(\boldsymbol{\theta}), \quad \boldsymbol{\theta}(0)=\boldsymbol{\theta}_0 . $$ In all experiments, $\theta_0 \sim N(0, \sigma^2)$ is initialized from a Gaussian distribution with mean 0 and small variance $\sigma^2$. We use gradient descent with a small learning rate to approximate the gradient flow dynamics (please refer to Appendix B.1 for the detailed experimental setup)." * **Point 5: I expect a detailed example of connectivity and disconnectivity in the appendix (see Line 561 - 564 and Fig. A1), which will make it easier for a wider audience to understand.** * Reply: We appreciate the reviewer's suggestion to provide a more detailed example of connectivity and disconnectivity. We've expanded the examples in the appendix (Lines 561-564 and Fig. A1) with a step-by-step explanation: "Examples of connectivity and disconnectivity. 
- Consider three matrices to be completed, each obtained by adding one more observation to the previous matrix: $$ \boldsymbol{M}_1=\left[\begin{array}{ccc} 1 & 2 & \star \\\\ 3 & \star & \star \\\\ \star & \star & 5 \end{array}\right], \boldsymbol{M}_2=\left[\begin{array}{ccc} 1 & 2 & \star \\\\ 3 & 4 & \star \\\\ \star & \star & 5 \end{array}\right], \boldsymbol{M}_3=\left[\begin{array}{ccc} 1 & 2 & \star \\\\ 3 & 4 & \star \\\\ 6 & \star & 5 \end{array}\right]. $$ - The corresponding observation matrices $\boldsymbol{P}$ are: $$ \boldsymbol{P}_1=\left[\begin{array}{ccc} 1 & 1 & 0 \\\\ 1 & 0 & 0 \\\\ 0 & 0 & 1 \end{array}\right], \boldsymbol{P}_2=\left[\begin{array}{ccc} 1 & 1 & 0 \\\\ 1 & 1 & 0 \\\\ 0 & 0 & 1 \end{array}\right], \boldsymbol{P}_3=\left[\begin{array}{ccc} 1 & 1 & 0 \\\\ 1 & 1 & 0 \\\\ 1 & 0 & 1 \end{array}\right]. $$ - And by Definition 1, the corresponding adjacency matrices are: $$ \boldsymbol{A}_1=\left[\begin{array}{cc} \boldsymbol{0} & \boldsymbol{P}_1^\top \\\\ \boldsymbol{P}_1 & \boldsymbol{0} \end{array}\right], \boldsymbol{A}_2=\left[\begin{array}{cc} \boldsymbol{0} & \boldsymbol{P}_2^\top \\\\ \boldsymbol{P}_2 & \boldsymbol{0} \end{array}\right], \boldsymbol{A}_3=\left[\begin{array}{cc} \boldsymbol{0} & \boldsymbol{P}_3^\top \\\\ \boldsymbol{P}_3 & \boldsymbol{0} \end{array}\right]. $$ - Given the adjacency matrix $A_i$, according to Definition 1, we can obtain a bipartite graph $G_{M_i}$, which we refer to as the associated observation graph. Fig. A1 in Appendix A illustrates the associated graphs $G_{M_i}$, from which we can see that $M_1$ is disconnected, with its associated observation graph consisting of two connected components. $M_2$ is also disconnected, but each connected component of its associated observation graph forms a complete bipartite subgraph. In contrast, $M_3$ is connected, and its associated observation graph consists of a single connected component." 
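The classification of $M_1$, $M_2$, $M_3$ above can be checked mechanically. A short sketch (the helper name is ours) that builds the bipartite observation graph of each pattern $\boldsymbol{P}_i$ and counts its connected components:

```python
import numpy as np
from collections import deque

def observation_components(P):
    """Number of connected components of the bipartite observation graph:
    row node i is linked to column node j whenever entry (i, j) is observed."""
    m, n = P.shape
    adj = {v: [] for v in range(m + n)}
    for i in range(m):
        for j in range(n):
            if P[i, j]:
                adj[i].append(m + j)
                adj[m + j].append(i)
    seen, comps = set(), 0
    for start in range(m + n):
        if start in seen:
            continue
        comps += 1
        seen.add(start)
        queue = deque([start])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
    return comps

P1 = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 1]])
P2 = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 1]])
P3 = np.array([[1, 1, 0], [1, 1, 0], [1, 0, 1]])
print([observation_components(P) for P in (P1, P2, P3)])  # [2, 2, 1]
```

This reproduces the stated classification: $M_1$ and $M_2$ are disconnected (two components each), while $M_3$ is connected.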
We sincerely appreciate the reviewer's thorough examination of our work, which has significantly contributed to improving the clarity and precision of our presentation. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' thorough response. After careful consideration, I have decided to raise my score.
Summary: This paper studies the training dynamics of matrix factorisation for matrix completion, optimising the vanilla mean-squared-error loss function via gradient descent with small initialisation. The authors characterise the observation pattern of the underlying matrix via the connectivity of its associated bipartite graph, and show both empirically and theoretically that this plays an important role in the implicit regularisation of the learned solution. The paper provides empirical evidence that in both the connected and disconnected cases, the row/column spaces of the factor matrices remain aligned at all times, and that the loss curves exhibit a steady step-wise decline as the trajectory traverses solutions of increasing rank. In the connected case, it is observed that the learned solutions of each rank are near optimal; in the disconnected case, however, suboptimal solutions are learned. These empirical observations are explained by theoretical results which examine the training dynamics of the continuous dynamical system following the gradient flow of the loss function with infinitesimal initialisation. The authors begin by defining a Hierarchical Intrinsic Invariant Manifold, and Proposition 1 describes how the gradient flow behaves on these manifolds and that they form a hierarchy. Proposition 2 shows that in the special case when the observation graph is disconnected, the gradient flow dynamics are restricted to sub-manifolds. Theorem 1 (which extends an analogous result for symmetric matrix factorization models in Li et al. (2020)) shows that the critical points of the gradient flow are either strict saddle points or global minima, and Theorem 2 shows that the gradient flow traverses the hierarchy of Intrinsic Invariant Manifolds (or sub-manifolds in the disconnected case), a phenomenon coined "Hierarchical Invariant Manifold Traversal". 
The flagship results of the paper are Theorems 3 and 4, which show that in the connected case, the gradient flow converges to the minimum rank solution, and in a special disconnected case (disconnected with complete bipartite components), the gradient flow converges to the minimum nuclear norm solution. Strengths: This paper makes a significant and original contribution to the understanding of the implicit bias of gradient descent for matrix factorisation. The key strength of this paper is showing, both empirically and theoretically, how the connectivity of the observation graph impacts the training dynamics, and how this affects the final solution. Distinguishing the connected and disconnected cases is novel and enlightening, and the detail with which the training dynamics are theoretically characterised is eye-opening. The paper is beautifully written: the problem is set out very clearly, and the relevant existing literature is summarised comprehensively and concisely. The reader is guided through simple toy experiments which precisely demonstrate the phenomenon which is described. The theoretical results then provide a satisfying explanation for the empirical observations. I really felt like I had gained a deep understanding of the training dynamics for this problem by the time I had finished reading this paper. While the theoretical results are technical, the intuition I gained in the first half of the paper really helped me to understand the details. In addition, the results and the proofs are written very clearly and accurately. Weaknesses: I find this work hard to fault. While the theoretical results could be criticised for being fairly complex, having spent some time understanding them, the payoff makes it absolutely worthwhile. I don't see that they could be made simpler. Technical Quality: 4 Clarity: 4 Questions for Authors: - What are the three bars for each matrix in Fig 2b? What is the thick vertical line? 
- In Fig 3d, is there reason to expect the norms of the gradients to be on the same scale as the difference between the matrix at the saddle points and the optimal approximation for the corresponding rank? They seem to be close to the gradient norms at the saddle points. Is this a coincidence or is there a reason to expect this? - In Fig 3c and 4e, loss trajectories for different sampling patterns are plotted. If I understand correctly, in Fig 3a for example, each line corresponds to a different entry of the matrix being unobserved. In this case is $\star = 1.2$, so the underlying rank is three, or does it take a different value? Is it the case that the matrix has rank three if and only if the unobserved entry takes a specific value? - In Fig 4f, are you counting position indexes from zero? In Section 3 (lines 97 and 99), you count position indexes from one. - The red dot in Fig 4e would perhaps be best a little larger and plotted on top of the other lines. At present it is quite hard to see. - Assumption 1 is equivalent to the top singular value of $-\delta \mathbf{M}$​ being unique. This seems like a more intuitive way to write this assumption. Is there a reason you chose to write the assumption in terms of the eigenvalues of the symmetric dilation matrix? - In 231, it is claimed that "in the connected case, at each level [the] model reaches an optimal solution". Is this implied by one of the theorems, by previous work, or is this an empirical observation? Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The authors are upfront about the limitations of their work in the conclusions, which present exciting directions for future research. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: * **Point 1: I find this work hard to fault. While the theoretical results could be criticised for being fairly complex, having spent some time understanding them, the payoff makes it absolutely worthwhile.** * Reply: We are deeply grateful for the reviewer's careful examination and understanding of our theoretical results. Your recognition of the value and payoff of our complex theoretical framework is greatly appreciated. * **Point 2: What are the three bars for each matrix in Fig 2b? What is the thick vertical line?** * Reply: We appreciate the reviewer's attention to detail regarding Figure 2b. To clarify: - The three bars for each matrix represent the singular values of the learned matrix. - The thick vertical line separates the significantly nonzero singular values, whose count serves as the empirical rank. To enhance clarity, we have updated the caption of Figure 2b as follows: "Figure 2(b): Singular values of the learned matrices for $M_1, M_2, M_3$. Each set of three bars represents the singular values of a matrix. The thick vertical lines separate the significantly nonzero singular values, whose count serves as the empirical rank." * **Point 3: In Fig 3d, is there reason to expect the norms of the gradients to be on the same scale as the difference between the matrix at the saddle points and the optimal approximation for the corresponding rank?** * Reply: We appreciate the reviewer's insightful observation about the similarity between gradient norms and matrix differences at saddle points. This relationship indeed stems from the structure of the optimization and can be more precisely characterized. As $R_S(\theta)$ approaches $R_S(\theta^*)$, we observe that $\nabla R_S(\theta)$ also approaches zero. This relationship can be formalized under certain conditions: 1. For $L$-smooth functions, we have the inequality: $\\|\nabla R_S(\theta)\\|^2 \leq 2L(R_S(\theta) - R_S(\theta^*))$, where $L$ is the Lipschitz constant of the gradient. 2. 
For $\mu$-strongly convex functions, we also have the inequality (the Polyak-Łojasiewicz condition): $\\|\nabla R_S(\theta)\\|^2 \geq 2\mu(R_S(\theta) - R_S(\theta^*))$. In our matrix factorization setting, while the problem isn't globally strongly convex, it may exhibit local strong convexity restricted to invariant manifold $\Omega_k$ near optimal solutions. This local behavior possibly contributes to the observed similarity. * **Point 4: Clarification on Figures 3c and 4e. For Figure 3c, is it the case that the matrix has rank three if and only if the unobserved entry takes a specific value?** * Reply: We appreciate the reviewer's attention to detail regarding Figures 3c and 4e. To clarify: 1. The dashed lines in both figures correspond to different sampling patterns. For Figure 3(c), the matrix indeed has rank three if and only if the unobserved entry takes a specific value. This is because, given 15 observations, the rank-3 matrix is uniquely determined. 2. To improve clarity, we have updated the captions of Figures 3(c) and 4(e): "Figure 3(c): Training loss for 16 connected sampling patterns in a $4\times 4$ matrix, each covering 1 element and observing the remaining 15 in a fixed rank-3 matrix." "Figure 4(e): Training loss for 9 disconnected sampling patterns in a $3\times 3$ matrix, each covering 4 elements and observing the remaining 5 in a fixed rank-1 matrix." * **Point 5: In Fig 4f, are you counting position indexes from zero? In Section 3 (lines 97 and 99), you count position indexes from one.** * Reply: We sincerely thank the reviewer for pointing out this inconsistency. Indeed, in Figure 4f, we counted position indexes from zero, while in Section 3 we counted from one. To maintain consistency throughout the paper, we have updated Figure 4f to count from one. We've revised the caption of Figure 4f as follows: "Figure 4f: Learned values at symmetric positions $(1, 2)$ and $(2, 1)$ under varying initialization scales (zero mean, varying variance)." 
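As a quick sanity check of the two gradient-norm inequalities quoted in the reply to Point 3, one can verify them numerically on a strongly convex quadratic, where both the smoothness and the Polyak-Łojasiewicz bounds hold exactly (the example below is ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# f(x) = 0.5 * x^T H x with spectrum in [mu, L]: mu-strongly convex and L-smooth.
mu, L = 0.5, 4.0
eigs = np.array([mu, 1.0, 2.0, L])
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))
H = Q @ np.diag(eigs) @ Q.T

f_star = 0.0  # minimum value, attained at x = 0
for _ in range(1000):
    x = rng.normal(size=4)
    f = 0.5 * x @ H @ x
    g2 = np.linalg.norm(H @ x) ** 2        # ||grad f(x)||^2
    # L-smoothness:        ||grad f||^2 <= 2 L  (f - f*)
    # Polyak-Lojasiewicz:  ||grad f||^2 >= 2 mu (f - f*)
    assert 2 * mu * (f - f_star) - 1e-9 <= g2 <= 2 * L * (f - f_star) + 1e-9
print("both bounds hold at 1000 random points")
```

In the eigenbasis of $H$ the two bounds reduce to $\mu\lambda_i \le \lambda_i^2 \le L\lambda_i$, which is immediate from $\mu \le \lambda_i \le L$.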
* **Point 6: The red dot in Fig 4e would perhaps be best a little larger and plotted on top of the other lines.** * Reply: We appreciate this suggestion for improving the figure's clarity. We have implemented this change by increasing the size of the red dot and ensuring it's plotted on top of the other lines in Figure 4e. This modification enhances the visibility of this key data point in the figure. * **Point 7: Assumption 1 is equivalent to the top singular value of $-\delta M$ being unique. This seems like a more intuitive way to write this assumption.** * Reply: We thank the reviewer for this insightful suggestion. We agree that reformulating Assumption 1 in terms of the top singular value of $-\delta M$ is indeed more intuitive. We have revised Assumption 1 accordingly: "Assumption 1 (Unique Top Singular Value): Let $\delta M=\left(A_c B_c-M\right)_{S_x}$ be the residual matrix at the critical point $\theta_c=\left(A_c, B_c\right)$. Assume that the largest singular value of $\delta M$ is unique." * **Point 8: In 231, it is claimed that "in the connected case, at each level [the] model reaches an optimal solution". Is this implied by one of the theorems, by previous work, or is this an empirical observation?** * Reply: The statement "in the connected case, at each level the model reaches an optimal solution" is based on our empirical observations. Our experiments consistently show this behavior in connected cases, as illustrated in Figure 3. To improve precision, we have revised the statement in line 231 to: "In the connected case, at each level we observe that the model reaches an optimal solution (Figure 3)." This revision clarifies that our claim is based on empirical evidence rather than a theoretical guarantee. We sincerely appreciate the reviewer's thorough examination of our work, which has significantly contributed to improving the clarity and precision of our presentation. 
--- Rebuttal Comment 1.1: Comment: I thank the authors for their clarifications and I continue to strongly support this paper.
Summary: This paper attempts to present a unified understanding of when and how matrix factorization models have different implicit regularization effects. Their key finding is that connectivity of the observed data plays an important role: (i) in the connected cases, the model learns the lowest-ranked solution and (ii) in the disconnected case, it seems to sort of depend, but generally does not find the minimum nuclear norm solution. They identify invariant manifolds in the loss landscape that guide the training trajectory from lower-rank to higher-rank solutions. They support a few of their findings with theoretical results. Strengths: - Overall, I think this paper presents some neat findings while addressing an interesting question. - The paper is well-written and supported with examples to help elucidate the definitions / settings. - The paper fits well into the literature of implicit regularization, and helps unify some of the observations in the literature as well, such as greedy low-rank learning. Weaknesses: - I am unsure how restrictive the assumptions used to prove Theorem 2 are. The authors do make a remark about the assumptions, but it is still not immediately clear to me that these are fair assumptions to make. - As the authors mention, it would be interesting (and important) future work to show how the connectivity of the observed data affects attainment across the invariant manifolds. Technical Quality: 3 Clarity: 3 Questions for Authors: - I am trying to understand Figure 4(e): do the different dashed lines correspond to different variances of the initialization? In that case, I am assuming that the figure is trying to show that despite the initialization scales, the model always learns a sub-optimal solution in this disconnected case. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I have addressed a few limitations in the weaknesses section. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: * **Point 1: I am unsure how restrictive the assumptions used to prove Theorem 2 are.** * Reply: We appreciate the reviewer's concern regarding the restrictiveness of our assumptions. We have taken steps to clarify and justify these assumptions: 1. Regarding Assumption 1, as other reviewers have noted, it can be more concisely stated as the uniqueness of the largest singular value of the residual matrix. We have revised it as follows: "Assumption 1 (Unique Top Singular Value): Let $\delta M=\left(A_c B_c-M\right)_{S_x}$ be the residual matrix at the critical point $\theta_c=\left(A_c, B_c\right)$. Assume that the largest singular value of $\delta M$ is unique." 2. We have also updated the accompanying remark: "Remark: Assumption 1 ensures that upon departing from a critical point $\theta_c$, the trajectory is constrained to escape along a single dominant eigendirection corresponding to the largest singular value. This assumption holds for randomly generated matrices with probability 1, making it a reasonable condition in most practical scenarios." 3. To illustrate the implications when Assumption 1 does not hold, we've added an example in the appendix: "Consider the $2\times 2$ matrix completion problem: $M = \begin{bmatrix}2 & \star\\\\ \star & 2\end{bmatrix}$. In this case, the two numbers on the diagonal are identical, which causes the maximum singular value of the residual matrix at the origin to be non-unique. Consequently, the training process will jump directly from the rank-0 to the rank-2 invariant manifold, thereby missing the lowest-rank solution of rank 1. This behavior is demonstrated in the attached PDF (Figure 1), which shows experimental results for this scenario." 4. To enhance understanding of Theorem 2, we've included a detailed proof sketch: "Proof sketch: We analyze the local dynamics in the vicinity of the critical point $\theta_c$. 
The nonlinear dynamics can be approximated linearly near $\theta_c$: $\frac{d\theta}{dt} \approx H(\theta - \theta_c)$, where $H = -\nabla^2R_S(\theta_c)$ is the negative Hessian matrix. For the exact linear approximation, the solution is: $\theta(t) = e^{tH}(\theta_0 - \theta_c) + \theta_c$. Let $\lambda_1 > \lambda_2 > \cdots > \lambda_s$ be the eigenvalues of $H$, with corresponding eigenvectors $q_{ij}$. We can express $\theta(t)$ as: $\theta(t) = \sum_{i=1}^s \sum_{j=1}^{l_i} e^{\lambda_i t}\langle\theta_0 - \theta_c, q_{ij}\rangle q_{ij} + \theta_c$. For sufficiently large $t_0$, the dynamics follow a dominant-eigenvalue trajectory: $\theta(t_0) = \sum_{j=1}^{l_1} e^{\lambda_1 t_0}\langle\theta_0 - \theta_c, q_{1j}\rangle q_{1j} + O(e^{\lambda_2t_0}).$ Through detailed analysis of the eigenvalues and eigenvectors of the Hessian matrix (Lemmas A.2-A.4), we demonstrate that if the largest singular value of the residual matrix $\delta M$ at $\theta_c$ is unique and $\theta_c$ is a second-order stationary point within $\Omega$, the first principal component $\sum_{j=1}^{l_1} e^{\lambda_1 t_0}\langle\theta_0 - \theta_c, q_{1j}\rangle q_{1j}$ will correspond to an $\Omega_1$ invariant manifold. Consequently, escaping $\theta_c$ increases the rank by 1, entering $\Omega_{k+1}$." 5. Regarding Assumption 2, we've revised the remark to better explain its role: "To ensure the escape direction falls within the $\Omega_{k+1}$ invariant manifold, the Hessian's top eigenvectors must satisfy $\text{rank}(A) = \text{rank}(B^\top) = \text{rank}(W_{\text{aug}})$. The condition that $\theta_c$ is a second-order stationary point within $\Omega$ in Assumption 2 guarantees this Hessian structure. Our Assumption 2 is more general than conditions proposed by Li et al. (2020), as it remains valid across both connected and disconnected configurations. Empirical findings (Fig. 3 and Fig. 4) indicate that this assumption consistently holds in practical scenarios." 
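The escape-along-the-dominant-eigendirection step of the proof sketch above can be illustrated numerically. A minimal sketch with $\theta_c = 0$ and a generic $H$ of our own choosing (assuming, as in Assumption 1, a unique top eigenvalue):

```python
import numpy as np

rng = np.random.default_rng(1)

# H = -Hessian at the critical point: a strict saddle with a *unique* top
# eigenvalue, the analogue of Assumption 1 in the proof sketch.
lams = np.array([2.0, 1.0, 0.5])              # lambda_1 > lambda_2 > lambda_3
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # orthonormal eigenvectors q_i
theta0 = rng.normal(0.0, 1e-6, 3)             # small random initialization

def theta(t):
    """theta(t) = sum_i e^{lambda_i t} <theta0, q_i> q_i  (theta_c = 0 here)."""
    return sum(np.exp(lam * t) * (theta0 @ Q[:, i]) * Q[:, i]
               for i, lam in enumerate(lams))

for t in (0.0, 10.0, 30.0):
    d = theta(t) / np.linalg.norm(theta(t))
    print(t, abs(d @ Q[:, 0]))   # alignment with the top eigendirection tends to 1
```

As $t$ grows, the $e^{\lambda_1 t}$ term dominates and the normalized trajectory aligns with the top eigendirection, which is the single escape direction guaranteed by the uniqueness assumption.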
* **Point 2: It would be interesting (and important) future work to show how the connectivity of the observed data affects attainment across the invariant manifolds.** * Reply: We concur that investigating the relationship between data connectivity and attainment across invariant manifolds is a crucial direction for future research. Our planned approach includes a comprehensive analysis of the loss landscape structure at each invariant manifold level, with a particular focus on the characteristics of critical points. This investigation will provide valuable insights into how data connectivity influences the optimization trajectory and the ultimate attainment of low-rank solutions. * **Point 3: Do the different dashed lines correspond to different variances of the initialization?** * Reply: We appreciate the reviewer's attention to detail regarding Figure 4(e). To clarify: 1. The dashed lines in Figure 4(e) represent different disconnected sampling patterns, each covering 4 elements and observing the remaining 5 elements in a fixed rank-1 $3\times 3$ matrix. We comprehensively explored all nine possible disconnected sampling patterns with 5 observations. To enhance clarity, we have updated the caption of Figure 4(e): "Figure 4(e): Training loss for all nine disconnected sampling patterns in a $3\times 3$ matrix, each covering 4 elements and observing the remaining 5 in a fixed rank-1 matrix." 2. Furthermore, we have added a brief discussion in Section 5.2, Line 199: "In Figure 4(e), we fixed a rank-1 matrix and explored all nine disconnected sampling patterns with 5 observations. For each pattern, we conducted experiments with small initializations. The loss curves consistently indicate that in disconnected cases, the model learns a sub-optimal solution in the rank-1 manifold, ultimately resulting in a rank-2 solution. This demonstrates that regardless of the specific disconnected sampling pattern, the model fails to achieve the optimal low-rank solution." 
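The Figure 4(e) behaviour can be reproduced in miniature. An illustrative sketch (the sizes, initialization scale, and learning rate are our own choices, not the paper's) training on one disconnected pattern of a rank-1 matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-1 ground truth observed on a *disconnected* pattern: the top-left 2x2
# block plus the (2,2) entry (the pattern of M_2 / P_2 in the appendix example).
M = np.outer([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
mask = np.array([[1.0, 1.0, 0.0],
                 [1.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])

A = rng.normal(0.0, 1e-3, (3, 3))
B = rng.normal(0.0, 1e-3, (3, 3))
lr = 0.01
for _ in range(50_000):
    R = (A @ B - M) * mask            # residual on the observed entries only
    gA, gB = R @ B.T, A.T @ R
    A -= lr * gA
    B -= lr * gB

W = A @ B
s = np.linalg.svd(W, compute_uv=False)
print(np.round(s, 3))
# A rank-1 completion of these observations exists, but the two disconnected
# blocks are fitted separately, and the learned solution is typically
# (numerically) rank 2 rather than the rank-1 completion.
```

With small but finite initialization the unobserved cross entries are not pinned down by the observations, which is related to the dependence on initialization scale that Figure 4(f) examines.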
We sincerely appreciate the reviewer's thorough examination of our work. --- Rebuttal Comment 1.1: Title: Thank you for your detailed response Comment: Thank you for clearing up my confusion on the figure and the detailed response to the assumptions. I believe that my current assessment of the paper is correct and will keep my score, which recommends acceptance.
Rebuttal 1: Rebuttal: Dear Reviewers, We sincerely thank all reviewers for their thoughtful and insightful comments. We have carefully addressed every comment, and we believe that the reviewers' collective feedback has significantly improved the manuscript. To address the common concerns raised, we have made the following key improvements: 1. **Clarified Definitions:** We have refined the definition of connectivity and provided a more intuitive explanation of the augmented matrix, enhancing the overall readability of the paper. 2. **Enhanced Experimental Details:** We have added comprehensive information about our experimental setup, including initialization methods and training dynamics, in both the main text and Appendix B.1. 3. **Expanded Theoretical Justifications:** We have provided more detailed proof sketches and intuitive explanations for our theoretical results, particularly for Theorem 2. 4. **Improved Figures and Captions:** We have updated several figures (e.g., Fig. 2b, 3c, 4e, 4f) and their captions to provide clearer visualizations and explanations of our results. 5. **Added High-Dimensional Experiments:** To address concerns about scalability, we have included new experiments on 20x20 matrices, demonstrating the applicability of our findings to higher-dimensional problems. 6. **Strengthened Motivation:** We have elaborated on the importance of data connectivity in shaping implicit regularization, providing a stronger justification for our study. 7. **Corrected Mathematical Notations:** We have fixed minor typographical errors in mathematical expressions to ensure accuracy. 8. **Extended Appendix:** We have added more detailed examples and explanations in the appendix, particularly regarding connectivity and disconnectivity. Additionally, we have included a one-page PDF in the attachment presenting key experimental results, including the example of coincident top eigenvalues and the high-dimensional experiments. 
We believe these revisions comprehensively address the reviewers' concerns while maintaining the core contributions of our work. The revised manuscript now offers a clearer, more rigorous, and more insightful exploration of how connectivity shapes implicit regularization in matrix factorization models for matrix completion. We sincerely hope that the revised manuscript now satisfies the reviewers' requirements, and we hereby submit it for publication consideration. We are grateful for the opportunity to improve our work based on such valuable feedback. Best regards, The Authors Pdf: /pdf/82de6ce88aa00124b5ebda5d948c472428ebd0e4.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper systematically investigates the implicit regularization of matrix factorization for solving matrix completion problems. The authors find empirically that the connectivity of observed data plays a crucial role in the implicit bias, with a transition from low nuclear norm to low rank as data becomes more connected with increased observations. They identify a hierarchy of intrinsic invariant manifolds in the loss landscape that guide the training trajectory to evolve from low-rank to higher-rank solutions. The optimization trajectory follows a Hierarchical Invariant Manifold Traversal (HIMT) process, generalizing the characterization of Li et al. (2020), whose proposed Greedy Low-Rank Learning (GLRL) algorithm corresponds only to the connected case. Regarding the minimum nuclear norm regularization, the authors establish conditions that provide guarantees closely aligned with the empirical findings, and they present a dynamic characterization condition that ensures the attainment of the minimum rank solution. Strengths: - The paper systematically studies the training dynamics and implicit regularization of matrix factorization for matrix completion, considering both connected and disconnected cases, which provides a more comprehensive understanding of the model. - The paper identifies the hierarchical invariant manifolds in the loss landscape and characterizes the training trajectory, providing a theoretical basis for understanding the model's behavior. - The authors conduct extensive experiments to support their findings, demonstrating the influence of data connectivity on the implicit regularization and the training dynamics of the matrix factorization model. Weaknesses: - In some cases, an extremely small initialization is required, which may potentially impact the training speed. 
- The proofs of the theoretical results are quite complex and may be difficult to follow for readers who are not familiar with the mathematical background. A more intuitive explanation or proof sketch could help readers better understand the key ideas. - It is suggested to provide justifications to demonstrate the reasonableness of Assumption 1. Technical Quality: 3 Clarity: 3 Questions for Authors: No Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
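The connectivity-dependent low-rank bias summarized above can be reproduced in miniature. The sketch below is our own illustration, not the authors' code: the matrix, observation mask, and hyperparameters are all assumptions chosen for the demo. Gradient descent on a factorized model $f = AB$ with small initialization over a connected 2x2 observation pattern typically converges to the minimum-rank completion.

```python
import numpy as np

# Our illustration (not the authors' code): gradient descent on a factorized
# model f = A @ B with small initialization over a *connected* observation
# pattern tends to recover the minimum-rank completion.
rng = np.random.default_rng(0)

M = np.array([[1.0, 2.0],
              [2.0, 4.0]])             # ground-truth rank-1 matrix
mask = np.array([[1.0, 1.0],
                 [1.0, 0.0]])          # entry (1, 1) is unobserved

A = 1e-4 * rng.standard_normal((2, 2))
B = 1e-4 * rng.standard_normal((2, 2))
lr = 0.02

for _ in range(30000):
    R = (A @ B - M) * mask             # residual on observed entries only
    A, B = A - lr * R @ B.T, B - lr * A.T @ R

completed = A @ B
print(completed[1, 1])                 # typically close to 4, the rank-1 completion
```

Here the observed entries admit an exact rank-1 fit with fourth entry 4, so a near-rank-1 solution indicates the implicit low-rank bias in the connected case.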
Rebuttal 1: Rebuttal: * **Point 1: In some cases, an extremely small initialization is required, which may potentially impact the training speed.** * Reply: Our empirical analysis in Appendix B.4 (Fig. B6) demonstrates the relationship between observation magnitude differences and the required initialization scale. To elucidate the trade-off between initialization scale and training speed, we have expanded Appendix B.4 with the following details: 1. Theoretical optimization insights: "For the matrix factorization model $f_\theta = AB$, the Hessian matrix at 0 has strictly negative eigenvalues, making the origin a strict saddle point. Under small random initialization, gradient descent escapes this saddle at an exponential rate. Our Theorem 1 ensures that only strict saddle points and global minima exist as critical points in the loss landscape. Subsequent saddle points on invariant manifolds are also strict, facilitating exponential escape speeds and maintaining an efficient optimization process." 2. Practical low-rank attainment: "Achieving the lowest rank solution requires parameters to escape the saddle point along the unique top eigen-direction and keep evolving near the invariant manifold $\Omega_k$. Fig. B6 in Appendix B.4 illustrates the relationship between the required initialization scale and observation magnitude differences. For the lowest possible rank solution with large numerical magnitude differences in observations, an extremely small initialization is necessary. However, for approximately low-rank solutions (some singular values are relatively small), which may be sufficient in practice, a relatively small initialization suffices without significantly impacting training speed." * **Point 2: The proofs of the theoretical results are quite complex and may be difficult to follow for readers who are not familiar with the mathematical background. 
A more intuitive explanation or proof sketch could help readers better understand the key ideas.** * Reply: We acknowledge that Theorem 2 is a comprehensive result with a relatively complex proof. To enhance understanding, we have added a more detailed proof sketch for Theorem 2: "Proof sketch: We analyze the local dynamics in the vicinity of the critical point $\theta_c$. The nonlinear dynamics can be approximated linearly near $\theta_c$: $\frac{d\theta}{dt} \approx H(\theta - \theta_c)$, where $H = -\nabla^2R_S(\theta_c)$ is the negative Hessian matrix. For exact linear approximation, the solution is: $\theta(t) = e^{tH}(\theta_0 - \theta_c) + \theta_c$. Let $\lambda_1 > \lambda_2 > ... > \lambda_s$ be the eigenvalues of $H$, with corresponding eigenvectors $q_{ij}$. We can express $\theta(t)$ as: $\theta(t) = \sum_{i=1}^s \sum_{j=1}^{l_i} e^{\lambda_i t}\langle\theta_0 - \theta_c, q_{ij}\rangle q_{ij} + \theta_c$. For sufficiently large $t_0$, the dynamics follows a dominant eigenvalue trajectory: $\theta(t_0) = \sum_{j=1}^{l_1} e^{\lambda_1 t_0}\langle\theta_0 - \theta_c, q_{1j}\rangle q_{1j} + O(e^{\lambda_2t_0}).$ Through detailed analysis of the eigenvalues and eigenvectors of the Hessian matrix (Lemmas A.2-A.4), we demonstrate that if the largest singular value of the residual matrix $\delta M$ at $\theta_c$ is unique and $\theta_c$ is a second-order stationary point within $\Omega$, the first principal component $\sum_{j=1}^{l_1} e^{\lambda_1 t_0}\langle\theta_0 - \theta_c, q_{1j}\rangle q_{1j}$ will correspond to an $\Omega_1$ invariant manifold. Consequently, escaping $\theta_c$ increases the rank by 1, entering $\Omega_{k+1}$. We defer the details to Appendix A." * **Point 3: It is suggested to provide justifications to demonstrate the reasonableness of Assumption 1.** * Reply: 1. 
We appreciate this suggestion and have revised Assumption 1 and its accompanying remark to provide better justification: "Assumption 1 (Unique Top Singular Value): Let $\delta M=\left(A_c B_c-M\right)_{S_x}$ be the residual matrix at the critical point $\theta_c=\left(A_c, B_c\right)$. Assume that the largest singular value of $\delta M$ is unique." "Remark: Assumption 1 ensures that upon departing from a critical point $\theta_c$, the trajectory is constrained to escape along a single dominant eigendirection corresponding to the largest singular value. This assumption holds for randomly generated matrices with probability 1, making it a reasonable condition in most practical scenarios." 2. To further illustrate the implications when this assumption does not hold, we have added a simple example in the appendix: "Consider the $2\times 2$ matrix completion problem: $M = \begin{bmatrix}2 & \star\\\\ \star & 2\end{bmatrix}$. In this case, the two numbers on the diagonal are identical, which causes the maximum singular value of the residual matrix at the origin to be non-unique. Consequently, the training process will jump directly from the rank-0 to the rank-2 invariant manifold, thereby missing the lowest rank solution of rank 1. This behavior is demonstrated in the attached PDF (Figure 1), which shows experimental results for this scenario." We sincerely appreciate the reviewer's thorough examination of our work, which has significantly contributed to improving the clarity and precision of our presentation. --- Rebuttal Comment 1.1: Title: Response to rebuttals Comment: Thank you for the rebuttals. The authors' responses to my main concerns are generally satisfactory to me. Alongside the reviews of other reviewers, I am inclined to maintain my positive assessment of this submission.
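The degenerate $2\times 2$ example from Point 3 is easy to verify numerically. Below is our own minimal sketch (not the authors' code): at the origin, the residual of $M = \mathrm{diag}(2, 2)$ with only the diagonal observed has a repeated top singular value, so Assumption 1 fails.

```python
import numpy as np

# Our sketch (not the authors' code): for M = [[2, *], [*, 2]] with only the
# diagonal observed, the residual matrix at the origin is -diag(2, 2), whose
# largest singular value is repeated -- the case excluded by Assumption 1.
M_obs = np.array([[2.0, 0.0],
                  [0.0, 2.0]])      # unobserved entries treated as 0
residual = 0.0 - M_obs              # A_c B_c = 0 at the origin
s = np.linalg.svd(residual, compute_uv=False)
print(s)                            # both singular values equal 2
```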
null
null
null
null
null
null
Exploring the Edges of Latent State Clusters for Goal-Conditioned Reinforcement Learning
Accept (poster)
Summary: This paper considers goal selection in goal-conditioned reinforcement learning. The core idea of this paper is to group states with small temporal distance into clusters, and then select goals that are on the cluster boundaries. In addition, the method gives priority to goal states that are accessible to the agent. Once the agent reaches the goal, a go-explore style exploration can be performed to potentially add new visited states to the clusters. The authors test their system across 5 different domains, and show improved performance over previous methods. Strengths: - This paper is well written and easy to understand. - The idea of learning state representations that can help group states and identify easy-to-reach states is novel and exciting. - The experiment is thorough and the results are promising. Weaknesses: - It is worth pointing out that the method is only evaluated in relatively simple domains, with the maximum state dimensions being 29. - See the question section. Technical Quality: 3 Clarity: 3 Questions for Authors: - Figure 9 seems confusing. It seems to imply that CE2 would bias the exploration direction towards task completion, which shouldn’t be the case considering that the training phase of CE2 is task agnostic. - For previous methods like MEGA or PEG, couldn’t you still use the temporal distance to select “next goal to explore”? Why is the clustering necessary here? - Is there a more rigorous way of understanding why “less explored regions are naturally adjacent to these boundaries”? This seems intuitive in the original state space but I’m not sure if it carries over to the latent state space. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitation is briefly discussed in the conclusion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your insightful feedback and constructive comments! **R1. State Dimensionality in the Experiment Benchmarks** The maximum state dimensions in our test suite exceed 29. Block rotation and pen rotation involve an anthropomorphic robotic hand with 24 joints. In total, the action space has 20 dimensions for absolute angular positions of the actuated joints and the observation space has 61 dimensions for information about the robot’s joint and block/pen states. **R2. Figure 9 seems confusing. It seems to imply that CE2 would bias the exploration direction towards task completion, which shouldn’t be the case considering that the training phase of CE2 is task agnostic.** We apologize for the confusion. Figure 9 was intended to visualize environment exploration in CE$^2$-G. This paper presents two training algorithms where CE$^2$ (Algorithm 2, Line 212) is for unsupervised exploration in unknown environments and CE$^2$-G (Algorithm 3, Line 237) assumes the test goal distribution is available to the agent at training time. CE$^2$-G progressively expands the scope of exploration around the possible trajectories leading to the environment goals. We will correct the image label in Figure 9. **R3. Is there a more rigorous way of understanding why “less explored regions are naturally adjacent to these boundaries”? This seems intuitive in the original state space but I’m not sure if it carries over to the latent state space.** Less explored regions are located near the boundaries of latent state clusters due to the way we construct the latent space. Our loss function for training the latent space ($\mathcal{L}_{dt}$ in Line 156, Equation 2) ensures that states easily reachable from one another in the real environment (determined by a learned temporal distance network $D_t$ in Line 120, Equation 1) are also close in proximity within the latent space. 
By enforcing the latent space to express the temporal distance between different states, $CE^2$ enables both efficient state clustering and frontier state identification. **R4. For previous methods like MEGA or PEG, couldn't you still use the temporal distance to select “next goal to explore”? Why is the clustering necessary here?** Similar to the approach taken by MEGA, which sets frontier goals in low-density regions within the replay buffer, we could simply select goals that are furthest from the initial states, as determined by the learned temporal distance network $D_t$ (Line 120, Equation 1). However, this strategy can reduce exploratory behavior because the policy during training may still have limited capability in reaching rare goals. Instead, CE$^2$ selects the next goal to explore at the edges of latent state clusters, providing two benefits: (1) less explored regions are adjacent to these boundaries, and (2) given the easy accessibility between states within each cluster by the training policy, the agent's capability extends to reaching states even at the cluster boundaries. In other words, clustering enables CE$^2$ to precisely identify the frontier of previously explored states. For example, as visualized in Fig. 11 in the appendix for the Ant Maze environment, CE$^2$ enhances exploration efficiency by consistently setting exploratory goals within the current policy's capabilities. In contrast, MEGA and PEG often set goals that are unlikely to be reachable by the current agent. We will update the paper to integrate all the discussions above. --- Rebuttal Comment 1.1: Comment: I thank the authors for the clarifications. I have no further questions.
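The normalization discussed in R4 can be sketched as follows. This is a hypothetical analogue, not the paper's Eq. 2: `alignment_loss`, the chain of states, and the embeddings are our own illustrative constructions. The point is only that dividing the temporal distance by the trajectory length puts both terms on a comparable scale.

```python
import numpy as np

# Hypothetical analogue of the temporal-distance alignment idea in R3/R4 (the
# paper's exact Eq. 2 is not reproduced here): latent distances should match
# temporal distances normalized by trajectory length so both terms share scale.
def alignment_loss(z, d_t, traj_len):
    """z: (n, k) latent states; d_t[i, j]: temporal distance in env steps."""
    diff = z[:, None, :] - z[None, :, :]
    latent_dist = np.linalg.norm(diff, axis=-1)
    target = d_t / traj_len                 # normalized to [0, 1], per R4
    return np.mean((latent_dist - target) ** 2)

# A chain of 5 states, one env step apart.
d_t = np.abs(np.arange(5)[:, None] - np.arange(5)[None, :]).astype(float)
good = np.arange(5, dtype=float)[:, None] / 5.0   # distance-preserving embedding
bad = np.zeros((5, 1))                            # collapsed embedding
print(alignment_loss(good, d_t, 5) < alignment_loss(bad, d_t, 5))  # True
```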
Summary: The paper introduces a cluster edge exploration (CE2) algorithm, which is implementing the “Go-Explore” principle in a – to the best of my knowledge – novel manner. Key idea is to use clustering of the state space latents – go to one of these clusters and then explore from there. As a main result, exploration improves and, as a consequence, success rates rise. Strengths: The idea builds on prior work based on go-explore, is charmingly simple, and yields positive effects as expected. The model outperforms previous methods on standard tasks. Weaknesses: The evaluations are rather restricted to some standard artificial benchmark tasks. The temporal distance network seems tedious to train additionally. The algorithm in the end just introduces yet another method to explore the edge of the search space. The clustering algorithm seems detached from the latent learning algorithm – it does not structure the latent state in any way. Technical Quality: 3 Clarity: 3 Questions for Authors: Eq. 2 is not quite clear – particularly I wonder if here is not a unit problem as the first distance measure in latent state is representational while the second one depends on the movement distance estimates. Isn’t this severely restricting? Eq. 4 could you elaborate slightly? How are p and q densities represented / determined? Where does the learned value exploration function come from? What about environment, where sub-areas are hard to get to but do not offer themselves suitably for any further exploration. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: An integration / automatic abstraction via shielded / gated latents would be much more appealing. The evaluations are restricted to a standard test suite that does not really require deep reward propagation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your insightful feedback and constructive comments! **R1. The temporal distance network seems tedious to train additionally** The temporal distance network can be trained efficiently using supervised learning from replay buffer trajectories to predict action steps from the current state to a goal state. Temporal distance networks are commonly used to create reward functions in goal-conditioned reinforcement learning e.g. [1,2]. We follow these prior works to implement this approach. [1] Discovering and Achieving Goals via World Models. NeurIPS 2021. [2] Planning Goals for Exploration. ICLR 2023. **R2. The algorithm just introduces yet another method to explore the edge of the search space** Our algorithm, CE$^2$, tackles the core challenge in the Go-Explore mechanism: how to select an exploration-inducing goal command $g$ and effectively guide the agent to $g$? Previous approaches, such as MEGA (Pitis et al., 2020), set exploratory goals at rarely visited regions of the state space. However, in these approaches, the policies *under training* may have limited capability of reaching the chosen rare goals, leading to less effective exploration. Our contribution is a novel goal selection algorithm that prioritizes goal states in sparsely explored areas of the state space, **provided they remain accessible to the agent**. This is the key factor in why CE$^2$ outperforms the MEGA and PEG (Hu et al., 2023) baselines in our benchmark suite in Fig. 3. As visualized in Fig. 11 in the appendix for the Ant Maze environment, CE$^2$ enhances exploration efficiency by consistently setting exploratory goals within the current policy's capabilities. In contrast, MEGA and PEG often set goals that are unlikely to be reachable by the current agent. **R3. 
The clustering algorithm seems detached from the latent learning algorithm – it does not structure the latent state in any way.** While our clustering algorithm does not directly structure the latent space, it requires the latent space to be organized in a specific manner to be effective. In other words, the latent space learning algorithm is a key prerequisite for the latent state clustering algorithm. Specifically, our latent space learning algorithm structures the latent space such that states easily reachable from one another in the real environment (as determined by the learned temporal distance network $D_t$ in Line 120, Equation 1) are also close together in the latent space. The clustering algorithm leverages this structural property to ensure that the latent state cluster boundaries align with the frontier of previously explored states. As such, CE$^2$ can efficiently *generate* exploratory goals at the frontier at training time. **R4. Eq. 2 is not quite clear – particularly I wonder if here is not a unit problem as the first distance measure in latent state is representational while the second one depends on the movement distance estimates** In the implementation, to address potential scale issues, we normalize the movement distance estimate, used in the second component of Eq. 2, by dividing it by the trajectory length so that it is smaller than 1 and falls within a range similar to the first component. We will clarify it in the paper. **R5. Eq. 4 could you elaborate slightly? How are p and q densities represented / determined?** Eq. 4 is used to optimize the Gaussian Mixture Models (GMMs) iteratively on sampled batches from the replay buffer. $p$ and $q$ are represented as Gaussian distributions within the GMMs. $q(c \vert \Psi(s))$ is the posterior distribution over $c$ (the clusters) given an encoded state $\Psi(s)$. $\log p(\Psi(s) \vert c)$ is the distribution denoting the probability of the encoded state $\Psi(s)$ in cluster $c$. 
$p(c)$ is the prior distribution over cluster weights in the GMMs. In each optimization round, we increase the likelihood of the sampled batches under the GMMs by updating each cluster $c$'s weight, mean, and variance. **R6. Where does the learned value exploration function come from?** The learned exploration value function (used in Equation 7 for determining the exploration potential of a chosen goal state) is introduced in Line 109. This value function is used to guide the training of the exploration policy in our Go-Explore mechanism and is updated per learning iteration in our model-based GCRL framework (Line 12 of Algorithm 2). It encourages exploration by leveraging the Plan2Explore (Sekar et al. (2020)) disagreement objective, which motivates the agent to explore states in less familiar areas of the environment that the world models haven't adequately learned (intuitively such states often induce discrepancies among an ensemble of world models). **R7. What about environment, where sub-areas are hard to get to but do not offer themselves suitably for any further exploration** We indeed considered such environments. We tested CE$^2$ in Point Maze and Ant Maze (Fig. 2) which contain dead ends from which exploration is doomed to fail. As explained in Equations 6 and 7 (Line 200), CE$^2$ selects goal states from latent state cluster boundaries with *the highest exploration potential*. We use the learned exploration value function (see R6) to evaluate the exploration potential of a goal state. As exploration progresses, the exploration value of states at a dead end decreases. Consequently, CE$^2$ can then select other goal states on the cluster boundaries that have not yet been explored well to escape the dead end. **R8. The evaluations are restricted to a standard test suite that does not really require deep reward propagation.** Our benchmarks include tasks with horizons of up to 500 steps (Ant Maze and Walker). 
The environments are also high-dimensional; for instance, Pen Rotation and Block Rotation have 61 observation space dimensions and 20 action space dimensions. We will revise the paper to incorporate all the above discussions. --- Rebuttal Comment 1.1: Title: Discussion Comment: Dear Reviewer WfSq, We kindly inquire whether the information provided in our rebuttal adequately addresses your concerns. If you have any further questions or issues, we would be grateful for the opportunity to address them. Thank you once again for your valuable feedback and constructive comments! Best regards, The Authors --- Rebuttal Comment 1.2: Comment: Thank you for your careful responses and elaborations on your method. I would further encourage to discuss approaches that more suitably structure the latent state space, but also see the merit of the paper to explore latent space-oriented exploration even without any further inductive bias to shape the latent space itself. The results clearly show highly competitive performance, and the technique applied, as it is rather simple, is definitely universally useful. Thus, I raise my score to accept.
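The GMM-based edge selection discussed in R5 (and the clustering rationale in R3) can be sketched as follows. This is a hypothetical, numpy-only illustration, not the authors' implementation: the mixture parameters, `gmm_log_density`, and the toy latents are all our own assumptions. The idea is to rank replay-buffer latents by mixture density and treat the lowest-density states as cluster-edge goal candidates.

```python
import numpy as np

# Hypothetical sketch of cluster-edge goal selection (not the authors' code):
# score latent states under a fitted GMM and keep the lowest-density ones as
# frontier goal candidates near cluster boundaries.
def gmm_log_density(x, weights, means, stds):
    # Diagonal-covariance mixture; x: (n, k).
    lps = []
    for w, mu, sd in zip(weights, means, stds):
        lp = (np.log(w)
              - 0.5 * np.sum(((x - mu) / sd) ** 2 + np.log(2 * np.pi * sd ** 2), axis=1))
        lps.append(lp)
    return np.logaddexp.reduce(np.stack(lps), axis=0)

rng = np.random.default_rng(0)
means = np.array([[0.0, 0.0], [4.0, 0.0]])
stds = np.ones((2, 2))
weights = np.array([0.5, 0.5])

# Pretend replay-buffer latents: samples from the two clusters.
latents = np.concatenate([mu + rng.standard_normal((200, 2)) for mu in means])
density = gmm_log_density(latents, weights, means, stds)
edge_goals = latents[np.argsort(density)[:10]]   # 10 lowest-density candidates

# Edge candidates lie farther from their nearest cluster center than average.
dist = lambda x: np.min(np.linalg.norm(x[:, None] - means[None], axis=-1), axis=1)
print(dist(edge_goals).mean() > dist(latents).mean())  # True
```

In the actual method the candidates would additionally be ranked by the learned exploration value function (R6) before a goal is chosen.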
Summary: This paper develops an approach for frontier exploration in the context of model based reinforcement learning. The key idea of the paper (inspired by prior work Go-Explore) is to cluster groups of reachable states in the latent space and keep track of the current frontier, such that new goals can be sampled close to the frontier. Experiments in simulated control benchmarks show that the proposed approach is more effective in solving hard exploration tasks compared to a few related baselines. Strengths: The paper is well motivated and targets an important problem in reinforcement learning that has been studied for many years - that of effective exploration. Under the goal-conditioned RL setting, the proposed modification to a prior work Go-Explore is novel to the best of my knowledge, and is intuitively sound in terms of choosing goals that are near the frontier and yet "accessible" with a high probability. The paper is well presented, with sufficient background on prior works, and appropriate details on the algorithm and simulation environments used for experiments. The experiments show better results than baselines on some challenging exploration tasks like in-hand manipulation and point maze navigation. The ablation experiments are good and provide insight into different stages of the approach like goal sampling, reaching the farthest reachable state, and exploration heuristic beyond that. Weaknesses: One of the main weaknesses of the paper is that the delta in terms of core algorithmic contribution beyond go-explore is limited. The paper pitches the core contribution as an exploration strategy but couples it with model-based RL in the instantiation, but the exploration algorithm on its own is a small modification to Go-Explore. A small change to an existing approach (Go-Explore) leading to massive improvements in tasks would be interesting, but the paper is not convincing in showing this. 
One reason for that is Go-Explore was shown to perform well in some very challenging scenarios (like Montezuma's Revenge) and the proposed approach doesn't perform direct comparisons on those environments. Combining model-based RL with exploration is a bit confusing. In many environments, a hard exploration problem needs to be solved (because of sparse rewards and long-horizon tasks), but learning a model of the world is an even harder task than exploring the environment to learn a policy - it seems the proposed approach will be very limiting in such scenarios. Technical Quality: 3 Clarity: 2 Questions for Authors: Please refer to the weaknesses above. - The paper pitches the core contribution as an exploration strategy but couples it with model-based RL in the instantiation, but the exploration algorithm on its own is a small modification to Go-Explore. Is there a reason why this instantiation is necessary? - Can the authors comment on the choice of completely orthogonal environments to those in Go-Explore (which is a direct baseline)? - How feasible is the approach for deployment in real world control systems where building a model of the world may be much harder than learning a policy to solve a task? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Please refer to the weaknesses and questions above, and address limitations of the approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your insightful feedback and constructive comments! **R1. One of the main weaknesses of the paper is that the delta in terms of core algorithmic contribution beyond go-explore is limited.** Our algorithm, CE$^2$, tackles the core challenge in the Go-Explore mechanism: how to select an exploration-inducing goal command $g$ and effectively guide the agent to $g$? Previous approaches, such as MEGA (Pitis et al., 2020), set exploratory goals at rarely visited regions of the state space. However, in these approaches, the policies *under training* may have limited capability of reaching the chosen rare goals, leading to less effective exploration. Our contribution is a novel goal selection algorithm that prioritizes goal states in sparsely explored areas of the state space, **provided they remain accessible to the agent**. This is the key factor in why CE$^2$ outperforms the MEGA and PEG (Hu et al., 2023) baselines in our benchmark suite in Fig. 3. As visualized in Fig. 11 in the appendix for the Ant Maze environment, CE$^2$ enhances exploration efficiency by consistently setting exploratory goals within the current policy's capabilities. In contrast, MEGA and PEG often set goals that are unlikely to be reachable by the current agent. **R2. Can the authors comment on the choice of completely orthogonal environments to those in Go-Explore (which is a direct baseline) ?** As discussed in R1, the core challenge in the Go-Explore mechanism lies in selecting goal states that effectively trigger further exploration upon being reached. However, the original Go-Explore method (Ecoffet et al. (2019)) does not prescribe a general goal selection method, instead opting for a hand-engineered novelty bonus for each task (e.g. task-specific pseudo-count tables). 
CE$^2$ is more related to recent instantiations of Go-Explore that *automatically* select exploration-inducing goals in less-visited areas of the state space to broaden the range of reachable states, e.g. MEGA and PEG. Therefore, we compare our method with these tools instead of Ecoffet et al. (2019) in environments where these tools are applicable, to evaluate *the strength of our goal selection method*. **R3. The paper pitches the core contribution as an exploration strategy but couples it with model-based RL in the instantiation. Is there a reason why this instantiation is necessary?** Our method, CE$^2$, follows the Go-Explore mechanism by executing an exploration policy (defined in Line 109) upon reaching a chosen goal state to expand the range of reachable states. A primary reason for implementing CE$^2$ with model-based RL is to effectively train the exploration policy. We use intrinsic explorer rewards to encourage the exploration policy to reach novel states where predictions among an ensemble of world models diverge, based on the Plan2Explore disagreement objective (Sekar et al., 2020). The exploration policy is optimized purely from imagined trajectories. In our experience, this strategy results in stronger exploration capabilities compared to epsilon exploration policies, which randomly "fill in" sparse regions of the achieved goal space. For example, our model-based MEGA baseline uses the same exploration policy as ours and demonstrates improvement over the original MEGA implementation, which uses random epsilon exploration, in the benchmark suites we consider. We adopt this model-based approach for all the baselines to ensure the exploration policy is not the bottleneck in our experiments. **R4. 
How feasible is the approach for deployment in real world control systems where building a model of the world may be much harder than learning a policy to solve a task?** The exploration strategy in CE$^2$ can be integrated with *any* model-based RL algorithm and applied to *any* tasks where model-based RL is applicable. Here, we specifically focus on the exploration problem in the unsupervised goal-conditioned reinforcement learning (GCRL) setting. During training, there are no predefined tasks or goals. A successful agent should be able to navigate to a wide range of previously unknown goal states upon receiving goal commands only revealed at test time. Intuitively, learning a policy in this setting is as challenging as learning a world model due to the vast range of possible goals. We believe this is a practical setting for deploying an agent into an unknown environment - the agent must learn about the environment and identify the feasible tasks it can perform without prior specifications. That being said, a model-free version of CE$^2$ should be conceptually simpler e.g., training the exploration policy with intrinsic rewards from an ensemble of goal-conditioned value functions instead of world models. Incorporating task-specific information, such as demonstrations and background knowledge, to guide CE$^2$'s exploration towards user-preferred areas would enhance its practicality, particularly for real-world robotics applications. We leave these extensions for future work. We will incorporate the discussions from R1 to R4 into the paper. --- Rebuttal Comment 1.1: Title: thanks for the response. my main concerns remain and so will not fight for acceptance Comment: Dear authors, Thanks for the rebuttal response. The explanations to my questions are helpful, and I would definitely encourage providing the additional context regarding goal selection being the key proposed novel contribution in the revised paper. 
However my main concerns regarding comparison with Go-Explore more directly, limited algorithmic contributions, and empirical evidence for applicability to model-free scenarios remain. In particular, I think a more direct comparison to Go-Explore is necessary because "the strength of our goal selection method" is something subjective in the larger context of the exploration strategy i.e. it may be the case that Go-Explore's 'heuristic' goal-selection strategy is good enough for tough exploration problems and the proposed approach is actually not scalable to these challenging scenarios - without empirical comparisons, we simply don't know! (I understand that experimental comparisons might be beyond the scope of the short rebuttal window) Apart from my concerns, I do not see any major flaws in the algorithm/experiments, and as such I am not recommending reject, but will not fight for acceptance if any other reviewers have major concerns. --- Rebuttal 2: Title: Rationale for Not Directly Comparing with Go-Explore (Ecoffet et al. (2019)) Comment: Dear Reviewer 7wGc, Thank you so much for your comments. We will provide additional context in the revised paper, emphasizing that goal selection is the key novel contribution. We want to clarify an important reason for not comparing with Go-Explore (Ecoffet et al. (2019)). Unlike CE$^2$, Go-Explore does not prescribe a general approach for goal selection to induce exploration; it relies on a hand-designed pseudocount metric. **Creating these task-specific pseudocount tables requires significant domain knowledge**, which CE$^2$ *does not assume*. 
For instance, in the Go-Explore implementation applied to robotics, the domain knowledge representation must be derived from the internal state of the MuJoCo simulator, such as the 3D positions of the robot's gripper (discretized in voxels with sides of length 0.5 meters), whether the robot is currently touching (with a single grip) or grasping (touching with both grips) the object, and whether the object is in the target location. Designing the Boolean predicates requires sophisticated, case-specific code, and these predicates are not generalizable to the diverse tasks in our benchmarks, such as the anthropomorphic robotic hand with 24 joints in the Block Rotation and Pen Rotation tasks. Directly comparing CE$^2$, a general, automatic algorithm for goal selection, with Go-Explore—given its reliance on extensive, case-specific domain knowledge—*would not be a fair comparison*. Instead, we have compared CE$^2$ with more recent versions of Go-Explore, such as MEGA (Pitis et al. (2020)) and PEG (Hu et al. (2023)), which automatically select exploration-inducing goals. **These tools, like ours, built on top of Go-Explore, do not assume access to domain knowledge**. We believe this comparison better highlights the effectiveness of our goal selection method in *unknown robotics environments where domain knowledge is not available* (as outlined in Section 2, our focus is on unsupervised exploration in unknown environments, where no predefined task information is provided during the exploration stage). That said, we are happy to include a comparison with Go-Explore in the appendix of the revised paper to showcase how the CE$^2$ agent learns to explore the environment compared to an agent that uses domain knowledge. We hope this response clarifies the reviewer's concerns. We look forward to the follow-up discussion and are happy to address any further comments or questions.
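For readers unfamiliar with the Plan2Explore disagreement objective referenced in the rebuttals above, here is a minimal hypothetical sketch (ours, not the authors' or Sekar et al.'s code; the toy ensemble and all names are assumptions): the intrinsic reward is the variance of next-state predictions across an ensemble of learned models, so unfamiliar states earn higher reward.

```python
import numpy as np

# Hypothetical sketch (not the authors' implementation) of a Plan2Explore-style
# disagreement reward: the intrinsic reward is the variance of next-state
# predictions across an ensemble of world models.
def disagreement_reward(state, action, ensemble):
    preds = np.stack([model(state, action) for model in ensemble])  # (E, k)
    return preds.var(axis=0).mean()   # high where models disagree

# Toy ensemble: models agree near the origin, disagree far from it.
def make_model(seed):
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(2)
    return lambda s, a: s + a + w * np.linalg.norm(s)

ensemble = [make_model(i) for i in range(5)]
a = np.zeros(2)
r_near = disagreement_reward(np.zeros(2), a, ensemble)
r_far = disagreement_reward(np.array([5.0, 5.0]), a, ensemble)
print(r_near < r_far)   # True: unfamiliar regions yield higher intrinsic reward
```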
Summary: This paper presents a method called "Cluster Edge Exploration" (CE2) to perform goal selection in goal-conditioned reinforcement learning and enable efficient exploration. Concretely, the method builds on the Go-Explore principle, which learns separate policies for exploration and goal-reaching. The main idea is to learn a latent embedding space parameterized by Gaussian Mixture Models (GMMs) in the Dreamer framework. Selecting a goal is a two-step process. First, the method samples points from the GMM and keeps only a low-probability subset. These points approximate the boundary of the existing explored states. Next, the algorithm uses the goal-conditioned policy to perform an imaginary rollout towards each goal candidate. The candidate whose rollout ends at the state with the highest exploration value is selected as the goal. In addition, when the test goal distribution is known, the CE2-G variant learns the GMM only with trajectory data from test goals. This further provides an inductive bias to sample goals near the test distribution. The paper presents extensive empirical studies in navigation, locomotion, and manipulation. Quantitatively, both CE2 and CE2-G outperform various baselines in their respective settings. Through visualizations of the sampled goals, CE2 has the advantage of sampling points that are both near the frontier of the existing data and feasible to reach. Strengths: - The method's main idea is sound: sampling goals that are feasible but underexplored makes sense. - The authors did a good job explaining the algorithm and training objectives. They also made the distinction between CE2 and CE2-G clear. - The experiments are done in a good span of different domains. This shows that the exploration strategy is not tailored only for a certain application, such as navigation. - Visualizations of the goals picked by CE2 and relevant baselines are insightful. Weaknesses: - It seems that some of the tasks haven't been trained to convergence.
This makes it harder to draw definitive conclusions on the method's sample efficiency or final performance. - The method experiments appear to be done in state-based environments only. Since Dreamer is a strong algorithm for vision-based control, I'm curious how this approach performs with image observations. Would additional challenges arise for feature learning? Technical Quality: 3 Clarity: 3 Questions for Authors: - Equation 7 performs imaginary rollouts toward the goal candidates. How is $T$ chosen here? - In the ablation studies, the relative performance of CE2 and CE2-noPEG seems stochastic. Why would the inclusion of the exploration value estimate hurt performance? Does this mean that the exploration value function is under-trained? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, there's a dedicated section on page 15. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
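The two-step goal selection summarized in this review can be sketched in a few lines. This is only an illustration under simplifying assumptions (a 1-D latent space, a mixture given directly by its component parameters); `cluster_edge_candidates` and `select_goal` are hypothetical names, not the authors' implementation.

```python
import math
import random

def gmm_log_density(x, weights, means, stds):
    # log p(x) under a 1-D Gaussian mixture
    p = sum(w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
            for w, m, s in zip(weights, means, stds))
    return math.log(p + 1e-300)

def sample_gmm(weights, means, stds):
    # draw one latent point: pick a component, then sample from it
    i = random.choices(range(len(weights)), weights=weights)[0]
    return random.gauss(means[i], stds[i])

def cluster_edge_candidates(weights, means, stds, n_samples=1000, keep_frac=0.1):
    # Step 1: sample latent points from the GMM and keep only the
    # lowest-density ones; these approximate the boundary ("cluster edge")
    # of the states explored so far.
    samples = [sample_gmm(weights, means, stds) for _ in range(n_samples)]
    samples.sort(key=lambda x: gmm_log_density(x, weights, means, stds))
    return samples[: int(n_samples * keep_frac)]

def select_goal(candidates, rollout_value):
    # Step 2: score each candidate by the exploration value of the state an
    # imaginary rollout toward it ends in, and pick the best one.
    return max(candidates, key=rollout_value)
```

With a single standard-normal component, the kept candidates concentrate in the tails, i.e., at the edge of the explored region, which is the behavior the review describes.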
Rebuttal 1: Rebuttal: We appreciate your insightful feedback and constructive comments! **R1. It seems that some of the tasks haven't been trained to convergence. This makes it harder to draw definitive conclusions on the method's sample efficiency or final performance.** We thank the reviewer for the suggestion. In the global response, we have provided updated training results over extended environment interaction steps for our benchmarks in Fig. 1 of the attached PDF. Our method continues to outperform the baselines regarding both sample efficiency and final task success rates. **R2. The method experiments appear to be done in state-based environments only. Since Dreamer is a strong algorithm for vision-based control, I'm curious how this approach performs with image observations. Would additional challenges arise for feature learning?** Our method, CE$^2$, has so far only been evaluated in state space. A promising extension would be to handle image observations by optimizing goals in a compact latent space. This would likely require only minor adjustments to our code, as CE$^2$ (built upon Dreamer) already learns a latent space from state observations for goal command selection. We plan to explore this extension in future work. **R3. Equation 7 performs imaginary rollouts toward the goal candidates. How is $T$ chosen here?** In our implementation, we set $T$ to half of the maximum episode length for all environments. The time limits for both the Go and Explore phases during real environment exploration are also set to this value. We will clarify this in the paper. **R4. In the ablation studies, the relative performance of CE2 and CE2-noPEG seems stochastic. Why would the inclusion of exploration value estimate hurt performance? Does this mean that the exploration value function is under-trained?** Block Rotation is the only environment where CE$^2$-noPEG outperforms CE$^2$. 
In CE$^2$, our method selects goals by identifying states with the highest exploration value estimate sampled from the latent state cluster boundaries to initiate our Go-Explore procedure. Since we consider unsupervised exploration, the test goal distribution is not available to the CE$^2$ agent during training. In the Block Rotation environment, the CE$^2$ agent often pursues states where the block falls from the palm, due to their "high" exploration potential determined by the exploration policy value functions. In contrast, the CE$^2$-noPEG agent explores the state space more evenly, gaining more in-hand manipulation skills, which is crucial for achieving the block-rotation goals revealed at test time. CE$^2$ outperforms CE$^2$-noPEG in maze navigation environments (e.g., PointMaze and AntMaze) because it can use the exploration value estimates to escape the dead ends in mazes. We will incorporate this discussion into the paper. --- Rebuttal Comment 1.1: Comment: I appreciate the authors for their response and for performing extended experiments. The discussion on CE2 vs CE2-noPEG is insightful, especially on how sometimes unsafe states can lead to high exploration potentials. My questions have been answered, and I will keep my good rating.
Rebuttal 1: Rebuttal: We sincerely appreciate the valuable feedback and suggestions from the reviewers. This global rebuttal includes a PDF file with updated training results and ablation study findings over extended training steps for our benchmarks (suggested by Reviewer NgsZ). Our method consistently outperforms the baselines in both sample efficiency and final performance. We will address each reviewer's concerns in the individual review responses. Pdf: /pdf/09af6efdeca7024e5f0a595eed5ef360a60b9e57.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: The authors propose a method for model-based exploration based on exploring reachable trajectories. They expand upon the Dreamer algorithm by optimizing the encoder to encode a notion of distance between states and optimizing for a likely reachable goal when training in the learned world model. The method is evaluated on a wide range of tasks, including dexterous manipulation tasks, outperforming other baselines in final performance. Strengths: The paper is clear and the algorithm well-motivated. The strong experimental performance in the complex tasks is promising, and they provide nice visualizations of their algorithm versus others and over time. Weaknesses: One thing I would have liked to see is some analysis of the additional computation time required to optimize for the goal states versus other methods. If it is prohibitively expensive, then the wall time could still be much longer than other methods. Technical Quality: 3 Clarity: 3 Questions for Authors: Why do you think the method is not able to achieve 100% on pen rotation? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, on computational costs and if it can be applied in model-free settings Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your insightful feedback and constructive comments! **R1. One thing I would have liked to see is some analysis of the additional computation time required to optimize for the goal states versus other methods. If it is prohibitively expensive, then the wall time could still be much longer than other methods.** We compared the computation time needed to optimize goal states for launching the Go-Explore procedure among our tool CE$^2$ and the baseline methods MEGA and PEG in the 3-Block Stacking environment. The average wall-clock times are recorded in the table below: | Method | Seconds / Episode | |--------|--------------------| | **CE$^2$** | 0.56 | | **PEG** | 0.53 | | **MEGA** | 0.47 | The results indicate that there is little difference in speed between methods. CE$^2$ adds only minimal overhead in generating candidate goal states compared to the other goal selection baselines. We will include this discussion and the complete results for each environment in the revised paper. **R2. Why do you think the method is not able to achieve 100% on pen rotation?** Pen Rotation is particularly challenging due to the pen's thin structure, which requires precise control to prevent it from dropping. We intended to convey that this is the most difficult benchmark (with 61 observation space dimensions and 20 action space dimensions) in our test suite. We will clarify this in the revised paper. --- Rebuttal Comment 1.1: Title: Rebuttal Response Comment: I thank the authors for taking the time to answer my questions!
Knowledge-Empowered Dynamic Graph Network for Irregularly Sampled Medical Time Series
Accept (poster)
Summary: The paper aims to handle the differences and correlations among multiple variables in irregularly sampled medical time series data. By proposing a model named KEDGN with textual medical knowledge and dynamic variable graphs, it is able to outperform baselines on a variety of datasets. Comprehensive experiments have also been conducted to demonstrate the learned correlations between variables, which aligns with the motivation of this paper. Strengths: 1. The motivation of the paper is solid. The authors provide enough evidence and arguments to demonstrate the dynamic correlation patterns between time series data. 2. The paper is generally well written with clear figures and equations. 3. The proposed model demonstrates strong performance in various experiment settings. Comprehensive analysis is also conducted to prove the effectiveness of each module design. 4. It is good to see some visualizations of the variable correlations learned by the model, which aligns with the motivation of this paper. Weaknesses: 1. The idea of learning dynamic graphs that model variable relationships is quite common in the literature. It would be better for the authors to cite related works, compare with existing works, and make arguments about the novelty and effectiveness of the proposed modules. Here are some references: - Jiang, Renhe, Zhaonan Wang, Jiawei Yong, Puneet Jeph, Quanjun Chen, Yasumasa Kobayashi, Xuan Song, Shintaro Fukushima, and Toyotaro Suzumura. "Spatio-temporal meta-graph learning for traffic forecasting." In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 37, no. 7, pp. 8078-8086. 2023. - Huang, Qihe, Lei Shen, Ruixin Zhang, Shouhong Ding, Binwu Wang, Zhengyang Zhou, and Yang Wang. "CrossGNN: Confronting noisy multivariate time series via cross interaction refinement." Advances in Neural Information Processing Systems 36 (2023): 46885-46902.
- Wang, Dingsu, Yuchen Yan, Ruizhong Qiu, Yada Zhu, Kaiyu Guan, Andrew Margenot, and Hanghang Tong. "Networked time series imputation via position-aware graph enhanced variational autoencoders." In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 2256-2268. 2023. - Wang, Binwu, Pengkun Wang, Yudong Zhang, Xu Wang, Zhengyang Zhou, Lei Bai, and Yang Wang. "Towards Dynamic Spatial-Temporal Graph Learning: A Decoupled Perspective." In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, no. 8, pp. 9089-9097. 2024. 2. There exist some missing references and baselines for irregularly sampled time series classification problems. It would be better to cite related works and compare with them in the experiment sections. Here are some examples: - Li, Zekun, Shiyang Li, and Xifeng Yan. "Time Series as Images: Vision Transformer for Irregularly Sampled Time Series." Advances in Neural Information Processing Systems 36 (2024). - Labach, Alex, Aslesha Pokhrel, Xiao Shi Huang, Saba Zuberi, Seung Eun Yi, Maksims Volkovs, Tomi Poutanen, and Rahul G. Krishnan. "DuETT: Dual event time transformer for electronic health records." In Machine Learning for Healthcare Conference, pp. 403-422. PMLR, 2023. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. It seems some of the baselines such as DGM^2-O can reach good performance with relatively low time and space requirements. Can there be any potential improvements to the efficiency of the proposed KEDGN model? It seems BERT is also optimized during training; it would be interesting to see some efficiency experiments and analysis regarding this aspect. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Responses to Reviewer n8KK (1/1) Dear reviewer n8KK, Thank you for your valuable feedback on supplementing the article's related literature. Here is the response: **W1. The idea of learning dynamic graphs that model variable relationships is quite common in the literature. It would be better for the authors to cite related works, compare with existing works, and make arguments about the novelty and effectiveness of the proposed modules.** - "Spatio-temporal meta-graph learning for traffic forecasting." In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 37, no. 7, pp. 8078-8086. 2023. - "CrossGNN: Confronting noisy multivariate time series via cross interaction refinement." Advances in Neural Information Processing Systems 36 (2023): 46885-46902. - "Networked time series imputation via position-aware graph enhanced variational autoencoders." In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 2256-2268. 2023. - "Towards Dynamic Spatial-Temporal Graph Learning: A Decoupled Perspective." In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, no. 8, pp. 9089-9097. 2024. A1. Thank you for the reminder. In a later version, we will cite these four papers in the section on “Graph Neural Networks for Multivariate Time Series” in the related work. Although modeling dynamic graphs to capture relationships between variables is not new, many dynamic spatiotemporal graph methods are designed for traffic flow prediction tasks, which involve regularly sampled multivariate time series. Applying these methods directly to irregularly sampled medical time series classification tasks may lead to suboptimal performance. Our approach specifically addresses the challenges of ISMTS by using domain knowledge as guidance and adjusting edge weights in the graph based on observation masks and dynamic densities, which is a key distinction from these methods. **W2.
There exist some missing references and baselines for irregularly sampled time series classification problems. It would be better to cite related works and compare with them in the experiment sections.** - Li, Zekun, Shiyang Li, and Xifeng Yan. "Time Series as Images: Vision Transformer for Irregularly Sampled Time Series." Advances in Neural Information Processing Systems 36 (2023). - Labach, Alex, Aslesha Pokhrel, Xiao Shi Huang, Saba Zuberi, Seung Eun Yi, Maksims Volkovs, Tomi Poutanen, and Rahul G. Krishnan. "DuETT: Dual event time transformer for electronic health records." In Machine Learning for Healthcare Conference, pp. 403-422. PMLR, 2023. A2. Thank you for the reminder. In the revised version, we will cite these two papers in the section on “Irregularly Sampled Multivariate Time Series Modeling” in the related work. Additionally, we have conducted comparative experiments, and the results are as follows: | Method | P19 AUROC | P19 AUPRC | Physionet AUROC | Physionet AUPRC | MIMIC-III AUROC | MIMIC-III AUPRC | P12 AUROC | P12 AUPRC | | ---------- | -------- | -------- | --------- | -------- | --------- | -------- | -------- | -------- | | ViTST | 91.7±0.1 | 57.5±0.7 | 81.3±1.9 | 37.4±2.9 | 81.8±0.3 | 39.6±1.3 | 86.3±0.1 | 50.8±1.5 | | DuETT | 88.2±0.5 | 56.0±3.9 | 81.3±1.4 | 44.9±1.4 | 78.8±0.8 | 34.3±1.0 | 83.4±1.2 | 45.4±1.5 | | Ours (Best) | 92.3±1.0 | 62.5±0.7 | 88.2±1.1 | 57.5±2.5 | 85.1±0.3 | 48.4±1.5 | 87.8±0.5 | 54.5±1.5 | **Q1. It seems some of the baselines such as DGM^2-O can reach good performance with relatively low time and space requirements. Can there be any potential improvements to the efficiency of the proposed KEDGN model? It seems BERT is also optimized during training; it would be interesting to see some efficiency experiments and analysis regarding this aspect.** A3. Firstly, it should be clarified that BERT is frozen and does not participate in training optimization.
We only take the sentence representation output by BERT as the semantic representation of the variables, so the training cost is the same as with randomly initialized variable embeddings. In addition, due to the sequential nature of RNN computation, there is an inherent bottleneck in the running time of our model. We have optimized the code implementation as much as possible to keep the overall running cost of the model within an acceptable range. We acknowledge this as a limitation of our model and will add it to the limitations section. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thank you for the detailed response. All my concerns have been addressed properly and I'd like to keep my score.
Summary: This work considers the irregular timestamps contained in current medical data. The authors employ a density-aware mechanism to capture the time-varying correlations among variables. Strengths: This work addresses an important topic in this field and is well-organized and written. The experiments are sufficient to prove their conclusions. Weaknesses: 1. In formulation (6), the density of the observation point is calculated by the average time interval. What if the point is not observed at the next or previous time? e.g., $t_v^{(i+1)}$ or $t_v^{(i-1)}$ is None. 2. I am just a little bit confused by the node in the dynamic variable graph. As we know, a patient's EHR data will contain lots of clinical notes and items such as treatments, medications, and so on. You have shown the description in your model's framework in Figure 3. So do you consider other variables in the patient's EHR data? Besides, the node here denotes a text, not detailed medical terminology, right? 3. Could you please add a comparison graph between the original variable representation and the final representations? Technical Quality: 3 Clarity: 3 Questions for Authors: see weakness Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Responses to Reviewer 68iN (1/1) Dear reviewer 68iN, Thank you for your valuable feedback on the formula details and completeness of the paper. Here is the response: **W1. $Z^{t}$ 's definition is not clear** A1. We apologize for the oversight. If there are no observations at the preceding/following time points, we take the time interval of the following/preceding observation as the density. If neither a preceding nor a following observation exists, it indicates that this observation is the only one for the variable, so we take half of the maximum observation time span across all variables as the density. The corrected definition is as follows: $$ Z^{(t)}=Z_{i, v}=\begin{cases}((t_{i,v}-t_{i-1,v}) + (t_{i+1,v}- t_{i, v})) / 2, \quad \text{if }t_{i+1,v} \text{ and } t_{i-1,v} \text{ are both not None.}\\\\t_{i,v}-t_{i-1,v}, \quad \text{if }t_{i+1,v} \text{ is None.} \\\\t_{i+1,v}- t_{i, v}, \quad \text{if } t_{i-1,v} \text{ is None.}\\\\t_{max} / 2, \quad \text{if }t_{i+1,v} \text{ and } t_{i-1,v} \text{ are both None.}\end{cases} $$ Here, $Z^{(t)} = Z_{i, v} $ represents the average density at time $ t $ for the $ v $-th variable's $ i $-th observation. **W2. I am just a little bit confused by the node in the dynamic variable graph. As we know, the patient's EHR data will contain lots of clinical notes and some like treatment, medications, and so on. You have shown the description in your model's framework in Figure 3. So do you consider other variables in patient's EHR data? Besides, the node here denotes a text, not detailed medical terminology, right?** A2. Here, we use general medical semantic text to differentiate between variables and guide the modeling of correlations between these variables in irregular time series. If I understand correctly, the "other variables in patient's EHR data" you mentioned likely refer to patient-specific clinical records, treatment history, medication history, etc. 
These are personalized medical data, not general medical domain knowledge. They often include multiple modalities, such as text and images, which extend beyond the scope of time series analysis. Our datasets only contain time series data and do not include clinical notes. Thank you for the constructive suggestion; we may consider integrating multimodal medical data in future work. **W3. Could you please add a comparison graph between the original variable representation and the final representations?** A3. Certainly, but due to the limited information that can be uploaded during the discussion period, we will include this in a later version. --- Rebuttal Comment 1.1: Comment: The variables in patients' EHR data mean the diagnosis and procedure codes. For example, the diagnosis code reflects the disease the patient has. It also should be a node in your graph. It doesn't mean multiple modalities. I agree with you that you are focusing on the time series topic. But since you discuss it in the field of clinical notes, I believe it would be better if you could make full use of the data. Thanks for your response, I will keep my score.
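The piecewise density definition given in this rebuttal translates directly into a small helper. A minimal sketch with hypothetical names: `t_prev`/`t_next` stand for $t_{i-1,v}$/$t_{i+1,v}$ and are `None` when the neighbouring observation does not exist.

```python
def avg_density(t_prev, t_cur, t_next, t_max):
    """Average density Z_{i,v} of one observation at time t_cur.

    t_max is the maximum observation time span across all variables; it is
    used when this observation is the variable's only one.
    """
    if t_prev is not None and t_next is not None:
        # interior observation: average of the two adjacent intervals
        return ((t_cur - t_prev) + (t_next - t_cur)) / 2
    if t_next is None and t_prev is not None:
        # last observation of the variable
        return t_cur - t_prev
    if t_prev is None and t_next is not None:
        # first observation of the variable
        return t_next - t_cur
    # the only observation of the variable
    return t_max / 2
```

The four branches match the four cases of the corrected Equation (6) one-to-one.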
Summary: Throughout this article, the authors describe an approach (KEDGN) to tackle irregularly sampled medical time series. In this task the data is composed of multiple variables represented as time series. The sample rate of a time series can be irregular, and two time series can have their samples taken at different timestamps. In the approach described, the authors propose to first find common-knowledge correlations between the variables in the dataset. To do so, they compute a non-linear cosine similarity between embeddings of textual descriptions, obtained using a Pre-trained Language Model. They then refine this common-knowledge correlation by adding a timestamp-aware correlation. Finally, using this correlation information and a Graph Convolutional Recurrent Network, their approach can predict the expected class of the time series. They then analyse the results of their approach, first through a benchmark with multiple state-of-the-art approaches. They then add an ablation study and different visual analyses to give a better overview of the behavior of the approach. Strengths: * The proposal to use common knowledge through a PLM to find correlations * Detailed experiments with a detailed comparison with state-of-the-art approaches * Detailed exploration of the behavior of the model through an ablation study and visual analysis of the embeddings of the explanations. Weaknesses: * Details of the Proposed Model could be better explained by giving the intuition behind the equations. (Especially for 4.3.2 and 4.4) Technical Quality: 3 Clarity: 3 Questions for Authors: * Did the authors try to use definitions given by health care professionals to confirm the definitions given by the GPT model? * It is not clear how equation (7) uses equation (6). Indeed, $Z^{(i)}_{v}$ was defined with $i$ being the $i$-th observation point (I believe the $i$-th sample of this specific variable $v$).
But on the other hand, equation (7) uses $Z$ with $t$ being a timestamp. It is not clear if $Z^{(i)}_{v}$ is defined for a timestamp at which no observation was made for a variable. * It is not clear why equation (9) contains a transpose. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: It appears the limitations have been discussed in sufficient detail. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Responses to Reviewer yFgU (1/1) Dear reviewer yFgU, Thank you for your valuable feedback on the formula details of the article. Here is the response: **W1. Details of the Proposed Model could be better explained by giving the intuition behind the equations. (Especially for 4.3.2 and 4.4)** A1. Firstly, we need to correct equation (8) in Section 4.3.2 to: $G^{(t)}_{ij} = A^{(t)}_{ij} \times (1 - W_{ij} |D^{(t)}_{i} - D^{(t)}_{j}|).$ The intuition behind this equation is as follows: if two variables have similar densities at a given time, their correlation should be higher. Conversely, if the density of one variable increases or decreases significantly at subsequent times, the correlation between these variables will decrease. For example, Figure 6 in Section 5.4.3 shows that NIDiasABP and HR have similar densities at $t=4$, but by $t=15$, the density of NIDiasABP decreases, which reduces its correlation with HR. Therefore, we use the absolute difference in densities between the two variables and add a negative sign so that the density difference is negatively correlated with the correlation. Regarding the formulas in Section 4.4, they integrate variable-specific parameter spaces and dynamic graph networks into the GCRNN backbone network. Equation (9) represents the standard definition of the first-order Chebyshev polynomial expansion approximation for graph convolution. Equations (10-12) define the operations in GCRNN. Equation (13) indicates that only the hidden states of observed variables are updated at each time step, ensuring that the model fully adheres to the original ISMTS pattern. **Q1. Did the authors try to use definitions given by health care professionals to confirm the definitions given by the GPT model?** A2. In this paper, what the model requires as input is a general medical domain-knowledge text description of each variable.
This domain knowledge is relatively well established and fixed, and can be confirmed by consulting medical literature. Therefore, we did not seek help from health care professionals. **Q2. $Z^{(t)}$'s definition is not clear.** A3. We apologize for the confusion regarding the notation. We have revised the definition of $Z^{(t)}$ in Equation (6) as follows: $$ Z^{(t)}=Z_{i, v}= \begin{cases} ((t_{i,v}-t_{i-1,v}) + (t_{i+1,v}- t_{i, v})) / 2, \quad \text{if }t_{i+1,v} \text{ and } t_{i-1,v} \text{ are both not None.}\\\\ t_{i,v}-t_{i-1,v}, \quad \text{if }t_{i+1,v} \text{ is None.} \\\\ t_{i+1,v}- t_{i, v}, \quad \text{if } t_{i-1,v} \text{ is None.}\\\\ t_{max} / 2, \quad \text{if }t_{i+1,v} \text{ and } t_{i-1,v} \text{ are both None.} \end{cases} $$ Here, $Z^{(t)}=Z_{i, v}$ represents the average density at timestamp $t$ for the $v$-th variable's $i$-th observation point. It is important to note that density is used for dynamically adjusting edge weights, and edges only exist between actual observations. Therefore, when a variable has no observations, there are no edges to adjust, so the density calculation is not needed in such cases. Hence, we did not consider this situation in defining the average density. **Q3. It is not clear why equation (9) contains a transpose.** A4. As indicated in Equation (4), after computing the cosine similarity between variable node embeddings, we apply the Softmax function to normalize each column to obtain the graph. Since matrix multiplication combines specific rows of the left matrix with specific columns of the right matrix, and each column of the right matrix $S \in \mathbb{R}^{V \times I}$ represents the values of all nodes for a specific input channel, we transpose $I_V + G^{(t)}$ in Equation (9). This transposition converts each column into a row, representing the correlations of one variable with all other variables, ensuring that the sum of the output edge weights for each variable node is 1.
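The density-aware edge adjustment of equation (8) discussed in this rebuttal amounts to a per-edge reweighting. A minimal sketch with plain nested lists and hypothetical names (the actual model operates on tensors):

```python
def adjust_edges(A, W, D):
    """Equation (8) sketch: G_ij = A_ij * (1 - W_ij * |D_i - D_j|).

    A: knowledge-empowered graph at time t, W: learned sensitivity weights,
    D: per-variable densities at time t. Similar densities leave the edge
    weight close to A_ij; a large density gap shrinks it.
    """
    V = len(A)
    return [[A[i][j] * (1 - W[i][j] * abs(D[i] - D[j])) for j in range(V)]
            for i in range(V)]
```

For example, with densities 2.0 and 4.0 and sensitivity 0.1, an edge weight of 0.5 shrinks to 0.5 * (1 - 0.1 * 2) = 0.4, matching the intuition that a growing density gap weakens the correlation.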
Summary: This paper investigates the problem of irregularly sampled medical time series classification. The core idea is to use a PLM to obtain semantic embeddings for variables, which are used to form a variable correlation graph. Then, the variable correlation graph is dynamically adjusted with the observations, based on which a spatiotemporal graph neural network is used to learn time series representations for classification. Typical experiments on widely used benchmarks are conducted to evaluate model performance. Strengths: 1. The paper is well-written and easy to understand. The authors clearly tell the story of using a variable correlation graph to solve the irregularly sampled medical time series classification problem. 2. The investigation of different PLMs on variable correlation graph discovery is important and useful to the community, although the conclusion is that most PLMs achieve similar results, and even variable names alone can achieve performance comparable to Wikipedia descriptions. 3. The experiment section is well defined with various results illustrated. Weaknesses: 1. The parameter settings for baselines are not fair. In lines 228-230, the authors simply follow the parameters in the references. However, such a setting is not a fair comparison for several reasons: some datasets used in the experiments, like MIMIC-III, are not public, i.e., although we have the same raw data source, the experimented datasets are never exactly the same and may differ greatly in distribution and statistics. This can be validated by comparing the results of Raindrop on the P19 dataset in this paper and in the Raindrop paper. The performance has a significant difference, which implies the parameters should be searched completely. Thus, grid search for baseline parameters is necessary. 2. The choice of GCRNN here is not well clarified. There are many spatiotemporal graph neural networks like STGNN/ST-GCN which could be used here.
Moreover, an ablation study of the graph convolution and temporal modeling in GCRNN should be conducted to evaluate the performance gains. 3. Some claims or statements intend to emphasize the model performance, which however is obviously set by design itself instead of model learning. For example, in Line 337 the authors state "no correlation is observed with DiasABP as it has not been observed yet". Actually, this is not learned but results from the initial masking operation on the adjacency matrix. 4. An important baseline of irregularly sampled medical time series modeling is missed here, i.e., StraTS [1]. 5. The ablation study doesn't identify a clearly effective module of the proposed work. For example, the performance gains of Text, KEE, and DAG are marginal, so are these modules necessary? [1] Self-Supervised Transformer for Sparse and Irregularly Sampled Multivariate Clinical Time-Series, TKDD. Typos: 1. The symbols or bold texts in Equations (10-12) are wrong. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What's the function of the activation function $\sigma$ in equation (7)? In my opinion, this activation function reflects the dynamics of variables, like time decay or exponential increase. So, how is the function determined and what are the effects of different activation functions? 2. How is the parameter complexity of the model? Since each variable requires an independent $W_i$, the computational complexity might be very large. 3. This paper emphasizes the importance of using text information of variables. However, the performance of different text sources seems close. In particular, only using Name can achieve similar or even better performance. From the ablation study in Table 3, we can observe that even without textual representations, the model can achieve almost similar performance. Thus, this part looks useless, which is opposite to common sense. 4. Is Figure 5(a) calculated from f(E) or E?
If it is from f(E), the authors should also provide the correlation graph from E, so that we can evaluate whether model learning is effective. In my experience, even BERT embeddings of these variables can yield a similar correlation graph. Also, why is graph (a) not symmetric? What is the meaning of the different correlations between (FiO2, HR) and (HR, FiO2)? 5. What is the basic time resolution or sampling rate for each dataset? 6. How the static features are combined with the time series is not described in the paper. 7. How the input length and prediction length are determined is not provided. Usually, in mortality prediction, we use the first 48 hours of ICU data as input to predict whether the patient will die during hospitalization. What is the setting for this paper? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors did not comprehensively discuss the limitations of this paper, which does not accord with the paper checklist. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Responses to Reviewer vHK1(2/3) **Q1. The function of the activation function in Equation (7).** A1. On one hand, as you mentioned, this activation function reflects the dynamics of variables, such as time decay or exponential increase. On the other hand, this activation function serves a normalization purpose because the edge weights in the knowledge-empowered complete graph $\boldsymbol{A}$ are normalized values. If we directly used the absolute value of the density to adjust the edge weights, the values might become excessively large or small, which would severely disrupt the basic graph structure learned from textual knowledge. Typical activation functions with normalization capabilities include Sigmoid and Tanh. Since Tanh performed better in our experiments, we chose Tanh. The experimental results for different activation functions are shown in rows W/O $\sigma$ and $\sigma$ Sigmoid in Table 2 of the newly submitted PDF. **Q2. The parameter complexity of the model.** A2. We have calculated the number of model parameters for each dataset, as shown in Table 5 of the newly submitted PDF. As shown, the parameter count of our model is not particularly high. It is on the same order of magnitude as Warpformer's and three orders of magnitude lower than Raindrop's. Although we compute an independent $W$ for each variable, the total $ W \in \mathbb{R}^{V \times I \times O} $ does not equate to the parameter count. $ W $ is derived from the multiplication of two matrices: the variable embedding matrix $ Q \in \mathbb{R}^{V \times q} $ and the weight matrix $ W \in \mathbb{R}^{q \times I \times O} $. The first matrix is computed from textual embeddings, and only the second matrix belongs to the model parameters. The sizes $ q $, $ I $, and $ O $ are hyperparameters, independent of the number of variables, and typically set to be less than 16. Hence, the parameter complexity of our model is not high. 
We will include this parameter complexity analysis in Appendix F.2 on computational costs. **Q3. The importance of using text information of variables.** A3. First, as analyzed in Section 5.4.1, the relative distribution of variable embeddings obtained from different text sources via PLMs is consistent with the relative distribution of the variables' time series patterns. Both reflect the relative distribution of the variables' inherent semantics. The embedding space of variables obtained from different text sources may vary in absolute distribution, but the relative distribution should be similar. So the performance from different text sources should also be similar, which aligns with our expectations and shows that incorporating text information is universally valid. Moreover, as shown in Table 4 of the newly submitted PDF, the overall results of the ablation experiments indicate that the introduction of text contributes to performance improvement and is not useless. More importantly, as demonstrated in the visualizations in Section 5.4.1, if the variable text embeddings are replaced with learnable embeddings, the resulting learned variable embedding space tends to be nearly uniformly distributed, which lacks interpretability. In contrast, using text embeddings ensures that the variable embedding distribution is consistent with domain knowledge, thereby providing the model with strong interpretability. **Q4. Calculation and symmetry issues of the correlation graph** A4. Your observation is correct. Figure 5(a) is computed from $g(E)$. It is true that using the BERT embeddings of these variables can achieve a broadly similar correlation figure, but there are subtle differences induced by $ g(\cdot) $. The introduction of $ g(\cdot) $ is necessary as it not only performs feature reduction but also avoids using a completely fixed prior graph. 
This preserves the model’s ability to adaptively optimize the graph structure based on different data distributions and downstream tasks. To validate the effectiveness of $ g(\cdot)$, we provide the ablation experiment results in row W/O $ g(\cdot)$ in Table 2 of the newly submitted PDF. Regarding the asymmetry of Figure 5(a): As indicated in Eq.(4), after computing the cosine similarity between variable node embeddings, we apply the Softmax function to normalize each column to obtain the graph. This normalization ensures that the diagonal elements correspond to the maximum values in their respective columns, but not necessarily the maximum values in each row. We intentionally set the graph to be asymmetric because some variable correlations are directional. For example, a decrease in insulin level may lead to an increase in blood glucose, but an increase in blood glucose does not necessarily lead to a decrease in insulin level. This is why the edge weight from insulin to blood glucose is not equal to the weight from blood glucose to insulin. The same reasoning applies to HR and FiO2. **Q5. The basic time resolution or sampling rate for each dataset.** A5. In ISMTS, there are significant differences in sampling rates among different samples, variables, and periods. As a result, calculating a basic sampling rate for an entire dataset is challenging and sometimes not very informative. Instead, more relevant statistics for ISMTS datasets include metrics such as missing rate and maximum length. **Q6. How are the static features combined with the time series.** A6. We apologize for the omission. We follow the approach used in Raindrop to incorporate the static features. Specifically, static features are first mapped to static vectors through a linear layer and then concatenated with the feature vectors of the time series before being input into the classifier. --- Rebuttal 2: Comment: # Responses to Reviewer vHK1(3/3) **Q7. 
How are the input length and prediction length determined. ** A7. We apologize for the omission. As you mentioned, for the MIMIC-III, P12, and PhysioNet datasets, we use ICU data from the preceding 48 hours as input to predict mortality during hospitalization. For the P19 dataset, we use up to 60 hours of ICU data to predict whether sepsis will occur within the subsequent 6 hours. We will include these details in the "Appendix B: Datasets" section of the revised version. **Q8. Limitations** A8. Although our proposed method effectively guides ISMTS modeling through domain knowledge from the text modality, it has some limitations. The backbone of our model is based on an RNN architecture, which inherently has a sequential computation characteristic that can be a bottleneck in terms of runtime. Additionally, our method is specifically tailored for medical applications, and its performance may be limited in other ISMTS applications, such as human activity recognition, where variables may lack domain knowledge and thus cannot generate high-quality text descriptions. We will include a detailed discussion of these limitations in the revised version of the paper. --- Rebuttal 3: Comment: Dear Reviewer vHK1, We hope this message finds you well. We appreciate your diligent efforts in evaluating our paper. We have responded in detail to your questions. As the discussion period will end soon, we would like to kindly ask whether there are any additional concerns or questions we might be able to address. Once more, we appreciate the time and effort you've dedicated to our paper. Best regards, Authors
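As an illustration of the construction described in A4 above (cosine similarity between variable node embeddings followed by a column-wise Softmax, Eq. (4)), the following sketch shows why the resulting graph is asymmetric even though cosine similarity itself is symmetric. The embedding values and dimensions are toy assumptions, not the paper's.

```python
import numpy as np

def correlation_graph(E):
    """Cosine similarity between variable node embeddings, followed by a
    column-wise softmax, as described in A4."""
    En = E / np.linalg.norm(E, axis=1, keepdims=True)   # unit-norm rows
    S = En @ En.T                                        # symmetric cosine similarity
    expS = np.exp(S - S.max(axis=0, keepdims=True))      # numerically stable softmax
    return expS / expS.sum(axis=0, keepdims=True)        # normalize each column

rng = np.random.default_rng(0)
E = rng.normal(size=(4, 8))   # 4 toy variables with 8-dim embeddings (illustrative)
A = correlation_graph(E)

assert np.allclose(A.sum(axis=0), 1.0)   # every column is a probability distribution
assert not np.allclose(A, A.T)           # column normalization breaks symmetry
# each diagonal entry is the maximum of its column, as stated in A4
assert np.all(np.diag(A) >= A.max(axis=0) - 1e-12)
```

Because the similarity matrix is symmetric, any asymmetry in the final graph comes purely from the per-column normalization constants, which is consistent with the insulin/blood-glucose and HR/FiO2 examples in the rebuttal.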
Rebuttal 1: Rebuttal: # Global Response We sincerely thank all the reviewers for their consistent positive feedback regarding the significance of our work, the novelty of our approach, the thoroughness of our experiments and analyses, and the quality of our presentation. Additionally, we greatly value the reviewers' insightful and constructive comments, which have significantly contributed to enhancing and refining this work. Here, we will respond to each of the weaknesses and questions raised by the reviewers in order to address the reviewers' concerns to the greatest extent possible. Due to the limited number of pages and characters allowed for rebuttal, we will begin our responses to Reviewer **vHK1** on this global rebuttal page. The responses to reviewers **yFgU, 68iN, and n8KK** can be found below the respective official review pages. # Responses to Reviewer vHK1(1/3) Dear reviewer vHK1, We thank you for your careful reading of our paper and your valuable comments on the details of our work. The following are our responses; please download the 1-page PDF document we have newly uploaded at the lower-left corner of this rebuttal page for easier reference. **W1. Grid search for baseline parameters is necessary.** A1. Due to time constraints, we performed a grid search across the four datasets around the optimal parameter range for the three most recent SOTA baselines. The results are shown in Table 1 of the newly submitted PDF. As you mentioned, some baselines did indeed show some improvement in performance on certain datasets after the grid search. However, our method still consistently maintained leading performance across all datasets. In comparison, although some baselines demonstrated competitive performance, they still exhibited significant disadvantages on a few datasets, for instance, Warpformer on the Physionet dataset, $DGM^{2}-O$ on the MIMIC-III dataset, and Raindrop on the Physionet dataset. Therefore, the effectiveness of our method remains evident. 
**W2. The choice of GCRNN here is not well clarified./The ablation study of graph convolution and temporal modeling.** A2. Firstly, our main contribution is not the improvement of the GNN backbone network but the learning of variable-specific parameter spaces and dynamic graphs. Therefore, what we need is a simple, effective, and easy-to-adapt backbone GNN for ISMTS. Existing spatio-temporal graph neural networks can be roughly classified based on their temporal module into three main categories: Convolution-Based, Attention-Based, and Recurrence-Based. - Convolution-Based methods capture temporal dependencies using TCNs. However, the fixed-step kernel size cannot perceive different time spans effectively, making them unsuitable for ISMTS. - Attention-Based methods face challenges due to the misalignment of sampling times across different variables in ISMTS. Computing variable-specific attention is difficult to parallelize in the variable dimension, and their computational complexity scales quadratically with the sequence length, limiting scalability. Furthermore, attention outputs have the same dimensions as inputs, requiring additional design for extracting final sample features from observation-level representations. - Recurrence-Based methods update variable latent states based on whether they are observed, which allows parallel computation in the variable dimension and suits ISMTS. The final sample feature representation can be directly obtained from the latent states at the last observation time, making this approach simple, effective, and well-suited for ISMTS. Thus, we chose GCRNN as the backbone GNN for this paper. Regarding the ablation study of graph convolution and temporal modeling for GCRNN, we set the graph to the identity matrix and remove the time embedding in the structured encoding, respectively. The results are shown in rows W/O GCN and W/O TE in Table 2 of the newly submitted PDF. 
The results show that both the introduction of graph convolution and time embedding are necessary. **W3. Some claims or statements intend to emphasize the model performance, such as Line 337.** A3. We apologize for the misunderstanding caused by our statement. We have revised the phrasing in Line 337 to: "Around t = 4, HR shows a strong correlation with NIDiasABP, while the correlation with DiasABP is masked as 0 since DiasABP has not been observed yet." **W4. An important baseline of ISMTS modeling (StraTS) is missed here.** A4. Thank you for your reminder. We will include StraTS in the "Irregularly Sampled Multivariate Time Series Modeling" section of the related work in future versions. Additionally, we have added comparative experiments with StraTS. The experimental results are shown in Table 3 of the newly submitted PDF. **W5: The ablation study doesn't identify a very effective module of the proposed work.** A5. We have calculated the average performance improvement brought by the three modules across the four datasets, as shown in Table 4 of the newly submitted PDF. Overall, all three modules contribute to performance improvement. Given that the baseline values of AUROC are relatively high (typically above 85%), the absolute value of the AUROC improvement may appear small. In this context, a single module that achieves an AUROC improvement greater than 0.5% is significant. Additionally, all three modules bring more than a 1.5% ↑ in AUPRC, which has been shown to be more sensitive to imbalanced samples. Therefore, we consider these modules essential and indispensable components of our model. **W6. The symbols or bold text in Equations (10-12) are wrong.** A6. We have removed the bold text styling from the symbols in Equations (10-12). 
$$ r^{(t)}=\sigma(\Theta_{r} \star_{G^{(t)}} [X^{(t)} \| H^{(t-1)}] + b_{r}), $$ $$ u^{(t)}=\sigma(\Theta_{u} \star_{G^{(t)}} [X^{(t)} \| H^{(t-1)}] + b_{u}), $$ $$ C^{(t)}=\tanh(\Theta_{C} \star_{G^{(t)}} [X^{(t)} \| (r^{(t)} \odot H^{(t-1)})] + b_{C}), $$ Pdf: /pdf/acf4a8cd29778f93c55e0920cc80690ef1ccc6eb.pdf
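A minimal sketch of the gated graph-convolutional recurrence in Equations (10-12). Two assumptions for illustration: the graph convolution $\Theta \star_{G}$ is approximated by a one-hop aggregation $G Z \Theta$, and the final state update uses the standard GRU combination $H^{(t)} = u^{(t)} \odot H^{(t-1)} + (1-u^{(t)}) \odot C^{(t)}$, which is not shown in the rebuttal. All dimensions are toy values.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gconv(G, Z, Theta, b):
    """One-hop graph convolution: aggregate node features over G, then project."""
    return G @ Z @ Theta + b

def gcrnn_cell(G, X, H, p):
    """One step of a gated graph-convolutional RNN cell (cf. Eqs. 10-12)."""
    Z = np.concatenate([X, H], axis=1)                      # [X^(t) || H^(t-1)]
    r = sigmoid(gconv(G, Z, p["Wr"], p["br"]))              # reset gate, Eq. (10)
    u = sigmoid(gconv(G, Z, p["Wu"], p["bu"]))              # update gate, Eq. (11)
    Zr = np.concatenate([X, r * H], axis=1)                 # [X^(t) || r^(t) ⊙ H^(t-1)]
    C = np.tanh(gconv(G, Zr, p["Wc"], p["bc"]))             # candidate state, Eq. (12)
    return u * H + (1.0 - u) * C                            # standard GRU combination

rng = np.random.default_rng(0)
V, dx, dh = 5, 3, 4                      # 5 variable nodes, toy feature sizes
G = np.eye(V)                            # identity graph, as in the W/O GCN ablation
X, H = rng.normal(size=(V, dx)), rng.normal(size=(V, dh))
p = {k: rng.normal(scale=0.1, size=s) for k, s in
     [("Wr", (dx + dh, dh)), ("Wu", (dx + dh, dh)), ("Wc", (dx + dh, dh)),
      ("br", (dh,)), ("bu", (dh,)), ("bc", (dh,))]}
H_next = gcrnn_cell(G, X, H, p)
assert H_next.shape == (V, dh)
```

Setting `G = np.eye(V)` reproduces the W/O GCN ablation mentioned in A2, where graph aggregation is disabled and each variable evolves independently.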
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
A Bayesian Approach to Data Point Selection
Accept (poster)
Summary: The paper proposes a method for data point selection (DPS) in the setting where only a little data from the target (or meta) distribution is available, together with a lot of data from the training distribution. DPS aims to find the weights of data points in the training distribution such that the model parameters that minimise the loss on the correspondingly weighted training distribution also minimise the loss on the target distribution. The method proposed for DPS in this paper consists of treating the problem in a Bayesian way and doing posterior inference on both (data) weights and model parameters simultaneously, using stochastic gradient Langevin dynamics (SGLD) to approximate the posterior. The crucial trick is to view the data from the target distribution as generated by the model (parameters), which allows for the inference. They show the effectiveness of their approach on 4 experiments and compare against 2 DPS baselines (bilevel optimisation (BLO) and contrastive data selection (CDS)) as well as a range of naive baselines (mixing available training and target distributions to different extents). - The first experiment is on data balancing for MNIST, where the class balance differs significantly between training (imbalanced) and target (balanced) distribution. - The second experiment is on data denoising for CIFAR, where the training distribution (noisy) differs from the target distribution (clean) in the degree of noisiness - The third experiment is on efficient learning on WebNLG, where there are differences in the topic or domain of the data between training distribution (Artist, City) and target distribution (Athlete, Politician, Astronaut) - The fourth experiment is on instruction finetuning (IFT) for LLMs, using an OpenLlama3B model. The proposed method compares favourably in all experiments in the main part of the paper. 
In the appendix, results for the second experiment are additionally shown for longer training times, in which case BLO outperforms the proposed method. Strengths: - The idea of framing and solving DPS in a Bayesian way seems novel and innovative - The paper is well written in general and is easy to read and follow - The proposed method achieves good empirical results as displayed in the paper, in particular also on the relevant use case of IFT for LLMs - Compared to BLO, the proposed method seems to have lower memory and compute requirements Weaknesses: - For the most part, the method proposed is explained well and in detail. However, when it comes to a crucial aspect in the implementation that is subsequently used in all experiments (the weight network), details are brushed under the carpet as a 'straightforward modification'. However, for me the causal graph in Figure 1 seems to be different if the weights become a function of $z^t$, let alone $\theta$ if model embeddings $f_{\theta}(z^t)$ are used instead of raw data $z^t$ like in the experiments. This should be explained in more detail (see also Question about this below). - In the main part of the paper, performance of BLO is portrayed as clearly falling behind the proposed method and other methods in terms of accuracy on the CIFAR experiment (Fig 2, Fig 3, line 266). However, in Fig 7 in the Appendix, it can be seen that with longer training, BLO is at least on par (symmetry noise) or even slightly better (asymmetry noise) in terms of accuracy. It would be better if this were already mentioned in the main part of the paper - Assuming a non-negativity constraint on the weights and the loss, in Eq (4) and subsequently Eq (7) the gradient wrt $w$ will be negative for the data-dependent term, and thus each gradient update only points away from $0$ through the gradient of the prior term $\log p(w)$ (and the noise term at the end). 
Therefore, preventing collapse to $0$ weights (as described in line 148) seems to crucially depend on a strong enough prior as well as the impact constant and $N_t$. At least for the not-so-compute-intensive experiments on MNIST and CIFAR, it would be good to see an ablation on the effect of these impact constants and the sparsity-governing prior parameter $\beta$. - The limitations are only spelled out after the page limit Technical Quality: 2 Clarity: 3 Questions for Authors: - Which part in the prior formulation or in the optimisation ensures that the weights are non-negative? Or are they intentionally not constrained by the method in that sense? - How does the causal graph in Figure 1 change if a weight network that uses embeddings of training data as inputs is being used like in the experiments? How does posterior inference change in this case? - Why is the weight network being used in the experiments if, in the ablation in the appendix, the element-wise weights version of the proposed approach outperforms it? - Why are the inputs to the weight network different for the MNIST experiment (image embeddings) and the CIFAR experiment (image embedding + one-hot encoded label)? - When comparing to BLO in terms of compute efficiency, have the authors taken into account proposals for making the Hessian-vector product calculations more efficient for BLO (see https://iclr-blogposts.github.io/2024/blog/bench-hvp/ for example)? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 4 Limitations: - Limitations are discussed, but after the page limit Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## [Re: Graphical Model and Derivations] See the General Rebuttal: * [Re-1: Derivations for Eq 4] * [Re-2: Graphical Model and Derivations with Weight NN] ## [Re: Hyperparameter Analysis] See [Re-3: Hyperparameter Analysis] in the General Rebuttal ## [Re: Ensuring the Non-negative Weights] Lines 205-206 of the paper indicate that in BADS, the weight network is implemented as a single-layer Feedforward Neural Network (FNN) with a sigmoid activation function. The sigmoid function ensures that the output weights are valid (i.e., non-negative). We will highlight this in the revised paper. ## [Re: Why Weight NN] In the paper, specifically in lines 170-174, we note that this weight network approach is effective for smoothing and regularizing output weights due to the smooth functional properties of neural networks. Importantly, if additional training samples are added after the model has been trained, the learned weight network can be used to assign weights or select samples from the new dataset without needing to retrain the entire model from scratch. Additionally, obtaining precise element-wise weights typically requires training the model on the entire training set for multiple epochs, which can be costly, especially for fine-tuning large foundation models. By using the weight network, we do not even need to train the model for one full epoch. After training on a subset of the data for one epoch, the weight network can be employed to assign weights to the remaining examples. We can then select the examples with higher predicted weights from this set and continue training the model on these selected samples. This approach further enhances training efficiency. We plan to conduct related experiments in future work. We will explain this in more detail in the revised paper. ## [Re: Different Weight NN for MNIST and CIFAR] In the MNIST scenario, we focus on data balancing, where the minority and majority examples can be distinguished solely by the input images. 
In contrast, the CIFAR scenario targets data denoising. Here, we generated noisy examples by randomly sampling class labels for the input images (L242-245), making it impossible to determine whether an example has a correct label (clean examples) or an incorrect label (noisy examples) just by examining the input image. As a result, the weight network requires both image and label information for accurate decisions. ## [Re: Efficient Hessian Vector Products (HVPs)] We use PyTorch's (v2.3) automatic differentiation, so there is no explicit Hessian computation. Specifically, we use reverse-over-reverse differentiation, and hence it is possible that we could get further gains using other methods (e.g., reverse-over-forward) for BLO. Nevertheless, regardless of how the HVP is computed in practice, BLO is still more expensive than BADS, as BADS doesn't require 2nd-order computation. --- Rebuttal Comment 1.1: Comment: I thank the authors for answering some of my questions (non-negativity of weights, different weight NN inputs for MNIST and CIFAR, why using the weight NN). I also acknowledge that they added some ablation on WebNLG about the prior parameters $\beta$ and $\sigma$ as well as a derivation on the inference when a weight network is being used. Regarding this, however, I have 2 questions: - How does inference change if instead of $z^t$ model embeddings $f_{\theta}(z^t)$ are used like in the experiments of the paper? Does that not cause a circular dependence between $\theta$ and $w$? - Why is the Test/BLEU for $\beta=0.05$ in the ablation at 44 after 40k vs. only 36 in Figure 2 of the paper? --- Rebuttal 2: Title: RE: Official Comment by Reviewer 5MKv Comment: We appreciate the reviewer’s engaging discussion and sharp questions. **[RE: Circular Dependence]** Strictly speaking, if we use the embedding $f_\theta(z^t)$ in the weightnet, then the weight $w$ becomes a function of both $\theta$ and $D_t$, thus we can have a circular dependency as the reviewer pointed out. 
However, we would rather see this case in the following way: Suppose we were able to access the ideal oracle embedding function $f^*$. Then ideally the weightnet would use this $f^*$ internally to output the weight $w$, i.e., $w = \mathrm{weightnet}(f^*(z^t); \phi)$. This way, as $f^*$ is constant, our graphical model in Fig.1(b) of the attached PDF is still valid, and there is no circular dependency. In practice, however, since we cannot access $f^*$, one reasonable way to approximately mimic it is to use the current estimate $f(z^t; \theta)$ as an *online plugin proxy for* $f^*$. We hope this answers your question. FYI, in the implementation, we apply stop-gradient to $\theta$ in the weightnet update, which is in line with our argument of an *online plugin proxy for* $f^*$. **[RE: Test/BLEU]** For WebNLG, we did experiments on two different setups: * **The main setup** (the performance is shown in the *top-right plot in Figure 2 of the paper*): We use the original WebNLG domain adaptation setup (16 domains for training and 3 domains for testing) to report the model's generation performance. Details are explained in *Appendix A*. * **Ablation setup (called 2-domain)** (the performance is shown in *Figure 8 in the Appendix*): The purpose of this setup is to easily assess the effectiveness of data selection, as visualizing the data selection across 16 domains can be challenging. Details and the rationale for this setup are explained in lines 291-301 of the paper. Generally, the Ablation setup is somewhat simpler than the main setup, resulting in a higher test BLEU score. In the Hyperparameter Analysis section of the rebuttal, we did experiments using the Ablation setup to visualize how hyperparameters affect data selection behavior (shown in Figure 3 in the rebuttal PDF). So, reviewers comparing Figure 1 in the rebuttal PDF to Figure 8 in the paper will observe that the test BLEU score for $\beta=0.05$ consistently hovers around 44. 
If the reviewer has any further questions, please feel free to reach out. --- Rebuttal Comment 2.1: Comment: Thanks, I now understand the discrepancy between Figure 2 in the paper and the ablation in the rebuttal PDF. I definitely think some discussion around the hyperparameter sensitivity based on the ablations in the rebuttal PDF should be part of the main paper, as this is important to guide future work around your method. Regarding the circular dependence when using embeddings, I am not sure I follow the authors' arguments and would encourage them to think about the effects of using a changing embedding function as an approximation of some fixed embedding function $f^*$. But overall many of my doubts or concerns have been answered and I definitely think the Bayesian perspective on DPS offers a novelty that makes this work stand out, so I have increased my original score from 6 to 7. --- Rebuttal 3: Comment: Regarding the embedding functions: In practice, using a fixed embedding function requires an additional embedding model, which increases memory usage. Additionally, the embedding model may require tuning, which would add extra time. We could consider including experiments to empirically understand the gap between using a changing function approximation and a fixed embedding function. We sincerely appreciate the reviewer’s insightful and valuable feedback, as well as their time. We will incorporate the information provided in the rebuttal into the revised paper.
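The weight-network design discussed in this thread (a single-layer FNN with a sigmoid output, fed stop-gradient embeddings as an online plug-in proxy for $f^*$) can be sketched as follows. The embedding dimension, batch size, and random inputs are illustrative assumptions, not values from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def weightnet(embeddings, phi_W, phi_b):
    """Single-layer FNN with a sigmoid output: maps each training-point
    embedding to a weight in (0, 1), which guarantees non-negativity."""
    return sigmoid(embeddings @ phi_W + phi_b).squeeze(-1)

rng = np.random.default_rng(0)
# f_theta(z^t) is treated as a fixed ("stop-gradient") plug-in here, so the
# weightnet sees the embeddings as constants rather than functions of theta:
emb = rng.normal(size=(32, 16))          # 32 training points, 16-dim embeddings
phi_W, phi_b = rng.normal(size=(16, 1)), np.zeros(1)
w = weightnet(emb, phi_W, phi_b)
assert w.shape == (32,) and np.all((w > 0) & (w < 1))
```

The sigmoid answers the non-negativity question directly, and the stop-gradient treatment mirrors the "online plugin proxy" argument: the weightnet update never backpropagates through $\theta$.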
Summary: This work proposes a Bayesian approach to the data point selection task. Specifically, the authors introduce importance weights for each training point and derive the joint posterior probability of the instance-wise weights and network parameters. The parameters and weights are then sampled iteratively based on the stochastic-gradient MCMC technique. Experiments on computer vision and natural language processing have been conducted to show the efficacy of the proposed method. Strengths: 1. This work views the data point selection problem from the perspective of Bayesian theory and introduces the posterior distributions of the instance-wise weights and network parameters. 2. Experimental results demonstrate the efficacy of the proposed method. Weaknesses: 1. I suggest the authors provide a detailed derivation for Eq. 4. 2. Eq. 5 still requires computing the normalizing constant in Eq. 4, which would bias the gradient. 3. The authors claim that “Our method straightforwardly achieves sparsity on the w weights allowing efficient sample selection unlike”. I hope the authors provide more analysis and evidence about the sparsity of the instance-wise weights. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the weaknesses Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## [Re: Derivations for Eq 4] See [Re-1: Derivations for Eq 4] in the General Rebuttal ## [Re: Eq5 and Normalizing Constant] No. The normalising constant in Eq.(4) is $p(D_m|D_t)$ (please see our derivations, especially Eq.(R4) in our rebuttal above), and this normalising constant has no dependency on $\theta$ and $\lambda$. Thus, after taking the gradient with respect to $\theta$ and $\lambda$, the normalising constant simply disappears. This technique is well known for Bayesian posterior sampling [Welling & Teh] and also for sampling from energy-based models, e.g., Y. Du and I. Mordatch, "Implicit Generation and Modeling with Energy Based Models," NeurIPS 2019. ## [Re: Sparsity] Although we have observed a clear distinction in the learned weights between relevant and irrelevant data points (e.g., our proof-of-concept experiments), the weights are not as pronounced as a 0/1-like binary selection. We believe that this issue might be diminished by incorporating an additional prior/regularizer that encourages 0/1 weight values. We will investigate further along this line in our future work. --- Rebuttal 2: Comment: Dear reviewer oLJs, We appreciate your thoughtful and detailed feedback once again. Since today is the final day for the rebuttal discussion, we wanted to check if you have any remaining questions or suggestions that we can address before the deadline. Thank you for your time.
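The point about the normalizing constant can be illustrated with a generic SGLD step: the update only ever uses the gradient of the unnormalized log-posterior, so a parameter-independent normalizer such as $p(D_m|D_t)$ drops out. This is a toy sketch on a 1-D Gaussian target, not the paper's joint update in Eqs. (6-7); the step size and step count are arbitrary.

```python
import numpy as np

def sgld(grad_log_post, theta0, eta, n_steps, rng):
    """Stochastic gradient Langevin dynamics: each step uses only the
    gradient of the *unnormalized* log-posterior plus Gaussian noise, so
    the theta-independent normalizing constant never needs computing."""
    theta, samples = theta0, []
    for _ in range(n_steps):
        noise = rng.normal(scale=np.sqrt(eta))
        theta = theta + 0.5 * eta * grad_log_post(theta) + noise
        samples.append(theta)
    return np.array(samples)

# Toy target: N(2, 1).  log p(theta) = -(theta - 2)^2 / 2 + const, and the
# constant (the log-normalizer) vanishes under differentiation.
rng = np.random.default_rng(0)
samples = sgld(lambda t: -(t - 2.0), theta0=0.0, eta=0.05, n_steps=20000, rng=rng)
burned = samples[5000:]                    # discard burn-in
assert abs(burned.mean() - 2.0) < 0.3      # chain concentrates around the mode
```

The same cancellation is what makes Eq. (5) tractable in the rebuttal's argument: differentiating the log-posterior with respect to the sampled variables removes $\log p(D_m|D_t)$ entirely.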
Summary: This paper proposes a new Bayesian method for Data Point Selection (DPS), called BADS. DPS aims to select training data points that optimize performance on downstream tasks. Instead of relying on bi-level optimization (BLO), BADS frames DPS as a posterior inference problem. The method uses a weight network to learn the importance of each data point and proposes a novel update rule for jointly updating model parameters and data point weights using Stochastic Gradient Langevin Dynamic sampling. Experiments on MNIST, CIFAR, and WebNLG datasets show that BADS outperforms or matches existing DPS methods, including BLO, in terms of accuracy and efficiency. Strengths: - Originality: This paper introduces a novel perspective on DPS by using a Bayesian framework instead of the BLO approach. This offers a new way to think about data selection and its connection to posterior inference. The paper also leverages Langevin sampling for optimizing the derived posterior inference which adds to the novelty. - Clarity: The paper is well-written and easy to understand. The authors clearly explain their method and the motivations behind their design choices. The final derived method is simple and intuitive. Weaknesses: - Limited theoretical analysis: The paper focuses on the empirical performance of BADS and lacks a deep theoretical analysis. Providing more theoretical insights on the convergence and optimality properties of BADS would strengthen the paper, especially since the authors claim that their method, unlike BLO, has theoretical convergence guarantees. - Hyperparameter sensitivity: The paper mentions that BADS has several hyperparameters that require careful tuning. A more in-depth discussion on the impact of these hyperparameters and potential strategies for automatic tuning would be beneficial. 
- Comparison and claims: The paper primarily compares BADS with BLO, but there are other methods that deal with selecting data to learn on, such as offline pruning methods [1], and online and offline batch selection [2, 3], among others. In addition, some of these methods have already been used for large-scale model training, such as [4, 5]. Some important equations and their corresponding assumptions have been omitted. In particular, equation (4) is not derived and at first glance seems to assume independence of $w$ and $\mathcal{D}$, but it is not clear if this still holds after the introduction of the weight network. 1. https://arxiv.org/abs/2206.14486 2. https://arxiv.org/abs/2312.05328 3. https://arxiv.org/abs/2406.10670 4. https://arxiv.org/abs/2402.09668 5. https://arxiv.org/abs/2406.17711 Technical Quality: 3 Clarity: 3 Questions for Authors: Can the authors provide any theoretical guarantees on the convergence of BADS? What are the conditions under which BADS finds an optimal solution? What is the impact of different hyperparameters, especially the sparsity level parameter and impact constants? How robust is BADS to hyperparameter variability? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors have adequately addressed the limitations proposed in their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## [Re: Convergence Analysis] We provide a theorem showing that our SGLD algorithm converges to the true posterior to some extent. Our analysis is based on (Zou et al. 2021), where we make some adjustments for our case. **Assumption 1** (from Assumption 4.3 of (Zou et al. 2021)) There exist $m>0$ and $b\geq 0$ such that $$\bigg\langle\nabla_{\theta,w}\log p(\theta,w|D_t,D_m),[\theta, w]\bigg\rangle\geq m ||[\theta, w]||_2^2-b$$ holds for any $\theta,w$. **Assumption 2** (from Assumption 4.4 of (Zou et al. 2021)) Any minibatch gradient of the log-posterior is Lipschitz continuous. I.e., there exists a constant $L$ such that for any $z_i^t\in D_t$ and $z_j^m\in D_m$, $$ ||\nabla_{\theta,w} A(\theta,w)-\nabla_{\theta',w'} A(\theta',w') ||_2\leq L ||[\theta, w]-[\theta', w']||_2 $$ holds for any $\theta,w,\theta',w'$, where $A(\theta,w)=\log p(w)+\log p(\theta)-N_t w_i l(z_i^t;\theta)-N_m l(z_j^m; \theta)$. Then we have the following convergence theorem -- we omit detailed formulas due to space limits; they can be found in (Zou et al. 2021). **Theorem** (from Theorem 4.5 of (Zou et al. 2021)) Let $B$ be the batch size and $\eta$ the step size. Suppose Assumptions 1 and 2 hold. For any $\epsilon\in(0,1)$, with the initial iterate satisfying a certain ball constraint, the distribution $\mu_K^{SGLD}$ of the $K$-th iterate in our SGLD iterations Eq.(6-7) satisfies: $$||\mu_K^{SGLD}-p(\theta,w|D_t,D_m)||_{TV} \leq C(B,\eta,K,\epsilon)+\epsilon/2$$ where $C(B,\eta,K,\epsilon)$ is a constant that can be reduced by adjusting $B$, $\eta$ and $K$, and $||\cdot||_{TV}$ stands for the total variation distance. (Zou et al. 2021) "Faster Convergence of Stochastic Gradient Langevin Dynamics for Non-Log-Concave Sampling", UAI 2021. ## [Re: Hyperparameter Analysis] See [Re-3: Hyperparameter Analysis] in the General Rebuttal ## [Re: Comparison and Claims] We thank the reviewer for bringing the recent related work to our attention.
We recognize that there are three types of DPS setups based on supervision signals: * **Unsupervised DPS**: DPS **without** the guidance of a held-out meta set involves models selecting a subset from a training set based on specific hypotheses, such as "challenging examples improve model performance". Curriculum learning falls into this category. This method is also widely used in pre-training data point selection (**[1]** and **[4]**). * **Self-supervised DPS**: DPS **with** a held-out meta set. However, the meta set **does not share the same data distribution** as the targeted test set. Typically, the examples in the meta set are also selected from the training set based on specific hypotheses, such as "learnable examples enhance model performance". **[2]**, **[3]** and **[5]** fall into this category. * **Meta-set guided DPS**: DPS **with** a small meta set that **shares the same distribution** as the test set, in order to train a model that performs well specifically on the target test set. The test set may encompass one or multiple downstream domains or tasks. This DPS is closely related to meta learning, domain adaptation, and transfer learning. Existing approaches are predominantly based on **BLO**. **Our work**, as explained in Section 2.1, focuses on this type. To broaden our comparison, we choose studies **[4]** and **[2]** from the first two categories above respectively and evaluate them using our experimental setups. Here are the results: * **Compared to Self-supervised DPS**: * To guide DPS, **[2] (ClassAct)** investigates using a reference model trained on a meta set chosen from the training set based on predefined learnability metrics. To fairly compare the selection mechanism, we replaced their meta set using our meta set. * **Figure 4 in the PDF** shows that ClassAct performs worse than BADS with a large margin across all three proof-of-concept scenarios. It completely failed in the Data Balancing and Efficient Learning scenarios. 
* **Figure 5 in the PDF** shows that the weights predicted by ClassAct for the training examples are unreliable. * **Table 1** below shows that BADS outperforms ClassAct on 3 out of 4 LLM benchmarks. * **Table 2** below shows that BADS requires significantly less GPU memory and computing time compared to ClassAct. * **Compared to unsupervised DPS**: * Paper **[4] (AskLLM)** selects examples from the training set by prompting LLMs. We compare to it only in the LLM use case for two reasons: * Paper **[1]** already showed that unsupervised DPS can amplify class imbalances. * Open-source LLMs typically do not handle vision input. * For a fair comparison, we compare BADS with an online variant of [4] (dubbed **AskLLM-O**) where we used the pretrained OpenLLaMA 3B to obtain the sampling score for each training sample. * **Table 1** below shows that BADS outperforms AskLLM-O on 2 out of 4 LLM benchmarks. * **Table 2** below shows that BADS requires significantly less compute and memory compared to AskLLM-O. AskLLM-O also requires an offline scoring of all training samples, which took 13932 seconds (3.87 hours) in our setup on a single NVIDIA A40 GPU. **[Table 1]** DPS performance in LLM use case |Models | MMLU | ARCc | ARCe | HellaSwag | |-----| ------------- | ------------- | ------------- | ------------- | | BADS | 26.59 | 34.39 | 67.00 | 52.91 | | ClassAct | 24.90 | 31.91 | 67.34 | 52.09 | | AskLLM-O | 25.55 | 35.15 | 66.88 | 53.71 | **[Table 2]** DPS memory and time usage over 100 steps |Models | Avg-Memory (MB) | Time (s) | |-----| ------------- | ------------- | | BADS | 14694.58 | 61.03 | | ClassAct | 34821.36 | 269.81 | | AskLLM-O | 32908.89 | 115.24 (+13932) | We also note that: * An online variant of [3] is similar to the CDS baseline in our paper.
* [3] and [5] were released after the NeurIPS deadline ## [Re: Graphical Model and Derivations] See the General Rebuttal: * [Re-1: Derivations for Eq 4] * [Re-2: Graphical Model and Derivations with Weight NN] --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for thoroughly addressing my comments and quickly incorporating the relevant comparisons to related work and baseline comparisons where relevant. This new information has improved my understanding and confidence in the impact of the paper. --- Rebuttal 2: Comment: We sincerely appreciate the reviewer’s insightful and valuable feedback, as well as their time. We will incorporate the information provided in the rebuttal into the revised paper.
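The jointly updated SGLD scheme discussed in these rebuttals (model parameters and example weights updated together, cf. Eq. (6)-(7)) can be illustrated with a small numpy toy. This is a sketch under heavy simplifying assumptions, not the authors' implementation: the model is a scalar location parameter with squared loss, the "weight net" is a single-parameter sigmoid over an invented scalar feature, priors are dropped, and the injected noise is kept tiny so the trend is easy to read off. Training points far from the meta distribution should end up with weights near 0.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (illustrative only): first two training points are "irrelevant",
# the meta set is concentrated near 1.0.
D_t = np.array([-3.0, -2.5, 1.0, 1.2])  # training points
f   = np.array([-1.0, -1.0, 1.0, 1.0])  # scalar feature fed to the weight net
D_m = np.array([0.9, 1.1])              # meta set

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def loss(theta, z):
    return (theta - z) ** 2  # per-example loss l(z; theta)

eta, noise = 1e-3, 1e-3  # step size; noise kept tiny for readability
theta, phi = 0.0, 0.0    # model parameter and weight-net parameter
for _ in range(8000):
    w = sigmoid(phi * f)  # per-example weights w_i = weightnet(z_i; phi)
    # Gradient of the weighted training loss plus the meta loss w.r.t. theta
    # (priors omitted in this toy):
    g_theta = -2.0 * (w * (theta - D_t)).sum() - 2.0 * (theta - D_m).sum()
    # d/dphi of -sum_i w_i l(z_i; theta), using dw/dphi = w (1 - w) f:
    g_phi = -(w * (1.0 - w) * f * loss(theta, D_t)).sum()
    theta += 0.5 * eta * g_theta + np.sqrt(eta) * noise * rng.normal()
    phi   += 0.5 * eta * g_phi   + np.sqrt(eta) * noise * rng.normal()

w = sigmoid(phi * f)
print("weights:", np.round(w, 2), "theta:", round(theta, 2))
```

In this toy, the weights of the two points far from the meta set collapse toward 0, the weights of the two nearby points approach 1, and `theta` settles near the meta set, mirroring the qualitative behavior described in the rebuttals.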
null
null
Rebuttal 1: Rebuttal: We sincerely thank all reviewers for their thoughtful and detailed feedback. ## [Re-1: Derivations for Eq 4] @Reviewer **JeEz**, **oLJs**, and **5MKv** From the graphical model in **Fig. 1 in PDF**, $D_m \perp (w, D_t) \ | \ \theta$ is the only conditional independence assumption we make. $$ p(\theta,w|D_t,D_m)=\frac{p(\theta,w,D_m|D_t)}{p(D_m|D_t)} $$ $$ \ \ =\frac{1}{p(D_m|D_t)}\cdot p(w)\cdot p(\theta,D_m|w,D_t) $$ $$ \ \ =\frac{1}{p(D_m|D_t)}\cdot p(w)\cdot p(D_m|\theta,w,D_t)\cdot p(\theta|w,D_t) $$ $$ \ \ = \frac{1}{p(D_m|D_t)}\cdot p(w)\cdot p(D_m|\theta) \cdot p(\theta|w,D_t) \ \ (R1) $$ $$ \ \ \propto p(w)\cdot p(D_m|\theta)\cdot p(\theta|w,D_t) \ \ (R2) $$ where in (R1), we use $D_m \perp (w, D_t) \ | \ \theta$, and in (R2) $\frac{1}{p(D_m|D_t)}$ is regarded as a constant since it does not depend on $\theta$ and $w$. ## [Re-2: Derivations with Weight NN] @Reviewer **JeEz**, **oLJs**, and **5MKv** Here are the full details of possible changes when the weight network is adopted. First, the weights $w$ become a deterministic function of $D_t$ and $\phi$ (here $\phi=$ weightnet params) as shown in Fig. 1(a) in the attached PDF. But since $w$ is a deterministic function of $\phi$ and $D_t$, we can simplify it by having $w$ absorbed into $\phi$ while introducing conditional dependence (an arrow) from $D_t$ to $\phi$, as depicted in Fig. 1(b) in the PDF. Note that $D_t$ is always given, we treat $\phi$ as a random variable, and wherever $w_i$ appears, we replace $w_i$ by $weightnet(z_i^t; \phi)$.
More specifically, we make the following changes to the equations in the weight net scenario: Eq.(2): $p(\theta|\phi,D_t) \propto p(\theta) \cdot \prod_{i=1}^{N_t} p(\textrm{weightnet}(z_i^t; \phi), z_i^t | \theta)$ Eq.(8): $p(\phi|D_t) \propto e^{-(\sum_i weightnet(z_i^t;\phi)-\lfloor N_t\beta\rfloor)^2 / 2\sigma^2}$ Eq.(4) (detailed derivations): $$ p(\theta,\phi|D_t,D_m) = \frac{p(\theta,\phi,D_m|D_t)}{p(D_m|D_t)} $$ $$ \ \ \ \ = \frac{1}{p(D_m|D_t)} \cdot p(\phi|D_t) \cdot p(\theta,D_m|\phi,D_t) $$ $$ \ \ \ \ = \frac{1}{p(D_m|D_t)} \cdot p(\phi|D_t) \cdot p(D_m|\theta,\phi,D_t) \cdot p(\theta|\phi,D_t) $$ $$ \ \ \ \ = \frac{1}{p(D_m|D_t)} \cdot p(\phi|D_t) \cdot p(D_m|\theta) \cdot p(\theta|\phi,D_t) $$ $$ \ \ \ \ \propto p(\phi|D_t) \cdot p(D_m|\theta) \cdot p(\theta|\phi,D_t) $$ Eq.(5): $[\theta, \phi] \ \leftarrow \ [\theta, \phi] + \frac{\eta}{2} \nabla_{\theta,\phi} \log p(\theta,\phi|\mathcal{D}_t,\mathcal{D}_m) + \epsilon \sqrt{\eta}, \ \ \ \ \epsilon\sim\mathcal{N}(0,I)$ Eq.(6): $\theta \ \leftarrow \ \theta + \frac{\eta}{2} \nabla_{\theta} ( \log p(\theta) - N_t \cdot E_{i\sim B_t}[weightnet(z_i^t;\phi) \cdot l(z_i^t;\theta)] - N_m\cdot E_{j\sim B_m}[l(z_j^m;\theta)] ) + \epsilon_\theta \sqrt{\eta}$ Eq.(7): $\phi \ \leftarrow \ \phi + \frac{\eta}{2} \nabla_{\phi} ( \log p(\phi|D_t) - N_t\cdot E_{i\sim B_t}[weightnet(z_i^t;\phi) \cdot l(z_i^t;\theta)] ) + \epsilon_\phi \sqrt{\eta}$ ## [Re-3: Hyperparameter Analysis] @Reviewer **JeEz** and **5MKv** We will incorporate a detailed discussion and ablation study on this topic in the revised paper. Here is a summary: **Two main sets of hyperparameters for BADS**: * For parameter updates: $\eta$, $\epsilon_\theta$, $\epsilon_w$, $\rho_{\theta }^{t}$, $\rho_{\theta }^{m}$, and $\rho_{w}^{t}$ (*Eq 6, 7 and Appendix B*). * For the prior distribution of the example weights: $\sigma$, $\beta$, $s_{avg}$ (*Eq 8,9*).
**Hyperparameter tuning principles**: * We set $\eta$ to 1 and kept the Gaussian noise small with $\epsilon_\theta$ and $\epsilon_w$ equal to 1e-5. * $\rho_{w}^{t}$ is set to 1. * We set $\rho_{\theta }^{m}$ to 1 and primarily adjust $\rho_{\theta }^{t}$. In most cases, $\rho_{\theta }^{t}$ is simply set to 1. However, if the training set contains noise (where the ground-truth labels might be incorrect), the loss from the training examples, particularly in the early stages of training, may be unreliable. In such cases, we decrease $\rho_{\theta }^{t}$. * $s_{avg}$ should not be too large, as it may incorporate outdated weights from earlier training steps. We set it to 10. * Generally, we select $\beta$ based on the proportion of training data we consider beneficial for the downstream tasks. For LLMs, we adopt the same ratio used in previous studies (*papers [52] and [46]*). * $\sigma$ controls how tightly the weights should match $\beta$. We set $\sigma$ based on our confidence in the selection ratio $\beta$: a smaller $\sigma$ indicates greater confidence in $\beta$. The Data Denoising scenario is a bit special: Eq 7 shows that high losses from the training batch push the example weights $w$ toward 0. With a high noise rate (50% and 80%), the training batch losses remain high throughout the training process. To prevent the weights $w$ from collapsing to 0, we use a high selection ratio $\beta$ and a low $\sigma$. **Ablation study conclusions: Influence of the impact constants $\sigma$ and sparsity level $\beta$**: * A higher $\sigma$ causes the example weights $w$ to drift away from $\beta$, occasionally collapsing to 0, and may result in incorrect example weights (See **Figure 2 in PDF**). In all three proof-of-concept scenarios, the models achieve similarly good performance when $\sigma$ is reduced to around $10^{-5}$. * In both the WebNLG and MNIST scenarios, a high $\beta$ leads to incorrect example weights (See **Figure 3 in PDF**).
Conversely, for the reason mentioned above, in the data denoising (CIFAR) scenario, a lower $\beta$ leads to incorrect weights. * In both the WebNLG and MNIST scenarios, the backbone models' performance significantly declines when the example weights are incorrect (See **Figure 1 in PDF**). In the CIFAR scenario, the impact is less pronounced. Due to space constraints, we present a selective set of plots. The complete results for the three proof-of-concept scenarios will be provided in the revised paper. Pdf: /pdf/65d724ca68bba433274b325b9e94ea09b2bf47e5.pdf
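The interplay of $\beta$ and $\sigma$ described in this rebuttal can be seen directly from the (unnormalized) Gaussian sparsity prior of Eq. (8). The numpy sketch below is illustrative only (the function name and example weight values are ours): shrinking $\sigma$ sharply increases the log-prior penalty for the same deviation of the total weight mass from the target budget $\lfloor N_t\beta\rfloor$.

```python
import numpy as np

def log_sparsity_prior(w, beta, sigma):
    """Unnormalized log-prior in the spirit of Eq. (8): penalizes deviation
    of the total example-weight mass from the target selection budget."""
    target = np.floor(len(w) * beta)
    return -((w.sum() - target) ** 2) / (2.0 * sigma**2)

w = np.array([0.9, 0.8, 0.1, 0.2])  # illustrative example weights, mass = 2.0
# With beta = 0.25 the budget is floor(4 * 0.25) = 1, so the mass is off by 1;
# the penalty scales like 1 / sigma^2.
for sigma in (1.0, 0.1):
    print(sigma, log_sparsity_prior(w, beta=0.25, sigma=sigma))
```

A deviation of 1 from the budget costs $-0.5$ in log-prior at $\sigma=1$ but $-50$ at $\sigma=0.1$, which is why a small $\sigma$ pins the weights to the selection ratio and a large $\sigma$ lets them drift.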
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
A Flexible, Equivariant Framework for Subgraph GNNs via Graph Products and Graph Coarsening
Accept (poster)
Summary: This paper proposes CS-GNN, a subgraph GNN approach utilizing graph coarsening and graph Cartesian products. This novel and flexible subgraph GNN can effectively generate and process any desired bag size. The paper also discovers new permutation symmetries in the produced node feature tensor during generalized message passing. Theoretical and empirical analysis validate the efficacy of the proposed method. Strengths: 1. The topic of subgraph GNNs is interesting. 2. The paper is generally well-written with clarity. 3. The proposed method is novel, with theoretical underpinnings. 4. The empirical results seem promising. 5. The implementation code is provided for reproducing the results. Weaknesses: 1. The size of the coarsened graph, though sparse, is still very large with $2^n$ nodes, which might result in high computation complexity for large input graphs. 2. Eq. (3) indicates that the input graph is unweighted and edge weights might not be considered. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The size of the coarsened graph, though sparse, is still very large with $2^n$ nodes, which might result in high computation complexity for large input graphs. Do you have an estimate of the complexity and runtime of the proposed model compared with others? 2. Eq. (3) indicates that the input graph is unweighted and edge weights might not be considered. What if the input graph is weighted, for example, with edge weights 1,2,3, and 4. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the authors have adequately addressed the limitations and, if applicable, potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your positive feedback! We are pleased to hear that you find our method novel with solid theoretical foundations. We're also glad that you find the empirical results promising. Below, we address your specific comments and questions. **Q1:** *The size of the coarsened graph, though sparse, is still very large with $2^n$ nodes, which might result in high computation complexity for large input graphs. Do you have an estimate of the complexity and runtime of the proposed model compared with others?* **A1:** Let us clarify our approach. In our method, the size of the coarsened graph is always less than or equal to that of the original graph. **To further emphasize, the space complexity of storing this sparse graph is upper bounded by the space complexity of storing the original graph $G$ (we do not store $2^n$ nodes)**. As shown in Figure 1 (left), the coarsened graph $\mathcal{T}(G)$ has fewer nodes than the original graph $G$. Considering the feature dimension to be constant, and given that $V$ is the number of nodes in the original graph, $T$ is the bag size, and $\Delta_{\text{max}}$ is the maximal degree of the original graph, our time complexity is upper-bounded by $\mathcal{O}(T \cdot V \cdot (\Delta_{\text{max}} + T))$. When considering $T=\mathcal{O}(1)$, which is typically the case, the complexity simplifies to $\mathcal{O}(V \cdot \Delta_{\text{max}})$. The space complexity is $\mathcal{O}(T \cdot E + V \cdot T^2)$, where $E$ is the number of edges in the original graph, which reduces to $\mathcal{O}(V +E) $, when $T = \mathcal{O}(1)$. This complexity is comparable to that of other methods in the field [1, 2]. Additionally, we refer the reader to Table 10 in Appendix F.5 (page 28) for a detailed comparison of the runtime of our method with other baselines. Our method's runtime is comparable to theirs. **Q2:** Eq. (3) indicates that the input graph is unweighted and edge weights might not be considered. 
What if the input graph is weighted, for example, with edge weights 1, 2, 3, and 4. **A2:** Thank you for highlighting this point. Our method is capable of handling weighted graphs. We employ a GINE base encoder [3], which can process edge features when applicable. For further details regarding our implementation, please refer to Appendix F (page 24) -- the update mechanism of the GINE base encoder, which supports edge features, is detailed in Equation (92) on page 24. **References:** [1] Bevilacqua et al. Efficient subgraph GNNs by learning effective selection policies. ICLR 2024 [2] Kong et al. MAG-GNN: Reinforcement learning boosted graph neural network. NeurIPS 2023 [3] Hu et al. Strategies for pre-training graph neural networks. ICLR 2020 --- Rebuttal Comment 1.1: Title: Thank you Comment: Thank you for your rebuttal. I acknowledge reading it.
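The complexity bounds quoted in A1 can be made concrete with a small back-of-the-envelope helper (the function and example graph sizes are ours, for illustration): time $\mathcal{O}(T \cdot V \cdot (\Delta_{\max} + T))$ and space $\mathcal{O}(T \cdot E + V \cdot T^2)$, which collapse to $\mathcal{O}(V \cdot \Delta_{\max})$ and $\mathcal{O}(V + E)$ once the bag size $T$ is a constant.

```python
def cs_gnn_cost_bounds(V, E, delta_max, T):
    """Upper bounds (up to constants) stated in the rebuttal:
    time O(T * V * (delta_max + T)), space O(T * E + V * T^2)."""
    time_bound = T * V * (delta_max + T)
    space_bound = T * E + V * T * T
    return time_bound, space_bound

# A molecule-sized sparse graph: 30 nodes, 32 edges, max degree 4.
V, E, delta_max = 30, 32, 4
for T in (2, 4, 8):  # growing bag size trades compute for accuracy
    t, s = cs_gnn_cost_bounds(V, E, delta_max, T)
    print(f"T={T}: time~{t}, space~{s}")
```

The numbers grow roughly linearly in $T$ for sparse graphs, which matches the claim that the bag size is the knob controlling the accuracy/efficiency trade-off.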
Summary: The authors introduce a novel graph learning method that leverages message passing over product graphs. Specifically, message passing is performed over both the input graph and its product with a coarsened version of itself, which can be derived through techniques such as node clustering. Additionally, the authors propose a concept called symmetry-based updates, where message passing is enhanced by incorporating a graph representing a linear equivariant layer over the product graphs. A basis for such layers is also developed as part of their contribution. Experimental results demonstrate that this method performs comparably to state-of-the-art approaches. Strengths: **S1 Product construction** The main conceptual contribution of the paper is the use of product graph construction to allow more flexibility and control over certain types of subgraph GNNs. The product construction itself is quite natural, although it introduces some overhead depending on the chosen instantiation. **S2 Equivariant basis** The key theoretical contribution is the identification of the bases of equivariant layers for such product graphs. This allows for a principled introduction of message passing using symmetries in the product graph. **S3 Competitive architecture** Experiments validate the proposed methods, showing that they are competitive with state-of-the-art methods. Weaknesses: **W1 Unclear key definition** The key idea underlying the paper is commendable, although not extremely innovative. However, one crucial component is the graph coarsening introduced in Section 4.1. I found the definition on page 4 confusing despite multiple readings. What is defined there is an exponentially large fixed graph, with all subsets as nodes and adjacencies defined in equation (3). However, you mention before and after this that various coarsenings can be used, such as spectral clustering. How do these fit into the fixed definition given here? The presentation could be clearer. 
**W2 Weak theoretical analysis** The significance of Section 5, where the authors use their model for analyzing marking strategies, is not strongly demonstrated, at least based on the main paper. It primarily serves to justify the marking strategy in equation (10) but leaves a more detailed understanding of this strategy open. The section would benefit from a clearer presentation. **W3 Unclear presentation** Certain parts, as noted already in **W1** and **W2**, are not clearly described. Another example is Section 4.2.3, which is too succinct, and the visualization of $A_{equiv}$​ is rather cryptic. You mention a characterization and updates, but it is not clear what is meant here. A clear description of the instantiation of the framework used would be helpful. Similarly, the section starting from line 357 about “more than a coarsening function” is unclear, including the “informal” proposition statement. **W4 Unclear connection with OSANs** In the related work section, the authors contrast their proposal with OSANs [30], stating that OSANs use ordered tuples whereas the current paper uses sets of indices. However, unordered subgraphs play a key role in the OSAN paper as well since they showed that they can encompass most subgraph formalisms. A more detailed comparison would benefit the paper. **W5 Missing experimental details** - For the peptides experiments, you write “by using effective subsets of subgraphs.” What do you mean by this? What is this effective subset? How did you choose it? - Which version of OSAN do you compare with in the experiments (Table 4)? - Why do you use comparisons with different methods across the experiments (Table 2 vs Table 4)? **W6 Maximally expressiveness: clarification needed** At several points in the paper, you mention maximally expressive subgraph GNNs. It is unclear what is meant by this and this should be detailed, regardless of whether it was shown in [39]. 
As far as I know, general subgraph GNNs do not have a maximally expressive instantiation, so I assume certain restrictions on the class of subgraph GNNs are in place. It would be beneficial to make these explicit to better understand the limitations of the proposed methods. **W7 Minor comments** *All figures (especially the font size used in the text in figures) in the paper are too small. Please enlarge them.* - L.82: What is $\mathcal{X}$? - L218: The choice of $S_n$ as a symbol for the symmetric group is not optimal, as subsets are denoted by $S_1$, $S_2$, etc. - L225: You mention “extension to multiple features is straightforward” and refer to [22]. A bit more explanation would be helpful. Technical Quality: 3 Clarity: 1 Questions for Authors: I would appreciate it if the authors could respond to the weak points **W1**, **W4**, **W5**, and explain what they want to convey with the section on “more than a coarsening function” in Section 5. Confidence: 4 Soundness: 3 Presentation: 1 Contribution: 2 Limitations: This has been addressed in a satisfactory way by the authors. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback. We are pleased you appreciated the novelty of our idea and the identification of the basis of equivariant layers for our product graphs. Below, we address each of your points. **Q1:** *Key definition* **A1:** We emphasize that the coarsened graph is not exponentially large in practice (see also lines 288-292 in our paper). While nodes of the coarsened graph are in $P([n])$, only a small number are used. Using spectral clustering, we partition the graph $G$ and construct the coarsened graph $\mathcal{T}(G)$ with nodes representing these partitions and edges induced by the original graph's connectivity (see Figure 1, left). This results in a coarsened graph with fewer nodes and edges than $G$. Although we define the graph for mathematical rigor, in practice, we use a much sparser graph by making $A^{\mathcal{T}}$ and $X^\mathcal{T}$ extremely sparse. **The space complexity of this sparse graph is upper bounded by that of the original graph $G$ (we do not store $2^n$ nodes)**. We thank the reviewer for pointing out this confusion and will clarify this in the final version of the paper. **Q2:** *Theoretical analysis* **A2:** We have justified the marking strategy from a theoretical perspective, demonstrating its superiority over previously suggested strategies [1,2,3]. While we have only briefly summarized our results in the main body due to space constraints, App. C (p.16) provides a detailed expansion. We will make efforts to add more information to the main paper. **Q3:** *Presentation* **A3:** Thank you for highlighting these issues. Due to space limitations, Section 4.2.3 provides a brief summary of our use of the equivariant layers (see Section 4.2.2). For a detailed discussion, refer to App. F.4, p. 26). We will improve presentation and optimize space in the final version. 
'More than a coarsening function’: We wanted to convey the fact that our method does not simply reduce to the joint processing of the graph and its coarsened counterpart. In fact, as we show in App. D.2 in detail, our approach strictly improves over running msg-passing on the graph obtained by combining its original topology with information from the coarsening. We term this new graph the “sum graph” and illustrate it on p. 19. Our method gains in expressiveness over the “sum-graph”, demonstrating the importance of our additional technical contributions, i.e., the use of the graph product and the related equivariant operations. **Q4:** *Connection with OSAN* **A4:** We agree with the reviewer and will add a detailed comparison with OSAN in the next revision. In OSAN, the authors indeed show that ordered subgraphs are more expressive than unordered ones. In our case, although we choose coarsening functions which output unordered sets of nodes, we additionally leverage a set of additional equivariant operations which we formally derive as part of our contribution. Similarly to OSAN, it would be straightforward to show that our approach can indeed recover Subgraph GNNs operating on unordered subgraphs. However, the question whether OSAN fully captures our approach is of a less trivial answer, due to the aforementioned additional operations. Nevertheless, from an empirical perspective, our model demonstrates significantly better results. Additionally, we note that unordered tuples are practically preferred in OSAN due to computational complexity reasons. Throughout their experimental section, the authors focus on unordered subgraphs and learn how to sample subsets of them. Our method shares this trait of (practically) considering only unordered subgraphs. Contrary to OSAN, however, we choose coarsening and graph products as an alternative to sampling. We will refer to these observations in the next revision. 
**Q5:** *Experimental details* **A5:** We will answer each of the bullet points below: - "Using effective subsets of subgraphs": we refer to how our method naturally selects and uses relevant subgraphs for optimal performance. To clarify, the initial step of our method involves coarsening the original graph $G$. We then use the graph Cartesian product between the coarsened graph $\mathcal{T}(G)$ and $G$ to create the product graph, as illustrated in Figure 1 on p. 2. Even without explicitly extracting subgraphs, elements in $\mathcal{T}(G) \square G$ can be conceptually associated with them. The "rows" in the product structure (Figure 1, right) represent implicit subgraphs, a common association in previous works [2,3]. We will improve our wording in the final version of the paper. - To ensure a fair comparison, we compared with the best-performing version of OSAN, as shown in Table 1(a) of the OSAN paper [4], p. 9. - Different papers use various datasets. We used common datasets where possible for comparison. Using different baselines for different datasets aligns with field practices due to the high computational cost of running all baselines on all datasets, as shown, for example, in Tables 2,3 in [2]. **Q6:** *Maximally expressive* **A6:** We agree with the reviewer's observation, and indeed certain restrictions exist. To clarify, [1] studies the expressiveness of Subgraph GNNs using node-based policies. Thus, "maximal expressiveness" in our paper refers to Subgraph GNNs with node-based subgraph selection [1,3]. We will add an appendix section to clarify this point. **Q7:** *Minor comments* **A7:** - $\mathcal{X}$ is the node feature matrix of the graph $\mathcal{T}(G) \square G$. - Thanks for the notes on L218, L225, and the figures. We'll revise them in the final version. **Refs.:** [1] Zhang et al. A Complete Expressiveness Hierarchy for Subgraph GNNs via Subgraph Weisfeiler-Lehman Tests. ICML 2023 [2] Bevilacqua et al. Equivariant Subgraph Aggregation Networks.
ICLR 2022 [3] Frasca et al. Understanding and Extending Subgraph GNNs by Rethinking Their Symmetries. NeurIPS 2022 [4] Qian et al. Ordered subgraph aggregation networks. NeurIPS 2022 --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: Dear authors, thank you for your detailed rebuttal and thoughtful responses to my concerns. Incorporating those comments will definitely improve the paper.
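The graph Cartesian product at the heart of this construction (Figure 1 of the paper) can be sketched with adjacency matrices via the standard identity $A(G_1 \square G_2) = A_1 \otimes I + I \otimes A_2$. This is a generic illustration, not the paper's code; the two toy graphs below stand in for a 2-super-node coarsening and a 3-node original graph, and the "rows" of the product correspond to one copy of $G$ per coarsened node.

```python
import numpy as np

def cartesian_product_adj(A1, A2):
    """Adjacency of the graph Cartesian product G1 [] G2: nodes are pairs
    (u, v), and (u, v) ~ (u', v') iff u == u' and v ~ v', or v == v' and
    u ~ u'. Standard identity via Kronecker products: A1 (x) I + I (x) A2."""
    n1, n2 = A1.shape[0], A2.shape[0]
    return np.kron(A1, np.eye(n2, dtype=int)) + np.kron(np.eye(n1, dtype=int), A2)

# Toy stand-ins: coarsened graph = single edge on 2 super-nodes,
# original graph = triangle on 3 nodes.
A_coarse = np.array([[0, 1],
                     [1, 0]])
A_G = np.array([[0, 1, 1],
                [1, 0, 1],
                [1, 1, 0]])

A_prod = cartesian_product_adj(A_coarse, A_G)
print(A_prod.shape)       # (T*V, T*V) = (6, 6)
print(A_prod.sum() // 2)  # |E1|*V + T*|E2| = 1*3 + 2*3 = 9 edges
```

The product has $T \cdot V$ nodes and $|E_1|\,V + T\,|E_2|$ edges, so for constant bag size $T$ it stays linear in the size of the original graph, consistent with the complexity discussion above.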
Summary: The paper is very well written. Existing subgraph GNN methods are mentioned. Then a novel subgraph GNN framework is formulated. The authors discuss the equivariance properties of this new subgraph GNN formulation followed by an experimental evaluation of the proposed method. Strengths: * Strong mathematical formulation of the developed methods is provided. * The ideas are communicated well. One can see that the authors put a lot of time and effort into this work. * A comprehensive appendix is provided explaining the proposed approach in even more detail. Weaknesses: ### The use of few and small datasets If the goal of this work is to keep computational complexity low while improving the model quality, strong experimental results are expected. This calls into question the theory and methods developed in the paper. I suggest the authors find a scenario where their model can achieve SOTA results to justify their ideas. I would recommend including more datasets such as PCQM4Mv2 [1] in the experimental evaluation. ### Limited baselines A simple comparison against a recent linear runtime complexity graph transformer alternative [2] shows that the experimental results are weak. A select few datasets are chosen and the achieved results are well below the current SOTA on them. ### Weak motivation If subgraph GNNs address the issue of limited expressiveness of existing work as stated in the introduction of the paper, then it makes one question why the experimental results are not good. [1] https://ogb.stanford.edu/docs/lsc/pcqm4mv2/ [2] https://openreview.net/pdf?id=duLr8BIzro Technical Quality: 1 Clarity: 4 Questions for Authors: 1. Is there a case where the computational gains lead to SOTA results on a large-scale dataset? 2. Could the experimental results be improved if the proposed approach were combined with an approach similar to [2]?
Improving the expressibility of the proposed model even further by combining with other classes of GNNs to get SOTA results would significantly improve the chances of the paper. [2] https://openreview.net/pdf?id=duLr8BIzro Confidence: 3 Soundness: 1 Presentation: 4 Contribution: 2 Limitations: The authors admit their proposed method has a high computational complexity. Thus naturally one would expect even higher model quality. Flag For Ethics Review: ['No ethics review needed.'] Rating: 2 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed feedback. We appreciate your positive remarks on our method's novelty, strong mathematical formulation, and clear writing. Below, we address your specific questions. **Q1:** *The goal of our work and baselines* **A1:** Let us clarify. The goal of this work is to reduce the time and space complexity of Subgraph GNNs. As a result, some model degradation is expected. Subgraph GNNs are provably expressive models with theoretical guarantees [3, 4, 5], which may offer certain advantages over MPNNs and graph transformers. However, Subgraph GNNs typically have high computational complexity. Similarly to works like OSAN, PL, and Mag-GNN [6, 7, 8], our objective is to balance the high performance/accuracy of Subgraph GNNs with reduced computational expense. Unlike these other efficient Subgraph GNNs [6, 7, 8], our method offers much finer control over this trade-off between computational complexity and accuracy in graph learning tasks; this is clearly illustrated in Figure 2, where bag size (x-axis) correlates with computational complexity. Within this family of more efficient Subgraph GNNs, we achieve SOTA results on most of the datasets these baselines are typically benchmarked on (see Tables 1, 4). **Q2:** *large scale datasets* **A2:** We direct the reviewer's attention to Table 8 in the appendix (p. 27), which presents results on the larger-scale ZINC-full dataset (250,000 graphs). We achieve top results in the efficient Subgraph GNN regime ($T=4$), outperforming MAG-GNN, the only other efficient subgraph GNN that reported results on this large-scale dataset. Furthermore, when using our model at the farthest end of the trade-off (Full bag - $T="Full"$), we achieve results comparable to the top-performing traditional "non-efficient" Subgraph GNNs, and *outperform leading Graph Transformers*. For instance, GRIT [9] reported an MAE of 0.023 on ZINC-FULL, while we obtained an MAE of 0.021.
As the reviewer requested, we also conducted a preliminary experiment using the large-scale molpcba dataset (437,929 graphs), which is also considered in the paper mentioned by the reviewer [2]. In particular, we train one instance of our model with a 1M parameter budget and report its performance against the methods in [2]; see Table 1 in the attached PDF (in the global response). Notably, our method achieves comparable results to SOTA on this dataset while using the least number of parameters. Compared to GatedGCN-LSPE, which uses a similar number of parameters, our method demonstrates better performance. This improvement is also evident when compared to the GraphTrans baseline, which utilizes approximately four times more parameters than our approach. We believe that the results over ZINC-FULL, as well as the preliminary results over molpcba, demonstrate the potential of our method to handle large-scale datasets well. If the reviewer considers it important, we will extend our experiments on molpcba and include the results in the final version of the paper. Due to resource constraints, we couldn't conduct an experiment with the PCQM4Mv2 dataset (approx. 3,500,000 graphs) within the rebuttal period. **Q3:** *SOTA results on large scale datasets* **A3:** Recalling **A1**, **A2**, we achieve SOTA results on the large-scale ZINC-FULL dataset (Table 8, appendix, p. 27) in both efficient ($T=4$) and full-bag Subgraph GNN scenarios. Our approach outperforms top transformers like GRIT [9]. Preliminary experiments on the large-scale molpcba also indicate our model's competitive SOTA performance (Table 1 in the pdf). **Q4:** *Weak motivation* **A4:** We want to emphasize that our primary motivation, similar to previous works [6, 7, 8], is to reduce the computational cost of Subgraph GNNs.
As mentioned in **A1**, we believe that within the family of algorithms that tackle the same setup (OSAN, PL, and MAG-GNN) [6, 7, 8], our method provides significant improvements in most cases (Tables 1, 4, 8). **Q5:** *Could the experimental results be improved if the proposed approach was combined with an approach similar to [2]? Improving the expressibility of the proposed model even further by combining with other classes of GNNs to get SOTA results would significantly improve the chances of the paper.* **A5**: We appreciate the reviewer's suggestion and have incorporated a version of the LCB block from [2] into our framework, which we term `Ours + LCB`. We ran experiments on three datasets (ZINC12k, Molbace, and Molesol) for three different bag sizes ($T=2$, $T=5$, $T="Full"$). The results are in Table 2 of the attached PDF in the global response. We are pleased to note that, in the small-bag regime, the `Ours + LCB` method improves the performance of our model (which already outperformed most competitors on these datasets), sometimes significantly. We will do our best to extend and include these results in the final version of the paper, should the reviewer find them relevant. **Summary:** We believe we have thoroughly addressed your concerns about our method's capability with large-scale datasets and its integration with approaches from [2]. If you find our response satisfactory, we kindly request that you consider raising your score. **References:** [1] https://ogb.stanford.edu/docs/lsc/pcqm4mv2/ [2] https://openreview.net/pdf?id=duLr8BIzro [3] Zhang et al. A Complete Expressiveness Hierarchy for Subgraph GNNs via Subgraph Weisfeiler-Lehman Tests. ICML 2023 [4] Bevi. et al. Equivariant Subgraph Aggregation Networks. ICLR 2022 [5] Frasca. et al. Understanding and Extending Subgraph GNNs by Rethinking Their Symmetries. NeurIPS 2022 [6] Qian et al. Ordered subgraph aggregation networks. NeurIPS 2022 [7] Bevi. et al.
Efficient subgraph gnns by learning effective selection policies. ICLR 2023 [8] Kong et al. Mag-gnn: Reinforcement learning boosted graph neural network. NeurIPS 2024 [9] Ma et al. Graph Inductive Biases in Transformers without Message Passing. ICML 2023 --- Rebuttal 2: Title: Relating to our response Comment: Dear Reviewer hoHT, We appreciate your thoughtful review and hope that we have adequately addressed your concerns. We made an effort to address your feedback during the rebuttal period. Specifically, we shared results from new experiments: * We conducted new experiments using a large-scale dataset OGBG-MOLPCBA (437K graphs). Additionally, we referred you to another large-scale experiment we already included in the original submission using ZINC FULL (250K graphs). Both experiments show solid results for our method compared to SoTA. * We successfully incorporated components from [2] to enhance our architecture. These new results can be found in our original response and PDF. We would be grateful if you could review these additions and consider raising your score if they satisfactorily address your concerns. Best regards, The Authors --- Rebuttal Comment 2.1: Comment: I would like to thank the authors for their rebuttal. I am comparing Table 2, Peptides-func and peptides-struct results to [2]. The reported SOTA results in [2] for these datasets are 0.6975 and 0.2464, both of which are significantly better than what is being reported in Table 2 in this work. molhiv results are subpar as well (79.44 in this paper vs 80.84 reported for the CIN method and 79.80 for GECO, both in [2]). Essentially there is no dataset that justifies the added complexity of the proposed methods in this work. To me, it looks like the authors are working on speeding up an approach that does not seem to bring any clear benefits. > The goal of this work is to reduce the time and space complexity of Subgraph GNNs. As a result, some model degradation is expected.
Subgraph GNNs are provably expressive models with theoretical guarantees [3, 4, 5], which may offer certain advantages over MPNNs and graph transformers. If subgraph GNNs have high computational complexity but don't work well, then this undermines the impact of the work in this paper. Since my score is borderline, I would instead like to change my score to a recommendation of rejection. If the authors can motivate subgraph GNNs better by showing that they work well over a class of datasets in a future revision of the paper, that would significantly boost the chances of acceptance of their work. This is because datasets with a large number of graphs already consist of small graphs, and non-subgraph classes of methods already work well on them and are computationally efficient. [2] https://openreview.net/pdf?id=duLr8BIzro --- Reply to Comment 2.1.1: Title: Concerns Regarding Unjustified Score Reduction Comment: Dear Reviewer, We appreciate your time and effort in evaluating our work. However, we respectfully disagree with the recent score reduction and would like to raise some concerns: 1. Changing the score so abruptly just before the discussion deadline hinders our ability to respond and compromises the fairness of the rebuttal and discussion process. 2. Our rebuttal comprehensively addressed all your initial concerns, providing clarifications and additional positive experimental results as per your request. Your response does not acknowledge our inputs or new results. Instead, it lowers the score from 4 to 2 based on the same initial comments that originally received a score of 4. This discrepancy is difficult to reconcile. 3. Your current score of 2 contrasts sharply with both your initial assessment and the positive evaluations from other reviewers (6,7). 4. We are concerned that our prompt for a response may have indirectly led to this unjustified score reduction. 5.
We would like to emphasize a key contribution that may have been overlooked: the introduction of new symmetries and the characterization of accompanying equivariant layers. For a conference of this tier, we expect reviews to be consistent, thorough, and responsive to rebuttals. The dramatic score change without new concerns or acknowledgment of our responses does not align with these expectations. We have also contacted the AC and SAC regarding this matter. We respectfully request that you reassess your recent score reduction.
null
null
Rebuttal 1: Rebuttal: We are grateful to all reviewers for their feedback and constructive comments and are happy to see that our work was positively received in general. In particular, the reviewers all recognized the novelty and importance of our proposed method: - “The topic of subgraph GNNs is interesting” (**DujM**) - “a novel subgraph GNN framework” (**hoHT**) - “a novel graph learning method” (**h2GK**) - “The proposed method is novel” (**DujM**) An essential part of our work involves recognizing that our product graph introduces new permutation symmetries. We demonstrate how to construct networks that respect these symmetries, supported by both theoretical analysis and empirical evidence. This approach has also been positively received: - “The paper also discovers new permutation symmetries” (**DujM**) - “Strong mathematical formulation” (**hoHT**) - “The proposed method is novel, with theoretical underpinnings” (**DujM**) - “Theoretical and empirical analysis validate the efficacy of the proposed method”. (**DujM**) - “The key theoretical contribution is the identification of the bases of equivariant layers for such product graphs.” (**h2GK**) - “Experiments validate the proposed methods, showing that they are competitive with state-of-the-art methods” (**h2GK**) - “The empirical results seem promising.” (**DujM**) Finally, we were also very happy to notice that the reviewers have found our paper to be “very well written” (**hoHT**, **DujM**). At the same time, reviewers shared comments and questions, which we address in the specific rebuttals below. Additionally, we conducted new experiments as requested by **hoHT**. The results, included in the attached PDF file, demonstrate that our method effectively handles large-scale datasets and benefits from combining our approach with the suggested method. Pdf: /pdf/363fdb66b0a145e441155200192339289d27b84b.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
IPM-LSTM: A Learning-Based Interior Point Method for Solving Nonlinear Programs
Accept (poster)
Summary: This paper introduces IPM-LSTM, an approach integrating Long Short-Term Memory (LSTM) neural networks with Interior Point Methods (IPMs) to solve Nonlinear Programs (NLPs). The key innovation lies in approximating solutions to linear systems within IPMs using LSTMs, aiming to accelerate the convergence of classic IPM solvers. This approach leverages the Learning to Optimize (L2O) paradigm, presenting a two-stage framework where LSTM-generated solutions are used to warm-start an IPM solver. The authors compare the IPM-LSTM against traditional solvers and recent L2O methods across various NLPs, including Quadratic Programs and Quadratically Constrained Quadratic Programs. The proposed method reportedly reduces iterations by up to 60% and solution time by up to 70%. Strengths: 1. The integration of LSTM networks to approximate the solutions of linear systems in IPMs is novel. 2. The paper provides theoretical insights into the convergence properties of the proposed method under specific assumptions, adding to its credibility and understanding. 3. The paper provides a comprehensive empirical evaluation across several types of NLPs. The results demonstrate improvements over traditional methods in terms of iteration count and computational time, which supports the effectiveness of the proposed method. Weaknesses: 1. The IPM-LSTM procedure, which uses an LSTM to approximately solve the linear system at each IPM iteration, repeats this K times, and feeds the result into an IPM solver as a warm-start point, needs more justification. 1. The decision to use the L2O approach for solving a least squares problem (problem (4)) is not adequately justified. According to Assumption 1, the approximated solution needs to be bounded and accurate enough to guarantee the convergence of the outer loop. From the theoretical side, it is unclear how hard it is to satisfy those conditions by L2O approaches. From the empirical side, as shown in Figure 3(a), the accuracy condition is not always satisfied.
If Assumption 1 cannot be guaranteed, then by Prop. 1 the convergence of the approximated IPM cannot be guaranteed. 2. The approach of using an approximated IPM solution instead of directly generating a warm-start point using either NN or L2O raises questions about efficiency and effectiveness. Previous works [1-4] have shown that direct prediction of warm-start points can be more straightforward and computationally efficient. Besides, the convergence of the approximated IPM cannot be guaranteed, which also raises concerns about the quality of such a warm-start point. The authors need to provide more discussion or experimental comparisons to justify their more complex, iterative approximation method. [1] R. Sambharya, G. Hall, B. Amos, and B. Stellato, "End-to-End Learning to Warm-Start for Real-Time Quadratic Optimization", arXiv preprint arXiv:2212.08260, 2022. [2] R. Sambharya, G. Hall, B. Amos, and B. Stellato, "Learning to Warm-Start Fixed-Point Optimization Algorithms", Journal of Machine Learning Research, 25(166): 1-46, 2024. [3] F. Diehl, "Warm-Starting AC Optimal Power Flow with Graph Neural Networks", in Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS) Workshop, Vancouver, BC, Canada, Dec. 8 - 14, 2019. [4] K. Baker, "Learning Warm-Start Points for AC Optimal Power Flow", in Proceedings of IEEE 29th Machine Learning for Signal Processing Conference, Pittsburgh, PA, USA, Oct. 13 - 16, 2019. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. In the experiments, why not directly use the simple NN prediction as a warm-start point for the IPM solver? 2. In the simulation part, as the solutions obtained by the proposed IPM-LSTM approach are used as a warm-start point to IPM, why are the equality constraints not satisfied? The authors should explain the experimental setup clearly. 3.
In Table 1, the proposed IPM-LSTM approach has a larger constraint violation with a longer solution time as compared to the OSQP algorithm. Does this mean the proposed approach cannot exceed state-of-the-art methods for solving convex QP problems? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors have discussed the limitation in Sec. 5. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for reading our manuscript and providing constructive comments. Please refer to the Author Rebuttal for a clarification of the two-stage framework proposed in our work. ### Weaknesses >*1. The decision to use the L2O approach for solving a least squares.......* Thanks for raising this point. - **Theoretical**: the least-squares problem (4) in our manuscript is an **unconstrained convex optimization** problem. Several works have proved that **L2O approaches could solve such problems to guaranteed tolerances**. For example, the work of [5] proved that solutions provided by properly parameterized LSTM networks converge to one of the minimizers of composite optimization problems. Another interesting case is the work of [6], in which the authors proposed an unrolling of the ISTA algorithm and proved that solutions returned by the ISTA-unrolled method converge to the optimal ones. - **Empirical**: The accuracy condition is not strictly enforced in our implementation since we aim to **balance the accuracy and efficiency** of solving linear systems via LSTM. As shown in Section 4, the IPM-LSTM algorithm is able to provide high-quality approximate solutions with mild infeasibility and sub-optimality (**Stage I**). **These solutions are suitable for many practical applications**. If solutions with increased feasibility and optimality are needed, one can quickly **polish and refine** the approximate solutions by feeding them to an IPM solver (**Stage II**). >*2. The approach of using an approximated IPM solution instead of.....[1][2][3][4]...* Thanks for raising this point. We further clarify the motivation of our algorithmic design as follows. - In this work, we focus on developing learning-based methods to **warm-start an IPM solver for optimizing general nonlinear and convex/non-convex programs**.
As Reviewer 9Qph commented, "*The topic is important as IPM plays a crucial role in solving linear and nonlinear programs, which have extensive applications in scientific computing.*" The works of **[1][2] are focused on warm-starting fixed-point algorithms for efficiently solving convex QPs and conic programs**. - As we mentioned in Section 2, warm-starting an IPM solver is notoriously hard since it entails providing **well-centered primal-dual solutions** [7][8]. Recently, learning-based approaches have been proposed to address this. However, most works such as **[3][4] did not even include dual solutions** for warm-starting. **These methods might work well for specific applications** but could not boost the performance of an IPM solver when optimizing general nonlinear programs. Some works such as [9] indeed provide primal-dual solutions, but those pairs are **not intended to be well-centered**. - Our proposed IPM-LSTM approach works similarly to a classic IPM except that linear systems are approximately solved by LSTM networks. Since every iterate (e.g., primal-dual solution) follows a central path (e.g., the same parameter $\mu$ is chosen for each dimension in the perturbed KKT system, as shown in Algorithm 1), the final solution pair returned by IPM-LSTM would then be well-centered and of high quality. **Such a primal-dual solution is thus well-suited for warm-starting an IPM solver**. ### Questions >*1. ...why not directly use the simple NN prediction as a warm-start point...* Thanks for raising this point. As pointed out by [10], a **simple NN prediction often leads to significant constraint violation**. To alleviate this issue, we integrated the objective function and penalty for constraint violation into the loss function and included such a method as a baseline (denoted as "NN") in our manuscript.
From Tables 1, 2, 5, 7, and 8, we can conclude that **warm-starting IPOPT with initial solutions predicted by NNs brings limited (sometimes even negative) performance improvement**. >*2. ...why are the equality constraints not satisfied...* We apologize for any confusion. In the computational results, **IPM-LSTM used in an end-to-end fashion produces approximate solutions** that might be neither feasible nor optimal, which can result in small violations of the equality constraints. However, when **these solutions are used to warm-start an IPM solver** such as IPOPT (denoted as "IPOPT (warm-start)" in each table), IPOPT returns locally optimal solutions that **do not exhibit any constraint violations**, as shown in Tables 1, 2, 5, 7, and 8. >*3. ...Does this mean the proposed approach cannot exceed state-of-the-art...* Thank you for raising this question. Indeed, the proposed IPM-LSTM approach is outperformed by OSQP when solving convex QPs. - **The IPM-LSTM method is designed to provide high-quality approximate solutions for general nonlinear and convex/non-convex programs**. It follows a classic IPM approach, with the distinction that linear systems are approximately solved using an LSTM network. - **OSQP** employs an operator-splitting algorithm **specifically tailored for convex QPs**. This solver has demonstrated superior performance even when compared to state-of-the-art general-purpose solvers like IPOPT and Gurobi in optimizing convex QPs. Therefore, **IPM-LSTM is capable of addressing a wide range of optimization problems, but it is not as competitive as OSQP when it comes to solving convex QPs**. [5] Liu, Jialin, et al. (2023) "Towards constituting mathematical structures for learning to optimize." ICML. [6] Aberdam, A., et al. (2021). Ada-lista: Learned solvers adaptive to varying models. IEEE TPAMI. [7] Nocedal, J., & Wright, S. J. (Eds.). (1999). Numerical optimization. [8] Wright, S. J. (1997). Primal-dual interior-point methods.
[9] Park, S., & Van Hentenryck, P. (2023). Self-supervised primal-dual learning for constrained optimization. AAAI. [10] Donti, P. L., et al. (2021). DC3: A learning method for optimization with hard constraints. ICLR. --- Rebuttal Comment 1.1: Title: Thanks for your rebuttal Comment: I have reviewed the rebuttal and acknowledge that accelerating linear system solving by NN within IPM iterations is a reasonable approach. While the use of NNs for solving linear systems is not new [1], integrating them within IPM solvers introduces specific challenges related to convergence and accuracy. I still have some concerns about the LSTM approach to solving linear systems with high accuracy. - The provided theoretical guarantees for the Learning to Optimize (L2O) approach are limited to convex problems. - Many practical applications, especially those involving KKT systems for general non-convex problems, deal with ill-conditioned linear systems. Can the LSTM approach effectively handle such ill-conditioned systems? - It is also unclear whether the used simple non-convex problem is ill-conditioned or not. --- * [1] Kaneda, A., Akar, O., Chen, J., Kala, V. A. T., Hyde, D., & Teran, J. (2023, July). A deep conjugate direction method for iteratively solving linear systems. In International Conference on Machine Learning (pp. 15720-15736). PMLR. --- Reply to Comment 1.1.1: Comment: >I have reviewed the rebuttal and acknowledge that accelerating linear system solving by NN within IPM iterations is a reasonable approach. Thanks for your efforts and acknowledgement. >While the use of NNs for solving linear systems is not new [1], integrating them within IPM solvers introduces specific challenges related to convergence and accuracy. Thanks for sharing with us this work [1]. The authors of [1] focused on using NNs to solve **positive-definite linear systems of equations**, while **linear systems in IPMs are indefinite**. We incorporate this work in our revised manuscript.
>I still have some concerns about the LSTM approach to solving linear systems with high accuracy. >1. The provided theoretical guarantees for the Learning to Optimize (L2O) approach are limited to convex problems. Thanks for raising this point. In this work, we focus on developing learning-based IPMs to address general nonlinear programs, which might be non-convex. - The **IPM itself (i.e., the outer loop of IPM-LSTM) would guarantee that the returned solutions are locally optimal** as long as linear systems are solved to specified tolerances, as shown by Proposition 1 in our manuscript. - To solve linear systems via LSTM, we convert them to least squares problems of the form $\text{min}_y \frac{1}{2}|| J^ky+F^k ||^2$, which are **unconstrained convex programs**. As implied in Theorem 1 of [2], **solutions provided by** properly parameterized **LSTM networks would converge to one of the minimizers of convex optimization problems**. Hence, from a theoretical point of view, **the proposed IPM-LSTM approach can solve both convex and non-convex programs to optimality**. We incorporate the above points in the revised manuscript. >2. Many practical applications, especially those involving KKT systems for general non-convex problems, deal with ill-conditioned linear systems. Can the LSTM approach effectively handle such ill-conditioned systems? We thank the reviewer for raising such an interesting point. Let $\kappa(\cdot)$ denote the condition number of a matrix. - **The LSTM approach for solving linear systems is negatively affected by their large condition numbers.** To demonstrate this, we consider the least squares problem \begin{align} \underset{y \in \mathbb{R}^m}{\text{min}} \; f(y) := \frac{1}{2}|| J^ky+F^k ||^2.
\end{align} We utilize a first-order method (say the steepest descent method) to minimize $f(y)$ and achieve a linear convergence rate [3], i.e., \begin{align} f\left(y_{k+1}\right) - f\left(y^{\star}\right) \leq \left(1-\frac{2}{(\kappa(J^k))^2+1}\right)^{2} \left( f(y_{k})-f(y^{\star})\right). \end{align} As we discussed in the **Preconditioning** part of our manuscript, since solving linear systems via LSTM networks emulates iterative first-order methods, the value of $\kappa(J^k)$ affects the performance of LSTM networks. - As shown in the computational studies (Section 3.1) of [4], **LSTM networks can empirically achieve a faster convergence rate than traditional first-order algorithms** when solving the same least squares problems. - To alleviate the effect of large condition numbers, as discussed in the **Preconditioning** part of our manuscript, **we have employed preconditioning techniques**. >3. It is also unclear whether the used simple non-convex problem is ill-conditioned or not. Thanks for your suggestion. For the simple non-convex programs used in our experiment, we report the condition numbers $\kappa(J^k)$ after preconditioning, with the values before preconditioning in parentheses, across several IPM iterations (say 1, 10, 20, 50, 100) in the following table. - The condition numbers **$\kappa(J^k)$ remain within reasonable magnitudes**, even during the later IPM iterations. - Applying the **preconditioning** technique indeed significantly **reduces the condition numbers** for those non-convex problems. | Instance | $1^{\text{st}}$ Iter. | $10^{\text{th}}$ | $20^{\text{th}}$ | $50^{\text{th}}$ | $100^{\text{th}}$ Iter. 
| | ------------------------- | ------- | --------- | --------- | --------- | ---------- | |**Non-convex Programs (RHS) (100, 50, 50)**|53.8(59.8)|126.2(580.7)|153.1(711.2)|208.7(1004.8)|348.4(1860.1)| |**Non-convex Programs (ALL) (100, 50, 50)**|55.4(59.8)|113.6(517.0)|139.8(658.5)|214.0(1190.9)|329.2(1859.9)| |**Non-convex Programs (RHS) (200, 100, 100)**|91.5(99.8)|157.1(1114.0)|205.8(1441.1)|326.2(2398.3)|488.3(3667.8)| |**Non-convex Programs (ALL) (200, 100, 100)**|72.1(75.7)|175.4(1143.4)|184.5(1352.7)|249.5(2016.6)|368.4(3015.3) [2] Liu, Jialin, et al. (2023) "Towards constituting mathematical structures for learning to optimize." ICML. [3] Nocedal, J., & Wright, S. J. (Eds.). (1999). Numerical optimization. New York, NY: Springer New York. [4] Andrychowicz, M., Denil, M., Gomez, S., Hoffman, M. W., Pfau, D., Schaul, T., ... & De Freitas, N. (2016). Learning to learn by gradient descent by gradient descent. Advances in neural information processing systems, 29.
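The conditioning argument in the reply above can be illustrated with a small numerical sketch (toy numbers, not the paper's experiments): steepest descent on the least-squares subproblem $\text{min}_y \frac{1}{2}\|Jy+F\|^2$ stalls when $\kappa(J)$ is large, and a Jacobi-style column scaling, one common preconditioning choice, removes the stall in this contrived diagonal case.

```python
import numpy as np

# Toy illustration (invented numbers, not the paper's experiments): steepest
# descent on 1/2*||J y + F||^2 slows down as kappa(J) grows, and rescaling the
# columns of J to unit norm (a Jacobi-style preconditioner) restores fast
# convergence. J here is a contrived diagonal matrix with kappa(J) = 1000.

def gd_residual(A, F, steps=1000):
    """Steepest descent on 1/2*||A w + F||^2 with step size 1/lambda_max(A^T A)."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    w = np.zeros(A.shape[1])
    for _ in range(steps):
        w -= (A.T @ (A @ w + F)) / L
    return np.linalg.norm(A @ w + F)        # final residual ||A w + F||

J = np.diag([1.0, 10.0, 100.0, 1000.0])
F = np.ones(4)
D = np.diag(1.0 / np.linalg.norm(J, axis=0))  # scale each column to unit norm

res_plain = gd_residual(J, F)       # poorly scaled directions barely move
res_prec = gd_residual(J @ D, F)    # kappa(J D) = 1: converges immediately
```

The gap matches the rate bound in the reply: the residual component along singular value $\sigma_i$ decays by a factor $(1 - \sigma_i^2/\sigma_{\max}^2)$ per step, so with $\kappa(J)=1000$ the small-singular-value directions make essentially no progress in 1000 steps, while the preconditioned system converges at once.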
Summary: This paper introduces a method called IPM-LSTM, which integrates machine learning techniques into interior point methods (IPM). Specifically, the authors propose training an RNN model (LSTM) to quickly approximate the solution of linear systems within IPM. This approach is numerically validated on several convex and nonconvex QPs. Strengths: - The topic is important as IPM plays a crucial role in solving linear and nonlinear programs, which have extensive applications in scientific computing. - The idea of accelerating a subroutine of IPM, rather than applying deep neural nets end-to-end to solve optimization problems, is insightful. The convergence of IPM is typically fast (superlinear convergence), leaving little room for improvement. However, solving the linear system in IPM is usually a computational bottleneck, making it worthwhile to accelerate with machine learning. - The experimental results are promising. IPM-LSTM clearly outperforms other learning-based baseline methods in terms of the objective function. Weaknesses: My main concern about this paper is Assumption 1, which requires the accuracy of the linear system solution to increase as the number of iterations $k$ increases. Based on this assumption, exact convergence is derived, as shown in Proposition 1. However, the empirical results in Section 4 indicate that IPM-LSTM does not achieve exact optimality, revealing a gap between theory and practice. To address this gap, I suggest: - Reporting the error of LSTM at each iteration. This would provide readers with an understanding of how accurately the LSTM performs. Additionally, it would be beneficial to report the relationship between the error of the linear system solution and the size, training, and testing overhead of the LSTM. - Using a log scale y-axis for Figure 3a for better precision. For example, 0.01 is 10 times greater than 0.001, but this difference is not reflected in the linear scale of Figure 3a.
- Modifying Assumption 1 to better align with practice. For example, assuming a fixed error on the right-hand sides of equations (5) and (6). Based on this relaxed assumption, a result similar to Proposition 1 could be derived, but with a fixed error on the limit of $(x^k,\lambda^k,z^k)$. The relationship between the allowed error in Assumption 1 and the propagated eventual error in Proposition 1 would sufficiently describe the performance of IPM-LSTM. Technical Quality: 2 Clarity: 4 Questions for Authors: Refer to "Weaknesses". Confidence: 5 Soundness: 2 Presentation: 4 Contribution: 3 Limitations: This theoretical paper appears to have no potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
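The reviewer's third suggestion, that a fixed error in the linear-system solves propagates to a fixed error in the limit, can be sketched with a toy inexact Newton iteration (an illustrative scalar example with an invented function and invented tolerances, not the paper's IPM):

```python
import math

# Inexact Newton for F(x) = x^2 - 2 (root sqrt(2)), where each step carries a
# solver error of size tol_k. A vanishing tolerance, in the spirit of
# Assumption 1, still reaches the root; a fixed tolerance leaves a fixed error
# floor in the limit, matching the relaxed result the reviewer proposes.

def inexact_newton(tol, x=2.0, iters=40):
    for k in range(iters):
        step = (x * x - 2.0) / (2.0 * x)  # exact Newton step for x^2 - 2 = 0
        x = x - step + tol(k)             # perturb the step by the solver error
    return x

x_vanishing = inexact_newton(lambda k: 0.1 * 0.5 ** k)  # tol_k -> 0 with k
x_fixed = inexact_newton(lambda k: 1e-2)                # fixed tolerance

err_vanishing = abs(x_vanishing - math.sqrt(2.0))  # keeps shrinking with tol_k
err_fixed = abs(x_fixed - math.sqrt(2.0))          # plateaus near the 1e-2 level
```

With the fixed tolerance, the iteration settles at the point where the Newton step exactly cancels the injected error, so the final error is on the order of the tolerance itself; with the vanishing schedule, the error tracks tol_k downward.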
Rebuttal 1: Rebuttal: We thank the reviewer for reading our manuscript and providing constructive comments. Please refer to the Author Rebuttal for a clarification of the two-stage framework proposed in our work. > *My main concern about this paper is Assumption 1, which requires the accuracy of the linear system solution to increase as the number of iterations $k$ increases. Based on this assumption, exact convergence is derived, as shown in Proposition 1. However, the empirical results in Section 4 indicate that IPM-LSTM does not achieve exact optimality, revealing a gap between theory and practice.* Thank you for pointing this out. The IPM-LSTM utilizes an LSTM network to return approximate solutions to linear systems. Proposition 1 implies that exact optimality could be achieved by IPM-LSTM in theory. However, in practice, solving linear systems to the tolerance specified in Assumption 1 would be too time-consuming. As a result, we choose to **strike a balance between the accuracy and efficiency of solving linear systems via LSTM**. Consequently, the satisfaction of Assumption 1 is not fully ensured in our implementation, leading to the mild suboptimality and infeasibility observed in Section 4. >*To address this gap, I suggest:* >*1. Reporting the error of LSTM at each iteration. This would provide readers with an understanding of how accurately the LSTM performs. Additionally, it would be beneficial to report the relationship between the error of the linear system solution and the size, training, and testing overhead of the LSTM*. Thanks for your suggestion. The error of solving linear systems is indeed vital. We have plotted the progress of $\\|J^ky^k+F^k\\|$ as the IPM iteration increases in Fig. 3(a) of our manuscript. Now we report the detailed values in the following table. From this table, **$\\|J^ky^k+F^k\\|$ is roughly in the same order of magnitude as $\eta[(z^k)^{\top}x^k]/n$** at each IPM iteration. | IPM Iter. 
| $\|\|J^ky^k+F^k\|\|$ | $\eta[(z^k)^{\top}x^k]/n$ | |:------ | ----- | -----| | 1 | 2.396 | 0.900 | | 10 | 0.154 | 0.255 | | 20 | 0.104 | 0.124 | | 30 | 0.073 | 0.073 | | 40 | 0.052 | 0.047 | | 50 | 0.040 | 0.031 | | 60 | 0.032 | 0.021 | | 70 | 0.027 | 0.013 | | 80 | 0.024 | 0.008 | | 90 | 0.022 | 0.006 | | 100 | 0.020 | 0.005 | To reveal the relationship between the error of the linear system solution $\\|J^ky^k+F^k\\|$ and the LSTM time steps, hidden dimensions, training sizes and test sizes, we conduct experiments on representative convex QP (RHS) problems with 100 variables, 50 inequality constraints, and 50 equality constraints, and the results are included in Fig. 2 of the supplementary material. - In Fig. 2(a), with the LSTM time step increasing, $\\|J^ky^k+F^k\\|$ decreases. - In Fig. 2(b), we consider LSTMs with 25, 50, 75, and 100 hidden dimensions and find that an LSTM with a hidden dimension of 50, as used in our manuscript, generally performs the best (e.g., resulting in the smallest $\\|J^ky^k + F^k\\|$). - In Fig. 2\(c\), a larger training set is more beneficial for model training. The training set size used in our manuscript is 8,334, and the error in solving the linear system $\\|J^ky^k + F^k\\|$ is smaller compared with the case of 4,000 or 6,000 training samples. - As shown by Fig. 2(d), the number of samples in the test set does not affect the performance of LSTM for solving linear systems. >*2. Using a log scale y-axis for Figure 3a for better precision. For example, 0.01 is 10 times greater than 0.001, but this difference is not reflected in the linear scale of Figure 3a.* Thanks for your suggestion. We now take the log of the y-axis in Fig. 3(a) of our manuscript and plot it in Fig. 3(a) of the supplementary material. - Roughly speaking, $\\|J^k y^k + F^k\\|$ is smaller than $\eta [(z^k)^\top x^k] / n$ in the first $40$ IPM iterations, while $\\|J^k y^k + F^k\\|$ surpasses $\eta [(z^k)^\top x^k]/n$ in the later IPM iterations. 
- Following the comments from Reviewer Qdyo, we increase the LSTM time steps and report the computational results in Fig. 3(b) of the supplementary material. From Fig. 3(a) and 3(b), we can claim that **as the LSTM time steps increase, $\\|J^k y^k + F^k\\|$ becomes smaller and closer to $\eta [(z^k)^\top x^k]/n$**.

>*3. Modifying Assumption 1 to better align with practice. For example, assuming a fixed error on the right-hand sides of equations (5) and (6). Based on this relaxed assumption, a result similar to Proposition 1 could be derived, but with a fixed error on the limit of $(x^k, \lambda^k, z^k)$. The relationship between the allowed error in Assumption 1 and the propagated eventual error in Proposition 1 would sufficiently describe the performance of IPM-LSTM.*

Thank you for your suggestion. **Equation (5) in our manuscript is designed to be consistent with the assumptions made in classic inexact IPMs or inexact Newton methods**, as seen in Equation (4) in [1], Equation (2.1) in [2], Equation (6) in [3], and Equation (12) in [4]. To the best of our knowledge, the theoretical bounds referenced in these works depend on the iteration count $k$ and do not involve fixed error terms. We will explore the use of fixed error bounds in future studies.

[1] Bellavia, Stefania. Inexact interior-point method. Journal of Optimization Theory and Applications 96 (1998): 109-121.

[2] Eisenstat, Stanley C., and Homer F. Walker. Globally convergent inexact Newton methods. SIAM Journal on Optimization 4.2 (1994): 393-422.

[3] Al-Jeiroudi, Ghussoun, and Jacek Gondzio. Convergence analysis of the inexact infeasible interior-point method for linear optimization. Journal of Optimization Theory and Applications 141 (2009): 231-247.

[4] Gondzio, Jacek. Convergence analysis of an inexact feasible interior point method for convex quadratic programming. SIAM Journal on Optimization 23.3 (2013): 1510-1527.
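The Assumption-1-style inexactness condition discussed in this thread can be sketched numerically. The toy numpy example below (assumed values throughout, not the IPM-LSTM code) forms a Newton system $Jy = -F$, perturbs the exact step to mimic a learned approximate solve, and checks whether the residual $\|Jy+F\|$ falls below a tolerance proportional to the duality measure $\eta (z^\top x)/n$:

```python
# Illustrative sketch of an inexact-Newton acceptance check; all matrices,
# vectors, and constants are assumed toy values, not the paper's data.
import numpy as np

rng = np.random.default_rng(0)
n = 20

# A random, well-conditioned Newton system J y = -F.
J = np.eye(n) + 0.1 * rng.standard_normal((n, n))
F = rng.standard_normal(n)

# Exact step, then a perturbed step standing in for the LSTM's approximate solve.
y_exact = np.linalg.solve(J, -F)
y_approx = y_exact + 1e-3 * rng.standard_normal(n)

# Residual of the approximate step.
residual = np.linalg.norm(J @ y_approx + F)

# Tolerance proportional to the duality measure eta * (z^T x) / n,
# with illustrative positive iterates x, z and centering parameter eta.
x = np.abs(rng.standard_normal(n)) + 0.1
z = np.abs(rng.standard_normal(n)) + 0.1
eta = 0.5
tolerance = eta * (z @ x) / n

print(residual, tolerance, residual <= tolerance)
```

Under this toy setup the perturbed step easily satisfies the tolerance; in practice the rebuttal's table shows the LSTM residual and the duality measure staying within roughly the same order of magnitude.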
--- Rebuttal Comment 1.1: Title: Response to authors Comment: I greatly appreciate the detailed response and additional experiments. My concerns are fully addressed. The new experimental results regarding the accuracy of solving linear systems look pretty promising. I would like to upgrade my score. --- Reply to Comment 1.1.1: Comment: We really appreciate your efforts for improving our work and are happy to know that all of your concerns have been addressed.
Summary: This paper proposed replacing the linear system solver used in the inner loop of the interior point method (IPM) with an LSTM for solving general non-linear programs. The LSTM is trained in an unsupervised manner to minimize an unconstrained least-squares objective derived from the KKT conditions. The proposed framework, IPM-LSTM, can be used in an end-to-end way, or to warm-start IPM so that the number of overall outer IPM iterations can be reduced and thus the solving time decreases. The authors theoretically proved that, as long as the trained LSTM achieves a certain level of accuracy, the IPM will converge. The authors conducted empirical experiments on a variety of non-linear programs to verify the effectiveness of IPM-LSTM and that it approximately satisfies the assumption in the theoretical analysis. Strengths: 1. The idea of plugging in L2O methods to approximate a single step in a bigger optimization framework is interesting and promising. 2. The proposed architecture is concise and effective to some extent. The authors also provided theoretical analysis to support their method. 3. To some extent, the presented empirical results are promising. Weaknesses: There are three major issues that undermine the robustness of this work: 1. The linear system solver used for IPOPT is unclear, which significantly impacts the solver's overall performance. While MUMPS is the default, HSL typically performs better, achieving 2-3 times faster results. This performance surpasses that of IPOPT when warm-started with the IPM-LSTM solution. 2. There is no analysis of how the performance changes with varying numbers of iterations in the inner LSTM, which determines the performance-efficiency trade-off of IPM-LSTM. Longer inner iterations (deeper LSTM) may improve the quality of the solutions but increase computation costs. However, improved solution quality may also decrease the number of outer iterations needed and thus reduce the overall solving time. 
Moreover, increased LSTM depth can make the training process more difficult. Overall, this is a tricky part and should be empirically investigated more carefully. 3. Although LSTM benefits from batched processing, a potential strength of IPM-LSTM accelerated by GPU, this work fails to provide empirical evidence supporting this property. Technical Quality: 3 Clarity: 3 Questions for Authors: See the weaknesses section. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See the weaknesses section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for reading our manuscript and providing constructive comments. Please refer to the Author Rebuttal for a clarification of the two-stage framework proposed in our work.

>*1. The linear system solver used for IPOPT is unclear, which significantly impacts the solver's overall performance. While MUMPS is the default, HSL typically performs better, achieving 2-3 times faster results. This performance surpasses that of IPOPT when warm-started with the IPM-LSTM solution.*

Thanks for your suggestion. The default solver MUMPS was used in our experiments. To evaluate the effect of linear solvers on the performance of IPOPT, we consider 3 commonly used linear solvers (MA27, MA57, and MA86) from HSL and bundle IPOPT with each of them. We conduct experiments on 6 datasets used in our manuscript, each with 100 variables, 50 inequality constraints, and 50 equality constraints. We report the average computational time (in seconds) for each dataset in the table below. Results for IPOPT with different linear solvers are presented in the 2nd - 5th columns, while those for IPOPT warm-started by IPM-LSTM are listed in the last four columns.
- The computational results demonstrate that **MUMPS is generally outperformed by MA57** but superior to MA27 and MA86.
- **Regardless of linear solvers, warm-starting IPOPT** with primal-dual solutions provided by **IPM-LSTM enhances the performance** of IPOPT itself. 
||IPOPT||||IPM-LSTM+IPOPT||||
|--|---|---|---|---|---|---|---|---|
|**Dataset**|**MUMPS**|**MA27**|**MA57**|**MA86**|**MUMPS**|**MA27**|**MA57**|**MA86**|
|**Convex QPs (RHS)**|0.269 |0.328 |0.191 |0.304 |0.170 |0.220 |**0.131** |0.195 |
|**Non-convex Programs (RHS)**|0.289 |0.428 |0.215 |0.387 |0.225 |0.297 |**0.171** |0.299 |
|**Convex QCQPs (RHS)**|0.287 |0.388 |0.251 |0.327 |**0.204** |0.270 |0.220 |0.226 |
|**Convex QPs (ALL)**|0.279 |0.354 |0.199 |0.376 |0.201 |0.272 |**0.159** |0.245 |
|**Non-convex Programs (ALL)**|0.305 |0.396 |0.213 |0.387 |0.193 |0.256 |**0.146** |0.237|
|**Convex QCQPs (ALL)**|0.253 |0.328 |0.213 |0.311 |0.173 |0.202 |**0.160** |0.194 |

>*2. There is no analysis of how the performance changes with varying numbers of iterations in the inner LSTM, which determines the performance-efficiency trade-off of IPM-LSTM. Longer inner iterations (deeper LSTM) may improve the quality of the solutions but increase computation costs. However, improved solution quality may also decrease the number of outer iterations needed and thus reduce the overall solving time. Moreover, increased LSTM depth can make the training process more difficult. Overall, this is a tricky part and should be empirically investigated more carefully.*

Thanks for your suggestion. We first remark that in our implementation, the number of IPM iterations (i.e., outer iterations) is fixed to $100$. The number of iterations in the inner LSTM indeed incurs a performance trade-off. To illustrate this, we conduct experiments on convex QP (RHS) problems with 100 variables, 50 inequality constraints, and 50 equality constraints, investigating the quality of approximate solutions under different LSTM inner iteration settings. We report the results in the table below and Fig. 1 in the supplementary material.
- At each IPM iteration, **as the LSTM network depth increases**, $\\|J^k y^k + F^k\\|$ decreases (see Fig. 1(a)). 
This indicates **an improvement in the quality of solutions to the linear systems**. Furthermore, the corresponding **IPM-LSTM converges faster** (e.g., fewer outer iterations) when the LSTM network becomes deeper (see Fig. 1(b)).
- From the table below, generally speaking, **the IPM-LSTM with deeper LSTM architectures tends to produce better approximate solutions** (with lower objective values and smaller constraint violation) but **with longer computational time.**
- Training deeper LSTM networks indeed becomes more challenging, with issues such as **longer training time and more memory consumption** due to large computational graphs, and **vanishing or exploding gradients** [1].

| # Ite. in LSTM | Obj. | Max ineq. | Mean ineq. | Max eq. | Mean eq. | Time (s) |
| -------- | -------- | -------- | -------- | -------- | -------- | -------- |
|10 | -12.740 | 0.000 | 0.000 | 0.006 | 0.002 | 0.012 |
|20 | -14.615 | 0.000 | 0.000 | 0.003 | 0.001 | 0.020 |
|30 | -14.753 | 0.000 | 0.000 | 0.002 | 0.001 | 0.028 |
|40 | -14.897 | 0.000 | 0.000 | 0.003 | 0.001 | 0.037 |
|50 | -14.906 | 0.000 | 0.000 | 0.001 | 0.000 | 0.045 |
|60 | -15.021 | 0.000 | 0.000 | 0.001 | 0.000 | 0.055 |
|70 | -15.026 | 0.000 | 0.000 | 0.000 | 0.000 | 0.063 |
|80 | -15.012 | 0.000 | 0.000 | 0.000 | 0.000 | 0.072 |
|90 | -14.960 | 0.000 | 0.000 | 0.000 | 0.000 | 0.080 |

>*3. Although LSTM benefits from batched processing—a potential strength of IPM-LSTM accelerated by GPU—this work fails to provide empirical evidence supporting this property.*

Thanks for your suggestion. Given a batch size of $k$ testing samples, we are able to feed all of them to the trained IPM-LSTM at the same time. Let $T$ denote the wall-clock time used when all samples are solved by IPM-LSTM. The average solution time for each instance is then $T/k$. Thus, the proposed IPM-LSTM approach can indeed leverage the advantage of GPU batch processing. 
To demonstrate this, we conducted tests on convex QPs (RHS) with 100 variables, 50 inequalities, and 50 equalities, with batch sizes varying from 10 to 5,000. The computational results are reported in Fig. 1\(c\) of the supplementary material. As shown in Fig. 1\(c\), **the average solution time decreases as the batch size increases**. However, due to hardware limitations, once the batch size exceeds a certain threshold, the average computational time will level off. [1] Pascanu, Razvan, Tomas Mikolov, and Yoshua Bengio. "On the difficulty of training recurrent neural networks." International conference on machine learning. PMLR, 2013. --- Rebuttal 2: Title: Please Engage in Discussion Comment: Dear Reviewer, Thank you for your time and efforts throughout the review period. Please read the authors' rebuttal as soon as possible and indicate if their responses have addressed all your concerns. Best, Your AC
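The $T/k$ amortization argument in the batching response above can be sketched with a stand-in computation. In the sketch below, `solve_batch` is a hypothetical placeholder for one batched IPM-LSTM pass (a matrix multiply plus nonlinearity), not the actual model; it only illustrates how wall-clock time over a batch is divided by the batch size:

```python
# Illustrative batching sketch; solve_batch is an assumed stand-in, not IPM-LSTM.
import time
import numpy as np

rng = np.random.default_rng(0)
dim = 200
W = rng.standard_normal((dim, dim))

def solve_batch(batch):
    # One "solver pass" applied to all instances in the batch at once.
    return np.tanh(batch @ W)

for k in [1, 10, 100]:
    batch = rng.standard_normal((k, dim))
    t0 = time.perf_counter()
    solve_batch(batch)
    T = time.perf_counter() - t0
    print(f"batch size {k}: avg time per instance {T / k:.2e} s")
```

As in the rebuttal's Fig. 1(c), the per-instance average tends to shrink with batch size until hardware limits are reached, at which point it levels off.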
null
null
Rebuttal 1: Rebuttal: Before addressing the reviewers' comments, we would like to first clarify the two-stage framework proposed in our work.
- **Stage I**: we **utilize IPM-LSTM to produce a high-quality approximate solution** (which might neither be feasible nor optimal but is presumably well-centered). IPM-LSTM is designed like a classic IPM, with the linear systems solved by an LSTM network rather than a linear system solver.
- **Stage II**: we then use the approximate solution to **warm-start an IPM solver**, such as IPOPT.

To the best of our knowledge, **our work is the first attempt to enhance IPMs with learning-based techniques for addressing general nonlinear programs**. Pdf: /pdf/f297b6e67884041566db613e315eb103f6aae9bb.pdf
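The two-stage idea above can be sketched on a toy problem: a cheap approximate solve (standing in for Stage I's IPM-LSTM) warm-starts an iterative method (standing in for Stage II's IPOPT), which then needs fewer iterations than a cold start. Everything below is an illustrative assumption, not the paper's pipeline:

```python
# Toy warm-start sketch: minimize 0.5 x^T A x - b^T x by gradient descent,
# comparing a cold start with a cheap Stage-I-style approximation.
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = np.eye(n) * 2.0 + 0.01 * rng.standard_normal((n, n))
A = (A + A.T) / 2  # symmetric, diagonally dominant -> positive definite
b = rng.standard_normal(n)

def gd_iters(x0, tol=1e-8, lr=0.2, max_iter=10000):
    """Iterations of gradient descent until the residual ||Ax - b|| < tol."""
    x = x0.copy()
    for k in range(max_iter):
        if np.linalg.norm(A @ x - b) < tol:
            return k
        x -= lr * (A @ x - b)
    return max_iter

cold = gd_iters(np.zeros(n))

# Stage I stand-in: a single Jacobi sweep gives a coarse approximate solution.
x_warm = b / np.diag(A)
warm = gd_iters(x_warm)

print(cold, warm)  # the warm start needs fewer iterations
```

The design point mirrors the rebuttal: Stage I does not need to be feasible or optimal, only good enough that Stage II's iteration count (and hence total time) drops.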
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Context and Geometry Aware Voxel Transformer for Semantic Scene Completion
Accept (spotlight)
Summary: This paper proposes CGFormer for semantic scene completion. It generates distinct voxel queries for different input images, instead of simply predefining a set of trainable parameters. The deformable cross attention is extended to 3D pixel space, avoiding sampling the same features for different projecting points. The method further enhances the 3D volume from both local and global perspectives. The paper also incorporates stereo depth to improve the accuracy of depth probability via a depth refinement strategy. Strengths: 1. The code is submitted in the supplementary material, whose guideline is detailed and easy to follow. 2. The proposed blocks bring notable performance gain, with elaborate experiments. 3. CGFormer surpasses previous methods on both the SemanticKITTI and SSC-Bench-KITTI360 benchmarks. 4. The paper is well-organized and the motivation is clear. Weaknesses: 1. EfficientNetB7 contains many more parameters than the ResNet50 employed in previous coarse-to-fine methods (e.g., VoxFormer, Symphonize). Comparing the performance under the same setting would be fairer. 2. Text error in Figure 1. 3. Although the paper presents both superior qualitative and visualization results, it would be better to provide several failure cases. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the weaknesses above. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: In this paper, the limitations have been well discussed. The accuracy on most of the categories is unsatisfactory. Furthermore, there is a need to explore designing depth estimation networks under multi-view scenarios to extend the geometry-aware view transformation to these scenes. The method is worth further exploration. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Q1. Parameters and performance comparison between CGFormer and Symphonize.

To compare the performance of CGFormer with Symphonize with a comparable number of parameters, we revisited the design of our CGFormer and replaced the EfficientNetB7 with ResNet50 and the Swin blocks in the TPV branch with more lightweight residual blocks. The results on the SemanticKITTI validation set are summarized in the table below. As shown in this table, our CGFormer is robust across different backbone networks and still outperforms Symphonize with a comparable number of parameters. This highlights CGFormer's potential and robustness.

| Model | IoU&uarr; | mIoU&uarr; | Parameters (M)&darr; | Training Memory (M)&darr; |
| --- | --- | --- | --- | --- |
| CGFormer (EfficientNetB7, Swin Block) | **45.99** | **16.87** | 122.42 | 19330 |
| CGFormer (ResNet50, Swin Block) | **45.99** | 16.79 | 80.46 | 19558 |
| CGFormer (ResNet50, ResBlock) | 45.86 | 16.85 | **54.8** | 18726 |
| Symphonize | 41.92 | 14.89 | 59.31 | **17757** |

> Q2. Text error in Figure 1.

Thanks for pointing out this mistake. We will correct it in the revised manuscript.

> Q3. Failure cases.

Figure 2 in the uploaded PDF shows two examples of failure cases, where it is difficult to distinguish between adjacent cars. As seen in the RGB images, these objects are located in distant regions, making them challenging to differentiate. In the future, we plan to explore ways to improve the performance of our method in these far regions. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed rebuttal, which has addressed my concerns. The parameters and performance comparison is convincing and can be added in the revised version. As a result, I am willing to raise the score from *Weak Accept* to *Accept*. --- Rebuttal 2: Comment: Dear Reviewer Lm1y, Thank you for raising your score. Your suggestions are valuable for improving the quality of our paper. We will update our manuscript in the revision. 
Authors of Paper ID 3532 Title: Thanks for your review
Summary: The authors present Context and Geometry Aware Voxel Transformer (CGFormer) for the semantic scene completion task. Their method extends the baseline VoxFormer with a Context-Aware Query Generator (CAQG), 3D deformable attention layers, a depth refinement block, and a dynamic fusion of voxel and TPV features. Experiments on two datasets demonstrated that CGFormer can achieve state-of-the-art mIoU performance. Strengths: 1. solid experiments. Two datasets are experimented with and results tables are presented clearly along with many baseline methods. 2. detailed ablation study for every designed module. 3. The figures are well-organized and easy to follow, which is greatly appreciated. Weaknesses: 1. The biggest concern to the reviewer is that the empirical results are not convincing enough to support the main claims of the proposed method. Specifically, a) The authors claim that context-aware queries are a major novelty that helps the model perform better in its region of interest. However, the empirical results show that the performance is superior mainly in categories with larger areas such as the roads and sidewalks, while not as good in categories with finer details such as trucks, persons, and bicycles. The fact that CGFormer is comparable or worse compared to the baselines with context-independent queries on 50% of the categories makes the claims less compelling. b) Besides a), the qualitative difference in Figure 3 is not illustrative. According to the first two columns, the context-dependent query should sample more at locations with details such as cars and bicycles, therefore, in the 3rd column, shouldn’t the context-aware queries be focused more on the white car? It is not clear why the points in (b) are irrelevant. c) The model is named CGFormer, while how context-aware plays a part has been discussed in the paper, how the performance reflects geometry-aware is barely touched on. 2. Some minor issues with the writing. 
The proposed CAQG seems to be a key component in the model. However, its details are glossed over in Section 3.2 lines 137--146. Please consider elaborating on this part to emphasize the novelty. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Details of the CAQG should be put in Section 3 since it is a key component of the proposed method. 2. Table 3 shows that directly adding TPV slightly increases mIoU while hurting the IoU. This is interesting; can the authors provide their thoughts and analysis? 3. In Table 3, LB-only, DF, and adding each of the TPV planes are ablated. Is it possible to also ablate the TPV branch only? Conceptually this is the only missing setting. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors have discussed the limitations in their appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > W1 (a). Performance gain of the context-aware queries.

Compared to methods that do not use temporal inputs or rely on much larger image backbone networks, CGFormer achieves the highest performance in 12 out of 19 categories on both the test and validation sets, as shown in Tables R2 and R3 of the uploaded PDF, encompassing both large-area and fine-detail objects, while the other methods excel in at most 3 to 4 categories. Although CGFormer shows limitations in several categories such as person and truck, it excels in most of the categories, including car, traffic sign, trunk, and pole. Indeed, CGFormer attains the top performance on the bicycle under the same condition. To further validate the effectiveness of the proposed context-aware queries, we remove the CAQG module while keeping all other components unchanged. We then compare the performance of the two models across various categories. The results are presented in Table R3 of the uploaded PDF, with categories where performance improves after integrating the CAQG module highlighted in red. The CAQG module enhances performance in most categories (11 out of 19), including both large-area objects (sidewalk, terrain) and small objects (trucks, bicycles, person), while having minor effects on others. The above results demonstrate the efficacy of the proposed module.

> W1 (b). More explanation of Figure 3 in the manuscript.

Thanks for your suggestion, which helps illustrate our motivation. We will replace Figure 3 with clearer examples in the revised manuscript, as shown in Figure R1 of the uploaded PDF. Here is a more detailed explanation of this figure. The yellow point represents the projected location of the visible voxel queries of the deformable cross-attention from 3D space onto the 2D image plane using the camera's intrinsic and extrinsic matrices. This projection aggregates information from the image features. 
For a voxel projected onto the image plane, the query should tend to aggregate information relevant to the semantics of the position where it lands. As shown in the left and middle columns, the yellow reference points fall onto the car and bicycle, and the sampling points of the context-aware query are mainly distributed within the regions of these two objects. In the right column, the query point projects onto the building, and the context-aware query points are primarily located on the building. In contrast, the context-independent query points are scattered in the region of the car.

> W1 (c). More explanation for "geometry-aware".

The modules derived from depth information are referred to as "geometry-aware" in the title. We provide a more detailed discussion as follows. (1) As mentioned in the manuscript (lines 49-51), visible queries are projected onto the image plane to aggregate information by deformable cross-attention. However, when projecting the 3D points onto the image plane, many points may end up at the same 2D position with similar sampling points on the 2D feature map, causing a crucial depth ambiguity problem. To address this issue, we extend deformable cross-attention from 2D to 3D pixel space, which allows us to differentiate points with similar image coordinates by their depth coordinates, as illustrated in Figure 1 of the manuscript. (2) Additionally, we introduce depth refinement to improve the accuracy of the estimated depth probability. Model (a) in Table 3 provides ablation results for the 3D deformable cross-attention, and Table 5 includes ablation results for depth refinement.

> W2 & Q1. More explanation for the CAQG module.

Thanks for pointing this out; here is a revised version (lines 141-144) of the manuscript. To elaborate, the context feature $\mathbf{C}\in\mathbb{R}^{H\times W\times C}$ and depth probability $\mathbf{D}\in\mathbb{R}^{H\times W\times D}$ are first derived from the 2D image feature $\mathbf{F}^{2D}$. 
Taking $\mathbf{C}$ and $\mathbf{D}$ as inputs, the query generator $f$ maps them from 2D image space to 3D space to generate context-aware voxel queries $\mathbf{V_{Q}}\in\mathbb{R}^{X\times Y\times Z}$, where $(X,Y,Z)$ denotes the spatial resolution of the 3D volume. The query generator can be any explicit view transformation approach (e.g., voxel pooling, FLoSP, CaDDN). Table 4 provides ablation experiments for different methods.

> Q2. More analyses for the TPV branch.

Thanks for your constructive question. We found additional insights regarding the TPV branch. We reanalyzed the ablation experiments (models d, e, f, g in Table 4 of the manuscript) and discovered that the performances were quite similar. To determine whether these performance gains are statistically significant across multiple training seeds, we conducted further experiments. As shown in Table R5 of the uploaded PDF, combining the outputs from both branches does enhance performance, but with some variability. We speculate that the TPV branch primarily focuses on global information, making it challenging to capture fine-grained voxel details. In contrast, the local branch enhances the fine-grained structural details of the 3D voxels. Simple addition treats all features as equally important, which may introduce side effects. Instead, weighting the more important features from each branch can enhance overall performance by leveraging their distinct strengths.

> Q3. Ablation experiments for TPV branch.

Thanks for your suggestion to complete our experiments. Taking model (b) of Table 3 in the manuscript as the baseline, we present the ablation results for the TPV branch. As shown in Table R4 of the uploaded PDF, enhancing the features with the TPV branch boosts the performance in terms of IoU, with only a minor improvement in mIoU, consistent with the assumption that TPV focuses more on global information. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed rebuttal. 
I appreciate the explanations of my questions, the added experiment tables, the added ablation study, and the analysis of the TPV branch. Those have lifted most of my concerns. Thank you for updating Figure 3 with a more detailed analysis. I find it improved over the previous version, though the examples have been changed, understandably for better illustration. Thank you for highlighting the performance gain in Table R3. While there are improvements in many categories, I still find the performance drop in others a bit concerning. But given the improved average IoU and mIoU, I think it should be OK. However, in future revisions, I encourage the authors to provide some failure cases in those categories to give more insights into the advantages and disadvantages of the proposed module. I have also read the other reviews and separate rebuttals. All in all, I am willing to raise my rating to *borderline accept*. --- Reply to Comment 1.1.1: Title: Thanks for your review Comment: Dear Reviewer ryRH, Thank you for raising your score. Your suggestions are valuable for improving the quality of our paper. We will update the overall manuscript and provide more failure cases, similar to the examples provided in the uploaded PDF. Authors of Paper ID 3532 --- Rebuttal 2: Title: Inquiry About Any Additional Concerns Comment: Dear Reviewer ryRH, Thanks for your comments, which are valuable for improving the overall quality of this manuscript. To address your major concerns, we compared the performance of CGFormer with other methods using consistent inputs and similar image backbones. We conducted ablation experiments to evaluate the performance gains of context-aware queries across different classes. The uploaded PDF includes a more detailed analysis of the figures and additional illustrative examples. We have also provided a thorough explanation of the origin of the term 'geometry aware' as mentioned in the title. 
Additionally, we have included ablation studies and analyses for the TPV branch. Could we kindly ask if our responses have addressed your concerns and if you have any new questions? Thanks for your time and effort. Authors of Paper ID 3532
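The depth-ambiguity point from W1 (c) above can be illustrated with a toy pinhole projection: two voxel centers lying on the same camera ray project to the same 2D pixel, but keeping the depth coordinate (i.e., working in 3D pixel space) disambiguates them. Intrinsics and points below are assumed values, not the paper's code:

```python
# Illustrative depth-ambiguity sketch; intrinsics and points are assumed.
import numpy as np

# Simple pinhole intrinsics (assumed values).
fx = fy = 500.0
cx, cy = 320.0, 240.0
K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1.0]])

def project(point_cam):
    """Project a 3D point in camera coordinates to (u, v, depth)."""
    uvw = K @ point_cam
    return np.array([uvw[0] / uvw[2], uvw[1] / uvw[2], point_cam[2]])

p_near = np.array([1.0, 0.5, 5.0])
p_far = p_near * 2.0  # same camera ray, twice the depth

a, b = project(p_near), project(p_far)
print(a[:2], b[:2])  # identical pixel coordinates (u, v)
print(a[2], b[2])    # depths 5.0 and 10.0 tell the two voxels apart
```

With only $(u, v)$, the two voxels would sample identical 2D features; retaining the third (depth) coordinate is what lets the 3D deformable cross-attention assign them distinct sampling locations.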
Summary: This manuscript studies the problem of road scene semantic scene completion from RGB images. The architecture and benchmarking frameworks follow the widely accepted literature. The innovations proposed here are better queries that are informed of the geometry and semantics of the input scene, a cross-attention variant that leverages rich information from different depth planes in the cost volume, and better information fusion from different representations. The whole architecture achieves state-of-the-art performance on public benchmarks. Strengths: (1+) The paper aims to improve the query quality and solve the depth ambiguity and context-independence problems, which are important and well motivated for the SSC task. (2+) The paper proposes a novel method, CGFormer, which introduces a context-aware query generator to capture context-dependent queries and a novel Depth Net utilizing stereo depth and monocular depth for effective refinement. (3+) Experiments on SemanticKITTI (Table 1) and SSCBench-KITTI360 (Table 2) show CGFormer outperforms prior methods. The motivation and the proposed components are validated by detailed ablation studies (Tables 3, 4, 5). (4+) The paper is clearly written and the model architecture is easy to follow. Weaknesses: (1-) The paper notes that the context-dependent query tends to aggregate information from the points within the region of interest. An ablation on the number of cross-attention and self-attention layers should be provided. (2-) In alignment with previous methods, it would be better to demonstrate the performance when using only a monocular image as input (see Table 3 in VoxFormer: Sparse Voxel Transformer for Camera-based 3D Semantic Scene Completion). (3-) The visualization (Figures 4 and 5) shows some common scenes (it would be even better if the corresponding image views could be added). 
Could you show more performance improvements for objects like the bicycle in Figure 3 (due to capturing the regions of interest)? Technical Quality: 2 Clarity: 3 Questions for Authors: My questions are incorporated into the weakness section. Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The manuscript is limited in scholarship and can be improved by incorporating semantic scene completion references like [A] and [B]. [A] Efficient semantic scene completion network with spatial group convolution, ECCV 2018 [B] Lode: Locally conditioned eikonal implicit scene completion from sparse lidar, ICRA 2023 Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > W1. Ablation on the number of cross-attention and self-attention layers.

We present the results of different configurations of cross-attention and self-attention layers on the SemanticKITTI validation set in the table below. As shown in this table, the performance improves gradually as the number of attention layers increases. However, once the number exceeds a certain threshold, the performance tends to stabilize. In alignment with previous methods, we set the number of cross-attention layers to $3$ and the number of self-attention layers to $2$ in the manuscript. This table also includes the results of the model without the CAQG module. Notably, using just one self-attention and one cross-attention layer with the CAQG module yields better performance than using two self-attention and three cross-attention layers without the CAQG module. This demonstrates the effectiveness of our proposed context and geometry-aware voxel transformer.

| Model | IoU&uarr; | mIoU&uarr; | Training Memory (M)&darr; |
| --- | --- | --- | --- |
| 1 self, 1 cross | 44.95 | 16.24 | **16602** |
| 1 self, 2 cross | 45.12 | 16.26 | 16836 |
| 1 self, 3 cross | 45.22 | 16.40 | 16920 |
| 2 self, 3 cross | **45.99** | **16.87** | 19930 |
| 2 self, 4 cross | 45.86 | 16.74 | 19601 |
| 3 self, 4 cross | 45.76 | 16.77 | 22364 |
| w/o CAQG | 44.88 | 15.84 | 18959 |

> W2. Performance using only a monocular image.

Following VoxFormer and Symphonize, we replace the depth estimation network with AdaBins and present the results on the SemanticKITTI validation set in the table below. To better evaluate the performance of our CGFormer, we also include the results of VoxFormer, Symphonize, and OccFormer. Compared to the stereo-based methods when using only a monocular image (VoxFormer, Symphonize), CGFormer achieves superior performance in terms of both IoU and mIoU. Furthermore, our method also surpasses OccFormer, the state-of-the-art monocular method. 
| Model | IoU&uarr; | mIoU&uarr; | | --- | --- | --- | | CGFormer (AdaBins) | **41.82** | **14.06** | | VoxFormer-S (AdaBins) | 38.68 | 10.67 | | VoxFormer-T (AdaBins) | 38.08 | 11.27 | | Symphonize (AdaBins) | 38.37 | 12.20 | | OccFormer | 36.50 | 13.46 | > W3. More visualizations. Figure R1 in the uploaded PDF displays two visualization examples for objects with finer details. As shown in the RGB image, the sampling points of the context-dependent queries are typically situated within the region of interest. This allows CGFormer to capture much clearer details than other methods, highlighting the effectiveness of our proposed module. We will also provide the image view in the revised manuscript as done in this figure. > W4. The manuscript is limited in scholarship and can be improved by incorporating semantic scene completion references. Thanks for your suggestion. We will incorporate these semantic scene completion references in the revised manuscript. --- Rebuttal Comment 1.1: Title: Thanks Comment: I'd like to thank the authors for the thoughtful rebuttal. I am glad to see the performance using only a monocular image, which demonstrates that CGFormer still performs best with a monocular image as input. It dispelled my doubts that there might be an unfair comparison with other methods. Furthermore, Fig. R1 shows two visualization examples of objects with finer details that are convincing. Based on the authors' responses and the comments of other reviewers, I am willing to change my rating to weak accept. --- Rebuttal 2: Title: Thanks for your review Comment: Dear Reviewer wDiN, Thank you again for your review. We are glad that our response has addressed the questions you raised. Authors of Paper ID 3532
Summary: This paper proposes a state-of-the-art Semantic Scene Completion method called CGFormer. It introduces a Context and Geometry Aware Voxel Transformer that dynamically generates queries tailored to individual input images, addressing depth ambiguity through a 3D deformable cross-attention mechanism. The network leverages a multi-scale 3D representation by combining voxel and tri-perspective views to enhance both local semantic and global geometric information. State-of-the-art results are achieved on key benchmarks. Strengths: 1. This paper addresses limitations of existing methods by utilizing a combination of voxel and tri-perspective view representations to capture both local details and global structures. 2. The idea that different input images have unique contextual features is very interesting. 3. The experiments are adequate, with complete qualitative analysis, ablation experiments, and visualization results. 4. On the SemanticKITTI test set, the proposed method improves most of the metrics. Weaknesses: 1. On the SSCBench-KITTI360 test set, CGFormer does not improve much compared to Symphonize, which achieves higher values on almost half of the metrics and appears to have fewer parameters. 2. The Context and Geometry Aware Voxel Transformer seems to be a redesign of Symphonize that adds Deformable Self-Attention after Deformable Cross-Attention; more explanation should be added as to why this is done. 3. The context net is not described in the main text or in the supplementary material. 4. The table font is too small to read. Technical Quality: 2 Clarity: 3 Questions for Authors: Refer to the weaknesses. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The limitations of this paper are primarily in the accuracy of certain categories (e.g., pedestrians and bicyclists), which suggests that there is room for improvement in these areas, as outlined by the authors in Section 5. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > W1. Parameter and performance comparison with Symphonize. | Model | IoU&uarr; | mIoU&uarr; | Parameters (M) &darr; | Training Memory (M) &darr; | | --- | --- | --- | --- | --- | | EfficientNetB7, Swin Block | **45.99** | **16.87** | 122.42 | 19330 | | ResNet50, Swin Block | **45.99** | 16.79 | 80.46 | 19558 | | ResNet50, ResBlock | 45.86 | 16.85 | **54.8** | 18726 | | Symphonize | 41.92 | 14.89 | 59.31 | **17757** | Thanks for your valuable suggestion. We revisit the architecture of CGFormer and discover some interesting new results. To compare its performance with Symphonize under a comparable number of parameters, we analyze the components of CGFormer and find that replacing EfficientNetB7 (the image backbone) and the Swin blocks (the TPV branch backbone) with the more lightweight ResNet50 and residual blocks, respectively, significantly reduces the number of parameters of our network. The results on the SemanticKITTI validation set are presented in the table above. Compared to the original architecture, CGFormer maintains stable performance regardless of the backbone networks used for the image encoder and TPV branch encoder, underscoring its effectiveness, robustness, and potential. With a comparable number of parameters, the lightweight CGFormer achieves an IoU of 45.86 and an mIoU of 16.85 on the SemanticKITTI validation set, significantly surpassing Symphonize's IoU of 41.92 and mIoU of 14.89. We retrain this lightweight model on the KITTI-360 dataset, with the detailed results of each class in Table R1 of the uploaded PDF. The lightweight version of CGFormer achieves an IoU of 47.78 and an mIoU of 20.03, a substantial improvement of 3.5 IoU and 1.45 mIoU over Symphonize. For specific categories, the original CGFormer architecture outperforms Symphonize in 9 out of 18 classes, and with the lighter backbones, it surpasses Symphonize in 10 out of 18 classes. 
These results further highlight the superiority of our approach. > W2. More explanation of the context and geometry aware voxel transformer. We apologize for any misunderstanding. The context and geometry aware voxel transformer is not a redesign of Symphonize; we provide a more detailed explanation here. Existing coarse-to-fine (sparse-to-dense) methods, such as VoxFormer and MonoOcc, generally follow a pipeline that first aggregates 3D information for visible voxels using depth-based queries. These queries are defined as a set of learnable parameters, which are the same for all input images. Subsequently, these methods complete the 3D information for non-visible regions using the reconstructed visible areas as starting points. The aggregation of information for visible voxels is accomplished through deformable cross-attention, while the completion of information in non-visible regions is handled by deformable self-attention, similar to MAE [1]. For the proposed context and geometry-aware voxel transformer, we take into account the context of different images and introduce context-dependent queries. Instead of solely predefining a set of learnable parameters that primarily capture the overall distribution of the dataset, the context-dependent queries are related to the image content, allowing them to aggregate information from points within contextually relevant regions. Additionally, we extend deformable cross-attention from the 2D to the 3D pixel space, enabling the differentiation of points with similar image coordinates based on their depth coordinates. In contrast, Symphonize employs context-independent voxel queries for all input images. It first condenses the image into a set of tokens with higher-level instance semantics; the voxel queries then serve as the queries of the deformable cross-attention, while the set of tokens serves as the keys and values. 
The original manuscript of Symphonize referred to this set of tokens as "queries", which may have led to some misunderstanding. > W3. Details of the context net. We apologize for the oversight regarding this module. The image encoder consists of a backbone network for extracting multi-scale features and a feature pyramid network to fuse them. We employ the SECONDFPN [2] network here, which fuses the multi-scale image features and outputs a feature map $\mathbf{F}^{2D}\in\mathbb{R}^{H\times W\times C}$, with $C$ set to $512$. The context net consists of several convolution blocks that take $\mathbf{F}^{2D}$ as input and reduce the number of channels. The generated context feature $\mathbf{C}$ has a shape of $H\times W\times 128$. We will include these details in the revised manuscript. > W4. The table font is too small to read. Thanks for your suggestion. We will fix it in the revised manuscript. Examples of tables with larger fonts are provided in the uploaded PDF. [1] Masked autoencoders are scalable vision learners, CVPR 2022 [2] SECOND: Sparsely Embedded Convolutional Detection, Sensors 2018 --- Rebuttal 2: Title: Inquiry About Any Additional Concerns Comment: Dear Reviewer jg8M, Thanks for your comments, which are valuable for improving the overall quality of this manuscript. We have provided an additional architecture analysis of our proposed method and compared it with Symphonize under a comparable number of parameters. Besides, we have provided more detailed explanations of the network components and examples of tables with larger fonts in the uploaded PDF. Could we kindly ask whether our responses have addressed your concerns and whether you have any new questions? Thanks for your time and effort. Authors of Paper ID 3532
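To make the descriptions of the context net (W3) and the context-dependent queries (W2) above concrete, here is a minimal NumPy sketch. All shapes, names, and the single-layer reduction are our own hypothetical simplification for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 24, 80                 # feature-map resolution (hypothetical)
C_in, C_out = 512, 128        # channels: fused FPN output -> context feature
N = 10                        # number of visible-voxel queries (hypothetical)

F2D = rng.standard_normal((H, W, C_in)).astype(np.float32)  # fused image feature

# Context net, reduced here to a single 1x1 conv + ReLU; the real module
# stacks several convolution blocks but performs the same 512 -> 128
# channel reduction described in the rebuttal.
weight = rng.standard_normal((C_in, C_out)).astype(np.float32) * 0.02
context = np.maximum(np.einsum('hwc,cd->hwd', F2D, weight), 0.0)  # (H, W, 128)

# Context-dependent queries: each visible voxel projects to a pixel (u, v)
# and is initialized from the image content there, so the queries differ
# per input image, unlike one learnable query table shared across inputs.
uv = np.stack([rng.integers(0, W, N), rng.integers(0, H, N)], axis=1)
queries = context[uv[:, 1], uv[:, 0]]  # (N, 128)
```

The contrast with the context-independent scheme is that `queries` here is a function of `F2D`, whereas a shared learnable table would be identical for every image.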
Rebuttal 1: Rebuttal: We appreciate the valuable comments of the reviewers, which have greatly contributed to enhancing the quality of our paper. We are glad that the reviewers recognized various strengths of our work, including the clear motivation [wDiN, Lm1y], interesting idea [jg8M], comprehensive experiments [ryRH, jg8M, Lm1y], good performance [jg8M, wDiN, Lm1y], clear writing [wDiN, Lm1y], and easy-to-follow figures [ryRH, wDiN]. Additionally, Reviewer [Lm1y] highlighted that the provided guideline for the submitted code is detailed and easy to follow. We answer the specific questions from each reviewer below. We have uploaded a PDF file with the figures and tables referenced in our rebuttal. Pdf: /pdf/daefc12ebe14cc1aecdc687cf937a675d99b63ff.pdf
NeurIPS_2024_submissions_huggingface
2024
SWT-Bench: Testing and Validating Real-World Bug-Fixes with Code Agents
Accept (poster)
Summary: In this paper, the authors propose a novel benchmark for evaluating the capability of LLMs and Code Agents in generating test cases from GitHub issues. The benchmark includes real-world issues, patches, and golden tests from popular Python repositories. The authors claim that Code Agents outperform traditional test generation methods and highlight the potential of these agents in improving software quality and developer productivity. Strengths: The paper proposes a new benchmark for test generation, addressing a gap in current research. The approach leverages real-world data from GitHub, making the benchmark relevant and practical. Weaknesses: The benchmark may suffer from data contamination: it is collected from GitHub, which might have been exposed to the pretraining corpora of LLMs. The authors overlook substantial existing work on test case generation using LLMs. There is no discussion of the long-term relevance and maintenance of the benchmark, which is crucial for its sustained utility. The benchmark is focused solely on Python, ignoring other programming languages. The experiments are conducted on only three LLMs (GPT-4, Claude-3, Mixtral 8x22B), which is insufficient for a comprehensive evaluation of the benchmark. The paper fails to address the broader societal impact. Technical Quality: 2 Clarity: 1 Questions for Authors: How does the benchmark ensure that it does not include tasks already present in the pretraining datasets of LLMs? Why were significant existing works on LLM-based test generation not included in the paper? What measures are in place to ensure the long-term relevance and maintenance of the SWT-BENCH benchmark? How might the results differ if the benchmark included tasks in programming languages other than Python? Can the authors discuss the potential societal impacts of the widespread adoption of Code Agents in software testing, particularly concerning developer employment? 
Please see the above to clarify any misunderstandings and add additional results. Confidence: 4 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: The paper lacks discussion on various limitations, including the limitations of the metrics used and potential data contamination across models. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer ueFW for their critical perspective and for raising interesting questions, which we address below. **Can you discuss the effect of possible data contamination on your results and how it might be mitigated?** We thank the reviewer for raising this point. In short, we see no statistically significant difference in performance (p=37%) between issues from before and after the data cutoff and thus conclude that contamination is not affecting our results significantly. Please see the main response for more details. **Why were significant existing works on LLM-based test generation not included in the paper?** We discuss automated test generation (including using LLMs) in our related work (Lines 71-75). Due to space constraints, this discussion focuses on the most important related works and, e.g., omits [1,2,3,4], which are not applicable to our setting as they only consider coverage and the generation of passing tests [1,3,4], require a large number of tests and fixes to be generated [2], or require the code under test to be specified [3,4]. We are happy to include these in the next revision using the additional space. If the reviewer could provide a list of works they believe to be relevant, we would be happy to review and include them. **How can the long-term relevance and maintenance of SWT-Bench be ensured?** We agree that long-term relevance is an important topic, actively discussed in the community. We believe SWT-Bench is particularly well positioned to ensure long-term relevance. In particular, new instances can easily be created from new repositories as well as from new issues in already used repositories. However, such a rolling benchmark also has disadvantages (comparability of scores, cost of reevaluation). Regardless of these aspects, we believe our initial results showing the promise of Code Agents are already valuable irrespective of any long-term maintenance of SWT-Bench. 
**Can you discuss the relevance of your results on Python and how they might generalize to other languages?** Please see our main response for a discussion of Python's relevance and how our approach applies to other languages. Further, it is common in the field to focus on a single language, especially when addressing a new task, be that Java [3,4], Kotlin [1], or Python [2,5]. **Can you evaluate your benchmark on more than three LLMs?** We first want to highlight that while we only considered 3 LLMs (the best proprietary model at the time (GPT-4), a model balancing cost and performance (Claude 3 Haiku), and a strong open-weight model (Mixtral 8x22B)), we consider a wide range of different agents based on the strongest available model. Given the challenging nature of the task and the resulting low performance even of GPT-4, we refrained from considering weaker models. We note that all other reviewers specifically highlighted the quality of our extensive evaluation. Finally, since the original submission, stronger models have been released, and we have conducted additional experiments on GPT-4o mini, Claude 3.5 Sonnet, and Mistral Large 2 using SWE-Agent, reporting results in Table 2 of the attached PDF. We observe that results follow general model capability as expected. **Can you discuss the potential societal impact of widespread adoption of Code Agents in software testing, particularly concerning developer employment?** Testing and bug reproduction are often neglected among professional developers due to a lack of extrinsic and intrinsic motivation [6]. We thus believe a more widespread adoption of Code Agents in software testing has the potential to not only improve code quality but also developer productivity and satisfaction. Regarding the potential to displace human developers, benchmarks such as SWT-Bench show that current Code Agents are still far from matching or even outperforming human developers. 
Instead, they show that human supervision is still essential to leverage current Code Agents. Finally, while we believe that benchmarks such as ours can help drive progress toward better AI systems and thus increased automation, we do not believe our work specifically to have an outsized societal impact beyond the general developments in Generative AI. We are happy to include this discussion in the revised version of the paper. **Can you discuss the various limitations of your work including the metrics used and potential data contamination?** We first want to highlight that we discuss a range of limitations in “Section 6: Limitations”. Regarding data contamination, we are happy to include the new results presented in the attached PDF and discussed in the global response in the paper. We would like to ask the reviewer to explain which further “various limitations, including … of the metrics used” they have in mind. **Conclusion** We hope we were able to address the reviewer’s concerns and questions. We would further like to ask the reviewer, where they see flaws in the soundness and presentation of our work given that all other reviewers rate them as good or excellent. Finally, we would like to ask the reviewer to reconsider if their score is justified given the practical relevance and high potential impact they attest our work with weaknesses being shared by most other work in the space. **References** [1] Alshahwan et al. 2024, Automated Unit Test Improvement using Large Language Models at Meta [2] Chen et al. 2022, CodeT: Code Generation with Generated Tests [3] Schäfer et al. 2023, An Empirical Evaluation of Using Large Language Models for Automated Unit Test Generation [4] Chen et al. 2024, ChatUniTest: A Framework for LLM-based Test Generation [5] Wang et al. 
2024, TESTEVAL: Benchmarking Large Language Models for Test Case Generation [6] Straubinger and Fraser 2023, A Survey on What Developers Think About Testing --- Rebuttal 2: Title: Response by Reviewer ueFW Comment: Thank you for the responses, and sorry for the late response due to a flurry of proposal and review deadlines. I have increased my overall assessment as some of my concerns have been addressed, while others remain, as noted below. **Data contamination** Thanks for your experiments on GPT-4 Preview 1106, but they do not fully address my concern. At a minimum, I would ask for data contamination results for all models, obtained by taking the existing inference results and dividing them based on the KC time. **Existing works not included in the paper?** Thanks for your response. But I need to mention that omitting CodeT is crazy. **Other languages?** The response does not address my concern. **Other LLMs** I would recommend providing experiments for OpenCodeInterpreter, DeepSeek-Coder, XwinCoder, CodeLlama, WizardCoder, and the StarCoder2 family among open-source LLMs for a new benchmark. --- Rebuttal Comment 2.1: Title: Reply to Reply to ueFW Comment: We thank the reviewer for acknowledging our rebuttal, engaging in the discussion, and raising their score. We would like to address their comments below. **Can you provide the data contamination results for each model?** Below we report results on SWT-Bench Lite for all considered models, split by whether the issue was created before or after the corresponding Knowledge Cutoff (KC). The results correspond to Table 2 in the attached global PDF, comparing model performances using SWE-Agent. Note that for our previous results on GPT-4, we considered the full SWT-Bench to ensure a sufficiently large number of samples from after the KC is available. Due to time constraints, this is not possible here, and the smaller size of SWT-Bench Lite results in far fewer instances being available after the KC. 
In fact, many of the recent models have a KC later than the latest instance (15 Aug 2023) in the full SWT-Bench (see the table below). For the only model with sufficiently early KC (Mixtral 8x22B), the overall F2P rate is so low that no meaningful comparison is possible (2/190 F2P instances before KC and 0/89 after). | Model | KC | PR created | n | Applicable |F2P | Coverage | |---------------------------|-------------|--------------|-----|--------------|-------|------------| | GPT-4 Preview 1106 | 30 Apr 2023 | before KC | 268 | 76.9 | 16.4 | 18.1 | | | | after KC | 11 | 54.5 | 18.2 | 20.0 | | Mistral Large 2 | 31 Jan 2024 | before KC | 279 | 4.3 | 0.7 | 0.2 | | | | after KC | 0 | - | - | - | | Claude 3.5 Sonnet | 30 Apr 2024 | before KC | 279 | 60.2 | 12.2 | 22.1 | | | | after KC | 0 | - | - | - | | GPT-4o mini (2024-07-18) | 31 Oct 2023 | before KC | 279 | 53.8 | 9.7 | 13.0 | | | | after KC | 0 | - | - | - | | Claude 3.0 Haiku | 31 Aug 2023 | before KC | 279 | 6.8 | 2.9 | 1.9 | | | | after KC | 0 | - | - | - | | Mixtral 8x22B | 30 Sep 2021 | before KC | 190 | 2.6 | 1.1 | 0.0 | | | | after KC | 89 | 2.2 | 0.0 | 1.4 | **Why did you not discuss CodeT in the paper?** As discussed in detail in our response to Reviewer PhkB, CodeT is not applicable to our setting, and we thus omitted it in our literature review due to space constraints. However, we will make sure to include it with a suitable discussion in the next revision of our paper, using the extra space of the camera ready version. **Why did you not consider popular Open-Source Code Models like CodeLlama?** During our experimentation, we indeed evaluated CodeLlama-70B and WizardCoder-Python-34B. However, we found they were not capable of following the desired diff format or agent instructions. Instead, they produced incoherent, incorrect, and degenerated outputs. As they thus consistently yielded close-to-zero F2P rates, we have excluded them from the reported results. 
We do not believe that comparing such low F2P rates would be interesting or meaningful, but we are happy to include results for these models in the appendix of a revised version. Since both the strongest models (e.g. GPT-4) alone (ZeroShotPlus) and slightly weaker models (e.g. Claude 3.0 Haiku) in an agent framework struggle with SWT-Bench, we believe it is most interesting to focus on comparing different agent frameworks using the best available base models. **Conclusion** We hope to have addressed the reviewer's concerns and are looking forward to their response. --- Rebuttal 3: Title: Reply to Reply to Reply to ueFW Comment: We thank the reviewer for their quick reply and hope that we successfully addressed all points on which they did not have follow-up questions. **Effect of Data Contamination on Applicability for GPT-4** While the applicability is indeed higher for issues created before the KC for GPT-4, both the F2P rate and Coverage are lower, and these are much stronger indicators of memorization. Applicability solely measures whether the LLM can respect the required diff format, while F2P and Coverage measure the correctness of the generated tests. Memorization would lead to a correct test (higher F2P and Coverage) but would not necessarily improve applicability, as the test would not have been included in the training data in the required diff format. Further, we want to highlight that only 11 samples after the KC were available, making any conclusions based on these results statistically questionable (as highlighted in our previous reply).
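The before/after-KC comparisons above amount to a two-proportion significance test. As a rough illustration (our own pure-Python sketch, not the authors' analysis), the counts can be approximated from the GPT-4 Preview row of the table: roughly 16.4% F2P of 268 instances before the KC (≈44) versus 18.2% of 11 after (=2). Note that with only 11 post-KC samples the normal approximation is shaky, and an exact test (e.g. Fisher's) would be preferable, which matches the rebuttal's caveat about statistical reliability.

```python
import math

def two_proportion_p(x1, n1, x2, n2):
    """Two-sided two-proportion z-test (normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# Hypothetical counts reconstructed from the reported percentages.
p = two_proportion_p(44, 268, 2, 11)
assert p > 0.05  # no detectable before/after-KC difference at this sample size
```

The resulting p-value is far above any conventional significance threshold, consistent with the conclusion that these splits show no measurable contamination effect.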
Summary: This paper introduces SWT-BENCH, a novel benchmark for evaluating automated test generation capabilities of AI models, particularly Code Agents. The authors adapt existing code repair datasets and methods to the task of test generation, proposing new metrics such as fail-to-pass rate and change coverage. Their experiments reveal that Code Agents, originally designed for code repair, outperform methods specifically designed for test generation. The best-performing method, SWE-AGENT+, achieves an 11.1% success rate in generating relevant fail-to-pass tests. The study also demonstrates significant complementarity between different approaches, with an ensemble of methods solving 70% more samples than the best single method. Additionally, the authors show that generated tests can serve as a strong signal for the correctness of proposed code fixes. While the study is limited to Python and may have some selection biases, it suggests that Code Agents have significant potential for automated test generation, potentially improving both software quality and developer productivity. Strengths: Originality: - Introduces SWT-BENCH, a novel benchmark for test generation, adapting existing code repair datasets to a new task. - Proposes the use of Code Agents for test generation. - Develops new metrics (fail-to-pass rate and change coverage) specifically for evaluating test generation. Quality: - Comprehensive evaluation of multiple methods, including baselines and state-of-the-art approaches. - Rigorous experimental setup with detailed reporting of methodologies and results. - Great analysis of results, including complementarity between methods and correlation with code repair performance. Clarity: - Well-structured paper with clear explanations of complex concepts. - Effective use of figures and tables to illustrate key points and results. - Detailed appendices providing full prompts and additional experimental details. 
Significance: - Demonstrates the potential of Code Agents for automated test generation, a critical area in software development. - Shows that generated tests can effectively validate code fixes, potentially improving code quality processes. - Provides a new benchmark (SWT-BENCH) that could drive further research in this area. - Highlights the complementarity of different approaches, suggesting potential for ensemble methods in this domain. Weaknesses: 1. Limited statistical analysis: The authors acknowledge they were unable to perform a full statistical analysis due to computational costs. This limits the confidence in the reported results. It would be great if they can conduct a power analysis to determine the minimum number of runs needed for statistical significance. 2. Lack of error analysis: The paper doesn't provide a detailed analysis of the types of errors made by different methods. Maybe they should categorize and quantify common error types for each method. Provide qualitative examples of generated tests, both successful and unsuccessful. Analyze how error types correlate with issue complexity or repository characteristics. 3. Limited exploration of hyperparameters: The paper doesn't discuss the impact of different hyperparameters on the performance of Code Agents. To improve: Conduct an ablation study on key hyperparameters (e.g., number of interaction rounds, temperature). Provide insights on how to optimize Code Agents specifically for test generation. 4. Insufficient comparison to human performance: The paper lacks a comparison to human-generated tests. To address this: Include a small-scale study with professional developers generating tests for a subset of issues. Compare the quality, coverage, and time efficiency of AI-generated vs. human-generated tests. 5. Narrow focus on Python: The study is limited to Python, potentially limiting generalizability. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. 
Given the computational constraints that prevented a full statistical analysis, could you provide more details on the variability of your results? Even with limited runs, could you estimate confidence intervals for your main findings? 2. The paper lacks a detailed error analysis. Could you provide examples of common failure modes for the different methods, particularly for SWE-AGENT+? How do these failure modes relate to the characteristics of the issues or repositories? 3. How sensitive are the Code Agents' performances to different hyperparameters? Did you explore variations in the number of interaction rounds or temperature settings? If so, what insights can you share about optimizing Code Agents specifically for test generation? 4. The paper doesn't compare AI-generated tests to human-generated ones. Have you considered conducting even a small-scale comparison with professional developers? This could provide valuable context for understanding the practical impact of your methods. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: 1. Limited Statistical Analysis: Due to computational constraints, the authors were unable to perform a full statistical analysis. This limits the confidence in the reported results and makes it difficult to assess the robustness and reproducibility of the findings. 2. Lack of Detailed Error Analysis: The paper doesn't provide an in-depth analysis of the types of errors made by different methods. This limits understanding of where and why the methods fail, which could be crucial for further improvements. 3. Python-Centric Approach: The study focuses exclusively on Python, which may limit the generalizability of the results to other programming languages. 4. Absence of Human Baseline: There's no comparison between AI-generated tests and human-generated tests, making it challenging to assess the practical impact and efficiency gains of these methods in real-world scenarios. 5. 
Limited Hyperparameter Exploration: The paper doesn't thoroughly explore the impact of different hyperparameters on the performance of Code Agents, potentially missing opportunities for optimization. 6. Lack of Runtime Performance Analysis: While computational costs are mentioned, there's no detailed analysis of execution times for different methods, making it difficult to assess practical trade-offs. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
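As a side note on the fail-to-pass (F2P) metric referenced in this review, the criterion can be sketched as follows; this is our own toy illustration of the idea (a test counts as F2P if it fails on the buggy code and passes once the golden patch is applied), not the benchmark's actual execution harness.

```python
def run_test(test, codebase):
    """Stand-in for executing `test` against `codebase` in a sandbox."""
    return test(codebase)

def is_fail_to_pass(test, buggy_code, patched_code):
    # Fails before the fix, passes after: the test reproduces the issue.
    return (not run_test(test, buggy_code)) and run_test(test, patched_code)

# Toy example: the "bug" is that add() subtracts instead of adding.
buggy = {'add': lambda a, b: a - b}
patched = {'add': lambda a, b: a + b}
reproducing_test = lambda code: code['add'](2, 3) == 5

assert is_fail_to_pass(reproducing_test, buggy, patched)
```

A test that passes on both versions (or fails on both) is not counted, which is what distinguishes F2P from plain coverage-style metrics.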
Rebuttal 1: Rebuttal: We thank Reviewer pxq3 for their detailed review, insightful questions, and helpful suggestions. We are happy to hear they appreciate the originality, quality, clarity, and significance of our work and in particular our comprehensive and rigorous evaluation across methods and models. Below, we address their remaining questions. **Can you conduct additional experiments to establish statistical significance and/or report confidence intervals?** Please see the main response for a detailed answer. In short, we have added experiments and statistical analysis demonstrating a high statistical significance of our results. **Can you conduct an error analysis, categorizing common errors made by different methods, and provide qualitative examples? Do these error types correlate with issue complexity or repository characteristics?** We have conducted a detailed analysis of common errors for our best-performing method SWE-Agent+ and will gladly add the findings to the paper for the camera-ready version together with concrete examples of their occurrence. We find the most common issues of the agent to be adding tests that do not reproduce the issue (either due to unrelated failure or non-failure) and incorrectly classifying the result as relevant, getting stuck in a loop after making inapplicable edits or encountering syntax errors, failing to execute the test environment after test creation, and adding tests with missing fixtures or missing indentation, rendering the test function non-executable. Further, we want to point the reviewer to Section 5.4, where we already investigate correlations of success with repositories, issue description length, and code repair success. **Can you conduct an ablation study on key hyperparameters of the employed code agents?** In the attached PDF, we provide ablation studies on the temperature used for decoding, the number of interaction rounds for various agents, and the Libro sample size (see Table 3 and Figure 1a,b). 
In agreement with prior work, we consistently observe the best performance for greedy decoding (t=0). We further observe that increasing interaction rounds improves performance until saturation at 5-10 iterations (we use 20 as a default), with the only exception being AutoCodeRover, which still gains performance up to the maximum of 20 iterations we consider. Similarly, Libro’s performance improves as more samples are considered, though it also saturates around the 5 samples we use by default. We will include these ablations confirming the choices of our hyperparameters in the updated appendix. **Can you conduct a small-scale study with professional developers to establish a baseline in terms of quality, coverage, and time efficiency?** We believe a human study of statistically significant sample size to be infeasible as a baseline, given that many of the issues at hand are complex and challenging to understand, with the original underlying issues taking up to a year to resolve. Further, as all ground truth tests were written by humans, we believe that most professional developers should be able to solve this task given sufficient (substantial) time. We want to highlight that such human baselines are uncommon for code benchmarks, even at the much simpler function-synthesis level, due to the above-mentioned substantial time requirements. Finally, we believe that establishing such human baselines, possibly across skill levels, and focusing on simpler benchmarks could be interesting work in its own right, and we leave it to future work. **Can you discuss the applicability of your approach beyond Python?** Please see our answer in the main response. **Can you include an analysis of execution times for different methods?** For all LLMs we consider, part of the execution time is directly related to the number of tokens digested and generated (see Tables 7 and 8 of the main submission).
For methods that require interaction with an execution environment, however, time is usually dominated by setting up such an environment in a clean and reproducible manner (i.e. dockerized). We list results on execution times below and observe that all methods except zero-shot inference take between 3-5 minutes per instance, where we can observe a small trade-off due to many-turn interactions in Code Agents versus single-shot execution in LIBRO. Given these small differences, however, we believe execution time to be of limited practical relevance, as issues can be processed in the background, similar to continuous integration, in response to raised user issues.

| Method | Execution Time |
|---|---|
| ZeroShotPlus | 12.6s |
| LIBRO | 2m53s |
| SWE-Agent | 3m42s |
| SWE-Agent+ | 4m25s |
| AutoCodeRover | 5m1s |

**Conclusion** We hope to have been able to address the reviewer's concerns, in particular regarding the statistical significance and hyperparameter sensitivity of our approach. We remain happy to answer their follow-up questions and look forward to their reply.
Summary: This paper focuses on automatic test generation using large language models (LLMs) and code agents. The authors introduce a benchmark called SWT-BENCH, which aims to analyze the performance of LLMs and agents in generating unit tests given a task description. The key contributions include: 1) Creating the SWT-BENCH testing benchmark, which contains over 1700 samples from real-world GitHub issues. 2) Benchmarking various LLMs and agent systems. This includes evaluating direct LLM prompting, existing test generation tools (e.g., LIBRO), and adapted code agents (e.g., SWE-AGENT, AUTOCODEROVER). 3) Introducing a new format for LLM-based code editing. Strengths: 1) LLMs are stochastic black boxes by nature, and building experimental testbeds is crucial to advance the field. They can provide training data for the models as well as relevant signals for improvements. I think this benchmark is a very valuable contribution to the field as it will enable more algorithmic and agentic advances on the topic. 2) This benchmark does a good job at testing several methods such as LLMs, agents, and modifications to existing agentic systems. 3) The performance analysis of various methods is quite detailed, and the correlation or lack thereof between methods is very interesting. Weaknesses: 1) The scope of the dataset could be expanded further. In its current form, it only addresses Python codebases with associated GitHub PR issues. The distribution of testing problems encountered by real-world software engineers is likely quite different from those represented in this testbed. 2) A private test set that is not part of the publicly available training data would strengthen the claims that could be made when improvements are observed on this benchmark. Technical Quality: 4 Clarity: 4 Questions for Authors: 1) Have you looked into leveraging the failure modes of the generated test cases to improve code repair or code generation tasks? 
2) LLMs for code generation seem to be very stochastic by nature. You mention that the failure modes on the benchmark vary depending on the underlying method used. To what extent do you think this variation is explained simply by the stochasticity of the method? For example, if you ran swe-bench 5 times, would that variation disappear? 3) How do you think your method would generalize to a private codebase that isn't available online? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The contamination problem with LLMs is, in my opinion, very important and under-discussed. Due to GitHub data being public, a lot of the data already exists in the training data for LLMs, which can render some conclusions less impressive. This is especially true in the code repair and test generation areas. I think it would be very helpful to include some analysis of the contamination problem. This could be done by using a held-out dataset in a similar way to how the benchmark below does it: https://livecodebench.github.io/ Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer Ky1F for their detailed review, insightful questions, and helpful suggestions. We are happy to hear they appreciate the potential impact of our work as well as the extensiveness of our empirical evaluation and its analysis. Below we address their remaining questions. **Can the scope of the benchmark be expanded to address other repositories and programming languages?** Our approach can be applied to any programming language and can be easily extended to other repositories, including private projects if GitHub issues and pull requests are used. However, setting up the corresponding execution environments with historically correct dependency versions and parsing of all test results requires some manual effort. Given the already substantial cost of evaluating the whole benchmark, we decide to leave this to future work targeting specific languages or repositories (please also see our answer in the main reply). Further, we want to highlight that most currently used benchmarks such as HumanEval ([1], 164 instances) and the concurrent Software Test benchmark TestEval ([2], 210 instances) focus on much smaller sets of standalone functions in one or a few languages, which are much less representative of real-world issues than the popular GitHub repositories our benchmark is built on. **Can you add a private test/holdout set to avoid contamination?** Having a fully private holdout set is challenging for agent evaluation, as this would require the party hosting this benchmark to execute submitted methods, which can incur substantial costs. Alternatively, the issue descriptions and codebase details would need to be shared with submitters, which would defeat the purpose of a private holdout set. Besides these practical challenges, the code underlying any such holdout set would still be available online, making it challenging to guarantee that it was not used for training.
A previously entirely private repository, circumventing the latter issue, is difficult to obtain. Sticking to online code, a rolling benchmark set of recent instances could be used, similar to LiveCodeBench. However, this has the issue that reported numbers cannot be compared directly, and rerunning all methods on the current version would be necessary when evaluating the benchmark. We thus leave this variation to future work. **Can the failure modes of the test generation be leveraged to improve code repair?** We believe that test generation provides a valuable alternative perspective on the capabilities of Code Agents and helps drive developments in the field. It can, for example, unveil repositories where Agents struggle to leverage even the existing tests, highlighting the need for further improvements in Agent tooling. We find several such cases, especially in the repository django, where executing the test suite itself is non-trivial but would benefit Code Agents for validating their changes. Even when test suites are executed, we find that the model often wrongly considers unrelated failures as related to the given issue. Other failures, such as getting stuck in editing loops, seem universal, and their resolution would benefit both code repair and test generation. We leave a detailed exploration of these directions for future work. **To what extent does the stochasticity of LLMs explain the variance in failure modes and performance between methods?** We first want to highlight that we use greedy decoding for most methods (except Libro and pass@k), which is deterministic up to floating point errors. Further, since the submission, more powerful and cheaper models have been released, allowing us to investigate the variance across multiple runs with GPT4o-mini. There, we obtain the results shown in Table 3 of the attached PDF, showing 95% confidence intervals obtained with 5 runs for a range of temperatures.
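As an illustration, such a 95% confidence interval over 5 runs can be computed along these lines (a plain-Python sketch with made-up Fail-to-Pass rates, not the actual measurements from the PDF):

```python
import math
import statistics

def ci95_five_runs(values):
    # Two-sided 95% CI for the mean of n = 5 runs, using the Student-t
    # critical value for n - 1 = 4 degrees of freedom (t_{0.975,4} ≈ 2.776).
    assert len(values) == 5
    m = statistics.mean(values)
    half = 2.776 * statistics.stdev(values) / math.sqrt(len(values))
    return m - half, m + half

# Illustrative Fail-to-Pass rates from 5 repeated runs at one temperature.
lo, hi = ci95_five_runs([0.19, 0.20, 0.19, 0.21, 0.20])
```

With more runs, the critical value would have to be taken from the t-distribution table for the corresponding degrees of freedom (or computed with `scipy.stats.t.ppf`).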
We observe very low variance, which is interestingly dominated by the flakiness of tests (despite execution in the same Docker container), present even for greedy decoding (temperature 0). **Can you comment on the generalizability of your results to private codebases that are not available online?** Based on manual inspections of agent traces, we find that models typically don’t show signs of memorization. In a few cases, the model tries to access files that do not exist, pointing to either hallucinations or outdated memorization. Overall, we observe that models make use of the provided tooling to navigate and explore the code base. Therefore, we conjecture that our results should transfer well to private codebases. We further confirm this by comparing the performance of ZeroShotPlus on all instances created after the training data cutoff of GPT4 to a random subset of instances created before. We observe no statistically significant difference (p-value of 37% using Student's t-test). **Conclusion** We hope we were able to address all of the reviewer's remaining questions and remain happy to answer any follow-up questions they might have. **References** [1] OpenAI 2021, Evaluating Large Language Models Trained on Code [2] Wang et al. 2024, TESTEVAL: Benchmarking Large Language Models for Test Case Generation
Summary: This paper re-purposes SWE-Bench, a previous benchmark on repository-level code generation, to SWT-Bench, a benchmark for test generation, by asking LLMs to generate tests to reproduce the issues for various GitHub repos and seeing if such tests can capture the bugs before the gold-standard patches are applied and whether the generated tests will pass after the patches are applied (i.e., Fail -> Pass). Experiments are conducted with multiple baselines and results show that SWE-Agent, the previously proposed method for code repair, achieves the best performance for test generation. Strengths: S1. This paper targets test generation, which is a relatively under-explored area and could be useful for improving popular coding agents; S2. This work thoroughly studies the relation between the tasks of code repair and test generation, and shows how we can turn a code repair dataset into a test generation dataset using SWE-Bench -> SWT-Bench as an example; S3. The designed metrics to evaluate the generated tests (i.e., Fail-to-Pass rate, Change Coverage, and Patch Applicability) are quite interesting and reasonable, which could be useful for further work on automatic test generation evaluation. Weaknesses: I think the main weakness of this paper is over-claiming. While it is a clever idea to re-purpose code repair benchmarks such as SWE-Bench into a test generation benchmark, there are quite a lot of limitations of this method for the conclusions on the resulting dataset to claim "state of the art software tester". The main purpose of software testing is not to reproduce issues, but, given the specifications, to craft inputs and constraints on the outputs to make sure that the code under test has the expected behavior for different inputs. To this end, I believe there is a large gap between the ability that SWT-Bench is testing and the actual test generation ability needed for the job of software testers.
To make such a claim, I believe at least a few software testing benchmarks (e.g., https://swtesting.techconf.org) need to be used and the results to be reported. Technical Quality: 3 Clarity: 3 Questions for Authors: Please respond to the concerns in the "Weakness" section. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer i5fN for their review and valuable questions. We are happy to hear they consider the problem we study important and under-explored, our work thorough and interesting, and our results impactful. Below, we address their remaining concern. **Is your paper over-claiming and implying that SWT-Bench measures all abilities of human software testers?** While our title may be placative, it only implies that Code Agents perform better than other automated systems on the studied task. In this case, this is the task of reproducing an issue from a description in the form of a test. We fully agree with the reviewer that there is a big gap between the skills SWT-Bench is testing and the full spectrum of abilities required by human software testers. We want to highlight that we do not make any such claims anywhere in our paper. Instead, we explicitly focus on reproducing tests (see e.g. the Conclusion (Line 408) and the Introduction (Line 35)). We will make it even more explicit in the revision that reproducing issues is only one of many software testing skills and the only one we measure. **How important is the measured capability of reproducing issues?** While reproducing a described issue or testing a specific functionality is certainly not the sole purpose of software testing, we firmly believe it is a very important one. In particular, it is an aspect of software testing that is difficult to solve without powerful NLP processors such as LLMs. The task of issue reproduction requires translating natural language descriptions of these issues/intended functionalities into formalized definitions of the desired behavior in the form of unit tests. This is in stark contrast to tasks such as increasing coverage of a given (subset of a) codebase or fuzzing, where the target is already formally defined. Finally, our approach permits test generation for test-driven development, as it can generate tests for unimplemented functionality, e.g.
when the raised user issue is a feature request. We thus find it important and promising to study this (relatively under-explored) aspect of software testing. **Conclusions** We hope to have been able to address the reviewer’s concerns regarding the claims made by our paper and would like to respectfully ask them to reconsider their evaluation given their own positive assessment of our work. We are happy to answer follow-up questions and would greatly appreciate it if the reviewer could highlight specific sections where they believe we overclaim. --- Rebuttal 2: Title: Thanks for the response Comment: I'd like to thank the authors for the response. > While our title may be placative, it only implies that Code Agents perform better than other automated systems on the studied task. I think this is where the major discrepancy is: the studied task is **GitHub issue reproduction**, which is quite far from what people would normally think of as the work of **software testers**. Thus I still think the title is over-claiming and somewhat misleading, and I'm afraid I could not give a higher score for this work. Respectfully, Reviewer i5fN --- Rebuttal 3: Title: Reply to Reply of i5fN Comment: We thank the reviewer for their response and for engaging in the discussion. We are happy to modify the title of our submission (in accordance with the submission guidelines) to follow the reviewer’s suggestion and avoid over-claiming. We propose to adjust the title to **Can Code Agents Reproduce Real-World GitHub Issues?**. As promised in the initial rebuttal, we will also revise the text to make the scope of our work more explicit. We are happy to answer any follow-up questions or suggestions they might have.
Rebuttal 1: Rebuttal: We thank all reviewers for their detailed, insightful, and overwhelmingly positive reviews. We are encouraged to see that the reviewers consider the problem we investigate under-studied and important (PhkB, i5fN, pxq3, ueFW), our benchmark impactful (PhkB, i5fN, Ky1F, pxq3, ueFW), our empirical evaluation and its analysis exhaustive and interesting (PhkB, i5fN, Ky1F, pxq3), and our exposition clear and well written (PhkB, pxq3). Below, we address remaining shared questions, before going into more detail in reviewer-specific responses. Since the original submission of this work, we have fully dockerized our evaluation, improved the parsing of test results to cover additional edge cases, and conducted experiments on more models (GPT4o-mini, Claude 3.5 Sonnet and Mistral Large 2, all released after the submission), methods (Aider), and ablation settings (sampling temperature, agent interaction rounds, multiple runs). Please see the attached PDF for these updated and extended results, referred to in some of our responses. **Statistical significance of results (Ky1F, pxq3)** To confirm the statistical significance of our results, we have conducted a dependent t-test for paired samples to compute the statistical significance of SWE-Agent+ having higher performance than LIBRO. We find that SWE-Agent+ has higher performance (Fail-to-Pass rate) with a p-value of $7 \times 10^{-6}$, indicating strong statistical significance of this main result. We further want to highlight that most of the considered approaches use greedy decoding and are thus deterministic up to floating point errors. Finally, we conduct additional experiments with GPT4o-mini using sampling at a range of temperatures and consistently find small 95% confidence intervals of around $\pm$0.2% over 5 runs of ZeroShotPlus, demonstrating only small sensitivity to inference randomness even when not using greedy decoding. Further details are provided in Table 3 of the attached PDF. 
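The dependent t-test for paired samples used above can be sketched in plain Python on hypothetical per-instance success indicators (the data here is illustrative, not the actual benchmark results):

```python
import math

def paired_t_statistic(a, b):
    """t statistic of a dependent t-test for paired samples.

    a, b: per-instance success indicators (1 = resolved) of two
    methods evaluated on the same benchmark instances.
    """
    n = len(a)
    d = [x - y for x, y in zip(a, b)]          # per-instance differences
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Toy example: method A solves strictly more of the shared instances.
t = paired_t_statistic([1, 1, 1, 0, 1, 0, 1, 1],
                       [1, 0, 0, 0, 1, 0, 0, 1])
```

In practice one would typically call `scipy.stats.ttest_rel`, which also returns the p-value directly.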
We believe these results further strengthen our conclusions and are happy to include them in the main paper. **Relevance of Python results and generalization to other languages (Ky1F, pxq3, ueFW)** First, we want to highlight that Python is an extremely popular language, often ranked #1 by a wide margin in corresponding surveys [1][2]. Thus, we believe that our results on Python are already highly relevant. Further, our approach to designing a testing benchmark can be applied to any programming language where GitHub repositories with suitable issues and pull requests can be sourced. However, setting up the corresponding execution environments with historically correct dependency versions and test result parsing requires some manual effort. Given the already substantial cost of evaluating the whole benchmark of 1700 instances, we decide to leave this to future work targeting specific languages or issue types. **What is the effect of possible data contamination on SWT-Bench and how can it be addressed? (Ky1F, ueFW)** As SWT-Bench is based on historic GitHub issues, these could be included in the pre-training data of the LLMs underlying the methods we investigate. To assess this effect, we conducted an experiment comparing the performance (of ZeroShotPlus) on all issues created after the training data cutoff of GPT4 (April 2023) to an equally sized random subset of instances created before the cutoff, and include the results in Table 4 of the attached PDF. We observe that for the issues created before the cutoff, only one more sample is solved compared to those created after, leading to no statistically significant difference in performance (p-value of 37% using Student's t-test). This agrees well with the findings on SWE-Bench. We are happy to include a corresponding discussion and these results in the revised version of our paper.
We further note that all methods we investigate should benefit from memorization to a similar extent and hence contamination should not affect their ranking and our conclusion that Code Agents perform surprisingly well. One approach to address this contamination issue is to create a rolling version of SWT-Bench, based only on the most recent issues. However, this comes at the cost of direct comparability of results and increased cost for reproducing results for all baselines on a changing evaluation set. **Conclusion** We hope to have been able to address the reviewers’ questions and look forward to the discussion period. **References** [1] TIOBE Index for August 2024, [https://www.tiobe.com/tiobe-index/](https://web.archive.org/web/20240807025036/https://www.tiobe.com/tiobe-index/) [2] PYPL PopularitY of Programming Language, [https://pypl.github.io/PYPL.html](https://web.archive.org/web/20240806100838/https://pypl.github.io/PYPL.html) Pdf: /pdf/58e1db872b379faeb9d00f3d7b3cb11332fca046.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: Authors present a new benchmark for the SWE tasks of generating tests corresponding to an issue in a codebase with unit tests. They propose to repurpose the SWE-bench dataset for this task. They also evaluate LLM-based prompting and agentic approaches (SWE-Agent and AutoCodeRover) with metrics like code change coverage and fail to pass rate that are formally defined and measured. Strengths: - This work focuses on generating test cases corresponding to user issues with a codebase with existing unit tests. This problem has not been studied in such detail to my knowledge. Authors have presented this problem with sufficient motivation. - While authors adapt an existing code fix benchmark (SWE-bench) for their task (SWT-bench), their contributions in the form of task formulation, corresponding new metrics and exhaustive analysis of the benchmark could be key to future research on this topic. - Authors also adapt popular LLM approaches like SWE-Agent and ACR for their benchmark and show their effectiveness at generating tests. The demonstrated impact automated test generation has on automated code fixing is particularly promising. Their results include exhaustive analysis of the performance of different methods on the proposed benchmark. - The paper is very well written and clear to follow. Weaknesses: - Arguably the most significant impact of automatic test generation would be on automated code repair/generation. While authors describe this aspect of their work (Lines 328-334), providing more details and large-scale experiments could significantly strengthen the perceived utility of this work. - CodeT by Chen et al https://arxiv.org/abs/2207.10397 is a key related work that was missed in the related work. Technical Quality: 3 Clarity: 4 Questions for Authors: - Lines 328-334: You mention the impact on precision and recall; can you share more details of the method used and the final correctness rate with synthetic tests generated by the Agent?
- What are some future directions you would recommend to improve the performance of LLM Agents on SWT-bench and SWE-bench based on your study? A future work recommendation would greatly help the completeness of the paper. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Authors have briefly discussed limitations of their work in the main paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer PhkB for their detailed review, insightful questions, and helpful suggestions. We are happy to hear they appreciate the importance and novelty of the problem we study, the importance of our contributions, the extent of our analysis, and the quality of our exposition. Below we address their remaining questions. **Can you provide more details on how the generated tests improve the precision of code repair agents? And can you conduct more large-scale experiments in this direction?** Using SWE-Agent, we generate tests and code patches independently for all instances. We then run the generated code patches against the generated tests and retain only those code patches where the tests fail before the patch is applied and succeed after. Evaluating the resulting patches against the ground truth tests shows a precision of 47.5%, significantly higher than the precision of unfiltered patches (19.2%). How to best leverage tests directly for code generation rather than simple selection is an active topic of research with many competing approaches. We therefore leave this as a promising future work item. **How does your work relate to CodeT by Chen et al.?** CodeT focuses on selecting the best implementation(s) from a large set of proposals (100) by generating large numbers of tests and using execution agreement as a quality metric to select correct ones. However, while this approach is effective for function-level, HumanEval-style problems, we focus on significantly more challenging repository-level synthesis. There, neither sampling nor executing such large numbers of different tests and implementations is feasible due to the substantial computational and monetary (when using APIs) cost. In particular, ZeroShot sampling of 100 tests and repairs would cost around $15k for one evaluation of SWT-Bench-Lite. Therefore, we do not compare to CodeT directly. We will include a corresponding discussion in our related work section.
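The fail-to-pass filtering described above can be sketched as follows (the `run_tests` callback and data layout are hypothetical, for illustration only):

```python
def filter_patches(candidates, run_tests):
    """Keep only code patches whose generated tests fail on the
    unpatched codebase and pass once the patch is applied.

    candidates: list of (patch, tests) pairs generated independently.
    run_tests(tests, patch): returns True iff all tests pass; patch=None
    means the tests are run against the unpatched codebase.
    """
    return [
        patch
        for patch, tests in candidates
        if not run_tests(tests, patch=None) and run_tests(tests, patch=patch)
    ]
```

The retained patches are then the ones scored against the ground truth tests to obtain the reported precision.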
**Can you discuss future directions for improving the performance of LLM Agents on SWT-bench and SWE-bench?** We recommend that future work further explore the relationship between patch generation and test generation to improve the performance for both tasks. CodeT [1] is an interesting prior work in this direction. However, it relies on a large number of candidate implementations and tests to find agreements, which is too costly for real-world repositories. Therefore, future work might consider developing more cost-effective approaches. We have further conducted a detailed analysis of common errors for our best-performing method SWE-Agent+ and will gladly add the findings to the paper for the camera-ready version together with concrete examples of their occurrence. We find the most common issues of the agent to be adding passing tests that do not reproduce the issue, getting stuck in a loop after making inapplicable edits or encountering syntax errors, failing to execute the test environment during test creation, and adding tests with syntax errors. Future work may consider addressing these issues through specialized modules or additional monitoring. We are happy to include this analysis, discussion, and recommendations for future work in the camera-ready version of the submission. **Conclusion** We hope to have been able to address all of the reviewer's questions and remain happy to answer any follow-up questions they might have. **References** [1] Chen et al. 2022, CodeT: Code Generation with Generated Tests
Coded Computing for Resilient Distributed Computing: A Learning-Theoretic Framework
Accept (poster)
Summary: This paper focuses on coded computing for machine learning and derives loss-minimizing encoding and decoding functions. Strengths: Please see the “Questions” section. Weaknesses: Please see the “Questions” section. Technical Quality: 3 Clarity: 3 Questions for Authors: My review is as follows: Major: The biggest thing that confused me when reading the paper was the use case for this kind of method. As far as I understand, this method is specifically designed for inference. Inference of computer vision models such as the ones used in the experiments of this paper (VGG, VIT) is actually very fast in practice. Even mobile phones can run inference on these models within a few tens or hundreds of milliseconds. What application would require running many inferences for these models in parallel in a timely manner? I understand that the results are interesting from a theoretical point of view. It’s nice that encoding and decoding functions can be derived theoretically. But from a practical perspective, I cannot think of a scenario where one would need to make their inference straggler-resilient because it is typically already very low latency. (Furthermore, if an even lower latency is needed, then quantization is usually the direction to explore.) Please let me know if I’m missing something here. Perhaps straggler-resilient computing makes more sense for large-scale computing (e.g. large-model training) where you have a large-scale computation that is distributed onto a compute cluster. Minor: Some of the related work on coded computing has the property that if a given number of nodes return their solution, the exact output can be recovered, and if not, an approximate solution can be generated. As far as I understand, the proposed method cannot recover the exact result even if all nodes return their result successfully. Please correct me if I’m wrong about this point. I wonder if this method could be modified to have the exact recovery property.
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We express our gratitude to the reviewer for their valuable comments and feedback. In the above, we have offered a general response to the reviewers, addressing some of their common concerns. Here, we provide additional responses to the remaining questions raised. > Major: >The biggest thing that confused me when reading the paper was the use case for this kind of method. As far as I understand, this method is specifically designed for inference. Inference of computer vision models such as the ones used in the experiments of this paper (VGG, VIT) is actually very fast in practice. Even mobile phones can run inference on these models within a few tens or hundreds of milliseconds. What application would require to run many inferences for these models in parallel in a timely manner? > I understand that the results are interesting from a theoretical point of view. It’s nice that encoding and decoding functions can be derived theoretically. But from a practical perspective, I cannot think of a scenario where one would need to make their inference straggler-resilient because it is typically already very low latency. (Furthermore, if an even lower latency is needed, then quantization is usually the direction to explore.) Please let me know if I’m missing something here. Perhaps, straggler resilient computing makes more sense for large-scale computing (e.g. large-model training) where you have a large-scale computation that is distributed onto a compute cluster. **Answer:** We appreciate the reviewer’s valuable feedback. We would like to answer this as follows: 1. Machine Learning as a Service (MLaaS) is a new paradigm where resource-constrained clients outsource their computationally expensive tasks to powerful clouds such as Amazon, Microsoft, and Google [1]. Consequently, prediction serving systems in these powerful clouds host complex machine learning models and respond to a vast number of inference queries worldwide with low latency [1].
However, the limited number of worker nodes, the large volume of inference queries, and the ever-increasing complexity of models make this task even more challenging. In this context, addressing the straggler issue and making the entire system straggler-resistant is crucial. As shown in [2], stragglers are inevitable even in simple tasks like matrix multiplication. Therefore, straggler resiliency is a vital feature of any prediction serving system. 2. Inference is just one example of the proposed framework's applications. As mentioned in the paper, the framework is designed for general computing functions. Notably, inference is a crucial task, as it has been used as a benchmark in other works, such as [1]. By utilizing the gradient of a model's loss with respect to its parameters as the computing function, $f(x) = \nabla_\theta L(x; \theta)$, where $L$ is the loss of a large pre-trained neural network such as a large language model, the proposed framework can be applied to fine-tuning large neural networks as well. We consider this case as a potential direction for our future work. [1] Soleymani, M., et al. "ApproxIFER: A model-agnostic approach to resilient and robust prediction serving systems." AAAI 2022. [2] Gupta, V., et al. "Oversketch: Approximate matrix multiplication for the cloud." Big Data 2018. > Minor: Some of the related work on coded computing has the property that if a given number of nodes return their solution, the exact output can be recovered and if not, an approximate solution can be generated. As far as I understand, the proposed method cannot recover the exact result even if all nodes return their result successfully. Please correct me if I’m wrong about this point. I wonder if this method could be modified to have the exact recovery property. **Answer:** We appreciate the reviewer’s valuable feedback. 
To the best of our knowledge, for **general computation**, there is no solution that has the property of exact recovery for a small number of stragglers and approximate results for a larger number of stragglers. On the other hand, for specific structured computations such as polynomial computation, Lagrange coded computing [1] can give us exact output if $(K-1)\cdot \operatorname{deg}(f) + S < N$. However, the decoding algorithm of Lagrange coded computing does not work in cases where the above condition does not hold. If we use a heuristic solution in which we fit a lower-degree polynomial to the existing results, the approximation is not acceptable (see Figure 3 in the attachment). Additionally, if the function that we want to compute belongs to the space of functions generated through the smoothing spline basis (refer to Equation (5) in the paper), then exact recovery is also feasible in our proposed scheme with appropriate hyper-parameters and an appropriate condition on the number of stragglers. [1] Yu, Q., et al. "Lagrange coded computing: Optimal design for resiliency, security, and privacy." PMLR 2019. --- Rebuttal 2: Title: Any feedback on the rebuttal? Comment: Dear Reviewer a2Af, As we approach the conclusion of the author-reviewer discussion phase, we wish to gently remind you that we remain available to address any additional questions or concerns you may have before finalizing your score. Best regards, The Authors --- Rebuttal Comment 2.1: Comment: Thank you for the rebuttal. I still think that if the stragglers were a big issue in inference serving, then the proposed method would be an interesting solution. However, I don't really see the straggling nodes to be a real issue in practice for inference serving. I agree with the authors' point that the model complexity keeps growing. We should keep in mind that the largest models are typically genAI models such as LLMs or image generation models (e.g. stable diffusion). 
LLMs generate tokens autoregressively and stable diffusion has an iterative denoising component. --- Rebuttal 3: Comment: Thank you for your valuable feedback. The importance of stragglers in distributed computing was raised in a seminal paper by Google [1]. The proposed framework introduces a straggler-resistant scheme for general computing, which is not limited to inference. For numerical evaluation, we chose machine learning inference as a benchmark, as it has been used in related papers such as [2]. [1] Dean, J., et al. "The tail at scale." Communications of the ACM 2013. [2] Soleymani, M., et al. "ApproxIFER: A model-agnostic approach to resilient and robust prediction serving systems." AAAI 2022.
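As an aside on why coded redundancy helps with stragglers at all, the core idea can be shown in a minimal toy (purely illustrative; this is not the paper's LeTCC scheme, and the linear computing function and single parity input are our own simplifying assumptions):

```python
def run_coded(f, x1, x2, straggler):
    """Toy coded computing with K=2 inputs and N=3 workers.
    For a linear f (f(a+b) = f(a) + f(b)), any single straggler
    can be tolerated: the parity result fills in the missing one."""
    coded_inputs = [x1, x2, x1 + x2]          # encoder adds one parity input
    results = [f(c) for c in coded_inputs]    # each worker computes f
    results[straggler] = None                 # one worker never returns
    y0, y1, y2 = results
    # decoder: reconstruct f(x1), f(x2) from the two surviving results
    if y0 is None:
        return y2 - y1, y1
    if y1 is None:
        return y0, y2 - y0
    return y0, y1
```

For instance, with the linear map `f(t) = 3*t`, every choice of straggler position yields the same pair `(f(x1), f(x2))`.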
Summary: The authors consider the problem of improving the reliability of distributed computing architectures by encoding the input data before it is processed by worker machines such that a good approximation of the desired output can be reconstructed using only a subset of the workers’ outputs. They utilize learning theory to determine a regularized objective function for the decoder as well as a loss function for the overall system, and provide solutions for the encoder and decoder which optimize their derived upper bounds on the loss function. The objective function is based on kernel ridge regression, which leads to a second-order smoothing spline solution for the decoder. Both the noiseless and the noisy cases are considered, and the proposed method is compared with the existing approach of Berrut coded computing in terms of its convergence rate in the noiseless setting and its empirical performance. To evaluate the empirical performance, distributed inference is performed for deep learning models of varying sizes and tasks with varying sizes for the output vector. The experimental results show that LETCC consistently achieves a lower estimation error on the model outputs. Strengths: # Originality The main contribution which differentiates the LeTCC method from prior works like BACC appears to be the introduction of regularization which causes the optimal solution to be based on smoothing splines rather than interpolation. Existing papers which have considered optimizing cost functions only focus on learning how to generate parity data points. Overall, this paper brings a novel method to the table and adequately cites related works. # Quality Although I lack experience with some of the mathematical tools used, to my knowledge there is nothing incorrect about the technical results. The experimental methodology seems to allow for a fair comparison between LeTCC and BACC for the use case of image classification with a deep learning model. 
In terms of reconstruction accuracy, the experimental results validate that LeTCC is superior to BACC, in some cases by a significant margin. # Clarity The main content of the paper is laid out clearly with some small typos but no major issues. # Significance The method for coded computing introduced in this paper represents an advancement in the level of accuracy that can be achieved when the function applied by the worker nodes is a complex deep learning model. The authors show that this advancement can be achieved by adapting the encoder and decoder to the function through the tuning of the regularization weight, which is an interesting idea. Weaknesses: # Originality This paper distinguishes itself sufficiently from existing works, so I see no issues in terms of originality. # Quality LeTCC is proven to have a reconstruction error that scales better than BACC in terms of the number of worker nodes, but its scaling in terms of the number of stragglers is not directly compared. Based on Theorem 9 in the BACC paper, it appears that the max error along a single dimension scales as $S^2$. The mean-squared error (MSE) for LeTCC scales as $S^4$, but since the max norm is upper bounded by the Euclidean norm it is unclear how to compare these results. It would be nice to compare the theoretical dependence on the number of stragglers, especially since the experimental results are mixed (in two cases the difference in MSE shrinks as $S$ increases, and in the other case the MSE grows as $S$ increases). # Clarity The authors should explicitly state how many trials were averaged over to produce the plots in Figure 3 to help readers interpret and/or reproduce the results. 
I identified the following typos in the paper: - Line 502: There is a “loc” subscript missing - Line 49: The sentence starting on this line is grammatically incorrect, I presume that the second comma is supposed to be replaced with the word “is” - Figure 3: The title of the top-left plot seems to have the values of N and K reversed # Significance Without more details on the hyperparameter optimization process and the relative computational complexity of fitting the encoder and decoder functions as compared to those of BACC, it is hard to say whether LeTCC is more practically advantageous to use than BACC. It could be that the overhead introduced by hyperparameter tuning outweighs the benefit from reduced reconstruction error in some cases. It would have been helpful to see how tolerant LeTCC is to changes in the smoothing parameters, i.e. how much variation can be allowed while still outperforming BACC. One can also see from Figure 3 that a large difference in the mean-squared error does not consistently translate to a large difference in relative accuracy (at least when deep models are used), which makes it less likely that LeTCC’s performance benefits would be worth the additional overhead. As it currently stands, these issues reduce the practical value of the contribution made by this paper. Technical Quality: 3 Clarity: 4 Questions for Authors: According to Theorem 4, the mean-square error of LeTCC has a much worse scaling in terms of the number of stragglers in the noiseless setting than in the noisy setting. Does this originate from a sacrifice made to improve the scaling with respect to the number of worker nodes, or is there some other cause? Confidence: 2 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: The authors are upfront about the limited scope of their work. 
However, as mentioned in the weaknesses section, the extra overhead introduced by the need for hyperparameter tuning is something that I would have liked to have seen addressed in the paper. As mentioned in the weaknesses section, the comparison of error convergence rates with BACC is also limited. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
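The norm-comparison issue raised in the weaknesses rests on the standard sandwich between the max norm and the Euclidean norm, which only relates the two bounds up to a dimension-dependent factor. A quick numerical sanity check of that inequality (illustrative only, not from the paper):

```python
import math
import random

def check_norm_sandwich(d, trials=200, seed=0):
    """For any error vector e in R^d:
    ||e||_inf <= ||e||_2 <= sqrt(d) * ||e||_inf.
    Hence a bound on the max norm controls the Euclidean norm only up
    to a sqrt(d) factor, which is why an S^2 max-norm rate and an S^4
    squared-norm rate are not directly comparable."""
    rng = random.Random(seed)
    for _ in range(trials):
        e = [rng.uniform(-1, 1) for _ in range(d)]
        linf = max(abs(v) for v in e)
        l2 = math.sqrt(sum(v * v for v in e))
        assert linf <= l2 <= math.sqrt(d) * linf + 1e-12
    return True
```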
Rebuttal 1: Rebuttal: We express our gratitude to the reviewer for their valuable comments and feedback. In the above, we have offered a general response to the reviewers, addressing some of their common concerns. Here, we provide additional responses to the remaining questions raised. > LeTCC is proven to have a reconstruction error that scales better than BACC in terms of the number of worker nodes, but its scaling in terms of the number of stragglers is not directly compared. Based on Theorem 9 in the BACC paper, it appears that the max error along a single dimension scales as $S^2$. The mean-squared error (MSE) for LeTCC scales as $S^4$, but since the max norm is upper bounded by the Euclidean norm it is unclear how to compare these results. It would be nice to compare the theoretical dependence on the number of stragglers, especially since the experimental results are mixed (in two cases the difference in MSE shrinks as $S$ increases, and in the other case the MSE grows as $S$ increases). **Answer:** We thank the reviewer for the detailed question. Note that our proposed solution achieves a factor of $S^4$ for the **squared** $\ell_2$ norm of the error, compared to the factor of $S^2$ for the $\ell_\infty$ norm of the error (equivalently, an $S^4$ factor for the **squared** $\ell_\infty$ norm) in the BACC paper. Thus, when comparing the two schemes, since the squared $\ell_2$ norm is upper bounded by the squared $\ell_\infty$ norm up to a dimension factor, both of them carry an $S^4$ factor in the upper bound on the squared error. Therefore, we cannot conclude that BACC performs better than LeTCC in terms of the scale of the error as a function of $S$. > The authors should explicitly state how many trials were averaged over to produce the plots in figure 3 to help readers interpret and/or reproduce the results. **Answer:** We have provided information on the statistical properties of our experiments, including confidence intervals, in the general rebuttal and Figures 1 and 2 of the attached PDF. 
For a comprehensive overview of our experimental results and statistics, please see the general rebuttal and the attached file. >I identified the following typos in the paper: >> - Line 502: There is a “loc” subscript missing >>- Line 49: The sentence starting on this line is grammatically incorrect, I presume that the second comma is supposed to be replaced with the word “is” >> - Figure 3: The title of the top-left plot seems to have the values of N and K reversed **Answer:** Thank you for pointing out the typos in our paper. We appreciate your attention to detail and have corrected them. > Without more details on the hyperparameter optimization process and the relative computational complexity of fitting the encoder and decoder functions as compared to those of BACC, it is hard to say whether LeTCC is more practically advantageous to use than BACC. It could be that the overhead introduced by hyperparameter tuning outweighs the benefit from reduced reconstruction error in some cases. It would have been helpful to see how tolerant LeTCC is to changes in the smoothing parameters, i.e. how much variation can be allowed while still outperforming BACC. One can also see from Figure 3 that a large difference in the mean-squared error does not consistently translate to a large difference in relative accuracy (at least when deep models are used), which makes it less likely that LeTCC’s performance benefits would be worth the additional overhead. As it currently stands, these issues reduce the practical value of the contribution made by this paper. **Answer:** Please refer to the general rebuttal and attached PDF for comprehensive information on the experiments, including smoothing parameter sensitivity, experimental statistics, and a comparison of the computational complexity of the proposed model. > According to Theorem 4, the mean-square error of LeTCC has a much worse scaling in terms of the number of stragglers in the noiseless setting than in the noisy setting. 
Does this originate from a sacrifice made to improve the scaling with respect to the number of worker nodes, or is there some other cause? **Answer:** Based on Theorem 4, the error convergence rates of the noiseless and noisy settings are $\mathcal{O}(S\cdot(\frac{S}{N})^3)$ and $\mathcal{O}(S\cdot(\frac{S}{N})^\frac{3}{5})$, respectively. We note that, for $S<N$, $S\cdot(\frac{S}{N})^3 < S\cdot(\frac{S}{N})^\frac{3}{5}$, and thus, as we expect, the upper bound on the error of the noiseless case scales better compared to that of the noisy one. It is important to note that considering the variation of these bounds as a function of $S$ without considering the effect of $N$ may lead to a misleading conclusion. >Limitations: The authors are upfront about the limited scope of their work. However, as mentioned in the weaknesses section, the extra overhead introduced by the need for hyperparameter tuning is something that I would have liked to have seen addressed in the paper. As mentioned in the weaknesses section, the comparison of error convergence rates with BACC is also limited. **Answer:** We thank the reviewer for pointing out the limitations of our work. We have already addressed some of them in the general rebuttal, and we will add a dedicated section to the revised version of the paper that discusses the limitations in a more structured way. --- Rebuttal Comment 1.1: Comment: Thank you for your response. Most of my concerns have been addressed, although the nature of the sensitivity experiment which the authors conducted does not exactly explain how bad the performance drop will be if the value of the smoothing parameters is different from the optimal values by a given amount. I raise my score to 7. --- Reply to Comment 1.1.1: Comment: Thank you for your insightful comments and feedback. We will address the complete sensitivity analysis in the revised version.
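The two convergence rates discussed in this thread, $\mathcal{O}(S\cdot(S/N)^3)$ for the noiseless setting and $\mathcal{O}(S\cdot(S/N)^{3/5})$ for the noisy setting, can be compared numerically; a small illustrative check (not from the paper):

```python
def noiseless_rate(S, N):
    # Upper-bound rate quoted for the noiseless setting: S * (S/N)^3
    return S * (S / N) ** 3

def noisy_rate(S, N):
    # Upper-bound rate quoted for the noisy setting: S * (S/N)^(3/5)
    return S * (S / N) ** (3 / 5)
```

Since $S/N < 1$, raising it to the larger exponent gives the smaller value, so the noiseless bound is below the noisy bound for every $1 \le S < N$, matching the claim above.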
Summary: This work proposes a novel learning theory-based framework for coded computing with a focus on distributed machine learning applications. The proposed method sends mixtures of input samples to the worker nodes, which compute the desired results on the mixtures. Encoder and decoder functions are fitted at the master node using the input samples and the results from the workers, respectively. Finally, the decoder function can be used to estimate the output of the computing function for the input samples. It is shown that the loss function (divergence of the estimated output from the true output of the computing function) is upper bounded by the generalization error of the decoder regression and the training error of the encoder regression under some conditions. Experimental evaluations are provided to show the efficacy of the proposed method over the state-of-the-art baseline. Strengths: 1. The proposed method leverages learning theory for coded computing, which seems novel and interesting. 2. Rigorous theoretical analysis is provided, including convergence rate derivation and recoverability analysis. Weaknesses: 1. The experimental section does not look comprehensive enough for the following reasons: (a) Only one baseline is considered. (b) Authors acknowledge in Section 3 that the computational efficiency of the encoder and decoder is a crucial factor. Yet, this was not reported in the numerical section. 2. The computing functions (which are generally pre-trained neural networks) are evaluated on (probably) mixtures of input samples at the worker nodes. Those neural networks were trained on the data distribution. However, the mixture of input samples might not fall on the support of the data distribution. Although for images this does not seem to be an issue, there could be unexpected behavior in general. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Could the authors provide comparison with additional baselines? 2. 
Could the authors report the computational resources needed for fitting and evaluating the encoder and decoder? 3. Could the authors provide a discussion for the weaknesses number 2 explained above? Also, does this framework work (or can it be modified to work) for computing functions that take discrete inputs? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Yes, but it is spread out throughout the manuscript. A separate limitation section/paragraph would be helpful for the readers. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We express our gratitude to the reviewer for their valuable comments and feedback. In the above, we have offered a general response to the reviewers, addressing some of their common concerns. Here, we provide additional responses to the remaining questions raised. > The experimental section does not look comprehensive enough for the following reasons: (a) Only one baseline is considered. (b) Authors acknowledge in Section 3 that the computational efficiency of encoder and decoders is a crucial factor. Yet, this was not reported in the numerical section. >>Could the authors provide comparison with additional baselines? >>Could the authors report the computational resources needed for fitting and evaluating the encoder and decoder? **Answer:** We thank the reviewer for their constructive comment. Although there is only one baseline for general computing approximation, we compared our proposed scheme with Lagrange Coded Computing for polynomial computation. Additionally, we have analyzed the computational complexity of our scheme and compared it with BACC and Lagrange. Please refer to our general response for a detailed discussion. > The computing functions (which are generally pre-trained neural networks) are evaluated on (probably) mixtures of input samples at the worker nodes. Those neural networks were trained on the data distribution. However, the mixture of input samples might not fall on the support of data distribution. Although for images, this does not seem to be an issue, there could be unexpected behavior in general. >>Also, does this framework work (or can it be modified to work) for computing functions that take discrete inputs? **Answer:** We appreciate the reviewer's thoughtful and detailed comment. Firstly, as established in Theorems 1-3, our results are predicated on the assumption of computing function smoothness, which is guaranteed by bounding the maximum norm of its first and second derivatives. 
Consequently, our framework ensures high recovery accuracy only when the computing function exhibits smoothness. If this smoothness assumption is not met, our framework cannot guarantee accurate recovery. Secondly, based on the results of Theorems 1-3 and Corollary 2, we have shown that the optimal encoder is the smoothing spline. It is shown that the smoothing spline operator is asymptotically equivalent to a kernel regression estimator with a Silverman kernel, whose local bandwidth is $\lambda^{\frac{1}{4}}q(t)^{\frac{-1}{4}}$, where $\lambda$ is the smoothing parameter and $q(t)$ is the probability density function of the input data points [1]. This leads to the key observation: * In the smoothing spline, the bandwidth (i.e., the number of data points effectively contributing to the input data of a worker node) and their corresponding weights depend on the input data distribution as well as the smoothing parameter. This property makes our approach more generalizable to various data distributions. If the data distribution is such that a linear combination of input data points may be inappropriate, we can control the number of points involved in the linear combination and their weights by choosing the right smoothing parameter. This ensures that the output of the linear combination remains reasonably close to the data distribution on which the model was trained. In contrast, the Berrut approach has a fixed, bounded bandwidth due to the $\frac{1}{z-\alpha}$ factor in the numerator of the data points coefficient $$u_{enc}(z)=\sum_{i=0}^{K-1} \frac{\frac{(-1)^i}{\left(z-\alpha_i\right)}}{\sum_{j=0}^{K-1} \frac{(-1)^j}{\left(z-\alpha_j\right)}} \mathbf{x}_i$$ Therefore, the problem that the reviewer mentioned will affect BACC more. This observation suggests another perspective on why the proposed solution outperforms BACC. 
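The fixed-bandwidth behavior of the Berrut coefficients described above can be checked directly from the quoted formula; the Chebyshev choice of interpolation points $\alpha_i$ below is our own illustrative assumption:

```python
import math

def berrut_weights(z, alphas):
    """Coefficients w_i(z) of the data points x_i in the Berrut encoder
    u_enc(z) = sum_i w_i(z) * x_i, following the formula above:
    w_i(z) = ((-1)^i / (z - a_i)) / sum_j ((-1)^j / (z - a_j)).
    Assumes z does not coincide with any a_i."""
    terms = [((-1) ** i) / (z - a) for i, a in enumerate(alphas)]
    total = sum(terms)
    return [t / total for t in terms]

# Chebyshev points, one common (illustrative) choice for the alphas
K = 5
alphas = [math.cos((2 * i + 1) * math.pi / (2 * K)) for i in range(K)]
w = berrut_weights(0.1, alphas)   # weights sum to 1 by construction
```

Evaluating `w` shows that the weight magnitudes decay with the distance $|z - \alpha_i|$, i.e. the effective bandwidth is fixed by the $\frac{1}{z-\alpha_i}$ factor rather than adapted to the data distribution.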
Finally, in the case of discrete inputs, if the model exhibits "smoothness" (i.e., it accepts continuous input and has bounded first and second derivatives, ensuring that small changes in the input result in minimal output changes), our proposed scheme will be effective. [1] Silverman, B.W. "Spline smoothing: the equivalent variable kernel method." The Annals of Statistics, 1984. > Limitations: Yes, but it is spread out throughout the manuscript. A separate limitation section/paragraph would be helpful for the readers. **Answer:** We thank the reviewer for the helpful comment. We will add a dedicated section on the limitations of our work in the next version. --- Rebuttal Comment 1.1: Comment: Thank you for the additional experiments, the discussion about the baselines, and question 3. The authors have successfully addressed my comments. Therefore, I raise my score to 6. --- Reply to Comment 1.1.1: Comment: Thank you for your insightful comments.
Summary: The paper deals with coded distributed computing. I need to note that it is a very popular research area now, with a vast number of papers. But the authors are right when they mention that the majority of papers utilize standard algebraic codes (such as Reed-Solomon codes). The main problem of such an approach is that these codes are designed over finite fields, which is not the case for machine learning tasks. The improvements utilize real- or complex-valued codes (e.g. Reed-Solomon codes over real or complex fields), but they face accuracy problems (Vandermonde matrices have bad condition numbers). Moreover, such approaches work only for some particular functions, e.g. polynomial ones. In this paper the authors propose a framework which outperforms Berrut approximate coded computing (BACC), the state-of-the-art coded computing scheme for general computing. Strengths: The main strong points are as follows: - a new framework foundation for coded computing, with a new loss function and its upper bound by decomposition - theoretical analysis and guarantees. Under some assumptions the authors find the optimum encoding and decoding functions and characterize the convergence rate for the expected loss in both noise-free and noisy computations. - numerical analysis, which shows the new framework outperforms Berrut approximate coded computing (BACC), the state-of-the-art coded computing scheme for general computing. Weaknesses: I list the main weaknesses below: - «We develop a new foundation for coded computing, based on the theory of learning, rather than the theory of coding». I would not be so categorical. Coding is a method where you add and then utilize redundancy to deal with errors and erasures. The main advantage of your approach is that you are not using standard finite-field codes. 
- How are the optimal (under some assumptions) encoder and decoder functions in Section 4 related to the functions utilized for the numerical experiments (DNNs)? Is it possible to analyse the performance under the optimal encoder and decoder and compare it to the results from Section 5? - You made the comparison to BACC, which is a framework for general computing. Could you please make a comparison for some particular (e.g. polynomial) problems? I just wonder if the proposed general approach is competitive with e.g. Lagrange coded computing and what you should pay for universality. So my claim is that more scenarios and computing problems should be checked to understand the limitations and applicability of your method. - How flexible is your approach if the system parameters change (the number of worker nodes or the number of stragglers)? - Figure 3 seems to suffer from a lack of statistics (you need more experiments). - You mentioned the Byzantine case. It is known to be a problem for codes over the real field (in the finite-field case the approaches are well-developed). It would be beneficial to briefly explain how you plan to deal with this problem. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are well described. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We express our gratitude to the reviewer for their valuable comments and feedback. In the above, we have offered a general response to the reviewers, addressing some of their common concerns. Here, we provide additional responses to the remaining questions raised. > «We develop a new foundation for coded computing, based on the theory of learning rather than the theory of coding». I would not be so categorical. Coding is a method where you add and then utilize redundancy to deal with errors and erasures. The main advantage of your approach is that you are not using standard finite-field codes. **Answer:** We appreciate the reviewer’s valuable feedback. We would like to clarify our first contribution. As we mentioned in the objective box (line 70): > The main objective of this paper is to develop a new foundation for coded computing, **not solely based on coding theory, but also grounded in learning theory** as well as in the abstract (lines 8 and 9): > we propose a novel foundation for coded computing, **integrating the principles of learning theory**, and developing a new framework that seamlessly adapts with machine learning applications. Therefore, what we mean by "based on learning theory" is integrating a learning-theoretic mindset, on top of coding theory, into the whole framework, including (i) considering the whole system as an end-to-end system, (ii) defining its corresponding loss function, and (iii) deriving the optimum encoder and decoder functions using theories from kernel regression. It does not mean that we discard coding theory and use learning-theoretic approaches instead. Conventional coded computing did not take this whole-system view and was solely based on algebraic coding theory, originally developed for communication, which makes such schemes not generalize well to machine learning applications. We agree that this sentence in Line 86 could cause misunderstanding, and we will change it accordingly. 
> How the optimal (under some assumptions) encoder and decoder functions in section 4 are related to the functions utilized for numerical experiments (DNNs)? Is it possible to analyze the performance under optimal encoder and decoder and compare it to the results from Section 5? **Answer:** In fact, we do use the optimal encoder and decoder functions in the numerical experiments. We proved in Corollary 4 (lines 236-240) that the optimal encoder function is the smoothing spline function, just like the decoder function. As a result, we use smoothing spline functions for our experiments in both encoder and decoder with smoothing parameters $\lambda_{enc}$ and $\lambda_{dec}$ respectively. For more clarification, we will mention this fact directly in our experimental setup section (Section 5). > You made the comparison to BACC, which is a framework for general computing. Could you please make a comparison for some particular (e.g. polynomial) problems? I just wonder if proposed general approach is competitive with e.g. Lagrange computing and what you should pay for universality. So my claim is that more scenarios and computing problems should be checked to understand the limitations and applicability of your method. **Answer:** We have provided further comparisons with Lagrange Coded Computing in the general rebuttal, as well as in Figure 3 of the attached PDF. > How flexible is your approach if the system parameters change (the number of worker nodes or the number of stragglers). **Answer:** We have included a discussion on the sensitivity of the framework's hyper-parameters ($\lambda_{enc}$ and $\lambda_{dec}$) to the number of stragglers in the general rebuttal. > Figure 3 seems to suffer from the lack of statistics (you need more experiments). **Answer:** We have included a detailed description of our experiments and added confidence intervals to the plots. Please refer to the general rebuttal and Figures 1 and 2 in the attached PDF for a comprehensive discussion. 
> You mentioned the Byzantine case. It is known to be a problem for the codes over the real field (in the case of the finite field, the approaches are well-developed). It would be beneficial to briefly explain how you plan to deal with this problem. **Answer:** We appreciate the reviewer's insightful comment. While analyzing the scheme in the presence of Byzantine faults is beyond the scope of this paper, which primarily focuses on straggler resiliency, we acknowledge the importance of this aspect. Similar to [1], which enhances Berrut coded computing with Byzantine robustness using the Berlekamp-Welch (BW) decoding algorithm for Reed-Solomon codes [2], we plan to explore the same approach to incorporate Byzantine robustness into our framework. Still, it needs significant work to design an effective error correction algorithm and develop a theoretical guarantee tailored to our framework. [1] Soleymani, M., et al. "ApproxIFER: A model-agnostic approach to resilient and robust prediction serving systems." AAAI 2022. [2] Blahut, R.E., "Algebraic codes on lines, planes, and curves: an engineering approach." Cambridge University Press 2008.
Rebuttal 1: Rebuttal: # General Response to Reviewers We appreciate the reviewers' constructive feedback. Here, we provide a general response to their common questions. **Experiments.** In our revised version, we present a more comprehensive evaluation by incorporating the statistical properties of our experiments. Specifically, for each number of stragglers $S$, we evaluate LeTCC and BACC using the same input data points $\mathbf{x}_1, \dots, \mathbf{x}_K$ and repeat the experiment $20$ times with different sets of randomly chosen input data. We then plot the average result with a 95% confidence interval, providing a clearer picture of the performance and variability of each method. Figures 1 and 2 in the attached PDF display the average performance with 95% confidence intervals. Figure 1 shows a visible performance gain in both RelAcc and MSE when $\frac{N}{K}$ is small (this is the practically important case, where the system is not over-designed with highly coded redundant computing). On the other hand, when the system is over-designed with a larger $\frac{N}{K}$ ratio, LeTCC shows only a minor improvement over BACC, specifically in relative accuracy. We will include the new experiments (the ones depicted in Figure 1 of the attached PDF) in our next version. **Sensitivity Analysis.** The smoothing parameters for each model exhibit low sensitivity to the number of stragglers (or worker nodes). To determine the optimal smoothing parameters, we employ cross-validation for different values of $\frac{S}{N}$. The following tables display the optimal smoothing parameters for several numbers of stragglers for LeNet5 with $(N, K) = (100, 60)$ and RepVGG with $(N, K) = (60, 20)$, respectively. We will include this discussion in the revised version of the paper. As we can see, the optimum values of $\lambda^*_{enc}$ and $\lambda^*_{dec}$ are not sensitive to the number of stragglers.
**LeNet5, $(N, K) = (100, 60)$:**

|#Stragglers|$\lambda^*_{enc}$|$\lambda^*_{dec}$|
|-|-|-|
|0|$10^{-13}$|$10^{-6}$|
|5|$10^{-13}$|$10^{-6}$|
|10|$10^{-13}$|$10^{-6}$|
|15|$10^{-13}$|$10^{-6}$|
|20|$10^{-13}$|$10^{-6}$|
|25|$10^{-8}$|$10^{-5}$|
|30|$10^{-8}$|$10^{-4}$|
|35|$10^{-8}$|$10^{-4}$|

**RepVGG, $(N, K) = (60, 20)$:**

|#Stragglers|$\lambda^*_{enc}$|$\lambda^*_{dec}$|
|-|-|-|
|0|$10^{-6}$|$10^{-4}$|
|5|$10^{-6}$|$10^{-4}$|
|10|$10^{-5}$|$10^{-4}$|
|15|$10^{-5}$|$10^{-4}$|
|20|$10^{-5}$|$10^{-4}$|
|25|$10^{-5}$|$10^{-4}$|
|30|$10^{-5}$|$10^{-3}$|
|35|$10^{-5}$|$10^{-3}$|

**Other Baselines.** Note that the **only** existing coded computing scheme for general functions is Berrut coded computing [1], which we have compared with our proposed scheme. Other schemes handle specific computations such as polynomial functions [2] and matrix multiplication [3]. As suggested by the reviewers, we compare our proposed scheme with Lagrange coded computing [2], which is designed for polynomial computations:

* **Accuracy of function approximation**: Lagrange coded computing is only applicable to polynomial computing functions [2]. Moreover, at least $(K-1)\times \text{deg}(f)+S+1$ worker nodes are required for recovery [1, 2]; otherwise, the master node cannot recover anything. Finally, Lagrange coded computing is designed for computation over finite fields, and it faces serious instability problems in computation over the real numbers when $(K-1)\times \text{deg}(f)$ is around $10$ or more [1, 4]. We compare the proposed framework (LeTCC) and Lagrange coded computing in Figure 3 of the attached document. Recall that if $N < (K-1)\times \text{deg}(f)+S+1$, Lagrange coded computing does not work. Still, to push the application of Lagrange coded computing to those cases, we can fit a lower-degree polynomial to the existing workers' results to obtain approximate results.
We run LeTCC and Lagrange coded computing on the same set of input data and a fixed polynomial function 20 times and plot the average performance with 95% confidence intervals in Figures 3a, 3b and 3c, 3d. Figures 3a and 3b show the performance of Lagrange coded computing and LeTCC when the degree of the polynomial and the number of data points are small ($\text{deg}(f)=3$ and $K=5$), while Figures 3c and 3d show the performance for a larger polynomial degree and number of data points ($\text{deg}(f)=15$ and $K=10$). As shown in Figures 3a and 3b, Lagrange coded computing gives the exact result for $S\le7$. However, for larger values of $S$, and also in Figures 3c and 3d, the proposed approach, without any parameter tuning, outperforms Lagrange coded computing both in computational stability (low variance) and in accuracy of recovery.

* The **computational complexity** of encoding and decoding in Lagrange coded computing is $\mathcal{O}(K\cdot \log^2(K) \cdot \log\log (K)\cdot d)$ and $\mathcal{O}((N-S)\cdot \log^2(N-S) \cdot \log\log (N-S)\cdot m)$, respectively, where $d$ and $m$ are the input and output dimensions of the computing function $f(\cdot)$ [2]. In contrast, for smoothing splines, the encoding and decoding processes, which involve calculating the fitted coefficients and evaluating new points, have computational complexities of $\mathcal{O}(K\cdot d)$ and $\mathcal{O}((N-S)\cdot m)$, respectively, leveraging the B-spline basis functions [5-7]. Consequently, the computational complexity of the proposed scheme is lower than that of Lagrange coded computing. Note that, based on the BACC paper [1], the computational complexity of the Berrut method for encoding and decoding is the same as that of LeTCC.
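As a toy illustration of the Lagrange recovery threshold discussed above: the master encodes the $K$ inputs with a degree-$(K-1)$ Lagrange polynomial $u$, each worker evaluates the polynomial function $f$ on one coded point, and since $f \circ u$ has degree $(K-1)\times\text{deg}(f)$, any $(K-1)\times\text{deg}(f)+1$ surviving results determine it exactly. The sketch below is our own (scalar inputs, a toy cubic $f$, `scipy.interpolate.lagrange` for the interpolation), not either scheme's actual implementation:

```python
import numpy as np
from scipy.interpolate import lagrange

def f(v):
    # Toy polynomial computing function with deg(f) = 3.
    return v ** 3 - 2.0 * v + 1.0

K, S, deg_f = 5, 2, 3
N = (K - 1) * deg_f + S + 1              # minimum number of workers for exact recovery

rng = np.random.default_rng(1)
data = rng.uniform(-1.0, 1.0, size=K)    # the K raw inputs

alpha = np.linspace(-1.0, 1.0, K)                        # encoding points
beta = np.cos(np.pi * (2 * np.arange(N) + 1) / (2 * N))  # Chebyshev points (stability)

u = lagrange(alpha, data)                # degree K-1 encoder, u(alpha_i) = x_i
results = f(u(beta))                     # worker j returns f(u(beta_j))

# Wait for the fastest N - S workers; f∘u has degree (K-1)*deg_f = N - S - 1,
# so the surviving evaluations pin it down exactly.
alive = np.arange(N - S)                 # placeholder: whichever workers respond
g = lagrange(beta[alive], results[alive])
recovered = g(alpha)                     # f(x_1), ..., f(x_K) up to round-off
```

With $K=5$, $\text{deg}(f)=3$, $S=2$ this requires $N=15$ workers, matching the threshold above; the instability for $(K-1)\times\text{deg}(f)\gtrsim 10$ would show up if the degree were pushed further.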
From an experimental point of view, we compare the total encoding and decoding time (on a single-CPU machine) for the LeTCC and BACC frameworks, as shown in the following table:

| |BACC|LeTCC|
|-|-|-|
|LeNet5, $(N,K)=(100, 20)$|$0.013s \pm 0.002$|$0.007s \pm 0.001$|
|RepVGG, $(N,K)=(60, 20)$|$1.62s \pm 0.18$|$1.59s \pm 0.14$|
|ViT, $(N,K)=(20, 8)$|$1.60s \pm 0.28$|$1.74s \pm 0.29$|

We will include the above discussion in the revised version of the paper. Pdf: /pdf/8bcdd3d00ef4e0973ce501cdafa045fd115073d2.pdf
NeurIPS_2024_submissions_huggingface
2024
Vision Mamba Mender
Accept (poster)
Summary: The paper introduces a novel post-hoc optimization strategy for existing Vision Mamba architectures, termed Vision Mamba Mender, aimed at enhancing the performance of Mamba models in visual recognition tasks. The authors seek to identify and rectify flaws in the Mamba model’s mechanisms from both external and internal hidden state perspectives, proposing a state correlation analysis method. Through this analysis, the authors pinpoint critical modules requiring repair and subsequently design a state correlation constraint method. The authors conduct extensive experiments on several Vision Mamba architectures to validate the efficacy of the proposed optimization strategy. Strengths: 1. The paper proposes a novel and intriguing method to enhance state space models from a post-hoc perspective for Mamba. The state correlation analysis used to identify flaws in Mamba, and the subsequent state correlation constraint employed to rectify these flaws, are both new explorations for this kind of technique. 2. The proposed method for identifying and rectifying flaws in Mamba is systematic to some degree. Moreover, the experiments are clearly designed and well-executed, demonstrating that the proposed method can be applied to mainstream Mamba architectures, further improving their performance. 3. The paper is well-written, with clear language and a well-structured presentation. The background and motivation are clearly articulated and convincing. Each part of the method is presented step-by-step and in detail, allowing readers to effectively grasp the core ideas. 4. The paper also provides some insights into the working mechanisms of Mamba, having the potential to inspire future research on the explanation of Mamba models. Weaknesses: 1. Before reading the methodology section, the description of "certain modules of Mamba" (Lines 65-67) may be confusing to readers. The authors should avoid using such vague terms. 2. 
The results in Table 1 show that the proposed method improves the performance of Mamba by 1% to 5%. For the sake of rigor, the description in Line 71, stating that the performance is "enhanced by 3% to 5%," does not seem very precise. 3. In Lines 224-229, the authors claim that the states $c_n^{(l)}$ and $s_n^{(l)}$ in the shallowest blocks are crucial for influencing the model’s anomalous decisions. However, this cannot be observed from Fig. 4, as $c_n^{(l)}$ and $s_n^{(l)}$ do not appear to vary most significantly from the simple sample (Fig. 4a) to the hard sample (Fig. 4b). 4. In addition to comparing the state correlation scores before and after flaw repair (Fig. 5), the authors should provide more visual comparisons of the state correlation (like Fig. 2 and Fig. 3), which would be more intuitive for the readers. 5. Only analyzing and enhancing state space models from a post-hoc perspective for Mamba may not be enough. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In Eqns. (8) and (10), the two terms are multiplied together rather than averaged. What is the rationale behind this design choice? 2. In Figure 4, why are both the ViM and VMamba structures adopted to obtain these observations? 3. The loss function defined in Eq. (11) seems incorrect. Could the authors clarify this? 4. In Table 1, it appears that the performance of "Internal Flaw Repair" is better than that of "External Flaw Repair." Are the authors aware of this, and do they have any thoughts on why this is the case? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations appear to have been discussed in Section 6 by the authors. Moreover, there is no societal impact on the work performed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer’s recognition of the novelty and interest of our proposed method and the acknowledgment of its contribution to the interpretability research of the Mamba model. Below are our responses to each of your comments. --- > **Q1**: Before reading the methodology section, the description of "certain modules of Mamba" (Lines 65-67) may be confusing to readers. The authors should avoid using such vague terms. > **A1**: We apologize for the confusion caused by this description. Lines 65-67 refer to "certain modules of Mamba," which specifically denotes modules within a Mamba block that exhibit defects after state correlation analysis. For example, the module identified with defects through internal state correlation analysis is $x_{n}^{(l)}$. We will update this section in the paper to clarify this description and avoid any confusion. --- > **Q2**: The results in Table 1 show that the proposed method improves the performance of Mamba by 1% to 5%. For the sake of rigor, the description in Line 71, stating that the performance is "enhanced by 3% to 5%," does not seem very precise. > **A2**: We apologize for this typographical error and have corrected the description in Line 71. --- > **Q3**: In Lines 224-229, the authors claim that the states $c_n^{(l)}$ and $s_n^{(l)}$ in the shallowest blocks are crucial for influencing the model’s anomalous decisions. However, this cannot be observed from Fig. 4, as $c_n^{(l)}$ and $s_n^{(l)}$ do not appear to vary most significantly from the simple sample (Fig. 4a) to the hard sample (Fig. 4b). > **A3**: Yes, you are correct. Figures 4a and 4b only show that for hard samples, the external state correlation scores decrease across all states. However, since only Conv and SSM participate in external interactions with states within a Mamba block (lines 121-123 of the paper), we believe that states $c_n^{(l)}$ and $s_n^{(l)}$ have the most significant impact on the model’s defective decisions.
It is important to note that subsequent ablation experiments show that defect repair on states $c_n^{(l)}$ and $s_n^{(l)}$ leads to improved model performance (Appendix E). --- > **Q4**: In addition to comparing the state correlation scores before and after flaw repair (Fig. 5), the authors should provide more visual comparisons of the state correlation (like Fig. 2 and Fig. 3), which would be more intuitive for the readers. > **A4**: Thank you for your suggestion. We found that, in addition to the improvement in state correlation scores after flaw repair (as shown in Figure 5), the visual morphology of the states, similar to Figures 2 and 3, also changes. Specifically, after defect repair, the difficult samples in Figure 2 exhibit a greater focus on the foreground compared to before repair. We will include these additional visual comparisons in the appendix of the paper. --- > **Q5**: Only analyzing and enhancing state space models from a post-hoc perspective for Mamba may not be enough. > **A5**: Thank you for your insightful comment; we understand your concerns. In addition to post-hoc analysis and enhancement of the Mamba model, there are indeed preemptive approaches to strengthen the Mamba model's architecture, such as improving its scanning mechanisms. However, these methods often require strong priors and extensive trial-and-error (lines 32-38 of the paper). The primary distinction of our work, compared to existing studies, lies in its post-hoc analysis and optimization of the Mamba model. This approach not only provides a clear understanding of Mamba's operational mechanisms but also allows for targeted and precise solutions to the issues encountered during inference. We believe that the insights gained from post-hoc analysis will better assist researchers in designing more refined Mamba architectures, which is one of the key contributions of this work. 
Moreover, our paper goes beyond analyzing and enhancing the state space model operations within Mamba blocks by also addressing various linear mapping operations, including general 1D convolutions and gating mechanisms within the Mamba blocks. This analysis and enhancement are comprehensive and thorough. We hope this response addresses your concerns. Exploring additional optimization methods for the Mamba model and enhancing the applicability of our proposed methods to more complex tasks will be a continued focus of our future research. --- Rebuttal 2: Title: Rebuttal by Authors [Q6-Q9] Comment: > **Q6**: In Eqns. (8) and (10), the two terms are multiplied together rather than averaged. What is the rationale behind this design choice? > **A6**: The rationale behind this design choice is that the units and scales of the terms in these equations are different. Averaging them could lead to information loss and might not adequately reflect the importance of each term. Multiplying the terms amplifies the differences between them and highlights the significance of each term more effectively. --- > **Q7**: In Figure 4, why are both the ViM and VMamba structures adopted to obtain these observations? > **A7**: We sincerely apologize for the confusion regarding the description of Figure 4. In reality, all observations in Figure 4 are based on the ViM structure. As detailed in lines 196-198 and 204-206 of the paper, Figure 4(a,b) shows the external state correlation scores for different states within Mamba blocks for simple and hard samples, respectively. Similarly, Figure 4(c,d) displays the internal state correlation scores for these states in the same Mamba blocks for simple and hard samples. The x-axis of each subplot represents different Mamba blocks, and the y-axis indicates the magnitude of the correlation scores. We will correct the description of Figure 4 in the paper. --- > **Q8**: The loss function defined in Eq. (11) seems incorrect. 
Could the authors clarify this? > **A8**: We apologize for the typographical error. The correct loss function should be: $\text{Loss}\_{\mathbf{e}} = \mathbb{E}\_{HW}(\mathbf{e}^{(\ell,c)+} \odot m) + \mathbb{E}\_{HW}(\mathbf{e}^{(\ell,s)+} \odot m)$ We have corrected this in the paper. Thank you for pointing this out. --- > **Q9**: In Table 1, it appears that the performance of "Internal Flaw Repair" is better than that of "External Flaw Repair." Are the authors aware of this, and do they have any thoughts on why this is the case? > **A9**: Great question. The reason why "Internal Flaw Repair" shows better performance than "External Flaw Repair" is primarily due to the more pronounced patterns observed in internal state correlations compared to external state correlations. Specifically, as shown in Figure 4(c,d), the internal state correlation scores for states such as $x_{n}^{(\ell)}$ drop more significantly for difficult samples. Therefore, repairing internal state correlations for $x_{n}^{(\ell)}$ results in more substantial improvements. --- Rebuttal Comment 2.1: Title: Thanks for response Comment: Thank you for your response. My concerns have been mainly addressed. I have also reviewed the comments from other reviewers and the corresponding responses. I have increased my score to 6. --- Rebuttal 3: Comment: Thank you for your positive feedback. We are pleased that your concerns have been addressed. We greatly appreciate your support for our work.
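The corrected loss stated in A8 above can be sketched numerically. Reading $\mathbb{E}_{HW}$ as a mean over the spatial dimensions, the superscript $+$ as the positive part (ReLU), and $m$ as a binary mask over the constrained region is our interpretation of the notation; the function name and tensor shapes below are assumptions, not the paper's code:

```python
import numpy as np

def external_repair_loss(e_c, e_s, m):
    """Sketch of the corrected Eq. (11):

        Loss_e = E_HW(e^{(l,c)+} ⊙ m) + E_HW(e^{(l,s)+} ⊙ m)

    e_c, e_s : (H, W) external state correlation maps for the Conv and SSM states
    m        : (H, W) binary mask selecting the region being constrained
    """
    pos = lambda a: np.maximum(a, 0.0)   # the '+' superscript, read as ReLU
    return (pos(e_c) * m).mean() + (pos(e_s) * m).mean()
```

For example, with `e_c = [[1, -1], [2, 0]]`, `e_s = [[0, 1], [-3, 4]]`, and `m = [[1, 0], [1, 1]]`, the two masked means are 0.75 and 1.0, giving a loss of 1.75.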
Summary: The paper addresses the limitations of Mamba-based models in vision through a post-hoc optimization scheme that corrects external state flaws and internal state flaws, identified through the corresponding external and internal state correlation analyses. The introduced corrective measures improve performance on image classification tasks across various Mamba-based backbones on the ImageNet dataset. Strengths: 1. The paper is very well-motivated and addresses a specific topic very relevant to the vision community. 2. The presented correlation analysis is novel and detailed, and the identified flaws, along with the presented mitigations, show consistent improvements across various backbones. 3. The paper is well-written and easy to follow. Weaknesses: There are two major concerns here: 1. The method is limited by the required annotations (for example, foreground annotation), which limits its usefulness across other classification datasets and other tasks. 2. The paper does not evaluate on any detection/segmentation benchmarks. This is required to understand whether the features learned by the backbone are rich enough to generalize to these more complex tasks. All recent vision backbones, including the various Mamba-based ones, report detection results on MS COCO and segmentation results on ADE20K, which the current work lacks. Technical Quality: 3 Clarity: 3 Questions for Authors: Please consider the Weaknesses, particularly point 2. It is important to have results for detection and segmentation to ensure the backbone is learning features that are rich enough for dense prediction tasks and not just over-fitting to the image classification task on ImageNet. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: There is some discussion on limitations in Section 6.
However, the authors should include additional limitations regarding their method, particularly about the fact that it requires additional annotations that may not be available across all datasets. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough review and comments. We are pleased to hear that you find the motivation of this paper clear and that it holds significant importance for the vision community. We are also glad that you consider the correlation analysis proposed in the paper to be novel and detailed, and that the writing is clear and easy to understand. Below are our responses to each of your comments. --- > **Q1**: The method is limited by the required annotations (for example foreground annotation) which limits its usefulness across other classification datasets and other tasks. > **A1**: We fully understand your concerns. While it is true that not all datasets or tasks provide foreground annotations, it is important to note that the amount of foreground annotation required for external state defect repair is actually quite minimal. In our experiments, ImageNet-S includes foreground annotations for 9,190 images, with 10 images per class from 919 classes in the ImageNet-1K training set, while approximately 1,200,000 images in the ImageNet-1K training set are not annotated. This means that the amount of required foreground annotation represents a very small fraction of the total (lines 254 to 256 of the paper). Additionally, our exploratory experiments show that selectively annotating misclassified images in the training set is more effective than random annotation. Therefore, in practical scenarios, manually annotating approximately 10 challenging samples per class can sufficiently enable external state defect repair for Mamba. Furthermore, for internal state defect repair, the proposed method does not require any additional annotations (lines 242 to 235 of the paper). In summary, the proposed method is highly practical and applicable to a wide range of classification datasets and tasks. --- > **Q2**: The paper does not evaluate on any detection/segmentation benchmarks. 
This is required to understand whether the features learned by the backbone are rich enough to generalize to these more complex tasks. All recent vision backbones, including the various mamba-based ones, report detection results on MS coco and segmentation results on ade20k, which the current work lacks. > **A2**: Thank you for your valuable feedback. We appreciate your suggestion to evaluate the generalization of the learned features on detection and segmentation benchmarks. It is important to clarify that the goal of this work is not to develop a general-purpose backbone. Instead, one of our primary motivations is to investigate the internal mechanisms of the Mamba model from a post-hoc perspective, identify operational flaws in specific tasks, and repair them. This approach focuses on optimizing the model based on interpretability (lines 109 to 119 of the paper). Similar to many interpretability studies [1,2], we chose to validate our methods using widely adopted classification tasks. The results demonstrate that the proposed method effectively identifies and repairs defects in the Mamba model, thereby improving model accuracy. However, we acknowledge that adapting post-hoc interpretability methods to more complex tasks like detection and segmentation presents challenges. Most interpretability research has not yet been validated on detection/segmentation tasks, but this does not imply that our method lacks applicability in these areas. Indeed, the paradigm of our method is broadly applicable to Mamba-like architectures and can analyze defects from both external and internal state perspectives, regardless of the specific task. For detection and segmentation tasks, the main challenge is constructing correlations between external/internal states and prediction results. In classification tasks, these correlations are established using the predicted probabilities and gradients between the states and the true classes (Eq. (7) and Eq. (9) in the paper). 
For tasks like segmentation, where predictions involve probabilities for each pixel in the image, it is necessary to associate external/internal states with all pixels. One potential approach is to aggregate the predicted probabilities for different segmented regions from the ground truth, and calculate the gradients between the total prediction probabilities for each region and the states to construct these correlations. This can be used to analyze and constrain the defects in external/internal state correlations when segmentation results are poor (similar to Eq. (8) and Eq. (9) in the paper). Similarly, for detection tasks, it would be necessary to establish correlations between external/internal states and both the detected categories and spatial coordinates, analyzing these correlations to identify and repair defects. In summary, while the proposed post-hoc defect identification and repair paradigm for Mamba-like architectures is applicable to complex detection and segmentation tasks, specific modifications are needed due to the unique nature of task outputs. This remains a focus of our current and future work. We appreciate the reviewer's concerns regarding the validation on classification tasks and hope this response addresses your concerns. [1] Ali, Ameen, Itamar Zimerman, and Lior Wolf. "The hidden attention of mamba models." *arXiv preprint arXiv:2403.01590* (2024). [2] Jafari, Farnoush Rezaei, et al. "MambaLRP: Explaining Selective State Space Sequence Models." *arXiv preprint arXiv:2406.07592* (2024). --- Rebuttal 2: Title: Rebuttal by Authors [Q3&Q4] Comment: > **Q3**: Please consider the Weaknesses, particularly point 2. It is important to have results for detection and segmentation to ensure the backbone is learning features that are rich enough for dense prediction tasks and not just over-fitting to the image classification task on imagenet. > **A3**: Thank you for your comment. We sincerely hope that the previous response addresses your concerns. 
It is important to reiterate that flaw identification and repair are post-hoc optimization methods applied to a trained model. Like many studies exploring internal mechanisms of models, the patterns discovered are specific to that model. Consequently, the backbone obtained from flaw identification and repair on a classification task with Mamba may not be directly applicable to detection or segmentation tasks. However, as mentioned earlier, the proposed flaw identification and repair methods are adaptable to detection and segmentation tasks. --- > **Q4**: Limitations: There is some discussion on limitations in section 6. However, the authors should include additional limitations regarding their method, particularly about the fact that it requires additional annotations that may not be available across all datasets. > **A4**: Thank you for your valuable feedback. We agree that discussing additional limitations would make the paper more comprehensive. We have added the following content to the paper to better reflect the limitations of our method: “In the context of external state defect repair, while our method performs well with a small amount of annotations, it is undeniable that, with limited annotation resources, even manual labeling of a few samples may affect the usability of the proposed method. Similar to internal state defect repair, exploring defect repair methods that do not require additional annotations is a direction we will continue to investigate.” We hope that this response addresses your concerns. --- Rebuttal Comment 2.1: Comment: Thank you for your valuable feedback and comments. We have provided our responses to your concerns above. Should you have any further questions or require additional clarifications, we would be more than happy to discuss them with you. --- Rebuttal 3: Comment: Dear Reviewer 3tmx, Please allow me to join your discussion. 
I agree with the authors that this paper may not need to provide comprehensive experimental results in downstream tasks because this manuscript is more like a machine learning analysis paper. From my side, I think it is acceptable to only provide the results in classification tasks and analyze the inner mechanisms of the model architecture using this basic task. This is the style many machine learning scientists follow in their research. --- Rebuttal Comment 3.1: Comment: Dear Reviewer 3tmX and Reviewer nvv2, Thank you both for your continued engagement and thoughtful feedback. We appreciate Reviewer 3tmX’s perspective on the importance of evaluating the generalizability of features learned by our method through detection and segmentation tasks. While we understand the significance of such experiments, we would like to reiterate that the primary focus of our study is to analyze the internal mechanisms of the Mamba model from a post-hoc interpretability standpoint, specifically within the context of classification tasks. As highlighted by Reviewer nvv2, the primary objective of our work is not to develop or evaluate a general-purpose backbone, but rather to perform a detailed post-hoc analysis of the Mamba model. Our focus is on understanding and enhancing specific aspects of the model's internal mechanisms, particularly in the context of classification tasks, which is a widely accepted approach in related studies. Our choice to validate on classification tasks aligns with the goals of the study, as it allows us to systematically identify and repair flaws within the model, thereby improving its accuracy. That said, we fully acknowledge that extending our evaluation to include detection and segmentation tasks is an important future direction. Furthermore, as mentioned in our response to your second question (Q2), the implementation of post-hoc analysis on the Mamba model in classification tasks differs somewhat from that in detection or segmentation tasks. 
We believe that our internal mechanism analysis and optimization paradigm hold potential for application in more complex tasks as well. We are actively working on this as part of our ongoing research. Once again, we sincerely thank you both for your constructive feedback, and we hope that this clarification helps to contextualize our approach within the intended scope of the paper. Best regards, Authors
Summary: This paper analyzes the Mamba model from a post-hoc perspective. It introduces a state correlation analysis method to establish the correlation between hidden states and predicted results, and analyzes the external and internal state flaws. Furthermore, the manuscript proposes repair methods to handle these flaws. Extensive experiments show its advantages. Strengths: 1. This paper is well written and easy to follow. 2. The topic is interesting and novel, proposing a new analysis perspective for the recent Mamba model. 3. Since the Mamba model is very popular recently, analyzing this model is well motivated. 4. The experimental results are abundant and solid, and the experimental settings are detailed. Weaknesses: 1. One important point is that the classification task does not necessarily rely only on the foreground pixels. Background regions may sometimes provide useful information. So the proposed external flaws may not be totally reasonable. 2. Only classification results on ImageNet are provided. It is suggested to provide more diverse experiments to fully evaluate the proposed repair method. 3. Minors: (1) Not all symbols are explained in Eq.(1)~(6). (2) Missing punctuation (comma or full stop) in Eq.(1)~(6). (3) In Fig. 5, there are only (a)(b)(c)(d) in the subfigures, while the authors use (1)(2)(3)(4) in the caption. Technical Quality: 3 Clarity: 3 Questions for Authors: see weaknesses Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: see weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough review and comments. We are pleased that you find our research topic both interesting and novel, and that you believe the analysis of the Mamba model is well-motivated. We are also gratified that the experimental results have met your approval. Below are our responses to each of your comments. --- > **Q1**: One important point is that the classification task does not necessarily rely only on the foreground pixels. Background regions may sometimes provide useful information. So the proposed external flaws may not be totally reasonable. > **A1**: Thank you for your valuable feedback. We agree that background regions can sometimes provide useful information. However, it is important to clarify that external flaws are identified through the analysis of external state correlation scores (Section 3.1). These scores reflect the interpretability of external correlations, specifically the degree to which the predicted results are associated with the foreground object regions (lines 146 to 159 of the paper). By comparing external state correlation scores between simple and difficult samples, we observed a significant drop in the scores for difficult samples (lines 196 to 203). This indicates that external flaws do not imply a complete disregard for background information, but rather that the Mamba model tends to focus more on background information in difficult samples, potentially neglecting important foreground details. Furthermore, during the defect repair process, we only constrained a small number of samples to focus on the foreground (lines 254 to 256 of the paper). This means that most samples are free to learn useful information from the background if it is beneficial, and our results demonstrate that this approach indeed improves the performance of the Mamba model. --- > **Q2**: Only classification results on ImageNet are provided.
It is suggested to provide more diverse experiments to fully evaluate the proposed repair method. > **A2**: Thank you for your valuable feedback. ImageNet is a widely used large-scale dataset for classification experiments. To thoroughly evaluate the proposed repair method within the limited time available for the rebuttal, we have also tested the ViM model on the smaller CIFAR-10 dataset. The results of these experiments are shown below: CIFAR-10 | Base: 84.25 | Flaw Repair: 85.77, where “Base” refers to the original model, and “Flaw Repair” refers to the proposed defect repair method. As shown above, the proposed defect repair method is also applicable to smaller datasets. For instance, the accuracy of the ViM model on CIFAR-10 improved by 1.50% after applying defect repair. It is worth noting that due to the small image size of CIFAR-10, the repair method in this experiment only addresses internal state defects. Additionally, to further validate the effectiveness of the proposed repair method, we conducted comparative experiments on the external and internal state correlations in the Mamba model before and after defect repair. These comparisons are illustrated in Figure 5 and described in lines 268 to 275 of the paper. Figure 5 shows that prior to defect repair, both external and internal state correlations of the Mamba model had low scores, indicating that the model associated incorrect regions during predictions. After defect repair, the scores for both external and internal state correlations improved, demonstrating that the model associated more accurate regions during predictions. In summary, the proposed repair method has broad applicability. We hope this response addresses your concerns, and we will update the paper with additional experiments to further validate the proposed repair method. --- > **Q3**: Minors: (1) Not all symbols are explained in Eq. (1)~(6). > **A3**: Thank you for your careful review and observation.
We indeed overlooked providing explanations for some symbols. We have now added the following explanations after Eq. (1)~(6): “Here, SiLU$(.)$, causal-Conv1D$(.)$, and selective-SSM$(.)$ denote the activation function, the causal 1D convolution, and the selective state space model, respectively. $W_x^{(\ell)}$, $W_z^{(\ell)}$, and $W_y^{(\ell)}$ are the projection matrices for the linear operations in the Mamba block, while $x$, $z$, $c$, $s$, and $y$ represent the intermediate states within the Mamba block.” Thank you once again for pointing out this detail. --- > **Q4**: *Minors: (2) Missing punctuation (comma or full stop) in Eq.(1)~(6).* > **A4**: Thank you for your meticulous review and observation. We have added the missing commas and full stops in *Eq.(1)~(6)* and have rechecked the entire manuscript. --- > **Q5**: Minors: (3) In Fig.5, there are only (a)(b)(c)(d) in subfigures, while authors use (1)(2)(3)(4) in the caption. > **A5**: We apologize for the typographical errors. We have corrected these discrepancies in the manuscript and have carefully reviewed the entire paper to ensure consistency. --- Rebuttal Comment 1.1: Comment: Thank you for your valuable feedback and comments. We have provided our responses to your concerns above. Should you have any further questions or require additional clarifications, we would be more than happy to discuss them with you. --- Rebuttal 2: Comment: Thank you for your thorough review and valuable suggestions. We are pleased that our responses have addressed your concerns. We will incorporate these points into the final version of the manuscript. Once again, we appreciate your contribution to improving this work.
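The symbol explanations above describe a dataflow (linear projections, SiLU gating, a causal 1D convolution, and a selective state-space recurrence). A minimal numeric sketch of that dataflow is given below; it uses scalar stand-ins for the projection matrices and a toy diagonal recurrence, so all names and values are illustrative and this is not the authors' implementation:

```python
import numpy as np

def silu(x):
    # SiLU activation: x * sigmoid(x)
    return x / (1.0 + np.exp(-x))

def causal_conv1d(x, kernel):
    # Causal 1D convolution: output at position t depends only on
    # inputs at positions <= t (kernel[0] weights the current sample).
    k = len(kernel)
    padded = np.concatenate([np.zeros(k - 1), x])
    return np.array([padded[t:t + k] @ kernel[::-1] for t in range(len(x))])

def selective_ssm(c, a, b):
    # Toy diagonal state-space recurrence: h_t = a * h_{t-1} + b * c_t.
    h, out = 0.0, []
    for ct in c:
        h = a * h + b * ct
        out.append(h)
    return np.array(out)

# Hypothetical block dataflow for a 1-D signal (per-channel view):
# x, z are projections of the input u; c is the conv output; s the SSM
# output; y gates s with SiLU(z) before the output projection would apply.
rng = np.random.default_rng(0)
u = rng.standard_normal(8)          # input sequence
Wx, Wz = 0.7, -0.3                  # scalar stand-ins for W_x, W_z
x, z = Wx * u, Wz * u
c = silu(causal_conv1d(x, np.array([0.5, 0.5])))
s = selective_ssm(c, a=0.9, b=0.2)
y = s * silu(z)                     # gated intermediate output
```

The sketch only mirrors the order of operations implied by the symbols $x$, $z$, $c$, $s$, $y$; real Mamba blocks use learned matrices, input-dependent SSM parameters, and multi-channel states.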
Summary: 1. Addressing the main issue in the existing model: Mamba, despite its success in long sequence tasks, faces mixed opinions and challenges in visual tasks due to inherent flaws and suboptimal performance. Understanding these flaws and optimizing Mamba's performance in the visual domain are critical research questions. 2. Proposed solution for the issue: The paper proposes Vision Mamba Mender, a systematic approach to enhance Mamba's performance in visual recognition tasks. This approach involves predictive correlation analysis of Mamba's hidden states, both internally and externally, to identify and address flaws. Tailored repair methods are then applied to optimize model performance. 3. How the authors evaluated the proposed solution on various standard datasets: The efficacy of Vision Mamba Mender is validated through extensive experiments on prevalent Mamba architectures. These experiments demonstrate significant improvements in model performance across various standard datasets, showcasing the practical impact and effectiveness of the proposed methods. The algorithm code is also provided for transparency and reproducibility. Strengths: 1. Innovative Post-Hoc Optimization: Vision Mamba Mender introduces a novel approach to optimize Mamba models post-training, focusing on identifying and rectifying operational flaws rather than predefining architectures. This method is innovative in its systematic approach to improving model performance. 2. Comprehensive State Analysis: The paper introduces a detailed state correlation analysis method that evaluates Mamba's hidden states from both external and internal perspectives. This comprehensive analysis helps in pinpointing specific flaws that affect prediction accuracy in visual recognition tasks. 3. Tailored Repair Methods: Tailored repair methods are proposed for both external and internal state flaws identified through the analysis.
By imposing constraints on state correlations within Mamba modules, these methods effectively enhance the model's ability to make accurate predictions. 4. Applicability to Existing Architectures: Vision Mamba Mender is designed to be applicable across various state-of-the-art Vision Mamba architectures. This versatility ensures that the optimization approach can be seamlessly integrated into different implementations without significant modifications. 5. Experimental Validation: Extensive experiments validate the effectiveness of Vision Mamba Mender across different benchmarks and datasets. The approach consistently demonstrates improvements in model accuracy, showcasing its practical utility in real-world applications. 6. Transparency and Reproducibility: The availability of algorithm code in the supplementary material and commitment to making it publicly accessible enhance the transparency and reproducibility of the research findings. This openness facilitates further validation and adoption of the proposed methods by the research community. Weaknesses: 1. Limitation in Methodology: The paper mentions the introduction of state correlation analysis and repair methods for Mamba, but it lacks clarity on the specific algorithms or mathematical formulations used. This could hinder reproducibility and transparency, as other researchers may find it challenging to implement or verify the proposed methods without detailed methodology descriptions. 2. Assumptions on Flaws and Repair Methods: The paper suggests that Vision Mamba Mender identifies and rectifies anomalies in Mamba's mechanisms, focusing on external and internal state correlations. However, without empirical evidence or case studies illustrating the nature of these flaws across different datasets and scenarios, the validity and generality of these assumptions remain unclear. 
Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the Weaknesses block Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Please refer to the Weaknesses block Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments and the positive feedback. We have carefully reviewed each of your comments. Although some comments may not fully align with our research objectives, we are nonetheless very appreciative of your feedback. Below are our responses to each of your comments. --- > **Q1**: The paper mentions the introduction of state correlation analysis and repair methods for Mamba, but it lacks clarity on the specific algorithms or mathematical formulations used. This could hinder reproducibility and transparency, as other researchers may find it challenging to implement or verify the proposed methods without detailed methodology descriptions. > **A1**: Thank you for your valuable comments. We understand your concerns. In response, we have detailed the methods proposed for the novel Mamba model in the visual domain to ensure clarity and reproducibility. In Section 3 of the paper, we thoroughly describe the computational process of the Mamba model using numerous formulas. Subsequently, in Section 4, we provide a detailed mathematical formulation and definitions for the algorithms used to identify defects in the Mamba model, including how to establish external/internal state correlations and how to define correlation scores. Finally, in Section 5, we offer a comprehensive description, using mathematical formulas, of the algorithms employed to repair identified defects in the Mamba model, including specific methods for addressing external/internal state defects. Additionally, we have included detailed experimental setups in Appendix D. We hope these measures will assist readers in implementing our proposed algorithms effectively and address your concerns. --- > **Q2**: The paper suggests that Vision Mamba Mender identifies and rectifies anomalies in Mamba's mechanisms, focusing on external and internal state correlations. 
However, without empirical evidence or case studies illustrating the nature of these flaws across different datasets and scenarios, the validity and generality of these assumptions remain unclear. > **A2**: Thank you for your comments. We fully understand your concerns. The approach proposed in our paper, which is based on external and internal state correlations, aims to explore the mechanisms of the Mamba model from these two perspectives. We have demonstrated the presence of external and internal state defects in the Mamba model through a comparison of state correlation scores for simple and challenging samples. Extensive experiments and analyses are detailed in Section 3.3 of the paper, as well as in Appendices E and F. Furthermore, in the field of computer vision, other researchers have also attempted to understand the workings of the Mamba model, similar to our approach. For example, Ali et al. [1] have explored the mechanisms of Mamba in image recognition tasks and their findings reveal some external state anomalies. However, their work primarily focuses on external state correlations and lacks an analysis of internal state correlations. Additionally, their methods are applicable only to state space models and cannot be used for other computations within the Mamba block (as discussed in lines 113 to 119 of our paper). In summary, this paper is the first to propose a post-hoc optimization approach for the Mamba model by analyzing both external and internal state correlations to identify and repair defects. This not only improves model performance but also aids readers in understanding the mechanisms of the novel Mamba model. We appreciate your feedback and hope the above response addresses your concerns. [1] Ali, Ameen, Itamar Zimerman, and Lior Wolf. "The hidden attention of mamba models." *arXiv preprint arXiv:2403.01590* (2024). --- Rebuttal Comment 1.1: Comment: Thank you for your valuable feedback and comments. 
We have provided our responses to your concerns above. Should you have any further questions or require additional clarifications, we would be more than happy to discuss them with you. --- Rebuttal 2: Comment: Thank you for your diligent review and comments. We are pleased that our responses have addressed your concerns. We appreciate your support for this work.
Rebuttal 1: Rebuttal: Dear Reviewers, Thank you for the time and effort you have invested in reviewing our paper. We are particularly grateful for your recognition of the novelty and originality of our work. We are also pleased that our approach to analyzing and optimizing the Mamba model from a post-hoc perspective has contributed to a better understanding of this new model and its performance improvements. In addressing each of your comments and concerns, we believe that the paper has been significantly improved. We have individually responded to each comment and collected the following common issues raised by reviewers. If you find that our replies address your concerns, we would be grateful if you could consider raising the score. Should you have any further questions, we are more than happy to engage in additional discussions. --- > Reviewer JDBm expressed concerns about the details of the flaw identification method for the Mamba model and the limitations of analyzing and enhancing the Mamba model solely from a post-hoc perspective. > In response to these concerns, we have provided a detailed analysis and explanation of the defect identification methodology in our responses to the reviewer’s comments. Additionally, we have highlighted the significant advantages and importance of analyzing and enhancing the Mamba model from a post-hoc perspective compared to other approaches. We believe these responses address the reviewer’s concerns effectively. --- > Reviewers nvv2 and 3tmX both raised concerns regarding the relevance of foreground information in the identification and repair of external state flaws. Reviewer nvv2 pointed out that classification tasks do not solely rely on foreground information, as background regions can sometimes provide useful data. Reviewer 3tmX noted that the requirement for foreground annotations limits the applicability of the proposed method to other classification datasets and tasks. > Thank you for the valuable feedback. 
We apologize for any confusion caused. We have provided detailed responses to each reviewer’s comments below and believe these responses address their concerns. It is important to emphasize again that the proposed method for identifying and repairing external state flaws does not restrict the Mamba model from learning background information. Furthermore, the amount of foreground annotation required for repairing external state flaws is minimal, making the proposed method both reasonable and practical. --- > Reviewer 3tmX expressed concerns about the lack of evaluation of backbone generalization on more complex tasks. > We understand the reviewer’s concerns. To address this, we have provided detailed responses regarding why the proposed method was validated only on classification tasks, how the method can generalize to more complex tasks, and how it can be adapted for such tasks. In summary, the goal of this work is not to develop a universally applicable backbone for various complex tasks. Instead, one of the motivations for this work is to analyze the internal workings of the Mamba model from a post-hoc perspective, identify specific operational flaws in certain tasks, and repair them. The paradigm of identifying and repairing flaws from both external and internal state perspectives is applicable and valuable. --- > Reviewers 4vzy, nvv2, and JDBm pointed out issues with the explanations of formulas and some typographical errors in the paper. > We appreciate the reviewers' attention to these details. We have corrected these issues in the paper and thoroughly checked the entire text to ensure accuracy. --- Besides the issues mentioned above, more detailed responses to the reviewers' comments can be found below each reviewer's specific feedback. We sincerely thank all the reviewers for their diligent review and valuable suggestions, which have greatly improved the paper.
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
No "Zero-Shot" Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance
Accept (poster)
Summary: This paper investigates the "zero-shot" performance of multimodal models like CLIP and Stable-Diffusion. By analyzing 34 models and 5 pretraining datasets, the study finds that these models require exponentially increasing amounts of data to achieve linear improvements in performance. Additionally, the study suggests that the key to effective "zero-shot" generalization with large-scale training data and compute paradigms remains elusive. Strengths: - The authors conducted a comparative analysis involving two main factors: (1) the performance of models across various downstream tasks, and (2) the frequency of test concepts within their pretraining datasets. - This paper is easy to read. - There is ample work involved in this study. Weaknesses: - Please place the table captions consistently, either above or below the tables. - This work does not point out correlations or differences with prior work, preventing me from assessing the novelty of this work. - Qualitative visualizations and relevant theoretical analysis are lacking, although the perspective of the work is easy to understand. Technical Quality: 2 Clarity: 2 Questions for Authors: - Does this work conflict with research into improving the generalization of multimodal models? I think the author should discuss it. - Explanation of Figures 24-27 is insufficient. - What is the motivation behind this work? What implications does it have for future research? - To validate the points made in this paper, should other tasks beyond classification and retrieval be evaluated? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: - See Weaknesses and Questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **W1: unify table descriptions** Thank you for pointing out this discrepancy, we apologise for the oversight and have fixed this in the paper. > **W2: connections to existing work** Thank you for this question. We have included a brief related work section in the main paper in sec 7, describing the differences of our work to the other related work in literature. Further, we discussed our paper’s position in the literature in the last paragraph of the introduction under, "Situating our Contributions in Broader Literature." Lastly, we note that we have included a larger related works section in appx E. We hope this addresses the concern. > **W3: qualitative visualizations/theoretical analysis lacking** - **Regarding qualitative visualizations:** While we share T2I generations for the long-tailed "Let It Wag!" concepts in figs 7, 24-27 as well as misalignment in fig 23, we are happy to provide additional figures visualizing concepts according to frequency in the revised paper. - **Regarding relevant theoretical analysis:** While we mention that theoretical analysis is lacking explicitly in our limitations (see appx N), our work is an example of empirical theory [[1](https://arxiv.org/abs/2001.08361),[2](https://openreview.net/forum?id=AssIuHnmHX)], where we attempt to quantitatively describe real-world behavior, thereby yielding conjectures, which, if proven, yields theory that holds in practice. Such approaches are standard in physics, and have yielded characterizations that have become prescriptive for practitioners ([Hoffmann et al](https://openreview.net/forum?id=iBBcRUlOAPR)). In our case, we have shown that typical power law (log-log) relationships between training set size and performance do not hold when downstream performance is related to concept frequency within training sets, where we instead observe log-linear trends, thereby characterizing the detrimental effects of the long tail.
We will expand upon this in the revised version of the paper. > **Q1: conflicting with works that improve multimodal model generalization?** Thank you for this insightful question. We believe our conclusions should hold regardless of the model architecture and the training objective for any VLM. However, to test this, we investigated two methods that have been empirically shown to improve generalization capabilities of CLIP models: [CyCLIP](https://arxiv.org/abs/2205.14459) and [SLIP](https://arxiv.org/abs/2112.12750). We use 4 different models, each trained with either CyCLIP/SLIP on three different datasets---we then plot our main log-linear scaling results similar to figs 2, 3 for CyCLIP and SLIP models—these plots are in the uploaded rebuttal PDF at the bottom. We observe for both SLIP and CyCLIP models, **the log-linear scaling trends hold strong, with high Pearson correlation coefficients**, further signifying the robustness of our main results. Hence, we emphasize that **our main conclusions hold true even when considering multimodal models that explicitly introduce new training objectives with the aim of improving model generalization**. > **Q2: insufficient explanation of figs 24-27** We apologise for the lack of clarity. The motivation behind figs 24-27 was to provide qualitative insights into failure of T2I models on long-tailed concepts, providing a more detailed analysis of fig 7 of the paper. We used three T2I models to generate images from prompts generated for each concept by Gemini/GPT4. For a more holistic understanding of model failures on long-tailed concepts, we group concepts into a broader semantic category: aircraft (fig 24), activity (fig 25), animal (fig 26) and misc (fig 27), to showcase the broad spectrum of concepts captured and how T2I models fail in each case. This has been detailed in sec 6 of main paper and appx. J.2 but we will expand the section further with clarifying details.
> **Q3: motivation and implications of work?** Thanks for this question. While we provide motivation in our introduction and have also included broader impacts section (appx O) to highlight the implications of our work for future research in multimodal models, we will make sure to clarify these sections further. - **Motivation:** While multimodal models demonstrate impressive "zero-shot" performance, knowing whether this is a result of an overlap between downstream tasks and the pretraining dataset not only calibrates expectations for how these models will perform in-the-wild, but also clarifies if the aforementioned performance could be achieved more efficiently by leveraging improved dataset priors, ie downstream task-aware curation. - **Implications:** Given the observed sample inefficient log-linear scaling trend suggests that models indeed are reliant on downstream task concepts being exponentially frequent in their pretraining dataset, this implies that reliable performance when encountered with rare concepts remains an outstanding question for current multimodal foundation models. Based on this analysis, we contribute the "Let-It-Wag!" testbed by aggregating rare concepts, to further future research in this direction. > **Q4: tasks beyond classification/retrieval** Thank you for the question—we agree, it is important to consider tasks outside classification/retrieval. Hence, we have studied the performance of 24 T2I models on the task of text-based image-generation and observed the same log-linear performance trend as that of classification and retrieval. Please refer to fig 3 (main paper) and fig 11-15 (appendix) for results of the log-linear claim for all metrics across all 24 text-to-image models. Additionally, we provide a human evaluation on the text-to-image results as well, please see appendix C and fig 16, 17. 
Finally, we also highlight the difference in performance between head and tail concepts using a quantitative nearest-neighbour retrieval experiment—please refer appendix J. We hope this sufficiently addresses the reviewer’s concern.
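The log-linear trend discussed in this rebuttal (performance roughly linear in the log of pretraining concept frequency, with significance assessed via Pearson correlation and a two-tailed t-test) can be illustrated with a short numpy sketch. The frequency and accuracy values below are hypothetical, not numbers from the paper:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D arrays."""
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

def log_linear_fit(freq, acc):
    """Fit acc ~ m*log10(freq) + b and report correlation in log space."""
    logf = np.log10(freq)
    m, b = np.polyfit(logf, acc, 1)       # least-squares line in log space
    r = pearson_r(logf, acc)
    n = len(freq)
    # t statistic for the two-tailed significance test of r (df = n - 2)
    t = r * np.sqrt((n - 2) / (1 - r**2))
    return m, b, r, t

# Hypothetical data: exponentially spaced concept frequencies with
# roughly linear accuracy gains per decade of frequency.
freq = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
acc = np.array([0.22, 0.35, 0.47, 0.61, 0.72])
m, b, r, t = log_linear_fit(freq, acc)
```

Under this toy data the slope `m` is the accuracy gained per tenfold increase in concept frequency, which is the "exponential data for linear gains" reading of the paper's claim.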
Summary: The authors evaluate CLIP's zero-shot performance and its correlation to the pretraining data. Their experiments, based on the open LAION datasets, show that the limitations of Vision-Language Models (VLMs) in downstream tasks such as image generation and zero-shot recognition are linked to the frequency of concepts in the data. VLMs like CLIP are pivotal for downstream multimodal applications, and understanding their biases is crucial for developing more robust systems. While the paper demonstrates the limitations of these foundational models, it currently focuses more on breadth rather than presenting in-depth insights. I would be willing to increase my score if the authors could address my questions and fix some of the references. [1] Parashar, Shubham, et al. "The Neglected Tails in Vision-Language Models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. [2] Schuhmann, Christoph, et al. "Laion-5b: An open large-scale dataset for training next generation image-text models." Advances in Neural Information Processing Systems 35 (2022): 25278-25294. Strengths: 1. The paper is well-written and motivated, highlighting the importance of understanding the limitations of Vision-Language models like CLIP. 2. The paper uniquely demonstrates that the generation capabilities of T2I models are correlated with the frequency of a given concept, which is novel and insightful. 3. The authors also introduce a new dataset that aggregates tailed concepts from Vision-Language Model pretraining datasets. Weaknesses: 1. I feel the authors are somewhat aggressive in their claim of a 'constant linear relationship' as stated in Fig 2's caption. There are noticeable dips in accuracy that are not well explained. Given that the authors can identify false positives using their frequency measurement pipeline and are confident about the frequency of a given concept, a true linear correlation should not show these dips. 
This observation is not adequately addressed by the authors. 2. The authors' claim that their work is the first to establish that pretraining datasets of VLMs are long-tailed seems overzealous. Prior work has demonstrated this fact [1], and the correlation between zero-shot accuracy and pretraining frequency has been established. Additionally, [1] analyzes both LAION-400M and LAION-2B, a superset of LAION-400M [2], whereas the biggest dataset evaluated in this work is LAION-400M. Furthermore, [1] has been incorrectly cited in the paper. 3. The authors present a challenging new benchmark but provide very limited information on how these concepts were collected. There could be multiple ways to sample 290 classes, which could lead to worse performance for CLIP compared to popular benchmarks like ImageNet. 4. Additionally, the authors could have provided some high-level information about which concepts are relatively less present, such as snakes, birds, airplanes, etc. This information could offer insights toward improving CLIP. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Can the authors provide insights about the classes used to create the dataset? Specifically, what was (a) the most frequent concept, (b) the least frequent concept, and (c) the average frequency of a concept? This would help understand the structure of the dataset, and how tailed the dataset is. 2. Can the authors present the standard deviation of each frequency bin? It would be interesting to see the variance as the frequency increases, as this may help explain the dips. CLIP might show less variance for relatively frequent concepts. Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: 1. While the authors highlight a significant problem, they do not propose any solutions to mitigate it. Offering potential solutions could have strengthened the paper. 2. 
The paper covers a broad range of topics, but it would benefit from a stronger focus on the generation aspect and dataset. Previous literature [1] has indicated that zero-shot recognition of CLIP models depends on the distribution of concepts in the pretraining data. The value of this work lies in highlighting that generative systems also face this issue. Emphasizing this aspect would make the paper easier to follow and enhance its impact. 3. Lastly, while analyzing the smaller YFCC or CC datasets is commendable, I feel that modern VLMs are pretrained on a scale of billions. Therefore, presenting this analysis on these smaller datasets has limited practical impact compared to insights derived from larger datasets like LAION or DataCOMP. Additionally, YFCC and CC datasets have experienced data loss over time, further diminishing their relevance. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
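The reviewer's Q2 asks for the standard deviation of performance within each frequency bin. A minimal numpy sketch of that computation is given below; the binning scheme (equal-width bins in log10 frequency) and all data are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def binned_std(freq, acc, n_bins=4):
    """Standard deviation of accuracy within each log10-frequency bin."""
    logf = np.log10(np.asarray(freq, dtype=float))
    acc = np.asarray(acc, dtype=float)
    edges = np.linspace(logf.min(), logf.max(), n_bins + 1)
    # digitize against interior edges so every value lands in one of n_bins
    idx = np.clip(np.digitize(logf, edges[1:-1]), 0, n_bins - 1)
    return [float(np.std(acc[idx == b])) if np.any(idx == b) else 0.0
            for b in range(n_bins)]
```

If the reviewer's hypothesis holds, the per-bin standard deviation returned here would shrink as the bin frequency grows, which would also help explain the dips noted in W1.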
Rebuttal 1: Rebuttal: > **W1: Dips in accuracy and consistency of log-linear trends?** We thank the reviewer for raising this concern. - **Analysis of drops in high freq concepts.** We provide some intuitions on why there are some drops in the trend at high freqs for CC-3M and CC-12M, we investigated which concepts occur at such high freqs, specifically above 10^4. The complete list of the relevant concepts and the corresponding downstream dataset will be added to the paper. From our analysis, we hypothesize two key reasons for a performance dip: - **Concept ambiguity**: we observe many concepts that are homonyms / polysemous (same spelling but different meaning ie can represent multiple concepts at once). Some examples are watch, bear, house, fly, bridge, cloud, park, face, bar, tower, wave, etc. - **Broad concepts**: A concept with a broader scope of definition supersedes a narrower one (concept ‘dog’ vs the specific breeds of dogs seen in imagenet-r ('yorkshire terrier', 'boston terrier', 'scottish terrier', 'golden retriever', etc)). These concepts are too coarse-grained and hence can be visually represented by a diverse set of images. Performance variance of these concepts can be quite high based on the specific set of images given for testing. These ambiguities become more prevalent the more ubiquitous a concept is, which is directly tied to its freq obtained from pretraining datasets. Some more examples for a deeper understanding of the diversity of concepts are: cucumber', 'mushroom', 'Granny Smith', 'camera', 'chair', 'cup', 'laptop','hammer', 'jeep', 'lab coat', 'lipstick', 'american-flag', 'bear', 'cake', 'diamond-ring', etc. 
- **Statistical validity across broad evaluations.** Beyond our analysis, while we agree that there are some slight deviations from the log-linear scaling trend, particularly at high frequencies for the CC-3M and CC-12M datasets for classification, we would like to point out that these log-linear trends in general hold strongly across two different tasks and 4 different datasets. We have also validated that these trends are statistically significant by including the Pearson correlation coefficients and conducting a two-tailed t-test to increase confidence in our results. - **Noise in web-scale datasets and frequency estimation.** Further, we would like to highlight that web-scale datasets are inherently very noisy, and coupled with that our frequency estimation procedures could be quite noisy due to the inherent scale of our search and estimation procedures. Despite this noise, we observe strong log-linear scaling trends across the board. Further, we point the reviewer to the results in fig 22 of appx where we showcase that the log-linear scaling trends hold even when our frequency estimation pipeline is explicitly made noisier (by varying threshold of RAM++), further signifying the robustness. We will further ensure that each of the figure captions reflect the exact nature of the consistency of the log-linear trends in the main paper. We hope to have adequately addressed the reviewer’s concerns, by probing into the relevant concepts, as well as discussing the statistical validity of our results. > **W2: claiming first to show long-tailed VLM datasets, contributions with respect to prior work, and incorrect citation** We thank the reviewer for pointing out these concerns. - We apologise for the confusion—we would like to clarify that we **do not explicitly claim that we are the first to establish that VLM pretraining datasets are long-tailed**. 
We agree with the reviewer that prior work has showcased that VLM pretraining datasets like LAION-400M are long-tailed with respect to the concepts they contain, and we build, complement and generalize the findings of these prior works. - **Contributions with respect to [[1](https://arxiv.org/abs/2401.12425)]:** We emphasise that we complement this prior work and point out that our work comprehensively tests the strength of the log-linear scaling trend across several datasets, spanning varying levels of data curation and dataset sizes. Further, we note that in [1], the estimated frequencies are computed using only the text captions of LAION-2B. These estimated frequencies are then used as the canonical frequencies for plotting the performance-frequency curves for all the tested models (despite these models being trained on different pretraining datasets other than LAION-2B). Our work strongly showcases why this apparent asymmetry in their frequency estimation methodology should work—from tab 4, we show that different VLM pretraining datasets are strongly correlated in their concept distributions. Hence, in spite of [1] using only LAION-2B as their source dataset for freq estimation, their results roughly hold true because of this strong correlation across pretraining datasets. Our methodology of incorporating both images and text captions when computing the freq estimates is crucial for explaining this. Hence, we believe that our work comprehensively generalizes and explains the findings of prior work while also providing insights into the pretraining datasets (eg, misalignment degree and correlation of concept distributions in datasets). We will be sure to clarify this in the revision. 
- **Scale of experiments:** While we agree that the largest dataset we analyse (LAION-400M) is smaller than LAION-2B/DataComp-1B, we point to figs 3 and 22 of [[2](https://arxiv.org/abs/2304.14108)], where the authors showcase that the zero-shot performance of different data curation methods is strongly correlated across different dataset scales (12.8M vs 128M vs 1.28B), while the source distribution is held constant (Common Crawl). Similarly, in our case, since LAION-400M, LAION-2B and DataComp-1B are sourced from the same Common Crawl source distribution, we are confident our results also generalize to LAION-2B / DataComp-1B. - **Incorrect citation:** We apologise for this oversight—we fixed it in the paper to reflect the correct version. --- Rebuttal 2: Title: Rebuttal to reviewer WCHb contd. [1/2] Comment: > **W3: info on Let-It-Wag! construction** Thank you for pointing out this concern; we apologise for the lack of details regarding the construction of the Let-It-Wag! benchmark. We provide some details here and have also added this information to the appendix. For curating the set of 290 concepts, we collate our frequency estimates from the LAION-400M dataset for all the 4,029 concepts we consider. We then remove all the concepts that have 0 counts to ensure that our final dataset consists of concepts that have been detected at least once in LAION-400M; this method has also been used in [[3](https://arxiv.org/pdf/2211.08411)] to ensure robustness to noise in the estimation process. For each extracted concept, we then apply the image download and manual cleaning procedure described in appx G.2. Finally, we removed those concepts that had a low number of total images left after the cleaning process, and retained all the others. We then sorted these concepts in ascending order of their estimated concept frequencies, and retained the first 290 (i.e., the least frequent concepts). These concepts are used as the final list of classes for our Let-It-Wag! benchmark.
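For concreteness, the curation steps described above can be sketched as a small filtering routine. This is an illustrative reconstruction: the function name, the minimum-image threshold, and the demo numbers are hypothetical, not the paper's exact pipeline.

```python
def curate_tail_classes(freqs, image_counts, min_images=25, n_classes=290):
    """freqs: concept -> estimated LAION-400M frequency;
    image_counts: concept -> images surviving manual cleaning."""
    # 1. Keep only concepts detected at least once (robustness to estimation noise).
    kept = {c: f for c, f in freqs.items() if f > 0}
    # 2. Drop concepts with too few clean test images.
    kept = {c: f for c, f in kept.items() if image_counts.get(c, 0) >= min_images}
    # 3. Sort ascending by estimated frequency and keep the n_classes rarest concepts.
    ranked = sorted(kept, key=kept.get)
    return ranked[:n_classes]

# Tiny demo with made-up counts (only 'partridge' and 'eel' reflect the rebuttal stats).
freqs = {"partridge": 9489, "eel": 7907, "red_necked_grebe": 0, "veery": 12, "ladle": 300}
counts = {"partridge": 100, "eel": 90, "veery": 60, "ladle": 5}
print(curate_tail_classes(freqs, counts, min_images=25, n_classes=2))  # → ['veery', 'eel']
```

Sorting ascending and taking the head of the list is what makes the benchmark a long-tail test: the retained classes are the rarest surviving concepts.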
> **W4: high-level insights about long-tail concepts** Thank you for this question. The broad categories of the most long-tailed concepts, with a few examples for each, are as follows (a majority of them have been highlighted already in Figs 24-27 in the appendix): - **Birds:** Western_Scrub_Jay, Cassins_Finch, Prairie_Warbler, Red_eyed_Vireo, Veery - **Animals:** flatworm, Tibetan_Mastiff, Scottish_Terrier, vine_snake, newt - **Aircraft:** A300B4, A310, Falcon_900, DHC-8-300, MD-11 - **Objects:** guillotine, letter_opener, ladle, dust_jacket - **Plants and fungi:** mexican_aster, gyromitra, great_masterwort, thorn_apple, cape_flower - **Misc:** consomme, stratified_texture, eggnog Additionally, we will release the full list of these concepts as the classes of the Let-It-Wag! benchmark in the appx. > **Q1: stats about Let-It-Wag?** Thank you for raising this question. We provide the required stats below. - **Most frequent concepts:** partridge (count=9489), Bank Swallow (count=9489), eel (count=7907) - **Least frequent concepts:** Red-necked Grebe (count=0), SR-20 aircraft (count=0), Globe-flower (count=0) - **Median frequency of concepts:** 97.5 - **Mean frequency of concepts:** 1096.2 We also show the full histogram of concept frequencies for the 290 concepts in the Let-It-Wag! dataset in the uploaded rebuttal pdf. From the histogram, it is evident that most of the concepts in Let-It-Wag! have frequency less than 2000. About half of the concepts in Let-It-Wag! (~140) have a frequency less than 1000. Hence, this histogram sufficiently establishes that our **Let-It-Wag! dataset truly captures the long tail**. > **Q2: variance in fig 2 points** Thank you for the insightful question. We provide zero-shot classification plots for CC-3M, CC-12M, and LAION-400M in the uploaded rebuttal PDF, including 95% confidence intervals for each point.
This approach follows the standard practice from works like [[1](https://arxiv.org/abs/2107.04649), [2](https://arxiv.org/abs/2007.00644)]. Our plots show that **the spread at higher frequencies is significantly larger than at moderate frequencies**, corroborating the finding that **higher frequency concepts are more ambiguous and polysemous**. These results **support the observed dips in accuracy at high-frequency points**. We will include these plots in the appendix and discuss this in the paper. --- Rebuttal 3: Title: Rebuttal to reviewer WCHb contd. [2/2] Comment: > **L1: mitigating solutions?** Thank you for this insightful question. While our paper does not propose specific solutions, we believe its primary contribution is in thoroughly highlighting the issues with current pretraining strategies for multimodal models across various datasets, pretraining methods, architectures, training objectives, and tasks. Additionally, by releasing the "Let it Wag!" testbed, we provide a straightforward test set for future research to build upon, aiming to improve the generalization of multimodal models to long-tail scenarios. However, we suggest a few potential methods that could be explored to enhance multimodal long-tail performance: - **Retrieval Augmentation:** Generalization to long-tail concepts can be enhanced by utilizing the "world knowledge" of LLMs to provide detailed descriptions for these concepts. This approach transforms the task from simply recognizing long-tail concepts by name to recognizing them by both names and descriptions. - **Curriculum Learning:** Our tested models used random IID sampling during training. However, research into better sequencing of data samples could potentially improve model generalization to long-tail concepts by inducing more transferable feature representations in VLMs. - **Synthetic Data:** Addressing the issue of long-tail concepts in web-sourced datasets may not be feasible by merely increasing data samples.
There will likely always be low-data-density regions in the pretraining data distribution. Using synthetic data, either through procedurally generated samples or text-to-image models, could be a viable mitigation strategy. We hope these suggestions provide valuable directions for future research and contribute to the development of multimodal models capable of better generalization. We will add these points to the future works section in the appendix. > **L2: stronger focus on generation and Let-It-Wag! dataset** We thank the reviewer for raising this point. We would like to respectfully argue that our image-text contributions are equally as important as our other contributions; please see our response to W2. Regarding the focus on the Let-It-Wag! dataset, we agree that it would be prudent to emphasize its construction/properties further. We will make these points clearer in the paper and hope this clarifies the issue. > **L3: usefulness of insights gained from CC/YFCC experiments?** We thank the reviewer for raising this important point. - **CC/YFCC datasets used for important VLM pretraining methods.** While we agree that smaller datasets like CC-3M/YFCC-15M might not yield insights which are practically relevant for SoTA performance, we still believe that it is an important validation to perform these experiments at that scale. We note that much high-impact work on CLIP-like models has been empirically validated by pretraining on datasets like CC-3M/CC-12M/YFCC-15M, including [CyCLIP](https://arxiv.org/abs/2205.14459), [DeCLIP](https://arxiv.org/abs/2110.05208), [OpenCLIP](https://github.com/mlfoundations/open_clip), [SLIP](https://arxiv.org/abs/2112.12750), [FILIP](https://arxiv.org/abs/2111.07783), [MERU](https://arxiv.org/abs/2304.09172) and [LaCLIP](https://arxiv.org/abs/2305.20088).
- **Expansive insights spanning a range of dataset scales.** Further, we note that each of these datasets, from the small-scale CC-3M to the large-scale LAION-400M, employs different data curation strategies, and by showcasing that (1) *all datasets have very similar pretraining concept distributions*, and (2) *models trained on any of these datasets showcase the same consistent log-linear scaling trends*, we have empirically showcased the robustness of our main exponential data inefficiency result, and uncovered several important properties of VLM pretraining datasets. - **Significant compute resources.** Finally, we would like to point out the significant compute resources it would take to conduct our analysis on even larger-scale datasets like DataComp-1B or LAION-2B—in appx L tab 8, we report the storage and compute costs required for conducting all our experiments—at the LAION-400M scale, we would need almost 10TB of disk space and 2,000 GPU / 7,000 CPU hours (this is roughly the order of magnitude required for pretraining CLIP models themselves, see [Cherti et al](https://arxiv.org/abs/2212.07143) tab 18). - **Testing on other web-scale datasets with Let-It-Wag!** Given our results’ robustness across several scales, we have strong reasons to believe that our result will continue to generalize even for larger-scale datasets and models. One point of evidence to further bolster this is that we tested models trained on larger-scale datasets, including DataComp-1B, DFN-1B and WebLI-10B, in our analysis on the Let-It-Wag! dataset in fig 6. We do not, however, see any significant deviations for models trained on these datasets that would lead us to believe that they showcase different characteristics from the datasets we analysed in our work. --- Rebuttal 4: Comment: Thank you for your response—it has addressed most of my questions. Here are my thoughts on each point: 1.
I think it's important to note the ambiguity problem as a limitation of the current frequency measurement pipeline. This can help offer some explanation of the dips observed and clear some questions a reader may have when observing the dips. 2. It seems there's still some confusion regarding [1], which also analyzes LAION-400M, not just LAION-2B, as I mentioned in my review. Since your work is closely related to previous research, it's crucial to provide a clear comparison that differentiates your contributions from existing work. I suggest enhancing the current literature review section to better highlight these novel aspects, as the current writing doesn't achieve this. 3. Thank you for detailing the dataset curation process. **However, I'm curious if you considered reducing the benchmark size by sampling only classes with fewer than 1,000 instances. If so, did this result in further accuracy degradation?** 4. It would be beneficial to include the dataset statistics and high-level insights presented in the rebuttal within the appendix of the submission. I have asked one additional question above and will wait for the authors' reply. --- Rebuttal Comment 4.1: Title: Response to Reviewer WCHb Comment: Thank you for following up, and highlighting what should be resolved in the revision—we will most certainly fix the references, will point to our ambiguity analysis in the main text, ensure our contributions wrt to [1] are adequately addressed in the literature review, and altogether will include all rebuttal insights within the appendix of the revised paper. Regarding creating a filtered version of the Let-It-Wag! dataset, as suggested, we keep only those classes that have a frequency of less than or equal to 1000 instances per class. This filtered dataset contains 151 classes compared to the original 290 classes in Let-It-Wag!. 
We then re-ran a diverse set of zero-shot CLIP/SigLIP models on this Let-It-Wag-filtered dataset, and showcase the comparison results to both ImageNet and the original Let-It-Wag! dataset in the table below: |Model/Dataset|ImageNet Acc|Let-It-Wag! Acc (290 classes)|Let-It-Wag-filtered! Acc (151 classes)| |-|-|-|-| |RN50/CC-3M|20.09|3.74|2.56| |RN50/CC-12M|33.14|8.92|5.95| |RN-50/openai|59.84|32.93|30.18| |ViT-B-32/openai|63.34|33.52|32.90| |ViT-B-32/datacomp|69.18|46.91|49.21| |ViT-B-16/CC-3M|17.11|3.01|2.42| |ViT-B-16/CC-12M|37.39|11.49|7.88| |ViT-B-16/openai|68.36|37.86|37.77| |ViT-B-16/datacomp|73.48|52.90|56.00| |ViT-B-16/WebLI|78.49|54.64|49.69| |ViT-L-14/openai|75.53|45.32|44.49| |ViT-L-14/datacomp|79.21|63.04|65.70| |ViT-H-14/DFN|83.45|71.91|71.80| |ViT-L-16/WebLI|82.07|61.51|56.13| |ViT-So400m/WebLI|82.03|67.33|63.61| We observe that almost all models underperform on the Let-It-Wag-filtered dataset compared to the Let-It-Wag! dataset. One interesting point to note here is that all exceptions were models trained on DataComp-1B---they perform ~2-3% worse on Let-It-Wag! compared to the Let-It-Wag! filtered dataset. Given DataComp’s focus of optimizing data curation, this perhaps suggests that research into better data curation could be a viable route to enabling progress on the long-tailed concepts, however this warrants a more concrete treatment which could be an interesting direction for follow-up work to explore. With that said, **all performance numbers remain significantly lower than ImageNet performance across all models, which underscores that models still struggle to perform well on concepts that are long-tailed in web data-distributions**. We will highlight these results and interpretations in the revision. We hope this sufficiently addresses the reviewer's concerns and are happy to provide further clarifications if required.
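As a quick sanity check on the table above, the filtered-minus-original accuracy deltas can be computed directly. The few rows below are copied from the table (the full comparison would include all 15 models); the variable names are illustrative.

```python
# (Let-It-Wag! acc, Let-It-Wag-filtered acc) per model, from the table above.
results = {
    "RN50/CC-12M":       (8.92, 5.95),
    "ViT-B-32/datacomp": (46.91, 49.21),
    "ViT-B-16/datacomp": (52.90, 56.00),
    "ViT-L-14/openai":   (45.32, 44.49),
}
deltas = {m: round(filt - full, 2) for m, (full, filt) in results.items()}
improved = [m for m, d in deltas.items() if d > 0]
print(deltas)
print(improved)  # only the DataComp-1B-trained models improve on the filtered set
```

This reproduces the observation in the response: for these rows, only the models trained on DataComp-1B gain accuracy when the benchmark is restricted to classes with at most 1,000 instances.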
Summary: This paper examines the relationship between the frequency of concepts in pretraining data and the performance of downstream tasks associated with those concepts. Extensive experimental results reveal a log-linear relationship, suggesting that exponential increases in data are necessary to improve zero-shot model performance. Additionally, the authors reconfirm the long-tailed distribution of concepts in well-known pretraining datasets, highlighting the challenge of handling rare concepts in foundation models. Based on this analysis, the authors provide a benchmark to evaluate model performance on tail concepts. Strengths: - This paper significantly advances our understanding of data efficiency in multimodal foundation models, revealing a log-linear relationship between concept frequency in pretraining datasets and downstream task performance. - Extensive experiments are conducted, covering various pretraining datasets and downstream tasks. - The additional analysis provides valuable insights into the long-tailed distribution problem of foundation models. - The paper introduces a new benchmark called “Let It Wag,” crucial for evaluating the performance of multimodal foundation models on long-tail distributions. - The paper is clearly written and easy to follow, with most relevant details included. Weaknesses: Nothing I can think of. Technical Quality: 3 Clarity: 4 Questions for Authors: - I am curious about the phenomenon where the average accuracy drops after the concept frequency of 10^4 in Figure 2. Could the authors provide some examples and hypotheses about why this happens? - How much variance is there in each point of Figure 2? I am curious about how much the tendency varies depending on the difficulty of the concept, its size, or other aspects. - Does zero-shot performance predict few-shot or linear probing performance? 
I wonder if zero-shot models still learn important features for rare concepts, but these features are dominated by class imbalance in zero-shot tasks. - Typically, how long does it take to train a model on each dataset? (GPU hours) Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: The authors adequately addressed the limitations with future directions in Appendix N. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q1: Dips in accuracy at high freqs** Thank you for the suggestion. We look into CC-3M and CC-12M, the pretraining datasets for which we see dips in accuracy on the classification tasks. From our analysis, we hypothesise two main reasons for these performance dips: - **Concept ambiguity:** we observe many concepts that are homonyms / polysemous (same spelling but different meanings, i.e., they can represent multiple concepts at once). Some examples are watch, bear, house, fly, bridge, cloud, park, face, bar, tower, wave, etc. - **Broad concepts:** A concept with a broader scope of definition supersedes a narrower one (the concept 'dog' vs the specific breeds of dogs seen in ImageNet-R ('yorkshire terrier', 'boston terrier', 'scottish terrier', 'golden retriever', etc)). These concepts are too coarse-grained and hence can be visually represented by a diverse set of images. Performance variance of these concepts can be quite high based on the specific set of images given for testing. These ambiguities become more prevalent the more ubiquitous a concept is, which is directly tied to its frequency obtained from pretraining datasets. Some more examples for a deeper understanding of the diversity of concepts are: 'cucumber', 'mushroom', 'Granny Smith', 'camera', 'chair', 'cup', 'laptop', 'hammer', 'jeep', 'lab coat', 'lipstick', 'american-flag', 'bear', 'cake', 'diamond-ring', etc. We will provide this analysis in the appendix and hope this adequately addresses the question. > **Q2: Variance in fig 2 points** Thank you for the insightful question. We provide zero-shot classification plots for CC-3M, CC-12M, and LAION-400M in the uploaded rebuttal PDF, including 95% confidence intervals for each point. This approach follows the standard practice from works like [[1](https://arxiv.org/pdf/2107.04649), [2](https://arxiv.org/abs/2007.00644)].
Our plots show that **the spread at higher frequencies is significantly larger than at moderate frequencies**, in line with the finding in the previous point that **higher frequency concepts are more ambiguous and polysemous**. These **results support the observed dips in accuracy at high-frequency points**. We will include these plots in the appendix and discuss this in the paper. > **Q3: Zero-shot perf predictive of few-shot/linear-probing?** Thank you for this insightful question. Our work focused on zero-shot evaluations, the standard for assessing vision-language models "out-of-the-box". However, according to [Gadre et al.](https://arxiv.org/abs/2304.14108) fig. 16, ImageNet zero-shot performance and linear probing performance are highly correlated across various CLIP models. Thus, it is likely our trends would also apply to few-shot fine-tuning, at least for ImageNet. We agree with the reviewer that investigating how log-linear scaling trends change with few-shot fine-tuning, such as in [TIP-Adapter](https://arxiv.org/abs/2111.03930), [CLIP-Adapter](https://arxiv.org/abs/2110.04544), or [CoOP](https://arxiv.org/pdf/2109.01134), would be a valuable follow-up to our work. We will add a point on this in our future works section in the appendix. > **Q4: Training time for models?** Each of the models we consider is trained for roughly 30 epochs. This results in a different total compute budget (samples-seen budget) for each run. We provide some canonical estimated total GPU hours for training different models below: |Model|Samples seen|GPU hours| |-|-|-| |RN50|90M/340M/430M|50h/186h/240h| |ViT-B-32|13B|4500h| |ViT-B-16|90M/340M/430M/13B|60h/200h/280h/5000h| |ViT-L-14|13B|7000h| --- Rebuttal 2: Comment: Thank you for your further investigation and thorough response! My questions are nicely addressed, and I really enjoyed reading your paper. I am going to keep my score as it is, since I already gave a high score from the beginning.
--- Rebuttal Comment 2.1: Comment: We would like to thank the reviewer for deeply engaging with our work and helping us improve the overall quality of the paper.
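For reference, the per-point 95% confidence intervals mentioned in Q2 above can be approximated with a standard normal-approximation interval over a concept's test images. This is a generic sketch, assuming each plotted point aggregates n binary correct/incorrect outcomes; the exact procedure used for the rebuttal plots may differ.

```python
import math

def accuracy_ci(p, n, z=1.96):
    """Normal-approximation 95% CI for an accuracy p estimated from n test images."""
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# e.g. 40% zero-shot accuracy measured on 50 images of one concept
lo, hi = accuracy_ci(0.40, 50)
print(f"[{lo:.3f}, {hi:.3f}]")  # → [0.264, 0.536]
```

Note how the interval narrows as n grows: concepts backed by few test images (common in the tail) carry wide intervals, which is one reason the per-point spread matters when reading the scaling plots.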
Summary: This work explores the extent to which zero-shot generalisation really occurs in large-scale models that were trained on web-scale datasets. The approach taken relies on identifying concepts that are present in train and test data and evaluating concept frequencies and per-concept performance. From extensive experiments, the authors find that test performance of the models correlates strongly with the frequency of concepts seen during training. Strengths: * The paper tackles an interesting and important topic - something that is often mentioned / discussed, but prior to this paper did not get proper analytic treatment. * Good and mostly clear presentation, clear methodology, extensive experimental evidence Weaknesses: * My general feel after reading the paper is that the issue of zero-shot performance of models trained on web-scale data is not as bad as the paper (abstract) makes it out to be. For example, from Fig 6 we see that ImageNet and long-tail (e.g. Let It Wag!) benchmarks actually tend to agree as model performance improves. Similarly, we see from Fig 2 that larger models tend to have better performance even for concepts with low train dataset frequencies. * Presentation of the results is not always clear. For example, what are the different panels in Fig 3 or Fig 5? * Results presented in the paper do not really study zero-shot performance. Specifically, in Fig 2 or 3 there appear to be no evaluated points with freq = 0. As such, the paper really studies few-shot performance as a function of the number of training examples. * Methodologically, I have an issue with the way concept frequency in images was estimated - it relies on a pre-trained model (RAM++) to tag/classify images. This model itself likely suffers from training dataset biases and thus could miss or over- or under-estimate image concepts. This limitation also puts into question the image-caption misalignment results presented in Tab 3.
Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: See weaknesses, specifically whether zero-shot performance is really being evaluated in the paper, and how concept frequencies were determined for images. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **W1: Why not simply scale up models?** Thank you for raising this point. - **Train-test similarity as a control factor.** We agree models with higher ImageNet accuracy perform better on the long-tail. Note however that the results in fig 6 are not normalized for *“train-test similarity”*. Normalizing for *train-test similarity* is crucial for interpreting these results. For example, the absolute performance on low-freq concepts is higher across larger datasets like LAION-400M in fig 2. However, when normalized for train-test similarity (fig 4 left), performance drops to levels comparable to smaller datasets like CC-12M. This highlights the importance of considering *train-test similarities* when comparing absolute performance across pretraining datasets. - **On scaling up models.** We agree increasing model sizes can improve performance offsets, as supported by scaling laws. However, trends in performance degradation on low-freq concepts remain consistent across different model sizes (RN50 - ViT-L/14), indicating a significant issue even with larger models. We will clarify these points in the manuscript and add a discussion on model scaling laws. > **W2: What are panels in Fig 3/Fig 5?** We apologize for the lack of clarity. - **Figure 3**: The different panels denote log-linear performance-frequency scaling curves for different T2I models. Note that in total we analyze 24 different T2I models (see tab 2). Since it would be tedious to fit all 24 onto a single plot, we split them into 8 sub-plots with 3 models each. - **Figure 5**: The different panels in fig 5 all showcase the concept count distribution across three pretraining datasets. In each plot, we showcase the estimated frequency of concepts in the pretraining dataset on the y-axis, and the sorted concept index (sorted by frequency) on the x-axis.
Since there are three ways to estimate concept frequencies (freqs from text captions, freqs from images, and joint frequencies from both images and text captions), we showcase the frequency distributions as obtained using all three methods independently. We will update the paper to add these clarifications, and hope this will simplify and ease the presentation of results. > **W3: Paper does not really study 0-shot performance, no points with freq 0 in figs 2 and 3** Thank you for this important point. The reviewer is right—we explicitly exclude zero-freq concepts from our evaluations following [[1]](https://arxiv.org/pdf/2211.08411), since frequency estimation is potentially noisy, leading to low recall rates (also discussed in appx H). However, to verify if our log-linear trends still hold when including all the zero-freq concepts, we replot all our main zero-shot classification results from fig 2 by including the ones which have zero-freqs—this plot is in the attached PDF. **We find our main log-linear scaling trends from fig 2 are retained**. To further corroborate this, we present average accuracies for concepts with freq 0 and non-zero freq bins in the table below. We note that **average performance for the 0-freq concepts are significantly lower than other non-zero freq concepts**, especially when compared to very high-freq concepts. This justifies our main claim that exponentially more data is needed per concept to improve performance linearly. 
|Dataset/Model|freq=0|freq=1-10|freq=10-100|freq=100-1000|freq=1000-10000| |-|-|-|-|-|-| |CC-3M/RN50|5.10|13.89|20.18|32.93|44.30| |CC-3M/ViT-B-16|4.27|11.98|17.21|27.48|39.24| |CC-12M/RN50|12.91|21.49|27.75|39.48|50.38| |CC-12M/ViT-B-16|16.48|25.59|32.07|45.65|57.06| |YFCC-15M/RN50|16.49|19.59|24.12|34.26|39.97| |YFCC-15M/RN101|17.43|22.06|25.72|36.77|43.14| |YFCC-15M/ViT-B-16|20.75|25.06|29.68|38.73|45.96| |LAION-400M/ViT-B-32|47.41|46.42|50.53|55.96|65.00| |LAION-400M/ViT-B-16|51.77|52.09|57.12|61.32|70.73| |LAION-400M/ViT-L/14|60.44|58.87|62.43|67.63|76.65| > **W4: Issues with RAM++** Thank you for this important point. - **Extensive Ablations on Image-Tagging Models:** We agree using a pretrained model for tagging concepts might introduce biases. However, we conducted extensive ablations on this (see appx H). We tested the concept-tagging ability of open-world object detectors (Owlv2) and multi-label tagging models (RAM/RAM++), finding RAM++ to be most precise for our case (see appx H.1, fig 20). - **Context Enhancement improves RAM++ tagging precision**: Unlike object detectors, RAM++ leverages GPT-4-generated descriptions (see tab 5 appx), improving tagging precision by using visual descriptions to better identify concepts (this has been shown to enhance performance [[2](https://arxiv.org/abs/2210.07183),[3](https://arxiv.org/abs/2209.03320)]). - **Robustness vs RAM++ Thresholds:** We investigated different hparam thresholds for RAM++ (appx H.2). Despite some thresholds yielding sub-optimal tagging, our log-linear scaling results remained robust. - **Human verification for misalignment results**: To verify misalignment results, we manually annotated 200 random image-text pairs from each dataset as aligned or misaligned. An image-text pair is misaligned if the text caption is irrelevant to the image. Previous work also found a similarly small random subset over large-scale web-datasets to be representative [[4]](https://arxiv.org/abs/2307.03132).
Our estimated misalignment results from tab 3 were in line with human-verified results (see tab below), corroborating our findings. |Dataset|tab 3 results|human baseline results| |-|-|-| |CC-3M|16.81%|18%| |CC-12M|17.25%|14.5%| |YFCC-15M|36.48%|40.5%| |LAION-400M|5.31%|7%| - **High YFCC Misalignment Degree:** From our human experiment, we found that the high misalignment degree in YFCC-15M is likely due to the lack of text quality filtering. YFCC-15M images are sourced directly from Flickr, where captions often provide high-level context rather than accurately describing the image content. We hope these clarifications address the reviewer's concerns and provide a better understanding of our work.
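The frequency-binned average accuracies reported in the W3 response above amount to a simple group-by over per-concept results. Here is a minimal sketch with illustrative numbers; the real pipeline operates on the actual per-concept accuracies and estimated frequencies, and its exact bin-edge convention may differ.

```python
import math
from collections import defaultdict

def bin_accuracies(concepts):
    """concepts: list of (estimated_frequency, accuracy_percent) pairs.
    Groups concepts into decade bins of frequency and averages accuracy per bin."""
    bins = defaultdict(list)
    for freq, acc in concepts:
        if freq == 0:
            key = "freq=0"
        else:
            lo = 10 ** int(math.log10(freq))
            key = f"freq={lo}-{lo * 10}"
        bins[key].append(acc)
    return {k: sum(v) / len(v) for k, v in bins.items()}

# Illustrative per-concept data (not the paper's numbers).
demo = [(0, 5.0), (3, 12.0), (7, 16.0), (250, 33.0), (4000, 44.0)]
print(bin_accuracies(demo))
```

A monotone increase across the bins of such a table is the discrete counterpart of the log-linear performance-frequency trend discussed throughout the rebuttal.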
Rebuttal 1: Rebuttal: **General Response to all reviewers** We thank all the reviewers for finding our work ***interesting and important*** (Reviewer oRi7), ***clearly written and well presented*** (Reviewers oRi7, s2XF, WCHb, KUaM), ***containing extensive empirical evidence*** (Reviewers oRi7, s2XF, KUaM), and for finding ***our Let-It-Wag benchmark useful*** (Reviewers s2XF, WCHb). We provide detailed answers to each of the individual reviewers' concerns independently, and collate the most important common points here to further reiterate additional experimental results provided during the rebuttal. 1. We have added plots that include the **0-frequency concepts in the zero-shot classification plots** in the uploaded rebuttal pdf. We find that even when incorporating the significantly noisier 0-frequency concepts into our plots, our **main log-linear performance-frequency scaling trends remain preserved**. 2. We have provided intuitions for why there is an apparent dip in performance at the high frequency concepts for CC-3M and CC-12M. We hypothesize that these high-frequency concepts are **homonyms/polysemous** and **broad**, suggesting that the concept difficulty and the visual diversity of related test-set images of these high-frequency concepts is much more varied. 3. We have added **variance plots for showcasing the spread across each point in the zero-shot classification results** by including 95% confidence intervals. Our plots show that the **spread at higher frequencies is significantly larger than at moderate frequencies**, corroborating the finding that **higher frequency concepts are more ambiguous and polysemous**. This further explains some of the dips in performance we see at higher frequency concepts. 4. We have additionally provided more detailed statistics on the Let-It-Wag! dataset. 
We provide a histogram that showcases the exact tailed nature of our dataset—the median frequency of concepts in the dataset is less than 100, suggesting that **our Let-It-Wag! dataset truly tests the long-tail**. 5. We have run additional experiments with both SLIP and CyCLIP models, both of which claim to improve the generalization of CLIP models. We find that even **for these models with different training objectives, the log-linear scaling trends still hold true**, suggesting that they do not fully close the gap to improving the long-tailed performance. Pdf: /pdf/7378f8dd5b05ef312c9e6c422b54e6178060567f.pdf
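The log-linear scaling claim that runs through these responses reduces to checking that accuracy is roughly linear in log10(concept frequency), e.g. via the Pearson correlation mentioned in the W1 response (a two-tailed t-test on r then gives significance). A stdlib-only sketch on illustrative data:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative (frequency, accuracy) data, roughly linear in log10(frequency).
freqs = [10, 100, 1000, 10000]
accs = [20.0, 31.0, 39.0, 50.0]
log_freqs = [math.log10(f) for f in freqs]
r = pearson(log_freqs, accs)
print(round(r, 3))  # → 0.998
```

In practice one would use `scipy.stats.pearsonr`, which also returns the two-tailed p-value directly; the hand-rolled version above just makes the computation explicit.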
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Compact Language Models via Pruning and Knowledge Distillation
Accept (poster)
Summary: This paper empirically explores compressing language models with pruning and knowledge distillation. It summarizes the best practices of pruning and distilling language models, which are supported by extensive experiments. Strengths: 1. This paper is well-written, and the best practices are easy to follow, which is useful for practical LMs. 2. The paper includes sufficient experiments to support the main results (best practices). Weaknesses: 1. The novelty of compressing LMs with pruning and knowledge distillation is limited because the method in this paper seems to be a simple combination of these two widely used techniques. Although the main contribution of this paper may be the best practices summarized from extensive experiments, it is better to highlight the difference between the final choice in this paper and the approaches in previous work like [1]. 2. Extra computational cost should be considered. It seems the pruning process and the online inference of knowledge distillation require extra computation. Besides the trained tokens, it is better to additionally compare the FLOPs of model training in the main experiments, as in Figure 1 and Table 2. [1] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning. 2024. In ICLR. Technical Quality: 3 Clarity: 3 Questions for Authors: N/A Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Suggestions: The number of significant figures should remain consistent in Table 2 and Table 3. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their encouraging comments and insightful feedback. Please find our responses below:

> **... it is better to highlight the difference between the final choice in this paper and the approaches in previous work like Sheared LLaMa.**

To help with this comparison, we have created the following table to highlight the differences between our approach and Sheared LLaMa [1]:

| Criteria | Sheared LLaMa | Minitron | Summary |
| -------- | ------- | ------- | ------- |
| Pruning importance estimation | Learnable mask on embedding, MLP and attention heads. Mask learning uses 0.4B tokens and is 5x slower than standard LM training [1]. | Forward pass on a tiny number of samples (1024 in paper) to compute importance for embedding, MLP and attention heads at once. This adds negligible overhead. | An expensive gradient-based importance estimation strategy (Sheared-LLaMa) is not required to achieve strong accuracy for pruned models. A simpler, forward-pass-only approach (Minitron) works well. |
| Pruning dimensions | Depth, embedding, MLP, attention | Depth, embedding, MLP, attention | Both approaches support multiple width and depth axes. |
| Retraining | Uses 50B tokens with conventional finetuning | Uses 94B tokens with knowledge distillation | Both papers find that retraining is required to recover accuracy loss on benchmarks. We showcase the superiority of knowledge distillation over conventional training and recommend the former as the retraining approach. |
| Search | No search process. Matches a previously-defined fixed architecture configuration. | Ability to search for optimal architectures using a constrained brute-force algorithm. We also observe that lightweight retraining of searched candidates is essential to stabilize relative rankings. | Minitron can find better architectures through search. For e.g., search finds that keeping attention heads intact is better for Minitron-8B. The paper also provides new insights on how zero-shot accuracy post-pruning isn’t reflective of final accuracy. |
| Multiple compressed models | Requires repeating the 5x slower mask learning process N times for producing N compressed models. Each of the N models must also be finetuned. | Single importance estimation pass (negligible overhead) is sufficient for all N compressed models. Each of the N models must be distilled. | Minitron approach is significantly less costly when multiple compression targets are specified. |

> **Extra computational cost should be considered. It seems the pruning process and the online inference of knowledge distillation require extra computation. Besides the trained tokens, it is better to additionally compare the FLOPs of model training in the main experiments, as in Figure 1 and Table 2.**

Importance estimation for pruning has negligible overhead, since we use a single forward pass on 1024 samples. This costs less than the forward pass on a single step for training the model, which uses 1152 samples. Regarding the overhead for knowledge distillation, we present ablations using a fixed amount of compute (trained on 128 GPUs for a fixed time) for conventional training vs distillation in Appendix Table 11 (with a 3B model, due to time constraints). We believe that this is more representative of real compute cost than FLOPs. Results show that distillation provides an absolute 1.1% gain in accuracy on the standard LM evaluation harness and a 5+% gain on MMLU when compared to conventional training. We also obtain the table below for a 4B model for consistency with the model sizes trained in the paper post submission. We plan to replace tables 11 and 12 in the paper with the one below. Rows marked with * below indicate settings with fixed compute/cost. In this table:

1. `4B-Random-Init` is a 4B model trained from scratch using conventional ground truth CE loss.
2. 
`4B-Pruned (prune Nemotron-4 15B)` is a 4B model pruned from a 15B model, using conventional ground truth CE loss for retraining.
3. `4B-Pruned-Distill (prune Nemotron-4 15B)` is a 4B model pruned from a 15B model, using distillation loss w.r.t. the 15B teacher for retraining.

| Model | Tokens | Hellaswag | MMLU |
| -------- | ------- | ------- | ------- |
| 4B-Random-Init | 150B* | 46.22 | 24.36 |
| 4B-Random-Init | 400B | 48.23 | 26.24 |
| 4B-Pruned (Prune Nemotron-4 15B) | 150B* | 50.85 | 24.57 |
| 4B-Pruned-Distill (Prune Nemotron-4 15B) | 100B* | 51.04 | 37.81 |
| **4B-Pruned-Distill (Prune Minitron 8B)** | **100B*** | **52.04** | **42.45** |

We can see orthogonal improvements from 1 to 2, and 2 to 3, using a fixed amount of compute. This experiment demonstrates the benefits of using knowledge distillation (+13% improvement in MMLU) with the additional computational cost for online inference factored in.

> **The number of significant figures should remain consistent in Table 2 and Table 3.**

Thank you for pointing this out; we will fix this in the final version.

**References:**
1. Xia, Mengzhou, et al. "Sheared LLaMa: Accelerating language model pre-training via structured pruning." arXiv preprint arXiv:2310.06694 (2023).

---

Rebuttal Comment 1.1: Title: Official Comment by Reviewer W3pm Comment: I thank the authors for their response. After reading their reply, I will increase my score from 6 -> 7. I suggest the authors include the additional experiment results in the final version of the paper.

---

Reply to Comment 1.1.1: Comment: Thank you for raising the score. Really appreciate it! We will make sure to include the additional results in the final paper.
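The distillation retraining discussed above (training the pruned student against the teacher's output distribution rather than ground-truth labels alone) can be illustrated with a minimal sketch. The temperature value and the forward-KL formulation below are common conventions chosen for illustration, not necessarily the paper's exact loss.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-softened softmax with max-subtraction for numerical stability.
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Forward KL divergence KL(teacher || student) between the softened
    # distributions, averaged over the batch. Zero when the two match.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return float(np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)))
```

In practice this per-token loss is minimized over the retraining tokens, which is why the "online inference" of the teacher adds the extra forward-pass cost discussed in the rebuttal.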
Summary: In this paper the authors explore compression of LLMs via pruning and Knowledge Distillation. They try out a variety of approaches for pruning as well as the retraining step and provide a comprehensive analysis of best practices for getting compact LLMs from their larger counterparts. The authors explore pruning across width and depth using importance scores and then retraining on a relatively small dataset via Knowledge distillation to get a capable compact model. This paper provides super interesting insights into model pruning which can be used in practice. Strengths: 1. The paper is very well written and provides excellent explanations for all of their modeling choices. 2. I really enjoyed reading the key takeaways of the paper being structured as best practices. Each point in the best practices part of the paper provides some key insight into model compression and what works and what does not work. 3. I also enjoyed reading the comprehensive details that are written in the appendix. Overall this is a very well thought out research paper with proper explanation for each modeling decision that is taken. Weaknesses: 1. Small nitpick: The figures and tables need much much more detailed captions. I should have some idea about what the table wants to say just from the caption. 2. While results on benchmarks are appreciated, I always feel that these benchmarks do not reflect everything about LLMs. I would have liked some form of qualitative study on comparing the generations from the models. Some form of human evaluation on say 25-50 long form generation examples would be great given that there is not enough time during rebuttal for a larger scale human evaluation. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. In Table 2 is the nemotron-8B baseline also trained on the 94B token retraining dataset?
Even if the retraining dataset is a subset of the pretraining dataset of nemotron models, I believe the apples to apples comparison would be to retrain all the models on the retraining dataset as well. I understand retraining all the baselines would be tough given the time constraints but it would be great if you could retrain only the nemotron-8B model and share the results. 2. How does this compare to taking the retraining dataset, generate logits from nemotron-15B model and then using vanilla KD to train the nemotron-8B model further? So basically I ask for 2 more baselines if possible. First is further vanilla training on nemotron-8B on the retraining dataset and second is further KD on the nemotron-8B model with nemotron-15B as the teacher. 3. Check weakness 2 Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The authors have mentioned limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for pointing out the strengths of the paper and providing valuable feedback.

> **I would have liked some form of qualitative study on comparing the generations from the models. Some form of human evaluation on say 25-50 long form generation examples would be great ...**

The reviewer raises an important question of evaluation via human preference. We conduct a small-scale (due to time constraints) human evaluation experiment with 7 people and 25 random prompts drawn from the MT-Bench multi-turn benchmark. We generate responses from 3 models (all instruction-tuned): Minitron-4B, Gemma-2 2B, and Phi-2 2.7B and ask users to rate responses based on correctness, relevance, conciseness, hallucination and personal preference. For each prompt, the user selects the better model between Minitron and a randomly chosen second model (one of Gemma-2 or Phi-2), or specifies a tie. Model names are anonymized to prevent bias. The results of this experiment are shown in the table below:

| Minitron 4B | Tied | Other Model (Gemma-2 / Phi-2) |
| -------- | ------- | ------- |
| 43.43% | 25.71% | 30.86% |

Here, we notice that Minitron is selected as the better model in 43.43% of cases. While this is a small-scale experiment at the moment, we will add a larger scale human evaluation to the final paper. Additionally, to gain more insights into the qualitative performance of Minitron models, we perform supervised fine-tuning (SFT) on Minitron 4B using instruction-tuning data used for Nemotron-4 to create Minitron 4B-instruct, and evaluate it on various tasks, including instruction-following and roleplay (IFEval and MT-Bench), RAG QA (ChatRAG-Bench), and function calling (BFCL). The results for this experiment are shown in Tables 6 to 9 in the 1-page PDF attached to the global rebuttal. 
Tables 6 to 8 demonstrate that Minitron 4B-instruct has strong instruction-following, roleplay and RAG capabilities, beating similarly sized models across all tasks. On function calling (Table 9), Minitron 4B-instruct outperforms Gemma-2B-IT and even Llama-3-8B-instruct. We will add these results to the final paper.

> **In Table 2 is the nemotron-8B baseline also trained on the 94B token retraining dataset?**

In Table 2, the nemotron-8B baseline is not trained on the exact 94B retraining dataset.

> **So basically I ask for 2 more baselines if possible. First is further vanilla training on nemotron-8B on the retraining dataset and second is further KD on the nemotron-8B model with nemotron-15B as the teacher.**

We agree that comparing models trained on exactly the same data is better. However, note that the Nemotron-8B model is already trained on 3.8T tokens. Hence, we believe further vanilla training on the retraining dataset would be an unfair comparison and propose to train Nemotron-8B from scratch on the retraining dataset, identical to the retraining routine for Minitron-8B. This would provide an apples-to-apples comparison for: 1) training from scratch vs 2) our proposed approach, when training a new 8B model. Going the other way, we also have an ablation study where we train Minitron-8B on exactly the same Phase 1 dataset the Nemotron-8B model was trained on:

| | Minitron 8B | Nemotron 8B |
| -------- | ------- | ------- |
| Tokens | **94.4B** | 3.5T |
| MMLU | **0.521** | 0.485 |
| PIQA | 0.794 | **0.7971** |
| HellaSwag | **0.763** | 0.7587 |
| HumanEval | **0.207** | 0.189 |

Minitron outperforms Nemotron-8B on 3 out of 4 benchmarks above despite using a tiny random subset of the 3.8T dataset used by Nemotron 8B. As requested, we also perform the experiment with the model being trained with 3.8T + 94B tokens. For fairness, we also perform the exact same further vanilla training on Minitron 8B resulting in a model trained with 94B + 94B tokens. 
The loss curves for the experiments are provided in Figure 1 in the 1-page PDF attached to the global rebuttal. Unfortunately, due to a shortage of time, the new training jobs ran to 85% completion and we were unable to run downstream task evaluation. In the Figure:

1. Gray: Train nemotron-8B from scratch on the retraining dataset
2. Pink: Minitron 8B
3. Orange: Further vanilla training of nemotron-8B on the retraining dataset
4. Blue: Further vanilla training of minitron 8B on the retraining dataset

That 2 is significantly better than 1, and 4 is better than 3, showcases the efficacy of our technique. The experiment with KD on the Nemotron-8B model with Nemotron-15B as the teacher is not possible as these models use different tokenizers and therefore the logits will be misaligned between the 2 models. Distillation of models with different tokenizers will be a great topic for future work. With the Nemotron-8B model (trained on 3.8T tokens) as a starting point, we would expect similar results to Minitron (the Minitron pruning technique provides significantly better weight initialization compared to random initialization). Using the additional compute required to train Nemotron-8B on 3.8T tokens towards Minitron will make the latter even better.

> **The figures and tables need much much more detailed captions**

We absolutely agree that providing more details in the captions will help understand the underlying message. Some sample updated captions are provided below (unable to provide the full list of 12 changes due to space constraints, but we will update them in the final paper):

* Figure 2: High-level overview of our proposed iterative pruning and distillation approach to train a family of smaller LLMs. On a pretrained LLM, we first evaluate the importance of neurons, rank them, trim the least important neurons and distill the knowledge from the original LLM to the pruned model. The original model is replaced with the distilled model for the next iteration of compression. 
* Figure 5: LM validation loss curve for retraining of two pruned candidates with (L2, L2) and (L2, Mean) metrics for (batch, sequence) aggregation strategies.

---

Rebuttal Comment 1.1: Comment: I appreciate the authors for adding the additional experiments and the qualitative study in such a short amount of time. In my opinion, most weaknesses are addressed in the rebuttal. I am increasing my score to a 7.

---

Reply to Comment 1.1.1: Comment: We'd like to thank the reviewer for raising their score. Truly appreciate it! We are very happy to see that our rebuttal has addressed your questions and concerns.
Summary: This paper proposes Compact Language Models via Pruning and Knowledge Distillation, which combines various tricks and methods to compress a 14B model to 8B while achieving better performance than training from scratch. Strengths: The paper conducts extensive experiments, comparing the latest baselines, and the authors summarize extensive tricks for pruning the model. The pruned 8B model performs well, surpassing the model trained from scratch. The paper is well-written, and the overall structure is good. Despite having many conclusions, it does not confuse the reader. Weaknesses: Will the authors open-source the code? If the code and data are open-sourced, I would raise my score. When pruning a 70B model to 7B, would the method in the paper still work? Would this model perform better than pruning a 13B model to 7B? Technical Quality: 4 Clarity: 4 Questions for Authors: see weakness Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: na Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their encouraging comments and insightful feedback. Please find our responses below: > **Will the authors open-source the code? If the code and data are open-sourced, I would raise my score.** **Data:** more details on the composition of the Nemotron-4 dataset that we use were recently made public [1,2]. Parmar et al. [1] describe the distribution of domains, type of speech, toxicity, and quality of all our crawl data in Section 6, classifier information for all the above attributes in Appendix B, and information about the multilingual and programming language distribution on page 17. Parmar et al. [2] detail our continuous training recipe in Table 2, Figure 1, 2 and 4. Unfortunately, due to individual licenses for sub-data, we are unable to distribute the full dataset. **Code:** 
the code is planned to be released as part of existing common libraries with a timeline of 1-2 months, with the first candidate being distillation with depth pruning, followed by width pruning (MLP, attention head, embedding). We are working on the modifications that will help apply the method to any existing model on Hugging Face, and this requires refactoring. 
 Models will also be released on HF with a permissive license. If required, we can also upload our checkpoints immediately to HF under an anonymous ID. > **When pruning a 70B model to 7B, would the method in the paper still work?** We observe that pruning should be performed iteratively in incremental steps. For example, as shown in Table 12 in the appendix, we observe an improvement of 4.6% points on MMLU by doing 15B->8B->4B instead of 15B->4B. Therefore one would expect better results by doing 70B->35B->15B etc. Pruning models of size 70B and bigger is definitely in our interest. This requires supporting _pipeline parallelization_ in addition to tensor parallelization. We are in the process of adding this functionality to the code-base and plan to release models as soon as they are available. **References:** 1. Parmar, Jupinder, et al. "Data, Data Everywhere: A Guide for Pretraining Dataset Construction." arXiv preprint arXiv:2407.06380 (2024). 2. Parmar, Jupinder, et al. "Reuse, Don't Retrain: A Recipe for Continued Pretraining of Language Models." arXiv preprint arXiv:2407.07263 (2024).
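The forward-pass-only importance estimation described in this rebuttal (scoring neurons from activations on a small calibration set, then trimming the lowest-ranked ones) can be sketched roughly as follows. The (L2, L2) aggregation over the (batch, sequence) axes and the function names are illustrative assumptions based on the aggregation metrics mentioned in the discussion, not the exact Minitron implementation.

```python
import numpy as np

def neuron_importance(activations):
    # activations: array of shape (batch, seq_len, hidden) collected from a
    # single forward pass on a small calibration set. Aggregate with L2 over
    # the sequence axis, then L2 over the batch axis, giving one score per
    # hidden neuron.
    per_example = np.linalg.norm(activations, axis=1)  # (batch, hidden)
    return np.linalg.norm(per_example, axis=0)         # (hidden,)

def prune_mask(importance, keep):
    # Boolean mask that retains the `keep` highest-importance neurons.
    kept = np.argsort(importance)[-keep:]
    mask = np.zeros(importance.shape[0], dtype=bool)
    mask[kept] = True
    return mask
```

Because only forward activations are needed, this costs a single pass over the calibration samples, which is why the overhead is negligible compared to a gradient-based mask-learning approach.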
null
null
Rebuttal 1: Rebuttal: We sincerely thank all reviewers for their insightful feedback and comments. We have posted individual responses for each review, making every effort to provide additional results to support our responses within this limited rebuttal period. We hope our rebuttal addresses all reviewer concerns. Pdf: /pdf/0c42e845e1e22beaec41c8b2f27e9707cbef5581.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Generating Code World Models with Large Language Models Guided by Monte Carlo Tree Search
Accept (poster)
Summary: This paper explores a novel and promising direction of model-based RL, proposing to represent the dynamics model and the reward model using code world models, which are Python programs that can be executed to roll out environments. With such code world models, we can learn a policy to maximize the return predicted by the models. To achieve this, the paper proposes a framework, Generate, Improve and Fix with Monte Carlo Tree Search (GIF-MCTS), that can synthesize code world models from a pre-collected dataset of environment interactions. This paper also introduces a benchmark for evaluating the performance of code world models. The experiments show that the proposed framework GIF-MCTS can reliably model environments and allows for learning policies that achieve reasonable performance. I believe this work explores an interesting and promising research direction. Yet, I am concerned with the assumptions of offline datasets, the missing comparisons to offline RL methods, and the insufficient description of related work. Hence, I am slightly leaning toward rejection at this moment, but I am willing to increase my score if my concerns are addressed in the author's rebuttal. Strengths: **Motivation and intuition** - The motivation for modeling environments using code is interesting and convincing, and this work presents an effective framework to achieve it. **Technical contribution** - Given that WorldCoder has explored the online setup, this work explores the offline setup, which offers a unique perspective to the code world model idea. **Clarity** - The overall writing is clear. The authors utilize figures well to illustrate the ideas. Weaknesses: **Dataset collection** The authors stated that the dataset "includes at least some low-scoring and some relatively high-scoring behavior." I am wondering how we can do this without accessing true reward functions. 
Given a new environment that we have not seen before, how could we collect such a dataset to learn a code world model? **The effect of dataset quality** It is unclear how the quality of the dataset would affect the code world model performance. Can we learn a reasonable code world model with just trajectories collected by a random policy? **Online setting** This paper focuses on an offline setup, where the code world model learns from a pre-collected offline dataset. I am curious how we can extend the framework so that it can improve the code world model and the policy constructed from the offline dataset, just like how the research studies how to fine-tune policies learned using offline RL algorithms. **Access to physics engines as a tool** The proposed framework writes a code to represent a world model for each environment from scratch. It could be difficult to precisely simulate some interaction that requires modeling physics. It would be interesting to see if granting access to physics engines, e.g., calling MuJoCo, could generalize to physics-intensive environments. **Comparison to offline RL methods** This work mainly compares the proposed framework with WorldCoder, which also builds a world model. I am curious about how the proposed framework compares to offline RL methods, which simply learn a policy from the dataset to maximize the return. **Related work: programmatic RL** The related work section is not comprehensive. 
Highly relevant programmatic RL works, which represent policies as programs and interact with environments by executing the programs, are missing from the section, such as:

- Programmatically Interpretable Reinforcement Learning (ICML 2018)
- Imitation-projected programmatic reinforcement learning (NeurIPS 2019)
- Programmatic Reinforcement Learning without Oracles (ICLR 2022)
- Learning to synthesize programs as interpretable and generalizable policies (NeurIPS 2021)
- Hierarchical programmatic reinforcement learning via learning to compose programs (ICML 2023)
- Show me the way! Bilevel search for synthesizing programmatic strategies (AAAI 2023)
- Reclaiming the source of programmatic policies: Programmatic versus latent spaces (ICLR 2024)

**Clarity: offline setup** When I read the introduction and the abstract, I thought the paper focused on an online model-based RL setting. Then, I realized this is an offline setup while reading Section 3. I suggest the authors revise the introduction and the abstract to align expectations. Technical Quality: 2 Clarity: 3 Questions for Authors: See above Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. We appreciate that you found our motivation compelling and our framework effective. We also appreciate your recognition of our unique perspective on the offline setup of the code world model idea. In the following we address your concerns and we are confident that by including the changes described below the manuscript's quality will be significantly improved. **1. Dataset collection.** To collect a dataset for a novel environment, one could in practice use a specialised exploration policy, such as in [1], [2] and [3] to mention a few. Including some relatively high-scoring behavior trajectories is common practice in many offline RL datasets (see e.g., [4]). In case no reward is accessible from the environment (if we interpret your question correctly), one could still collect transitions without rewards and learn a partial Code World Model (CWM) that predicts only the next state. In general, dataset quality and exploration for offline RL are active areas of research, for which orthogonal advancements could benefit CWMs. **2. The effect of dataset quality.** Yes, in principle we can learn a reasonable CWM only with trajectories collected by a random policy. For example in our experiments on RTFM, we employed exclusively a random policy to collect the dataset, which covered the state space sufficiently well, and in the best case we obtained a perfect model (Table 3 of the paper). If the environment is hard to explore and we are provided only a few random trajectories, it is still possible to learn a CWM if the language description of the environment is accurate enough. We ran an additional experiment on RTFM: we collected 10 trajectories all resulting in failures, so that a reward of +1 is never observed. We synthesized a CWM with GIF-MCTS and GPT-4 using 50 calls. 
The resulting CWM is 100% accurate on the collected dataset and even correctly predicts a reward of +1 for positive transitions, which are not included in the dataset, thanks to the language description. **3. Online setting.** This is certainly possible. The simplest procedure is to iteratively **collect data** by interacting with the environment and **update the world model** using the new data, similarly to what is done in the model-based RL literature (see e.g. [5]). To collect data one could use either an exploration policy (point 1) or a variant of the planning algorithms used in our paper with extra noise added for exploration. Our method can be easily modified to update the CWM, for example by re-using the previous tree search and evaluating each program (node) on the updated dataset, before continuing to expand the tree. In general, we chose to work within the offline setup to focus on studying the overall applicability of the CWM approach without relying on specific exploration approaches that would affect the quality of the training dataset and possibly skew the results. We will add the discussion about dataset collection, dataset quality and extension to online setting to the paper. **4. Access to physics engines as a tool.** This is most definitely a valid direction for future research and we will mention that. We believe CWMs could be greatly empowered by access to simulation tools. In the specific case of MuJoCo, the simulations are almost fully taken care of by the library, and the resulting CWM would reduce to a trivial function call. Therefore, we feel that to be generalizable to other environments or embodiments, the simulation tool should operate at a more fundamental level. **5. Comparison to offline RL methods.** We performed a direct comparison with Conservative Q-Learning (CQL), a popular offline RL baseline. We train CQL on the same small dataset of trajectories used for the CWMs. 
The results are reported in the markdown tables of the common response. Overall, there is a balance between CQL and CWMs, with CWMs being more suited to discrete tasks and CQL outperforming CWMs in complex physics tasks. We also observe severe overfitting happening in CQL almost immediately, likely due to the small size of the provided dataset. As we point out in the paper, sample efficiency is one of the main promises of the CWM approach, as very few trajectories are needed to validate the model. **6. Related work: programmatic RL.** We thank the reviewer for pointing us to this interesting line of research and we will include an additional paragraph about programmatic RL in the Related Work Section. We also added some other works (with [6] and [7] specifically focusing on model-based programmatic RL) and some references for modern works using LLMs to generate programmatic policies. **7. Clarity: offline setup.** Thanks for the suggestion, we will clarify our setup. **References:** [1]: Pathak, Deepak, et al. "Curiosity-driven exploration by self-supervised prediction." International conference on machine learning. PMLR, 2017. [2]: Savinov, Nikolay, et al. "Episodic Curiosity through Reachability." International Conference on Learning Representations (2019). [3]: Ecoffet, Adrien, et al. "First return, then explore." Nature 590.7847 (2021): 580-586. [4]: Fu, Justin, et al. "D4rl: Datasets for deep data-driven reinforcement learning." arXiv preprint arXiv:2004.07219 (2020). [5]: Hafner, Danijar, et al. "Dream to Control: Learning Behaviors by Latent Imagination." International Conference on Learning Representations. [6]: Azad, Abdus Salam, et al. "Scenic4rl: Programmatic modeling and generation of reinforcement learning environments." arXiv preprint arXiv:2106.10365 (2021). [7]: Tsividis, Pedro A., et al. "Human-level reinforcement learning through theory-based modeling, exploration, and planning." arXiv preprint arXiv:2107.12544 (2021). 
--- Rebuttal Comment 1.1: Title: Re: Rebuttal by Authors Comment: Thank you for the rebuttal, which addresses some of my concerns. I am increasing my score to 5 (borderline accept), counting on the authors will revise the paper according to the promises, e.g., being upfront about the offline setup in the abstract and the introduction, including the comparisons to offline RL methods, etc. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful consideration and for increasing your score. We are committed to revising the paper as promised, ensuring that the offline setup is clearly stated in the abstract and introduction, and including the necessary comparisons to offline RL methods. We appreciate your feedback and will make the necessary improvements.
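The sample-efficiency point made in this rebuttal (very few trajectories are needed to validate a candidate model) can be illustrated by checking a candidate Code World Model directly against the offline transition dataset. The toy environment and function names below are invented for illustration and are not the paper's benchmark code.

```python
def validate_cwm(cwm_step, transitions):
    # Fraction of offline transitions (state, action, next_state, reward)
    # that a candidate code world model reproduces exactly.
    correct = sum(1 for s, a, s_next, r in transitions
                  if cwm_step(s, a) == (s_next, r))
    return correct / len(transitions)

def toy_cwm(state, action):
    # Toy deterministic world model for a 1-D line world on cells [0, 4]:
    # the agent moves left or right, and reaching the rightmost cell
    # yields reward 1 (as might be inferred from a language description
    # even if the reward never appears in the collected data).
    nxt = min(max(state + (1 if action == "right" else -1), 0), 4)
    return nxt, (1 if nxt == 4 else 0)
```

In an MCTS-guided synthesis loop, a score like this could serve as the value signal for each candidate program (node), so a handful of transitions suffices to rank candidates.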
Summary: The authors present a search algorithm based on Monte Carlo tree search to synthesize programs with LLMs. The contribution includes formulating a search space that is compatible with the functionalities of a LLM: the actions in the search tree include generating new lines of code, fixing bugs, and improving current implementation. The system is evaluated on the synthesis of programs for solving competition-level problems and programs encoding world models. As baselines, the experiments include recent works such as CodeRL, Parsel, and WorldCoder. Strengths: After reading the paper once, I believe its main contribution is to show the power of tree search in the context of synthesis with LLMs. A simple MCTS-based algorithm already produces programs able to attain better quality metrics than other methods not using search. The experiments are nicely done from an ablation perspective: I quite enjoyed Table 6 in the appendix, where different versions of GIF-MCTS are evaluated. Weaknesses: The paper has 4 weaknesses that are worth mentioning, so the authors can react to them in the rebuttal. **1. Clarity of the paper could be much improved.** When reading Figure 2 and the description around it, I got confused with the use of the word "rollout." I think it is being used to mean two different things. The first is a portion of the code that a node represents. The second is the LLM call that generates that portion. I did not find in the paper the specification in terms of how many lines are generated in each "generate new lines" action and how many are generated in the rollouts. **2. The paper overclaims** > we improve the trade-off between exploration and exploitation in the UCT formula used for action selection. What was probably meant was that the paper uses a heuristic that achieves better empirical results. There are no theoretical improvements in the trade-off. 
> Calling code instead of LLMs for planning has the advantages of being precise, reliable, interpretable, and extremely efficient. None of these properties are actually properly evaluated in the paper. This might be a bit of nitpicking, but code can also be imprecise, unreliable, uninterpretable, and extremely inefficient. Anyone working as a programmer has already experienced all these negative properties of code. **3. Lack of discussion on data contamination** With the exception of the RTFM benchmark, nothing is said or discussed about possible data contamination. Is the search algorithm helping the LLM remember what is in its training data or is the overall solution doing something else? Are the models for RL tasks available online as code? Are the solutions to the programming problems available online? The lack of discussion about data contamination brings me to the question of what is the research question behind this work. Is it a retrieval question (i.e., can the system recover information used in the LLM's training data) or a reasoning question (i.e., can the system write programs that solve problems)? **4. Statistical significance is unclear** It is not clear whether the search was run only once or many times. The averages presented seem to be over a number of problems and episodes, but not on the number of times the entire system was executed. Technical Quality: 2 Clarity: 1 Questions for Authors: 1. Please explain the issue around the use of the word "rollout" and how the number of lines of code is determined in each call of the "generate new lines" actions. 2. How about running the fix bug action a number of times whenever a state representing a buggy program is generated? Instead of allowing MCTS to decide when to debug, why not debug right away? 3. Please comment on the data contamination issue mentioned above and clarify what is the research question behind this work. 4. 
Please comment on the statistical metrics and empirical design of the experiments. Confidence: 3 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: In weaknesses, please see the parts related to overclaiming and data contamination. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
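For reference, the standard UCT selection rule that the review's Weakness 2 refers to can be sketched as follows (our illustrative addition, not the paper's modified heuristic):

```python
from math import log, sqrt

def uct_score(q_value: float, parent_visits: int, child_visits: int,
              c: float = 1.414) -> float:
    """Standard UCT selection score: exploitation (mean value) plus an
    exploration bonus that shrinks as a child is visited more often."""
    if child_visits == 0:
        return float("inf")  # always try unvisited children first
    return q_value + c * sqrt(log(parent_visits) / child_visits)
```

The paper's contribution, as clarified in the rebuttal, is an empirical heuristic modifying this trade-off rather than a theoretical improvement of the formula itself.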
Rebuttal 1: Rebuttal: Thank you for the thorough review and the valuable suggestions to improve the clarity and soundness of our work. We appreciate your recognition of the power of GIF-MCTS and of the usefulness of the ablation studies on our method. We detail in the following how we are going to address your comments in the final version of our work. **1. Clarity.** **Rollout.** We consider each node as a full program, comprised of a "state" part and a "rollout" part. The **state** is the **first $L \times d$ lines**, where $L$ is the number of generated new lines per node and $d$ is the depth of a node in the tree. The **remaining part of the program** is denoted as the **rollout**, and we consistently use the term to mean only that. **LLM calls** correspond to actions (edges) in the tree, and each of them **produces a full program** (node). Please let us know if there is a specific part of the manuscript that creates this confusion and we will clarify it in the final version. **Generated lines.** The number of generated new lines $L$ is equal to 2 (Table 4 in the Appendix); we will report this information also in the main text. We also realised that in the main text we used $l$ rather than $L$ and we will fix this. **2. Claims.** We never meant to present imprecise claims and we thank the reviewer for pointing out the need for better wording of them. > we improve the trade-off between exploration and exploitation in the UCT formula used for action selection. We will change to "we *propose a heuristic that empirically improves* the trade-off between exploration and exploitation in the UCT formula used for action selection" (justified in Appendix A). > Calling code instead of LLMs for planning has the advantages of being precise, reliable, interpretable, and extremely efficient. We will change to "Calling code instead of LLMs for planning has *potential* to be *more* precise, reliable, interpretable, and extremely efficient." 
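The state/rollout decomposition described in point 1 above can be expressed as a minimal sketch (our illustration, assuming line-based splitting as described; `split_program` is a hypothetical helper, not the authors' code):

```python
def split_program(program: str, L: int, depth: int) -> tuple[str, str]:
    """Split a full program (one node) into the 'state' part, i.e. the
    first L * depth lines, and the 'rollout' part, i.e. the remaining
    lines, where L is the number of generated new lines per node and
    depth is the node's depth in the tree."""
    lines = program.splitlines()
    cut = L * depth
    return "\n".join(lines[:cut]), "\n".join(lines[cut:])
```

With L = 2 (as stated in Table 4 of the appendix), a node at depth 2 would thus have the first four lines as its state and everything after as its rollout.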
Addressing the mentioned properties one by one:
- Precise: code can natively access a calculator and as such is numerically precise.
- Reliable: a program passing all unit tests can be deemed more reliable than a black-box prediction from an LLM.
- Interpretable: as mentioned in our response to Reviewer u7M2, we will add concrete examples of Code World Models (CWMs) generated by our method. We include one example on Ant-v4 in the common response. In general, we find that all generated programs are clearly interpretable by a proficient human programmer.
- Extremely efficient: we report in Table 7 of the appendix (referenced in the Introduction section) the inference time of our generated CWMs vs the inference time of an LLM. **CWMs are 4 to 6 orders of magnitude faster.**
**3. Data contamination.** **RTFM Benchmark:** The RTFM benchmark is not available online, and our method's success on it provides evidence that our solution is not merely retrieving information from the LLM's training data. **Programming Problems:** The programming problems we used are sourced from three main websites. The benchmark authors managed to crawl reference solutions for only two of these sites. Performance across all methods and models in the competition split is correlated with the source websites of the problems, but not with the availability of the solutions: the highest results are obtained from Kattis, the only site where solutions are not available online. Notably, all methods and models achieve a 0% pass rate for the 41 problems from AtCoder, for which reference solutions are available online. **Gym Environments:** While we observe that some parts of the gym environments recall implementations available online (e.g., constants' values in the CartPole environment), the logic of the step function remains distinct from the reference model.
We include one model-generated environment in the common response, and we will add more in the final manuscript, along with links to the most popular online implementations. **Fair Comparison:** Data contamination is a known issue in almost all studies involving LLMs, not just ours. However, our method is compared to baselines using the same underlying models, ensuring that its superior performance is not biased by potential data contamination. We will add this discussion to the Appendix of the paper. **4. Statistical significance.** The search was run once for each problem, due to the computational cost. For APPS, considering that we have 1000 independent problems, this is more than enough to tell apart with statistical significance the results of GIF-MCTS and the baselines. For CWMB, the uncertainty is higher. We ran 2 extra seeds for each environment (resulting in 18 environments * 3 seeds total executions for our method) for GIF-MCTS and WorldCoder using Llama 3 70B and obtained the following results:

| Model | Method | Budget | Discrete Accuracy | Discrete Return | Continuous Accuracy | Continuous Return |
|----------------------|-----------------|---------|---------------|---------------|---------------|---------------|
| Llama 3 70B | GIF-MCTS (ours) | 50 | **0.84±0.03** | **0.76±0.03** | **0.35±0.03** | **0.22±0.01** |
| Llama 3 70B | WorldCoder | 50 | 0.79±0.04 | 0.60±0.04 | 0.32±0.03 | 0.19±0.01 |

CWM still outperforms WorldCoder with smaller error margins, confirming the statistical significance of our method, especially in the discrete case. We unfortunately could not repeat this for GPT-4 due to budget concerns. **5. When to fix bugs.** Great question. Sometimes, generating a bug-free program from scratch will be more promising than trying to fix a buggy program multiple times, whereas other times the opposite will be true. Rather than deciding a priori how many times to debug right away, we believe that letting MCTS decide will lead to lower regret.
A further relevant discussion is present in Appendix A. --- Rebuttal Comment 1.1: Comment: Thank you for clarifying the issues raised in my review. I have increased the overall score of my review.
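The seed-averaged entries in the rebuttal's table above (mean ± margin over 3 seeds) can be reproduced with a standard computation; this is our sketch, assuming the ± denotes a standard error over seeds:

```python
import statistics

def mean_with_stderr(values: list[float]) -> tuple[float, float]:
    """Mean and standard error over independent seeds (assumed meaning
    of the +/- margins reported in the rebuttal; our illustration)."""
    m = statistics.mean(values)
    se = statistics.stdev(values) / len(values) ** 0.5
    return m, se
```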
Summary: This paper primarily investigates the application of LLMs to synthesizing world models for reinforcement learning environments, where the world model is expressed as a Python program implementing the state transition and reward functions. Concretely, starting from available environment simulation code (e.g., a benchmark environment using the MuJoCo physics simulator), the authors first obtain a number of trajectories. An off-the-shelf LLM is then prompted to reimplement the environment, starting from a textual description (and without explicit access to the original code), and the collected trajectories are used as test cases to verify correctness. This is a challenging code synthesis problem for which the authors propose an iterative algorithm to refine and improve the generated program with multiple LLM calls, formulated as Monte Carlo tree search. To me, the contributions of this paper are:
- The GIF-MCTS method for effective iterative program synthesis wrt available test cases, which is separately evaluated on the APPS code generation benchmark.
- The concept of code world models (as outlined above) which shows promise in efficiently obtaining world models from observational data
- A benchmark which covers common environments used in Deep RL that enables the community to propose further advances
Strengths: I found the main idea of the paper to be quite original and interesting, i.e., generating the code for a world model rather than utilizing LLMs directly. The authors show that this works quite well in environments with discrete action space: they often obtain perfect or near-perfect prediction accuracy as well as rewards approaching those achieved by planning with an oracle world model.
In continuous environments, where the original implementations rely on a physics simulation engine, the results are less convincing -- this is understandable to me as I would expect LLMs (or humans, for that matter) to have trouble reimplementing those without access to the original or a substitute simulation engine. Whether code world models are a promising direction to ultimately tackle planning in complex or real-life environments remains to be seen, of course; however, I would expect the paper at hand and the proposed benchmark to inspire further works in this relevant direction. I also liked the separate evaluation of their MCTS code synthesis method on APPS where it produces strong results, and it's worth noting that there are full papers proposing similar iterative prompting for code generation. I hope the authors make this part of the paper available for easy re-use. The paper itself is well-written and, while it is pretty packed, I found that I could easily locate and understand the method and details on problem setup and evaluations. Weaknesses: I would appreciate it if the authors included examples of LLM-generated environment implementations in the appendix. I would be interested in how the continuous action space environments are being tackled, or to check for overlap with the original environment implementations. For the planning results it would be good to list the true rewards obtained along with the rewards of a random policy and SOTA model-free/model-based results from the RL literature. While beating SOTA results here is not the focus of the paper, it would provide a helpful perspective for readers familiar with those environments. Technical Quality: 3 Clarity: 4 Questions for Authors: I am not totally clear on the LLM sample budget for the evaluation on APPS. From L258 ff: "B is the budget for the number of LLM calls", Table 1 lists pass@20, but L618 in the appendix mentions 50 Llama 3 calls per problem.
L618 also mentions 1000 problems, whereas the competition split on which you report results is smaller. Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: I found the paper's discussion of its limitations to be upfront and comprehensive and have nothing to add. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
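The trajectories-as-test-cases idea summarized in this review can be sketched as follows (our illustration with a hypothetical `cwm_accuracy` helper; the paper's actual metric may differ, e.g., using tolerances for continuous states):

```python
def cwm_accuracy(step_fn, transitions) -> float:
    """Fraction of logged transitions (s, a, s_next, r) that a candidate
    code world model's step function reproduces exactly."""
    correct = sum(1 for s, a, s_next, r in transitions
                  if step_fn(s, a) == (s_next, r))
    return correct / len(transitions)
```

A candidate world model program would be accepted or refined based on how many of the collected transitions it predicts correctly.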
Rebuttal 1: Rebuttal: Thank you for the kind words about our work. We are glad that you found it interesting and believe it will inspire future works! We agree that continuous physics simulations are particularly challenging and we also were not particularly surprised by that, but we will be looking to address this in future work, for example by integrating the method with external physics simulators and APIs. **1. Examples of LLM-generated environments:** >I would appreciate if the authors included examples of LLM-generated environment implementations in the appendix. I would be interested in how the continuous action space environments are being tackled, or to check for overlap with the original environment implementations In the limited space of the rebuttal, we include a Code World Model (CWM) example for the Ant environment (continuous action and state spaces) in the common response. In the final manuscript, we will include two more (one gym environment with discrete action space and RTFM), together with how they compare to the ground truth source code. We performed an additional qualitative study on the generated programs and found that while certain aspects of the gym environments resemble available online implementations (such as the values of constants in the CartPole environment), the logic of the step function is distinct from that of the reference model for all generations. **2. Comparison with RL:** > It would be good to list the true rewards obtained along with the rewards of a random policy and SOTA model-free/model-based results from the RL literature. We performed a direct comparison to a random baseline, the oracle baseline (using the same planner as the CWMs but with the ground truth environment) and with Conservative Q-Learning [1], a popular offline RL baseline that can work for both discrete and continuous action spaces. We train CQL on the same small dataset of trajectories used for the CWMs. 
The results are reported in the markdown tables of the common response. Overall, there is a balance between CQL and CWMs, with CWMs being more suited to discrete tasks and CQL outperforming CWMs in complex physics tasks. We also observe severe overfitting happening in CQL almost immediately, likely due to the small size of the provided dataset. As we point out in the paper, sample efficiency is one of the main promises of the CWM approach, as very few trajectories are needed to validate the model. There is also a considerable gap between CQL and the oracle planner, which represents an upper bound for the CWM approach given our choice of planners. **3. LLM sample budget:** > I am not totally clear on the LLM sample budget for the evaluation on APPS. From L258 ff: "B is the budget for the number of LLM calls", Table 1 lists pass@20, but L618 in the appendix mentions 50 Llama 3 calls per problem. L618 also mentions 1000 problems, whereas the competition split on which you report results is smaller. The mention of 50 calls in the Appendix was a typo, thank you for catching it! Indeed, the correct budget used for APPS is 20 LLM calls corresponding to 20 programs generated and evaluated on the unit tests. As for the number of problems, we work with the test set of the APPS dataset which is composed of 5000 total problems, of which 1000 make up the competition split, so 1000 is the correct number of problems we evaluated on. Could you point us to where you got the impression that the competition split is smaller? **References:** [1]: Kumar, Aviral, et al. "Conservative q-learning for offline reinforcement learning." Advances in Neural Information Processing Systems 33 (2020): 1179-1191.
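For context on the pass@20 metric discussed above: with a budget of 20 generated programs, the standard unbiased pass@k estimator (from Chen et al., 2021, "Evaluating Large Language Models Trained on Code"; our addition for reference, not part of the rebuttal) is:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: given n generated programs of which
    c pass all unit tests, estimate the probability that at least one
    of k sampled programs passes."""
    if n - c < k:
        return 1.0  # every size-k sample must contain a passing program
    return 1.0 - comb(n - c, k) / comb(n, k)
```

When k equals the full budget n (as here, with n = k = 20), the estimator reduces to 1 if any generated program passes and 0 otherwise.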
Summary: This paper proposes Generate, Improve, and Fix with Monte Carlo Tree Search (GIF-MCTS) for generating Code World Models (CWMs) using Large Language Models (LLMs). The authors present code representations of reinforcement learning (RL) environments, enabling the application of LLM algorithms for code generation across various domains. They introduce the Code World Models Benchmark (CWMB), which comprises 18 diverse RL environments to evaluate their approach. GIF-MCTS demonstrates better performance than the baselines on APPS, the CWMB, and a grid-world environment. Strengths: The paper is well-written and effectively demonstrates that MCTS can facilitate agentic behaviors and problem-solving across multiple domains. The motivation and the rationale are clear, as we expect the idea of agentic behavior of searching with diverse actions can help solve more complex problems. The appendix includes extensive ablative studies that offer a detailed analysis of the method's components. Weaknesses: **Reliance on pre-training.** I didn’t fully understand why learning the world model is necessary. MCTS is an inference-time algorithm. The reliance on pre-training for the large language models might be a limitation. Specifically, does GIF-MCTS need to predict all transitions well in the offline dataset to perform well on a task? For coding, the agent doesn’t need to accurately predict how a human solves a problem. It’s sufficient if the agent can find a correct solution, possibly using MCTS. **Novelty.** There seems to be limited novelty in this framework. Related works have explored agentic behaviors of using “implementing a function”, “fixing bug” as actions, like SWE-agent [1]. Using tree search and sequential decision-making are also explored in the literature [2]. The novelty seems to be a combination of both. References [1] Yang, John, et al. "Swe-agent: Agent-computer interfaces enable automated software engineering." arXiv preprint arXiv:2405.15793 (2024).
[2] Zhang, Shun, et al. "Planning with large language models for code generation." arXiv preprint arXiv:2303.05510 (2023). Technical Quality: 3 Clarity: 3 Questions for Authors: Can the authors clarify why GIF-MCTS is better than WorldCoder? Is it because of the advantage of the tree search algorithm (while WorldCoder is mainly a sampling algorithm)? Offline pre-training seems to be expensive. For domains like code generation, is pre-training necessary? For new domains that the agent may not have seen in the pre-training data, does it help to use in-context learning to show the agent some example trajectories? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The limitations are clearly discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our work and provide your detailed feedback. We are grateful for your comments and we hope that the following will clarify the details and novelty of our work. **1. Pre-training:** > Reliance on pre-training. I didn’t fully understand why learning the world model is necessary. MCTS is an inference-time algorithm. The reliance on pre-training for the large language models might be a limitation. and > Offline pre-training seems to be expensive. For domains like code generation, is pre-training necessary? For new domains that the agent may not have seen in the pre-training data, does it help to use in-context learning to show the agent some example trajectories? We would like to clarify a potential misunderstanding regarding our approach. We do not train nor fine-tune any LLM, but only use off-the-shelf LLMs (e.g., Llama-3, GPT-4). Specifically, we learn a Code World Model (CWM) that can be used in model-based RL as a substitute for the dynamics and reward functions of the environment, allowing an agent to search over action strategies using methods such as CEM or standard MCTS. Separately, we propose GIF-MCTS, an iterative generation method tailored for generating CWMs with LLMs. Within it, we make use of in-context learning to show trajectories to the model as you suggested, specifically with the improve action. We developed GIF-MCTS specifically because we found prompting the LLM directly to not be effective. **2. Predicting transitions:** >Specifically, does GIF-MCTS need to predict all transitions well in the offline dataset to perform well on a task? In our work, we assume this is the case. More precisely, GIF-MCTS produces the CWM, which in turn should predict correctly all transitions in the offline dataset to perform well as a world model. 
However, there may be cases where the model does not need to be uniformly accurate everywhere, particularly in suboptimal regions rarely visited by the planning policy. > For coding, the agent doesn’t need to accurately predict how a human solves a problem. It’s sufficient if the agent can find a correct solution, possibly using MCTS. It is not fully clear to us what the reviewer means by this. Our work involves two agents: the GIF-MCTS one, whose objective is to write a correct CWM, and the model-based RL agent, which uses vanilla MCTS (or CEM) with the CWM to solve the RL task (e.g., CartPole). Neither of the two ever tries to predict how a human solves a problem and no human data is used to learn the world model. For coding, the reward function used by GIF-MCTS (which guides the generation process) is the fraction of passed unit tests-- no full solutions are compared against. **3. Novelty:** > Novelty. There seems to be limited novelty in this framework. Related works have explored agentic behaviors of using “implementing a function”, “fixing bug” as actions, like SWE-agent [1]. Using tree search and sequential decision-making are also explored in the literature [2]. The novelty seems to be a combination of both. We respectfully disagree. Our main novelty lies in formulating the concept of Code World Models, a framework to learn RL world models written in code. Additionally, we introduce a benchmark to evaluate this approach, and finally propose GIF-MCTS as a code-generation algorithm with an LLM, specifically tailored for the CWM framework. Hence, while GIF-MCTS can be seen as novel combination of agentic behaviour and tree search, our novelty goes beyond that. **4. Comparison with WorldCoder:** > Can the authors clarify why GIF-MCTS is better than WorldCoder? Is it because of the advantage of the tree search algorithm (while WorldCoder is mainly a sampling algorithm)? Good point, thank you for raising it. 
We believe that GIF-MCTS outperforms WorldCoder because it considers a more diverse set of programs. The main difference is that WorldCoder initially generates a single complete program, which becomes the ancestor for all future programs. In contrast, GIF-MCTS can generate multiple programs either from scratch or from partial programs by taking the "generate new lines" action at the root node or subsequent nodes, which better explores the solution space. A further ablation study ("No Generate action" in Table 6 of the Appendix) supports this finding: using a tree search (like GIF-MCTS) but only with refinement actions (like WorldCoder) results in lower performance compared to our method. We will add this explanation to the Discussion section in the main text. --- Rebuttal 2: Title: Thanks for the response Comment: Thank you for the responses and clarifications. The clarification on pre-training does clear up my confusion. I thought the training data set $D$ (line 148) is used to train a language model to generate world models. That was a misunderstanding. I wanted to make sure that I am following the logic of the paper correctly. Could you confirm if the following points are accurate? - GIF-MCTS is a new tree search algorithm evaluated on the APPS dataset, where it outperforms baseline algorithms like WorldCoder. - GIF-MCTS is then applied to Code World Model learning and evaluated on the Code World Models Benchmark and Read to Fight Monsters. The results show that GIF-MCTS learns a better world model than WorldCoder, and planning with the GIF-MCTS-learned world models achieves better returns. If these points are correct, it would be helpful to clarify them in the paper. Initially, I thought we needed to learn a world model for APPS, which confused me. --- Rebuttal Comment 2.1: Comment: Thank you for your careful review and for taking the time to clarify your understanding.
We are glad that the confusion was cleared away and we can confirm that your points are accurate. We will clarify that GIF-MCTS is a tree search-based code generation algorithm and that we use APPS to evaluate its performance on the general code synthesis task, but that APPS and CWMs are a related but separate application and that the method is framed differently for APPS (for example the dataset D becomes the collection of unit tests). We hope that this helps to explain the novelty of the paper: in our view, the central contribution is the concept of a Code World Model and the possible use cases for RL, while GIF-MCTS is a tree search algorithm we designed specifically for the task of synthesizing better CWMs, with the evaluation on APPS serving the purpose of generally evaluating the method on a widely used benchmark. Please let us know if there are any concerns that remained unaddressed and thank you again for your time and effort. --- Rebuttal 3: Title: Thanks for your response Comment: Thanks for confirming that my understanding is correct. Overall I believe learning a world model represented by code is an interesting and promising idea, which is also validated empirically. I believe the paper would benefit from improved clarity and the new results discussed in the rebuttal. I have raised the score to 6 (Weak Accept).
Rebuttal 1: Rebuttal: We thank all the reviewers for engaging with our work and providing their valuable feedback. In addition to our individual answers, we include the following material: **Example of generated Code World Model.** Reviewers u7M2 and 3vW3 asked for examples of Code World Models (CWMs), being interested in the generated world models for continuous environments and in possible overlaps with the online implementations. In Figure 1 of the attached PDF we report the CWM generated by our method for the Ant-v4 environment. For reference, the official implementation of the environment can be found at the official Gymnasium GitHub repository of the Farama Foundation (at gymnasium.envs.mujoco.Ant_v4). **Comparison with RL baselines.** Reviewer u7M2 asked for a comparison with SOTA RL and oracle baselines, reporting the raw returns, while reviewer 49z7 was specifically interested in a comparison with offline RL. We report in the following tables the average reward obtained over 10 episodes for a random policy, a SOTA method, Conservative Q-Learning (CQL) [1], planning agents with the CWM obtained by GIF-MCTS (ours) with Llama 3 and GPT-4 respectively, and a planning agent with oracle access to the true environment (Oracle). CQL is a SOTA offline RL method, trained with 10 epochs for 100 steps per epoch (1000 total) using the *same* dataset used to learn our CWMs. We chose 1000 steps to match the data to gradient steps ratio from the original CQL paper. Since our replay buffers are much smaller, we started to observe severe overfitting for CQL with more training steps.
| **Environment (Discrete)** | **Random** | **CQL** | **GIF-MCTS (ours) Llama 3** | **GIF-MCTS (ours) GPT-4** | **Oracle** |
|------------------------------|------------|----------|-----------------------------|---------------------------|------------|
| Blackjack-v1 | 0 | -0.3 | -0.6 | **-0.1** | 1 |
| CliffWalking-v0 | -1169.2 | N/A | **-90.2** | -100 | -100 |
| Taxi-v3 | -798.5 | -740 | **-353.9** | -408.8 | -124.5 |
| CartPole-v1 | 24.4 | **317.6** | 277.4 | 310.4 | 494 |
| MountainCar-v0 | **-200** | **-200** | **-200** | **-200** | -200 |
| Acrobot-v1 | -500 | **-295** | -500 | -494.2 | -500 |

| **Environment (Continuous)** | **Random** | **CQL** | **GIF-MCTS (ours) Llama 3** | **GIF-MCTS (ours) GPT-4** | **Oracle** |
|------------------------------|------------|----------|-----------------------------|---------------------------|------------|
| Pendulum-v1 | -1122.8 | -1218.2 | -1232.2 | **-739.8** | -373.6 |
| Reacher-v4 | -43.7 | -11.5 | **-9.2** | -11.2 | -6.8 |
| Pusher-v4 | -149.9 | **-52.4** | -61.1 | -63.3 | -30.3 |
| InvertedPendulum-v4 | 8.3 | **66.7** | 13.1 | 10.9 | 42.5 |
| InvertedDoublePendulum-v4 | 49 | **164** | 60 | 53.4 | 241.6 |
| HalfCheetah-v4 | -304.5 | **-1.3** | -150.3 | -22.8 | 893.3 |
| Hopper-v4 | 32.2 | **137.4** | 62.6 | 23.3 | 229.1 |
| Swimmer-v4 | -5.9 | **28.4** | -2.7 | 8.1 | 317.8 |
| Walker2d-v4 | -0 | **278** | 22.3 | 11.5 | 334.7 |
| Ant-v4 | -33.2 | **998** | 867.7 | 896.8 | 1304.7 |
| Humanoid-v4 | 139.4 | **393.3** | N/A | 162.3 | 1860.7 |
| HumanoidStandup-v4 | 33240.2 | **51045.7** | N/A | 29405.9 | 138076 |

N/A for CQL indicates a failed run, while for GIF-MCTS indicates a failure in synthesizing a syntactically correct CWM. In general, planning with the generated CWMs works best on discrete environments, where our original results also indicate that the CWMs are of a higher quality.
However, CWMs also reach competitive results in some complex physics tasks, such as Pendulum-v1, Reacher-v4 and to a lesser extent Ant-v4, Pusher-v4 and HalfCheetah-v4, without direct access to the original physics simulator. **References:** [1]: Kumar, Aviral, et al. "Conservative q-learning for offline reinforcement learning." Advances in Neural Information Processing Systems 33 (2020): 1179-1191. Pdf: /pdf/0f786b577e962fae28465baaac86f0609d7ff77f.pdf
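For intuition on how the returns above are obtained, planning with a CWM amounts to rolling out candidate action sequences through the generated step function and choosing the best first action. The following is a tiny exhaustive-planner sketch of our own (the paper's actual agents use MCTS/CEM planners, and `plan_first_action` is a hypothetical helper):

```python
from itertools import product

def plan_first_action(step_fn, state, action_space, horizon=3):
    """Roll out every action sequence of the given horizon through a
    (code) world model's step function, which maps (state, action) to
    (next_state, reward), and return the first action of the
    highest-return sequence."""
    best_return, best_first = float("-inf"), None
    for actions in product(action_space, repeat=horizon):
        s, total = state, 0.0
        for a in actions:
            s, r = step_fn(s, a)
            total += r
        if total > best_return:
            best_return, best_first = total, actions[0]
    return best_first
```

Exhaustive enumeration is only feasible for tiny discrete action spaces; sampling-based planners like CEM or MCTS replace the `product` loop for the continuous environments in the tables above.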
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Minimax Optimal and Computationally Efficient Algorithms for Distributionally Robust Offline Reinforcement Learning
Accept (poster)
Summary: This paper studies robust offline RL with function approximation under the specific setting of $d$-rectangular linear DRMDPs, where the nominal environment is a linear MDP with simplex feature space. The authors propose two learning algorithms and establish instance-dependent upper bounds for the suboptimality. The derived information-theoretic lower bound shows that the proposed VA-DRPVI algorithm is near-optimal up to a factor of $\widetilde{O}(\sqrt{d})$. Strengths: (+) The paper derives an information-theoretic lower bound, which depends on the novel uncertainty function $\Phi((\Sigma_h^*)^{-1},s)$ and could be of independent interest to the community. (+) The suboptimality for the proposed VA-DRPVI algorithm nearly matches the lower bound. Weaknesses: (-) The derived suboptimality bounds (in Theorems 3.4 and 4.2) and the lower bound (in Theorem 5.1) require the number $K$ of trajectories in the offline dataset to scale with $\text{poly}(d,H)$, which could be quite restrictive in the offline setting. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Is it possible to establish suboptimality bounds and the lower bound for universal $K$? I guess there would be some "burn-in" terms. 2. Ma et al. (2023) consider a similar setting, except that the model uncertainty is measured by the KL divergence instead of the TV divergence in this paper. Can you point out the differences between this work and theirs? Also, are there any specific reasons for considering the TV divergence, and what are the unique challenges in the analysis? 3. The paper considers linear MDPs with simplex feature space. Can you explain the challenges in extending the results to general linear MDPs? 4. A minor suggestion on wording: In part 3 of the contribution section (the last paragraph on p. 2), the paper claims that "...which implies that VA-DRPVI is minimax optimal in the sense of information theory".
However, as parameter $\beta_2$ scales with $\widetilde{O}(\sqrt{d})$, it can be better to switch to the word *near-optimal* instead of *minimax optimal*. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Limitations have been addressed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for your positive feedback on our work. We hope our response fully addresses all your questions. --- ### 1. Requirement of the number of trajectories $K$ to be poly(d,H). This is an interesting question. We acknowledge that our current analysis requires the sample size $K$ to have a large polynomial dependence on $d$ and $H$. Similar requirements are also necessary for standard offline RL with linear function approximation (Yin et al., 2022; Xiong et al., 2022). It remains unclear whether we can construct upper and lower bounds for full-range $K$ in the robust linear function approximation setting. We will leave this problem for future study. --- ### 2. Comparison with Ma et al. (2022). We acknowledge that Ma et al. (2022)’s work is the most closely related to ours. However, we note that there are several technical flaws in their paper. We summarize the main issues here: * The proofs of the main lemmas (Lemma D.1 and Lemma D.2 in the arXiv version) related to the suboptimality decomposition, as well as the proofs of the theorems, are incorrect. * The sample complexity results for the two algorithms are both incorrect in the order of $d$. * Assumption 4.4 on the dual variable of the dual formulation of the KL-divergence is too strong to be realistic. * The concept of mis-specification in Section 5.2 is not well-defined, which makes all results in Section 5 vacuous. Given these technical flaws, we believe the fundamental challenges of $d$-rectangular linear DRMDPs are not properly addressed in their work. It is also unclear whether $d$-rectangular linear DRMDPs with a KL-divergence uncertainty set are solvable by their current methodologies. Moreover, in our work, we start with the setting of $d$-rectangular linear DRMDPs with a TV-divergence uncertainty set. We address the essential challenges and explore the fundamental limit of $d$-rectangular linear DRMDPs. --- ### 3. Reasons for considering TV-divergence and challenges in the analysis. 
There are two main reasons we consider the TV-divergence: 1) TV-divergence is a commonly studied distance in the field of DRMDPs, and it serves as a natural starting point for addressing the fundamental challenges of DRMDPs with linear function approximation; 2) We also tried the KL-divergence uncertainty set; however, the analysis heavily relies on an unrealistic assumption on the dual variable of the duality for KL. Thus, further effort is required to determine whether the linear DRMDP with a KL-divergence uncertainty set is both theoretically and practically solvable. Thanks to the simple form of the duality for TV, we can focus directly on the essential challenges of the linear DRMDP setting. To see the main challenge of the analysis, we first note that existing analyses of standard linear MDPs rely heavily on the linear dependency of the Bellman equation on the nominal kernel. However, the model uncertainty in DRMDPs disrupts the linear dependency (see (2.1a)) of the robust Bellman equation on the nominal kernel, under which the offline dataset is collected. This nonlinearity leads to unique challenges in both the upper bound and lower bound analyses, and we are the first to solve those challenges. --- ### 4. Extending the results to general linear MDPs. Generalizing the results to standard linear MDPs presents challenges primarily due to the construction of uncertainty sets. In particular, the design of the $d$-rectangular uncertainty set relies on the fact that the factor uncertainty sets are defined around probability distributions, since the total variation distance $D_{\text{TV}}(\cdot||\cdot)$ is a distance measure for probability distributions. Further, it is non-trivial to define uncertainty sets for the standard linear MDP that ensure (1) they contain valid probability distributions and (2) the optimization over the uncertainty set can be solved efficiently (by duality, for example). We leave this extension for future research. --- ### 5. Wording suggestion. 
Thanks, we will take your advice and revise “minimax optimal” to “near-optimal”. --- We hope we have addressed all of your questions/concerns. If you have any further questions, we would be more than happy to answer them, and if you don’t, would you kindly consider increasing your score? --- **References** [1] Yin, Ming, Yaqi Duan, Mengdi Wang, and Yu-Xiang Wang. "Near-optimal Offline Reinforcement Learning with Linear Representation: Leveraging Variance Information with Pessimism." In International Conference on Learning Representations (ICLR), 2022. [2] Xiong, Wei, Han Zhong, Chengshuai Shi, Cong Shen, Liwei Wang, and Tong Zhang. Nearly Minimax Optimal Offline Reinforcement Learning with Linear Function Approximation: Single-Agent MDP and Markov Game. In International Conference on Learning Representations (ICLR), 2023. --- Rebuttal Comment 1.1: Title: Thank you Comment: Thank you for your detailed responses. I tend to maintain my score. Responding: After reading references [1] and [2], I can see that the lower bound on $K$ results from the concentration properties of variance estimators. Hence, it seems quite confusing that a suboptimality upper bound for uniform $K$ cannot be derived for Algorithm 1, as it does not involve such estimators. --- Reply to Comment 1.1.1: Comment: Thank you for your reply and for acknowledging our responses. We answer your further questions as follows. --- ### Suboptimality upper bound for uniform $K$. We would like to acknowledge that your intuition is correct. For Algorithm 1, if we adopt the Suboptimality Decomposition Lemma C.1 proposed in our current paper, as well as the uniform concentration argument for analyzing the estimation error developed by Jin et al. (2021), we can obtain an instance-dependent suboptimality upper bound for uniform $K$ for Algorithm 1 without Assumption 3.3. However, this would lead to an additional $O(\sqrt{d})$ factor in the upper bound for Algorithm 1. 
More importantly, the focus of our work is to explore the fundamental limit and intrinsic characteristics of the offline $d$-rectangular linear DRMDP. We proposed Algorithm 1 as the backbone of the more advanced Algorithm 2, which achieves the nearly minimax optimal suboptimality bound. Therefore, we analyze Algorithm 1 using the method presented in our paper so that it serves as a warm start for working out the theoretical analysis pipeline needed to achieve our goal (near-optimal instance-dependent upper and lower bounds). Further, we would like to clarify that it is unclear whether we can derive an upper bound for uniform $K$ for **Algorithm 2**, for the reason stated below. Note that Jin et al. (2021)’s analysis framework does not incorporate the reference-decomposition technique, and thus cannot be leveraged to achieve the near-optimal suboptimality in the $d$-rectangular linear DRMDP setting. Consequently, adapting Jin et al. (2021)’s analysis framework mentioned in the above paragraph for analyzing Algorithm 1 would be a detour from our main goal. Nevertheless, we will add a remark to discuss this further in the next version of our manuscript. We would be more than happy to discuss more if you have further questions. **References** [1] Ying Jin, Zhuoran Yang, and Zhaoran Wang. Is pessimism provably efficient for offline RL? In International Conference on Machine Learning, pages 5084–5096. PMLR, 2021.
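As a concrete aside to the duality discussion in point 3 of our rebuttal above, the strong duality for a TV-divergence uncertainty set on a finite state space can be checked numerically. The following sketch is our own illustration (not code from the paper; the function names are ours), using the standard tabular dual form $\max_{\alpha}\{\mathbb{E}_{p^0}[\min(V,\alpha)]-\rho(\alpha-\min_s V(s))\}$ and the closed-form worst case that moves probability mass from the highest-value states onto the lowest-value state:

```python
# Numerical sanity check of the strong duality for a TV-divergence uncertainty
# set on a finite state space (our own illustration; D_TV is taken as half the
# L1 distance, the convention common in the tabular DRMDP literature).
#
#   Primal:  inf_{q : D_TV(q, p0) <= rho}  E_q[V]
#   Dual:    max_{alpha in [min V, max V]}  E_{p0}[min(V, alpha)] - rho * (alpha - min_s V(s))

def robust_expectation_dual(p0, V, rho, grid=2000):
    """Approximate the dual by a grid search over alpha in [min V, max V]."""
    lo, hi = min(V), max(V)
    best = float("-inf")
    for k in range(grid + 1):
        alpha = lo + (hi - lo) * k / grid
        val = sum(p * min(v, alpha) for p, v in zip(p0, V)) - rho * (alpha - lo)
        best = max(best, val)
    return best

def robust_expectation_primal(p0, V, rho):
    """Closed-form worst case: shift up to rho mass from the highest-value
    states onto the lowest-value state (total variation budget rho)."""
    q = list(p0)
    budget = rho
    for i in sorted(range(len(V)), key=lambda i: V[i], reverse=True):
        take = min(q[i], budget)
        q[i] -= take
        budget -= take
    q[min(range(len(V)), key=lambda i: V[i])] += rho - budget
    return sum(qi * v for qi, v in zip(q, V))

p0 = [0.5, 0.3, 0.2]    # nominal next-state distribution
V = [1.0, 4.0, 10.0]    # value function on three states
rho = 0.15              # TV radius
dual = robust_expectation_dual(p0, V, rho)
primal = robust_expectation_primal(p0, V, rho)
```

On this example, both quantities evaluate to 2.35 (up to grid and floating-point error): the adversary removes the full budget of 0.15 mass from the highest-value state and places it on the lowest-value one.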
Summary: This paper proposes minimax optimal and computationally efficient algorithms with linear function approximation in the context of distributionally robust offline RL. The authors incorporate multiple new techniques in the theoretical analysis, e.g., variance information, suboptimality and estimation uncertainty decomposition, robust value function shrinkage, and a family of hard instances. Strengths: 1. The paper is technically sound, with most claims supported sufficiently. 2. The theoretical analysis seems novel. Weaknesses: Quality: 1. In Line 175, the expression requires calculating the minimum value over the entire space. This operation is computationally intractable when considering a continuous state space, which violates the "computationally efficient" claim made by the authors. 2. In Line 202, I don't find Assumption 4.1 and Remark 4.2 in the paper. Additionally, the "fail-state" assumption doesn't necessarily hold in multiple MuJoCo environments, i.e., some environments have negative minimum values. 3. There are no experimental results provided in the paper. Although it is a theoretical paper, it would be better to include some toy experiments to verify the proposed algorithm. Clarity: 1. Since the authors only focus on linear function approximation, it would be better to clarify this in the open questions (Lines 51-52). 2. (Lines 97-98) Assoud's method -> Assouad's method? Additionally, it would be better to provide references for Assouad's method. 3. (Lines 201-203) It would be better to provide references for the Nelder-Mead method. Significance: There are several works providing theoretical guarantees for general function approximation in online robust RL (e.g., [58]) and offline RL (e.g., [Chen and Jiang, 2019]). Some discussion is missing about why general function approximation is not considered here for offline robust RL settings. [Chen and Jiang, 2019] Information-Theoretic Considerations in Batch Reinforcement Learning, ICML 2019. 
Technical Quality: 2 Clarity: 2 Questions for Authors: Please see the details in "weakness". Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: There is no potential negative social impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for your valuable time and effort in providing detailed feedback on our work. We hope our response will fully address all your questions. --- ### 1. Computational tractability. We would like to note that we discussed the computational tractability of our algorithm in Lines 178-203. Specifically, if the ‘fail-state’ assumption holds, then the minimal value is trivially equal to zero, hence no computation is needed. If the ‘fail-state’ assumption does not hold, we can use heuristic methods, such as the Nelder-Mead method, to search for the minimal value. Thus, our algorithms are in general computationally tractable. We have also conducted numerical experiments to show the computational tractability of our algorithm. In terms of computational efficiency, 1) our algorithms leverage linear function approximation and do not need to estimate the robust value at each state-action pair, unlike methods developed for the tabular DRMDP setting; 2) moreover, thanks to the structure of linear DRMDPs, we do not need to solve the dual problem at each state-action pair. Thus, our algorithm is computationally efficient compared to those methods developed for tabular DRMDPs and general $(s, a)$-rectangular DRMDPs. This is consistent with the notion of computationally efficient algorithms in theoretical RL (Jin et al., 2019; Xiong et al., 2022; Yin et al., 2022), which states that the runtime and sample complexity should not depend on the number of states, but instead on an intrinsic complexity measure of the function class. --- ### 2. In Line 202, I don't find Assumption 4.1 and Remark 4.2 in the paper. Additionally, the "fail-state" assumption doesn't necessarily hold in multiple MuJoCo environments, i.e., some environments have negative minimum values. We believe there is a misunderstanding. 
Here we are discussing Assumption 4.1 and Remark 4.2 of previous work (Liu and Xu, 2024: https://proceedings.mlr.press/v238/liu24d/liu24d.pdf). We highlight that our work does **not** need the fail-state assumption. For environments where the fail-state assumption does not hold, we can 1) either modify the reward of the environment to rescale the minimal value (of an absorbing state) to zero, which creates a fail-state, or 2) use heuristic methods to search for/approximate the minimal value. --- ### 3. Numerical experiments Thanks for your suggestion. We have conducted some experiments to verify our proposed algorithm. Please see the overall response for more details. --- ### 4. Clarify the linear function approximation in the open question. We will revise the open question as follows: Is it possible to design a computationally efficient and minimax optimal algorithm for robust offline RL with linear function approximation? --- ### 5. Typos and references. Thanks for pointing out these issues. We have fixed the typo and added the references. --- ### 6. Justification of why linear function approximation rather than general function approximation is considered in this work. In this work, we focus on offline DRMDPs with linear function approximation under the setting of $d$-rectangular linear DRMDPs with TV-divergence uncertainty sets, which is an understudied area with unique fundamental challenges compared to standard offline RL. It would be an interesting future research direction to incorporate general function approximation techniques, such as those in Chen and Jiang (2019), into offline DRMDPs. We will add the following paragraph to the introduction section to motivate our setting: Although standard offline MDPs based on linear function approximation have exhibited theoretical success (Jin et al., 2021; Yin et al., 2022; Xiong et al., 2022), DRMDPs with linear function approximation remain understudied. 
In particular, DRMDPs encounter unique difficulties when applying linear function approximation, even when the source-domain transition kernel is linear. The dual formulation in worst-case analyses induces extra nonlinearity for the function approximation. Consequently, the theoretical understanding of offline DRMDPs with function approximation remains elusive, even when the approximation is linear. In this work, we aim to address the essential challenges and explore the fundamental limit of this setting. In the conclusion section, we will add the following sentences to explore future research directions: Moreover, we would like to extend the pessimism principle utilized in this work beyond linear function approximation to explore general function approximation in DRMDPs. Leveraging the techniques for general function approximation in standard offline RL (Chen and Jiang, 2019), we would explore the unique challenges and fundamental limits of practically solving DRMDPs with general function approximation. --- We hope we have addressed all of your questions/concerns. If you have any further questions, we would be more than happy to answer them, and if you don’t, would you kindly consider increasing your score? --- **References** [1] Zhishuai Liu, and Pan Xu. "Distributionally robust off-dynamics reinforcement learning: Provable efficiency with linear function approximation." In International Conference on Artificial Intelligence and Statistics, pp. 2719-2727. PMLR, 2024. [2] Yin, Ming, Yaqi Duan, Mengdi Wang, and Yu-Xiang Wang. (2022). "Near-optimal Offline Reinforcement Learning with Linear Representation: Leveraging Variance Information with Pessimism." [3] Ying Jin, Zhuoran Yang, and Zhaoran Wang. (2021). Is pessimism provably efficient for offline RL? [4] Xiong, Wei, Han Zhong, Chengshuai Shi, Cong Shen, Liwei Wang, and Tong Zhang. (2022). Nearly Minimax Optimal Offline Reinforcement Learning with Linear Function Approximation: Single-Agent MDP and Markov Game. 
[5] Jin, Chi, Zhuoran Yang, Zhaoran Wang, and Michael I. Jordan. (2019). Provably efficient reinforcement learning with linear function approximation. --- Rebuttal Comment 1.1: Comment: Thank you for your clarification! I have increased my score, but I still have two more questions. 1. As I mentioned, the "fail-state" assumption doesn't necessarily hold in multiple MuJoCo environments, while the Nelder–Mead technique is a heuristic search method that cannot guarantee convergence to a global optimum. In these cases, I think the suboptimality gap shown in Theorem 4.2 cannot hold anymore. 2. I admit that there are some challenges in the setting of d-rectangular linear DRMDPs. However, given that existing online robust RL [58] and offline robust RL [26] have shown theoretical guarantees under general function approximation, it is still not clear why the authors don't consider general function approximation directly. In the authors' rebuttal, this part is somewhat dodged by highlighting the difficulty of d-rectangular linear DRMDPs. --- Reply to Comment 1.1.1: Comment: Thank you for your reply. We answer your further questions as follows. --- ### Q1. About the "fail-state" assumption and Nelder–Mead technique. In theory, we would like to clarify that Theorem 4.2 does not rely on the “fail-state” assumption. The minimization over $s$ can be treated as an oracle, a common approach in theoretical RL analysis [1, 2, 3, 4]. Therefore, **our theoretical results are valid for any setting that satisfies the $d$-rectangular linear DRMDP structure**. Specifically, [1, 2, 3, 4] focus on standard linear MDPs and propose computationally efficient methods with the help of an oracle. For example, in the Bellman operator, the maximization over the action space $\mathcal{A}$, especially when continuous, is treated as an oracle. 
Similarly, in our work, assuming access to an oracle for minimization over the (potentially continuous) state space allows us to concentrate on the fundamental limits and challenges of $d$-rectangular linear DRMDPs with TV-divergence uncertainty sets. In practice, the minimization/maximization can be approximated. We acknowledge that in RL, a gap often exists between theoretical guarantees and actual performance in practice. However, our focus in this work is on providing a rigorous theoretical understanding of the offline $d$-rectangular DRMDP setting, which aligns with the approach of other theoretical research and meets NeurIPS standards. Furthermore, we plan to explore ways to address the issue of minimization over the state space in future work, which arises from the strong duality for TV distances (see Proposition G.1). As an alternative, we could consider replacing the TV-divergence uncertainty set with a Kullback-Leibler uncertainty set [6] or a $\chi^2$ uncertainty set [5], both leading to strong duality without the minimization term. We anticipate these approaches will lead to more practical algorithms for $d$-rectangular linear DRMDPs, though they are beyond the scope of this current paper. --- ### Q2. Why not consider general function approximation directly. We would like to further elaborate on this issue from the perspectives of uncertainty sets and algorithm design. **Panaganti et al. (2022) ([26] in our paper)**: [26] study offline $(s,a)$-rectangular DRMDPs with general function approximation. However, the $(s,a)$-rectangular uncertainty set can lead to overly conservative policies, particularly when the transition probabilities exhibit inherent structure [7,8]. To address this issue, our work focuses on structured uncertainty sets, specifically the $d$-rectangular linear DRMDP. 
This structured uncertainty set allows for a linear representation of robust value functions, resulting in less conservative policies compared to the general $(s,a)$-rectangular uncertainty set. This is why we do not follow [26] in studying general function approximation. In fact, our study of function approximation under structured uncertainty sets represents a significant contribution of our work, distinguishing it from [26]. **Zhou et al. (2023) ([58] in our paper)**: [58] explores the Double-Sampling uncertainty set and the Integral Probability Metric uncertainty set. They focus on scenarios with a simulator, proposing a robust Natural Actor-Critic algorithm with finite-time convergence guarantees, supported by extensive numerical experiments to demonstrate robustness. However, their method cannot be easily extended to address the $d$-rectangular uncertainty set of linear DRMDPs due to fundamental differences in uncertainty set formulations and algorithm design (value iteration-based versus policy gradient-based). We acknowledge that the method in [58] and our work are orthogonal. This distinction highlights potential avenues for complementary research in the future. We will add the above discussion in our revision. If you have further questions, we are more than happy to discuss with you. --- **References** [1] Jin et al. Provably efficient reinforcement learning with linear function approximation. In COLT 2020. [2] He et al. "Nearly minimax optimal reinforcement learning for linear markov decision processes." In ICML 2023. [3] Xiong et al. Nearly Minimax Optimal Offline Reinforcement Learning with Linear Function Approximation: Single-Agent MDP and Markov Game. In ICLR 2023. [4] Yin et al. "Near-optimal Offline Reinforcement Learning with Linear Representation: Leveraging Variance Information with Pessimism." In ICLR 2022. [5] Shi et al. The curious price of distributional robustness in reinforcement learning with a generative model. arXiv preprint, 2023. 
[6] Zhou et al. Finite-sample regret bound for distributionally robust offline tabular reinforcement learning. In AISTATS 2021. [7] Goyal et al. Robust markov decision processes: Beyond rectangularity. Mathematics of Operations Research, 48(1):203–226, 2023. [8] Ma et al. Distributionally robust offline reinforcement learning with linear function approximation. arXiv preprint, 2022.
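As a toy illustration of the reward-rescaling option mentioned in our first reply above (our own construction with illustrative names, not the paper's algorithm): an absorbing state with constant per-step reward $c$ has value $c\cdot(H-h+1)$ at stage $h$, so uniformly shifting every reward by $-c$ rescales that value to zero and creates a fail-state, making $\min_s V(s)=0$ trivial.

```python
# Toy illustration of creating a "fail-state" by reward shifting (our own
# construction, not from the paper). An absorbing state with per-step reward c
# collects c at every remaining stage, so its stage-h value is c * (H - h + 1).
# Shifting all rewards by -c makes this value zero at every stage.

H = 5        # horizon
c = -2.0     # per-step reward of the absorbing state (a negative minimum value)

def absorbing_state_value(step_reward, h, horizon):
    """Value of an absorbing state at stage h: it collects step_reward until the horizon."""
    return step_reward * (horizon - h + 1)

values_before = [absorbing_state_value(c, h, H) for h in range(1, H + 1)]
values_after = [absorbing_state_value(c - c, h, H) for h in range(1, H + 1)]
```

After the shift, `values_after` is identically zero, so the minimum in the TV dual is attained at the fail-state without any search over the state space, whereas `values_before` is negative at every stage.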
Summary: This paper considers the distributionally robust Markov decision process (or sometimes robust MDP in the literature). In particular, it considers the linear RMDP with a $d$-rectangular TV uncertainty set, which decouples the uncertainty set from the state-action pair. Two algorithms are proposed to solve this, with the second algorithm being an improvement on the first. Extensive analysis is provided. Strengths: 1. The paper is well-written and easy to follow 2. Results are strong and thorough 3. Problem is well-motivated by introducing a hard instance (lower bound results) Weaknesses: 1. Comparison with other existing works is a bit unclear Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Can the authors provide more comparison with existing linear MDPs with $d$-rectangularity? Besides the difference in the type of uncertainty set used, can you compare your work with [1] in detail, especially the results and methodologies? 2. Continuing the first question, in particular, I noticed that your coverage assumption is stronger than [1]'s. It seems that you need the behavioral policy to cover states and actions that the optimal policy seldom visits as well. [1] states that with pessimism, they are able to achieve single-policy concentrability (w.r.t. only the optimal policy). Could you share some insights? [1] Xiaoteng Ma, Zhipeng Liang, Li Xia, Jiheng Zhang, Jose Blanchet, Mingwen Liu, Qianchuan Zhao, and Zhengyuan Zhou. Distributionally robust offline reinforcement learning with linear function approximation. arXiv preprint arXiv:2209.06620, 2022. Minor things 1. Question marks on line 618 Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No other limitation I'd like to bring up Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for your positive feedback on our work. We hope our response will fully address all your questions. --- ### 1. Comparison with Ma et al. (2022). We acknowledge that Ma et al. (2022)’s work is the most closely related to ours. However, we note that there are several technical flaws in their paper. We summarize the main issues here: (i) The proofs of their main lemmas (Lemma D.1 and Lemma D.2 in the arXiv version) related to the suboptimality decomposition, as well as the proofs of the theorems, are incorrect. (ii) The sample complexity results for the two algorithms are both incorrect in the order of $d$. (iii) Assumption 4.4 on the dual variable of the dual formulation of the KL-divergence is too strong to be realistic. (iv) The concept of mis-specification in Section 5.2 is not well-defined, which makes the results in Section 5 vacuous. Given these technical flaws, we believe the fundamental challenges of $d$-rectangular linear DRMDPs are not properly addressed in their work. It is also unclear whether $d$-rectangular linear DRMDPs with a KL-divergence uncertainty set are solvable by their current methodologies. Moreover, in our work, we start with the setting of $d$-rectangular linear DRMDPs with TV-divergence uncertainty sets. We address the essential challenges and explore the fundamental limit of $d$-rectangular linear DRMDPs. In the next version of our manuscript, we plan to add a paragraph discussing the comparison with Ma et al. (2022) in detail and further elaborating on our contributions. --- ### 2. Stronger coverage assumption. First of all, Assumption 5.1 in Ma et al. (2022) is problematic, since the right-hand side has an additional dependence on $d$, which makes it unreasonably large. 
Assumption 5.1 should instead take the form: there exists some absolute constant $c>0$, such that for any $(i, h, s, P) \in [d]\times[H] \times \mathcal{S} \times \mathcal{U}^{\rho}(P^0)$, we have $$ \Lambda_h \succeq \lambda I + K\cdot c \cdot \mathbb{E}^{\pi^{\star},P}\big[(\phi_i(s,a)\mathbf{1}_i)(\phi_i(s,a)\mathbf{1}_i)^\top |s_1=s \big], $$ which is exactly the robust partial coverage assumption in Blanchet et al. (2023, Assumption 6.3). On the one hand, a closer examination of Assumption 3.3 in our paper reveals that it guarantees a weaker version of the robust partial coverage assumption: there exists some constant $c^{\dagger}>0$, such that for any $(i, h, s, P)\in[d]\times[H]\times \mathcal{S} \times \mathcal{U}^{\rho}(P^0)$, we have \begin{align} \Lambda_h \succeq \lambda I + K\cdot c^{\dagger}/d \cdot \mathbb{E}^{\pi^{\star},P}\big[(\phi_i(s,a)\mathbf{1}_i)(\phi_i(s,a)\mathbf{1}_i)^\top |s_1=s \big]. \end{align} Nevertheless, Assumption 3.3 does not directly imply the robust partial coverage assumption. On the other hand, Assumption 3.3 is essential for achieving the minimax optimality of our algorithm. A similar phenomenon also appears in non-robust offline RL with linear function approximation. Specifically, leveraging the pessimism principle, Jin et al. (2021) show that the partial coverage assumption is enough for the provable efficiency of offline RL with linear function approximation. However, to achieve minimax optimality, Yin et al. (2022) and Xiong et al. (2022) show that a stronger full-type coverage is needed. Lastly, in a work concurrent to ours (Wang et al., 2024), the same assumption is also required to achieve tighter dependence on $d$ and $H$ (see Assumption 4 and Theorem 2 of their paper). --- ### 3. Question marks on line 618. Thanks for pointing out this typo. We have fixed it in our manuscript. --- We hope we have addressed all of your questions/concerns. 
If you have any further questions, we would be more than happy to answer them, and if you don’t, would you kindly consider increasing your score? --- **References** [1] Ying Jin, Zhuoran Yang, and Zhaoran Wang. Is pessimism provably efficient for offline RL? In International Conference on Machine Learning, pages 5084–5096. PMLR, 2021. [2] Xiong, Wei, Han Zhong, Chengshuai Shi, Cong Shen, Liwei Wang, and Tong Zhang. Nearly Minimax Optimal Offline Reinforcement Learning with Linear Function Approximation: Single-Agent MDP and Markov Game. In International Conference on Learning Representations (ICLR), 2023. [3] Blanchet, Jose, Miao Lu, Tong Zhang, and Han Zhong. Double Pessimism is Provably Efficient for Distributionally Robust Offline Reinforcement Learning: Generic Algorithm and Robust Partial Coverage. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. [4] Wang, He, Laixi Shi, and Yuejie Chi. Sample complexity of offline distributionally robust linear Markov decision processes. arXiv preprint arXiv:2403.12946, 2024. --- Rebuttal Comment 1.1: Comment: Dear authors, Thank you for responding to my questions. I am satisfied with your answer. I decided to increase my rating. Good luck! --- Reply to Comment 1.1.1: Comment: Thank you very much for your positive feedback! We will revise our paper according to your constructive reviews. Best, Authors
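For readers following the coverage discussion above, we recall the $d$-rectangular linear DRMDP structure that underlies the factor-wise covariance terms (stated in the standard form, with simplex feature map $\phi$ and nominal factor distributions $\mu_{h,i}^0$; this is our paraphrase of the setup, not an excerpt from the paper):
$$
P_h^0(s' \mid s,a) = \sum_{i=1}^d \phi_i(s,a)\,\mu_{h,i}^0(s'), \qquad
\mathcal{U}_h^{\rho}(s,a) = \Big\{ \textstyle\sum_{i=1}^d \phi_i(s,a)\,\mu_{h,i}(\cdot) \,:\, D_{\text{TV}}\big(\mu_{h,i} \,\|\, \mu_{h,i}^0\big) \le \rho \Big\},
$$
so the uncertainty is imposed factor-wise on the $\mu_{h,i}$ rather than per state-action pair, which is why the coverage condition involves the rank-one matrices $(\phi_i(s,a)\mathbf{1}_i)(\phi_i(s,a)\mathbf{1}_i)^\top$.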
Summary: This paper presents a theoretical study of distributionally robust MDPs. Their findings show that function approximation is both different and harder in robust RL as compared to offline RL. They show matching information-theoretic lower bounds for their novel algorithm. Strengths: 1. The paper has positioned itself and its improvement over past work really well (Table 1). 2. The paper's result about the information-theoretic lower bound has a lot of theoretical value. 3. The appendix is well written and the proofs are sound. Weaknesses: 1. More motivation on practical scenarios where DRMDPs are applicable will be useful. 2. Small simulation examples would further strengthen the paper to ensure that the constants in the sample complexities are reasonable. Technical Quality: 3 Clarity: 3 Questions for Authors: There is also literature on theoretical multi-task RL, which, while not focused on the distributional robustness aspect, still tackles the problem of planning in an environment with uncertain dynamics. For example see: https://arxiv.org/pdf/2402.12570 and follow-up works. I'd like the authors to cite and write the differences to Theorem 1 of the referenced paper, which uses function approximation for the representation as well, and comes up with uncertainty gaps for different state-action pairs of the transition model, and still gives a suboptimality gap on the policy planning in the downstream target task (see Theorem 2). Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: There are no significant technical limitations of this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for your positive feedback on our work. We hope our response fully addresses all of your questions. --- ### 1. More motivation on practical scenarios where DRMDPs are applicable. In many real-world applications, the agent only has access to a **single** source domain, which can be different from the target domains where the policy is deployed, assuming the same task is being performed. Importantly, the target domains are **unknown** and can vary during deployment. Given the limited information and large uncertainty, one may hold a pessimistic perspective, hoping the learned policy is robust enough to perform well even under the worst-case transition. This basic idea leads to the DRMDP framework, which models the unknown target domain via an uncertainty set defined around the source domain. We take the control of infectious diseases as an example. Suppose we have access to a simulator (source domain), which emulates one specific real-world disease infection process (target domain). However, the simulator does not have perfect knowledge of the complex real-world environment, but it is reasonable to assume that the real-world environment is close to the simulator, thus lying in an uncertainty set around it. Notably, the real-world environment can be any environment in the uncertainty set, and the DRMDP framework provides a worst-case guarantee for policies learned merely through the simulator. --- ### 2. Small simulation examples. Thanks for your suggestion. We have conducted some experiments to verify our proposed algorithm. Please see the overall response for more details. --- ### 3. Literature on theoretical multi-task RL. Thank you for bringing this work to our attention. We will add the following paragraphs to the related work section. 
Besides the distributionally robust perspective on solving the planning problem in a nearly unknown target environment, another line of work focuses on transfer learning in low-rank MDPs (Cheng et al., 2022; Lu et al., 2022; Agarwal et al., 2023; Bose et al., 2024). Specifically, the problem setup assumes that the agent has access to information from several source tasks. The agent learns a common representation from the source domains and then leverages the learned representation to learn a policy that performs well on the target tasks with limited information. This setting is in stark contrast to the setting of DRMDPs, where the agent only has access to the information of a single source domain, without any available information about the target domain, assuming the same task is being performed. This motivates the pessimistic attitude of the distributionally robust perspective. Among the aforementioned works, Bose et al. (2024) studied offline multi-task RL, which is the most closely related to our setting. In particular, they investigate the representation transfer error in their Theorem 1, stating that the learned representation can lead to a transition kernel that is close to the target kernel in terms of the TV divergence. Note that this uncertainty is induced by the representation estimation error, which differs from our setting, where the uncertainty comes from perturbations of the underlying factor distributions. Nevertheless, this work provides evidence that TV divergence is a reasonable measure to quantify the uncertainty in transition kernels, and it motivates a future research direction of learning policies that are robust to the uncertainty induced by the representation estimation error. --- We hope we have addressed all of your questions/concerns. If you have any further questions, we would be more than happy to answer them, and if you don’t, would you kindly consider increasing your score? 
--- **References** [1] Rui Lu, Andrew Zhao, Simon S Du, and Gao Huang. Provable general function class representation learning in multitask bandits and mdp. Advances in Neural Information Processing Systems, 35:11507–11519, 2022. [2] Yuan Cheng, Songtao Feng, Jing Yang, Hong Zhang, and Yingbin Liang. Provable benefit of multitask representation learning in reinforcement learning. Advances in Neural Information Processing Systems, 35:31741–31754, 2022. [3] Alekh Agarwal, Yuda Song, Wen Sun, Kaiwen Wang, Mengdi Wang, and Xuezhou Zhang. Provable benefits of representational transfer in reinforcement learning. In The Thirty Sixth Annual Conference on Learning Theory, pages 2114–2187. PMLR, 2023. [4] Avinandan Bose, Simon Shaolei Du, and Maryam Fazel. Offline multi-task transfer RL with representational penalization. arXiv preprint arXiv:2402.12570, 2024. [5] Zhishuai Liu and Pan Xu. Distributionally robust off-dynamics reinforcement learning: Provable efficiency with linear function approximation. In International Conference on Artificial Intelligence and Statistics, pages 2719–2727. PMLR, 2024. [6] Ying Jin, Zhuoran Yang, and Zhaoran Wang. Is pessimism provably efficient for offline RL? In International Conference on Machine Learning, pages 5084–5096. PMLR, 2021. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response. I maintain my positive rating.
Rebuttal 1: Rebuttal: ## Overall Response We would like to thank all reviewers for their insightful and detailed reviews and comments. We have addressed the comments from the reviewers and revised the manuscript accordingly. In the following, we provide overall responses to several common questions raised by reviewers. ### 1. Experimentation Per the reviewers' requests, we conducted numerical experiments to illustrate the performance of our two algorithms. In the attached PDF, we present additional experimental results. Specifically, we leverage the simulated linear MDP (Figure 1) proposed by Liu and Xu (2024) and adapt it to the offline setting. In particular, we set the behavior policy to be the policy that chooses actions uniformly at random at each state $s$ and stage $h$. The number of trajectories (i.e., the sample size) of the offline dataset is set to 100. We compare our algorithms with their non-robust counterpart, PEVI, proposed by Jin et al. (2021). Figure 2 shows the performance of the policies learned by the three algorithms. We conclude that both of our proposed algorithms are robust to environmental perturbation, in contrast to the non-robust PEVI. Furthermore, VA_DRPVI slightly outperforms DRPVI in most settings. These numerical results are consistent with our theoretical findings. ### 2. Comparison with Ma et al. (2022) We acknowledge that the work of Ma et al. (2022) is the most closely related to ours. However, we note that there are several technical flaws in their paper. We summarize the main issues here: * The proofs of the main lemmas (Lemmas D.1 and D.2 in the arXiv version) related to the suboptimality decomposition, as well as the proofs of the theorems, are incorrect. * The sample complexity results for the two algorithms are both incorrect in the order of $d$. * Assumption 4.4 on the dual variable of the dual formulation of the KL-divergence is too strong to be realistic. 
* The concept of mis-specification in Section 5.2 is not well-defined, which makes all results in Section 5 vacuous. Given these technical flaws, we believe the fundamental challenges of $d$-rectangular linear DRMDPs are not properly addressed in their work. It is also unclear whether $d$-rectangular linear DRMDPs with KL-divergence uncertainty sets are solvable by their current methodologies. Moreover, in our work, we start with the setting of $d$-rectangular linear DRMDPs with TV-divergence uncertainty sets. We address the essential challenges and explore the fundamental limits of this setting. --- **References** [1] Zhishuai Liu and Pan Xu. Distributionally robust off-dynamics reinforcement learning: Provable efficiency with linear function approximation. In International Conference on Artificial Intelligence and Statistics, pages 2719–2727. PMLR, 2024. [2] Ying Jin, Zhuoran Yang, and Zhaoran Wang. Is pessimism provably efficient for offline RL? In International Conference on Machine Learning, pages 5084–5096. PMLR, 2021. [3] Xiaoteng Ma, Zhipeng Liang, Jose Blanchet, Mingwen Liu, Li Xia, Jiheng Zhang, Qianchuan Zhao, and Zhengyuan Zhou. Distributionally robust offline reinforcement learning with linear function approximation. arXiv preprint arXiv:2209.06620, 2022. Pdf: /pdf/78e890362ecf856393ac09bbfb325b818a21fd28.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
MSPE: Multi-Scale Patch Embedding Prompts Vision Transformers to Any Resolution
Accept (poster)
Summary: 1. They analyze the resolution adaptability of ViT models, identify the patch embedding layer as the crucial component, and provide a low-cost solution. 2. They propose Multi-Scale Patch Embedding (MSPE), which enhances ViT models by substituting the standard patch embedding layer with learnable, adaptive convolution kernels, enabling ViTs to be applied to any input resolution. 3. Experiments demonstrate that with minimal training (only five epochs), MSPE significantly enhances performance across different resolutions on classification, segmentation, and detection tasks. Strengths: 1. They identify two problems in the optimization objective of FlexiViT and illustrate that similarity in patch embeddings does not ensure the best performance. 2. MSPE allows ViTs to handle variable input resolutions directly, significantly improving performance on low-resolution inputs and maintaining high performance on high-resolution inputs. 3. MSPE is easy to apply to existing ViT models as it only modifies the patch embedding layer, avoiding the need for high-cost training or extensive model modifications. 4. MSPE significantly improves performance across different visual tasks, including image classification, segmentation, and detection, especially in comparison to previous methods like FlexiViT and ResFormer. Weaknesses: 1. This method assumes that the input x follows a normal distribution x∼N(0,1) for finding the optimal solution in downscaling scenarios. However, this assumption may not hold for all input images, particularly in real-world scenarios where image data often does not follow a normal distribution. 2. Equation (3) isolates the optimization to gθ, but a more holistic approach that includes Encϕ could potentially lead to better alignment of the entire model's parameters, ensuring more coherent and efficient learning across different resolutions. 
Technical Quality: 3 Clarity: 3 Questions for Authors: It's impressive that MSPE achieves much better results with low-resolution images. Is it possible to also show the performance with higher image resolutions larger than 1000? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for providing us with your valuable feedback and suggestions. We appreciate your input and have carefully considered your questions. Below, we provide detailed responses to each of them: > #### **W1: Assuming $\mathcal{X} \sim \mathcal{N}(0, 1)$ in Image Processing** We completely agree with this point. FlexiViT aims to equalize patch embeddings, using the assumption $\mathcal{X} \sim \mathcal{N}(0, 1)$ for its downscaling solutions. However, in real-world scenarios, image pixels do not necessarily meet this assumption, leading to poor FlexiViT performance at low resolutions. In light of this issue, we introduce learnable patch embedding and propose a more reasonable optimization condition that minimizes loss across different resolutions, which compensates for the limitation of the above assumption. In other words, our method does not rely on the $\mathcal{X} \sim \mathcal{N}(0, 1)$ assumption, and the experimental results in Sec. 4 demonstrate that our method significantly outperforms FlexiViT on low-resolution inputs. We will revise the motivation part (Sec. 2.4) to clarify this. > #### **W2: Exploring Optimization of Encoder** We appreciate the reviewer's insightful question. In this study, we aim to apply minimal modifications to various pre-trained ViT models, finding that optimizing only the patch embedding layer achieves excellent performance. Indeed, optimizing the encoder can increase the model's performance but adds computational cost. Furthermore, optimizing the encoder must be handled carefully to avoid degrading existing representations and causing knowledge forgetting. Optimizing the patch embedding layer does not impact the original model's performance, as MSPE only optimizes the added convolutional kernel parameters. Below are the experimental results for optimizing the encoder, showing performance improvements at other resolutions but a decrease at the standard 224x224 resolution. 
We hope these results address your concerns about encoder optimization. We will add the above discussion in the revised version. | **Method** | **Resolution** | | | | | | :----------- | :------------: | :--------: | :--------: | :--------: | :--------: | | | 56 | 112 | 168 | 224 | 448 | | Vanilla | 54\.67 | 78\.75 | 83\.66 | 85\.10 | 53\.81 | | MSPE | 77\.94 | 83\.75 | 84\.70 | **85\.10** | **85\.11** | | MSPE+Encoder | **79\.33** | **83\.94** | **84\.75** | 84\.73 | 85\.07 | > #### **Q1: Testing on High-Resolution Images** Thank you for bringing up this important point. Below are our experimental results at higher resolutions, where we tested two types of ViT models: non-overlapping patch embedding (ViT) and overlapping patch embedding (PVT). We will add the high-resolution results in the revised version. | | **Method** | **Resolution** | | | | | | | | | | | :-: | :--------- | :------------: | :--------: | :--------: | :--------: | :--------: | :--------: | :--------: | :--------: | :--------: | :--------: | | | | 224 | 448 | 672 | 896 | 1120 | 1792 | 2240 | 2688 | 3360 | 4032 | | ViT | Vanilla | 85\.10 | 53\.04 | 5\.17 | 0\.68 | 0\.33 | 0\.13 | 0\.08 | 0\.08 | 0\.08 | 0\.08 | | | MSPE | **85\.10** | **85\.11** | **85\.13** | **85\.14** | **85\.14** | **85\.15** | **85\.14** | **85\.14** | **85\.13** | **85\.16** | | PVT | Vanilla | 83\.12 | 73\.64 | 71\.18 | 67\.04 | 67\.04 | 64\.19 | 63\.73 | 63\.50 | 64\.44 | 63\.26 | | | MSPE | **83\.12** | **82\.44** | **83\.11** | **82\.76** | **83\.02** | **82\.89** | **82\.93** | **82\.65** | **82\.97** | **82\.58** | --- Rebuttal Comment 1.1: Comment: Thank you for clarifying the weaknesses. Your explanation has resolved my concerns. I'm glad to see the tremendous performance on the higher resolution images. I maintain my score as accepted.
Summary: The paper introduces Multi-Scale Patch Embedding (MSPE), a novel approach to enhance Vision Transformers (ViTs) by allowing them to adapt to variable input resolutions without resizing images to a fixed resolution. MSPE uses multiple variable-sized patch kernels and selects the optimal parameters for different resolutions, maintaining performance across tasks like image classification, segmentation, and detection. Strengths: 1. MSPE replaces the standard patch embedding layer in Vision Transformers (ViTs) with multiple learnable adaptive convolution kernels, enabling the effective handling of different input resolutions. 2. MSPE demonstrates improved performance across varying resolutions in classification, segmentation, and detection tasks with minimal additional training. Weaknesses: 1. This paper does not experiment with other more effective positional encoding strategies. The paper mentions that linear interpolation of positional embedding has achieved acceptable performance in existing work. 2. The paper analyses the importance of patch embedding but lacks corresponding experimental evidence comparing its importance to other components like the encoder and positional encoding. Additionally, the components may show different impacts for different tasks; it does not analyze how the importance of these components may vary across different tasks. 3. The training efficiency of MSPE is highlighted, yet the potential long-term benefits of additional training epochs are not fully explored. It would be beneficial to see more detailed comparisons over extended training periods to better understand the model's convergence behavior and potential overfitting issues. 4. Minor Error: In 2.1 Vision Transformer Models, the shape of image `x` is defined as `h*w*c`, but the kernel shape obtained is `h_k * w_k * d`. Shouldn't the kernel shape be `h_k * w_k * c * d`? Technical Quality: 2 Clarity: 2 Questions for Authors: 1. 
Is selecting the hyperparameter λ during the training process crucial for the model's performance? Can different values of λ yield good results? 2. Can you elaborate on the differences between MSPE and FlexiViT? 3. Is there a rationale behind selecting the hyperparameter K and its corresponding shape in MSPE, or was it arbitrarily chosen? 4. Did the authors attempt to apply the model to uncommon resolutions (e.g., 200x150) to test its generalization performance? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We have carefully considered your questions and would like to address them as below: > #### **W1: Experiments of Positional Encoding Strategies** Thanks for your comment. MSPE employs the vanilla position encoding approach, a learnable $(N, DIM)$ vector where $N = \left\lfloor \frac{h}{h_k} \right\rfloor \times \left\lfloor \frac{w}{w_k} \right\rfloor$. For non-standard input resolutions, we resize the $(N, DIM)$ vector to $(N_{new}, DIM)$ using linear interpolation, commonly used in ViT models as **_resample absolute position embedding_**. Experiments in Sec. 4 demonstrate that, using the vanilla position encoding strategy, MSPE significantly outperforms other methods with complex position encoding strategies, such as ResFormer and NaViT. In fact, we intentionally avoid other complex positional encoding methods, as they do not align with pre-trained ViT models. As shown in the results below, changing the position encoding significantly reduces the performance of pre-trained models (ViT/B-16), and retraining does not recover the expected performance. | **Method** | **Training Epoch** | | | | | | :----------------------- | :----------------: | :----: | :----: | :----: | :----: | | | 0 | 5 | 10 | 20 | 50 | | Learned 2D (NaViT) | 0\.10 | 80\.36 | 81\.29 | 81\.85 | 82\.33 | | Global-Local (ResFormer) | 0\.10 | 77\.97 | 78\.80 | 79\.54 | 80\.42 | | Vanilla | **85\.10** | - | - | - | - | > #### **W2: Analysis of Patch Embedding and Its Relative Importance** Thank you for this comment. We focus on improving patch embedding rather than the encoder or positional encoding for two reasons. - **First**, as demonstrated by the analysis in W1, altering the positional encoding strategy necessitates retraining the trained ViT models, incurring significant costs. - **Second**, modifying the encoder presents similar challenges. Our goal, as detailed in the motivation (Sec. 
2.4), is to adapt pre-trained ViT models to different resolutions with minimal changes. Modifying patch embedding is easy to implement, compatible with pre-trained ViT models, and significantly improves performance at various resolutions without compromising the original performance. - Does the analysis yield different conclusions for different tasks? In Sec. 2.4, we examined the impact of patch embedding on classification tasks by analyzing the class token at various resolutions. In classification, the class token carries global semantic information and serves as the feature for classification. In segmentation and detection tasks, such as SETR and ViTDet, the class token is similarly utilized as the feature map for segmentation and detection. Thus, the analysis of the class token is consistent in principle across these segmentation and detection models. The experimental results in Tables 5 and 6 support our findings, demonstrating performance improvement in segmentation and detection tasks. > #### **W3: Analysis of Extended Training Epochs** Thanks for your suggestion. We have provided results for **1, 3, 5, 10, and 20** training epochs in Table 5 of Appendix B, which we hope will address your concerns. These results show that MSPE stabilizes after 5 epochs, and 20 epochs result in slight improvements without overfitting issues. One advantage of our method is that it requires only 5 training epochs: because MSPE only modifies the patch embedding layer of the ViT model, it has few trainable parameters, so very few training epochs are needed. > #### **W4: Minor Error** Thank you for highlighting this error. We agree with your suggestion about the kernel shape, and we have corrected it in the revised version of the paper. 
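As a minimal illustration of the vanilla positional-encoding strategy described in W1 (resizing a learned $(N, DIM)$ embedding to $(N_{new}, DIM)$ by linear interpolation), here is a 1-D sketch with our own function name; actual ViT implementations typically interpolate over the 2-D patch grid instead of the flat token axis:

```python
import numpy as np

def resample_pos_embed(pos, n_new):
    # pos: (N, D) learned absolute position embedding. Linearly
    # interpolate each embedding dimension from N to n_new positions.
    n, d = pos.shape
    old_x = np.linspace(0.0, 1.0, n)
    new_x = np.linspace(0.0, 1.0, n_new)
    return np.stack(
        [np.interp(new_x, old_x, pos[:, j]) for j in range(d)], axis=1
    )

pos = np.arange(8, dtype=float).reshape(4, 2)   # toy (N=4, D=2) embedding
print(resample_pos_embed(pos, 7).shape)          # (7, 2)
```

The endpoints of the original embedding are preserved, and resampling to the original N returns the embedding unchanged.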
> #### **Q1, Q3: Analysis of $\lambda$ and $K$** - The loss function is defined as: $$\mathcal{L}_{\theta}(x,y)= \sum_{i=1}^K \ell\big[\text{Enc}_{\phi}(g_{\widehat{\theta}_i}(B_{r}^{r_i}x)), y\big] + \lambda \cdot \ell\big[\text{Enc}_{\phi}(g_{\theta}(x)), y\big]$$ $\lambda$ is a hyperparameter to prevent performance degradation. The ablation study in Figure 7(a) presents results for $\lambda$ values of 0, 1, and 2, indicating: **(1)** The parameter is essential for maintaining performance, with $\lambda=1$ enhancing accuracy by 4.2% over $\lambda=0$. **(2)** The hyperparameter shows limited sensitivity, as increasing from $\lambda=1$ to $\lambda=2$ makes little difference. The revised version will include results for a wider range of $\lambda$. - $K$ represents the number of convolutional kernels used for patch embedding. It is impractical to use a unique kernel size for each resolution. In MSPE, both the size and ratio of the kernels $(w_{\theta}^i, b_{\theta}^i)$ are adjustable. The ablation study in Figure 7(b) demonstrates that $K=4$ is adequate, employing kernel sizes of 16x16, 12x12, 8x8, and 4x4. > #### **Q2: Differences between MSPE and FlexiViT** FlexiViT optimizes patch embeddings to be consistent across resolutions through explicit analytical transformations. However, it has two disadvantages: - **First**, it imposes a strict sufficient condition for handling varying resolutions, but similarity in patch embeddings does not ensure optimal performance. - **Second**, in the downscaling scenario, there is no analytical solution for this goal. In contrast, MSPE optimizes patch embeddings by minimizing the objective function across different resolutions, employing multiple dynamically adjustable patch embedding kernels to handle different resolutions effectively. Experimental results demonstrate that MSPE significantly outperforms FlexiViT. 
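For readers unfamiliar with FlexiViT's explicit analytical transformation mentioned in Q2, here is a 1-D sketch with our own helper names: PI-resize picks the resized kernel $\hat{\omega} = (B^{+})^{\top}\omega$, where $B$ is the linear resize map, so that $\langle Bx, \hat{\omega}\rangle = \langle x, \omega\rangle$. This is exact when upsampling ($B$ has full column rank), but there is no exact solution when downscaling, which is the second disadvantage noted above.

```python
import numpy as np

def resize_matrix(n, m):
    # Matrix form of a 1-D linear-interpolation resize from n to m
    # samples: column i is the resized i-th standard basis vector.
    old_x = np.linspace(0.0, 1.0, n)
    new_x = np.linspace(0.0, 1.0, m)
    eye = np.eye(n)
    return np.stack([np.interp(new_x, old_x, eye[:, i]) for i in range(n)], axis=1)

def pi_resize(w, B):
    # Pseudo-inverse resize: choose w_hat so that <B x, w_hat> = <x, w>.
    return np.linalg.pinv(B).T @ w

w = np.array([1.0, 2.0, 3.0, 4.0])    # toy 1-D patch kernel
B = resize_matrix(4, 8)               # upsampling: B has full column rank
w_hat = pi_resize(w, B)
x = np.array([0.5, -1.0, 2.0, 0.25])  # toy 1-D patch
print(abs(x @ w - (B @ x) @ w_hat) < 1e-8)  # True: token values match exactly
```

When downscaling, $B$ is wide, so $B^{+}B \neq I$ and no $\hat{\omega}$ can reproduce $\langle x, \omega\rangle$ for every $x$, illustrating why a loss-based criterion like MSPE's is needed there.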
> #### **Q4: Test at Non-Standard Resolutions** In Figures 1, 4, and 9 of our paper, we tested many uncommon resolutions, such as 192x144, 80x192, 896x128, etc. Our method can also be applied to resolutions like 200x150. --- Rebuttal 2: Comment: The response has resolved my concerns, and I have raised the score.
Summary: This paper proposes to substitute the standard patch embedding with multiple variable-sized patch kernels. This eliminates the need to resize the original image. Extensive experiment results are shown to demonstrate the benefits. Strengths: The problem is well defined and the proposed method is sound. Convincing results are shown to support the claims. Weaknesses: NA Technical Quality: 4 Clarity: 4 Questions for Authors: The highest resolution shown is 448, 2x the training resolution 224. A typical cell phone camera has a resolution around 2K. Could the proposed method bridge this 10x gap? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for providing valuable feedback and suggestions. We greatly appreciate your input and have completed the necessary experiments. Here is our detailed response to your question: >**Q: Experimental Results on More Than 10x Resolution** Our method can be directly applied to high-resolution images because the shape and size of the patch embedding kernels adjust dynamically to the input resolution, and multiple convolution kernels cover different resolution ranges. Below are our experimental results for high-resolution images. We tested models with non-overlapping patch embedding (ViT) and overlapping patch embedding (PVT). We will include these results in the revised version. | | **Method** | **Resolution** | | | | | | | | | | | :-: | :--------- | :------------: | :--------: | :--------: | :--------: | :--------: | :--------: | :--------: | :--------: | :--------: | :--------: | | | | 224 | 448 | 672 | 896 | 1120 | 1792 | 2240 | 2688 | 3360 | 4032 | | ViT | Vanilla | 85\.10 | 53\.04 | 5\.17 | 0\.68 | 0\.33 | 0\.13 | 0\.08 | 0\.08 | 0\.08 | 0\.08 | | | MSPE | **85\.10** | **85\.11** | **85\.13** | **85\.14** | **85\.14** | **85\.15** | **85\.14** | **85\.14** | **85\.13** | **85\.16** | | PVT | Vanilla | 83\.12 | 73\.64 | 71\.18 | 67\.04 | 67\.04 | 64\.19 | 63\.73 | 63\.50 | 64\.44 | 63\.26 | | | MSPE | **83\.12** | **82\.44** | **83\.11** | **82\.76** | **83\.02** | **82\.89** | **82\.93** | **82\.65** | **82\.97** | **82\.58** | --- Rebuttal Comment 1.1: Comment: Thanks for adding new results. I assume these are up sampled images and hence little change in accuracy. 
If so, please make it clear in the final version.
Summary: The paper aims to address the challenge of adapting ViTs to variable input resolutions, which is a critical issue often overlooked in real-world applications. The authors propose a new method named Multi-Scale Patch Embedding (MSPE), which enhances the patch embedding layer by incorporating multiple variable-sized patch kernels. This approach allows the model to process images of varying resolutions without resizing, thus maintaining performance across different resolutions. Strengths: 1. Problem Identification: This paper clearly identifies the gap in current ViT models, i.e., they are unable to handle variable input resolutions effectively. This is a crucial problem as real-world images often come in various resolutions. 2. Sound Approach: The proposed MSPE substitutes the standard patch embedding layer with learnable adaptive convolution kernels, which allows the model to adjust to different input resolutions dynamically. 3. Theoretical Analysis: The paper offers a thorough theoretical analysis of the problem and the proposed solution. This includes a detailed discussion of the patch embedding layer's role and the limitations of existing methods like FlexiViT and ResFormer. 4. Comprehensive Experiments: The authors provide extensive experimental results demonstrating the effectiveness of MSPE across various tasks, including image classification, segmentation, and detection. Weaknesses: 1. Technical Contribution: Although the proposed method improves the multi-resolution performance of ViTs, it essentially solves an engineering problem from an engineering perspective. 2. Scalability Concerns: Although the method is low-cost and compatible with existing ViT models, the paper does not fully address potential scalability issues when applied to very large datasets or extremely high-resolution images. 3. 
Ablation Studies: The paper could benefit from more detailed ablation studies to isolate the contributions of different components of the MSPE and better understand the impact of each design choice. 4. Real-World Applications: While the experiments are comprehensive, the paper could include more real-world application scenarios to demonstrate the practical benefits and robustness of MSPE in diverse settings. Technical Quality: 3 Clarity: 3 Questions for Authors: See Weakness. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors addressed the limitation in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate your insightful comments and suggestions, as they have been helpful in refining and enhancing our work. We have thoroughly reviewed all of your points and have addressed your concerns as outlined below: > #### **W1: Technical Contribution** Thank you for your kind feedback. In fact, images of different sizes and aspect ratios represent different data distributions, and effectively handling these varying resolutions is critical to addressing data distribution challenges in open-world settings. This is particularly crucial for real-world applications, as image resolutions are diverse, not fixed. However, ViTs lack robustness to different resolutions. As demonstrated in our experiments (Sec. 4.1), a model tuned for 224x224 resolution significantly underperforms when processing inputs at 448x448 or 128x128. Previous works faced two major issues: **(1)** they introduced complex modules and strategies that are incompatible with existing ViT architectures, requiring the entire model to be retrained; **(2)** they offered insufficient adaptability across various sizes and aspect ratios, limiting their performance at multiple resolutions. Our method does not require costly training or complex modifications, making it easily applicable to most ViT models. Moreover, it significantly outperforms previous methods. The above discussion will be added in the revised version. > #### **W2: Scalability Concerns** We agree that scalability is crucial for method applicability. In Sec. 4 of our paper, we conducted extensive experiments across multiple scenarios and datasets, including ImageNet-1K, COCO2017, ADE20K, and Cityscapes, as well as on various ViT models like ViT, DeiT III, and PVT. To further address your concerns about scalability, we also performed high-resolution tests on ImageNet-1K. The following experimental results demonstrate that our method performs excellently across large datasets and high-resolution settings. 
We will add those results to the revised version. | | **Method** | **Resolution** | | | | | | | | | | | :-: | :--------- | :------------: | :--------: | :--------: | :--------: | :--------: | :--------: | :--------: | :--------: | :--------: | :--------: | | | | 224 | 448 | 672 | 896 | 1120 | 1792 | 2240 | 2688 | 3360 | 4032 | | ViT | Vanilla | 85\.10 | 53\.04 | 5\.17 | 0\.68 | 0\.33 | 0\.13 | 0\.08 | 0\.08 | 0\.08 | 0\.08 | | | MSPE | **85\.10** | **85\.11** | **85\.13** | **85\.14** | **85\.14** | **85\.15** | **85\.14** | **85\.14** | **85\.13** | **85\.16** | | PVT | Vanilla | 83\.12 | 73\.64 | 71\.18 | 67\.04 | 67\.04 | 64\.19 | 63\.73 | 63\.50 | 64\.44 | 63\.26 | | | MSPE | **83\.12** | **82\.44** | **83\.11** | **82\.76** | **83\.02** | **82\.89** | **82\.93** | **82\.65** | **82\.97** | **82\.58** | > #### **W3: Ablation Studies** Thank you for your suggestions. We conducted extensive ablation experiments in Sec. 4.4 and Appendix D, including training epochs, model size, hyperparameters, kernel count (K), and resizing methods. The results are displayed in Figures 6, 7, 8, 10, and Table 6. The experiments show that our method is robust to hyperparameters and that increasing training epochs and kernel counts boosts performance. We achieve good performance and basic convergence with 5 training epochs and 4 kernels. We also evaluated different resizing methods, including nearest, bilinear, and bicubic, with PI-resize proving the most effective, as shown in Figures 6 and 8. We hope these detailed results address your concerns about ablation studies. We would revise the ablation study part (Sec. 4.4) to make it clearer. > #### **W4: Real-World Applications** We completely agree with this. Adapting to different resolutions is a key challenge for ViT models in real-world applications. Our method was evaluated on multiple real-world datasets such as Cityscapes, ADE20K, ImageNet-1K, and COCO2017, including tasks like classification, detection, and segmentation. 
The Cityscapes dataset collects street scenes from various cities; ADE20K includes scenes from more than 150 settings; ImageNet-1K and COCO2017 are extensive, covering numerous real-world environments. Tables 5 and 6 in our paper show that MSPE yields significant improvements across various resolutions and task settings. We hope these experimental results confirm its substantial potential for real-world application. --- Rebuttal Comment 1.1: Comment: The rebuttal addresses most of my concerns. I would raise my score.
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their detailed and perceptive comments, which have significantly refined and improved this paper. We are grateful that the reviewers appreciate our paper in various aspects, including its well-defined problem and theoretical analysis [tsgU, RdsV, 9L8r], simple but solid method [tsgU, RdsV, gLBy, 9L8r], and remarkable performance [tsgU, RdsV, gLBy, 9L8r]. In this work, we explore the challenges of applying ViT models at different resolutions and propose MSPE, which improves ViT models by replacing the standard patch embedding layer with multiple learnable, adaptive convolution kernels. This makes ViTs adaptable to any input resolution without requiring high-cost training or extensive model modifications. *** Reviewers are particularly interested in how our method performs at high resolutions. Below are the results at high resolution. The results demonstrate that MSPE significantly enhances performance at high resolutions, enabling a single ViT model to effectively cover resolutions from 28x28 to 4032x4032, which previous methods could not achieve. 
|     | **Method** | **Resolution** |            |            |            |            |            |            |            |            |            | | :-: | :--------- | :------------: | :--------: | :--------: | :--------: | :--------: | :--------: | :--------: | :--------: | :--------: | :--------: | |     |            | 224            | 448        | 672        | 896        | 1120       | 1792       | 2240       | 2688       | 3360       | 4032       | | ViT | Vanilla    | 85\.10         | 53\.04     | 5\.17      | 0\.68      | 0\.33      | 0\.13      | 0\.08      | 0\.08      | 0\.08      | 0\.08      | |     | MSPE       | **85\.10**     | **85\.11** | **85\.13** | **85\.14** | **85\.14** | **85\.15** | **85\.14** | **85\.14** | **85\.13** | **85\.16** | | PVT | Vanilla    | 83\.12         | 73\.64     | 71\.18     | 67\.04     | 67\.04     | 64\.19     | 63\.73     | 63\.50     | 64\.44     | 63\.26     | |     | MSPE       | **83\.12**     | **82\.44** | **83\.11** | **82\.76** | **83\.02** | **82\.89** | **82\.93** | **82\.65** | **82\.97** | **82\.58** | *** For other specific questions, we provide detailed responses to each reviewer below. We will carefully revise the paper based on all the feedback from the four reviewers. Thank you once again for your valuable suggestions!
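For concreteness, the multi-kernel patch-embedding dispatch at the core of MSPE can be sketched as follows. This is a simplified illustration with our own names and a naive selection rule (pick the kernel whose non-overlapping patch grid is closest to a target token count); in MSPE the kernels are learnable and adaptively resized, which this sketch does not model.

```python
import numpy as np

def patch_embed(x, w):
    # Non-overlapping patch embedding: split the (c, h, w) image into
    # (kh, kw) patches and project each with a kernel of shape
    # (d, c, kh, kw), returning (num_tokens, d).
    d, c, kh, kw = w.shape
    _, h, wid = x.shape
    gh, gw = h // kh, wid // kw
    patches = x[:, :gh * kh, :gw * kw].reshape(c, gh, kh, gw, kw)
    return np.einsum('cikjl,dckl->ijd', patches, w).reshape(gh * gw, d)

def mspe_embed(x, kernels, target_tokens):
    # Pick, among the variable-sized kernels, the one whose patch grid
    # yields a token count closest to the target, then embed.
    def n_tokens(w):
        return (x.shape[1] // w.shape[2]) * (x.shape[2] // w.shape[3])
    best = min(kernels, key=lambda w: abs(n_tokens(w) - target_tokens))
    return patch_embed(x, best)

x = np.ones((3, 32, 32))                    # toy image
kernels = [np.ones((5, 3, 16, 16)), np.ones((5, 3, 8, 8))]
print(mspe_embed(x, kernels, target_tokens=16).shape)  # (16, 5): the 8x8 kernel wins
```

Because only the embedding layer depends on the kernel size, the same downstream encoder receives a comparable token sequence regardless of the input resolution.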
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
UMB: Understanding Model Behavior for Open-World Object Detection
Accept (poster)
Summary: The paper presents Understanding Model Behavior (UMB), a framework for Open-World Object Detection (OWOD) that not only detects unknown objects but also analyzes the decision-making process with text attributes. UMB first prompts large language models to generate text attributes. Then, it models the empirical, in-distribution, and out-of-distribution probabilities to estimate the likelihood of an object being predicted as foreground. Based on this, UMB can infer the similarity of unknown objects to known classes, thereby identifying the most important attributes. It significantly improves over previous state-of-the-art methods on the Real-World Object Detection (RWD) benchmark, demonstrating its effectiveness in identifying unknown objects. Strengths: 1. The motivation for modeling fine-grained text attributes in open-world learning is reasonable and interesting. 2. The performance gains on benchmarks are significant in both known and unknown classes. 3. The method can predict attributes for unknown classes, benefiting practical usage. Weaknesses: 1. The compared methods are not convincing, lacking some more advanced open-world object detection works. 2. The ablative studies are not sufficient. 3. There are some typos that hinder readability. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. The performance comparison in Table 1 lacks some advanced open-world object detectors [1-3] 2. It is suggested that more ablative studies be conducted to justify fine-grained designs, including the linear interpolation in PMM, different texture attribute designs, and shift & scale in Eqn. 11. The corresponding experimental analysis should also be given. 3. There are some mismatches between Fig. 2 and the method descriptions. I cannot find Class-agnostic Text Embedding in Sec 4.1, and I am confused about the Fig. 3 caption "To establish To establish." 4. 
The literature review should cover more recently published works on open-set learning for object detection [1-4]. [1] Zohar, O., Wang, K. C., & Yeung, S. (2023). Prob: Probabilistic objectness for open-world object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 11444-11453). [2] Wang, Z., Li, Y., Chen, X., Lim, S. N., Torralba, A., Zhao, H., & Wang, S. (2023). Detecting everything in the open world: Towards universal object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 11433-11443). [3] Ma, S., Wang, Y., Wei, Y., Fan, J., Li, T. H., Liu, H., & Lv, F. (2023). Cat: Localization and identification cascade detection transformer for open-world object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 19681-19690). [4] Li, W., Guo, X., & Yuan, Y. (2023). Novel Scenes & Classes: Towards Adaptive Open-set Object Detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 15780-15790). Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: No potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the valuable feedback you provided on our work, particularly regarding the comparisons with recent works, the ablation experiments, and the content discrepancies. Below is our detailed response to each of your comments: **Comparison with Recent Works**: The references [1]-[4] you mentioned include three distinct areas: open-set object detection [4], open-vocabulary object detection (OVC) [2], and open-world object detection (OWOD) [1, 3]. Open-set object detection (OSOD) requires the detector to be trained on a closed set without any annotations of unknown objects. During testing, the detector is required to detect both known and unknown objects. OVC (Section 2.1) narrows the distance between regions and text during training, leveraging pre-trained models on large-scale datasets and using pre-trained encoders (e.g., CLIP) to enhance generalization. In testing, the matching between customized text and regions determines the class to which a region belongs. OWOD imposes stricter requirements, involving two components: detecting unknown objects of interest and incremental learning. The former, similar to OSOD, demands that the detector identify unknown objects during inference, while the latter requires the detector to classify some unknown objects as known classes and fine-tune the model to accommodate the new classes. These two processes simulate real-world applications where unknown objects are continuously labeled, and annotators select interesting targets for incremental learning, thereby enhancing the detector's performance. We have placed the problem definitions in Appendix A.1, and a diagram illustrating the annotator's involvement in the entire process is provided in Figure 1. Among these references, those relevant to our work are [1, 3]. However, these methods do not utilize a foundation model and instead train from randomly initialized weights, resulting in poor performance. 
For example, in the COCO and VOC mixed M-OWODB, OWL-Vit achieves 79% unknown recall with the assistance of LLM, while PROB only achieves 19.4%. This result was obtained under the zero-shot setting of OWL-Vit, and if OWL-Vit were fine-tuned, the gap would further widen. Therefore, we follow FOMO and convert the evaluation dataset to the RWD benchmark. The RWD is more complex and adheres to the few-shot setting, where using detectors like PROB and CAT, which are trained from random weights, is less appropriate. In fact, when we simply implemented PROB on the RWD, we found it could not even detect unknown classes. Thus, we did not compare PROB and CAT because they rely on large amounts of training data, which is not compatible with the few-shot setting of RWD. We have discussed more relevant and recent methods, such as FOMO, and foundation models with the same supervision in the paper, with technical details provided in the appendix. If you have further questions, please refer to the appendix, where we explain the problem definition, benchmark setup, and additional experiments in more detail. Furthermore, we have cited PROB and CAT in our paper, as seen in references [20] and [5]. **Ablation Studies**: In the ablation study, we provide incremental comparisons. Initially, we used averaged text embeddings and OOD probability as the baseline. We then added ID probability and PMM (Gaussian and Weibull distributions) to demonstrate the contribution of each component. This ablation study structure is consistent with that of PROB and CAT. We have included additional comparison experiments involving linear interpolation (LI) and sliding window (SW), as detailed in Table 2 of the attached document. The results indicate that the use of LI or SW individually does not lead to significant performance improvements. 
Both methods demonstrate only marginal enhancements over the original approach, indicating that neither LI nor SW can accurately model the data distribution when used in isolation. This is further substantiated by the fitting performance illustrated in Figure 2 (the attached document). The combined use of LI and SW yields the most substantial performance gains and achieves a more accurate fit to the original data distribution. Regarding the different text attribute designs you mentioned, using different LLMs could lead to unfair comparisons. Although we could utilize the latest LLMs like GPT-4o to generate text attributes or even combine multiple LLMs to generate richer text attributes, doing so would create an imbalance when comparing with models like FOMO and OWL-Vit. Therefore, we did not modify other settings to ensure fairness. Additionally, shift & scale is the default setting of the benchmark model. To align the output of OWL-Vit, we adopted the pre-trained shift & scale to rescale the similarity. **Content Discrepancies**: We are grateful for your observation regarding the mismatches between Figure 2 and the method descriptions, as well as the presence of typos. We sincerely apologize for these errors, which occurred due to inconsistencies between the final version of the manuscript and the initial structure. In the original version, we placed the problem definitions in Appendix A.1 as a separate section following the related work. However, as the detailed method descriptions took up more space than anticipated, we moved this section to the appendix, resulting in misaligned section indices in Figure 2. We have since conducted a thorough review of the entire manuscript, correcting all mismatches and typing errors. Regarding the algorithm's process, we have provided explanations in both Section 3 and Figure 2. Additionally, pseudocode implementation is provided at the beginning of the appendix, which we hope will help clarify any remaining questions. 
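For intuition, a minimal sketch of how a sliding window (SW) plus linear interpolation (LI) could jointly fit an empirical score distribution, in the spirit of the ablation discussed above (all function and parameter names here are hypothetical illustrations, not the paper's implementation):

```python
import numpy as np

def fit_empirical_curve(scores, n_bins=20, window=3):
    """Sketch: estimate an empirical probability curve from raw scores by
    (1) histogramming, (2) sliding-window (SW) smoothing of bin densities,
    (3) linear interpolation (LI) between smoothed bin centers."""
    counts, edges = np.histogram(scores, bins=n_bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Sliding-window moving average over neighboring bins.
    kernel = np.ones(window) / window
    smoothed = np.convolve(counts, kernel, mode="same")
    # Linear interpolation yields a continuous estimate between bin centers.
    return lambda x: np.interp(x, centers, smoothed)

rng = np.random.default_rng(0)
density = fit_empirical_curve(rng.normal(0.5, 0.1, 1000))
print(density(0.5) > density(0.9))  # mass concentrates near the mean
```

Used alone, SW only smooths the discrete bins and LI only connects noisy ones; combining them gives the continuous, denoised fit the ablation suggests.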
We hope our explanations help address your concerns and clarify our research strategies and decisions. Please reach out if you have further suggestions or need more information. Thank you again for your valuable feedback. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications, which have addressed my concerns. To prevent potential confusion about the task setting, the authors are suggested to include the aforementioned differences with related tasks, such as OSOD and OVC, in the final version. I would like to raise my score to weak accept.
Summary: This paper aims to understand the model’s behavior in predicting the unknown category. First, the authors model the text attribute and the positive sample probability, obtaining their empirical probability, which can be seen as the detector’s estimate of the likelihood that a target with certain known attributes is predicted as foreground. Then, they jointly decide whether the current object should be categorized as unknown based on the empirical, in-distribution, and out-of-distribution probabilities. Finally, based on the decision-making process, the authors can infer the similarity of an unknown object to known classes and identify the attribute with the most significant impact on the decision-making process. Strengths: 1. The paper is written well and is easy to understand. 2. The studied problem is very important. 3. The results seem to outperform the state of the art. Weaknesses: 1. Could a more effective OOD score be utilized for Equations 13-14? 2. It might be more comprehensive to report more metrics, such as AUROC and FPR. 3. It might be useful to include ablation results on the prompt templates and the LLM used. Technical Quality: 3 Clarity: 3 Questions for Authors: see above Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable feedback on our work, particularly regarding the concerns related to the detailed technical design. Below is our detailed response: **OOD Score**: To detect unknown objects, we proposed a method that models the probability of an object being predicted as foreground. By modeling attribute similarity and the positive sample probability of the object, we generate the empirical probability (Empirical Prob). By combining the Empirical Prob with the in-distribution probability (ID Prob), we can estimate the probability that each sample will be recognized as a foreground object. Based on this, UMB can extract potential objects of interest from the background and label them as unknown. However, there is a challenge in distinguishing between known and unknown categories. The ID Prob is used in a linear combination to determine the probability of known categories, while the Empirical Prob is also derived from the confidence of known categories. As a result, both the ID and Empirical Prob are higher for a known object. This is why we introduce the out-of-distribution probability (OOD Prob). However, in our paper, we implemented a simple OOD Prob measure that assesses the uncertainty of known categories, specifically the probability that an object belongs to the background. Admittedly, a more complex and representative probability score could be developed to calculate the current object's background probability. However, this would introduce additional computational overhead and complicate the UMB pipeline. Therefore, we opted for the simplest method to measure the background probability, which can be accomplished with two matrix operations, ensuring the algorithm's simplicity and efficiency. These settings allowed UMB to achieve significant performance improvements in detecting both known and unknown categories, with a 5.3 mAP increase, establishing a new SOTA. 
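As a loose illustration of a background-probability score computable with two matrix operations, here is a hedged sketch (embedding shapes and names are hypothetical; this is not the paper's exact OOD Prob formulation):

```python
import numpy as np

def ood_prob(region_embeds, class_embeds):
    """Hypothetical background-probability score via two matrix operations:
    (1) a similarity matrix from a matrix product, (2) a row-wise softmax.
    The OOD Prob is the mass NOT confidently assigned to any known class."""
    sims = region_embeds @ class_embeds.T            # (n_regions, n_classes)
    exp = np.exp(sims - sims.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)     # softmax over known classes
    return 1.0 - probs.max(axis=1)                   # uncertainty over known classes

regions = np.array([[1.0, 0.0], [0.5, 0.5]])  # one confident, one ambiguous region
classes = np.array([[5.0, 0.0], [0.0, 5.0]])  # two known-class text embeddings
scores = ood_prob(regions, classes)
print(scores[1] > scores[0])  # the ambiguous region gets a higher OOD score
```

The point of the sketch is only that such an uncertainty measure adds negligible cost on top of the detector's existing similarity computation.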
**More Metrics**: In Table 1 of the attached PDF, we provide a comparison of additional metrics. We included the early OWOD (Open-World Object Detection) metrics used to evaluate model performance in unknown categories (WI, A-OSE, and Recall). WI assesses the difference in recognition accuracy between known and unknown categories, while A-OSE measures the number of errors where the model mistakenly classifies unknown objects as known categories. Our method still achieved leading performance, such as a consistently lower A-OSE compared to FOMO, while also achieving a higher recall. The AUROC and FPR you mentioned are more commonly used to evaluate classification tasks. Object detection, however, consists of both classification and localization tasks. Typically, a detector's post-processing filters out all regions predicted as negative samples, meaning that true negatives (TN) are not present in the prediction results. Therefore, metrics that rely on TN, such as FPR, cannot be computed. Thus, the evaluation of detection tasks generally relies on the results predicted as positive samples by the detector, such as recall and precision. However, both early OWOD evaluation metrics and individual recall or precision metrics have their limitations. Relying solely on recall can lead to a detector greedily predicting all possible regions as positive samples, while relying solely on precision can make the detector overly conservative. The early OWOD metrics emerged because detectors performed poorly on unknown objects, focusing on evaluating specific aspects of performance in unknown categories, such as A-OSE. However, with the introduction of open-vocabulary object detection models, detectors' performance on unknown categories has improved. Therefore, we adopted mAP, the most widely accepted metric for object detection, as it balances precision and recall and evaluates the detector's performance across different thresholds. 
This is also why mAP is typically used to evaluate detector performance in closed-set object detection. We understand that specific scenarios might require additional metrics, so we provided TP for each dataset, which, combined with other metrics, can be used to calculate other values, such as FP. We will include this table as part of the appendix to provide additional information. Additionally, we will release the training and validation settings and final weights to support the application and development of the OWOD. **Additional Experiments**: Prompt templates and the LLM used are crucial in our detector. For instance, using more templates to generate visual embeddings and utilizing different LLMs could achieve higher performance; however, this would lead to an unfair comparison with FOMO or OWL-Vit. Therefore, to prevent such disparities, we adopted consistent configurations during the training and inference of UMB to ensure fairness. If you are interested in further experiments, we have provided more detailed information on related experiments in the appendix. The above is our response to the concerns you raised regarding the detailed technical design. We hope that these explanations can clearly answer your questions and help you better understand the strategies and decisions we adopted in our research. We again express our gratitude for your valuable feedback. If you have any further suggestions or require additional information, please feel free to contact us.
Summary: This paper proposes a new solution for the challenging task of Open-World Object Detection (OWOD) by exploring the reasons behind non-classification and then using the textual attributes of unknown objects to infer the most similar known category. Evaluation results on multiple real-world application datasets show significant performance improvements. Strengths: A new approach to solving classification problems in Open-World Object Detection, with its effectiveness verified. Weaknesses: The writing of some mathematical symbols needs to be improved. Technical Quality: 3 Clarity: 3 Questions for Authors: (1) The limitation discussion is a bit insufficient. It could be better to add some visualizations of the failure examples. (2) Some mathematical variable symbols (e.g., Eq. 6 and Eq. 7) are not standard. It is recommended that more common and standard notations be used. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback, which will help us improve the quality of our manuscript. Below, we address each of the points raised. **Limitation Discussion**: We appreciate the reviewer's suggestion to enhance the limitation discussion with visualizations of failure examples. We agree that visual representations can provide clearer insights into the nature of the limitations. In the revised manuscript, we will include several visualizations illustrating common failure modes encountered during our experiments. These visual examples will be included in the Appendix of the paper, as illustrated in the attached PDF (Figure 1). We present typical cases of detection failures from each dataset, focusing on recall capability and detection accuracy. For instance, in the Aquatic dataset, UMB failed to detect small orange fish, while in the Aerial dataset, it did not successfully recall vehicles. These instances reveal the detector’s shortcomings in recall capability. Moreover, in the Game and Surgery datasets, UMB displayed occurrences of repeated predictions. Nevertheless, UMB still outperformed FOMO in overall performance. Specifically, in the Aquatic dataset, UMB accurately located the contours of the fish, whereas FOMO showed deviations in contour localization and even missed similar objects. Furthermore, FOMO incorrectly identified the reflection of the photographer’s shoes in the glass as an unknown object, demonstrating lower precision. Similar issues were observed in the Aerial and Game datasets, where FOMO often confused objects with the background, resulting in new erroneous predictions, such as misidentifying rooftops as a single object in the Aerial dataset. However, UMB did not commit the same errors in these cases. In summary, although UMB also exhibited some false detections in certain scenarios, it outperformed FOMO in both detection accuracy and the ability to recall potential objects. 
These visualizations will highlight specific instances where the proposed method did not perform as expected, along with a brief analysis of each case. By doing so, we aim to provide a more comprehensive understanding of the limitations and the contexts in which they occur. **Mathematical Variable Symbols:** We appreciate the reviewer's attention to the mathematical notation used in our paper. To improve clarity and consistency, we will revise the mathematical symbols in Equations 6 and 7 to align with more commonly accepted and standard notations. This will ensure that the equations are easily understood by a broader audience and adhere to conventional mathematical standards. Thank you once again for your recognition and valuable feedback on our work. If you have further suggestions or need more information, please feel free to contact us. --- Rebuttal Comment 1.1: Comment: Thank the authors for their response, which addressed my concerns. I am keeping my final rating at 8: Strong Accept.
Summary: This paper introduces a new open-world object detection model (UMB) aimed at understanding the model's behavior when predicting unknown categories. By modeling text attributes and positive sample probability, the paper proposes a joint decision-making method based on empirical probability, in-distribution probability, and out-of-distribution probability to infer the similarity of unknown objects to known classes and identify the attributes that have the most significant impact on the decision-making process. The evaluation results on the Real-World Object Detection (RWD) benchmark show that this method surpasses the previous state-of-the-art (SOTA) with an absolute gain of 5.3 mAP for unknown classes, reaching 20.5 mAP. Strengths: The first work to focus on understanding the model's behavior in unknown category predictions. Proposes a new framework (UMB) capable of detecting unknown categories and understanding model behavior using textual descriptions. Shows significant performance improvements on the Real-World Object Detection (RWD) benchmark, especially in unknown categories. Weaknesses: 1. The method relies on high-quality text attributes and positive sample probability modeling. Performance may be affected if data quality is poor or attributes are incomplete. 2. While performing well on the RWD dataset, its generalization ability to other domains and datasets needs further verification, possibly requiring more testing in different environments. 3. The robustness of using the most influential attributes in the decision process to infer the similarity between unknown and known objects needs further validation. 4. For large models, the performance and quality requirements for samples and attributes are high. The categories, quantity, and quality of samples and attributes significantly affect network performance and need further analysis. 
Technical Quality: 3 Clarity: 2 Questions for Authors: What are the computational resource requirements for this method in practical applications? Is the method feasible in resource-constrained environments? Can this method maintain the same performance improvements on other datasets and in other domains? Further validation in more real-world environments is needed to ensure its broad applicability. How robust is the use of the most influential attributes in inferring the similarity between unknown and known objects? Could noise or outlier data affect the decision-making process? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The method is quite complex and may require substantial computational resources and time for training and inference. This could limit its use in practical applications with limited computational resources. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate your valuable comments on our work, particularly your concerns about similarity verification, resource consumption, and deployment. Here are our detailed responses: **Similarity Verification**. Ideally, we would like to annotate every unknown object in the RWD dataset and identify its most similar known class. However, this approach presents several challenges: 1) Workload issue: A single image may contain multiple unknown objects, such as stones, making it impractical to annotate each unknown object and find the most similar known class in practice. 2) Subjectivity and inconsistency: When manually annotating the similarity between unknown objects and known classes, subjective judgments are inevitable. These can lead to differing opinions among annotators regarding the similarity of the same unknown object to known classes, resulting in inconsistent annotations. Moreover, the diversity and complexity of unknown objects make it difficult to establish a unified annotation standard. Therefore, manual evaluation of similarity is not only infeasible in terms of workload but also unable to guarantee objectivity and consistency. In addition, our evaluation focuses on how the detector determines the similarity between unknown objects and known classes during the decision-making process, which reflects the detector’s understanding of the current object. In subsequent improvements, we can use this to customize the distribution of the dataset. For example, when the detector considers unknown object A to be similar to known class B, and we want to intervene in this result, we can add, through incremental learning, samples that are not similar to known class B but belong to unknown object A, encouraging the detector to distinguish them. This is why we propose the concept of the unknown object most similar to a known class. Despite the challenges of manual annotation, we provide an approximate evaluation in Appendix A.7. 
We found that the detector may produce duplicate predictions when predicting unknown objects. These duplicate predictions reflect the detector’s confusion between the current unknown object and known classes. Therefore, we collected all duplicate predictions and calculated the overlap between these duplicate predictions and the known classes given by the detector to indirectly evaluate the accuracy. Specifically, we calculated the IoU of all known and unknown predictions. If the IoU is greater than 0.95 (the highest value of the COCO standard), the known and unknown predictions are considered to overlap. In the left figure, we show the number of these duplicate predictions, ending with _Repeat. Subsequently, we calculated the number of matches in these duplicate predictions, ending with _matched, indicating the cases where the most similar known class predicted by the detector is consistent with the overlapping class. In the right figure, we show the corresponding accuracy. From the experimental results, the accuracy reached about 90%. This indirect evaluation demonstrates that UMB has high accuracy in predicting the known class most similar to an unknown object. **Resource consumption**. We provide these details in Appendix A.4. All experiments were conducted on a single NVIDIA GeForce RTX 4090 GPU. Since OWL-Vit has made all weights public, we can download these weights as a starting point and freeze them. Subsequently, we use the text encoder to encode all sentences into text embeddings and train these embeddings for known-class detection. To detect unknown objects, we introduce only one module (TAM) that needs to be trained. The training of TAM is lightweight, requiring at most 5 groups of probability models. For example, using the Gaussian model, the number of parameters to be trained is 10, including 5 variances and 5 means. This is why we can complete all experiments on a single NVIDIA GeForce RTX 4090. 
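The IoU-based duplicate-matching evaluation described above could be sketched as follows (the box format and helper names are hypothetical illustrations, not the paper's code):

```python
def iou(a, b):
    """IoU of two axis-aligned boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def matched_duplicates(known_preds, unknown_preds, thresh=0.95):
    """Count unknown predictions that overlap a known prediction (IoU > thresh)
    and, among those, how often the predicted most-similar class agrees with
    the overlapping known class."""
    repeats, matched = 0, 0
    for u_box, u_similar_cls in unknown_preds:
        for k_box, k_cls in known_preds:
            if iou(u_box, k_box) > thresh:
                repeats += 1
                matched += (u_similar_cls == k_cls)
    return repeats, matched

known = [((0, 0, 10, 10), "cat")]
unknown = [((0, 0, 10, 10), "cat"), ((50, 50, 60, 60), "dog")]
print(matched_duplicates(known, unknown))  # (1, 1)
```

The reported accuracy would then be `matched / repeats` aggregated over each dataset.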
For the inference, compared to the original OWL-Vit, we only introduced the calculation of the OOD (Out-of-Distribution), ID (In-Distribution), and Empirical probabilities. However, these three calculations can be completed through a few matrix operations. Compared with the inference computation of OWL-Vit, the increase in computation is negligible. Therefore, our inference resource consumption is almost identical to that of OWL-Vit. **Deployment**. The deployment of object detection is a complex issue, which must consider the constraints of different devices and environments. At present, the most common deployment scheme is to convert the original PyTorch model into a TensorRT model and use TensorRT's optimizations (such as FP16 or INT8 inference) to achieve real-time inference. When deploying the UMB model, all texts can be preprocessed into text embeddings, the text encoder can then be discarded, and only the visual encoder is deployed on the device. This makes the deployment scheme consistent with OWL-Vit. The newly introduced probability calculations follow the standard PyTorch implementation, so they can also be converted into a TensorRT model. As we mentioned regarding inference resource consumption, the computational burden introduced by our method is negligible. Therefore, our method can run normally on all devices where OWL-Vit models are applicable. In addition, our method can serve as a pluggable module, migrated to other OVC-style detectors (e.g., YOLO-World), giving them the ability to detect unknown objects and understand the decision-making process. The above are our responses regarding similarity verification, resource consumption, and deployment. We hope these explanations clearly answer your questions and help you better understand the strategies and decisions we adopted in our research. If you have further suggestions or need more information, please feel free to contact us. 
Thank you again for your recognition and valuable feedback on our work. --- Rebuttal Comment 1.1: Comment: Thank you for the authors' response, which addressed most of my concerns. I maintain my final rating: 6 Weak Accept.
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable feedback, which has been instrumental in guiding us to improve our work. In the attached document, we have provided additional experiments, including examples of detection failures (Figure 1), a comparison of more metrics (Table 1), further ablation studies (Table 2), and the corresponding fitting visualizations (Figure 2). We hope this additional information will help you better understand the content conveyed and the technical design implemented in the paper. Thank you once again for your feedback. If you have any further questions or require additional clarification, please do not hesitate to contact us. Pdf: /pdf/1dd0e6acead95370832a5d02089cb8aa2c579d11.pdf
NeurIPS_2024_submissions_huggingface
2024
Aligning Model Properties via Conformal Risk Control
Accept (poster)
Summary: The paper connects conformal risk control to define prediction sets containing results that satisfy a property. In short, assuming a function $f$ (a trained model) does not satisfy some property $\mathcal{P}$, we can define a prediction interval around the results of $f$ such that those intervals satisfy $\mathcal{P}$ with probability $1 - \alpha$. The authors first generalize conformal risk control to a multidimensional conservative parameter, through a proxy from $\mathbb{R}^d \mapsto \mathbb{R}$, and use that in addressing the property testing. They define the risk as the event that no output in the prediction interval satisfies the property. Strengths: The idea is novel. Defining conformal property satisfaction based on property testing and conformal risk control opens a new area of work based on black-box post-hoc modifications of the model toward a desired property. I believe it can be further used in model explainability, fairness, or tweaking the model toward constrained predictions. The idea is nicely developed and nicely presented. It can easily engage an audience who is not familiar with conformal risk control or property testing. I would recommend an additional introduction to the aforementioned topics in the appendix; however, the current version is also really nice. I also find the way the authors address fairness in Section 5 (through bias in measurement error) really interesting. Weaknesses: 1. Notations in the paper could be introduced in a better way, for instance a mathematical definition of $\mathcal{P}$, which can be a subset of the function space. 2. There are minor typos in the mathematical notation, e.g., in line 88, $x \sim \mathcal{D}$ but $f(X) \neq g(X)$. Also, I do not understand why the authors used $\mathrm{Min}$ instead of $\min$; I believe that might relate to the fact that they are minimizing over vectors, but still it could be indicated by $\min_{\boldsymbol{\lambda}\in\boldsymbol{\Lambda}}$. 
Also shouldn't we find the minimal $\boldsymbol{\lambda} _\mathrm{min}$ instead of $\boldsymbol{\Lambda} _\mathrm{min}$? Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Why did the authors choose monotonicity and concavity as the properties to test. Is there any real-life use-case of those properties or does more justifiable properties remain as a possible future work? Ofcourse there is a dicussion on applications of monotonicity to causal inference, and fairness but is there any chance that the authors could provide experiments leading to those areas? 2. Definition 2 has some confusing points: - Should there be any nested property over the prediction sets for various lambdas? Like if $\lambda_a \le \lambda_b$ then $\mathcal{C}(x_i; \lambda_a) \subseteq \mathcal{C}(x_i; \lambda_b)$? If yes then how the order is defined in this vector space? Does the nested property hold for a linear combination of $\lambda_i$ values or the nested property should be present for any $\lambda_i$. - How the interval is defined over function? Is it just the function value $\pm$ some radius around it or is it a something apply over function parameters? 3. In line 156, shouldn't we find the minimal $\boldsymbol{\lambda}_\mathrm{min}$ instead of $\boldsymbol{\Lambda}_\mathrm{min}$? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: To the best of my understanding, the paper defines the guarantee over a property, and a joint guarantee of property + accuracy is not defined in its framework. I see that their extension of risk control supports multi-dimensional set constant not a multi-dimensional risk. However, I believe this is not a shortcoming of the method but a possible open problem to solve. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
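For readers unfamiliar with conformal risk control, a minimal single-parameter calibration sketch in the spirit of the reviewed framework (simplified here to interval miscoverage with risk bound $B = 1$; all names are illustrative, not the authors' multi-dimensional algorithm):

```python
import numpy as np

def calibrate_lambda(f_preds, y_cal, alpha=0.1, lambdas=None):
    """Conformal risk control sketch: pick the smallest lambda whose inflated
    empirical risk (n * R_hat(lambda) + B) / (n + 1) is at most alpha.
    The risk here is miscoverage of the interval f(x) +/- lambda, so B = 1."""
    n = len(y_cal)
    if lambdas is None:
        lambdas = np.linspace(0.0, 10.0, 1001)
    for lam in lambdas:  # the miscoverage risk is non-increasing in lambda
        risk = np.mean(np.abs(y_cal - f_preds) > lam)
        if (n * risk + 1.0) / (n + 1) <= alpha:
            return lam
    return lambdas[-1]

rng = np.random.default_rng(1)
preds = np.zeros(500)                 # a trivial "model" predicting 0
ys = rng.normal(0.0, 1.0, 500)        # exchangeable calibration labels
lam = calibrate_lambda(preds, ys, alpha=0.1)
print(round(float(lam), 2))           # roughly an upper 90% quantile of |y|
```

The paper's extension replaces the scalar sweep with a search over vector-valued $\boldsymbol{\lambda}$, forming the set of risk-controlling values and then minimizing a proxy $g$ over it.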
Rebuttal 1: Rebuttal: We would like to thank the reviewer for a positive and confident assessment of our paper and appreciate the clear attention to detail shown by the reviewer. Both the strengths and weaknesses/questions/limitations make it evident to us that the reviewer invested significant time understanding the core of our paper. We thank the reviewer for the direct assessment of our work as novel and capable of opening a new area in post-hoc modification of models to align with desired properties. We also thank the reviewer for finding our paper engaging to read and potentially applicable to domains such as explainability and fairness, and appreciate the interest given to Section 5 on the random feature model theory and its connections to alignment and fairness. We find all of the reviewer's listed weaknesses helpful and easily addressable. We completely agree with the reviewer on the listed notation fixes and typos; we are happy to address these in a final version. In particular, we agree that we can use a lowercase $\min$ in place of $\mathrm{Min}$. With regard to the question of finding the minimal $\lambda_{\text{min}}$ instead of $\Lambda_{\text{min}}$, this distinction was drawn to clarify that the set of minimal elements (which can have cardinality more than 1) is first formed by finding the set of $\lambda$ that give a bounded risk below $\alpha$, and then $\lambda_{\text{min}}$ is found by finding an argmin of the function $g$ over this initial set. Ultimately, we agree that the notation in this section should be clarified to avoid confusion. Thanks to the reviewer for highlighting this and delving into detail on the multi-lambda risk control algorithm. We thank the reviewer for their questions, which clearly indicate engagement with our work. For question 1, we feel that monotonicity is a natural first property to study in this framework.
Monotonicity is well-established as a desirable property in many settings, as described by the Wang and Gupta (2020) paper we cite at the beginning of Section 4.1. Moreover, it is also well-studied in the property testing literature. Following the reviewer's request for additional experiments, we present new experiments in fairness and medicine to highlight the usefulness of monotonicity. Although we lack the time or scope to provide a comprehensive comparison of our method to the extensive fairness literature, given the reviewer’s request we have included in the pdf document attached to the global rebuttal a figure with results of applying our method to the UCI ML repo “Student Performance” dataset. This dataset is often used as a benchmark in the fairness literature (see “A survey on datasets for fairness-aware machine learning,” Quy et al. 2022). Following the literature, we take “sex” to be the protected attribute and the final grade “G3” to be the real-valued outcome of interest. The attached figure demonstrates the lambdas needed to achieve a constant predicted grade when varying “sex” with a tolerance of 0.1 for $\alpha$. Also, given the reference made to applications of monotonicity in medical triage at the beginning of Section 4.1, we have included another figure in the attached pdf showing the results of our method on the UCI ML repo “Liver Disorders” dataset. Here we take the target of the average number of drinks per day (specified as the target in the repo) to be monotonically increasing in the feature “mcv,” increasing levels of which are known to be predictive of liver disorders. This example from the high-stakes medical domain emphasizes that we believe monotonicity has significant importance in improving trust, interpretability, and hence the adoption of clinical models. For question 2, we wish to thank the reviewer for sharing their confusion with Definition 2.
This definition was intentionally abstract, but we now realize it may have lacked sufficient clarity. In the original CRC paper, $C_\lambda$ is just taken to be any arbitrary function of the model and calibration data that outputs a prediction set, and outputs an increasingly large set as $\lambda$ increases. It is not stated explicitly but appears to be assumed implicitly that if $\lambda_a \leq \lambda_b$, then $C_{\lambda_a} (X_i) \subseteq C_{\lambda_b} (X_i)$. We operate under the same implicit assumption that for $\lambda_a, \lambda_b \in \mathbb{R}^k$, if $\lambda_a \leq \lambda_b$, then $C_{\lambda_a} \subseteq C_{\lambda_b}$. However, this does not have to be strictly increasing; we can have $\lambda_a < \lambda_b$ but $C_{\lambda_a} = C_{\lambda_b}$. This nestedness property holds in every dimension because we follow the partial ordering of $\mathbb{R}^k$ as defined in the main text. For the question of how the interval is defined over a function: we can define $C_{\boldsymbol{\lambda}}(f):\mathcal{X} \to \mathcal{P}(\mathcal{Y})$ as a set-valued function where at each point $X_i$ we have $C_{\boldsymbol{\lambda}}(f)(X_i) = C_{\boldsymbol{\lambda}}(X_i)$. Thus it does not necessarily have to be a radius around the function output $f(X_i)$, but this is natural when operating in the reals. We point the reviewer to our discussion of the connection to Yadkori et al. (2024) within Section 6 Related work, which exemplifies a use case of our methodology that does not involve real-valued outputs. We hope the above discussion helps make Definition 2 more mathematically precise, and we look forward to the chance to revise this definition in a final version. For question 3, similar to the discussion above on the lambda notation, we agree that this can be improved and will do so in the final version.
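To make the implicit nestedness assumption above concrete, here is a minimal Python sketch. The symmetric-interval construction and the sum proxy are our own illustrative choices, not the paper's exact definitions:

```python
import numpy as np

def pred_interval(f_x, lam):
    """Hypothetical set construction: a symmetric interval around the model
    output f(x), with radius given by a proxy g(lam) = sum(lam) that is
    non-decreasing in each coordinate of lam (illustrative choice)."""
    radius = float(np.sum(lam))
    return (f_x - radius, f_x + radius)

def is_subset(c_a, c_b):
    """C_a subseteq C_b for closed real intervals."""
    return c_b[0] <= c_a[0] and c_a[1] <= c_b[1]

# Under the componentwise partial order on R^k, lam_a <= lam_b implies
# C_{lam_a}(x) subseteq C_{lam_b}(x) for this construction.
lam_a, lam_b = np.array([0.5, 1.0]), np.array([0.5, 2.0])
c_a, c_b = pred_interval(3.0, lam_a), pred_interval(3.0, lam_b)
```

Here `lam_a <= lam_b` componentwise yields the nested intervals `(1.5, 4.5)` and `(0.5, 5.5)`.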
Finally, we agree with the reviewer on both stated limitations and also believe that these are exciting avenues for future research. Again, we would like to thank the reviewer for their positive and thorough assessment of our work.
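As a supplementary illustration of the two-step multi-lambda selection discussed above (first form the set of admissible $\lambda$ whose risk bound is below $\alpha$, then take an argmin of the proxy $g$), here is a minimal sketch. The API, the toy calibration losses, and the grid are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np
from itertools import product

def multi_lambda_crc(losses, grid, g, alpha, B=1.0):
    """Illustrative two-step selection: `losses(lam)` returns per-example
    losses in [0, B] on the calibration set for a candidate lambda vector;
    `grid` is an iterable of candidates; `g` is the proxy from R^k to R.
    First collect the admissible lambdas whose CRC-adjusted empirical risk
    is at most alpha, then return an argmin of g over that set."""
    admissible = []
    for lam in grid:
        L = losses(lam)
        n = len(L)
        # CRC-style finite-sample correction: (n * Rhat(lam) + B) / (n + 1)
        if (n * L.mean() + B) / (n + 1) <= alpha:
            admissible.append(lam)
    return min(admissible, key=g)  # argmin of the proxy over the admissible set

# Toy calibration problem: the loss vanishes once the 2-d set is wide enough.
rng = np.random.default_rng(0)
scores = rng.uniform(0.0, 2.0, size=200)
loss_fn = lambda lam: (scores > lam[0] + lam[1]).astype(float)
grid = [np.array(p) for p in product(np.linspace(0.0, 2.0, 21), repeat=2)]
lam_min = multi_lambda_crc(loss_fn, grid, g=lambda l: float(l.sum()), alpha=0.1)
```

Any $\lambda$ with $\lambda_1 + \lambda_2 = 2$ is admissible in this toy problem, so the argmin of $g$ is well defined; ties in $g$ are broken arbitrarily here, which is consistent with $\Lambda_{\text{min}}$ possibly having more than one element.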
Summary: This article offers an alternative approach to alignment by testing whether trained models follow specific properties. This is done, for instance, on monotone or concave functions. A small modification for the use case is made to the Conformal Risk Control setting to allow vectorial parameters. Using CRC and proximity-oblivious testers (POTs), a guarantee is obtained that the predictor approximately satisfies the property. Strengths: The topic of the paper is interesting, and the approach is novel, to my knowledge. This approach's applications are quite broad and should interest communities interested in alignment, robustness, or, for instance, linking to certain physical applications. A few applications are presented in experiments. Finally, I find the paper, overall, well written. Weaknesses: However, several parts could be improved. Section 2.1 is unclear, and the introduction of lambda without details is bound to confuse readers not familiar with CRC. Moreover, I found section 3.2 to be overly verbose and lacking clear, explicit mathematical formulations, of the loss in particular but not exclusively. In case it is a misunderstanding, I believe a rewriting and clear formalism would help in understanding. Moreover, I want to emphasize the lack of links between different sections of the paper. Notations, like the function "g", are reused with different meanings, and the threshold "epsilon" disappears from the method. Additional typo: line 147 "setof". Technical Quality: 3 Clarity: 3 Questions for Authors: Although early in the article, it's specified that "we can imagine a user who wishes to determine whether a pre-trained model f belongs to P but is unable to train a model of f ’s size" (42), line 178 proposes to "modify" the model. How is that done? Moreover, in the appendix (table 4), unconstrained models have varying performance when alpha varies; how is that explained?
(moreover, no notion of uncertainty is provided on the metrics) Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I believe it's worth mentioning that this multidimensional CRC is essentially one-dimensional, as it replaces non-increasingness in lambda with non-increasingness in a mapping of lambda to $\mathbb{R}$. Moreover, the monotonicity in the $\ell_1$ norm of lambda is often not verified (think of different slopes for the different dimensions), and several experiments do not use this result. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for what we believe is an overall positive assessment of our paper. We especially appreciate that they find the topic interesting and our approach novel. We also agree this is a standout strength of this work. Additionally, we thank the reviewer for observing that the generality and flexibility of our approach can make the work attractive to a broad audience, including alignment researchers, AI safety in robotics, and other physical applications. We also feel optimistic about our ability to address the reviewer’s questions, perceived weaknesses, and remarks concerning limitations of this work. We find almost all of the comments to be simultaneously fair and readily addressable. We begin with the questions. Firstly, we address the question “Although early in the article, it's specified that, ‘we can imagine a user who wishes to determine whether a pre-trained model $f$ belongs to $\mathcal{P}$ but is unable to train a model of $f$ ’s size’ (42), line 178 proposes to "modify" the model. How is that done?” The “modification” to $f$ that we refer to in Definition 3 is abstract in the sense that it considers the case in which $f$ had output $\tilde{Y}$ instead of its actual output $f(X)$. No actual modification to the model is made; the underlying function $f$ remains a black box. We will edit this definition to make this point clearer. We thank the reviewer for bringing this confusion to our attention. The second question, “Moreover, in the appendix (table 4), unconstrained models have varying performance when alpha varies, how is that explained? (moreover, no notion of uncertainty is provided on the metrics)” is even easier for us to address. This was simply a typo; the 114 for $\alpha=0.05$ should have been 115, as for $\alpha=0.1$ and $\alpha=0.01$; with this correction, one can observe that the performance of the unconstrained model is consistent across varying $\alpha$.
We thank the reviewer for pointing out this typo. The reviewer expressed concerns regarding multi-lambda risk control. We agree that the generalization is rather straightforward from the original conformal risk control and essentially mimics the one-dimensional case due to the mapping to $\mathbb{R}$. We attempt to clearly state, in both the proposition statement in the main text and the proof in the appendix, that the proof closely mimics the original result. It is not our intention to frame this result as a significant technical contribution but rather as a novel perspective on conformal risk control with application to alignment. We also believe that even if the generalization to multi-lambda conformal risk control is straightforward, the literature has overlooked it so far, even in instances in which it would have been helpful. In particular, as mentioned in the global response, Yadkori et al. (2024) utilize two separate conformal procedures to calibrate two separate functions, which could have been simultaneously calibrated in tandem using multi-lambda conformal risk control. Other works, such as “Conformal Risk Control for Ordinal Classification” (Xu, 2023), assume that lower and upper bounds on a prediction set are given by functions $l(\lambda)$ and $u(\lambda)$, but requiring both ends of the interval to grow simultaneously may not be generally necessary; multi-lambda conformal risk control provides a flexible implementation of the conformal risk control algorithm in such settings. We thank the reviewer for informing us that Section 2.1, introducing the preliminaries for property testing, is insufficiently clear. We hope to better explain the main ideas needed from property testing in this section and provide a more comprehensive introduction to property testing in the appendix in the final version, as suggested by Reviewer Hsyg. We appreciate the feedback on the insufficient clarity of the definition of the loss function in Section 3.2.
We can rewrite this definition as follows: $$ L_i = \ell \left( C_\lambda (X_i), Y_i \right) = \begin{cases} 0 & \text{if } \exists\, \hat{Y} \in C_\lambda (X_i) \text{ s.t. } T_i (\hat{f}, X_i, Y_i) = \text{Accept}, \\ 1 & \text{otherwise}, \end{cases} $$ $$ \text{where} \quad \hat{f} (X) = \begin{cases} f(X) & \text{if } X \neq X_i, \\ \hat{Y} & \text{if } X = X_i, \end{cases} $$ and $T_i$ for $i=1,2,\ldots$ is an infinite sequence of testers with fixed randomness for each $i$; that is, each $T_i$ is a deterministic function of its inputs $(\hat{f}, X_i, Y_i)$. We hope this definition provides sufficient clarity on the loss function. We also look forward to the chance to clean and align the notation around $g$ and $\varepsilon$, as mentioned by the reviewer. Thanks to the reviewer for pointing out the typo “setof” on line 147. We will fix these errors in the final version. We hope all of these responses are satisfactory to the reviewer. As mentioned at the beginning, we appreciate that the reviewer gave us an overall positive review with respect to the significance, novelty, and applicability of our work. We also appreciate the 3s across the board from the reviewer on Soundness, Presentation, and Contribution. We hope our comments above carry sufficient weight to positively shift the reviewer’s overall score and assessment of our paper. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' rebuttal, the improved clarity, and the fact that some inconsistencies were typos. I will increase my score accordingly. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer's increased score. Thank you for the continued engagement with our paper and for appreciating the improved clarity and corrected typos. Thank you again.
Summary: The authors propose a way to solve the problem of alignment in Machine Learning using Conformal Risk Control. They first expand the previous work of conformal risk control to multidimensional parameters $\boldsymbol{\lambda}$, then use this extension to propose a way to test if a function belongs to a certain class of functions $\mathcal{P}$ that we assume is aligned with user interests. Finally, they demonstrate how this method can be used to test the monotonicity of functions. Strengths: * The theoretical foundations of the paper seem pretty solid. The methodology is clear and the paper is quite easy to follow * The monotonicity and concavity examples are pretty convincing * Detailing the linearity case helps in understanding the theoretical foundation and reasoning of the method Weaknesses: * There is only a single example with results. Presenting a method for concavity is interesting but it would be nice to see it applied to a real-world example * There is still a significant gap between monotonicity/concavity and alignment as we understand it in AI. Technical Quality: 3 Clarity: 3 Questions for Authors: see the weakness on concavity. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: * The authors advertise this paper as being useful in GenAI. What are examples of potential classes of functions that would capture alignment for generated text or images, for example? It is indeed a good first step to test membership in a class, but defining that class can be very hard in the most important applications Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their positive assessment of our paper. We appreciate the recognition of the solid theoretical foundations, the clarity of our methodology, and the overall readability of the paper. We also value the opportunity to address your concerns and clarify our contributions. Regarding the reviewer’s first stated weakness: "There is only a single example with results. Presenting a method for concavity is interesting, but it would be nice to see it applied to real-world examples," we believe there may be a misunderstanding. We included two additional examples on real datasets for monotonicity in the appendix and a real-world example for the concavity method. We briefly mentioned these at the end of the Results section in 4.1, but we will make this more explicit in the final version. We hope the reviewer finds these additional examples convincing of the strength of our paper. The reviewer’s other stated weakness concerns the gap between monotonicity/concavity and alignment in AI. We appreciate the desire for a more detailed discussion on this matter. As discussed in our abstract and introduction, our goal is to propose a framework for interpreting and performing alignment in settings whose outputs are less interpretable and less amenable to human feedback. Our aim, then, was to show that this framework could extend from properties like monotonicity to more complex properties of generative models. Our methodology allows us to define and align desirable properties of generative models, provided we can test for these properties at a per-input level. We included this discussion in our general response but will restate it here for the reviewer’s convenience. We want to highlight this claim via a section that may have been too buried originally. The connection to Yadkori et al.
(2024) in Section 6 Related Work was not only to acknowledge their work but also to highlight the applicability of our methodology to a generative AI setting. We discuss that their use of conformal risk control to mitigate LLM hallucination can be captured as a specific case of our more general property alignment perspective. We discuss that “not hallucinating” can be considered a property of the function $(a, f)$, where $a$ determines whether the model should abstain, and if not, the function $f$ gives the output. This example shows how our approach of forming the conformal intervals is not limited to subsets of the reals since, in this application, the conformal interval is either just $f(X)$ or $\{f(X), \text{Abstain}\}$ for a sufficiently conservative $\lambda$. While including an application of our approach for LLM alignment is beyond the scope of this paper and not feasible for this rebuttal, we believe this discussion highlights the potential applicability of our approach to generative AI. Our methodology potentially enables one to define and align desirable properties of generative models as long as there is a way to test for these properties at a per-input level. We also wish to draw attention to the importance of monotonicity through an additional example provided in Figure 1 of the attached PDF. Although we lack the time or scope to provide a comprehensive comparison of our method to the extensive fairness literature, we have included a figure for results of applying our method to the UCI ML repo “Student Performance” dataset. This dataset is often used as a benchmark in the fairness literature (see “A survey on datasets for fairness-aware machine learning” Quy et al. 2022). We take “sex” to be the protected attribute and the final grade “G3” to be the real-valued outcome of interest. The attached figure demonstrates the lambdas needed to achieve a constant predicted grade when varying “sex” with a tolerance of 0.1 for $\alpha$.
Given the reference to applications of monotonicity in medical triage at the beginning of Section 4.1, we have included Figure 2 in the attached PDF showing the results of our method on the UCI ML repo “Liver Disorders” dataset. Here, we take the target of the average number of drinks per day (specified as the target in the repo) to be monotonically increasing in the feature “mcv,” which is known to be predictive of liver disorders. This example from the high-stakes medical domain emphasizes that monotonicity significantly improves trust, interpretability, and the adoption of clinical models. We also point the reviewer to another newly included experiment in Figures 3a and 3b at the end of the general response, highlighting the applicability of the framework to a more complex safety alignment problem of avoiding side effects. The details of this experiment are included in the global response. We hope these examples demonstrate that while monotonicity and concavity may not match the usual notion of alignment in generative AI, they are important properties of models. Our methodology allows for application to more complex settings, such as generative AI and RL. --- Rebuttal Comment 1.1: Comment: Thank you for taking the time to provide this detailed answer. I was also questioning the improvement from the multidimensional $\lambda$ but have been convinced by your answer to reviewer 9EgD. The application to LLMs via non-hallucination is also pretty convincing. Running an experiment against the Yadkori et al. (2024) method would have made it clearer to the reader how much using the multidimensional $\lambda$ instead of separate ones improves on that baseline, but obviously there is not enough time for it. I still think 7 is the appropriate rating for this paper. --- Reply to Comment 1.1.1: Comment: Thank you for your continued engagement with our paper and maintaining your positive review of the work.
We are happy to hear that our answer to reviewer 9EgD with regard to the multidimensional $\lambda$ and our discussion regarding applications to LLMs were convincing. Thank you again.
Summary: This paper proposes a method to post-process a pre-trained model to align it with a subset of hypotheses on which specific desired behaviors are attained. The proposed method relies on proximity-oblivious testers (POTs) to detect misalignment, based on which a conformal risk control process is used to calibrate the prediction interval so as to bound the distance from the desired subset of hypotheses. Strengths: 1. The considered problem is general and important. The properties to be aligned can include many possible instances. 2. The paper is overall well written and easy to follow. Weaknesses: 1. The proposed aligning method is largely built on conformal risk control to calibrate the prediction interval, so the original technical contribution needs to be highlighted. 2. The generalization to multi-dimensional conformal risk control is a bit straightforward and an immediate result from the original conformal risk control. The technical challenges need to be highlighted as well. 3. The proposed method depends on the reliability of the POT, as mentioned in Line 181. However, to guarantee a rigorous prediction, it is important to consider failures and errors of the POT and how to guarantee a valid prediction in that case in detail. Technical Quality: 2 Clarity: 3 Questions for Authors: N.A. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: N.A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their time and effort in reviewing our paper and providing valuable feedback. We appreciate that the reviewer acknowledges the generality and importance of the problem we considered, as well as the flexibility of our approach concerning the properties that can be addressed. Below, we provide responses to each of the three listed weaknesses. The first listed weakness: “1. The proposed aligning method is largely built on conformal risk control to calibrate the prediction interval, so the original technical contribution needs to be highlighted.” We appreciate the call to better highlight our technical contribution. An important contribution of this work is combining the two previously unrelated topics of conformal risk control and property testing to tackle the alignment problem. To the best of our knowledge, ours is the first work to do so. We feel this methodological approach, along with our main theorem, is a strong contribution. We also want to bring to the reviewer’s attention our technical contributions in Section 5. We highlight these results in our global response but repeat them here for convenience. Our theoretical results in this section build on random feature model theory (Mei and Montanari 2023, Misiakiewicz and Montanari 2023) to obtain insights regarding the impact of model size and sample size on adherence to properties $\mathcal{P}$. We show that even if the true data-generating process adheres to $\mathcal{P}$, if there is a small noise bias, then overparameterized models will fail to satisfy a desired property $\mathcal{P}$ regardless of how much data is collected. This result has significant implications with respect to the persistent need for alignment techniques regardless of how much training data large models are trained on. We hope this clarifies the strength of the technical contributions of our paper. The second listed weakness:
“The generalization to multi-dimension conformal risk control is a bit straightforward and an immediate result from the original conformal risk control. The technical challenges need to be highlighted also.” We agree with this point; we attempted to clearly state in both the proposition statement in the main text and the proof in the appendix that the proof closely mimics the original result. However, we do feel that the literature has overlooked the potential of something like multi-lambda conformal risk control so far, even in instances in which it would have been helpful. In particular, as mentioned in the global response, Yadkori et al. (2024) utilize two separate conformal procedures to calibrate two separate functions, which could have been simultaneously calibrated in tandem using multi-lambda conformal risk control. Other works, such as “Conformal Risk Control for Ordinal Classification” (Xu, 2023), assume that lower and upper bounds on a prediction set are given by functions $l(\lambda)$ and $u(\lambda)$, but requiring both ends of the interval to grow simultaneously may not be generally necessary; multi-lambda conformal risk control provides a flexible implementation of the conformal risk control algorithm in such settings. The third listed weakness: “The proposed method depends on the reliability of the POT, as mentioned in Line 181. However, to guarantee a rigorous prediction, it is important to consider the failure and errors from POT and how to guarantee the valid prediction in this case in details.” We believe there is room for clarification on this point. While general property testing algorithms may have two-sided errors, for POTs we can guarantee that if $f$ satisfies property $\mathcal{P}$, then with probability one the POT will correctly accept this function. In other words, there can be no false negatives or type II errors. The POTs can indeed have a false positive or type I error.
This does not impact our main result, however, since for any valid testing algorithm (which we assume $T$ is), any function that $T$ accepts on at least a $1-\alpha$ fraction of points is by definition at most $\alpha$-far from having the property that $T$ tests for. It is possible, however, that future results concerning the distribution of the loss on test points (since here we only guarantee a bound on the expectation of the loss) may need to take into account the error probability of the tester. We thank the reviewer for bringing up this point, as it is closely aligned with ideas on our minds for potential future directions. We hope that our responses to each of these listed weaknesses sufficiently address the reviewer’s concerns and can convince them of the work’s contribution. As the reviewer mentioned, the problem addressed by this work is both general and interesting, and our approach is powerful and flexible. We hope the reviewer is now more convinced that the approach is also theoretically and technically strong, regarding both the new approach to alignment and the results on the persistent need for alignment based on random feature model theory. Again, we would like to thank the reviewer for engaging with our work and hope their judgment of the work has become more positive. --- Rebuttal Comment 1.1: Comment: We would like to thank the reviewer again for their initial review and feedback on our paper. We hope that the reviewer finds the discussions and clarifications provided in our rebuttal convincing, but we would also be happy to address any further questions the reviewer may have regarding our paper or our rebuttal. Thank you again for your time and effort.
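To make the one-sided-error discussion above concrete, here is a toy proximity-oblivious-style monotonicity tester sketch. The construction (random-pair probing on a finite ordered domain, fixed seed for derandomization) is our own illustration, not the paper's exact POT:

```python
import random

def pot_monotone(f, domain, trials=3, seed=0):
    """Toy proximity-oblivious-style tester for monotonicity (illustrative).
    Fixed randomness via `seed` makes the tester a deterministic function of
    its inputs, as in the derandomized testers discussed above."""
    rng = random.Random(seed)
    for _ in range(trials):
        x, y = sorted(rng.sample(domain, 2))  # random pair with x < y
        if f(x) > f(y):
            return "Reject"  # a violating pair is conclusive evidence
    return "Accept"

# A monotone f is accepted with probability one (no false rejections), while
# a function far from monotone may still be accepted by chance: the error is
# one-sided, exactly the property the rebuttal relies on.
verdict = pot_monotone(lambda x: 2 * x, list(range(10)))
```

A strictly decreasing function, by contrast, is rejected on the first sampled pair regardless of the seed, since every pair violates monotonicity.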
Rebuttal 1: Rebuttal: We thank all reviewers for their positive feedback and critical assessment of our paper. We aim to address remaining concerns and reinforce the contributions of our work. This response focuses on key points that may have been underemphasized, aiming to further convince the reviewers who gave us Accept (7) scores of our work’s quality and to persuade those who gave us Borderline reject (4) scores to reconsider positively. Reviewers appreciated the flexibility and applicability of our approach to various problems, noting its potential for domains such as generative AI, physical applications, and fairness. There were questions, however, around our focus on monotonicity and how our approach extends to generative AI. We want to emphasize that our methodology allows us to define and align desirable properties of generative models as long as we have a way to test for these properties at a per-input level. We wish to highlight this claim via a section of the paper that may have been too buried originally. The connection to Yadkori et al. (2024) within Section 6 Related work was intended not only to give due credit to their work, but also to highlight the applicability of our methodology to a generative AI setting. We discuss that their use of conformal risk control to mitigate LLM hallucination can be captured as a specific case of our more general property alignment perspective. We discuss in Section 6 that “not hallucinating” can be considered a property of the function $(a, f)$, where $a$ determines whether the model should abstain, and if not, the function $f$ gives the output. This example also shows that our approach of forming the conformal intervals is not limited to subsets of the reals since, in this application, the conformal set is either just $f(X)$ or $\{f(X), \text{Abstain}\}$ for a sufficiently conservative $\lambda$. We also highlight the use of multi-lambda conformal risk control at the end of Section 6, since Yadkori et al.
(2024) use a separate conformal procedure for calibrating their match function and abstention function, whereas using multi-lambda conformal risk control within our methodology would allow these functions to be calibrated in tandem. Including a generative AI experiment is beyond this paper’s scope, but our discussion highlights our approach's potential: our methodology enables defining and aligning desirable properties of generative models, provided they are testable at a per-input level. A significant technical contribution, perhaps underemphasized, is in Section 5. Our theoretical results in this section build on random feature model theory (Mei and Montanari 2023, Misiakiewicz and Montanari 2023) to obtain insights regarding the impact of model size and of data quality and volume on adherence to properties $\mathcal{P}$. We show that even if the true data-generating process adheres to $\mathcal{P}$, a small noise bias will cause overparameterized models to fail to satisfy the desired property $\mathcal{P}$, regardless of how much data is collected for training. This result has significant implications: even if we keep making models bigger and use unlimited training data, we still need alignment techniques because training data always has some small imperfections. Finally, we want to share in this global response an additional experiment that we hope drives home the general applicability of our method. We consider the “box” or sokoban environment [1] as presented in [2,3]. In this sokoban environment, an image of which is attached in the pdf, the agent is rewarded for reaching the goal state in as few moves as possible (-1 reward per move, 50 reward and termination of the episode upon reaching the goal state). There is also a box, and if the agent moves in the direction of the box, the box is pushed in that direction. We train a tabular Q-learning agent for this goal, which expectedly converges to the optimal performance of 45.
However, the authors of [2] consider the case in which we wish to penalize the agent for committing an “irreversible” action, which in this case is an action that pushes the box into a corner. To account for this, they add a penalty term to the reward that punishes deviations from the initial environment’s trajectory. We use our alignment via conformal risk control to approach this safety problem. Our pretrained tabular Q-learning agent was not trained to avoid side effects. Here, $f$ is $Q(s,a)$, mapping state-action pairs to q-values. The property of interest is whether it induces irreversible actions, i.e., whether the action $\text{argmax}_a Q(s,a)$ for a state is irreversible. Our calibration set is $X_i = s_i$ and $Y_i = 1$ if $a_i = \text{argmax}_a Q(s_i,a)$ is irreversible, 0 otherwise. The loss function for CRC is 0 if the highest q-value action is reversible or if the interval around the highest q-value includes a reversible action’s q-value, and 1 otherwise. Using CRC, we find the optimal lambda pair $\lambda^+=15.3$, $\lambda^-=0$. We provide a distribution of q-values in the attached pdf. Our agent chooses the action with the maximum q-value if its interval is disjoint from the others; otherwise, we defer to a safe-action checker, choosing the highest q-value among safe actions. This approach is akin to the abstention problem in Yadkori et al. (2024), allowing the pretrained model to output its optimal action unless it is deemed too risky, in which case the agent checks for safe actions. Through our alignment via conformal risk control procedure, we obtain an agent that achieves the optimal score of 43 while avoiding potential side effects. [1] AI safety gridworlds. J Leike et al. [2] Penalizing side effects using stepwise relative reachability. V Krakovna, L Orseau, R Kumar, M Martic, S Legg [3] Avoiding side effects by considering future tasks.
V Krakovna, L Orseau, R Ngo, M Martic, S Legg - Advances in Neural Information Processing Systems, 2020 Pdf: /pdf/902f49655076d71a7de3fd92813d1f991d37bfa5.pdf
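The conformal action-selection rule described in the rebuttal above can be sketched as follows. This is our own minimal illustration, not the authors' code: `q_values` and `is_safe` are hypothetical inputs, and the default lambda values are the fitted ones reported in the rebuttal ($\lambda^+=15.3$, $\lambda^-=0$).

```python
def select_action(q_values, is_safe, lam_plus=15.3, lam_minus=0.0):
    """Pick the argmax-q action if its conformal interval [q - lam_minus, q + lam_plus]
    is disjoint from every other action's interval; otherwise defer to a
    safe-action checker and take the highest q-value among safe actions."""
    actions = range(len(q_values))
    best = max(actions, key=lambda a: q_values[a])
    lo_best = q_values[best] - lam_minus  # lower end of the best action's interval
    # Disjoint iff every other interval's upper end lies strictly below lo_best.
    disjoint = all(q_values[a] + lam_plus < lo_best for a in actions if a != best)
    if disjoint:
        return best
    # Too risky: fall back to the safe-action checker.
    safe = [a for a in actions if is_safe(a)]
    if not safe:
        return best  # no safe alternative; keep the greedy action
    return max(safe, key=lambda a: q_values[a])
```

With the rebuttal's $\lambda^+=15.3$, an action is only taken greedily when its q-value beats all others by more than 15.3; otherwise the safe-action fallback fires, mirroring the abstention mechanism in Yadkori et al. (2024).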
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Weisfeiler and Leman Go Loopy: A New Hierarchy for Graph Representational Learning
Accept (oral)
Summary: The paper introduces the $r$-loopy Weisfeiler-Leman ($r$-$l$WL) test, an innovative hierarchy of graph isomorphism tests, and the corresponding GNN framework, $r$-$l$MPNN. This new approach extends the counting capabilities of previous algorithms, specifically allowing the counting of cycles up to length $r+2$ and homomorphisms of cactus graphs. Empirical validation demonstrates the expressiveness and performance of $r$-$l$MPNN on both synthetic and real-world datasets. Strengths: The paper's strengths are rooted in its originality, introducing a novel algorithm ($r$-$l$WL) and corresponding GNNs ($r$-$l$GIN) that significantly enhance the expressivity of graph neural networks. These contributions are supported by rigorous theoretical proofs and empirical validation. Specifically, $r$-$l$WL demonstrates the ability to count cycles up to length $r+2$ and homomorphisms of cactus graphs, substantiated with detailed mathematical proofs. The experiments use several synthetic datasets to validate the counting power and expressiveness of $r$-$l$MPNN effectively. Furthermore, the paper contextualizes its contributions within prior work, highlighting the limitations of existing methods and demonstrating how $r$-$l$WL and $r$-$l$MPNN address these gaps. Overall, the claims are well-supported by theoretical proofs and empirical results, indicating a clear improvement over existing methods. Weaknesses: Some of the mathematical proofs are complex and may be difficult for readers without a strong background in graph theory and GNNs. Providing additional intuitive explanations or examples could improve accessibility. While the empirical validation is strong, it could be expanded to include a broader range of real-world datasets to further demonstrate the robustness and generalizability of the approach. 
Technical Quality: 3 Clarity: 4 Questions for Authors: The paper is generally well-written and clear, although some sections could benefit from additional explanations or examples to aid understanding. I have a few questions: 1. How does the computational complexity of $r$-$l$WL compare to existing higher-order WL variants like $3$-WL in practice, especially when dealing with large and dense graphs? 2. What are the limitations of $r$-$l$WL in terms of scalability and memory usage, particularly when applied to real-world datasets with varying degrees of sparsity? 3. In Table 1, why didn't your method perform well on the Extension (100) and CFI (100) datasets compared to 3-WL and PPGN? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors have addressed the limitations related to the complexity of higher-order GNNs and the scalability issues associated with $k$-WL. However, a more detailed discussion on the limitations of $r$-$l$WL in terms of computational overhead and potential impact on large-scale applications would be beneficial. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review, for acknowledging the originality and rigor of our paper, and for voting to accept. We address each point individually below. “W/Q” numbers the weakness or question, followed by our response. --- > **W1**: “Some of the mathematical proofs are complex and may be difficult for readers without a strong background in graph theory and GNNs. Providing additional intuitive explanations or examples could improve accessibility.” **A1**: We agree that some of the mathematical proofs are difficult for readers without a strong background in graph theory and GNNs. To improve accessibility, we have included visualizations of some proofs and counterexamples in the appendix; see, e.g., Figures 10, 11, and 12, or Section H. Following Reviewer 1sHr’s suggestion, we have added additional intuitive explanations before every proof in the respective section. Thank you for the suggestion! > **W2**: “While the empirical validation is strong, it could be expanded to include a broader range of real-world datasets to further demonstrate the robustness and generalizability of the approach.” **A2**: Thanks for recognizing our experiments! We have expanded the experiments by including the peptides-functional and peptides-struct datasets from the LRGB paper [1]. Our first preliminary results without any sweeping or positional encodings suggest improved performance over standard baselines:

| | Peptides Structural (MAE $\downarrow$) | Peptides Functional (AP $\uparrow$) |
|---------------|-----------------|---------------------|
| GCN | 0.3496 ± 0.0013 | 59.30 ± 0.23 |
| GINE | 0.3547 ± 0.004 | 54.98 ± 0.79 |
| GatedGCN | 0.3420 ± 0.0013 | 58.64 ± 0.77 |
| $7$-$\ell{}$GIN | 0.2513 ± 0.0021 | 65.70 ± 0.60 |

[1] Vijay Prakash Dwivedi, Ladislav Rampásek, Michael Galkin, Ali Parviz, Guy Wolf, Anh Tuan Luu, Dominique Beaini: Long Range Graph Benchmark. NeurIPS 2022.
> **Q1**: “How does the computational complexity of $r$-$\ell{}$WL compare to existing higher-order $k$-WL variants like $3$-WL in practice, especially when dealing with large and dense graphs?” **A1**: When dealing with dense graphs, i.e., graphs with $n^2$ edges, the computational complexity of $r$-$\ell{}$WL is comparable to the complexity of $k$-WL. In the worst-case scenario, assuming a complete graph, every node has $n$ neighbors. Hence, the preprocessing step requires $n \cdot n^{r+2}$ operations, and the number of forward operations is also exponential in $r$. This is a limitation for large and dense graphs. We noted this in our manuscript; see Page 9 under Limitations. From a practical perspective, our method currently scales well to the Peptides dataset, which contains 15k graphs with on average 151 nodes and 307 edges, even when using $r=7$. > **Q2**: “What are the limitations of $r$-$\ell{}$WL in terms of scalability and memory usage, particularly when applied to real-world datasets with varying degrees of sparsity?” **A2**: Our $r$-$\ell{}$WL algorithm is particularly well-suited for sparser graphs, as demonstrated in our experiments. This focus allows us to efficiently handle a wide range of real-world datasets that exhibit sparsity. While there are scalability and memory challenges when dealing with denser graphs, we see this as an exciting opportunity for future enhancements. To increase scalability and manage memory usage more effectively, one potential future direction includes random subsampling methods to subsample paths per node. We are actively researching ways to optimize our method to better manage dense graphs, thereby broadening the applicability of our algorithm across diverse datasets. > **Q3**: “In Table 1, why didn't your method perform well on the Extension (100) and CFI (100) datasets compared to 3-WL and PPGN?” **A3**: Wang et al.
[1] note that 3-WL and PPGN “surpasses most GNNs in CFI graphs due to k-WL’s global receptive field,” whereas local WL variants like our $r$-$\ell{}$WL exhibit lower performance due to their limited receptive fields. We also note that CFI graphs are particularly challenging: 60 pairs are distinguishable only by 3-WL, 20 by 4-WL, and 20 remain indistinguishable even by 4-WL. To improve performance, we would need to increase $r$. However, this runs into memory issues given our current resources, as CFI graphs contain many paths in the $r$-neighborhoods, often having a high number of edges (up to 742). Our method successfully distinguished 95 out of 100 pairs of extension graphs, which aligns closely with the performance of the baseline models as detailed in Table 2. This demonstrates that our algorithm is competitive on the extension graphs. We are actively exploring ways to optimize our method to handle such challenging graphs more effectively in future work. [1] Wang, Yanbo, et al. "An Empirical Study of Realized GNN Expressiveness." International Conference on Machine Learning. PMLR, 2024. --- Thank you again for your valuable feedback, which we will incorporate into a potential camera-ready version, making our work even more accessible! --- Rebuttal Comment 1.1: Title: Satisfied with the answers Comment: Thank you to the authors for addressing my comments and for their thoughtful responses to the other reviewers' feedback. I continue to believe this is a good paper and will maintain my score.
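To make the preprocessing cost discussed in the rebuttal above concrete: the expensive step is enumerating bounded-length paths around each node, whose count grows like $n^{r+2}$ on dense graphs. Below is our own generic DFS sketch of such an enumeration (not the paper's implementation; `adj` is an assumed adjacency-list representation).

```python
def simple_paths_from(adj, start, max_len):
    """Enumerate all simple paths from `start` with 1..max_len edges.

    On a complete graph with n nodes this produces on the order of n^max_len
    paths, illustrating the worst-case preprocessing cost for dense graphs.
    """
    paths = []

    def dfs(path):
        if len(path) > 1:
            paths.append(list(path))  # record every path with >= 1 edge
        if len(path) - 1 == max_len:  # reached the edge budget
            return
        for nxt in adj[path[-1]]:
            if nxt not in path:       # keep paths simple (no repeated nodes)
                path.append(nxt)
                dfs(path)
                path.pop()

    dfs([start])
    return paths
```

On a triangle, for example, node 0 has four such paths of at most 2 edges; on sparse molecular graphs the counts stay small, which is why the method scales there.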
Summary: The paper proposed a hierarchy of graph isomorphism tests and a corresponding GNN framework, $r$-$\ell$-MPNN, while showing the ability to count homomorphisms of cactus graphs. Strengths: The strengths of the paper are: * The ability to count homomorphisms of cactus graphs without any additional explicit substructure counts. * Scalability towards large datasets, especially when the graphs are sparse for these datasets. * The paper is well-written and easy to understand. Weaknesses: While the paper is well written, some terms are undefined or unexplained, such as HASH; see the first question in the following section. Technical Quality: 3 Clarity: 3 Questions for Authors: 1) What is the HASH function? Do you have an example of such a function? 2) Can you add tables comparing the runtimes of your method to other methods? 3) What is the motivation behind using $r=5$ in the experiments? Would any $r>5$ yield worse results, or only marginally better results at the expense of higher runtime? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review, acknowledging our contributions, and voting to accept our paper. We address each point individually below. “W/Q” numbers the weakness or question, followed by our response. --- > **W**: “While the paper is well written, some terms are undefined or unexplained, such as HASH; see the first question in the following section.” **A**: Thank you for your feedback. We have carefully revised our manuscript to ensure that all important terms, including the HASH function (see below), are clearly defined. > **Q1**: “What is the HASH function? Do you have an example of such a function?” **A1**: The term HASH function is standard in color refinement algorithms such as $r$-$\ell{}$WL. It refers to any injective function on multisets. In practice, one can use SHA-256 of the sorted multiset as an example. For the neural variant, $r$-$\ell{}$GIN, the HASH function can be realized by summation followed by an MLP. We have added this clarification to the paper. > **Q2**: “Can you add tables comparing the runtimes of your method to other methods?” **A2**: We have included tables comparing our methods to other state-of-the-art methods in terms of memory and runtime. Please refer to Table 9 for these comparisons. Please let us know if you are interested in any other specific comparison. We are happy to add more comparisons. > **Q3**: “What is the motivation behind using $r=5$ in the experiments? Would any $r>5$ yield worse results, or only marginally better results at the expense of higher runtime?” **A3**: We chose $r=5$ because it matches the cycle-counting power of 3-WL. For the second question, please refer to Table 8, which compares predictive performance on ZINC12K when varying $r$. This table demonstrates that $r=5$ provides a good balance between accuracy and computational efficiency.
Increasing $r$ enhances the representation power and complexity of the model, which can negatively impact generalization performance. Therefore, it is crucial to strike a balance between expressivity, computational cost, and generalization abilities. To address the Reviewer's question more concretely, we tested a $12$-$\ell{}$GIN on ZINC12K. This configuration resulted in a test MAE of $0.075\pm0.003$, which is within the standard deviation of the performance of a $5$-$\ell{}$GIN. The runtime increased by approximately 20%, which remains more efficient than other higher-order GNNs, such as those based on the $3$-WL test. We will include the results for $r=6,\ldots,12$ in Tables 8 and 9. --- Thank you again for your valuable suggestions, which have helped improve the clarity and contribution of our work. Please let us know if you have any more questions or suggestions! --- Rebuttal Comment 1.1: Title: Good rebuttal Comment: The authors satisfactorily addressed my questions. However, I have kept my score, increasing my confidence to 4 in light of the authors' answers.
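The multiset HASH described in A1 above (SHA-256 of the sorted multiset) can be sketched in a few lines. This is our own illustration under the assumption that colors have unambiguous string encodings (e.g., integers); it is injective on multisets up to SHA-256 collisions.

```python
import hashlib

def multiset_hash(colors):
    """Order-independent hash of a multiset of colors, as used in color
    refinement: canonicalize by sorting, then apply SHA-256."""
    canon = ",".join(sorted(str(c) for c in colors))  # same for any ordering
    return hashlib.sha256(canon.encode()).hexdigest()
```

Two multisets with the same elements and multiplicities map to the same digest regardless of order, while differing multiplicities yield different digests.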
Summary: In this paper, the authors propose a loopy version of the Weisfeiler-Lehman (WL) algorithm. This version utilizes an extended notion of neighborhood, incorporating paths between standard neighboring vertices to update vertex coloring. By parameterizing the length $r$ of these paths, we obtain $r$-$l$WL. The results include a hierarchy for $r$-$l$WL based on $r$ and a comparison with $k$-WL. The most technical result is that $r$-$l$WL is expressive enough to determine homomorphism counts of $(r+2)$-cactus graphs. Experiments show that $r$-$l$WL is competitive in detecting substructures. Strengths: **S1 Interesting Idea:** The inclusion of paths in the neighborhood is an elegant and innovative way of extracting local information around a vertex. This approach allows for rigorous theoretical analysis and captures cactus graphs, which is a significant advantage. **S2 Experiments:** The authors empirically demonstrate that the neural version of $r$-$l$WL captures crucial information for graph learning tasks. **S3 Capturing Cacti:** The most technically involved proof shows that cactus graphs, or at least their homomorphism counts, can be captured. This is a noteworthy result. Weaknesses: **W1 Capturing Cacti:** The motivation for focusing on cactus graphs is unclear. The authors should better justify why this class of graphs is interesting and relevant, and provide examples in the main paper. **W2 Insufficient Comparison with Subgraph GNNs:** The paper mentions that subgraph GNNs are bounded by $3$-WL, but there are subgraph GNNs beyond $3$-WL, such as $k$-OSANs (ordered subgraph aggregation networks, Qian et al.), which can capture features that $k$-WL cannot and are bounded by $(k+1)$-WL. The paper overlooks this line of work and lacks comparison with $k$-OSANs, both empirically and theoretically.
For example, specifying paths of length $r$ and a central node requires special subgraphs of size $r+1$ (the path plus the central node), suggesting that $r$-$l$WL may be included in $(r+1)$-OSAN? This could imply that some results stem from $k$-OSAN properties. A more detailed comparison is needed. **W3 Experimental Comparison:** The experiments do not seem to include any subgraph GNNs that capture properties beyond 3-WL. The authors should make clearer what the capabilities of the methods they compare with are. **W4 Unlabeled Graphs:** The paper does not address vertex labels. Can the approach be generalized to vertex labels? Additionally, $c^{(0)}$ in Section 3.2 is undefined. Minor Comments: The authors sometimes use obscure references. For instance: - line 71: Do you mean Tinhofer or Dvorak? - line 96: Why refer to Dimitrov 2023 for the notion of graph invariant, which has existed for ages? Technical Quality: 3 Clarity: 3 Questions for Authors: Please comment on **W1**, **W2** and **W3**. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: This has been addressed in a satisfactory way by the authors. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your thorough review and valuable suggestions. We include a theoretical and experimental comparison with $k$-OSAN in an updated manuscript and believe the points below address the Reviewer's questions adequately. --- > **W1**: “The motivation for focusing on cactus graphs is unclear…” **A1:** Our primary focus was on developing a scalable and expressive method. Our analysis revealed a close connection to cactus graphs, a significant class lying between trees and tree-width-2 graphs. While the class of tree-width-2 graphs is larger, no GNN running in less than cubic time is known that can count all tree-width-2 graphs. Hence, our method can provably capture a smaller class than 3-WL, but is more scalable and local. This is a trade-off we make. Note that this shows that our method, while being local, is not less expressive than other non-local variants, e.g., $k$-OSAN; see the next answer. Moreover, many chemical datasets contain cactus graphs; $58.77\%$ of ZINC250K are cactus graphs. Consider, for instance, rings with hydrogen atoms attached or any other structure (e.g., carboxyl groups). The presence of such structures, which are cactus graphs and not mere cycle graphs, can significantly alter the molecular properties of a more complex graph. The practical significance of being able to count cactus graphs is also shown by the improved predictive performance of our model when applied to chemical datasets. We are happy to follow the Reviewer's suggestion and include this discussion on the relevance of cactus graphs in our manuscript. > **W2**: “Insufficient Comparison with Subgraph GNNs: …” **A2**: Thank you for highlighting this important direction. We will address this in detail in a potential camera-ready version, referencing Qian et al. (2022). They introduce $k$-OSAN and vertex-selected $k$-OSAN ($k$-VSAN), both bounded by $(k+1)$-WL but incomparable to $k$-WL.
We can make the following simple observation: For every $k$, there exists an $r$ such that $r$-$\ell{}$WL is not less powerful than $k$-OSAN and $k$-VSAN, following from the fact that both are less expressive than $(k+1)$-WL and Corollary 1 in our manuscript. Moreover, as a corollary of our Theorem 2, we show that for every $k \geq 1$, there exist infinitely many graphs that $1$-$\ell{}$WL can separate but $k$-VSAN cannot distinguish. This shows that the Reviewer’s conjecture that “$r$-$\ell{}$WL may be included in $k$-OSAN” does not hold. The proof of this result parallels the proof of Corollary 2 ii) in our manuscript: (B. Zhang et al., 2024) characterized the class of patterns $\mathcal{F}^{\mathrm{sub}(k)}$ that $k$-VSAN can homomorphism-count. There exist infinitely many cactus graphs in $\mathcal{M}^3$ that are not in $\mathcal{F}^{\mathrm{sub}(k)}$. A combination of Theorem 3.8 in (B. Zhang et al., 2024) and Theorem 2 in our manuscript shows that there are infinitely many graphs that $1$-$\ell{}$WL can separate but $k$-VSAN cannot distinguish. We will present this result in the main paper and a detailed proof in the appendix. This result demonstrates that even our least expressive algorithm is not less powerful than any $k$-VSAN algorithm, which may be of independent interest, given that the expressive power of $k$-VSAN increases with $k$, along with its computational complexity. Since there are pairs of graphs that $1$-$\ell{}$WL cannot distinguish but $k$-VSAN can, this proves that $1$-$\ell{}$WL and $k$-VSAN are incomparable for any $k$. > **W3**: “Experimental Comparison: …” **A3**: We are happy to update our manuscript and include OSAN as a baseline on the BREC dataset. To go beyond $3$-WL with $k$-OSAN, we would need to consider at least $2$-OSAN, which has a complexity of $n^2m$, where $n$ is the number of nodes and $m$ the number of edges.
This complexity is too high for our computational resources, which is why we did not use $2$-OSAN or higher-order variants as baselines, consistent with other research in this area. However, if the Reviewer can provide a reference with $2$-OSAN baseline results, we are happy to include those in our paper! We will also clarify the capabilities of other methods. Table 1: PPGN has $3$-WL expressivity and can count up to 7-cycles and homomorphism-count all graphs of tree-width 2. Nested GNNs are strictly between $1$-WL and $3$-WL. GSNs include explicit subgraph counts; while GSN is more powerful than $1$-WL, its exact expressive power depends on the chosen pattern to be counted. Table 2: We have a strict hierarchy in the expressive power of the baselines we chose: MPNN ≤ Subgraph GNN ≤ local $2$-GNN ≤ local $2$-FGNN. These variants, apart from MPNNs, are more expressive than $1$-WL and can subgraph-count up to 7-cycles in theory. Their homomorphism-expressivity is fully characterized in (B. Zhang et al., 2024). We will add these explanations in more detail, along with details for other real-world experiments, to our appendix. > **W4**: “Unlabeled Graphs: …” **A4**: Yes, we can easily account for graphs with vertex labels, both for the patterns we aim to subgraph- or homomorphism-count and for the graphs themselves. Note that in our empirical evaluation, all GNNs use node attributes, if available in the datasets. Our proofs require minimal modifications to accommodate vertex labels. We will follow your suggestion and update our manuscript to include these modifications. Thank you! Minor Comments: Line 71: We referred to Tinhofer, who proved that two graphs are fractionally isomorphic if and only if $1$-WL does not distinguish them. This result was used by Dell et al. (2018) to prove that $1$-WL is equivalent to the homomorphism-counts of all trees. We would kindly ask the Reviewer to specify the Dvorak reference so that we can add a citation if applicable.
Line 96: We will remove the Dimitrov 2023 reference from this line. --- Thank you again for your valuable feedback, which led to new insights and results. Please let us know if you have any more suggestions or questions! --- Rebuttal Comment 1.1: Title: Good rebuttal Comment: I have read the rebuttal and I am pleased with the responses. I would indeed appreciate it if the case for cactus graphs were made stronger in the paper, as explained here in the rebuttal. Moreover, the connection with subgraph GNNs, whether OSANs or other, should be explored or at least discussed briefly in related work. Based on the rebuttal, I am happy to accept the paper and to raise my score. The paper I referred to is “On recognizing graphs by numbers of homomorphisms” by Zdeněk Dvořák.
Summary: This paper introduces $r$-loopy Weisfeiler-Leman, a new hierarchy of graph isomorphism tests and a corresponding GNN framework. It achieves good cycle-counting power and surpasses $k$-WL in some cases. The power of $r$-$l$WL is examined on various synthetic and real-world datasets. Strengths: 1. Strong theoretical results with concrete proofs. 2. The algorithm is local with good expressivity. Weaknesses: 1. More datasets [1, 2] can be included to evaluate expressivity, especially long-range expressivity. [1] Vijay Prakash Dwivedi, Ladislav Rampásek, Michael Galkin, Ali Parviz, Guy Wolf, Anh Tuan Luu, Dominique Beaini: Long Range Graph Benchmark. NeurIPS 2022. [2] Yanbo Wang, Muhan Zhang. Towards Better Evaluation of GNN Expressiveness with BREC Dataset. arxiv/abs/2304.07702 Technical Quality: 3 Clarity: 3 Questions for Authors: 1. For any pair of non-isomorphic graphs, can $r$-$l$WL differentiate them with large enough $r$? (Similar to how $k$-WL can solve the graph isomorphism problem for graphs with fewer than $k$ nodes.) Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review, acknowledging our contributions, and voting to accept our paper. We address each point individually below. “W/Q” numbers the weakness or question, followed by our response. --- > **W**: “More datasets [1, 2] can be included to evaluate expressivity, especially long-range expressivity.” **A**: Thank you for your suggestion. We have already included the BREC dataset in our manuscript, as detailed in Table 4. Regarding [1], we do not expect more expressive local GNNs, such as our proposed $r$-$\ell$GIN, to outperform state-of-the-art methods on LRGB, as the predictive performance appears to correlate with the GNNs' ability to capture long-range dependencies. However, we are enthusiastic about testing our algorithm on these tasks. Our first preliminary results without any sweeping or positional encodings suggest improved performance over standard baselines:

| | Peptides Structural (MAE $\downarrow$) | Peptides Functional (AP $\uparrow$) |
|---------------|-----------------|---------------------|
| GCN | 0.3496 ± 0.0013 | 59.30 ± 0.23 |
| GINE | 0.3547 ± 0.004 | 54.98 ± 0.79 |
| GatedGCN | 0.3420 ± 0.0013 | 58.64 ± 0.77 |
| $r$-$\ell{}$GIN | 0.2513 ± 0.0021 | 65.70 ± 0.60 |

We are happy to follow Reviewer pJ5G’s suggestion and update our manuscript with these experiments. > **Q**: “For any pair of non-isomorphic graphs, can $r$-$l$WL differentiate them with large enough $r$? (Similar to how $k$-WL can solve the graph isomorphism problem for graphs with fewer than $k$ nodes.)” **A**: Thank you for your insightful question. It is indeed an open question and part of our ongoing research. While it is established that for every $k$, there exists an $r$ such that $r$-$\ell$WL is not less powerful than $k$-WL (see Corollary 1 in our manuscript), this does not prove that increasing $r$ will enable $r$-$\ell$WL to solve graph isomorphism universally.
We conjecture that $k$-WL and $r$-$\ell$WL are incomparable, where $r$ is chosen as described in Corollary 1. We acknowledge the complexity of this topic and appreciate the opportunity to explore it further. --- Thank you again for your insightful review and positive comments, especially regarding our work's strong theoretical contributions. Please let us know if you have any further questions!
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Stability and Generalization of Adversarial Training for Shallow Neural Networks with Smooth Activation
Accept (poster)
Summary: The paper studies generalization bounds for two-layer networks trained using several variants of adversarial training. The main results are a bound on the stability of the algorithm, which in turn provides a bound on the generalization and the robust accuracy. Strengths: - As far as I know, the results presented in this paper are novel. The approach of studying the generalization capabilities of adversarial training through the lens of stability seems like a promising direction with interesting results. - The paper studies 2-layer networks (where the weights of the second layer are fixed) which go beyond current research in this field, and potentially beyond the NTK/kernel regime. - The results on the Moreau Envelope are also novel in this context AFAIK, and provide tighter bounds than the GD or SGD analyses (although they are computationally less efficient). Weaknesses: Although the results of this paper are interesting, there are some issues with the proofs and with some of the setting of the problem, specifically the amount of over-parameterization. In more detail: 1) The assumption on $m$, I think, is a bit misleading. If I understand correctly, the authors assume that $m \geq T^2 \eta^2$, which means that the change of the weights when doing GD is very small compared to the number of neurons. This in turn means that slightly changing the loss (by a single sample) and beginning from the same starting point will lead to almost the same weights, hence the stability result. However, I don’t think this assumption makes sense in more practical settings. It basically means that the number of iterations is so small (or the learning rate is so small) that most weights barely move from their initialization. This is also emphasized in Theorem B.1, where the change depends on $\exp(T\eta/\sqrt{m})$. This means that even if $m$ is slightly smaller than $T^2\eta^2$ (e.g., $m = (T\eta)^{1.5}$), then the difference grows exponentially, and the algorithm is very unstable.
This issue is not addressed in the paper and is only briefly mentioned in the proof intuition. 2) The bounds in Theorem 3.2 don’t make sense. If $\alpha_1(\eta,T) > 1$ then the r.h.s. of the bounds is negative. Do the bounds only work when $\alpha_1(\eta,T) < 1$? If so, this means that $T < 1/(\eta\beta_1)$, so the number of iterations might be very small compared to the learning rate. This also seems relevant to Theorem 3.6 with $\alpha_2$. 3) There is an issue with the value of $T$ in Corollary 3.3. Which value of $T$ gives that $1/(1-\alpha_1) < 1.1$? Suppose $\beta_1$ is a constant; then we need $T<1/\eta$ to get a constant term rather than $T< 1/\eta^2$, no? 4) Line 509 - I don’t understand this; how is Eq. (4) related to $\epsilon_{gen}$? I will be happy to see the authors’ response to these issues and will consider raising my score accordingly. Technical Quality: 2 Clarity: 3 Questions for Authors: 1) Given $\epsilon > 0$, how should we set the parameters of the problem (i.e., $T, \beta_1, \eta, ||W-W_0||, m$, etc.) to get generalization error and robust accuracy smaller than $\epsilon$? 2) What happens when $m$ is chosen independently of $T$ - although we allow it to depend on $n$, i.e., the overparameterized regime compared to the number of samples. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors discuss the limitations of their results. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### W1: Regarding $m\geq \eta^2 T^2$ The Reviewer has a correct understanding of why stability holds, i.e., the change of the weights when doing GD is very small compared to the number of neurons. We utilize the weakly convex robust loss (see Lemma 4.1) as well as $m\geq O(\eta^2 T^2)$ to establish stability (see Theorem 4.3). Note that training neural networks (i.e., implementing the ERM rule) is known to be computationally hard, even for a two-layer network with only three hidden nodes. This has been known since the early 90s. Recent results in deep learning theory focus on two-layer neural networks that are greatly overparameterized ($m$ is large) and involve a certain scale of initialization. Under these settings, we can bound the training error of GD after $T$ iterations by $1/\sqrt{T}$. This is the state-of-the-art result in the computational theory of deep learning. Now, the important point we want to make here is that the key to the analysis is ensuring that the weights of the trained network do not change by much from initialization. One can then show that training neural networks is similar to the dynamics of kernel methods. This is the dominant framework for computational learning-theoretic guarantees for training neural networks and is aptly called the “lazy regime” or the “neural tangent kernel” (NTK) setting. The early works that laid the foundations for this theory are [r1, r2, r3]. ### W2: Regarding $\alpha_1(\eta,T)<1$ Yes, for that result of ours to make sense we do need $\alpha_1(\eta,T) < 1$. However, we can also give a result that is of the same form as the expression in equation (4) of [r4] and get a factor of $C\alpha_1(\eta,T)\cdot (1+\alpha_1(\eta,T))$ instead of $C\frac{\alpha_1(\eta,T)}{1-\alpha_1(\eta,T)}$. In that case we don’t need $\alpha_1(\eta,T) < 1$. ### W3: Regarding $\frac{1}{1-\alpha_1}<1.1$ It is possible that the reviewer did not realize $\eta=O(\frac{1}{\sqrt{T}})$.
In order to get $\frac{1}{1-\alpha_1}<1.1$, we need $\alpha_1$ to be a small positive real number, and equivalently, $\eta\sqrt{T}$, $\frac{\eta T}{n}$ and $\sqrt{\beta_1\eta T}$ are small positive real numbers (through the definition of $\alpha_1$). Note that we have selected $\eta=c_0\frac{1}{\sqrt{T}}$ in Corollary 3.3; plugging in $\eta$, we equivalently require $c_0$, $c_0\frac{\sqrt{T}}{n}$ and $c_0\beta_1\sqrt{T}$ to be small positive real numbers. In Corollary 3.3, we assumed $T\leq O(n^2)$ and $T\leq O(1/\beta_1^2)$, so we can choose $c_0$ to be a proper (small) positive constant so that the three inequalities hold. ### W4: Regarding line 509 The definition of the generalization gap is the test loss minus the training loss, $\epsilon_{gen}=L_{rob}(W_t)-\hat{L}_{rob}(W_t;S)$, so we can get line 509 by subtracting $E_{S\sim\mathcal{D}^{n}}\hat{L}_{rob}(W_t;S)$ from both sides of Eq (4); the left-hand side then becomes the generalization gap. ### Q1: Given $\epsilon>0$, how should we set the parameters of the problem to get generalization error and robust accuracy smaller than $\epsilon$? The relationship is a bit involved since our bounds are post-hoc and not a-priori. In other words, our bound is instance-specific, and depends on the given training data and the output of the algorithm. This is in stark contrast to bounds that are based on uniform convergence and hold simultaneously for all hypotheses in the class – there you can ask how many samples or iterations you need to ensure suboptimality. However, note that uniform convergence bounds can be overly pessimistic/vacuous. Anyhow, even though there is not a simple answer, below we try to give an intuitive understanding. Let’s say you want to bound the expected robust test loss by $(1+\epsilon)$ times the expected training loss (see Lemma 4.2 relating the two quantities). Then, to ensure $\frac{1}{1-C_x\cdot \alpha_1}\leq 1+\epsilon$, we need $\alpha_1\leq\frac{\epsilon}{C_x(1+\epsilon)}=O(\epsilon)$.
We can give two possible ways of setting the different parameters to ensure that $\alpha_1 = O(\epsilon)$. Recall that $\alpha_1=O(\eta\sqrt{T}+\frac{\eta T}{n}+\sqrt{\beta_1\eta T})$. Set $\beta_1 = O(\epsilon^2)$, $n=\Theta(1/\epsilon)$, $T=\Theta(1/\epsilon^2)$, $\eta=O(\frac{1}{T})$, or set $\beta_1 = O(\epsilon^3)$, $n=\Theta(1/\epsilon^2)$, $T=\Theta(1/\epsilon^4)$, $\eta=O(\frac{\epsilon}{\sqrt{T}})$. Check that $\eta\sqrt{T}=O(\epsilon)$, $\frac{\eta T}{n}=O(\epsilon)$ and $\sqrt{\beta_1\eta T}=O(\epsilon)$. ### Q2: What happens when m is chosen independently of T We can choose $m \geq \Theta(n^2)$, and then set $\eta T\leq O(n)$. Note though that it seems superfluous as there could be other regimes that could still give us the condition we need. This is why prior works (e.g., [r4, r5]) also state the assumption as we do. This also makes sense as our generalization bounds are based on algorithmic stability which depends on parameters $\eta$ and $T$. If we were using tools from uniform convergence, then m would naturally depend on the sample size and input dimension d. [r1] Du, Simon S., et al. "Gradient descent provably optimizes over-parameterized neural networks." arXiv preprint arXiv:1810.02054 (2018). [r2] Arora, Sanjeev, et al. "Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks." International Conference on Machine Learning. PMLR, 2019. [r3] Ji, Ziwei, et al. "Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow relu networks." arXiv preprint arXiv:1909.12292 (2019). [r4] Richards, Dominic, et al. "Stability & generalisation of gradient descent for shallow neural networks without the neural tangent kernel." Advances in neural information processing systems 34 (2021): 8609-8621. [r5] Lei, Yunwen, et al. "Stability and generalization analysis of gradient methods for shallow neural networks." 
Advances in Neural Information Processing Systems 35 (2022): 38557-38570. --- Rebuttal 2: Comment: I thank the authors for their response. Other reviewers (including myself) found it strange that the overparameterization depends on the learning rate and number of iterations. However, after reading the authors' response I agree that this dependence is indeed the right one for stability analysis of generalization bounds, contrary to uniform convergence analysis. To conclude, my questions are answered, and I will raise my score to 7. I think the authors should add in the final version several clarifications that arose from the reviewers' questions. I also think it would strengthen the paper to include the proof for $\alpha_1 > 1$.
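The parameter regimes proposed in the rebuttal above can be sanity-checked numerically. The sketch below (with all big-O constants dropped for illustration) plugs the first suggested regime — $\beta_1 = \epsilon^2$, $n = 1/\epsilon$, $T = 1/\epsilon^2$, $\eta = 1/T$ — into $\alpha_1 = O(\eta\sqrt{T} + \frac{\eta T}{n} + \sqrt{\beta_1\eta T})$ and confirms each of the three terms equals $\epsilon$:

```python
import math

def alpha1(eta, T, n, beta1):
    # alpha_1 = O(eta*sqrt(T) + eta*T/n + sqrt(beta1*eta*T)); constants dropped
    return eta * math.sqrt(T) + eta * T / n + math.sqrt(beta1 * eta * T)

def alpha1_at(eps):
    # first regime from the rebuttal: beta_1 = eps^2, n = 1/eps,
    # T = 1/eps^2, eta = 1/T (constants again dropped)
    beta1 = eps ** 2
    n = 1.0 / eps
    T = 1.0 / eps ** 2
    eta = 1.0 / T
    return alpha1(eta, T, n, beta1)

# each of the three terms equals eps, so alpha_1 = 3*eps = O(eps) here
print(alpha1_at(0.1))
```

With these substitutions $\eta\sqrt{T} = \frac{\eta T}{n} = \sqrt{\beta_1\eta T} = \epsilon$, matching the "check that" step in the rebuttal.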
Summary: Adversarial training is a popular method to train models that enhance robustness to adversarial examples. In recent years, many papers have studied the generalization of adversarial training in various models. In this paper, the authors study the generalization of adversarial training for a special shallow neural network via uniform stability, and they show theoretical results for three different algorithms. Strengths: 1. The authors show theoretical results for a special shallow neural network with logistic loss, which is non-convex and non-smooth. 2. The authors discuss stability and generalization guarantees of three variants of adversarial training. 3. The proofs of this work are clear to read. Weaknesses: 1. The model studied in this work is a special neural network. By fixing the weights of the last layer, it has only one layer of trainable parameters, and it is unclear if the results shown for this special neural network can be generalized to regular neural networks. 2. The authors claim that they use an over-parameterized neural network, while they explain "over-parameterized" as $m \geq O(\eta T)$. It is confusing that $m$ depends on $\eta$ and $T$ but not on the number and dimension of inputs. 3. In Section 4, the bounds shown in the theoretical results depend on $m$ with the same order. I checked the proof of this work and found that these bounds depend on $m$ since the weights of the last layer are initialized from $\\{ \frac{1}{\sqrt{m}}, -\frac{1}{\sqrt{m}} \\}$, which means that the bounds depend on the initialization of the weights of the last layer. If the last layer is initialized with other parameters, especially from $\\{ \sqrt{m}, -\sqrt{m} \\}$, does the result still hold for an over-parameterized neural network? 4. The main techniques used in the proofs depend on the weak convexity of the function and the uniform stability of the algorithm.
Technical Quality: 3 Clarity: 3 Questions for Authors: Is it possible to generalize the results from this special neural network to a regular neural network under mild assumptions, like Theorem 4.7 in [1]? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: This work studies a special neural network, and it is unclear whether the results can be generalized to regular neural networks. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### W1: The model studied in this work is a special neural network. By fixing the weights of the last layer, it has only one layer of trainable parameters, and it is unclear if the results shown for this special neural network can be generalized to regular neural networks. It has been known since the early 90s that training neural networks is computationally hard, even for two-layer networks with three hidden neurons. Modern theory of deep learning avoids those hardness results by considering (a) overparameterized two-layer networks, (b) a certain small-scale randomized initialization, and (c) freezing the top layer and training only the bottom layer. Under this setting, we can bound the training error of GD after T iterations by $1/\sqrt{T}$. These are the state-of-the-art results in the computational theory of deep learning. Yes, it may be too specialized and not close to practice, but this is all we have currently. So, even in the standard setting, it remains unclear if these results can be generalized to “regular” networks. ### W2: The authors claim that they use an over-parameterized neural network, while they explain "over-parameterized" as $m\geq O(\eta T)$. It is confusing that $m$ depends on $\eta$ and $T$ but not on the number and dimension of inputs. We state $m$ in terms of $T$ and the learning rate $\eta$ as our generalization bounds are based on algorithmic stability, which depends on those algorithmic parameters. Similar assumptions have also appeared in related prior works [r1] and [r2]. If we were using tools from uniform convergence, then m would naturally depend on the sample size and input dimension $d$. Note that we could still write the condition $m \geq O(\eta^2 T^2)$ as $m \geq \Theta(n^2)$ and $\eta T = O(n)$, but that would seem superfluous as there could be other regimes that could still give us the condition we need. [r1] Richards, Dominic, and Ilja Kuzborskij.
"Stability & generalisation of gradient descent for shallow neural networks without the neural tangent kernel." Advances in neural information processing systems 34 (2021): 8609-8621. [r2] Lei, Yunwen, Rong Jin, and Yiming Ying. "Stability and generalization analysis of gradient methods for shallow neural networks." Advances in Neural Information Processing Systems 35 (2022): 38557-38570. ### W3: In Section 4, the bounds shown in the theoretical results depend on $m$ with the same order. I checked the proof of this work and found that these bounds depend on $m$ since the weights of the last layer are initialized from $\\{\frac{1}{\sqrt{m}},-\frac{1}{\sqrt{m}}\\}$, which means that the bounds depend on the initialization of the weights of the last layer. If the last layer is initialized with other parameters, especially from $\\{{\sqrt{m}},-{\sqrt{m}}\\}$, does the result still hold for an over-parameterized neural network? No. The scale of the initialization is important. As we state above, this is the case with the dominant framework for a computational theory of deep learning even in the standard setting. Such initialization has also appeared in prior works, e.g., [r3, r4, r5]. In particular, we need the initialization to be at that scale for our Lemma 4.1 to hold. [r3] Du, Simon S., et al. "Gradient descent provably optimizes over-parameterized neural networks." arXiv preprint arXiv:1810.02054 (2018). [r4] Arora, Sanjeev, et al. "Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks." International Conference on Machine Learning. PMLR, 2019. [r5] Ji, Ziwei, and Matus Telgarsky. "Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow relu networks." arXiv preprint arXiv:1909.12292 (2019). ### W4: The main techniques used in the proofs depend on the weak convexity of the function and the uniform stability of the algorithm.
It is suggested to show the formal definition of weakly convex functions. We will include a formal definition of weakly convex functions. ### Q1: Is it possible to generalize the results from this special neural network to a regular neural network under mild assumptions, like Theorem 4.7 in [1]? We are not sure what reference [1] is, but see our response above regarding the state of affairs with computational learning theoretic results for training neural networks. --- Rebuttal Comment 1.1: Comment: Thanks for your response. My main concern is regarding the setting of an over-parameterized neural network. After reading the authors' response and the comments from other reviewers, my concerns have been addressed. I will raise my rating to a "Weak Accept" accordingly.
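The model class discussed in the rebuttal above — a two-layer network with the top layer frozen at $\pm\frac{1}{\sqrt{m}}$ and only the bottom layer trained — can be sketched in a few lines. This is an illustrative sketch, not the paper's exact setup: the first-layer distribution below (standard Gaussian) is an assumption for illustration, and `relu` is used in place of whatever activation the paper employs.

```python
import numpy as np

def init_two_layer(m, d, rng):
    # First-layer weights W are the trainable parameters; a standard Gaussian
    # init is assumed here purely for illustration.
    W = rng.standard_normal((m, d))
    # Second layer is frozen at +/- 1/sqrt(m): the small-scale initialization
    # whose importance the rebuttal emphasizes (needed for their Lemma 4.1).
    a = rng.choice([1.0, -1.0], size=m) / np.sqrt(m)
    return W, a

def forward(W, a, x):
    # f(x) = sum_j a_j * relu(w_j . x); during training only W receives gradients
    return float(a @ np.maximum(W @ x, 0.0))
```

Rescaling the frozen layer to $\pm\sqrt{m}$ (the reviewer's hypothetical) would blow up the output scale by a factor of $m$, which is why the rebuttal says the analysis breaks under such an initialization.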
Summary: This work studies the stability of adversarial training in two-layer neural networks in binary classification problems. The authors study gradient-descent-based adversarial training, with nearly-optimal adversarial perturbations, in two-layer neural networks with a frozen second layer, and they obtain guarantees on the stability of the algorithm and its robust generalization. Furthermore, the paper considers a smoothened empirical robust loss and obtains guarantees for this case too. The proofs of the main results are deferred to the Appendix, yet a proof sketch is presented in Section 4 for the case of (exactly) optimal adversarial perturbations during training. Strengths: The paper is well written and makes it easy to follow its contributions and contextualize them with respect to prior work. Furthermore, the results are cleanly organized and presented. The main result is a stability bound on GD-adversarial training, which overcomes the non-convexity and non-smoothness of the objective. I particularly enjoyed reading the proof sketch section and, in particular, Lemma 4.1, which makes it clear how the analysis can sidestep the aforementioned challenges. I consider this work to be a valuable contribution to the field of theoretical robust (deep) learning. Weaknesses: The theoretical results hold for a special class of networks (two-layer with smooth activations and frozen 2nd layer weights) and do not appear to be particularly surprising or useful for inspiring practical ideas. This is the reason why I do not recommend a higher score. Perhaps the authors could discuss some of their assumptions in Section 5; for instance, they could comment on the assumption of eq. (1). Should we consider $\beta_1$ to be constant during training? Do we expect it to be smaller or larger as training progresses? Could it be empirically estimated? Similarly, the authors could comment on the feasibility of adversarial training with the smoothened loss. 
Technical Quality: 4 Clarity: 3 Questions for Authors: I have a few questions/suggestions: * Lines 9 and 10 in the abstract: If I understand correctly, there should be something added along the lines of “provided there exists a robust network around the initialization.” * Lines 193-194, “This view is also consistent with several empirical studies.”: can you please elaborate on this? * Theorem 3.2, for the result on the generalization gap you should explicitly state the loss function, right? (since eq. (3) is defined with respect to a generic function $f$). Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 2 Limitations: The limitations of the work are thoroughly discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### W1: Detached from practice? Ours is a theoretical result, and theory always lags practice and rarely matches it perfectly. Having said that, we would like to remind the reviewer that it has been known since the early 90s that training neural networks is computationally hard, even for two-layer networks with three hidden neurons. Modern theory of deep learning avoids those hardness results by considering (a) overparameterized two-layer networks, (b) a certain small-scale randomized initialization, and (c) freezing the top layer and training only the bottom layer. Under this setting, we can bound the training error of GD after T iterations by $1/\sqrt{T}$. These are the state-of-the-art results in the computational theory of deep learning. It is indeed too specialized and not close to practice, but this is all we have currently. ### W2: Feasibility of adversarial training with the smoothed loss. Yes, it is very feasible and practical and has been shown to be successful in prior work. [Xiao 2024] discusses at length the feasibility of adversarial training with a smoothed loss and presents extensive empirical results. [Xiao 2024] "Uniformly Stable Algorithms for Adversarial Training and Beyond." arXiv preprint arXiv:2405.01817 (2024). ### W3: How to think about $\beta_1$? We should not think of $\beta_1$ as a constant. It is a parameter that you as an algorithm designer can choose. You can make it arbitrarily small by adding more computation. Large $\beta_1$ means that the quality of simulated adversarial attacks is poor, so indeed our bounds suggest that the robust generalization will not be good. This is what should be expected; it is not a weakness of our result. We further expand on our comment above. Recall that adversarial training involves finding an adversarial perturbation of every training example in the training set. Can we solve this optimization problem exactly? If so, then $\beta_1$ is equal to zero.
Most works in the theory of robust learning assume that. We argue that in practice that is not true. So, we allow for approximate optimization for finding an adversarial example during training. We introduce a parameter $\beta_1$ that is a bound on the suboptimality of an adversarial example. Note that this is a design parameter that a practitioner can choose. If you choose a large $\beta_1$, the training is easier but the resulting model is not good. So, of course, one should choose a small $\beta_1$. This involves a computational cost that most previous papers ignore. The only paper that carefully shows how many iterations are needed to find a $\beta_1$-suboptimal attack is that of Mianjy and Arora (2023) – they show that we need $1/\beta_1^2$ iterations of the PGD attack to find a $\beta_1$-suboptimal attack. [Mianjy and Arora 2023] “Robustness Guarantees for Adversarially Trained Neural Networks,” NeurIPS 2023. ### Q1: Lines 9 and 10 in the abstract: If I understand correctly, there should be something added along the lines of “provided there exists a robust network around the initialization.” While we give a bound on robust generalization in terms of distance from initialization, it does not mean that it is a necessary and the only condition for robust generalization. In fact, we can also give a robust generalization bound that does not depend on the distance from initialization. This follows using Theorem 3.1 and Lemma 4.2. ### Q2: Lines 193-194, “This view is also consistent with several empirical studies.”: can you please elaborate on this? Early stopping is a standard approach for algorithmic regularization. It has been shown to prevent overfitting in many settings; see for example [r1, r2, r3]. Here, we show that it is able to mitigate robust overfitting as well. [r1] Caruana, Rich, Steve Lawrence, and C. Giles. "Overfitting in neural nets: Backpropagation, conjugate gradient, and early stopping." Advances in neural information processing systems 13 (2000).
[r2] Rice, Leslie, Eric Wong, and Zico Kolter. "Overfitting in adversarially robust deep learning." International conference on machine learning. PMLR, 2020. [r3] Pang, Tianyu, et al. "Bag of tricks for adversarial training." arXiv preprint arXiv:2010.00467 (2020). ### Q3: Theorem 3.2, for the result on the generalization gap you should explicitly state the loss function, right? (since eq. (3) is defined with respect to a generic function $f$). Yes. We will state the loss function explicitly. --- Rebuttal Comment 1.1: Comment: I apologise for my delayed response. Thank you very much for your reply and the references provided (especially [Mianjy and Arora 2023]). I would advise you to reference this paper when introducing $\beta_1$, and to discuss its computational considerations.
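The $\beta_1$-suboptimal inner maximization discussed in the rebuttal above is typically simulated with projected gradient ascent. Below is a generic, hedged sketch of such a PGD-style attack over an $\ell_\infty$ ball; `loss_grad` is a hypothetical callable (the paper's actual loss and threat model are not reproduced here), and the step count stands in for the $\approx 1/\beta_1^2$ iterations that Mianjy and Arora (2023) show suffice for a $\beta_1$-suboptimal perturbation.

```python
import numpy as np

def pgd_attack(loss_grad, x, radius, steps, lr):
    # Projected gradient ascent on the loss within an l_inf ball of the given
    # radius around x. More steps -> smaller suboptimality beta_1, at a
    # computational cost that grows roughly like 1/beta_1^2.
    delta = np.zeros_like(x, dtype=float)
    for _ in range(steps):
        g = loss_grad(x + delta)
        delta = delta + lr * np.sign(g)            # signed ascent step
        delta = np.clip(delta, -radius, radius)    # project back onto the ball
    return x + delta
```

For a loss whose gradient never changes sign, the perturbation simply saturates at the ball's boundary, which matches the intuition that the exact inner maximizer ($\beta_1 = 0$) lies on the boundary for monotone losses.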
Summary: This paper uses uniform stability to analyze adversarial training on wide shallow networks when the adversarial perturbations are $\beta_1$-optimal. Assuming there exists a robust network near initialization, in expectation the best network iterate has test loss that scales with $1/\sqrt{T}$. The results for GD are extended to SGD and when using Moreau's envelope. Strengths: - The bounds are tied to how strong the adversarial training attack is, providing an approximation to practical attacks while allowing for tractability in the theoretical analysis. - No assumptions are made on the data distribution, and the adversarial attack threat model is very general. Weaknesses: - The bounds are in expectation and not high probability bounds. - A large width is required for the bounds to hold. - The $\beta_1$ parameter is set in the worst-case, and as a result for practical attack algorithms is likely to be high. In fact, if $\beta_1 > 0$ is constant, then it appears Corollary 3.3 is vacuous. Technical Quality: 3 Clarity: 3 Questions for Authors: - How realistic is the assumption that there exists a robust network in the vicinity of the initialization? - Should I be thinking of $beta_1$ as a constant? If so, how should Corollary 3.3 be interpreted? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### W1: The bounds are in expectation and not high probability bounds. We can give high probability bounds based on [r1], which relates high probability generalization bounds with algorithmic stability, but they are looser than the bounds in expectation. We had those originally in the paper but chose to remove them as they are not very informative. ### W2: A large width is required for the bounds to hold. Training neural networks (i.e., implementing the ERM rule) is known to be computationally hard, even for a two-layer network with only three hidden nodes. This has been known since the 90s. Recent results in deep learning theory focus on the setting where networks are greatly overparameterized. Indeed, all results giving computational guarantees for learning deep neural networks assume that the networks are overparameterized. So, it is not surprising that we need a similar condition in the adversarially robust setting. The assumption on width, $m\geq \eta^2 T^2$, has also appeared in related prior works [r2] and [r3]. Also, note that the way we present the results, we can also rewrite the condition above as $\eta T \leq O(\sqrt{m})$ and interpret it as early stopping. ### W3: The $\beta_1$ parameter is set in the worst case, and as a result for practical attack algorithms it is likely to be high. In fact, if $\beta_1>0$ is constant, then it appears Corollary 3.3 is vacuous. [TLDR] $\beta_1$ is not a constant; it is a parameter you can choose. You can make it arbitrarily small by adding more computation. Large $\beta_1$ means that the quality of simulated adversarial attacks is poor, so indeed our bounds suggest that the robust generalization will not be good. This is what should be expected; it is not a weakness of our result. Recall that adversarial training involves finding an adversarial perturbation of every training example in the training set. Can we solve this optimization problem exactly? If so, then $\beta_1$ is equal to zero.
Most works in the theory of robust learning assume that. We argue that in practice that is not true. So, we allow for approximate optimization for finding an adversarial example during training. We introduce a parameter $\beta_1$ that is a bound on the suboptimality of an adversarial example. Note that this is a design parameter that a practitioner can choose. If you choose a large $\beta_1$, the training is easier but the resulting model is not good. So, of course, one should choose a small $\beta_1$. This involves a computational cost that most previous papers ignore. The only paper that carefully shows how many iterations are needed to find a $\beta_1$-suboptimal attack is that of Mianjy and Arora (2023) – they show that we need $1/\beta_1^2$ iterations of the PGD attack to find a $\beta_1$-suboptimal attack. [Mianjy and Arora, 2023] “Robustness Guarantees for Adversarially Trained Neural Networks,” NeurIPS 2023. ### Q1: How realistic is the assumption that there exists a robust network in the vicinity of the initialization? [TLDR] Existence of a network that generalizes well in the vicinity of a certain way of initialization is a high-dimensional phenomenon that is central to the NTK setting, a dominant framework for the computational learning theory of deep neural networks. Nonetheless, our results also yield guarantees that do not depend on the distance from initialization. Most computational learning results for deep neural networks assume overparametrization and a certain initialization that ensures that the weights of the trained network do not change by much from initialization. Yet, these trained networks in the so-called lazy regime or the NTK setting are guaranteed to generalize. So, assuming the existence of a network that generalizes well in the vicinity of initialization (chosen at the right scale) is a high-dimensional phenomenon.
We argue that if the conditions of our theorems are met (i.e., we consider over-parametrized settings), then such an assumption is not unrealistic. It is the same as saying “how realistic is it that a unit ball has almost no volume and all of its mass is concentrated near the boundary?” Well, it happens in high dimensions. While we give a bound on robust generalization in terms of distance from initialization, it does not mean that it is a necessary and the only condition for robust generalization. In fact, we can also give a robust generalization bound that does not depend on the distance from initialization. This follows using Theorem 3.1 and Lemma 4.2. ### Q2: Should I be thinking of $\beta_1$ as a constant? If so, how should Corollary 3.3 be interpreted? No, $\beta_1$ should not be treated as a constant; it is a design parameter you can choose. Please see the detailed discussion above. [r1] Feldman, Vitaly, and Jan Vondrak. "High probability generalization bounds for uniformly stable algorithms with nearly optimal rate." Conference on Learning Theory. PMLR, 2019. [r2] Richards, Dominic, and Ilja Kuzborskij. "Stability & generalisation of gradient descent for shallow neural networks without the neural tangent kernel." Advances in neural information processing systems 34 (2021): 8609-8621. [r3] Lei, Yunwen, Rong Jin, and Yiming Ying. "Stability and generalization analysis of gradient methods for shallow neural networks." Advances in Neural Information Processing Systems 35 (2022): 38557-38570. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my concerns, especially regarding $\beta_1$. I have raised my score to a 6. I wonder if requiring the $\beta_1$-optimal condition to hold everywhere can be relaxed (for example, to an averaged case), for networks where guaranteeing nearly optimal adversarial attacks is more difficult.
NeurIPS_2024_submissions_huggingface
2024
Optimization Algorithm Design via Electric Circuits
Accept (spotlight)
Summary: This paper addresses the design of new optimization algorithms (centralized and distributed) which lend themselves better to theoretical analysis than existing optimization algorithms, which are tailored more towards establishing fast worst-case convergence guarantees. The novel part is that the authors borrow analogies from electric circuits, specifically RLC circuits (which have components consisting of resistors, inductors, and capacitors). Strengths: Noticing that iterations of a (small iteration, or "continuous-time") optimization algorithm tend to mimic the behavior of RLC circuits with nonlinear resistor components is interesting. The authors also provide a methodology for converting convergent continuous-time dynamics into discrete-time. RLC circuits have a rich history and body of literature and these findings can thus be extended to the realm of optimization algorithm design with provable convergence, which, outside of it just being interesting to apply RLC concepts, is also quite useful for convergence analysis of optimization algorithms. Weaknesses: The authors should specify in the abstract and introduction that the optimization algorithm design in question seems to only work for convex optimization problems, which doesn't seem to be clear until the official definition of (1). This is a significant drawback considering there are significantly more convergence guarantees for convex problems in general versus nonconvex/mixed-integer etc. The provable convergence is definitely of value, but is the speed/rate of convergence an improvement upon standard optimization algorithms? In practice, even if convergence isn't provable, for many practical problems most of the consideration would be towards optimization speed (and convergence would be achieved in practice but maybe not in theory). 
Technical Quality: 3 Clarity: 3 Questions for Authors: Do the equations for voltage across the inductor and current through the capacitor (page 3, line 93) have initial conditions as well? (e.g. initial voltage across the inductor or current through the capacitor). If so, what are the analogous values for this in the optimization formulation? I don't quite understand how the subdifferential operator enforces all of the considered V-I relationships. For example, if the x variables are analogous to voltages and the y variables are analogous to currents, wouldn't there also be relationships where $x \in \partial f(y)$? Why is the 'equilibrium state' also indicative of the voltage across/current through a resistor being zero? Is this definition different from steady state? In Section 3.1, a negative resistor is used. Do all of the circuit laws (Ohm's, KCL, KVL) also hold for negative resistance? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: There do not seem to be any potential negative societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive review and thoughtful questions. We are glad that the reviewer found our framework to be "quite useful for convergence analysis of optimization algorithms". **W1.** We thank the reviewer for this precise point. In the revised paper, we will clarify (in the abstract and introduction) that our framework currently focuses on developing convergent algorithms for convex problems. Extending the framework to cover more general nonconvex problems, including mixed-integer optimization problems, is a long-term goal for this project. **W2.** We agree with the reviewer that obtaining fast/efficient algorithms (rather than merely convergent algorithms) is the ultimate goal of optimization algorithm design. Since this present work represents the very first steps in this framework, we focus on laying the basic foundations and introducing the computer-assisted methodology. Utilizing and extending this machinery to design fast/efficient algorithms is the very next step of our planned agenda. **Q1.** Yes, they do. The initial conditions, stated in line 128, require that $ v_L^0 $ and $ i_C^0 $ should be compatible with the other voltage and current values of the resistors and $\partial f$. Here, 'compatible' means that they should satisfy KCL and KVL. In terms of the optimization formulation, this means the initial values should be compatible with the optimization algorithm. However, $v_L$ and $i_C$ are often eliminated from the optimization algorithm, so the issue doesn't arise in most implementations of the algorithm. Let us clarify this with an example. Consider the gradient flow in Section E.2 of the appendix. For the sake of simplicity, assume $f$ is differentiable and $D_C=I$. If we write out the KCL, KVL, and V-I relations, we have $$ \begin{align*} y &= \nabla f(x) \\\\ i_C &= y \\\\ v_C &= -x \\\\ \frac{d}{dt} v_C &= i_C. \end{align*} $$ The initial condition on $i_C$ means that the value should be compatible with the first 3 lines.
That is, for the initial value $x^0$, the value $i_C^0$ should satisfy $i_C^0=\nabla f(x^0)$. Now let's think of the discrete counterpart. The last line corresponds to the update rule of the algorithm $$ v_C^{k+1} = v_C^{k} + h i_C^k $$ where $h>0$ is the step size. Here, we obtain the algorithm by eliminating $v_C, i_C, y$, by solving the system of equations from the first 3 lines. Substituting $v_C^k = -x^k$, $i_C^k = y^k = \nabla f(x^k)$ we get $$ x^{k+1} = x^{k} - h \nabla f(x^k) .$$ Thus, $i_C$ has been eliminated, and the constraint $i_C^0=\nabla f(x^0)$ does not explicitly manifest in the discrete-time algorithm. **Q2.** The subdifferential operator enforces the V-I relationship specified by $\partial f$. The specific case mentioned by the reviewer can be enforced by considering the conjugate function $f^*$. Recall that if $f$ is closed, convex, and proper, then Fenchel's identity tells us $(\partial f)^{-1} = \partial f^*$, thus $$ x \in \partial f(y) \iff y \in (\partial f)^{-1}(x) \iff y \in \partial f^*(x). $$ **Q3.** We believe the reviewer's notion of 'steady state' is the same as our 'equilibrium state'. We define the 'equilibrium state' as the condition where all the circuit states are constant. This happens when $v_L=0$ and $i_C=0$, as discussed in line 968 of the appendix. In equilibrium, for an admissible dynamic interconnect, the current through the resistors must be zero, as otherwise the resistors would dissipate energy, and a circuit with strictly decreasing energy cannot be in equilibrium. **Q4.** Yes, all circuit laws also hold for negative resistance. The only unusual aspect of negative resistors is that they *generate*, rather than dissipate, energy. However, this is not a problem since, in the equivalent circuit shown on the right side of the figure in Section 3.1, the negative resistor cancels out with the positive resistor and energy dissipation $\frac{d}{dt} \mathcal{E}(t)\le0$ is still guaranteed. (So, the negative resistor is merely a conceptual tool.)
We will further clarify this point in the revised paper. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I appreciate the authors' responses to my questions and comments and the examples provided to illustrate that the circuit laws still apply in the cases I mentioned. I do agree that this provides a good step towards speeding up optimization convergence, and is in general, an interesting finding that should be shared with the community. The paper should definitely be published somewhere, I just think for NeurIPS it's a bit borderline in terms of impact. Thus, I keep my 6/Weak Accept score.
Summary: This paper presents a methodology of designing an electric circuit whose continuous-time dynamics converge to the solution to its corresponding optimization problem (Theorem 2.2). Furthermore, the paper presents the discretization scheme of the continuous-time dynamics, generating convergent optimization algorithms. Strengths: This paper presents a unified framework of implementing various optimization problems into an electric circuit with static and dynamic interconnections. The paper also presents the scheme of generating new optimization algorithms by discretizing the analog circuit equation. Weaknesses: 1) The paper claims two novelties, one of which is interpreting the optimization problem as an RLC circuit. However, the reviewer disagrees with this novelty. The idea of implementing an optimization problem in an RLC analog circuit is classical and has been studied for a long time. More surveys and comparisons with other works are necessary. Chua, Leon, and Gui-Nian Lin. "Nonlinear programming without computation." IEEE Transactions on Circuits and Systems 31.2 (1984): 182-188. Wilson, G. "Quadratic programming analogs." IEEE transactions on circuits and systems 33.9 (1986): 907-911. Vichik, Sergey, and Francesco Borrelli. "Solving linear and quadratic programs with an analog circuit." Computers & Chemical Engineering 70 (2014): 160-171. 2) Theorem 2.2, the main theorem of this paper, seems to be a special version of the Willems stability condition. Although the authors refer to related works like [75], they should make more surveys and clarify the connection between the theorem presented in this paper and the original works on dissipativity theory by J. C. Willems. Willems, Jan C. "Dissipative dynamical systems part I: General theory." Archive for rational mechanics and analysis 45.5 (1972): 321-351. Technical Quality: 2 Clarity: 3 Questions for Authors: 1) The authors use the RLC circuit as an intermediate product to derive the optimization solver. 
As an alternative use, it might be possible to use the real-world RLC circuit directly as an optimization solver. Do the authors have any comments or perspectives on such a development? 2) As stated in Lemma 4.1, the convergence of the optimization algorithm is shown by performing a proper discretization that ensures dissipativity. Would it be possible to provide specific examples of such "proper" discretization in the main body of the paper? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: This paper focuses on the theoretical analysis and interpretation of optimization problems, and it does not present its negative social impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive and constructive feedback. We are glad that the reviewer found our "unified framework of implementing various optimization problems into an electric circuit" as a strength of the paper. **W1.** Thank you for bringing these references to our attention. We will include them and other related work we find from reading these references in the revised paper. Indeed, the reviewer is correct in that the relation between the optimization problem and V-I relations has been studied for many decades. We discuss this in more detail in section A of the appendix, lines 670–679. However, our intention was to claim novelty in using this correspondence to design optimization *algorithms* (as opposed to physical circuits), and, in our view, our approach is made complete with the PEP-based automated discretization. However, we now see that this distinction was not made sufficiently clear, so in the revised manuscript, we will rewrite the claims of novelty to make clear that the connection between optimization algorithms and electric circuits has been studied previously. (Also, we doubly thank the reviewer for sharing the Vichik–Borrelli 2014 reference. We just found that this paper also uses the notion of negative resistors. Since we also use negative resistors, we are happy to find a prior reference to this notion. We will include this discussion in the revised paper as well.) **W2.** Thank you for directing us to this reference. In the revised paper, we will adequately reference prior work on dissipative dynamical systems. From our reading of the reference you provide and the work [1], it seems like our analysis is indeed quite related to the Willems stability condition, but there were some minor differences. While the setups are related to ours, the fact that $v_C, i_L$ and $i_C, v_L$ may not converge in our setup makes the prior results by Willems not immediately applicable. 
In our setup, we allow cases where $v_C, i_L$ oscillate (for example, a circuit with a disconnected $L-C$ loop). Nevertheless, it seems that the energy functions and the analyses presented by Willems are closely related to our result, so we will adequately discuss this connection in the revised paper. [1] Willems, J. C. The generation of Lyapunov functions for input-output stable systems. *SIAM Journal on Control*, 9(1):105–134, 1971. **Q1.** We thank the reviewer for this suggestion. This is indeed an interesting direction with some challenges one should overcome. One crucial challenge would be implementing the nonlinear resistor $\partial f$ (for any convex $f\colon \mathbf{R}^m \to \mathbf{R} \cup \\{ \infty \\}$) in a real-world circuit. In [1], the authors explicitly model the constraints and objective functions of an LP and QP using analog circuits, but doing so for general convex problems will likely be more challenging. Another issue is that given the extreme efficiency of modern digital circuits (CPUs and GPUs), which implement standard optimization algorithms, making (analog) RLC circuits have some sort of advantage would likely be a significant challenge. Nevertheless, we do think this is an interesting direction worth pursuing. [1] Vichik, S., and Borrelli, F. Solving linear and quadratic programs with an analog circuit. *Computers & Chemical Engineering*, 70:160–171, 2014. **Q2.** Yes, we can. The **Example** on line 232 is an example of a "proper" discretization. Also, the algorithm DADMM+C (fully presented in line 1404 of the appendix) in the Experiments section is another example. We also provide a proof that this discretization is indeed proper, as Lemma I.1 in line 1423. We will include this discussion in the main body of the revised paper. However, we point out that our framework provides an automatic tool to find proper discretizations. 
Specifically, our PEP-based framework finds an appropriate step size that makes the discretization proper. --- Rebuttal Comment 1.1: Comment: Thank you for your response. The reviewer's concerns have been addressed and appropriately reflected in the revised manuscript.
Summary: This paper proposes the use of RLC circuits to design optimization algorithms. The authors prove that circuit dynamics in continuous time converge to the solution of the optimization problem. By specifically designing the RLC components, this approach recovers many existing algorithms. The authors also introduce a PEP-based method to discretize continuous-time circuit dynamics and prove its convergence. Experiments demonstrate the effectiveness of their methods. Strengths: 1. The authors use RLC circuits to design new optimization algorithms, offering a novel and interesting perspective on algorithm design. 2. The authors provide the convergence of circuit dynamics in continuous time and the convergence of the algorithm in discrete time. 3. The authors provide an automatic discretization package, and experimental results show that the proposed method achieves fast convergence. Weaknesses: 1. Compared to classical optimization algorithms, discretization requires solving the performance estimation problem, which needs extra computation. 2. The convergence rate of the RLC-based algorithm is not discussed in the paper. Technical Quality: 3 Clarity: 2 Questions for Authors: Significant issues 1. Could you please provide more applications and examples for Problem (1) in Line 25? 2. In Line 104, could you please explain in detail the $\partial f$ electric device and why $\partial f$ enforces $y\in\partial f(x)$? 3. In Line 139, the authors use the energy definition in (6) instead of the total energy of the circuit, i.e., the sum of the energy of all components in the circuit. Could you please further explain the reason for this choice? 4. Can the convergence rate of the proposed algorithm be analyzed? 5. When solving Problem (9) in Line 217, is it necessary to know $v^\star$ and $i^\star$ in advance? 6. When designing an algorithm, how should the parameters of the RLC components be chosen? 
For example, in the gradient descent method, how should $D_{\mathcal{C}}$ be selected? Minor issues 1. In Line 79, should nodes $1,...,m$ be $1,...,\tau-1$? 2. In Line 1222, should it be $y^k \in \partial f(x^k)$? Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: N.A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the constructive comments. We are pleased that you found our framework to be "a novel and interesting perspective on algorithm design." Additionally, we're glad that you recognized our automatic discretization package as a strength of our work. **W1.** To clarify, the role of the Performance Estimation Problem (PEP) in our framework is to automate the calculations and analysis needed to obtain discretizations. So, it does not produce extra computation (it does not make the algorithm more expensive); rather, it eliminates the need for humans to carry out the convergence proof of the discretized algorithm. As mentioned on line 628 of the prior works section, continuous-time-based optimization design is a powerful approach for designing new algorithms, but a shortcoming of this prior work has been that there was no principled way of performing discretization. Our PEP-based automated discretization overcomes this shortcoming. We believe that our PEP-based automated discretization is not a weakness but rather a substantive contribution that completes the methodology of continuous-time optimization algorithm design. **W2.** We agree with the reviewer that obtaining fast/efficient algorithms (rather than merely convergent algorithms) is the ultimate goal of optimization algorithm design. Since the present work represents the very first steps in this framework, we focus on laying the basic foundations and introducing the computer-assisted methodology. Utilizing and extending this machinery to design fast/efficient algorithms is the very next step of our planned agenda. However, we would like to point out that it is possible to extract an $O(1/k)$ convergence rate from the analytic convergence proof of Lemma I.1 on line 1423 of the appendix, and a similar argument can be made more generally. We further elaborate on this point in our response to the reviewer's Question 4. 
### **Questions: Significant issues** **Q1.** This form includes many problems that arise in distributed and decentralized optimization. For example, Sections E and F of the appendix discuss how we recover many classical centralized and decentralized optimization methods for their respective problems. For more details on the setup for each of these classical methods, see, e.g., Chapters 2, 3, and 11 of [1], and paper [2]. These optimization problems and their solution methods have a wide range of applications, for example, wireless sensory networks [3], federated learning [4], power grids, and many more [5]. We thank the reviewer for raising this point. We agree that it is important to at least briefly discuss the motivating applications of Problem (1), so we will include this discussion in the updated version of the paper. [1] Ryu, E. K., and Yin, W. *Large-scale Convex Optimization: Algorithms & Analyses via Monotone Operators.* Cambridge University Press, 2022. [2] Boyd, S., Parikh, N., Chu, E., Peleato, B., and Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. *Foundations and Trends® in Machine Learning*, 3(1):1–122, 2011. [3] Mota, J. F., Xavier, J. M., Aguiar, P. M., and Püschel, M. D-ADMM: A communication-efficient distributed algorithm for separable optimization. *IEEE Transactions on Signal Processing*, 61(10):2718–2723, 2013. [4] Zhou, S., and Li, G. Y. Federated learning via inexact ADMM. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 45(8):9699–9708, 2023. [5] Yang, T., Yi, X., Wu, J., Yuan, Y., Wu, D., Meng, Z., ..., and Johansson, K. H. A survey of distributed optimization. *Annual Reviews in Control*, 47:278–305, 2019. **Q2.** To start with the simplest example, consider the case $m=1$ and $f(x) = \frac{1}{2R} x^2$ with some $R>0$. Since $f$ is differentiable $\partial f=\nabla f$, and the V-I relation becomes $y=\nabla f(x)=\frac{1}{R}x$, i.e., $x=Ry$. 
Recalling that $x$ is the potential and $y$ is the current, the electric device $\partial f$ becomes the usual linear resistor with resistance $R$. As another example, an ideal diode is a device that blocks current flow if the voltage is less than $0$ (reverse biased) and allows current flow with no voltage drop otherwise (forward biased). Such an ideal diode can be modeled with a convex function $$ f(x) = \begin{cases} 0 & \text{if } x \leq 0 \\\\ \infty & \text{otherwise}. \end{cases} $$ So the V-I relation given by $\partial f$ has a vertical line at $x=0$, i.e., $$ \partial f(x) = \begin{cases} 0 & \text{if } x < 0 \\\\ [0, \infty) & \text{if } x = 0 \\\\ \emptyset & \text{if } x > 0. \end{cases} $$ Thus the electric device $\partial f$ becomes the ideal diode, as no current flows through the device when $x<0$, and current flows with zero resistance (infinite slope) otherwise. The case $x>0$ does not occur because there is no voltage drop in forward bias. Using set-valued functions to describe V-I curves of circuit elements is a standard approach in circuit theory. So please think of $\partial f$ as a non-linear resistor generalizing ordinary resistors and diodes. **Q3.** We clarify that the energy in (6) is indeed the total energy in the circuit since resistors and $\partial f$ do not store energy. One slightly non-standard aspect is that the energy in the capacitors and inductors is computed after translating by $v\_C^\star$ and $i\_L^\star$. This is necessary because $\partial f$ is a non-linear resistor that is incrementally passive (instead of being simply passive). Therefore, shifting by the equilibrium values $v\_C^\star$ and $i\_L^\star$ leads to a dissipative energy. (If $\partial f$ is simply passive, then the circuit's equilibrium is at $x=y=0$, but the incrementally passive $\partial f$ allows the circuit to relax to the (non-zero) solution to the optimization problem.) 
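To make the Q2 examples concrete, here is a small illustrative sketch (our own; the function names are hypothetical) of the two V-I relations as set-valued maps:

```python
# Set-valued V-I relations y ∈ ∂f(x) for the two example devices above.
# x is the voltage across the device, y the current through it.

def resistor_subdiff(x, R=2.0):
    """f(x) = x^2 / (2R)  =>  ∂f(x) = {x / R}: an ordinary linear resistor."""
    return [x / R]

def diode_subdiff(x):
    """f(x) = 0 for x <= 0, +inf otherwise  =>  an ideal diode."""
    if x < 0:
        return [0.0]                     # reverse biased: no current flows
    if x == 0:
        return [(0.0, float("inf"))]     # the interval [0, inf): any nonnegative
                                         # current, with zero voltage drop
    return []                            # empty subdifferential: x > 0 never occurs
```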
--- Rebuttal Comment 1.1: Comment: I thank the authors for addressing all my concerns in detail. The additional theorem on the convergence rate of the discrete algorithm looks solid to me. The theoretical proof for the faster convergence rate of the discrete algorithm is an interesting direction. I am willing to raise the rating to 6. --- Rebuttal 2: Comment: **Q4.** Yes, we can extract an $O(1/k)$ convergence rate from our proven inequalities. We state and prove a more general statement here. The assumptions are the same as in Lemma G.1 (which covers the assumption of Lemma 4.1), and we restate them here for convenience. **Theorem.** Assume $f\colon \mathbf{R}^m \to \mathbf{R} \cup \\{ \infty \\}$ is a strictly convex function and the dynamic interconnect is admissible. Let a discrete-time optimization algorithm generate a sequence $\\{(v^k, i^k, x^k, y^k)\\}^{\infty}\_{k=0}$. Suppose there exists $\eta > 0$ such that for all $k=0, 1, \ldots$ the energy descent condition $$ \mathcal{E}\_{k+1} + \eta \langle x^k - x^\star, y^k - y^\star \rangle - \mathcal{E}\_k \leq 0 $$ holds. Then, for the Lagrangian defined as $L(x, z, y) = f(x) - y^T(x - E^\intercal z)$, we have $$ \min\_{k \in \\{0, 1, \dots, K\\}} \left( L(x^k, z^\star, y^\star) - f(x^\star) \right) \leq \frac{1}{(K+1)\eta} \mathcal{E}\_0 = O\left(\frac{1}{K}\right) $$ for $K=0,1,\dots$. *Proof outline.* Since we are considering the same assumptions as in Lemma G.1, all arguments used in its proof are applicable. From line 1223 we have $$ 0 \leq \sum\_{k=0}^K \eta \langle x^k - x^\star, y^k - y^\star \rangle \leq \mathcal{E}\_0, $$ and from line 1229 we have $$ \langle x^k-x^\star, y^k-y^\star\rangle \geq L(x^k, z^\star, y^\star) - f(x^\star) \geq 0. $$ Combining the two inequalities and dividing both sides by $K+1$, we get the desired conclusion. $\blacksquare$ All algorithms obtained by our framework, including DADMM+C, achieve this convergence rate. 
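As a toy illustration of this theorem (our own, not from the paper), one can instantiate it for plain gradient descent on $f(x)=x^2$, where $y^\star=0$ and the Lagrangian gap reduces to $f(x^k)-f(x^\star)$; the constants below are our choices:

```python
# Toy check of the O(1/K) bound for gradient descent on f(x) = x^2 (L = 2).
# Energy E_k = ||x^k - x*||^2 / 2 with x* = 0, and eta < 1/(2L).
L, eta, h = 2.0, 0.2, 0.7      # h lies inside the dissipative step-size range
x, K = 1.0, 30
E0 = 0.5 * x**2
best_gap = float("inf")
for k in range(K + 1):
    y = L * x                  # y^k = grad f(x^k)
    best_gap = min(best_gap, x**2)   # f(x^k) - f(x*) = (x^k)^2
    x_next = x - h * y         # gradient step
    # energy descent: E_{k+1} + eta * <x^k - x*, y^k - y*> - E_k <= 0
    assert 0.5 * x_next**2 + eta * x * y - 0.5 * x**2 <= 1e-9
    x = x_next
assert best_gap <= E0 / ((K + 1) * eta)   # the O(1/K) bound of the theorem
```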
As mentioned in an earlier part of this response, doing a more refined analysis of convergence rates (establishing, say, linear rates of convergence in an automated fashion) is a follow-up direction of work that we are pursuing. **Q5.** Explicit knowledge of $v^\star$ and $i^\star$ is not needed. We can find the numerical values of $(\alpha,\beta,h)$ without explicit knowledge of the optimal values. For example, consider finding a dissipative discretization for gradient descent. Let $f$ be $L$-smooth. The iterates are given by $x^{k+1} = x^k - h\nabla f(x^k)$. The energy is $\mathcal{E}\_k = \frac{1}{2}\\|x^k-x^\star\\|^2\_2$, where $x^\star$ is such that $y^\star = \nabla f(x^\star) = 0$. Then we want to find a step size $h>0$ and $\eta>0$ such that $$\mathcal{E}\_{k+1}+\eta\langle x^k - x^\star, y^k - y^\star\rangle-\mathcal{E}\_k \leq 0.$$ Using the inequality arising from the $L$-smoothness of $f$, $$ \langle x^k - x^\star, y^k - y^\star\rangle \geq \frac{1}{L} \\|y^k \\|^2\_2, $$ we can check that the discretization is dissipative for all $\eta < \frac{1}{2L}$ and $h \in [\frac{1}{L} \pm \frac{1}{L}\sqrt{1-2\eta L}]$. Thus, we can find proper $\eta$ and $h$ without explicitly specifying $x^\star$, by just representing it as a point satisfying the optimality conditions. Further details on this line of reasoning are available in the PEP papers [1, 2]. [1] Y. Drori and M. Teboulle. Performance of first-order methods for smooth convex minimization: A novel approach. *Mathematical Programming*, 145(1):451–482, 2014. [2] A. B. Taylor, J. M. Hendrickx, and F. Glineur. Smooth strongly convex interpolation and exact worst-case performance of first-order methods. *Mathematical Programming*, 161(1):307–345, 2017. **Q6.** One approach is to let the solver find the RLC values along with the discretization. (We recently implemented this functionality in ciropt.) 
Another approach is to try a few variations of the resistance, inductance, and capacitance values, and see for which of them the solver finds a discretization. ### **Questions: Minor issues** **Q1.** We clarify that this is not a mistake. For the circuit in Figure 2, for example, there are $\tau=8$ nodes, but only $m=5$ of the nodes (nodes $1, 2, \dots, 5$) are connected to the terminals. Likewise, there are nodes that are connected to neither a terminal nor the ground. The potential of those nodes is denoted by $e$. **Q2.** Thank you for pointing out this typo; we will correct it. We appreciate the reviewer’s thoughtful questions. We hope we have adequately addressed the reviewer's concern regarding the extra computation related to the performance estimation problem and the possibility of obtaining convergence rates from our approach. If so, we kindly ask the reviewer to consider raising the score.
Summary: This paper presents a novel framework for designing optimization algorithms with electric RLC circuits. It contains two stages: 1. design an appropriate circuit whose equilibrium is the solution to its corresponding optimization problem. 2. discretize the continuous-time dynamics of the circuit to form a discrete-time algorithm. Theoretical guarantees are given on the convergence of the continuous-time dynamics. Strengths: 1. The idea of the paper is novel. Though prior works have explored the relationship between ODEs and optimization algorithms, using RLC circuits to explain existing algorithms and design new ones seems a novel framework. 2. The authors demonstrate the proposed framework covers a lot of existing algorithms and can be extended to new ones with convergence guarantees, which is practically useful. 3. The paper is comprehensive and overall well-written. Weaknesses: 1. Some parts lack clarity. See *Questions*. 2. The theoretical results in this paper focus on the strongly-convex settings. It is unclear how this can be extended to convex or nonconvex settings, which are more common in real-world problems. 3. It would be better if there were more discussion on the benefit of using this approach to design algorithms compared to the standard approaches. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How to determine whether a circuit is admissible from construction instead of checking the equation on lines 98-99? 2. In line 125, under what "appropriate" conditions? 3. Although modifications on the RLC circuits of existing algorithms can lead to new algorithms, how to ensure these modifications of the RLC circuits still can reach equilibrium? Is there a general recipe? I would like to see more discussion on this. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: There are no separate sections in the main paper. However, the authors state the assumptions for their results. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are happy to hear that the reviewer found our proposed framework "novel" and "practically useful". **W1.** This is further addressed below. **W2.** The assumption of strong convexity is made to prove the well-posedness of the circuit ODE. We clarify that this assumption is not necessary for showing convergence of the discretized methods (Lemma 4.1). In an extension to non-convex setups, we expect the technical well-posedness of the circuit ODE to be somewhat challenging (especially for non-differentiable non-convex functions), but the convergence analysis should not be difficult, at least not more so than non-convex optimization usually is. **W3.** Existing optimization methods are designed either with a worst-case convergence analysis or to have fast empirical performance on some problems. This means that either the methods are too pessimistic and slow in practice or they do not have convergence guarantees. We present a new framework that allows one to quickly design and explore provably convergent algorithms that are well suited to the problem at hand. **Q1.** As line 97 states, a dynamic interconnect is admissible if, at equilibrium, it reduces to the static interconnect. In the language of circuit theory, this can be stated as follows. First, at equilibrium, the dynamic components relax, i.e., capacitors become disconnects ($i_C=0$), and inductors become wires ($v_L=0$). Second, the circuit relations hold, which are expressed with KCL, KVL, and the resistor relations. Third, at terminals $1, \ldots, m$ the V-I relation of the relaxed circuit is the same as that of the static interconnect, i.e., $x \in \mathcal{R}(E^T)$ and $y \in \mathcal{N}(E)$. The admissibility condition (lines 98-99) is a mathematical formalization of this. **Q2.** The conditions are concretely stated in the statement of Theorem 2.1 on line 128. 
**Q3.** In continuous time, the intuition comes from the observation that the convergence of electric circuits to equilibrium hinges on incremental passivity, i.e., no component can generate energy, and the fact that resistors dissipate energy. Since convexity of $f$ leads to incremental passivity of $\nabla f$ (so $f$ also does not generate energy), the circuit will relax to equilibrium. When we discretize the dynamics, however, there is a possibility that the discretized process is no longer dissipative. Therefore, we obtain a formal convergence guarantee of the discretized dynamics through the use of the computer-assisted proof framework called the performance estimation problem (PEP). Using the PEP, we certify that the discretized process is also dissipative and, therefore, will reach equilibrium. This is a very general recipe. --- Rebuttal Comment 1.1: Comment: Thanks for the response. It addressed all my concerns. I would like to keep my current rating.
Rebuttal 1: Rebuttal: # Common Response We thank the reviewers for their thoughtful comments and suggestions. We are pleased that the reviewers generally find our framework novel and valuable. We address the reviewers' specific questions in the individual responses.
NeurIPS_2024_submissions_huggingface
2024
Spectral-Risk Safe Reinforcement Learning with Convergence Guarantees
Accept (poster)
Summary: This paper considers a spectral-risk safe RL algorithm with convergence guarantees. In particular, the paper considers a constrained MDP approach where the objective is to maximize the expected cumulative reward subject to the constraint that the spectral risk measure is below a certain threshold. The main challenge lies in solving the optimization problem, as it requires solving a bi-level optimization problem. The paper first proposes a policy-gradient-based approach for solving the inner problem, and then proposes a gradient-based approach to sample according to the optimal spectral risk measure. Strengths: 1. Developing a safe RL policy with a spectral risk measure is an important research problem. 2. The paper provides convergence guarantees and effective policies. 3. The paper also provides practical approaches for implementation. The empirical performance is good. Weaknesses: 1. The paper is a little bit dense to parse in its entirety. 2. While the paper provides a convergence guarantee, it does not provide any non-asymptotic rates of convergence. In particular, a sample-complexity guarantee has not been provided. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In order to solve the inner optimization problem, the paper considers a primal-dual type algorithm for handling the feasibility function. What types of feasibility functions $F$ have been considered? Can the authors provide an outline of how the dual-variable updates shown in Appendix B ensure feasibility? 2. For solving the outer optimization, the paper relies on minimizing the loss function. However, finding an optimal policy that maximizes $J(\pi,\beta)$ is difficult because of the non-linearity in the feasibility function. Hence, the reviewer did not understand how the algorithm solves this inner optimization problem, which seems different from the inner optimization problem described in Section 5. 3. The convergence result (Theorem 6.3) relies on the Robbins-Monro condition. 
Is it satisfied (in particular, for the optimization framework considered in Section 6 which is different from Section 5)? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Not as such. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the detailed review and valuable feedback on our paper. We appreciate the effort to review our work. Below is our response to the reviewer's comments. **Weakness (convergence rate):** Thanks for pointing out the need for a convergence rate analysis. We have addressed the convergence rate of the proposed method in the general response above. Please check the general response. **Q1 (feasibility):** The feasibility function is defined as $\mathcal{F}(x)=0$ if $x\leq0$ else $\infty$, which is noted in the footnote of line 114. This function is used to express the constrained optimization problem as an unconstrained one in equation (6), since the two problems "$\max_\pi J_R(\pi)$ s.t. $J_{C_i}(\pi)\leq d_i$" and "$\max_\pi J_R(\pi) - \sum_{i=1}^N\mathcal{F}(J_{C_i}(\pi) - d_i)$" are equivalent. When solving the risk-constrained RL problem, the feasibility function is not used directly; instead, the constraints are used as they are, as shown in equation (7). According to Theorem 5.3, an optimal policy for the inner problem can be obtained if the policy is updated using the proposed update rule, which is described below line 169. As mentioned in lines 171-172, the proposed update rule requires satisfying the following conditions on $\lambda_t$ and $\nu_t$, which the reviewer refers to as "dual variables": *1)* $\lambda_{t,i}$ and $\nu_t$ should exist within $[0, \lambda_\mathrm{max}]$, and *2)* $\lambda_{t,i}(J_{C_i}(\theta)-d_i)\geq0$ and $\sum_i \lambda_{t,i}=1$ when constraints are not satisfied. Various methods for choosing $\lambda_t$ and $\nu_t$ that satisfy these conditions are presented in Appendix B. By using one of these methods to calculate $\lambda_t$ and $\nu_t$, an optimal policy can be obtained according to Theorem 5.3, ensuring that the constraints are satisfied. 
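To illustrate the indicator-function equivalence in Q1 above, here is a toy sketch of ours over a finite policy set (the policy names and numeric values are made up):

```python
# F(x) = 0 if x <= 0 else +inf: the penalized objective J_R(pi) - F(J_C(pi) - d)
# equals J_R(pi) on feasible policies and -inf otherwise, so both problems
# share the same maximizer.
F = lambda x: 0.0 if x <= 0 else float("inf")
policies = {"a": (5.0, 2.0), "b": (8.0, 4.0), "c": (6.0, 1.0)}  # pi -> (J_R, J_C)
d = 3.0  # constraint threshold

# constrained problem: max J_R over policies with J_C <= d
feasible = {p: jr for p, (jr, jc) in policies.items() if jc <= d}
constrained_opt = max(feasible, key=feasible.get)
# unconstrained penalized problem: max J_R - F(J_C - d)
penalized_opt = max(policies, key=lambda p: policies[p][0] - F(policies[p][1] - d))
assert constrained_opt == penalized_opt == "c"   # same maximizer
```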
A brief insight into the proof of Theorem 5.3 is as follows: the update rule is designed to give more weight to the objective function when the constraints are satisfied and more weight to the constraints when they are not. As a result, through repetitive policy updates, the policy converges within or on the boundary of the constraints. **Q2 (outer optimization):** The reviewer seems to think that calculating $J(\pi; \beta)$ requires solving another inner problem, which is different from Section 5. In other words, the reviewer seems to think that in order to train the sampler, it is necessary to find a policy that maximizes $J(\pi; \beta)$. However, this is not correct. The policy is only updated using the method introduced in Section 5, and only the sampler is trained in Section 6. There is no other inner problem. As seen in the equation below line 240, the sampler's loss function is composed of $J(\pi_{\beta, t}; \beta)$, where $\pi_{\beta, t}$​ is the policy obtained by solving the inner problem from Section 5 for $\beta$ over $t$ iterations. $J(\pi_{\beta, t}; \beta)$ can be calculated using value functions corresponding to $\pi_{\beta, t}$​, and the sampler is trained to output higher probabilities for $\beta$ that yield a higher $J(\pi_{\beta, t}; \beta)$ value. To be more specific, policies are trained for each of several $\beta$ using the method introduced in Section 5, and these policies are expressed as { $\pi_{\beta,t} | \beta \in B^N$ }. Then, we use these policies to calculate $J(\pi_{\beta,t};\beta)$ and use these values to train the sampler. To implement the set of policies, $\beta$ along with the state are added as inputs to the actor-network $\pi_\theta(a|s ; \beta)$, as detailed in Appendix C. Additionally, the explanation on lines 237-238 does not mean that we need to find $\max_\pi J(\pi; \beta)$, but rather demonstrates that $J(\pi; \beta)$ can be used as a measure of how well RCRL is solved. 
If there is any misunderstanding of our answer, please provide further clarification. **Q3 (Robbins-Monro condition):** Since it is possible to adjust the learning rate freely, the Robbins-Monro condition can be satisfied. Furthermore, as mentioned in the above response to Q2, there is no additional inner problem in calculating $J(\pi;\beta)$. Therefore, the method described in Section 6 does not need to satisfy the Robbins-Monro condition. --- Rebuttal Comment 1.1: Title: Follow up Comment: Thanks for your response. I will go over the comments carefully. As for the CRPO-like proof that you provided in the general response, I have one question. Since this is a constrained problem, how are you ensuring that $J_R(\pi^*)-J_R(\pi_t)\geq 0$? Since this is a constrained problem, $\pi_t$ may be infeasible and may have a better value compared to $J_R(\pi^*)$. --- Rebuttal 2: Title: Author Response Comment: Thanks for the response. The question about the inequality seems to arise from the proof of the convergence rate in the general response. In the proof, we said "$J_R(\pi^*) - J_R(\pi_t) \geq 0$ for $t \in \mathcal{N}$", and $t \in \mathcal{N}$ means that the policy at iteration $t$, $\pi_t$, satisfies all constraints. Therefore, among the constraint-satisfying policies, $\pi^*$ has the maximum value of $J_R$, which results in $J_R(\pi^*) \geq J_R(\pi_t)$. --- Rebuttal Comment 2.1: Title: Not convinced Comment: Thanks for the quick explanation. However, it seems that (even in the CRPO case) there is a slack parameter $\eta$, hence, those policies may not exactly satisfy the constraints. For example, it may happen that $J_{c,i}^{\pi_t}>d_i$, but $J_{c,i}^{\pi_t}<d_i+\eta$. Thus, $\pi_t$ may not satisfy the constraint and may be infeasible. --- Reply to Comment 2.1.1: Title: Author Response Comment: Thanks for clarifying the raised question with an example. As the reviewer pointed out, we confirmed that there was an error in the proof. 
We have uploaded the revised proof to a comment on the general response above. Please refer to the comment.
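The constraint-gated update rule summarized in the Q1 answer of this thread (give weight to the objective when the constraints are satisfied, and to the most violated constraint otherwise, following the CRPO-style scheme discussed above) can be sketched as follows. This is an illustrative Python sketch, not the paper's implementation; all names (`gated_update`, `grad_J`, `grad_Jc`) are hypothetical placeholders.

```python
import numpy as np

def gated_update(theta, grad_J, grad_Jc, J_c, d, eta, lr):
    """One constraint-gated step: ascend the objective gradient when all
    constraints hold within slack eta; otherwise descend the gradient of
    the most violated constraint. Illustrative sketch only."""
    violations = J_c - d                 # J_c, d: constraint values/thresholds
    if np.all(violations <= eta):
        return theta + lr * grad_J       # weight the objective
    i = int(np.argmax(violations))       # most violated constraint
    return theta - lr * grad_Jc[i]       # weight the constraints

# toy usage: a 2-parameter policy with one cost constraint
theta = gated_update(np.zeros(2),
                     grad_J=np.array([1.0, 0.0]),
                     grad_Jc=np.array([[0.0, 1.0]]),
                     J_c=np.array([0.5]), d=np.array([1.0]),
                     eta=0.1, lr=0.01)
# constraint satisfied here, so theta moves along grad_J
```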
Summary: This paper proposes a spectral risk measure-constrained RL algorithm, called spectral-risk-constrained policy optimization (SRCPO). This algorithm leverages the duality of spectral risk measures and treats the risk-constrained RL problem as a bilevel optimization. In this bilevel optimization problem, the outer problem is to optimize dual variables derived from the risk measures, while the inner problem is to find an optimal policy given these dual variables. The proposed algorithm is the first to guarantee convergence to an optimum in the tabular setting. Furthermore, the authors conduct experiments on continuous control tasks, and show that the proposed algorithm achieves the best performance among the compared RCRL algorithms. Strengths: 1. The studied problem, i.e., spectral risk measure-constrained RL, is well-motivated and can be applied to safety-critical scenarios, e.g., healthcare and finance. 2. This paper is very well-written and easy to follow. 3. The authors design a general algorithm framework and provide the convergence guarantee for a family of spectral risk-constrained RL problems. 4. Empirical evaluations are also provided to demonstrate the performance superiority of the proposed algorithm in practice. Weaknesses: 1. The idea of handling risk-sensitive RL by solving an inner problem (i.e., find an optimal policy under a fixed dual variable) and an outer problem (i.e., optimize dual variables) is not new. This idea appears in prior risk-sensitive RL works, e.g., (although these prior works may not consider constraints) [1] Wang, Kaiwen, et al. "Near-minimax-optimal risk-sensitive reinforcement learning with cvar." International Conference on Machine Learning, 2023. [2] Chen, Yu, et al. "Provably Efficient Iterated CVaR Reinforcement Learning with Function Approximation and Human Feedback." International Conference on Learning Representations, 2023. 2. The current format of Algorithm 1 is not clear enough. 
The specific algorithm steps should be included, e.g., the policy update, the distribution update (how to find $\xi$) and the sampling. 3. It seems that Theorems 5.3 and 6.3 are the theoretical guarantees for the inner and outer problems, respectively. Can the authors give the theoretical guarantee of the overall performance for the spectral risk measure-constrained RL problem? 4. The provided results only state that the algorithm will converge to the optimal policy. Can the authors comment on the convergence rate or sample complexity? Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the weaknesses above. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Please see the weaknesses above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the thorough review and insightful comments on our paper. We appreciate the effort to review our work. The following is our response to the reviewer's comments. **Weakness 1 (inner and outer problems)** As the reviewer commented, handling risk by separating the given problem into inner and outer problems can be considered not new; rather, the proposed methods for solving each problem can be considered novel. Therefore, we will modify lines 50-51 to emphasize the proposed process of the inner and outer problems rather than the bilevel optimization framework itself. **Weakness 2 (algorithm details)** Algorithm 1 only provides a brief overview of the proposed method, but Algorithm 2 in Appendix C shows details of the method, including information on the policy and the distribution update. To improve clarity, we will change the title of Algorithm 1 to "Overview of SRCPO," and at the end of Section 4, we will add a note indicating that a detailed algorithm is provided in Appendix C. **Weakness 3 (overall performance)** Through Section 5, we can find an optimal policy of the inner problem for various $\beta$. This means that we can find a set of optimal policies, $\Pi^*=${ $\pi_\beta^*|\beta \in B^N$ }, where $\pi_\beta^*$ denotes the optimal policy for $\beta$. Additionally, through Section 6, we can obtain an optimal sampler $\xi^*$ that samples only $\beta^* = \arg\max_\beta J_R(\pi_\beta^*)$. Thus, by solving the inner and outer problems, we obtain $\Pi^*$ and $\xi^*$. Sampling $\beta^*$ from $\xi^*$ and finding the corresponding policy in $\Pi^*$ leads to the optimal policy $\pi_{\beta^*}^*$ of the RCRL problem. In conclusion, by solving the inner and outer problems respectively, we can obtain an optimal policy for the original RCRL problem defined in equation (5), thus ensuring overall performance. **Weakness 4 (convergence rate)** We have addressed the convergence rate of the proposed method in the general response above. 
Please check the general response. --- Rebuttal 2: Title: Thank the authors for their response Comment: Thank the authors for their response. Since there exists a technical error in the proof, I think that this paper needs a major revision, and thus I decreased my score. --- Rebuttal Comment 2.1: Title: Author Response Comment: Thanks for the response. However, we would like to clarify that there are no technical errors in our paper. The error mentioned in the general response arose from an additional analysis of the convergence rate, which was conducted in response to the reviewers' requests. For more details, there was an error in the proof regarding the convergence rate in the general response, specifically the part "$J_R(\pi^*) - J_R(\pi_t) \geq 0$ for $t \in \mathcal{N}$," which has now been corrected to "one of the following must hold: (1) $|\mathcal{N}| \geq \frac{T}{2}$ or (2) $\sum_{t \in \mathcal{N}}(J_R(\pi^*) - J_R(\pi_t)) \leq 0$." The revised proof has corrected all errors, and the reviewer DPN9 has also increased the score from 6 to 7. We apologize for the error, which has now been corrected, and would like to reiterate that there are no errors in the paper itself.
Summary: In this paper, the authors have considered the framework of risk-constrained reinforcement learning (RCRL) by tackling risk-measure-based constraints. The non-linearity of risk measures makes the convergence of RL schemes challenging. In this paper, a spectral risk measure-constrained RL algorithm is proposed. The proposed algorithm is based on a bilevel optimization. The outer problem deals with optimization of dual variables, while the inner one finds the optimal solution given a set of dual variables. The authors claim that the proposed scheme is the first method that converges to optimality for tabular MDP problems. Simulation results are presented where the proposed method is tested on continuous control tasks. Strengths: The paper is well-written and easy to follow. The results and claims presented in the paper appear to be correct. Weaknesses: There are some limitations with respect to the constructed loss function and scalability of the adopted linear interpolation mechanism. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The proposed approach approximates the spectrum by a discretized spectrum. What is the gap in performance with respect to the optimal solution of the original system? Although in Theorem 6.2, it is derived that as $M$ becomes large, the performance gap reduces to zero, how does this $M$ scale as the number of states and actions increases? 2. The intuition behind the sampling process and why it works is not clear. 3. What is the rationale behind the loss function proposed by the authors? Why is the loss function novel? Is there any other loss function for which the scheme does work well? If yes, can the authors come up with a more general loss function? If no, then this is a limitation of the work that it works for only a specific type of loss function (constructed by the authors). 
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the valuable feedback on our paper. We appreciate the effort in reviewing our work. We have carefully considered the reviewer's comments. Below, we address the concerns the reviewer has raised: **Q1 (performance gap):** In the general response above, we analyzed the performance gap caused by the spectrum approximation; please refer to the general response. Briefly, we first bounded the gap by the difference in performance of the original problem at different thresholds. Analyzing the performance difference at different thresholds in a constrained optimization problem is challenging due to discontinuous changes in the feasible set or optimal points. Thus, we made some assumptions to analyze the gap and found that the gap is bounded by $KR_\mathrm{max}/(C_\mathrm{max}M)$. This suggests that the gap depends on the scale of the rewards and costs rather than on the state and action spaces. Therefore, we conjecture that the value of $M$ can be set independently of the state and action spaces. **Q2 (intuition behind the sampling process):** Thanks for pointing out the need for clarification. Section 6.2 introduced *1)* why we need to learn the distribution of $\beta$ (sampler $\xi$), *2)* the training method, and *3)* the technique for sampling $\beta$ from $\xi$. Then, in Section 7, we presented the practical implementation method, which seems to be what the reviewer is asking about. As mentioned in lines 230-231, $\beta[j]$ is sampled within the range $[\beta[j-1], C_\mathrm{max}/(1-\gamma)]$. To implement this, distributions with finite intervals, such as the beta or truncated normal distribution, can be used. However, defining the interval of the distribution with $\beta$ complicates the gradient computation graph in PyTorch, increasing the likelihood of computational errors. 
To address this, we decided to sample the difference of $\beta$, which is motivated by the fact that $\beta[i] \sim [\beta[i-1], C_\mathrm{max}/(1-\gamma)]$ is equivalent to $\beta[i] = \beta[i-1] + \Delta$, where $\Delta \sim [0, C_\mathrm{max}/(1-\gamma) - \beta[i-1]]$. To further remove $\beta$ from the interval, we instead sampled $\Delta$ within $[0, C_\mathrm{max}/(1-\gamma)]$, but this can result in $\beta[i] > C_\mathrm{max}/(1-\gamma)$. Nevertheless, if $\beta$ increases, the value of $J_C(\pi; \beta)$ also increases due to the conjugate function in (2), thereby reducing $J_R(\pi)$. As a result, the distribution of $\beta$ is trained to output low probabilities for such cases. We will add this explanation to the appendix. **Q3 (loss function):** First, we want to clarify the meaning of line 235, which the reviewer seems to be pointing out. The meaning is that the process of solving the outer problem is novel rather than claiming that the proposed loss function itself is novel. Specifically, the proposed method enables the inner and outer problems to be solved concurrently rather than sequentially. Thus, we will change the phrase to "novel update process." Next, the rationale for structuring the loss indicated in lines 236 and 240 is as follows: The goal of the sampler is to produce a high probability for the optimal $\beta$. To achieve this, we need an indicator that can provide feedback on how well the RCRL problem is solved for a given $\beta$. Inspired by the paper [R1], which converts constrained RL to unconstrained RL using penalty functions, we defined the indicator as shown in line 236. Additionally, as observed in the proof of Theorem 6.3, any function $J$ that satisfies $\max\_\pi J(\pi; \beta)= \max\_{\pi \in \{ \pi|J_{C_i}(\pi;\beta) \leq d_i \}} J_R(\pi)$ can be used. One such $J$ could be $J(\pi;\beta) = J_R(\pi)$ if $J_{C_i}(\pi;\beta)\leq d_i$ for all $i$; otherwise, $-R_\mathrm{max}/(1-\gamma)$. 
Alternatively, instead of providing feedback, a method that uses the policy's value function to compute the optimal distribution in closed form could be proposed. As mentioned earlier, since we have proposed a new approach to solving the outer problem, it does not seem necessary to propose a more general loss function. Developing more efficient and reliable loss functions could be done in future work. **References** - [R1] Zhang, Linrui, et al. "Penalized proximal policy optimization for safe reinforcement learning." arXiv preprint arXiv:2205.11814 (2022). --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response, which helped me better understand the paper. I have read all the responses and discussions. Some of my concerns are answered now. However, since the paper needs a revision in the technical part, I will keep my score as it is. I have no further questions. --- Reply to Comment 1.1.1: Title: Author Response Comment: Thanks for the response, and we are glad to hear that some of the reviewer's concerns have been addressed. We would like to clarify that the error was not in the submitted manuscript but in the global response (convergence rate analysis), which has now been resolved and does not affect the main theoretical results of the manuscript. --- Rebuttal 2: Comment: Thanks for the clarification. I have read the paper and response again, and I am happy to raise the score accordingly.
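The increment-based sampling of $\beta$ described in the Q2 answer of this thread (draw $\Delta$ within $[0, C_\mathrm{max}/(1-\gamma)]$ and set $\beta[i] = \beta[i-1] + \Delta$) can be sketched as follows. The uniform distribution here is a placeholder assumption; the actual method trains a sampler $\xi$ over these increments, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_beta(n, c_max, gamma):
    """Draw a nondecreasing beta vector via increments: instead of
    sampling beta[i] in [beta[i-1], C_max/(1-gamma)] directly, draw
    Delta in [0, C_max/(1-gamma)] and accumulate. Illustrative sketch;
    the paper uses a learned distribution rather than a uniform one."""
    bound = c_max / (1.0 - gamma)
    delta = rng.uniform(0.0, bound, size=n)  # Delta ~ U[0, C_max/(1-gamma)]
    return np.cumsum(delta)                  # beta[i] = beta[i-1] + Delta

beta = sample_beta(4, c_max=1.0, gamma=0.9)
# beta is nondecreasing by construction; entries exceeding the bound are
# discouraged during training via low J(pi; beta) feedback
```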
Summary: The paper provides a new spectral-risk-constrained policy optimization algorithm that uses the duality of spectral risk measures and a bilevel optimization approach to address the risk-constrained reinforcement learning problem, solving the inner primal problems and the outer dual problem. The paper provides global convergence guarantees in the tabular setting. The paper supports the results with experimental comparisons to other methods. Strengths: The paper is well written; despite many complex notions used in the flow, they are all clearly introduced. The theoretical result seems to be solid and important to the community. The idea to solve the dual problem of risk-constrained RL using a sampler distribution seems to be novel too. The proofs are provided, however I did not check them carefully. The experiments demonstrate a good performance of the proposed method. Weaknesses: There are several things which are not very clearly defined and a few minor typos that I noticed. line 88: F_X is not defined, I believe it is a probability function? line 168: where does the log in eq. (9) come from? Conflicting notations: - Now, alpha is used both for denoting the learning rate and the risk parameter. Maybe, the authors could use different notations for these notions. - F is probably a probability function, and then a feasibility function (line 114). - in theorem 6.2 i is used to iterate over the discretization 1:M-1, but later used to iterate over constraints 1:N, and j is used for discretizations. maybe use j in thm. 6.2 and other places consistently too? Typos: line 102: an -> a Technical Quality: 4 Clarity: 3 Questions for Authors: all questions are above in the weaknesses section Confidence: 2 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: The limitations are discussed. There is no potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, Thanks for the positive review and valuable feedback on our paper. We appreciate the effort in reviewing our work. Below, we respond to each of the points mentioned. **Weakness 1 (line 88 and 168)**: $F\_X$ is the cumulative distribution function (CDF) of the random variable $X$. We will add the definition of $F_X$ to the paper after line 88. The logarithmic operator in eq. (9) comes from the following derivation of the policy gradient of the sub-risk measure (by differentiating $\pi'$ in eq. (8)): $\nabla\_\theta \mathcal{R}\_\sigma^g(G^{\pi_\theta}) = \nabla\_\theta\mathbb{E}\_{d\_\rho^{\pi\_\theta},\pi\_\theta}[A\_g^\pi(s,a)]/(1-\gamma) = \mathbb{E}\_{d\_\rho^{\pi\_\theta}}[\sum_a \nabla_\theta \pi_\theta(a|s) A_g^\pi(s,a)]/(1-\gamma) = \mathbb{E}\_{d\_\rho^{\pi\_\theta}, \pi_\theta}[\nabla_\theta \log \pi_\theta(a|s) A_g^\pi(s,a)]/(1-\gamma).$ **Weakness 2 (conflicting notations)**: Thanks for the comments on the conflicting notations. We will change the notations as follows: 1. learning rate: from $\alpha$ to $\omega$. 2. Feasibility function: from $\mathcal{F}$ to $\chi$. 3. Iterator of the discretization in Theorem 6.2: from $i$ to $j$. We will also correct the typo. --- Rebuttal Comment 1.1: Title: Thanks Comment: I thank the authors for their response. I keep my score and support acceptance.
Rebuttal 1: Rebuttal: # General Response We appreciate all the reviewers for their insightful comments and suggestions. In this response, we will address the common concerns raised by the reviewers: the convergence rate analysis and the performance gap. **Convergence rate** We will analyze the convergence rate of the proposed method in Section 5 through finite-time analysis, similar to the approach used in CRPO [R1]. To achieve this, we make a few minor modifications to the policy update rule in line 169 as follows: 1. "if constraints are satisfied" is changed to "if $J_{C_i}(\pi_t)\leq d_i + \eta$ $\forall i$," where $\eta$ is a positive number. 2. The time-varying step size $\alpha_t$ is changed to a fixed learning rate $\alpha$. Then, the following is satisfied: > Let $\alpha=(1-\gamma)^{2.5}/\sqrt{T}$ and $\eta = (2D + 20K^2)/((1-\gamma)^{2.5}\sqrt{T})$, where $D:=\sum_s d_\rho^{\pi^*}(s)D_\mathrm{KL}(\pi^*(\cdot|s)||\pi_0(\cdot|s))$ and $K:=\max(N\lambda_\mathrm{max}C_\mathrm{max}, \lambda_\mathrm{max}R_\mathrm{max}, R_\mathrm{max}, 1)$. > Then, $J_R(\pi^*) - \mathbb{E}\_{t\sim \mathcal{N}}[J_R(\pi_t)] \leq (2D + 20K^2)/((1-\gamma)^{2.5}\sqrt{T})$, and $\mathbb{E}\_{t\sim\mathcal{N}}[J_{C_i}(\pi_t)] - d_i \leq (2D + 20K^2)/((1-\gamma)^{2.5}\sqrt{T})$ for all $i$. **Proof:** By substituting $|J_R(\pi^*)-J_R(\pi_t)|\leq 2R_\mathrm{max}/(1-\gamma)$ and $|J_{C_i}(\pi^*)-J_{C_i}(\pi_t)|\leq 2C_\mathrm{max}/(1-\gamma)$ into Eq. (30), $\sum_{t\in\mathcal{N}}(\alpha(J_R(\pi^*)-J_R(\pi_t))-2\alpha^2N\lambda_\mathrm{max}C_\mathrm{max}/(1-\gamma))+\sum_{t\notin\mathcal{N}}(\alpha\eta-2\alpha^2\lambda_\mathrm{max}R_\mathrm{max}/(1-\gamma))$ $\leq D + \sum_{t\in\mathcal{N}}2\alpha^2(R_\mathrm{max}+\alpha N\lambda_\mathrm{max}C_\mathrm{max})^2/(1-\gamma)^5+\sum_{t\notin\mathcal{N}}2\alpha^2(\alpha\lambda_\mathrm{max}R_\mathrm{max}+N\lambda_\mathrm{max}C_\mathrm{max})^2/(1-\gamma)^5$. 
Using the definition of $K$, $\sum_{t\in\mathcal{N}}\alpha(J_R(\pi^*)-J_R(\pi_t))+\sum_{t\notin\mathcal{N}}\alpha\eta\leq D+2T\alpha^2K^2(\alpha+1)^2/(1-\gamma)^5+2T\alpha^2K/(1-\gamma)$. Also, due to $J_R(\pi^*)-J_R(\pi_t)\geq0$ for $t\in\mathcal{N}$, $(T-|\mathcal{N}|)\alpha\eta\leq D+2T\alpha^2K^2(\alpha+1)^2/(1-\gamma)^5+2T\alpha^2K/(1-\gamma)$. Since $\alpha\eta T/2\geq D+2T\alpha^2K^2(\alpha+1)^2/(1-\gamma)^5+2T\alpha^2K/(1-\gamma)$, we can get $(T-|\mathcal{N}|)\alpha\eta\leq T\alpha\eta/2\Rightarrow |\mathcal{N}|\geq T/2$. Then, $J_R(\pi^*)-\mathbb{E}\_{t\sim\mathcal{N}}[J_R(\pi_t)]=\sum_{t\in\mathcal{N}}(J_R(\pi^*)-J_R(\pi_t))/|\mathcal{N}|$ $\leq2(D+2T\alpha^2K^2(\alpha+1)^2/(1-\gamma)^5+2T\alpha^2K/(1-\gamma))/(\alpha T)\leq(2D+20K^2)/((1-\gamma)^{2.5}\sqrt{T})$. Also, $\mathbb{E}\_{t\sim\mathcal{N}}[J_{C_i}(\pi_t)]-d_i\leq\eta=(2D + 20K^2)/((1-\gamma)^{2.5}\sqrt{T})$. **Performance Gap** Let $A(d)=${ $\pi|\mathcal{R}\_{\sigma}(X)\leq d$ }, $\tilde{A}(d)=${ $\pi|\mathcal{R}\_{\tilde{\sigma}}(X)\leq d$ }, where $X=G_C^\pi$. Then, the performance gap is $G:=\max_{\pi\in A(d)}J_R(\pi)-\max_{\pi\in\tilde{A}(d)}J_R(\pi)$. According to Lemma 6.1, $|\mathcal{R}\_{\sigma}(G_C^\pi)-\mathcal{R}\_{\tilde{\sigma}}(G_C^\pi)|\leq K/M$, where $K=C_\mathrm{max}\sigma(1)/(1-\gamma)$, so $A(d-K/M) \subset \tilde{A}(d)$. Thus, $G\leq\max_{\pi\in A(d)}J_R(\pi)-\max_{\pi\in A(d-K/M)}J_R(\pi)$. As a result, the gap is bounded by the performance difference between the two thresholds. However, in constrained optimization, this type of problem is challenging due to discontinuous changes in the feasible set or optimal points. Thus, we will use some assumptions to analyze the gap practically, which will be introduced below. Let $d_\rho^\pi(s,a):=(1-\gamma)\sum_t \gamma^t\mathbb{P}(s_t=s, a_t=a)$. 
Then, $(1-\gamma)(J_R(\pi)-J_R(\pi'))=\mathbb{E}\_{(s,a)\sim d_\rho^\pi}[A_R^{\pi'}(s,a)]=\langle d_\rho^\pi,A_R^{\pi'}\rangle=\langle d_\rho^\pi-d_\rho^{\pi'},A_R^{\pi'}\rangle\leq||d_\rho^\pi-d_\rho^{\pi'}||\_1 R_\mathrm{max}/(1-\gamma)$. The performance gap is expressed as $G=\max_{\pi'\in A'}\min_{\pi\in A}(J_R(\pi')-J_R(\pi))\leq\max_{\pi'\in A'}\min_{\pi\in A} ||d_\rho^{\pi'}-d_\rho^\pi||\_1 R_\mathrm{max}/(1-\gamma)^2$, where $A=A(d-K/M)$ and $A'=A(d)$. As a result, we need to find $\max_{\pi'\in A'}\min_{\pi \in A}||d_\rho^{\pi'}-d_\rho^\pi||\_1$. Since the one-norm above is the maximum distance between the two nearest policies in set $A$ and $A'$, we only need to evaluate the boundaries $\partial A$ and $\partial A'$. If $\pi'$ on $\partial A'$ is given, we need to find the closest policy $\pi$ on $\partial A$: $\min_{\pi\in\partial A}||d_\rho^{\pi'}-d_\rho^\pi||\_1=\min_\pi||d_\rho^{\pi'}-d_\rho^\pi||\_1$ s.t. $J_C(\pi')-J_C(\pi)=K/M$. Let $J_C(\pi)=\inf_g\mathcal{R}\_\sigma^{g}(\pi)=\mathcal{R}\_\sigma^{g^*}(\pi)$. Then, $J_C(\pi')-J_C(\pi)\leq\mathcal{R}\_\sigma^{g^*}(\pi')-\mathcal{R}\_\sigma^{g^*}(\pi)=\langle d_\rho^{\pi'}-d_\rho^{\pi},A_{g*}^\pi\rangle/(1-\gamma)\leq||d_\rho^{\pi'}-d_\rho^\pi||\_1 C_\mathrm{max}/(1-\gamma)^2$. Here, we assume that there exists an occupancy measure $d_\rho^{\pi}$ that satisfies the equality in the above equation. Then, $\min_{\pi\in\partial A}||d_\rho^{\pi'}-d_\rho^\pi||\_1=K(1-\gamma)^2/(C_\mathrm{max}M)\Rightarrow\max_{\pi'\in A'}\min_{\pi\in A}||d_\rho^{\pi'}-d_\rho^\pi||\_1=K(1-\gamma)^2/(C_\mathrm{max}M)$. Finally, the performance gap is bounded by $KR_\mathrm{max}/(C_\mathrm{max}M)$. Without the assumption, it is not guaranteed that the performance gap will always be bounded by $KR_\mathrm{max}/(C_\mathrm{max}M)$. Nevertheless, an insight can be gained that the gap is influenced by the scale of rewards and costs rather than the state space and action space. 
Therefore, it seems possible to conjecture that the proposed approximate approach is scalable. **References** - [R1] Xu et al. "CRPO: A new approach for safe reinforcement learning with convergence guarantee." ICML, 2021.
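The overall selection step described in the rebuttals above (solve the inner problem for each candidate $\beta$, then choose $\beta^* = \arg\max_\beta J_R(\pi_\beta^*)$) can be sketched as follows; `solve_inner` and `J_R` are hypothetical placeholders, not the paper's API, and the toy usage below stubs them out with trivial lambdas.

```python
def select_optimal(betas, solve_inner, J_R):
    """Bilevel selection sketch: solve the inner problem for each beta,
    then pick the beta maximizing the return J_R of its optimal policy.
    solve_inner and J_R are placeholder callables."""
    policies = {b: solve_inner(b) for b in betas}       # the set Pi*
    best = max(betas, key=lambda b: J_R(policies[b]))   # beta*
    return best, policies[best]

# toy usage: a fake J_R that peaks at beta = 2
best, pi = select_optimal([1, 2, 3],
                          solve_inner=lambda b: {"beta": b},
                          J_R=lambda p: -(p["beta"] - 2) ** 2)
# best == 2
```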
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Leveraging Hallucinations to Reduce Manual Prompt Dependency in Promptable Segmentation
Accept (poster)
Summary: The paper introduces an innovative approach called ProMaC (Prompt-Mask Cycle) to improve promptable segmentation. The primary goal of ProMaC is to reduce the dependency on instance-specific manual prompts by leveraging hallucinations from Multimodal Large Language Models (MLLMs) to generate more accurate and task-specific prompts. Strengths: Innovation in Utilizing Hallucinations: The paper's novel approach of leveraging rather than eliminating hallucinations to improve segmentation tasks is groundbreaking. It recognizes the potential of hallucinations to provide contextual insights, which can be valuable for enhancing model performance. Reduction of Manual Effort: By introducing task-generic prompts and iteratively refining them, ProMaC significantly reduces the need for manual annotation, making it more feasible for large-scale applications. Iterative Refinement Process: The iterative approach of ProMaC, which continuously improves prompts and masks, ensures higher accuracy and adaptability across various tasks and datasets. Comprehensive Evaluation: The paper provides extensive evaluations on multiple benchmarks, demonstrating the robustness and effectiveness of ProMaC in diverse and challenging scenarios. Adaptability and Versatility: ProMaC's success across different segmentation tasks, including those in medical imaging and transparent object detection, highlights its versatility and potential for broad application. Open-Source Contribution: The inclusion of code in the supplemental materials encourages further research and development, facilitating community engagement and collaboration. In summary, the paper makes a significant contribution to the field of promptable segmentation by introducing a novel method that leverages MLLM hallucinations, reduces manual dependency, and demonstrates superior performance across various challenging tasks. 
Weaknesses: Reliance on Hallucinations: Inconsistent Performance: The approach relies on hallucinations, which can be unpredictable and inconsistent. In scenarios where hallucinations are highly inaccurate, the performance of ProMaC could degrade significantly. Dependence on MLLM Quality: The effectiveness of leveraging hallucinations is heavily dependent on the quality and training of the Multimodal Large Language Models (MLLMs). Variations in the MLLM's training data and methodology can lead to differing hallucination patterns, affecting ProMaC's reliability. Complexity of the Iterative Process: Computational Overhead: The iterative process of refining prompts and masks can be computationally intensive. This complexity might limit the scalability of ProMaC for real-time applications or large datasets. Implementation Challenges: The method's implementation is intricate, involving multiple stages of hallucination generation, contrastive reasoning, and mask refinement. This complexity can pose challenges for practitioners looking to adopt the method. Generalization Concerns: Task-Specific Fine-Tuning: While ProMaC aims to reduce manual prompt dependency, the initial setup still requires careful selection of task-generic prompts. Ensuring that these prompts are effective across diverse tasks without additional fine-tuning could be challenging. Transferability: The approach might not generalize well to entirely new tasks or domains where the hallucinations generated by MLLMs do not align well with the actual task requirements. Technical Quality: 3 Clarity: 3 Questions for Authors: see the weakness Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: see the weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for valuable feedback and for appreciating the idea of our method, strong justification of each component of our framework and extensiveness of our experiments. Following are the responses regarding your concerns. > *Inconsistent Performance: In scenarios where hallucinations are highly inaccurate, the performance of ProMaC could degrade significantly.* - Hallucinations are used solely to gather task-relevant information from the image and are effectively filtered by the Visual Contrastive Reasoning module and the subsequent Mask Semantic Alignment model to eliminate the negative impact of task-irrelevant information. This ensures the generated instance-specific prompts are accurate and not influenced by irrelevant hallucinations, maintaining robust performance even when hallucinations are unreliable. - To elaborate, as stated in Lines 143-150, by dividing the input image into patches at various scales, MLLM induces hallucinations based on varying object visibility. These hallucinations use prior knowledge to explore connections between the image data and the associated task, gathering task-relevant information. Simultaneously, the original image is also fed directly into the MLLM to obtain task-relevant candidates without relying on hallucinations. This ensures that even when hallucinations are highly inaccurate, the collected information still includes relatively reliable content. - Visual Contrastive Reasoning module then selectively reduces irrelevant semantics to remove irrelevant influences while benefiting from relevant insights obtained from pre-trained knowledge (hallucinations). When hallucinations are completely inaccurate, the information derived from them is discarded. Additionally, the Mask Semantic Alignment module further ensures that the generated masks align with the task semantics. 
> *Dependence on MLLM Quality.* As we explained in the limitations section, the quality of MLLMs affects our model's generalization ability, which is a potential direction for our future research. > *Computational overhead and implementation challenges.* - Computational Overhead: Traditional methods often require multiple GPUs for extensive training, leading to higher computational and memory demands. In contrast, as we discussed in Line 281, our method performs iterative test-time adaptation on a single 40G A100 GPU without the need for training, which is computationally efficient. Compared to GenSAM [19], which also uses test-time adaptation, our method achieves better performance with fewer iterations. As shown in Tab. 6(a), ProMaC reaches higher performance by the second iteration than GenSAM does after six iterations, further demonstrating the efficiency of our approach. - Implementation Challenges: Although our approach uses multiple modules, all module parameters are fixed and do not require training, making it easier to reproduce. In Tab. 7, we conducted three sets of experiments randomly and calculated the mean and variance. The results show that our training-free method exhibits very low variance, further proving its robustness. Additionally, we provide implementation details in the PyTorch Implementation Details section (Line 271-281) of the main text and Line 551-581 of the appendix. Our code is also available in the supplementary materials, ensuring that our method is easy to understand and implement. > *The initial setup still requires careful selection of task-generic prompts. Ensuring that these prompts are effective across diverse tasks without additional fine-tuning could be challenging.* - Task-generic prompt $P_g$ is decided by the task name rather than our manual selection, and has minimal impact on experimental results. 
Using the camouflaged animal detection task as an example, we selected "camouflaged animal" as $P_g$ because the task is named "camouflaged animal detection". Similarly, "polyp" is selected as $P_g$ because the corresponding task is named "polyp detection". - To explore the impact of different $P_g$ prompts, we used ChatGPT to suggest two synonyms for "camouflaged animal": "hidden animal" and "disguised animal" as possible $P_g$ candidates. We evaluated the effect of different $P_g$, as shown in Tab. 2 of the attached PDF file. Although different $P_g$ prompts cause slight fluctuations in the outcomes, the overall performance remains stable. This demonstrates that as long as $P_g$ effectively describes the task semantics, our method can achieve relatively consistent segmentation performance. > *Transferability: The approach might not generalize well to entirely new tasks or domains where the hallucinations generated by MLLMs do not align well with the actual task requirements.* - Because our ProMaC can be easily applied to different MLLMs, we can ensure robust performance across various tasks and domains by selecting the most appropriate MLLM for each specific task. - ProMaC is designed to be flexible and can be adapted to different MLLMs that are more suitable for specific tasks or domains. By selecting an MLLM that has been pre-trained on data relevant to the target domain, we can improve alignment with task requirements and enhance performance. - For example, as shown in Tab. 1 in the attached PDF, in medical imaging segmentation, ProMaC based on LLaVA performed modestly. However, when we used LLaVA-Med [1], which has stronger medical characteristics, as the base, ProMaC significantly outperformed both LLaVA-Med combined with SAM and LLaVA-based ProMaC. This demonstrates that by applying our ProMaC method to a more suitable base MLLM, we can achieve better performance, thereby ensuring transferability across various tasks. [1] Li C, Wong C, Zhang S, et al. 
Llava-med: Training a large language-and-vision assistant for biomedicine in one day[J]. Advances in Neural Information Processing Systems, 2024, 36.

---

Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal from the authors; it resolved all my concerns. I agree with the other reviewers that we should accept the paper.
Summary: The paper focuses on promptable segmentation, aiming to minimize the need for manually designing prompts. Specifically, it explores the hallucination issue in MLLMs and finds that hallucinations can reveal valuable contextual information, which can be largely beneficial to promptable segmentation tasks, especially for complex tasks such as camouflaged object segmentation. To this end, the paper proposes a Prompt-Mask Cycle generation framework (ProMaC) that iteratively refines the generated prompts and masks. The proposed method leverages hallucinations to mine more context-related information while simultaneously reducing irrelevant hallucinations for better segmentation results.

Strengths:
1. The idea of this paper is quite interesting. Instead of merely mitigating hallucinations, the paper explores how to leverage them for better segmentation performance. The paper also provides detailed analysis to explain its motivation and illustrate how hallucinations can benefit the segmentation task.
2. Extensive experiments over diverse segmentation benchmarks demonstrate the effectiveness of the proposed method in tackling various segmentation tasks, including very challenging camouflage segmentation and general segmentation tasks.

Weaknesses:

Motivation
1. The author illustrates the motivation mainly using samples from the camouflaged object segmentation task (in Figure 1 and A.2). This makes it quite straightforward to understand the rationale behind the proposed method and how it can benefit the camouflaged object segmentation task. However, it would be better to involve more examples from different segmentation tasks to illustrate the motivation comprehensively, especially general segmentation like COCO and Pascal VOC. I am curious whether this phenomenon would still occur when the objects are more visible.

Method
1. Some terms are unclear and need explanation. For example, I'm not very clear about the meaning of the term 'prompt' used in the paper.
Does it refer to text prompts or spatial prompts like boxes or points? From the method section, it seems that P_B and P_A are text prompts, and the instance-specific prompts refer to bounding boxes? Also, what does query P refer to? If its meaning differs from P_g, P_B, and P_A, it would be better to clarify this at the beginning of the method section. Since the method is based on prior work [19] and some terms are borrowed directly from [19], it is necessary to include a preliminary section to explain the background and terms. This would make the method section easier to follow.
2. Question about the proposed Multi-scale Chain of Thought Prompting. From my understanding, chain-of-thought prompting aims to achieve complex reasoning capabilities through multiple intermediate reasoning steps. Could the author please explain how this idea is reflected in the proposed method?
Technical Quality: 3
Clarity: 2
Questions for Authors: Overall, the idea of this paper is quite interesting. However, it lacks some necessary explanations about the motivation and the proposed method. For example, the motivation is primarily illustrated using examples from camouflaged object segmentation. More diverse examples could help illustrate its broader potential. Additionally, some terms are unclear, suggesting a need for clearer definitions and possibly a background section explaining the foundational concepts.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The author has discussed the limitations of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: We thank the reviewer for the positive evaluation of our work and for acknowledging the originality of the proposed method as well as its significance for future research. We are glad that the reviewer finds the results impressive and the idea of this paper quite interesting.

> *It would be better to involve more examples from different segmentation tasks to illustrate the motivation comprehensively, especially general segmentation like COCO and Pascal VOC. I am curious whether this phenomenon would still occur when the objects are more visible.*

We have also included additional samples for the polyp dataset and general object dataset in the attached PDF (Fig. 1) to illustrate our motivation. It is evident that our method, although specifically designed for segmentation tasks where visual cues are weak or ambiguous (e.g., hidden/camouflaged foreground objects in visually similar backgrounds), can also utilize hallucinations as prior knowledge to identify potential categories in complex images in general tasks, such as those in the VOC dataset (Fig. 1(b) in the attached PDF). However, when visual cues are strong, obvious, or distinct, the benefit of scene understanding through exploring hallucinations becomes less critical: there is less likelihood of confusion between foreground and background, and less need for reasoning about all plausible backgrounds versus foregrounds. In such cases, our method may not provide significant improvements (Fig. 1(c) in the PDF).

> *Some terms are unclear and need explanation. For example, I'm not very clear about the meaning of the term 'prompt' used in the paper. Does it refer to text prompts or spatial prompts like boxes or points? From the method section, it seems that P_B and P_A are text prompts, and the instance-specific prompts refer to bounding boxes? Also, what does query P refer to?
If its meaning differs from P_g, P_B, and P_A, it would be better to clarify this at the beginning of the method section. Since the method is based on prior work [19] and some terms are borrowed directly from [19], it is necessary to include a preliminary section to explain the background and terms. This would make the method section easier to follow.*

The term 'prompt' used in the paper includes both text prompts and spatial prompts (bounding boxes and points).

- As explained in Line 152-155, in each task, the input text prompt consists of two parts: a prompt for bounding box prediction ($P_B$) and a prompt for class prediction ($P_A$):
  1. The prompt for bounding box prediction ($P_B$) instructs: "This image is from the $P_g$ detection task, output the bounding box of the $P_g$."
  2. The prompt for class prediction ($P_A$) states: "Output the name of the $P_g$ and its environment in one word."

  Here, $P_g$ is a task-generic prompt that is consistent across tasks. For example, for the camouflaged animal detection task, $P_g$ is "camouflaged animal"; for the polyp detection task, $P_g$ is "polyp".
- $P_B$ and $P_A$ are fed into the MLLM to infer an instance-specific spatial prompt (a bounding box) and an instance-specific text prompt (a class name). This class name is then mapped into another set of instance-specific spatial point prompts using spatial CLIP. These instance-specific spatial prompts (both point prompts and bounding box prompts) are subsequently fed into SAM to guide the segmentation process.
- We refer to the queries inputted into the MLLM as P.
- The terms borrowed from [19] will be explained in more detail in the final version.

> *Question about the proposed Multi-scale Chain of Thought Prompting. From my understanding, chain-of-thought prompting aims to achieve complex reasoning capabilities through multiple intermediate reasoning steps.
Could the author please explain how this idea is reflected in the proposed method?*

As we explained in Line 143-190, our MCoT uses multiple intermediate reasoning steps on various chains to infer accurate instance-specific prompts from task-generic prompts.

- First, as we mentioned in Line 146-156, each chain includes two intermediate reasoning steps: 1. the MLLM first captions the image, and 2. based on this caption, the MLLM infers the names and backgrounds of task-relevant objects. Without this initial image captioning step, the inferred instance-specific prompts would be inaccurate.
- Second, the input image scales differ across the various chains, leading to variations in the positions and completeness of task-relevant objects in each chain. This difference results in distinct intermediate reasoning paths, ensuring diverse information extraction across chains. This variation allows the MLLM to fully leverage prior knowledge from hallucinations to uncover potential task-relevant information.
- Finally, the visual contrastive reasoning module aggregates effective information from different chains, eliminating the influence of task-irrelevant hallucinations. This multi-chain, multi-step reasoning process ultimately produces accurate instance-specific prompts.

---

Rebuttal Comment 1.1:
Comment: Thank the authors for the response. All my concerns have been well addressed. I'd like to raise my score to weak accept.
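The multi-chain, multi-step reasoning loop described in the rebuttal above (caption each scale-dependent patch, infer a candidate object from the caption, then aggregate across chains) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: `mllm_caption` and `mllm_infer` are hypothetical stubs standing in for real MLLM calls, and majority voting is one plausible aggregation choice.

```python
# Sketch of multi-scale chain-of-thought prompting (MCoT) control flow.
# The two MLLM calls are stubbed so the loop structure itself is runnable.
from collections import Counter

def mllm_caption(patch):
    # Chain step 1: caption the patch (hypothetical stub for an MLLM call).
    return f"a scene containing {patch['content']}"

def mllm_infer(caption, task_prompt):
    # Chain step 2: from the caption, infer the task-relevant object name
    # and a bounding box (stubbed; a real MLLM would use task_prompt here).
    return {"name": caption.split()[-1], "bbox": (0, 0, 10, 10)}

def multiscale_cot(image_patches, task_prompt="camouflaged animal"):
    """Run the two-step reasoning chain on every patch, then aggregate."""
    candidates = []
    for patch in image_patches:
        caption = mllm_caption(patch)            # intermediate step 1
        pred = mllm_infer(caption, task_prompt)  # intermediate step 2
        candidates.append(pred)
    # Aggregate across chains: majority vote yields the class prompt,
    # and all predicted boxes are kept as candidate spatial prompts.
    name = Counter(p["name"] for p in candidates).most_common(1)[0][0]
    boxes = [p["bbox"] for p in candidates]
    return name, boxes

# Three patches at different scales; two chains see the animal, one does not.
patches = [{"content": "crab"}, {"content": "crab"}, {"content": "sand"}]
name, boxes = multiscale_cot(patches)
print(name)  # prints "crab"
```

The point of the sketch is only the control flow: diverse chains produce diverse (possibly hallucinated) candidates, and aggregation filters out chain-specific noise.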
Summary: The paper introduces the Prompt-Mask Cycle generation framework (ProMaC), which innovatively uses hallucinations from Multimodal Large Language Models to refine segmentation prompts and masks. This method contrasts with traditional approaches by leveraging rather than eliminating hallucinations, enhancing task-related accuracy through an iterative prompting and masking process. ProMaC's efficacy is demonstrated across various benchmarks.

Strengths:
- Unlike previous methods that considered hallucinations as negative, the authors view them as prior knowledge from the pre-trained model, first extracting task-relevant information and then validating it. This perspective is very insightful.
- The article is well-structured and clearly written, making it easy to follow.
- The experiments are comprehensive, conducted across various tasks and diverse datasets, demonstrating the method's effectiveness and robustness. Ablation studies are also thoroughly conducted on datasets from different tasks to better illustrate the contribution of each component.
- The idea of combining SAM and inpainting to generate images without task-related objects to eliminate hallucinations is intriguing and could be adapted to more works aiming to reduce hallucinations.

Weaknesses:
- The authors demonstrate the outstanding performance of the proposed ProMaC method across various tasks. Could they provide comparison results between the predictions of the ProMaC method and the ground truth across iterations? This would offer a more visual representation of the method's performance.
- The proposed method's computational and memory requirements aren't clearly discussed; they might be significantly higher due to the iterative nature of the Prompt-Mask Cycle and multi-scale chain-of-thought prompting. This could limit its applicability in resource-constrained environments.
Technical Quality: 4
Clarity: 3
Questions for Authors:
1) Dependence on Initial Prompts: While the iterative refinement process is beneficial, the initial quality of prompts still plays a crucial role. Poor initial prompts might lead to suboptimal starting points, impacting the overall effectiveness of the adaptation process. I would like to see some discussion on this.
2) The authors utilize LLaVA-1.5 as the base MLLM for experiments, achieving results comparable to weakly-supervised and even supervised training in tasks such as camouflaged animal detection, whereas performance in tasks like medical image segmentation is more modest. The authors attribute this to the inherent generalization limitations of LLaVA. Recently developed models (e.g., GPT-4o) have shown better generalization capabilities in medical imaging. Could the authors conduct experiments using these more robust MLLMs on specific sub-tasks to substantiate this perspective?
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The paper encounters a few limitations, such as the potential dependence on the initial quality of prompts and potential resource intensity due to the iterative and computationally demanding nature of the ProMaC method. Additionally, the performance disparity across different tasks highlights the limitations of the underlying MLLM's generalization capabilities. Despite these challenges, the overall contribution of the paper remains significant. The novel approach to leveraging hallucinations as a resource rather than a drawback, coupled with comprehensive experiments and detailed reproducibility, underscores the paper's value.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive evaluation of our work and for acknowledging the value and insight of the proposed method. > *Could they provide comparison results between the predictions of the ProMaC method and the ground truth across iterations? This would offer a more visual representation of the method's performance.* On Page 16, in Fig. 6, we provide visualizations of segmentation results across different tasks as iterations progress. These results demonstrate that our method shows consistent performance improvement with each iteration across various tasks. Additionally, the quantitative experimental results in Tab. 6 further corroborate this finding, illustrating the incremental enhancement in performance as the iterations proceed. > *The proposed method's computational and memory requirements aren't clearly discussed, which might be significantly higher due to the iterative nature of the Prompt-Mask Cycle and multiscale chain of thought prompting. This could limit its applicability in resource-constrained environments.* Traditional methods often require multiple GPUs for extensive training, leading to higher computational and memory demands. In contrast, as we discussed in Line 281, our method performs iterative test-time adaptation on a single 40G A100 GPU without the need for training, which is computationally efficient. Compared to GenSAM, which also uses test-time adaptation, our method achieves better performance with fewer iterations. As shown in Tab. 6(a), ProMaC reaches higher performance by the second iteration than GenSAM does after six iterations, further demonstrating the efficiency of our approach. > *Dependence on Initial Prompts: While the iterative refinement process is beneficial, the initial quality of prompts still plays a crucial role. Poor initial prompts might lead to suboptimal starting points, impacting the overall effectiveness of the adaptation process. 
I would like to see some discussion on this.*

The format of the initial prompt is fixed, with the only modifiable part being the task-generic prompt $P_g$. However, $P_g$ has minimal impact on experimental results. Using the camouflaged animal detection task as an example, we selected "camouflaged animal" as $P_g$ because the task is named "camouflaged animal detection". To explore the impact of different $P_g$ prompts, we used ChatGPT to suggest two synonyms for "camouflaged animal", "hidden animal" and "disguised animal", as possible $P_g$ candidates. We evaluated the effect of different $P_g$ on the results, as shown in Tab. 2 of the attached PDF file. Although different $P_g$ prompts cause slight fluctuations in the outcomes, the overall performance remains stable. This demonstrates that as long as $P_g$ effectively describes the task semantics, our method can achieve relatively consistent segmentation performance.

> *The authors attribute this to the inherent generalization limitations of LLaVA. Recently developed methods (e.g., GPT4o) have shown better generalization capabilities in medical imaging. Could the authors conduct experiments using these more robust MLLMs on specific sub-tasks to substantiate this perspective?*

In Tab. 8 on Page 16, we compare the performance of our method with different baseline methods using various MLLMs across different tasks. It is evident from the comparisons among the different baselines that the generalization capability of an MLLM directly impacts its performance on different tasks, which supports our explanation. Furthermore, we evaluated an MLLM based on LLaVA-Med [1] on the polyp image segmentation task, as shown in Tab. 1 of the attached PDF file, which also supports this claim.

[1] Li C, Wong C, Zhang S, et al. Llava-med: Training a large language-and-vision assistant for biomedicine in one day[J]. Advances in Neural Information Processing Systems, 2024, 36.
---

Rebuttal Comment 1.1:
Comment: The authors addressed all my concerns. I suggest accepting this paper.
Summary: The paper introduces an iterative Prompt-Mask Cycle generation framework (ProMaC) with a prompt generator and a mask generator. The prompt generator uses multi-scale chain-of-thought prompting, initially exploring hallucinations to extract extended contextual knowledge from a test image. These hallucinations are then reduced to formulate precise instance-specific prompts, directing the mask generator to produce masks consistent with task semantics via mask semantic alignment. The generated masks iteratively induce the prompt generator to focus more on task-relevant image areas and reduce irrelevant hallucinations, jointly resulting in better prompts and masks. The results on several challenging datasets validate its effectiveness.

Strengths:
- A task-generic prompt is used to prompt the proposed ProMaC to perform segmentation. The results on several challenging datasets validate its effectiveness.
- The authors use useful techniques, multi-scale chain-of-thought prompting, visual contrastive reasoning, and contrastive sampling generation, to generate accurate instance-specific prompts.

Weaknesses:
- The authors mention that the main motivation of this paper is to utilize hallucinations instead of eliminating them. However, the VCR module is used for "reducing" and "minimizing" them. This confuses me. The authors should clearly explain how this paper utilizes the hallucinations.
- The iterative process and the integration of various techniques (e.g., MLLM, SAM, CLIP, and inpainting) add complexity, which may hinder the ease of implementation and understanding.
- Compared with GenSAM [19], what is the main innovation of the Mask Generator?
- At the first iteration, since masks have not yet been generated, what are the visual markers in this process?
- The core of this task is how to generate accurate visual prompts through an MLLM using a general text prompt.
From my perspective, this task should be evaluated more on general datasets, such as COCO, to demonstrate the method's generalizability rather than on specific domain tasks (e.g., camouflage). Some comparison methods in this paper, such as X-Decoder and SEEM, are generalized methods. Comparing with them, I believe, is unreasonable. I hope the author can discuss this issue with me.
- Minor: The authors should list the generic task prompts used for different tasks.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the weakness part.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper presents the limitation in the appendix. The authors suggest further exploration and research into the generalization potential of foundational MLLM models in future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: We thank the reviewer for the careful reading and insightful comments. The following are our responses to your concerns.

> *The authors should clearly explain how this paper utilizes the hallucinations.*

As we explained in Line 143-150, we utilize hallucination to bootstrap a scene understanding of each test image. By dividing the input image into patches at various scales, the MLLM induces hallucinations based on varying object visibility in diverse patches. These hallucinations use prior knowledge to explore connections between the image data and the associated task, gathering task-relevant information before our reasoning process eliminates irrelevant semantics. This is critical when visual cues are weak or ambiguous for segmentation. We then selectively reduce irrelevant semantics in order to remove their influence while benefiting from the relevant information obtained from pre-trained knowledge (i.e., hallucination). This is in contrast to existing methods, which all aim to remove hallucination blindly, regardless of its usefulness for scene understanding of an image, and it thereby improves segmentation when visual cues are weak or ambiguous.

> *The iterative process and the integration of various techniques (e.g., MLLM, SAM, CLIP, and inpainting) add complexity, which may hinder the ease of implementation and understanding.*

- Traditional methods often require multiple GPUs for extensive training, leading to higher computational and memory demands. In contrast, as we discussed in Line 281, our method performs iterative test-time adaptation on a single 40G A100 GPU and is training-free, meaning all module parameters are fixed from the start. This eliminates the need for training and parameter tuning, thereby reducing implementation complexity.
- We also provide implementation details in the PyTorch Implementation Details section (Line 271-281) of the main text and Line 551-581 of the appendix.
Our code is available in the supplementary materials, ensuring our method is easy to understand and implement.

> *Compared with GenSAM [19], what is the main innovation of the Mask Generator?*

- As we explained in Line 212-213, the key innovation of our mask generator is the proposed Mask Semantic Alignment (MSA) module, which ensures that the generated masks align with task semantics, something GenSAM cannot achieve.
- In GenSAM, the mask generator uses spatial CLIP to convert instance-specific text prompts into positive/negative point prompts and directly feeds them into SAM to obtain masks. However, since SAM is trained without category labels, it lacks label prediction capabilities. This leads to masks that do not meet task requirements when the point prompts fed into SAM are inaccurate.
- Our MSA module addresses this limitation by using similarity-weighted mask aggregation to ensure that the output masks are aligned with the task semantics. The effectiveness of our MSA module is demonstrated in the last two rows of the ablation experiment in Tab. 4 of the original paper.

> *At the first iteration, since masks have not yet been generated, what are the visual markers in this process?*

As we explained in Line 199-201, in the first iteration, Multi-scale Chain of Thought Prompting (MCoT) infers predicted bounding boxes for multiple different patches. We use the union of these bounding boxes as the visual marker for the first iteration to guide the generation of the contrastive sample $X^{'}$.

> *From my perspective, this task should be evaluated more on general datasets, such as COCO, to demonstrate the method's generalizability rather than on specific domain tasks (e.g., camouflage). Some comparison methods in this paper, such as X-Decoder and SEEM, are all generalized methods. Comparing with them, I believe, is unreasonable. I hope the author can discuss this issue with me.*

- In Tab.
3(b), we have conducted experiments on three general datasets: COCO Object, PASCAL VOC, and Pascal Context, showing that our method achieves good performance on general segmentation tasks compared to other unsupervised/weakly supervised methods.
- As for generalized methods like X-Decoder and SEEM, it would be ideal to compare with them fairly on both general and specific tasks. However, these two methods are trained on general segmentation datasets with semantic labels, while our pre-trained model is trained only with image-mask pairs without semantic labels. Therefore, it is unfair to directly compare ProMaC with these methods on general segmentation tasks. Meanwhile, since neither these methods nor our ProMaC has been trained on specific domain datasets, comparing them on these tasks is fair.

> *List the generic task prompts used for different tasks.*

As we explained in Line 273-276, the task-generic prompt $P_g$ for the camouflaged animal detection task is "camouflaged animal". For the two sub-tasks in medical imaging, the task-generic prompts are "polyp" for polyp image segmentation and "skin lesion" for skin lesion segmentation. For the transparent object detection task, the task-generic prompt is "glass".

---

Rebuttal Comment 1.1:
Comment: After reading the author's response, most of my concerns have been addressed. I've decided to raise my current score.
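The similarity-weighted mask aggregation described in the rebuttal above (the core of the MSA module) can be illustrated with a small sketch. This is an assumption-laden reconstruction, not the paper's exact formulation: the embeddings stand in for CLIP features, the values here are synthetic, and softmax weighting is one plausible choice of weighting scheme.

```python
# Minimal sketch of similarity-weighted mask aggregation: candidate masks are
# weighted by how well their region embeddings match the task text embedding.
import numpy as np

def aggregate_masks(masks, region_embs, text_emb, tau=0.5):
    """Combine candidate masks, weighted by cosine similarity to the task text."""
    text_emb = text_emb / np.linalg.norm(text_emb)
    sims = []
    for emb in region_embs:
        emb = emb / np.linalg.norm(emb)
        sims.append(float(emb @ text_emb))       # cosine similarity per mask
    weights = np.exp(sims) / np.sum(np.exp(sims))  # softmax over similarities
    combined = sum(w * m for w, m in zip(weights, masks))
    return (combined > tau).astype(np.uint8)     # binarize the weighted sum

# Two candidate masks: one whose region matches the task text, one that does not.
m_good = np.array([[1, 1], [0, 0]], dtype=float)
m_bad = np.array([[0, 0], [1, 1]], dtype=float)
text = np.array([1.0, 0.0])                      # synthetic "task" embedding
embs = [np.array([0.9, 0.1]), np.array([0.1, 0.9])]
out = aggregate_masks([m_good, m_bad], embs, text)
print(out.tolist())  # prints [[1, 1], [0, 0]]
```

The semantically aligned mask dominates the weighted sum, so the misaligned candidate is suppressed after thresholding, which is the intended effect of aligning output masks with task semantics.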
Rebuttal 1:
Rebuttal: We thank all the reviewers for their uniformly positive evaluations and valuable feedback. We appreciate the reviewers' fruitful suggestions, which helped to improve the overall presentation of our work. We are encouraged by the positive comments from the reviewers on the following: (i) the idea of this paper is quite interesting (Reviewers u8FA, fu4N, E6it, E5Cz); (ii) extensive experiments over diverse segmentation benchmarks demonstrate the effectiveness of the proposed method (Reviewers u8FA, fu4N, E6it); and (iii) the writing quality and the clarity of the presentation of the ideas (Reviewer fu4N). Additionally, as mentioned by reviewer E6it, we emphasize that another major strength of our algorithm is its reduction of manual effort and its good adaptability and versatility.

In response to the reviewers' feedback, we have conducted additional experiments and motivation visualizations shown in the attached PDF. The attached PDF includes:
- Fig. 1: More motivation visualizations on medical image and general image tasks.
- Tab. 1: ProMaC performance on the Polyp Image Segmentation task using LLaVA-Med.
- Tab. 2: Performance comparison of ProMaC with different task-generic prompts $P_g$.

We provide more details and address reviewers' comments in the individual responses to reviewers. We hope that our detailed response and additional experiments will help to increase reviewers' confidence.

Pdf: /pdf/ff8db7cd354c04d0ed4ebac8f7aafdd417dd97f0.pdf
NeurIPS_2024_submissions_huggingface
2024
SpikeReveal: Unlocking Temporal Sequences from Real Blurry Inputs with Spike Streams
Accept (spotlight)
Summary: The SpikeReveal paper uses a blurry RGB image and leverages pulse data captured by a spike camera of the corresponding scene to guide the image deblurring task. In terms of network design, the authors cascade a blind-spot network for denoising, a super-resolution network, and a deblurring network, employing a self-supervised approach to accomplish the deblurring task.

Strengths: The SpikeReveal paper uses a blurry RGB image and leverages pulse data captured by a spike camera of the corresponding scene to guide the image deblurring task. In terms of network design, the authors cascade a blind-spot network for denoising, a super-resolution network, and a deblurring network, employing a self-supervised approach to accomplish the deblurring task.

Weaknesses: The paper has several areas that could be improved to better achieve its stated goals. Addressing these issues would improve the robustness and validity of the comparative analysis.

Technical Quality: 3
Clarity: 3
Questions for Authors:
[Definition of t_s] There is an issue with the definition of t_s in line 109. When the first spike occurs, there is no previous spike to reference. Therefore, this definition needs to be revised.
[Missing Ablation Study] Table 2: The module cascading ablation on GOPRO should include "with BSN, without SR, with LDN" and "without BSN, with SR, with LDN". This would allow us to determine which component, BSN, SR, or LDN, contributes the most to the final performance.
[Comparison with SpkDeblurNet] In Table 1 and Figure 4, has SpkDeblurNet been retrained using the same simulation data as the authors' method? Why do the restoration results of SpkDeblurNet in Figure 4 appear noticeably darker? Additionally, why does SpkDeblurNet perform better than the authors' method at $V_{th}=1$?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors could consider collecting paired datasets to better address the limitations of simulating pulse camera data.
While the authors mention using real data, it serves only as input to the network without providing a genuine ground truth for comparison. Addressing this by capturing paired real-world data could significantly enhance the validity and applicability of their findings.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: We are grateful for your detailed feedback and suggestions, which have helped us identify key areas where our manuscript can be improved.

***1. [Definition of t_s] There is an issue with the definition of t_s in line 109. When the first spike occurs, there is no previous spike to reference. Therefore, this definition needs to be revised.***

Thank you for pointing out this issue. We will address the problem you mentioned in the final version.

***2. [Missing Ablation Study] Table 2: The module cascading ablation on GOPRO should include with BSN without SR with LDN and without BSN with SR with LDN. This would allow us to determine which component, BSN, SR, or LDN, contributes the most to the final performance.***

We have supplemented the ablation study based on your request, including the configurations for BSN + LDN and SR + LDN. The overall results of the ablation study are as follows:

**Table R5: Ablation study on the proposed different modules.**

| ID | BSN | SR | LDN | PSNR | SSIM |
|:---:|:---:|:---:|:---:|:---:|:---:|
| I-1 | ✗ | ✗ | ✗ | 23.012 | 0.486 |
| I-2 | ✓ | ✗ | ✗ | 24.634 | 0.661 |
| I-3 | ✓ | ✓ | ✗ | 26.144 | 0.708 |
| I-4 | ✗ | ✓ | ✓ | 26.172 | 0.633 |
| I-5 | ✓ | ✗ | ✓ | 26.662 | 0.745 |
| I-6 | ✓ | ✓ | ✓ | **27.928** | **0.786** |

The table compares the performance of various configurations involving BSN, SR, and LDN in terms of PSNR and SSIM. The baseline configuration (I-1), with no methods applied, shows the lowest image quality metrics. Adding BSN alone (I-2) significantly enhances both PSNR and SSIM, demonstrating effective noise reduction. When SR is added (I-3), further improvement is observed, reflecting enhanced detail recovery.
The BSN + LDN setup (I-5) provides better results than SR + LDN (I-4), confirming BSN's key role in enhancing quality: without BSN's pre-processing, the SR network amplifies the image noise, degrading the image quality and reconstruction performance. Finally, the full combination of BSN, SR, and LDN (I-6) delivers the highest PSNR and SSIM scores, showcasing the synergistic effect of integrating all three methods for optimal image deblurring performance.

***3. [Comparison with SpkDeblurNet] In Table 1 and Figure 4, has SpkDeblurNet been retrained using the same simulation data as the authors' method? Why does SpkDeblurNet perform better than the authors' method at $V_{th}=1$?***

Fig. 4 presents the comparison of different methods on the sequence reconstruction task in real-world scenarios. In real scenes, the absence of ground truth sharp frames means that SpkDeblurNet cannot be retrained, leading to noise-related image degradation. In contrast, the S-SDM method effectively overcomes the domain gap and restores sharp texture details, benefiting from its self-supervised mechanism. Both S-SDM and SpkDeblurNet are trained on the same simulation dataset, while S-SDM is further fine-tuned in real-world scenarios.

In Tab. 1, to mimic the domain gap between synthetic and real datasets and quantitatively represent this gap using PSNR and SSIM metrics, we designed our experiments such that all methods were trained in a scenario where $V_{th} = 1$ and evaluated across scenarios with different threshold values. The reason SpkDeblurNet performs better under the $V_{th} = 1$ condition is that it is constrained by a strong supervision signal, which forces the network to learn the mapping from the blurry input and the spike stream to the sharp image. Under this specific setting, the strength of our self-supervised framework, S-SDM, in overcoming the domain gap is not fully realized.

***4.
Why do the restoration results of SpkDeblurNet in Figure 4 appear noticeably darker?*** Please refer to the response in **To all reviewers**. ***5. The authors could consider collecting paired datasets to better address the limitations of simulating pulse camera data. While the authors mention using real data, it serves only as input to the network without providing a genuine ground truth for comparison. Addressing this by capturing paired real-world data could significantly enhance the validity and applicability of their findings.*** Simultaneously capturing spike streams, blurry image inputs, and clear images requires precise spatiotemporal calibration of three different cameras. This process is time-consuming and labor-intensive, with the potential for calibration inaccuracies. Thank you for your feedback. We will consider further improving our camera system in future research to develop a dataset that includes RGB inputs, spike streams, and ground truth clear images.
Summary: The work focuses on improving image sharpness from blurry inputs using spike cameras with high-motion capture rates. Addressing limitations of supervised learning in real-world scenarios, the authors introduce a self-supervised framework for spike-guided motion deblurring. Validation through extensive experiments on both real-world and synthetic datasets confirms its superior performance and generalization ability. Strengths: This paper introduces a pioneering approach to spike-guided motion deblurring, addressing the challenge of recovering sharp images from blurry inputs captured by spike cameras. It demonstrates high-quality research with rigorous theoretical foundations and thorough experimental validations on synthetic and real-world datasets. Weaknesses: 1. I'm curious about how the order of applying BSN (Blur to Sharp Network) and EDSR (Enhanced Deep Super-Resolution) influences model performance. Specifically, I wonder if this sequence could introduce additional artifacts during image restoration. Technical Quality: 2 Clarity: 3 Questions for Authors: see the weakness Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for dedicating your time to provide constructive criticism and recommendations for our article. ***I'm curious about how the order of applying BSN (Blur to Sharp Network) and EDSR (Enhanced Deep Super-Resolution) influences model performance. Specifically, I wonder if this sequence could introduce additional artifacts during image restoration.*** Before answering this question, we need to clarify why we need the BSN and EDSR networks. As stated in our paper, the SDM model has two main issues: 1. Short-exposure spike frames contain **significant noise** due to the limited information captured during this short period. 2. The two modalities have **inconsistent resolutions**, making it challenging to apply the SDM model directly to real-world scenarios. To address these issues, the self-supervised denoising network BSN and the super-resolution network EDSR are employed. The EDSR network is retrained based on the texture similarity between the long-exposure spike frame and the blurry input image. In the following, we first explain the differences between the two orders, EDSR-BSN and BSN-EDSR, from the following perspectives: ***A. Theoretical Analysis*** **EDSR-BSN:** The advantage of this order is that it can recover as much information as possible from the low-resolution details, making it suitable for spike reconstructions that are rich in detail and have less noise. However, a drawback is that the super-resolution process may amplify the original noise in the image, making it difficult for subsequent denoising to completely remove the noise. **BSN-EDSR:** The advantage of this order is that the denoised image provides a sharper baseline for super-resolution, helping the algorithm more accurately reconstruct image details and avoid amplifying noise. 
The drawback is that if the denoising is too aggressive, it may remove important details from the image, making it challenging for super-resolution to recover these details, especially in images where details are relatively limited. ***B. Quantitative Experiments*** In the GOPRO dataset, both scenarios mentioned above are present. By evaluating both orders on the entire dataset, we find that the denoising-first, super-resolution-second approach (BSN-EDSR) yields better results, so we adopt this order as our processing baseline. **Table.R4: Ablation study on the order of the BSN and EDSR networks.** | Method | PSNR | SSIM | |:---------:|:--------:|:-------:| | EDSR-BSN | 25.914 | 0.692 | | BSN-EDSR | **26.144** | **0.708** | ***C. Visual Comparison*** To understand this intuitively, we conduct a visual comparison on the GOPRO dataset, as shown in Fig. R3. In the upper example (street scene) of Fig. R3, it can be seen that the EDSR-BSN method effectively recovers some detailed features of the floor compared to the BSN-EDSR method. The BSN-EDSR method cannot accurately recover these details because the BSN network removes them first, leaving little information for the EDSR network to super-resolve. In the lower example, regarding some background information, the EDSR-BSN method produces noticeable noise and image artifacts. (The magnified image regions use contrast enhancement to highlight detailed information.) This occurs because applying the EDSR to the reconstructed spike frame first enlarges the noise, resulting in a noisy final image. The subsequent BSN network cannot mitigate this due to the limited neighboring information available during the blind convolution process.
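The noise-amplification argument in part A can be illustrated with a toy 1-D experiment (purely illustrative, not the actual pipeline: a width-3 moving average stands in for the BSN denoiser, and nearest-neighbor 2x upsampling stands in for the EDSR network):

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 2000))
noisy = clean + rng.normal(0.0, 0.5, clean.shape)

def denoise(x):
    # Width-3 moving average: a toy stand-in for the BSN denoiser.
    return np.convolve(x, np.ones(3) / 3, mode="same")

def upsample(x):
    # Nearest-neighbor 2x upsampling: a toy stand-in for the SR network.
    return np.repeat(x, 2)

target = upsample(clean)
mse_bsn_first = np.mean((upsample(denoise(noisy)) - target) ** 2)  # BSN-EDSR order
mse_sr_first = np.mean((denoise(upsample(noisy)) - target) ** 2)   # EDSR-BSN order
print(mse_bsn_first, mse_sr_first)
```

Upsampling first duplicates each noise sample, so the same-width smoother averages correlated values and removes less noise (residual variance roughly $5\sigma^2/9$ versus $\sigma^2/3$), mirroring why the denoise-first order wins in Table R4.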
Summary: This work proposes a spike-guided self-supervised image deblurring algorithm that combines the high spatial resolution of RGB cameras with the high temporal resolution of spike cameras to obtain sharp RGB images in real-world scenarios. The self-supervised network addresses performance degradation issues found in existing supervised spike deblurring algorithms. The spike-guided module considers noise and spatial resolution alignment between the two cameras. Both subjective and quantitative experimental results show that the proposed algorithm achieves excellent generalization performance in both real and simulated scenes, surpassing previous spike-guided deblurring algorithms. Strengths: 1. The proposed self-supervised spike-guided RGB deblurring algorithm effectively addresses the synthetic-real domain gap performance degradation of previous algorithms. 2. The authors analyze the relationships between spike data, blurry RGB, and sharp RGB images, providing a theoretical foundation for the Spike-Guided Deblurring Model (SDM). 3. The paper's clear writing and figures make it easy to follow, with comprehensive experiments demonstrating the RGB-Spike binocular system's spatiotemporal alignment and generalization in real scenarios. Weaknesses: 1. While many existing image-based deblurring algorithms perform adequately, the introduction of spike cameras presents challenges such as aligning the two modalities. The authors have not sufficiently compared or explained the advantages of the spike-guided approach. 2. The robustness of the model to noise lacks systematic analysis and discussion. Is this primarily influenced by the BSN Loss described in Equation 9? Technical Quality: 4 Clarity: 4 Questions for Authors: This method primarily targets optical flow motion estimation. 
For more complex motion: in addition to addressing the performance degradation issues of some supervised models in real-world scenarios, what other advantages does the proposed unsupervised model offer? Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The authors have stated the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and for pointing out potential issues and improvements in our paper. ***1. While many existing image-based deblurring algorithms perform adequately, the introduction of spike cameras presents challenges such as aligning the two modalities. The authors have not sufficiently compared or explained the advantages of the spike-guided approach.*** We aim to address this question from the following perspectives: **A. Theoretical Analysis**. Image-based methods for addressing motion blur often struggle to capture motion information accurately in real-world scenes because traditional cameras cannot precisely record motion details during the exposure. This limitation can lead to incorrect motion trajectories in real-world scenarios. For example, when using a conventional camera to capture a square object moving from left to right, the resulting blurry image does not inherently indicate whether the object is moving from left to right or right to left without prior information. Additionally, reconstructing the texture details of an object at any moment during the exposure from a single blurred image is difficult because it is an ill-posed problem with limited motion representation. However, spike streams, which contain rich motion and texture information, can effectively alleviate issues related to uncertain motion trajectories and missing texture details. **B. Experimental Results.** On the synthetic dataset GOPRO, image-based methods like BiT have achieved promising results. This is because the motion patterns in the GOPRO dataset are relatively consistent, and the methods for synthesizing motion blur are also uniform. However, as seen in Fig. 3, 4, 11, and Tab. 
4, in real-world scenarios where the motion patterns differ significantly from those in the GOPRO dataset, the BiT algorithm encounters severe image degradation and inaccuracies in motion trajectory recovery, which is evident in both quantitative performance results and qualitative visual outcomes. **C. Ablation Study.** We conduct a simple ablation study using SpkDeblurNet on the GOPRO dataset to verify the contribution of spike streams in assisting deblurring. The comparison results are shown in Tab. R3. It can be seen that incorporating the spike stream improves PSNR by about 5 dB for the single-frame image deblurring task, and the benefit is even greater in the sequence reconstruction task, since image-based methods suffer from severe motion ambiguity. **Table.R3: Ablation study on the effectiveness of the spike stream.** | Method | PSNR | SSIM | |:---------------:|:-------:|:-------:| | Image | 32.45 | 0.895 | | Image + Spike | **37.42** | **0.968** | ***2. The robustness of the model to noise lacks systematic analysis and discussion. Is this primarily influenced by the BSN Loss described in Equation 9?*** Please refer to the response in **To all reviewers**. ***3. For more complex motion: in addition to addressing the performance degradation issues of some supervised models in real-world scenarios, what other advantages does the proposed unsupervised model offer?*** The core contribution of this paper is the introduction of a spike-based deblurring physical model, SDM, and a self-supervised spike-based deblurring model, S-SDM. The S-SDM model primarily addresses the noise and resolution mismatch issues present in the SDM model. In addition to overcoming the domain gap problem, the proposed unsupervised model has the following advantages: **A. Interpretability.** Our SDM model theoretically constructs the relationship between blurred images, spike streams, and sharp images. 
Compared to previous fully end-to-end spike deblurring networks like SpkDeblurNet, our method has a stronger theoretical foundation. **B. Implementability.** The SDM is a model-based motion deblurring method, which can be directly deployed in real-time systems and is relatively easy to implement. **C. Deployability.** Our S-SDM features a core motion deblurring network, LDN, with a very small parameter size (0.23M). Compared to the parameters of other supervised learning networks like SpkDeblurNet, the LDN network's parameters are only 2% of SpkDeblurNet's, making it more suitable for deployment in real-world motion deblurring scenarios. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their response and the insights on the model's noise robustness, as well as its advantages over other image-based deblurring algorithms. The additional explanations provided by the authors regarding the S-SDM method from the perspectives of Interpretability, Implementability, and Deployability were also very clear. I have no further questions. I raised my score to 7. --- Reply to Comment 1.1.1: Title: Thanks for your valuable time and increasing the score. Comment: Thank you sincerely for your insightful comments, valuable suggestions, kind appreciation of our work and increasing the score. Thanks a lot for your valuable time! Your time and input mean a lot to us.
Summary: This paper combines the RGB camera and the spike camera for image deblurring. The key contributions consist of a self-supervised learning framework for deblurring and a real-world dataset, RSB. Strengths: This paper presents a novel self-supervised framework for image deblurring with a spike camera. The network design is interesting. The paper is well-written. Weaknesses: 1. The originality is marginal. The whole framework is similar to [36]. Please clarify more on the difference between this paper and [36]. 2. The proposed LDN is also similar to the DCN in [36]. It would be better to see in the ablation whether LDN is better than DCN. 3. It can be observed that smaller V_th leads to better performance. What if V_th == 0.5? 4. The size and diversity of the RSB dataset is limited. It seems the RSB dataset only contains indoor scenes. Outdoor scenes with various objects are desired. Technical Quality: 3 Clarity: 4 Questions for Authors: See the weaknesses ------------------------------------ The response addressed my concerns. I raised my score to 6. Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: See the weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the time and effort you have taken to review our manuscript. ***1. The originality is marginal. The whole framework is similar to [36]. Please clarify more on the difference between this paper and [36].*** Paper \[36\] is an ICCV23 publication on a self-supervised event-based motion deblurring algorithm named GEM. Besides both being self-supervised algorithms, the content and framework designs of S-SDM and GEM are completely different. We explain the differences between the two in the following aspects: **A. Camera Principles**: GEM is designed for event cameras, while our S-SDM is applied to spike cameras. These two neuromorphic cameras leverage different sampling techniques: differential sampling for event cameras and integral sampling for spike cameras. This difference results in substantial variation in the frameworks of the two self-supervised motion deblurring methods. **B. Number of Blurry Frames**: GEM relies on two blurry frames to provide mutual information for constraint when designing the self-supervised loss function, as shown in Fig. R2. However, S-SDM only requires a single blurry frame to complete the self-supervised task, making it more suitable for deployment in online real-time motion deblurring systems compared to GEM. **C. Self-Supervised Pipeline**: The core of GEM’s self-supervised algorithm is to explore the relationship between two different blurry images, Blur1 and Blur2, with the aid of the event stream to achieve self-supervised deblurring. In contrast, the core of S-SDM’s self-supervised approach is to enhance the quality of pseudo-labels using a teacher network consisting of the SDM, BSN, and EDSR, while exploring the relationship between the latent sharp image and the input blurry image to achieve self-supervision, as shown in Fig. R2. **D. Loss Functions**: GEM’s core loss function is the Blur2Blur loss, which trains the network to learn Blur2 from Blur1 with the aid of the event stream. 
S-SDM’s core loss function is the blur consistency reblur loss, which ensures that the deblurred images obtained at different times during the exposure time, when recombined, are consistent with the original blurry image, as shown in Fig. R2. **E. Teacher Model**: GEM uses the SAN network as the teacher model for generalizing it to scenes at different spatiotemporal scales, while S-SDM’s teacher model consists of SDM, BSN, and EDSR, aiming to provide high-quality pseudo-labels for the LDN network. **F. Theoretical Analysis**: GEM analyzes the relationship between varying degrees of blurriness and event streams. In contrast, S-SDM studies single-frame blurry images, spike streams, and sharp images. Due to the different sampling principles of spike cameras and event cameras, the theoretical analyses of GEM and S-SDM are entirely different. ***2. The proposed LDN is also similar to the DCN in [36]. It would be better to see in the ablation whether LDN is better than DCN.*** We should first clarify that the network framework in the GEM paper is the Scale-aware Network (SAN) rather than DCN. DCN is merely a feature extraction module used to enhance feature extraction in the image domain. Regarding the issue mentioned, we want to clarify that the core contribution of this paper lies in designing a self-supervised processing pipeline for deblurring with spike cameras. **The network design is not the core contribution. As stated in the paper, we use the encoder-fusion-decoder multimodal fusion framework in designing the network in line with previous studies.** This framework has been widely used in many previous studies. In the SAN network, the core design is not the encoder-fusion-decoder framework but rather its unique MSFF block. **Why didn’t we explore further in network architecture?** The reason is that for the self-supervised learning framework designed in this paper, improving network performance depends mainly on how to enhance the quality of pseudo-labels. 
In the supervised learning framework SpkDeblurNet, since the supervision signal is direct and explicit, exploring the network architecture to better fit the relationships between blurry images, spike inputs, and sharp images can effectively improve performance. However, for a self-supervised pipeline, a lightweight and simple network is sufficient. Therefore, the network in this paper consists only of convolutional layers, ResBlocks, and simple modules like CBAM. Finally, to prove our statement, we compare our designed LDN network with the SAN network designed in GEM in terms of PSNR, SSIM, Params, and Flops on the single-frame motion deblurring task: **Table.R2: Comparison between the SAN network in GEM and our designed LDN.** | Methods | PSNR | SSIM | Params (M) | Flops (G) | |:---------:|:-------:|:-------:|:------------:|:-------------:| | SAN [36] | 27.283| 0.773 | 2.36 | 107.84| | LDN (Ours) | **27.928**| **0.786**| **0.234** | **33.60** | As shown in Tab. R2, for the self-supervised deblurring task, our LDN network achieves better performance in terms of PSNR and SSIM while maintaining smaller model parameters and computational requirements. This demonstrates that for the self-supervised spike deblurring task presented in this paper, our simple LDN network is sufficient while remaining lightweight in parameters and computation. ***3. It can be observed that smaller V_th leads to better performance. What if V_th == 0.5?*** Please refer to the response in **To all reviewers**. ***4. The size and diversity of the RSB dataset is limited. It seems the RSB dataset only contains indoor scenes. Outdoor scenes are desired.*** Our RSB dataset comprises 10 video sequences, including 9 indoor scenes and 1 outdoor scene, as shown in Fig. R4. Due to the time-consuming nature of spatiotemporal calibration across the two modalities, we plan to collect more scenes and increase the diversity, which will be included in the final version. 
--- Rebuttal Comment 1.1: Comment: Thanks for the response. I raised my score to 6. --- Reply to Comment 1.1.1: Title: We greatly appreciate your thoughtful feedback. Comment: We're especially grateful for the increased score and the time you've dedicated to reviewing our paper. Your insights and support are truly meaningful to us. Thank you!
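To make the blur-consistency reblur loss described in point D above concrete, here is a minimal numpy sketch (function and variable names are ours, purely illustrative), assuming the blurry image is modeled as the temporal mean of the latent sharp frames over the exposure:

```python
import numpy as np

def reblur_consistency_loss(deblurred_frames, blurry_image):
    # Re-average the deblurred frames predicted at different times within
    # the exposure; consistency requires this "reblurred" image to match
    # the original blurry input.
    reblurred = deblurred_frames.mean(axis=0)
    return np.mean((reblurred - blurry_image) ** 2)

rng = np.random.default_rng(0)
latent = rng.random((8, 16, 16))   # 8 latent sharp frames (toy data)
blurry = latent.mean(axis=0)       # blur = temporal mean over the exposure

# Perfect recovery of the latent frames gives zero loss;
# any deviation from them is penalized.
print(reblur_consistency_loss(latent, blurry))  # → 0.0
```

Note that this loss needs only the single blurry input as supervision, which is the property that distinguishes S-SDM from the two-frame Blur2Blur constraint in GEM.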
Rebuttal 1: Rebuttal: We thank all reviewers for their constructive comments and positive feedback. We are pleased that our paper has been recognized as "well-written" [G1PY,VbQb] with a "pioneering approach" [RXBh] and our network design found "interesting" [G1PY]. Our proposed S-SDM is acknowledged for effectively addressing the synthetic-real domain gap [VbQb], providing a strong theoretical foundation [VbQb], and demonstrating excellent experimental results [VbQb, RXBh]. We are also grateful for the recognition of our method to synergistically enhance the deblurring results in a self-supervised manner [1hhy]. The reviewers' insights are invaluable, and we will integrate all suggestions to refine our work further. In this general response, we would like to address the crucial concerns regarding '***How $V_{th}$ influences the experiments.***' > **[Reviewer G1PY]**: It can be observed that smaller V_th leads to better performance. What if V_th == 0.5? The threshold $V_{th}$ has a multifaceted impact in this task. When the spike threshold is high, the spike firing rate of the spike camera decreases, but the dark current and other noise in the spike camera simulation remain unchanged. This results in a lower signal-to-noise ratio of the input information, leading to poorer deblurring performance. Conversely, when the spike threshold is too low, the spike camera's synchronous sampling mechanism causes issues. A lower threshold leads to multiple spikes being fired between intervals, but only one is read, resulting in less spike stream information during the readout time. Considering this problem from an extreme perspective: if the spike firing threshold is infinitely high, there will be no spike readout, only dark current noise. If the spike firing threshold is infinitely low, every sampling cycle will produce frames filled with ones, making it impossible to extract meaningful information. 
We further evaluated the deblurring performance of the S-SDM model in a sequence recovery task under additional spike threshold values of 0.25 and 0.5. The experimental results are as follows: **Table.R1: Comparison of our method under different $V_{th}$.** | $V_{th}$| 0.25 | 0.5 | 1 | 2 | 4 | |:--------:|:--------:|:--------:|:--------:|:--------:|:--------:| | PSNR | 24.657 | 25.886 | 26.893 | 26.367 | 25.433 | | SSIM | 0.633 | 0.724 | 0.757 | 0.740 | 0.699 | From the above table, it can be seen that $V_{th} = 1$ achieves the best performance, which aligns well with our previous analysis. Additionally, we provide a visual comparison of the SDM model at a threshold of $V_{th} = 0.5$ in Fig. R1. It can be observed that even though the short-exposure spike frame suffers from texture damage due to truncation errors in most regions, the SDM model effectively retains the correct texture information by utilizing the blurry image $B$. The SDM model removes motion blur from the blurry input while significantly mitigating the truncation errors caused by the low threshold in the spike frames, which further demonstrates the robustness of our method. > **[Reviewer VbQb]**: The robustness of the model to noise lacks systematic analysis and discussion. Is this primarily influenced by the BSN Loss described in Equation 9? The proposed method effectively alleviates degradation issues such as noise, primarily due to the design of its self-supervised framework. This design grants it strong generalization capabilities in both real-world and synthetic datasets, as demonstrated in Fig. 1 of this paper. We agree with your view that the robustness of our method against noise is largely attributed to the BSN loss function. Next, we will explain the role of the BSN loss from the following perspectives: **A. Working Mechanism.** The robustness of the BSN model against noise is primarily due to its use of a blind-spot convolution strategy. 
This approach employs self-supervision to eliminate noise in the short-exposure spike imaging frames. As a result, the BSN model can effectively remove noise of any type through retraining, offering strong generalization capabilities. **B. Visual Comparison:** From the ablation studies in Figs. 5 and 17, we can see that I-2 effectively eliminates the substantial background noise present in I-1, such as the noise around license plates and letter signs. This demonstrates the effectiveness of the BSN self-supervised loss. This loss function is also capable of effectively removing dark current noise present in real-world scenarios, thus ensuring generalization to real scenes. **C. Quantitative Analysis:** According to our latest ablation experiment results in Tab. R5, the BSN loss function effectively removes background noise both when used alone and when combined with SR and LDN. It improves PSNR by 1.6 dB and 1.8 dB, respectively, in these configurations. > **[Reviewer 1hhy]** Why do the restoration results of SpkDeblurNet in Figure 4 appear noticeably darker? The reason SpkDeblurNet appears significantly darker in Fig. 4 is due to its nature as a supervised learning method. It has only learned to extract deblurring texture details from the spike streams with fixed firing rates in the synthetic GOPRO dataset ($V_{th} = 1$ under this condition). When applied to real-world scenarios, if the spike firing rate in the real scene differs from that of the synthetic dataset, SpkDeblurNet misestimates the spike density. This results in darker outputs when the spike density is lower than in the synthetic dataset, and overexposure in scenarios where the spike density is higher, as seen in Fig. 12. Pdf: /pdf/c4c2192ef41d6dbab585aaea5fb0e87bd6542e57.pdf
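The $V_{th}$ trade-off discussed in our reply to Reviewer G1PY above can be sketched with a toy integrate-and-fire pixel (illustrative only; intensities and thresholds are not calibrated to a real spike camera): a high threshold fires rarely and is dominated by noise, while a very low threshold saturates the synchronous readout at one spike per cycle, destroying information.

```python
import numpy as np

def spike_stream(intensity, v_th, n_steps):
    # Toy integrate-and-fire pixel: the accumulator integrates the
    # photocurrent and fires whenever it crosses v_th. With synchronous
    # readout, at most one spike per cycle is recorded, so extra
    # threshold crossings within a cycle are truncated.
    acc, out = 0.0, []
    for _ in range(n_steps):
        acc += intensity
        fired = 0
        while acc >= v_th:         # the threshold may be crossed several times
            acc -= v_th
            fired += 1
        out.append(min(fired, 1))  # synchronous readout truncation
    return out

for v_th in (4.0, 1.0, 0.25):
    rate = np.mean(spike_stream(0.5, v_th, 100))
    print(v_th, rate)  # firing rate rises as v_th drops, then saturates at 1.0
```

At $V_{th}=0.25$ every cycle fires, so the stream degenerates to all ones, matching the "frames filled with ones" limit described above.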
NeurIPS_2024_submissions_huggingface
2024
Evaluating the design space of diffusion-based generative models
Accept (poster)
Summary: This paper analyzes both the training and sampling errors of diffusion models. The analysis sheds light on two practically relevant design choices, which are the noise level weighting (during training) and discretization (during sampling). Strengths: - Figure 1 presents a clear qualitative picture of how to choose noise level schedules for training and sampling. In particular, this is the first theoretical work (to the best of my knowledge) to show how the sampling discretization schedule should be adapted depending on score errors arising because of imperfect training. - The sampling theoretical analysis is thorough and is very up-to-date with the sampling convergence literature. It extends previous results from the variance-preserving to the variance-exploding case, but the extension seems straightforward. - I am however not knowledgeable enough about the optimization literature to speak of the novelty of the training error results. Weaknesses: All the theoretical results are very hard to even parse, let alone extract intuition or guidance for practice. The authors argue that their results provide justification for the design choices of Karras et al. [31], but it seems to me that they are shoehorned to fit the chosen narrative. In other words, it is not clear to me that the exact same error bounds could not be used to justify any other schedule used in practice. I find Section 4.1 in particular to be very handwavy. I appreciate the attempt to connect theoretical results to practical choices, but I think the connection is too tenuous here. Some claims made in the text seem unsubstantiated to me: - The authors claim that the NTK can only be analyzed in very restricted settings (lines 103-104). I don't see why this is true, as the NTK applies to all architectures in the lazy regime. - I don't see how Theorem 1 implies that GD has an exponential convergence (lines 223-224) as product factors depend on $s$. 
- I don't see how Theorem 2 implies that the sampling error is almost linear in $d$ as $\gamma_k$ and $\sigma_t$ may depend on $d$. Minor: - line 140 and eq (4): $C$ should be $C_t$ as it depends on $t$ - line 150: aspectS - line 154: by A deep ReLU - line 176: introduce $W$ before line 174 - line 364: we maintain a fixed ~the~ total weighting Technical Quality: 3 Clarity: 1 Questions for Authors: - I don't understand the sentence in lines 296-299, which mentions empirical and population losses even though the paper does not tackle the generalization error. Also, I don't see how the central limit theorem applies here, as individual terms in the empirical average over the training set are not independent since the network parameters themselves depend on the training set. Could the authors please clarify? - Do the authors believe that their work can inform practical decisions? If yes, what new predictions can be made using their theoretical results? If no (which does not necessarily imply the results are not significant), I think the paper should be more honest about this. Confidence: 3 Soundness: 3 Presentation: 1 Contribution: 2 Limitations: Contrary to what is stated in the checklist, limitations are not discussed in the paper. There is no discussion section. - A first limitation is that only the approximation and optimization errors are studied, and generalization is not tackled. This means that the score network learns the empirical distribution of the training set, and thus reproduces training data points during sampling. This is only briefly mentioned in passing in the middle of the text but should be stated in the introduction (and potentially discussion). - Some assumptions are questionable. First, the first and last layer are not trained and left to their random initialization. 
Second, and more importantly, the chosen asymptotic setting corresponds to extremely wide networks trained on very few data points (probably so that they can memorize their training set). This thus corresponds to a rather unrealistic setting. All in all, I am divided about this paper. On the one hand, it provides a unified analysis of both training and sampling errors, which opens up understanding how to adapt the discretization schedule to score errors. On the other hand, independently of the mathematical correctness of the results (which were not checked), they seem very artificial (due to the restricted setting of the analysis in section 3, and the very qualitative arguments in section 4), and not clearly presented at all. It is thus not clear to me how they can be useful to the theory and practice communities. I recommend rejection, but I am willing to change my decision if the authors convince me that my assessment is incorrect. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly thank the reviewer for all the helpful comments, and we apologize for any confusion created by our initial submission. We hope the itemized responses below do a better job of making things clear, and we will further pursue this goal in a revision. > All the theoretical results are very hard to even parse, let alone extract intuition or guidance for practice. The authors argue that their results provide justification for the design choices of Karras et al. [31], but it seems to me that they are shoehorned to fit the chosen narrative. In other words, it is not clear to me that the exact same error bounds could not be used to justify any other schedule used in practice. I find Section 4.1 in particular to be very handwavy. I appreciate the attempt to connect theoretical results to practical choices, but I think the connection is too tenuous here. We appreciate the opportunity to explain. Our biggest contribution is the analysis of the score training process. This is in contrast to the sampling process, which, as pointed out, is by now already much better understood thanks to an entire community. Our second-biggest contribution, in our opinion, is the general error bound obtained by fusing both training and sampling, and it applies to all design choices. Due to the complexity of deep learning, where data, architecture, nonconvex optimization, and schedules are all mingled together, the theoretical result is indeed complicated, and it may have to be, if an informative rate is desired. But as the reviewer pointed out, the theoretical result may be difficult to parse because it is fully *quantitative* (this could also be due to our presentation, which we have now improved significantly). Understanding that, we thus also tried to illustrate some of its *qualitative* implications by connecting to practice, a difficulty that we appreciate the reviewer acknowledged. 
To that end, we simply would like to understand one thing, namely whether/why the design choice of Karras et al. [31] is good, largely due to its impressive performance and hence popularity. And our theory indeed helped us obtain answers, which can be considered our third contribution. Regarding "it is not clear to me that the exact same error bounds could not be used to justify any other schedule used in practice": our bounds can indeed be used to justify other schedules as well, but we were not able to pack in analyses of more schedules due to the page limit. > The authors claim that the NTK can only be analyzed in very restricted settings (lines 103-104). I don't see why this is true, as the NTK applies to all architectures in the lazy regime. We apologize for the confusion. We feel the inconsistent understandings between us and the expert reviewer might be due to different definitions of NTK. The reviewer's interpretation of "NTK" seems to be in a broad sense: any model trained with an infinitesimal learning rate in the lazy regime can be said to be in the NTK training regime, which is definitely true. In contrast, "NTK" in our original version is meant in a narrow sense following the initial NeurIPS'18 NTK paper, meaning that the proof techniques mainly leverage the Gram matrix of the NTK kernel. Under this definition, the current literature has only proved convergence in restricted settings (scalar output, or vector output of a two-layer network with only one layer trained) due to possible difficulties lying in the structure of the Gram matrix. Methodologies other than the kernel technique, as discussed and used in this paper, can also reproduce many (narrow-sense) NTK results, but we would like to be precise in citing this literature and in distinguishing lazy training from "NTK". More clarifications have already been made in a revised version, and we appreciate that the reviewer pointed this out.
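To make the narrow-sense definition concrete, here is a minimal sketch (our own illustration, not code from the paper) of the empirical NTK Gram matrix for a two-layer scalar-output ReLU network in which only the hidden layer is trained — the kind of restricted setting referred to above:

```python
import numpy as np

def ntk_gram(X, m=4096, seed=0):
    """Empirical NTK Gram matrix of f(x) = (1/sqrt(m)) * a^T relu(W x)
    when only the hidden weights W are trained and a is fixed at +/-1."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(m, X.shape[1]))     # hidden weights ~ N(0, 1)
    a = rng.choice([-1.0, 1.0], size=m)      # fixed output weights
    A = (W @ X.T > 0).astype(float)          # A[r, i] = 1{w_r . x_i > 0}
    # H_ij = (x_i . x_j) * (1/m) * sum_r a_r^2 1{w_r.x_i>0} 1{w_r.x_j>0}
    return (X @ X.T) * ((A * a[:, None] ** 2).T @ A) / m

X = np.random.default_rng(1).normal(size=(5, 3))
H = ntk_gram(X)  # 5x5 symmetric positive semi-definite Gram matrix
```

Narrow-sense convergence proofs bound the training dynamics via the smallest eigenvalue of such a Gram matrix, which is exactly what becomes hard to control beyond these restricted settings.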
> I don't see how Theorem 1 implies that GD has an exponential convergence (lines 223-224) as product factors depend on $s$. Thanks for this expert question. The $s$ dependence indicates that at each step $s$, the decay ratio between the next step and the current step can be different. However, this ratio has a uniform bound when choosing $j^*(s)= j^*=\arg\min_j w(t_j)(t_j-t_{j-1})\bar{\sigma}\_{t_j}$, which does not depend on $s$. Therefore, the product term $\prod_{s=0}^k\left(1-C_6hw(t_{j^*(s)})(t_{j^*(s)}-t_{j^*(s)-1})\bar{\sigma}\_{t_{j^*(s)}}(\frac{m\sqrt{\log d}d^{2a-1.5-4c}}{n^2N})\right)$ is further upper bounded by $\left(1-C_6hw(t_{j^*})(t_{j^*}-t_{j^*-1})\bar{\sigma}\_{t_{j^*}}(\frac{m\sqrt{\log d}d^{2a-1.5-4c}}{n^2N})\right)^{k+1}$. Please kindly see lines 225-226 for a similar illustration. > I don't see how Theorem 2 implies that the sampling error is almost linear in $d$ as $\gamma_k$ and $\sigma_t$ may depend on $d$. Theorem 2 presents a convergence result on $\text{KL}(p_\delta|q_{t_N})$ for general $\sigma_t$ and $\gamma_k$. In practice, $\sigma_t$ is usually chosen to be a function of $t$, independent of $d$; see [31, 52]. Once we determine which $\sigma_t$ to use, under the exponential time schedule, the $d$-dependence in the sampling error is always of the form $$ E_I+E_D \lesssim d \frac{\log (T/\delta)^{k+1} }{N^k},\quad k=1,2. $$ Even though $T$ is chosen to be of order poly($d$), the overall sampling error is almost linear in $d$. Please refer to Appendix F.1 in the paper for a detailed discussion. > typos Sincerely appreciated. They will of course be corrected. -------- **We would like to kindly direct the reviewer to "Comment" for the response to the rest of the review.** We sincerely apologize for exceeding the character limit, but we eagerly hope to thoroughly address the reviewer's concerns. --- Rebuttal 2: Title: Continued response to the reviewer Comment: > Q1.
I don't understand the sentence in lines 296-299, which mentions empirical and population losses even though the paper does not tackle the generalization error. Also, I don't see how the central limit theorem applies here, as individual terms in the empirical average over the training set are not independent since the network parameters themselves depend on the training set. Could the authors please clarify? We deeply apologize for the confusion. Yes, this paper focuses on optimization and sampling errors and does not tackle generalization error. The reason we mention the population and empirical losses is the following. In training, we optimize the empirical loss. However, when estimating the whole generation error, the population loss is necessary (see Theorem 2, the term $E_S$). Therefore, the gap between the population and empirical versions is integrated into the full error analysis (Corollary 1). We will further clarify this point in our next version. Regarding $\epsilon_n$, we sincerely apologize; it was our mistake, and we forgot to correct it. The "central limit theorem" should be the law of large numbers, i.e., this $\epsilon_n$ is the statistical error that converges to 0 as $n\to \infty$. We currently do not have a refined statistical error estimate, and this limitation will be clarified in a revision. > Q2. Do the authors believe that their work can inform practical decisions? If yes, what new predictions can be made using their theoretical results? If no (which does not necessarily imply the results are not significant), I think the paper should be more honest about this. Thank you for a very important question. We focused on a general theory and did not explore all possible ramifications, but here is one: when the score is very well trained, it is preferable to use the schedules in [Song et al. 2020] to sample/generate; otherwise, the design in [Karras et al. 2022] is preferable.
This important point has already been clarified in a revision. However, if the reviewer wants us to state in the limitation section that we presented a general theory but did not explore enough practical implications, we are happy to do so. > Contrary to what is stated in the checklist, limitations are not discussed in the paper. There is no discussion section. > * A first limitation is that only the approximation and optimization errors are studied, and generalization is not tackled. This means that the score network learns the empirical distribution of the training set, and thus reproduces training data points during sampling. This is only briefly mentioned in passing in the middle of the text but should be stated in the introduction (and potentially discussion). We sincerely apologize for not clarifying this in a separate section. In our next version, we will add a discussion section and state clearly in the introduction that we do not tackle generalization. > * Some assumptions are questionable. First, the first and last layer are not trained and left to their random initialization. Second, and more importantly, the chosen asymptotic setting corresponds to extremely wide networks trained on very few data points (probably so that they can memorize their training set). This thus corresponds to a rather unrealistic setting. We would like to thank the reviewer for these comments. First, the assumption that the "first and last layer are not trained" is equivalent to applying fixed linear transforms respectively to the input and the output of the network. This will not affect the trainability of the model if the depth $L$ and width $m$ are properly chosen. It is also a commonly used assumption in theoretical analysis (see e.g. [2,3]; also lines 179-180 in the paper). Second, we agree with the reviewer that "wide networks trained on very few data points" is a little bit "unrealistic".
However, this is a limitation of all current theoretical works on the convergence of neural networks, due to technical difficulties. Existing works (e.g. [1]) are making efforts to get closer to practical settings, but the progress is very slow. If there are important references we missed, we'd deeply appreciate pointers to them; otherwise, we sincerely hope that the reviewer could consider whether we could agree about the current stage of theoretical research, and allow us to explore in a sustainable way (a 3:Reject adds one more straw instead; is our work really that bad, especially in the context of the existing literature?) [1] Zou et al. An improved analysis of training over-parameterized deep neural networks. NeurIPS 2019. [2] Allen-Zhu et al. A convergence theory for deep learning via over-parameterization. ICML 2019. [3] Cai et al. Neural temporal-difference learning converges to global optima. NeurIPS 2019. --- Rebuttal Comment 2.1: Comment: I thank the authors for their very detailed answer. This fully addresses my concerns about the validity of the results. After reading the other reviews and rebuttals, I am willing to increase my score. The main point that is still not clear to me is what we learn from the results in this paper. - This first contribution is the quantitative analysis of the learning and sampling errors. While sampling is well understood, learning is not, and the analysis in the paper suffers from the same limitations as the rest of the literature (although relaxing the very-wide assumption in the new Theorem 1 above seems to be a major advance?). I suppose the main contribution lies in the novel gradient lower bound, which is relegated to the appendix, but which readers more expert than this reviewer may read and appreciate. * As a side remark, if theoretical progress is slow, then should we keep pushing in the same direction and keep submitting results with the same limitations but slightly more general assumptions every conference cycle?
Or rather try to tackle these fundamental limitations with new approaches? I'm not sure that the first option is the most efficient way to drive progress in theoretical research. But this is more of a systemic issue of the field and beside the point here. - The second contribution is the qualitative analysis of noise schedules in the literature. As the authors pointed out, this paper predicts that two different sampling schedules should be used based on the quality of the score approximation, which is an interesting hypothesis. My main qualm here still stands: the authors aim for quantitative statements, but it seems that the error analyses leave a lot of wiggle room in the epsilons and deltas to fit any desired schedule. To be clear, I see this as a negative point: it does not tell us which schedule is the best, and not even how to evaluate a given schedule beyond saying that it should have a few qualitative features. I am all in favor of qualitative analyses, which are usually underrated in theoretical papers, but I think they should be _simple_. It would greatly strengthen the paper if the authors studied a simplified setting where the error bounds take a simple form, and it can be clearly seen by a non-expert reader what features of a sampling discretization matter. In its current state, the paper does not give _intuition_ about the problem, and readers have to trust the authors that their qualitative results are an accurate description of typical phenomena. --- Rebuttal 3: Title: Response to reviewer comments Comment: We sincerely thank the reviewer for carefully reading our rebuttal and for being willing to increase the score. > This first contribution... Thank you for the comment, and we apologize for potential confusion. The main goal of this work is not to revamp the general methodology for analyzing neural network training, but rather to adapt existing tools to obtain a full theoretical guarantee of diffusion models' generation capability.
The former is a holy grail, but the latter, we believe, is also rather important. Nevertheless, although the sampling process of diffusion models has been analyzed for ~2 years already and is relatively well understood, it is only half of the generation pipeline, and no result existed that combined it with the other half, namely the score training. In fact, denoising score matching training has some significant differences from the settings in which existing training analyses directly apply, and we suspect this is why this problem had not already been solved. Please allow us to elaborate: the denoising score matching problem has a number of special properties and additional issues. Specifically, the input and output data in the denoising score matching problem have specific structures, such as: 1) The input data $X_{ij}=x_i+\bar{\sigma}\_{t_j}\xi_{ij}$ is noisy, and therefore the whole problem is non-interpolating, in contrast to the interpolation assumption in existing techniques. 2) The scales of the input and output data cannot be freely assumed in the theory. More precisely, the variances of the noise $\bar{\sigma}\_{t_j}$ in the input and the output data $-\xi_{ij}/\bar{\sigma}\_{t_j}$ change as $\bar{\sigma}\_{t_j}$ varies. Moreover, to favor the sampling process after training the score, the scale of the input data needs to be of order $\sqrt{d}$, and the scale of the outputs is at least of order $\sqrt{d}$. This is in sharp contrast to the data assumptions in the existing literature. For example, Allen-Zhu et al. [1] assume the output data to be $o(1)$ and the input data to have order-1 scale. Therefore, while still leveraging the framework of existing theoretical analyses, we had to develop additional techniques to deal with the above issues. For 1), we decomposed the model into an interpolation component and a non-interpolation one in order to apply the existing techniques to the interpolation part.
For 2), due to the special data structure and scalings, existing techniques, which rely on a commonly used data separability assumption, no longer suffice. Instead, we develop a new geometric technique to overcome this issue, and the only price to pay is that the dimension $d$ of the input data should not be too small, a requirement that has now been significantly relaxed in the new version, as stated in the global rebuttal. Hopefully our goal and contributions are better clarified by the above discussions. > The second contribution... We apologize for the confusion caused. Our full error analysis is completely quantitative, and we will now explain why our presentation of the preferred schedule(s) is qualitative. This explanation could start with a question: why can't we just substitute the hyperparameter choices of a specific schedule into our error bound, get a concrete number, and then compare that number with other schedules' numbers, or even optimize that number to get better schedules? That is because our full error bound is a function not only of the hyperparameters provided by a schedule, but also of many other things (i.e., what the reviewer referred to as "a lot of wiggle room"). For example: a) there is a dependence on the data distribution, which is reflected in a generalization gap term in the error bound; b) there is a dependence on the network structure (for our setting, width $m$ and depth $L$); c) there is a dependence on the optimization hyperparameters. If all these were known, then a number could indeed be spit out, but then, we feel, the comparison becomes a bit less interesting because we would no longer be comparing general schedules per se. Therefore, we don't compare precise values, and instead adopt some qualitative arguments, so that some generic yet practical guidance can still be provided. More precisely, we consider the following two general situations: $E_S\gg E_I+E_D$ and $E_S\ll E_I+E_D$.
Then we substitute the two specific time and variance schedules into the error bound, and compare either the values or the complexities of the time steps under the same error, as shown in Table 2. The schedule with the smaller value or complexity is the preferable choice. The results in Table 2 may appear informal; however, they are rigorously calculated in Appendices E3 and F (see lines 844, 846, 853-875), and could be formalized into corollaries or propositions. We chose not to present the full details in Table 2 merely due to the page limit. We will clarify this part in a revision. --- Rebuttal Comment 3.1: Comment: Thank you for the clarifications. I have increased my score.
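For concreteness, the two families of time schedules compared above can be instantiated as follows (an illustrative sketch with hypothetical values of $T$, $\delta$, $N$; not code from the paper):

```python
import numpy as np

# Hypothetical horizon, cutoff, and step count for illustration only.
T, delta, N, a = 80.0, 1e-3, 20, 7
i = np.arange(N)

# Polynomial schedule of Karras et al. [31]: interpolate linearly in t^(1/a), a = 7.
t_poly = (T ** (1 / a) + i / (N - 1) * (delta ** (1 / a) - T ** (1 / a))) ** a

# Exponential schedule of Song et al. [52]: t_k decays geometrically from T to delta.
t_exp = T * (delta / T) ** (i / (N - 1))
```

Both discretizations run from $T$ down to $\delta$ but allocate steps near $t=0$ very differently, which is the kind of difference the comparison under $E_S\gg E_I+E_D$ versus $E_S\ll E_I+E_D$ quantifies.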
Summary: This work develops theoretical bounds on the training error and sample complexity of score-based diffusion models, which further imply a complete convergence analysis (by combining the two). In addition, the derived bounds shed light on efficient hyper-parameter selection, including re-weighting functions, time discretization schemes, and variance schedules, supporting former empirical works in a principled manner. Strengths: 1. The problem setup is quite general, at least at the current stage. 2. This work provides a comprehensive and rigorous convergence analysis of diffusion models for *both* training and sampling. The analyzed score network also extends the former theoretical literature. 3. The obtained mathematical bounds also imply beneficial insights to guide hyper-parameter selection, which coincides with other experimental literature. Weaknesses: 1. The paper is organized in an almost purely technical way, and some details are not easy to follow. It would be helpful to improve the readability by adding, e.g., (key) notation collections and formulation illustrations. Also, there is no conclusion section. 2. The derived bounds in the main theorems are too technically complex. Is it possible to simplify them to summarize the core algebraic dependence on hyper-parameters (including the data dimension $d$, sample size $n$, model capacity $(m,L)$, and discretization steps $N$)? 3. There are no experiments to support the theory. In particular, is it feasible to numerically verify the exponential convergence in Theorem 1? 4. Are there any differences in training between over-parameterized neural networks applied to normal supervised learning and those used here in generative modeling (as the score network)? Note that this work selects the *same* feed-forward network (without biases) for *all* time steps (see the equation between Lines 173 and 174), ignoring the time information in the architecture of score networks.
Technical Quality: 4 Clarity: 2 Questions for Authors: 1. Please provide more details on the questions raised in the weaknesses section above. 2. The scale of the model initialization is near-zero (see Lines 181-182), which seems to be one of the typical configurations in the over-parameterization regime (lazy training?). Is this setting reasonable and representative in practice? It would be better to provide (numerical) examples about this, particularly in this generative modeling case. 3. Details 1: In Line 201, "... high probability (see Lemma..)." 4. Details 2: What is the definition of $a$ in Theorem 1 (see $d^{2a-1.5-4c}$)? 5. Details 3: In Proposition 1, there seems to be confusion in the mixed use of $S$ and $S^*$ (particularly, Point 2). 6. Details 4: In Lines 297-299, it is stated that $\epsilon_n$ (the distance between the empirical loss and population loss at the GD outputs) can be estimated by the central limit theorem. Can the authors provide more details on this (since the learned model parameters are *dependent* on the data)? 7. Details 5: In Table 1, why is the exponential schedule selected as $\bar{\sigma}_t=\sqrt{t}$? In VE-SDE, $\sigma_t$ is an exponential function of $t$ (and hence $\bar{\sigma}_t^2:=2\int_0^t \sigma_s^2 ds$). Note that in this case, Assumption 2 (Point 1) may not hold. Confidence: 4 Soundness: 4 Presentation: 2 Contribution: 3 Limitations: Based on the developed theory, it is worthwhile to further explore new weighting functions, time discretization schemes, and variance schedules to better tune diffusion models. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We deeply thank the reviewer for all the helpful comments and sincerely appreciate the positive evaluation. Here are our itemized responses: > W1 We sincerely appreciate your comment and have done a major revision of the paper accordingly. > W2 Thank you for this advice. It is in general hard to simplify the dependency without choosing the time and variance schedules. Therefore, in the following, we show the upper bound in our full error analysis under the time and variance schedules of [31] (a current gold standard in practice): $KL(p_\delta|q_{T-\delta}) \lesssim \frac{\mathrm{m}_2^2}{T^2} +\frac{d a^2 T^{\frac{1}{a}} }{\delta^{\frac{1}{a}}N}+(\mathrm{m}_2^2+d)( \frac{a^2T^{\frac{1}{a}}}{\delta^{\frac{1}{a}} N}+\frac{a^3T^{\frac{2}{a}}}{\delta^{\frac{2}{a}}N^2})$ $+\frac{C}{N}(C_1+(1-C_2 h (\frac{md^{\frac{a_0-1}{2}}}{n^3N^2}))^K),$ where $\mathrm{m}_2^2$ is the second moment of the data distribution, which is of order $d$ if the data distribution is Gaussian; $a$ is chosen to be 7 in [31]; $a_0\in(1/2,1)$; $C,C_1,C_2$ are constants; and $K$ is the total number of iterations. Please see upper bounds on the sampling errors under more schedules in Appendix F1. We will try to make this less technical in our next version. > W3 We would like to thank the reviewer for helping us strengthen the paper. The figure in the attached pdf shows how the actual training is consistent with our theory. More precisely, the $y$-axis is in $\log$ scale; the training loss decays in a nonlinear way, but it is clearly bounded by the exponential bound. > W4 Thank you for a very sharp observation. There is indeed time information in the model of our paper compared to previous overparameterized neural networks. In short, $X_{ij}$ in our theory is actually an augmented version that concatenates both the original $X_{ij}$ and $t_j$. This way we no longer need to consider an additional time variable separately.
More precisely, due to the data structure $X_{ij}=x_i+\bar{\sigma}\_{t_j}\xi_{ij}$, if we take $(x_i)\_d=0$ and $(\xi_{ij})\_d=1$, then the last element of this input vector is just $\bar{\sigma}\_{t_j}$, which is a function of time. This can be seen, for example, in Table 2: practical choices of $\bar{\sigma}\_{t_j}$ include $\bar{\sigma}\_{t_j}=t_j$ and $\bar{\sigma}\_{t_j}=\sqrt{t_j}$. However, we now realize this is a source of confusion. We apologize, and it will be clarified in our next version. > Q1 Kindly see the above. > Q2 Thank you for the questions. Such an initialization is indeed what is used in practice. For example, in EDM [31], which is a gold standard for the design of diffusion models, they provide several choices for initialization, including Xavier and Kaiming initializations using both uniform and Gaussian distributions. When initializing the weights by Gaussian, they rescale the standard normal by $\sqrt{\frac{2}{d_{\rm in}+d_{\rm out}}}$ and $\sqrt{\frac{1}{d_{\rm in}}}$ for a weight matrix of dimension $d_{\rm out}\times d_{\rm in}$. In our paper, $d_{\rm out},d_{\rm in}$ equal $m$ or $d$ for different layers. Therefore, our initialization of all the weight matrices ($\mathcal{N}(0,\frac{2}{m})$ and $\mathcal{N}(0,\frac{1}{d})$) matches the order of at least one of those used in practice. > Q3 We apologize for the typo. This should be Lemma 4. We will add it in our next version. > Q4 We deeply appreciate that you carefully checked the details and apologize for this typo and the confusion caused. It was $a=1/2$, but we have now further improved our rate and this $a$ is no longer needed. > Q5 We sincerely apologize for the typo. All the $S^*$ should be just $S$. We have already corrected this in a revised version. > Q6 We deeply apologize for this confusion and thank the reviewer again for the careful review. This is actually a mistake that we forgot to correct.
The "central limit theorem" should be the law of large numbers, i.e., this $\epsilon_n$ is the statistical error that converges to 0 as $n\to \infty$. We currently do not have a refined statistical error estimate, and this limitation will be clarified in a revision. > Q7 The exponential schedule is the popular version from [52]; it is called so because the time schedule $(t_k)$ decays exponentially fast (see the last column of Table 1). Regarding its noise schedule, since its forward SDE is $\mathrm{d}X_t= \sqrt{2}\mathrm{d}W_t$, $\sigma_t=1$. Therefore, the variance $\bar{\sigma}_t^2=t$, which is not exponential. Assumption 2 (Point 1) is satisfied since we can always choose an appropriately scaled weight $w(t)$ in the training step. > Limitation We agree. This is actually nontrivial given the complexity (due to its dependence on many design factors) of our bound, and we hope the reviewer could consider the length and scope of a conference paper and allow us to do it properly in a future work, because we'd also like to comprehensively verify everything empirically. [31] Karras et al. Elucidating the design space of diffusion-based generative models. NeurIPS 2022 --- Rebuttal Comment 1.1: Comment: Thanks for the clarifications by the authors. I find most of them adequate, except for the following three points: - Although the current manuscript cannot be refined, would you please state the plan for the notation collections, formulation illustrations, and the conclusion section? - Regarding the dependence of the upper bound on hyper-parameters, the dependence on the model capacity $(m,L)$ and sample size $n$ still seems unclear, in the sense that the upper bound is not related to the network depth $L$, and is also independent of the network width $m$ and sample size $n$ when the total GD iteration count $K$ goes to infinity (see the answer to W2).
- On the estimation error $\epsilon_n$: the analysis is definitely more complex than a trivial application of the law of large numbers, since the data and parameters are decoupled under this setting. Can the authors kindly provide at least (promising) ideas to handle this difficulty? --- Rebuttal 2: Comment: We sincerely thank the reviewer for discussing with us and helping us improve the quality of our manuscript. > Although the current manuscript cannot be refined, would you please state the plan for the notation collections, formulation illustrations, and the conclusion section? We apologize for not having provided enough of a plan in the rebuttal. Here it is. For the notation collections, we will add a detailed table in the appendix summarizing all our symbols for readers' easier reference. For the formulation illustrations, we will add more remarks, including: 1) more discussion of the special structure of this denoising score matching problem and how it is related to and different from traditional nonlinear regression problems (the former is not interpolating); 2) a comparison between our assumptions and results for this model and previous overparameterized neural network models, especially our weakened assumptions (we do not require data separability as in Allen-Zhu et al. [1]), a decomposition of the non-interpolation setting in diffusion models compared to the interpolation ones used in previous models, and the improved dependence on the input dimension $d$ in the error bound; 3) a better quantitative interpretation of our results under specific schedules with explicit parameter dependences. The conclusion section will start with the following: We provide a first full error analysis incorporating both the optimization and sampling processes. For the training process, we provide a first result under a deep neural network and prove exponential convergence into a neighborhood of minima, while for sampling, we also extend the current analysis to the variance exploding case.
Moreover, based on the full error analysis, we establish a quantitative understanding of the error bound under the two schedules. Consequently, we conclude with a qualitative illustration of the "bell-shaped" weighting and the choice of schedules in the well-trained and less-trained cases. Then we will discuss limitations and future directions. More precisely, the network architecture we used in the model is a deep ReLU network. Although it is so far the most complicated architecture for which such theoretical results exist, it is still far from what is used in practice, like U-Nets and transformers. Regarding the full error analysis, we only focus on the optimization and sampling errors, and do not dissect the generalization error. When bridging the theoretical results with practical designs of diffusion models, our results are mostly qualitative, and we only compare two schedules in the well-trained and less-trained cases. --- **We apologize for exceeding the character limit and would like to kindly direct the reviewer to our next Comment for the continuation of our reply.** --- Rebuttal 3: Title: Continued response to the reviewer Comment: > Regarding the dependence of the upper bound on hyper-parameters, the dependence on the model capacity ($m,L$) and sample size $n$ still seems unclear, in the sense that the upper bound is not related to the network depth $L$, and is also independent of the network width $m$ and sample size $n$ when the total GD iteration count $K$ goes to infinity (see the answer to W2). Thank you for the expert questions. Here are technical explanations.
In the simplified bound (provided in our last reply to better show the hyper-parameter dependences), namely $KL(p_\delta|q_{T-\delta}) \lesssim \frac{\mathrm{m}_2^2}{T^2} +\frac{d a^2 T^{\frac{1}{a}} }{\delta^{\frac{1}{a}}N}+(\mathrm{m}_2^2+d)( \frac{a^2T^{\frac{1}{a}}}{\delta^{\frac{1}{a}} N}+\frac{a^3T^{\frac{2}{a}}}{\delta^{\frac{2}{a}}N^2})$ $+\frac{C}{N}(C_1+(1-C_2 h (\frac{md^{\frac{a_0-1}{2}}}{n^3N^2}))^K),$ the constant $C_1$ actually hides many terms: $$ C_1=\epsilon_n+\bar{\mathcal{L}}(\theta_\mathcal{F})+|\bar{\mathcal{L}}(\theta_\mathcal{F})+\bar{C}|, $$ where $\epsilon_{n}=|\bar{\mathcal{L}}(\theta^{(K)})-\bar{\mathcal{L}}\_{\rm em}(\theta^{(K)})|$, $-\bar{C}$ is the true minimum of $\bar{\mathcal{L}}(\theta)$ defined in (5), and $\theta_{\mathcal{F}}=\arg\inf_{\{\theta:S(\theta)\in\mathcal{F}\}} |\bar{\mathcal{L}}(\theta)+\bar{C}|$ with $\mathcal{F}=\big\lbrace$ ReLU network function, with $d=\Omega({\rm{poly}}(\log (nN))), m=\Omega\left(\text{poly}\big(n,N,d,L,T/t_0\big)\right)\big\rbrace$. Then, $\epsilon_n$ is the statistical error, $\bar{\mathcal{L}}(\theta_\mathcal{F})$ is the estimation error, and $|\bar{\mathcal{L}}(\theta_\mathcal{F})+\bar{C}|$ is the approximation error. Regarding the depth dependence on $L$: indeed, it is not shown in the rate. However, it is actually reflected in the function approximation error $|\bar{\mathcal{L}}(\theta_\mathcal{F})+\bar{C}|$; for example, if $L$ is too small, this error could be large. We did not analyze the function approximation theory of neural networks, which is another profound area. Since that area is extensively studied, one can just leverage the existing results, which will give the $L$ dependence. Regarding the disappearance of the $m$ and $n$ dependences as $K\to\infty$, here is how the dependences are hidden: 1) there is a lower bound on the width $m$ in Assumption 1; once $m$ is greater than this bound, i.e.
the model is sufficiently overparametrized, it no longer has effect on the optimization error in the infinite $K$ limit. 2) Regarding the sample size $n$, its effect is mainly hidden inside the statistical error $\epsilon_n$. Please kindly see more details in the reply below. > On the estimation error $\epsilon_n$, the analysis is definitely more complex than a trivial application of the law of large numbers, since the data and parameters are decoupled under this setting. Can authors kindly provide at least (promising) ideas to handle this difficulty? We sincerely apologize for the confusion caused, which is likely due to our previous typo in the Corollary 1, and we deeply appreciate the opportunity to clarify: we did not intend to establish any estimation of the statistical error $\epsilon_n$, and our updated version actually looks like this: >> **Theorem.** Under the same conditions as updated Theorem 1 (global rebuttal) with $k=K$ and Theorem 2, we have $$ KL(p_\delta|q_{T-\delta}) \lesssim E_I+E_D+\max_k \frac{\sigma^2_{t_{N-k}}}{w(t_{N-k})} \bigg(\epsilon_{\rm{train}}+\epsilon_{n}+\bar{\mathcal{L}}(\theta_\mathcal{F})+|\bar{\mathcal{L}}(\theta_\mathcal{F})+\bar{C}|\bigg) $$ where $E_I,E_D$ are defined in Theorem 2, $\epsilon_{\rm train}$ is defined in updated Theorem 1 (global rebuttal), $\epsilon_{n}=|\bar{\mathcal{L}}(\theta^{(K)})-\bar{\mathcal{L}}\_{\rm em}(\theta^{(K)})|$, $\bar{C}$ is defined in (5), and $\theta_{\mathcal{F}}=\arg\inf_{\{\theta:S(\theta)\in\mathcal{F}\}} |\bar{\mathcal{L}}(\theta)+\bar{C}|$ with $\mathcal{F}=\big\lbrace$ReLU network function, with $d=\Omega({\rm{poly}}(\log (nN))), m=\Omega\left(\text{poly}\big(n,N,d,L,T/t_0\big)\right)\big\rbrace$. We'd deeply appreciate it just in case the reviewer is willing to shed more light on $\epsilon_n$'s estimation. In particular, the reviewer pointed out that the analysis is more complex than a trivial application of the law of large numbers, since the data and parameters are decoupled. 
May we confirm whether "parameters" refers to the neural network parameters or to training hyperparameters? If the latter, does the decoupling matter? If the former, we thought that after one step of training, the NN parameters become coupled with the data. Or could "decoupled" be, by any chance, a typo, with "coupled" intended instead? We thought the law of large numbers is actually easier to apply when the random variables are independent from each other, and when there is coupling (i.e., correlation) it can be trickier. --- In any case, thank you again for your time and consideration!
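As a small aside on this last point, here is a minimal numerical illustration (our own toy sketch, not part of the paper) of why correlation makes averaging arguments harder: the variance of the sample mean is $w^\top \Sigma w$ with $w=(1/n,\dots,1/n)$, which decays like $1/n$ for independent samples but does not shrink at all for perfectly correlated ones.

```python
import numpy as np

n, sigma2 = 100, 4.0
w = np.full(n, 1.0 / n)  # averaging weights of the sample mean

# Covariance matrices of n samples, each with variance sigma2
Sigma_iid = sigma2 * np.eye(n)        # independent samples
Sigma_cor = sigma2 * np.ones((n, n))  # perfectly correlated samples

var_mean_iid = w @ Sigma_iid @ w  # = sigma2 / n  -> vanishes as n grows
var_mean_cor = w @ Sigma_cor @ w  # = sigma2      -> no concentration
print(var_mean_iid, var_mean_cor)
```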
Summary: This work studies the training and sampling processes and thereby achieves an end-to-end analysis for diffusion models. For the optimization process, this work uses an over-parameterized NN and gradient descent to prove the convergence rate. For the sampling process, this work provides VESDE results and explains the great performance of SOTA VESDE. After these results, this work explains the “bell shaped” weight function used in applications from a theoretical perspective. Strengths: 1. This work is the first one to analyze the optimization process with a deep NN and prove exponential convergence into a neighborhood of minima. 2. This work makes the first step toward explaining the “bell shaped” weight function and the great performance of the state-of-the-art VE-based model from a theoretical perspective. Weaknesses: Weakness 1: The high-dimensional data assumption is strong. It would be better to discuss its applicability to high-dimensional data. Weakness 2: The technical novelty of Theorem 1 needs to be discussed in detail. Section 3.1 claims that this work develops a new method for obtaining a lower bound on the gradient. However, it does not discuss this method in the main text. It would be helpful to discuss the technical novelty in detail. Weakness 3: As shown in Section 3.1, this work mentions that the data separability assumption (corresponding to the parameter $\delta$ in [1]) cannot be directly used due to the properties of diffusion models. This work replaces $\delta$ with $t_0/T$ when choosing $m$, which seems to be related to the $E_I$ term in Theorem 2. The technical challenge is unclear after replacing $\delta$ with $t_0/T$. Weakness 4: As shown in [2], their results can be straightforwardly extended to any linear SDE, including VESDE. Hence, it would be better to discuss the technique behind Theorem 2. [1] Allen-Zhu, Z., Li, Y., & Song, Z. (2019, May). A convergence theory for deep learning via over-parameterization.
In International conference on machine learning (pp. 242-252). PMLR. [2] Benton, J., Bortoli, V. D., Doucet, A., & Deligiannidis, G. (2024). Nearly d-linear convergence bounds for diffusion models via stochastic localization. Technical Quality: 3 Clarity: 2 Questions for Authors: Please see the Weakness part. Question 1. Can you relax the high-dimensional data assumption by using the low-dimensional manifold assumption [3]? Question 2. The guarantee in [4] is a pure $W_2$ guarantee and cannot be directly compared to the $KL$ guarantee in this work. Can you discuss it in detail? Comment 1: It would be better to add a conclusion and limitations part at the end of this work. Typo 1: In line 201, the referenced Lemma is unclear. [3] Tang, R., & Yang, Y. (2024, April). Adaptivity of diffusion models to manifold structures. In International Conference on Artificial Intelligence and Statistics (pp. 1648-1656). PMLR. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: This work does not discuss the limitations and societal impact in an independent paragraph. Flag For Ethics Review: ['Ethics review needed: Research involving human subjects'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly thank the reviewer for all the helpful comments. We especially appreciate the comment "...the first one to analyze the optimization process with a deep NN...the first step to explain the “bell shaped” weight...". Please kindly see our itemized replies below: >Weakness 1 We agree that this assumption is strong. Therefore, we improved our convergence theory, and the new version now uses a much more relaxed assumption, where the original version $$d=\Theta(m)$$ is replaced by $$d=\Omega({\text{poly}}(\log (nN))).$$ Note that in the old version, the input dimension is of the same order as the width of the network $m$, while in the new version, it only needs $d$ to be not too small, or equivalently, the number of data points $n$ and the number of time points $N$ should not be exponential in $d$. Please also see the new version of Theorem 1 in our global rebuttal. > Weakness 2 Thank you for pointing this out. Roughly speaking, we first decompose the gradient into terms based on the one data point that has the largest loss value at the current iteration and on the remaining $nN-1$ samples [1]. Then the gradient can be written in the form $w^\top v+w^\top u$ for some $w,v,u$. The main novelty of our proof lies in analyzing the angles between $v$ and $u$. This allows us to show that the probability of $w^\top v\ge 0,w^\top u\ge 0$ is roughly of order $d^{-3/8}$. Finally, this leads to a lower bound on the gradient by $w^\top v$. The proof is more involved, but the above is the main technical innovation. > Weakness 3 We apologize for a critical confusion: the $\delta$ used in [1] and the $t_0/T$ in our paper serve different purposes, and we did not replace $\delta$ by $t_0/T$. As the expert reviewer knows, $\delta$ is a lower bound used in the data separability assumption in [1]. However, in our paper, our proof does not require such an assumption, thanks to our new method and the diffusion model setting, and therefore there is no counterpart of this variable.
More precisely, [1] requires the output data to be $o(1)$; in this case, the input data have to be well separated so that the gradient is locally strongly convex (to obtain the lower bound). In contrast, denoising diffusion models require Gaussian outputs that cannot be small, which helps contribute to the strong convexity and relaxes the requirement on the input data. Regarding $t_0/T$, it adds no extra technical challenges and is merely due to the weighted $\ell_2$ loss for diffusion models, which depends on different time points and variances, in contrast to the standard $\ell_2$ loss. Such weighting enters the constant factors of different bounds and consequently causes the dependency on the width $m$. > Weakness 4 The results in [2] can indeed be extended to VESDE for the sampling part, where the discretization error is analyzed. However, we respectfully think the detailed calculations still need to be done, and several not-so-straightforward steps we encountered included: 1) bounding the initialization error: [2] considered the VPSDE and uses the KL decay along the OU process, which no longer holds for VESDE. Therefore, we instead handle the initialization error via the convexity property of the KL divergence (Lem. 10). 2) bounding the discretization error: we considered a general time schedule, which makes it harder to bound the term $N_2+N_3$ in lines 758-759. We utilize the asymptotic estimate $\min(\mathrm{m}_2^2, \bar{\sigma}\_t^2 d)\lesssim(1-e^{-\bar{\sigma}\_t^2})(\mathrm{m}_2^2 +d)$ to obtain the upper bound. Thanks to the reviewer, we will include more details on these in a revised version. > Question 1 We deeply appreciate the reviewer for proposing a nice possibility to improve our result and will carefully evaluate it. Meanwhile, as indicated in the reply to Weakness 1, we have already improved our results without additional assumptions. > Question 2 The reviewer mentioned [4] but did not give the reference.
We are guessing that the reviewer might mean [24 in our manuscript]. $W_2$ is in general not comparable to KL unless we assume $p_{T-\delta}$ in Thm. 2 satisfies a Talagrand inequality (implying $W_2^2\le C\cdot\text{KL}$). The comparison to [24] is actually not fair, as [24] assumes strong log-concavity, which is much stronger than our situation, where the data distribution can be arbitrarily non-log-concave and multimodal. In the updated version, we compare to [6] (Cor. 1), whose convergence is established in TV for compactly supported data distributions. In this case, we can see that our result has better dimension dependence by Pinsker's inequality $2\text{TV}^2\le \text{KL}$. > Comment 1 Thank you very much for this comment. We will add it to our next version. > Typos We sincerely appreciate it! **Additional References** [5] Eldan. Taming correlations through entropy-efficient measure decompositions with applications to mean-field approximation. PTRF'19. [6] Yang et al. The Convergence of Variance Exploding Diffusion Models under the Manifold Hypothesis. --- Rebuttal Comment 1.1: Comment: Thanks for your careful response. The significant improvement of the dependence on $d$ is helpful. However, a proof sketch of the improved theorem is not provided. It would be better to provide an intuitive proof sketch, which is necessary to check correctness. Furthermore, I think it would be better to discuss the limitations in the rebuttal phase and add an independent limitations paragraph in the next version. --- Rebuttal 2: Comment: We deeply thank the reviewer for reading our rebuttal and discussing with us. > The significant improvement of the dependence on $d$ is helpful. However, the proof sketch of the improved theorem is not provided. It would be better to provide an intuitive proof sketch, which is necessary to check the correctness. The main improvement is on proving the lower bound of the gradient.
Therefore, we will provide a proof sketch for it; the rest of the proof follows a framework similar to Allen-Zhu et al. [1]. We first decompose the gradient of the $k$th row of $W_L$: $\nabla_{(W_L)_k}\bar{\mathcal{L}}_{\rm em}(\theta)=\underbrace{\frac{1}{n}w(t_{j^*})(t_{j^*}-t_{j^*-1}){(W_{L+1})^k}^\top(\bar{\sigma}_{t_{j^*}}W_{L+1} q_{i^* j^*,L}+\xi_{i^* j^*})\,q_{i^* j^*,L-1}\,\mathbb{1}_{(W_L q_{i^* j^*,L-1})_k>0}}_{\nabla_1}$ $+\underbrace{\frac{1}{n}\sum_{(i,j)\ne (i^*,j^*)}w(t_j)(t_j-t_{j-1}){(W_{L+1})^k}^\top(\bar{\sigma}_{t_j}W_{L+1}q_{ij,L}+\xi_{ij})\,q_{ij,L-1}\,\mathbb{1}_{(W_L q_{ij,L-1})_k>0}}_{\nabla_2}$ where $(i^*,j^*)$ indicates the sample index with the largest loss value. We then fix $(q_{ij,L-1})_s=1$ and prove that the index set on which both $(q_{i^*j^*,L})_s> 0$ and $\sum_{(i,j)\ne (i^*,j^*)}w(t_j)(t_j-t_{j-1})\bar{\sigma}_{t_j}\mathbb{1}_{(W_L q_{ij,L-1})_k>0}(q_{ij,L})_s>0$ hold has cardinality of order $m$ with high probability. Next, conditioning on the index set we have found, we can decouple each element of $\nabla_{(W_L)_k}\bar{\mathcal{L}}_{\rm em}$ with high probability. We then prove that, with high probability, the event $(\nabla_1)_s> 0$ and $(\nabla_2)_s>0$ has probability at least of order $d^{(a_0-1)/2}$, where $a_0\in(1/2,1)$. Now we deal with general $(q_{ij,L-1})_s$ and prove that if the above results hold for $(q_{ij,L-1})_s=1$, then there exists an index set with cardinality of order $m/(nN)$ on which $(\nabla_1)_s> 0$ and $(\nabla_2)_s>0$ also hold. In the end, combining all the steps above yields the lower bound. > Furthermore, I think it would be better to discuss the limitation in the rebuttal phase and add an independent limitation paragraph in the next version. We thank the reviewer for the suggestion. We will add a limitations paragraph to our paper. Below is a quick summary: The network architecture we used in the model is a deep ReLU network.
Although it is so far the most complicated architecture used in theoretical results, it is still far from what is used in practice, such as U-Nets and transformers. Regarding the full error analysis, we only focus on the optimization and sampling errors, and do not dissect the generalization error. When bridging the theoretical results with practical designs of diffusion models, our results are mostly qualitative, and we only compare two schedules under well-trained and less-trained regimes. Finer analysis techniques are needed to provide a quantitative account of the parameters in diffusion models, as well as to motivate more designs of weightings and schedules. We leave these for future exploration. --- Rebuttal Comment 2.1: Comment: Thanks for the detailed discussion on the technical novelty and limitations. Since the rebuttal addresses my concerns, I will raise my score to $5$.
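As an aside, the Pinsker comparison invoked earlier in this thread ($2\,\text{TV}^2\le \text{KL}$) is easy to sanity-check numerically; the sketch below uses two arbitrary discrete distributions of our own choosing (not from the paper).

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

tv = 0.5 * np.abs(p - q).sum()  # total variation distance
kl = np.sum(p * np.log(p / q))  # KL(p || q)

# Pinsker's inequality: 2 * TV^2 <= KL
print(2 * tv**2, kl)
```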
Summary: This paper provides a full error analysis (considering both training and sampling) for a class of score-based SDE diffusion models, and uses the results to understand why and when certain time and variance schedules are preferred. Strengths: - The theoretical contributions of the paper are excellent - Overall the paper is well organized and the notation is clear Weaknesses: - The stated theorems are quite dense and are not easy to parse. The paper would benefit from having informal versions of these results highlighting the key qualitative features - Only an SDE-based sampler (the exponential integrator scheme) is considered, while comparisons with other samplers (such as the Probability Flow ODE) are missing Technical Quality: 4 Clarity: 4 Questions for Authors: - In Remark 1, it was stated that the KL divergence can be explicitly computed when the data distribution is Gaussian; would you provide such a result in the appendix? Minor remarks: - typo in line 16: variance -> various - missing lemma number in line 201 Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for all the helpful advice and comments and deeply appreciate the positive evaluation. >Weakness 1: The stated theorems are quite dense and are not easy to parse... We greatly appreciate the comment and will add more intuitive explanations or informal versions of the results in our next version. >Weakness 2: Only an SDE based sampler (the exponential integrator scheme) is considered, while comparisons with other samplers (such as the Probability Flow ODE) are missing. The Probability Flow ODE ([1]) is indeed another implementation. It reverses the diffusion as an ODE. Because of that, common tools like Girsanov's theorem no longer apply, and its analysis requires different approaches. However, there are existing analyses (e.g., [2,3]) that assume the score is already learned within $\epsilon$ error, and these approaches can be used to replace the sampling component of our full analysis. We also note that so far there is no verdict on which one is better, even just for the sampling part. In this work we simply chose to analyze the VE-SDE implementation, as it was used in many celebrated papers [4,5]. But we agree that other sampling strategies could also be interesting future investigations! >Question 1: In Remark 1, it was stated that the KL divergence can be explicitly computed when the data distribution is Gaussian, would you provide such result in the appendix? When the target distribution is a Gaussian with mean $m$ and covariance $\sigma^2 I_d$, the score function can be computed exactly.
In this case, $p_\delta$ and $q_{T-\delta}$ in Theorem 2 are both Gaussian and $\text{KL}(p_\delta|q_{T-\delta})$ satisfies: $$ \text{KL}(p_\delta|q_{T-\delta})=\frac{d}{2}\log M -\frac{d}{2} +\frac{d}{2M}+ \frac{\lVert m\rVert^2}{(\sigma^2+\bar{\sigma}_T^2)^2 M }, $$ with $M=\frac{\bar{\sigma}\_T^2+\sum_{k=0}^{N-1} \frac{(\sigma^2+\bar{\sigma}\_\delta^2)^2}{(\sigma^2+\bar{\sigma}\_{T-t_{k+1}}^2)^2}(\bar{\sigma}\_{T-t_k}^2-\bar{\sigma}\_{T-t_{k+1}}^2)}{\sigma^2+\bar{\sigma}\_\delta^2}$. We will add the detailed computation to the appendix of our updated manuscript. [1] Y. Song et al. “Score-based generative modeling through stochastic differential equations”. In: International Conference on Learning Representations. 2021 [2] Chen, Sitan, et al. "The probability flow ode is provably fast." Advances in Neural Information Processing Systems 36 (2024). [3] Li, Gen, et al. "Towards faster non-asymptotic convergence for diffusion-based generative models." arXiv preprint arXiv:2306.09251 (2023). [4] Karras, Tero, et al. "Elucidating the design space of diffusion-based generative models." Advances in Neural Information Processing Systems 35 (2022): 26565-26577. [5] Song, Yang, et al. "Score-based generative modeling through stochastic differential equations." arXiv preprint arXiv:2011.13456 (2020). --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal Comment: I am satisfied with the rebuttal and will keep my score.
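The Gaussian KL expression in this thread is schedule-specific, but the generic closed form it builds on (the KL divergence between two isotropic Gaussians, a standard identity and not specific to this paper) can be sanity-checked against its coordinate-wise factorization:

```python
import numpy as np

def kl_isotropic_gauss(m1, s1_sq, m2, s2_sq):
    """KL( N(m1, s1_sq*I_d) || N(m2, s2_sq*I_d) ), standard closed form."""
    d = len(m1)
    diff = np.asarray(m1) - np.asarray(m2)
    return (0.5 * d * (s1_sq / s2_sq - 1.0 + np.log(s2_sq / s1_sq))
            + diff @ diff / (2.0 * s2_sq))

def kl_1d(a, s1_sq, b, s2_sq):
    """KL between two univariate Gaussians N(a, s1_sq) and N(b, s2_sq)."""
    return 0.5 * np.log(s2_sq / s1_sq) + (s1_sq + (a - b)**2) / (2 * s2_sq) - 0.5

m1, m2 = np.array([1.0, -2.0, 0.5]), np.zeros(3)
total = kl_isotropic_gauss(m1, 2.0, m2, 3.0)
# Isotropic Gaussians factorize, so the d-dim KL is a sum of 1-d KLs
coordwise = sum(kl_1d(a, 2.0, b, 3.0) for a, b in zip(m1, m2))
print(total, coordwise)
```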
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for the helpful comments. We have improved our convergence theory, and the new version now uses a much more relaxed assumption, where the original version $$d=\Theta(m)$$ is replaced by $$d=\Omega({\text{poly}}(\log (nN))).$$ Note in the old version, the input dimension is the same order as the width of the network $m$, while in the new version, it only needs $d$ to be not too small, or equivalently, the number of data points $n$ and the number of time points $N$ should not be exponential in $d$. All the other assumptions remain unchanged and we obtain the following new version of the theorem: > **Theorem 1.** Define a set of indices to be $\mathcal{G}^{(s)}=\{(i,j)|f(\theta^{(s)};i,j)\ge f(\theta^{(s)};i',j')\text{ for all }i',j'\}$. Then given Assumption 1 and 2, for any $\epsilon_{\rm train}>0$, there exists some $M(\epsilon_{\rm train})=\Omega\left(\text{poly}\big(n,N,d,L,T/t_0,\log(\frac{1}{\epsilon_{\rm train}})\big)\right)$, s.t., when $m\ge M(\epsilon_{\rm train})$, $h =\Theta(\frac{nN}{m\min_j w(t_j)(t_j-t_{j-1}) \bar\sigma_{t_j} })$, and $k=\mathcal{O}(d^{\frac{1-a_0}{2}}n^2N\log(\frac{d}{\epsilon_{\rm train}}))$, with probability at least $1-\mathcal{O}(nN)\exp(-\Omega(d^{2a_0-1}))$, we have $$ \bar{\mathcal{L}}\_{\rm em}(\theta^{(k)})\le\prod_{s=0}^{k-1}\left(1-C_5 h \ w(t_{j^*(s)})(t_{j^*(s)}-t_{j^*(s)-1})\bar{\sigma}\_{t_{j^*(s)}} \left(\frac{md^{\frac{a_0-1}{2}}}{n^3N^2}\right)\right)\bar{\mathcal{L}}\_{\rm em}(\theta^{(0)}) $$ where the universal constant $C_5>0$, $a_0\in(\frac{1}{2},1)$, and $(i^*(s),j^*(s))=\arg\max_{(i,j)\in\mathcal{G}^{(s)}}w(t_{j})(t_j-t_{j-1})\bar{\sigma}\_{t_{j}}$. Moreover, when $K=\Theta(d^{\frac{1-a_0}{2}}n^2N\log(\frac{d}{\epsilon_{\rm train}}))$, $$ \bar{\mathcal{L}}\_{\rm em}(\theta^{(K)})\le \epsilon_{\rm{train}}. 
$$ This bound also has an improved dependency on $d$, namely $d^{\frac{a_0-1}{2}}\in(d^{-1/4},1)$, as opposed to the old version, which was $d^{2a-1.5-4c}\sqrt{\log d}\in (d^{-1}\sqrt{\log d}, d^{-9/10}\sqrt{\log d})$. Pdf: /pdf/79b50f1c3648b0af2591aa14066944cd6c595284.pdf
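The contraction-product form of the bound above (each step shrinks the empirical loss by a factor $1-C_5 h\,\cdots$) has the same shape as plain gradient descent on a strongly convex quadratic; the toy sketch below (our own illustration, not the paper's setting) shows this exact per-step geometric decay.

```python
# Gradient descent on f(x) = 0.5 * lam * x^2; each step multiplies the
# loss by exactly (1 - h*lam)^2, a geometric contraction of the same
# shape as the per-iteration factor in Theorem 1.
lam, h, x = 2.0, 0.1, 5.0
rho = (1.0 - h * lam) ** 2

losses = []
for _ in range(50):
    losses.append(0.5 * lam * x * x)
    x -= h * lam * x  # gradient step

ratios = [losses[k + 1] / losses[k] for k in range(len(losses) - 1)]
print(ratios[0], rho)  # every per-step ratio equals rho
```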
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Articulate your NeRF: Unsupervised articulated object modeling via conditional view synthesis
Accept (poster)
Summary: This paper presents an unsupervised method to learn the pose and part-segmentation of articulated objects with rigid parts using conditional view synthesis. Their approach learns the NeRF of both states and extracts 3D voxels for optimizing the pose and segmentation. Strengths: Using 3D Voxel instead of mesh provides another approach. Weaknesses: Firstly, my main concern is about the novelty and contribution of this work. Compared to the work PARIS, which this paper frequently references, I did not find any significant innovative points. In fact, the method appears to have regressed. This method separates the optimization of pose and segmentation, first optimizing the pose and then optimizing the segmentation. In my opinion, this is not as effective as jointly optimizing them, since the segmentation results can also affect the pose optimization results. However, the 3D voxel approach used in this paper is not differentiable (Lines 150-151), so it cannot jointly optimize both. And the voxel grid refinement operation seems more like a simple engineering post-processing. Additionally, I have some issues with the experimental results. The values reported for the existing method PARIS in this paper differ from the results reported in the original paper. Have the authors considered comparing their method with other existing methods? Technical Quality: 2 Clarity: 1 Questions for Authors: Please see the Weakness section. Confidence: 4 Soundness: 2 Presentation: 1 Contribution: 1 Limitations: I agree with the limitation section, but, I find the method for finding U (Line 170) to be insufficiently robust. If an issue arises, all subsequent operations will fail. Flag For Ethics Review: ['No ethics review needed.'] Rating: 2 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for reviewing our paper. 1. **Alternating optimization:** ```This method separates the optimization of pose and segmentation, first optimizing the pose and then optimizing the segmentation. In my opinion, this is not as effective as jointly optimizing them, since the segmentation results can also affect the pose optimization results.``` We agree with the reviewer that the pose and segmentation solutions depend on each other; however, we disagree that joint optimization is guaranteed to obtain better results. It is well known that simultaneously updating intertwined parameters can be challenging and can result in poor solutions. Hence, we use a well-known, principled optimization strategy, block coordinate descent [R1], which optimizes first the pose and then the segmentation, iteratively. We also quantitatively support this argument: our method with the suggested joint optimization (i.e., without the decoupled pose estimation (DP) and iterative refinement (IR); the first row in Table 5) performs poorly. Here, the segmentation head and the articulated pose tensor are jointly optimized during training (Lines 276-277). [R1] Wright, Stephen J. "Coordinate descent algorithms." Mathematical programming 151.1 (2015): 3-34. 2. **Novelty:** ```Compared to the work PARIS, which this paper frequently references, I did not find any significant innovative points.``` We refer the reviewer to the originality point in the reviewer guidelines of the conference and invite the reviewer to provide a more substantiated comment on their understanding of “significant innovation”. Our method differs significantly from PARIS in three ways. First, it only requires a single neural radiance field (NeRF) for each object, while PARIS requires one for each part. Second, our method can model objects with multiple moving parts by segmenting the NeRF space, while PARIS is limited to two-part objects (with one part static and the other dynamic).
Third, our method performs consistently across different objects, unlike PARIS, thanks to the proposed initialization strategy and iterative 3-step optimization. Clearly, our method is composed of a novel combination of multiple steps which has not been applied to the related problems, and it significantly differs from the prior work in terms of methodology and capability. 3. **Non-differentiable representation:** ```the 3D voxel approach used in this paper is not differentiable (Lines 150-151), so it cannot jointly optimize both.``` The 3D voxel strategy is used only for initializing the optimization (see Lines 150-151). Therefore, it is not required to be differentiable. While the initialization can be noisy, we iteratively refine it by later acquiring estimates from the segmentation head, and we demonstrate its effectiveness in Table 5. 4. **Simple post-processing:** ```And the voxel grid refinement operation seems more like a simple engineering post-processing.``` The voxel grid refinement is not a post-processing step but an update procedure in the optimization that updates the 3D coordinates based on the estimates from the segmentation head, enabling more accurate pose estimation in the next iteration. Please refer to Lines 185-194 for more details. 5. **Reproducing the prior work:** ```The values reported for the existing method PARIS in this paper differ from the results reported in the original paper.``` The reviewer missed that we already pointed out the reproducibility issue of PARIS in the footnote on page 6. Similar reproducibility problems for PARIS have been reported by other researchers in issues on the official repo. The authors' response to the reproducibility issue in the official code repo is: ```“the training for each object is essentially an optimization process, which can be easily affected by the randomness.
I was also encountering the unstable training issue you mentioned in my practice”``` For a fair comparison, we follow the default setting provided in the official repo and reproduce the results on the officially released data. To account for the randomness, we report the mean and std of each metric over 5 independent runs in our paper. In this setting, we can see that our method is more robust across different objects and delivers more consistent performance than PARIS. Besides, the concurrent work [R2] also reports the reproduced performance of PARIS using the officially released data, which similarly shows lower performance for PARIS. [R2] Neural Implicit Representation for Building Digital Twins of Unknown Articulated Objects, CVPR 2024 6. **Comparison to other works:** ```Have the authors considered comparing their method with other existing methods?``` We have compared our method to the most recent and related work, PARIS. If the reviewer is aware of other related work that was not discussed in our submission, we are happy to provide further discussion. 7. **Robustness:** ```I find the method for finding U (Line 170) to be insufficiently robust. If an issue arises, all subsequent operations will fail.``` First, the U mentioned in line 170 refers to the projected 2D points in image space, which are obtained from the 3D points $X_\ell$ using the standard camera projection formula, and are not found by our method like the 3D coordinates $X_\ell$. If the reviewer is referring to the 3D coordinates, we already provided a rigorous evaluation of our method on multiple categories of objects and further studied its robustness in the case of noisy and incomplete initialization. We refer to Figure 2 and Figure 8 for more details. --- Rebuttal Comment 1.1: Title: Feedback Comment: I still do not believe that this work offers sufficient innovation and contribution compared to existing works, and there are significant issues with the experimental section.
Regarding your statement: "we disagree that joint optimization is guaranteed to obtain better results. It is well known that simultaneously updating intertwined parameters can be challenging and result in poor solutions," I expect to see comparative experimental results or valid references to support this claim rather than an assertion without evidence. Furthermore, if both the segmentation process and the transformation process are differentiable, we can still iteratively update both. I do not believe that replacing a differentiable process with a non-differentiable one, relying on iterative approximation, is a good idea. The voxel grid refinement operation is not a one-time post-processing step, yet it is still a simple engineering add-on. As for the CVPR 2024 work R2 you pointed out, I do not consider it to be concurrent work, as it was already published before your submission. Both R2 and the works it compares against can be used as benchmarks for comparison. Since you found that the PARIS work could not be reproduced, it is clearly unacceptable to compare only against poorly reproduced results. --- Rebuttal 2: Comment: 1. **Sufficient innovation and contribution compared to existing works:** We refer the reviewer to the originality point in the reviewer guidelines of the conference and invite the reviewer to provide a more substantiated comment on their understanding of “significant innovation”. 2. **Comparative experimental results:** The reviewer missed the point in our response. We would like to reiterate that our method without the decoupled pose estimation (DP) and iterative refinement (IR), which is reported in the first row of Table 5, is the joint optimization baseline requested by the reviewer. This baseline jointly optimizes the part segmentation along with the part transformation parameters, and it does not use $X_{\ell}$.
The results in Table 5 show that the joint optimization strategy performs poorly and that the proposed alternating strategy is significantly better. 3. **R2, I do not consider this work to be concurrent, as it was already published before your submission:** R2 was published in the CVPR proceedings on 17 June, after the NeurIPS submission deadline. The reviewer might be referring to the day it became public on arXiv, which is 1 April, 1 month and 21 days before our submission. Such a short duration makes it a concurrent work. The same opinion was shared in the review of Reviewer DWX2. We already discussed the differences from R2 in our response and indicated that this method uses two sets of RGB-D views as input and relies on accurate depth information to estimate the 3D object structure, unlike our method, which uses only RGB views. They do not provide results with only RGB views. Hence it is not comparable to ours. 4. **Both R2 and the works they compared can be used as benchmarks for comparison:** We already compared our method to the prior works (Ditto [R3] and PARIS) that use the same input type and supervision. Unlike ours, Ditto [R3] relies on accurate point cloud input, which makes the task significantly easier than performing it with multi-view input. We already cited and discussed [R3] in our submission. [R3] Jiang, Zhenyu, Cheng-Chun Hsu, and Yuke Zhu. "Ditto: Building digital twins of articulated objects from interaction." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
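To illustrate the alternating (block coordinate descent) strategy defended in this thread, here is a toy sketch of our own on a convex quadratic objective (not the paper's actual pose/segmentation objective): each block is minimized exactly while the other is held fixed, and the iterates converge to the joint minimizer.

```python
# Block coordinate descent on f(x, y) = x^2 + 2y^2 + xy - 4x - 6y.
# Setting each partial derivative to zero gives the exact block updates;
# the unique joint minimizer is (x, y) = (10/7, 8/7).
x, y = 0.0, 0.0
for _ in range(60):
    x = (4.0 - y) / 2.0  # argmin over x with y fixed (df/dx = 2x + y - 4 = 0)
    y = (6.0 - x) / 4.0  # argmin over y with x fixed (df/dy = 4y + x - 6 = 0)
print(x, y)  # converges to (10/7, 8/7)
```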
Summary: This paper introduces a novel method for decomposing articulated objects and predicting their articulation. The pipeline is trained without supervision. Initially, a "static NeRF" of the object's initial state is obtained. The method then employs part-aware rendering to optimize the pose-change tensor and object segmentation in a decoupled manner. Finally, it reconstructs the voxel grid and performs refinement to achieve high-quality results. Strengths: 1. The pipeline is end-to-end and the training is completely unsupervised. 2. The video results are provided and look impressive. Weaknesses: 1. There are still many artifacts that can be seen in the visualization results 2. The method only works in limited cases. Most of the examples have one or two joints and a large static part. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. To optimize the pose-changing tensor, it requires the difference between the target and initial views. What if the object does not have static parts? For example, when scissors cut something, both parts are moving. 2. The results presented in PARIS are not as poor as those in this paper, which the authors should address. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper discusses the limitations and shows some failure examples. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for reviewing our paper. 1. **Artifacts for visualization:** ```There are still many artifacts that can be seen in the visualization results``` The artifacts during rendering are indeed caused by the imperfect segmentation in the NeRF space, though they do not harm the pose and joint estimation performance. A challenge of segmenting in the implicit 3D NeRF space is that it is non-trivial to apply regularization to smooth the segmentation results, as is done in standard segmentation methods that work in explicit 2D space. In principle, the artifacts can be reduced with more sets of images from different articulated poses. In future work, we plan to move to explicit 3D representations such as Gaussian Splatting to better regularize the segmentation masks. 2. **More evaluation:** ```“the method only works in limited cases. Most of the examples have one or two joints and a large static part.”``` A key advantage of our method over the prior work is its ability to model multiple parts. For the rebuttal, we have further evaluated our model on two more objects. The first is a door with its frame (door-8867) from the SAPIEN dataset, where the frame is static and significantly smaller than the door. The other is a simple object with four chained parts. In the first experiment, our model obtains better performance on all measured metrics compared to PARIS. In the second experiment, we demonstrate that our method can model the motions of movable parts in chained objects. Yet, how to recover the chained kinematics remains an open question and is out of scope for this work. Detailed results can be found in the attached 1-page PDF. We will include these results, along with more such objects, in the final version. 3. **Limitation:** ```“What if the object does not have static parts?
For example, when scissors cut something, both parts are moving.”``` Indeed, our method assumes one part of the object is static relative to the canonical frame across the different articulated poses, which is a limitation common to both our method and the prior work. In such cases, the challenge is to segment and track object parts simultaneously. A potential solution could be ingesting a video input with multiple frames and using optical flow to track and cluster parts. However, modeling from video data is out of scope in this paper. 4. **Baseline performance:** ```The results for PARIS are far worse than the original paper, need to check on it.``` As we pointed out in the footnote on page 6, similar reproducibility problems for PARIS have been acknowledged as challenging by other researchers in issues on the official repo. The authors’ response to the reproducibility issue in the official code repo is: ```“the training for each object is essentially an optimization process, which can be easily affected by the randomness. I was also encountering the unstable training issue you mentioned in my practice”``` For a fair comparison, we follow the default setting provided in the official repo and reproduce the result on the officially released data. To account for the randomness, we report mean and std metrics over 5 independent runs in our paper. In this setting, our method is more robust across different objects and delivers more consistent performance compared to PARIS. Besides, the concurrent work [R1] also reports the reproduced performance of PARIS on the officially released data, which shows a similarly lower performance for PARIS. [R1] Neural Implicit Representation for Building Digital Twins of Unknown Articulated Objects, CVPR 2024 --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. I do not have further concerns and I would like to increase my rating. 
--- Reply to Comment 1.1.1: Title: Thanks for the reply Comment: Thank you again for your review!
Summary: This paper presents an unsupervised framework that jointly learns articulations and part segmentations of objects with rigid parts from multi-view images. Specifically, they proposed a two-stage approach. In the first stage, a static NeRF is fitted to one of the object states. In the second stage, the optimization alternates between optimizing part assignment and part pose estimations. The key to their approach is a 3D voxel grid heuristic that helps initialize the part assignment. The proposed method shows promising quantitative and qualitative results in view synthesis, pose estimation, and part segmentation compared to the previous method PARIS. Notably, the proposed method also shows more consistent results across multiple runs. Strengths: The presented approach makes sense and shows remarkable results compared to the baseline. In particular: - No labels are required for training. - A single NeRF plus a small segmentation head is trained, making it more parameter-efficient compared to the baseline (PARIS). - The learned articulated structures show better geometry accuracy/consistency during motion, compared to the baseline approach. Overall, the proposed method is promising, the writing is easy to follow, and the problem it tackles is of great importance for a wide range of applications in robotics. Weaknesses: While the approach looks promising, it does have some room for improvement - As can be observed in the supplementary video, the segmentation is still noisy, containing floaters that have nothing to do with the moving part. - Using pixel difference as the heuristic for tagging moving parts is potentially problematic. For example, in a real-world setting, the constantly changing environment light produces color differences across views/states, which can greatly affect the tagging accuracy. - The examples presented still have a rather simple articulated structure (i.e., no complicated kinematic chains that connect multiple parts). 
Note that this is an open problem/common issue not specific to the proposed method. - The approach is limited to rigid deformation (as discussed in L303-304). Learning a deformation field may alleviate this problem, but this is out of the scope of this work. Technical Quality: 3 Clarity: 3 Questions for Authors: Below are some questions that I have: - A concurrent work [1] tackles the same topic and similarly achieves impressive results. It would be nice if the authors briefly discuss this work (what the two have in common, and what sets them apart). Note that [1] was not yet published at the time of NeurIPS submission, so this is just a nice-to-have discussion and will not affect the final rating. Typo: - L212: geodesic distance should be e_g? [1] Neural Implicit Representation for Building Digital Twins of Unknown Articulated Objects, CVPR 2024 Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper adequately addresses the limitations in Section 5.5 and Section 6. It is also pretty awesome that the paper presents failure cases and further analysis in the Supplementary (A.3) Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for reviewing our paper! 1. **Problematic heuristic**: ```Using pixel difference as the heuristic for tagging moving parts is potentially problematic``` We do not use RGB pixel values but tag the moving parts based on the opacity difference (the alpha channel of RGBA images), which is not affected by the RGB values (see line 154). Thus, it works robustly even in adverse and fast-changing lighting conditions. Furthermore, as shown in the supplementary (Fig. 8a), our pipeline is robust to noisy tagging initialization. 2. **Simple examples**: ```The examples presented still have a rather simple articulated structure. Note that this is an open problem/common issue not specific to the proposed method.``` Thanks for pointing out this limitation of our work. In the case of multiple connected parts, our method provides a global transformation for each part. In the presence of a known kinematic structure, one can then estimate the local transformations for each part. We would like to note that structure estimation is an actively studied problem in itself and out of scope for our paper. 3. **Limitation:** ```The approach is limited to rigid deformation (as discussed in L303-304). Learning a deformation field may alleviate this problem, but this is out of the scope of this work.``` This is indeed a discussed limitation of our method. Rigid articulated objects are ubiquitous in our daily life, and hence modeling such objects is a valuable and challenging problem. We plan to explore different deformation parameterizations, including non-rigid ones, in future work. 4. **Discussion of concurrent work [R1]:** The concurrent work uses two sets of RGBD images of different articulation states as input. Thanks to the depth measurements, it can first reconstruct two 3D mesh models for the different articulations and then compute part-level transformations based on 3D point correspondences. 
Part segmentation is obtained by grouping points that share similar motions. In contrast, our method does not require measurements from a depth sensor and focuses on a more challenging problem setting. Hence, we build a NeRF model based on one articulation state and utilize the 2D information from the second image set for part segmentation in the NeRF space and for part motion estimation. [R1] Neural Implicit Representation for Building Digital Twins of Unknown Articulated Objects, CVPR 2024 --- Rebuttal Comment 1.1: Title: Thanks for the responses. Comment: Thanks for addressing and clarifying the concerns I have. My apologies for misunderstanding the pixel difference computation -- certainly, using opacity only avoids the above-mentioned problem. Just one little comment regarding the pixel-difference thing: it would be great if the pixel-difference part in Figure 2 could be improved. Figure 2 depicts that the RGB image and foreground mask are used for the pixel difference computation. Perhaps it would be better to explicitly mention/depict opacity here, so it would be more accurate/less ambiguous and align with the text better. Other than that, all my concerns are addressed or properly discussed. --- Reply to Comment 1.1.1: Title: Thanks for the reply Comment: Thank you for your suggestion! We will improve Figure 2 in the final version for better clarity.
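The opacity-difference tagging clarified in item 1 of the rebuttal above can be illustrated with a minimal sketch. The function name `tag_moving`, the threshold, and the toy alpha maps are hypothetical, not the paper's actual implementation:

```python
import numpy as np

def tag_moving(alpha_a, alpha_b, thresh=0.5):
    """Flag pixels whose opacity (alpha channel) changed between two
    articulation states; RGB values are ignored entirely."""
    return np.abs(alpha_a - alpha_b) > thresh

# Toy 4x4 alpha maps: a part visible top-left in state A has moved
# to the bottom-right in state B.
alpha_a = np.zeros((4, 4)); alpha_a[:2, :2] = 1.0
alpha_b = np.zeros((4, 4)); alpha_b[2:, 2:] = 1.0
mask = tag_moving(alpha_a, alpha_b)  # True where the part appeared or disappeared
```

Because the mask depends only on opacity, lighting changes that alter RGB values leave it unaffected, which is consistent with the robustness claim in the rebuttal.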
Summary: The paper proposes a method for modeling articulated objects with neural radiance fields (NeRF). It employs a stage-wise training schema, first building a NeRF of the object in a reference configuration. Then a segmentation into parts and the relative pose changes are learned in an alternating fashion. A modified rendering equation and a corresponding volume sampling strategy are introduced that take into account the rigid deformations of the object parts. Strengths: The paper considers unsupervised segmentation into parts by adding a segmentation head to the "static NeRF". The latter is learned from a number of images capturing the object in a reference configuration, while part segmentation and relative transformations are learned from a number of images capturing the object in a different configuration. The proposed method approaches the problem of articulated object reconstruction with stage-wise training, in which segmentation and pose estimation are learned in a decoupled way. The experimental evaluation shows that this approach performs better than competing methods. In addition, the proposed method is able to reconstruct objects with multiple parts. Weaknesses: The text contains a few typos and other errors, like incomplete sentences, missing verbs, etc. Some editing of the text is required. Additionally, the clarity of the presentation can be improved. It is true that the stage-wise learning schema is somewhat involved, hence some additional effort should be devoted to explaining the steps as clearly as possible, especially in Section 4.2. One suggestion is to provide a more detailed explanation of Figure 3 and to provide the dimensions of the matrices U, F, etc. Also, the formula in L.167-168 does not seem to be correct, especially regarding how the viewpoint v' is used. Should v' be an argument of a function? Another aspect that is missing regards the efficiency of the proposed method. 
It is important to discuss training and inference times in comparison to other baselines. On a similar note, a discussion of how the number of iterations used at each stage affects the final result could be included in the ablations. Regarding the experimental evaluation, ideally, the comparison could also consider state-of-the-art articulated object reconstruction methods which are not based on NeRFs, such as [R1] and [R2]. Finally, not enough details are shared with respect to the models used, making reproducibility of the approach challenging. [R1] Kawana & Harada, Detection based part-level articulated object reconstruction from single RGBD image. NeurIPS 2023 [R2] Kawana, Mukuta, & Harada, Unsupervised pose-aware part decomposition for man-made articulated objects, ECCV 2022 ### Minor comments - L.10: incomplete sentence - L.71: needs rephrasing - L.105: "And the opacity value ..." incomplete - L.199: It is fine that the same subset of the 3D PartNet-Mobility dataset used in [15] is considered for the comparison, but it would be interesting to report results on a wider selection of objects - Section 5: the order in which figures are presented does not follow the text; this creates some confusion for the reader - L.270: "performs consistently performs" - L.292: "presented in second and third row" does not follow the structure of the figure Technical Quality: 3 Clarity: 2 Questions for Authors: - How does the number of iterations used at each stage affect the final result? Is the method sensitive to these hyper-parameters? - Can the method handle joints with more than one degree of freedom? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Limitations are discussed in the text. I think it would be important to also discuss challenges and limitations of multi-part object reconstruction. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the review! 1. **Typos and other errors**: Thanks! We will fix them in the final version. 2. **Clarity:** ```explaining the steps in Section 4.2, provide dimensions of matrices U, F, the formula in L.167-168 ... how the viewpoint v' is used``` Let $k_\ell$ denote the number of 3D points in $X_\ell$. We use homogeneous coordinates for representing $X_{\ell}$ and convert it to a $4 \times k_\ell$ matrix by appending an extra 1 as scale to each of its 3D columns before plugging it into the projection formula $U_{\ell} = K M_{\ell}^{-1} {v'} X_{\ell}$. In the projection formula, the camera intrinsic matrix $K$ is $3 \times 4$, the inverse part-pose transformation matrix $M_\ell^{-1}$ is $4 \times 4$, the camera viewpoint (extrinsic parameters) $\mathbf{v'}$ is $4 \times 4$, and the projection result $U_\ell$ is $3 \times k_\ell$. Finally, we normalize each column of $U_\ell$ by its last entry to recover the actual pixel location after projection. We will clarify these points in the final version. 3. **Efficiency**: ```Another aspect that is missing regards the efficiency of the proposed method``` Our method takes on average about 15 minutes for training. The inference times for 2-, 3-, and 4-part objects are 0.58, 0.87, and 1.23 seconds, respectively. In comparison, PARIS takes 5 minutes and 6 GB of VRAM for training, and its inference time for a 2-part object is 10.20 seconds on average. The above figures were measured with an AMD 7950X CPU and an Nvidia RTX 4090 GPU. We will include these details in the supplementary. 4. **Hyperparameter sensitivity**: ```How does the number of iterations ... Is the method sensitive to these hyper-parameters?``` We use the same number of iterations, 6000 for optimizing $M_\ell$ and 2000 for optimizing $s_\ell$, over all object instances (see Line 193). This ensures that the training converges. We observe that the variations in the results are negligible after the given number of iterations. 
We will provide a quantitative sensitivity analysis in the final version. 5. **Implementation details**: ```not enough details with respect to the models``` We built our part-aware NeRF model on the proposal estimator in [R3] and the NGP-based radiance field in [R4]. We extended the resulting model for segmentation by adding 2-layer MLPs with ReLU activations and a hidden dimensionality of 64 in both the estimator and the radiance field. More details will be added to the supplementary. We will further release our code and data with the final version. [R3] Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields. CVPR 2022 [R4] Instant Neural Graphics Primitives. ACM Trans. Graph. 2022 6. **Comparison to prior work**: ```compared to state-of-the-art articulated object reconstruct methods ... such as [R1] and [R2]``` Thanks for pointing out the related work. The method in [R1] learns to model articulated objects from a single RGBD image by detecting parts and reconstructing them separately. Unlike our method, which is unsupervised, it is supervised and requires ground truth for joint parameters, joint types, and part-level oriented bounding boxes. In addition, our method estimates 3D from a set of 2D images without requiring measurements from a depth sensor. The method in [R2] is an unsupervised method that learns to model articulated objects from point cloud data. It assumes a dense and accurate point cloud per object instance as input, while our method focuses on the more challenging task of estimating 3D from a set of 2D images alone. Hence, their results are not comparable to ours due to the different input types and levels of supervision. We will include these works in our related work section. [R1] Kawana & Harada, Detection based part-level articulated object reconstruction from single RGBD image. NeurIPS 2023 [R2] Kawana, Mukuta, & Harada, Unsupervised pose-aware part decomposition for man-made articulated objects, ECCV 2022 7. **More extensive experiments**: ```... 
report results on a wider selection of objects``` Thanks for the suggestion. We reported our results on the benchmark used by the prior work, and additionally evaluated our method on objects with multiple parts and a real object in the submission. We also provide an additional experiment on a door whose static part is smaller than the movable part, as well as on a simple 4-part chained object, in the one-page PDF. In the final version, we will include examples from more categories, such as door, fan, and monitor, that were not included in the original subset. 8. **Modeling multiple degrees of freedom**: ```Can the method handle joints with more than one degree of freedom?``` Our method is not limited to estimating a single degree of freedom (DoF), as we do not restrict the part transformations to a single DoF. However, the dataset includes single-DoF joint objects only. Hence, we provide an extra experiment on a chained 4-part object and estimate the 6-DoF $se(3)$ transformation for each movable part in the one-page PDF. 9. **Limitations:** ```... to discuss also challenges and limitations of multi-part object reconstruction.``` One challenge for multi-part object reconstruction is reconstructing the occluded parts, as occlusions between parts are more likely to occur as the number of parts increases. For example, in the two-drawer storage shown in Figure 11 of the supplementary, the drawers are partially occluded in both articulation states. Another challenge is taking physical constraints into consideration: for example, the segmented parts should not collide with other parts, and connected joints should not be detached during articulation pose estimation. Third, recovering complex kinematic structures like chained parts is still an open problem. Finally, a dataset consisting of objects with more complex kinematic structures is currently missing; such a dataset would help advance progress in articulated object modeling. 
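The homogeneous projection described in item 2 of the rebuttal above can be sketched in a few lines of NumPy. The intrinsics, part pose, and viewpoint below are toy placeholders (identity transforms and a trivial intrinsic matrix), not values from the paper:

```python
import numpy as np

k = 3  # number of 3D points in X_l
X3d = np.array([[0.0, 1.0, -1.0],
                [0.5, 0.5, 0.5],
                [2.0, 2.0, 4.0]])             # 3 x k points, z > 0
X = np.vstack([X3d, np.ones((1, k))])         # append 1 as scale -> 4 x k homogeneous
K = np.hstack([np.eye(3), np.zeros((3, 1))])  # 3 x 4 toy intrinsic matrix
M_inv = np.eye(4)                             # inverse part-pose transform, 4 x 4
v = np.eye(4)                                 # camera viewpoint (extrinsics), 4 x 4

U = K @ M_inv @ v @ X                         # projection result, 3 x k
U = U / U[-1:, :]                             # normalize each column by its last entry
pixels = U[:2, :]                             # 2D pixel locations
```

With identity pose and extrinsics this reduces to a pinhole division by depth, which makes the column-wise normalization step easy to verify by hand.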
--- Rebuttal 2: Title: Post-rebuttal comments Comment: The author responses have addressed my main concerns and clarified some important aspects regarding notation, comparison to prior work and implementation details. For these reasons, I increase my rating to 6.
Rebuttal 1: Rebuttal: # Global comment We sincerely thank all reviewers for their valuable and insightful comments. We are particularly encouraged by the positive feedback received and appreciate the opportunity to address the concerns raised. We address the common issues comprehensively in the following sections. ## More examples for evaluation (JqAE, DWX2, 1D4A). - Reviewer **JqAE**: ```it would be interesting to report results on a wider selection of objects.``` - Reviewer **DWX2**: ```The examples presented still have a rather simple articulated structure.``` - Reviewer **1D4A**: ```the method only works in limited cases. Most of the examples have one or two joints and a large static part.``` To address the concerns about evaluation, we carried out extra experiments on a door with its frame, which is not included in the PARIS evaluation set (**JqAE**) and has a larger movable part relative to the static part (**1D4A**). The quantitative results of our method consistently outperform PARIS. Additionally, we use our method to estimate the motions of 4-part chained objects. Since recovering the chained kinematic structure is out of scope for our work, we estimate the 6-DoF se(3) transform for each movable part (**JqAE**, **DWX2**, **1D4A**). Qualitative evaluations show the robustness of our method's motion estimation; details can be found in the one-page PDF. ## Artifacts in visualization (DWX2, 1D4A) - Reviewer **DWX2**: ```As can be observed in the supplementary video, the segmentation is still noisy, containing floaters that have nothing to do with the moving part.``` - Reviewer **1D4A**: ```There are still many artifacts that can be seen in the visualization results``` The artifacts during rendering are indeed caused by the imperfect segmentation in the NeRF space, though they do not harm the pose and joint estimation performance. 
A challenge of segmenting in the implicit 3D NeRF space is that it is non-trivial to apply regularization to smooth the segmentation results, as is done in standard segmentation methods that work in explicit 2D space. In principle, the artifacts can be reduced with more images from different articulated poses. In future work, we plan to move to explicit 3D representations such as Gaussian Splatting to better regularize the segmentation masks. Pdf: /pdf/4a9f58fbabfde3dc7b688f24146e743a0e7c6843.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Computational Aspects of Bayesian Persuasion under Approximate Best Response
Accept (poster)
Summary: The paper considers the problem of Bayesian persuasion (BP) under \delta-best responses of the receivers. This means that there might be multiple actions that are \delta-best responses to a specific signalling scheme, which creates non-trivial challenges for the algorithmic problem of computing the optimal signalling scheme. The paper provides poly-time algorithms for a constant number of actions or states and a quasi-polynomial algorithm for the general case. Strengths: The paper extends the problem of Stackelberg equilibria with \delta-best responses to the special case of Bayesian persuasion. This is an interesting problem and has a somewhat different flavour than the general Stackelberg setting. Weaknesses: I would like to know more about the connection to the robust Stackelberg paper. Why do you need to prove hardness again rather than reducing from the hardness result of the robust Stackelberg paper? What are the differences between the results there and here? The current work certainly cites that paper but fails to properly discuss what is implied and what is not. I feel that I need to be convinced that, even if the results and the techniques of the two papers are similar, this one deserves a spot at NeurIPS. I hope that the authors will discuss this in detail in the rebuttal and then add such a discussion to the new version of the paper. Technical Quality: 3 Clarity: 3 Questions for Authors: see weaknesses Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments. **Need for a new reduction**: the fundamental reason that the two problems are not necessarily "comparably hard" is that the computational complexity depends crucially on (1) the representation and (2) additional special structure. It is certainly true that the Bayesian persuasion problem can be viewed as a Stackelberg game, but: - The flat representation of the Stackelberg game corresponding to a Bayesian persuasion instance has a huge strategy space. In particular, each candidate strategy of the sender is a (randomized) mapping from states to signals. This means the sender's strategy space is at least exponentially larger even in the classical setting where the revelation principle holds. As a result, a polynomial-time algorithm under the Stackelberg game representation might very well be an exponential-time algorithm under the Bayesian persuasion representation. This in particular means the Bayesian persuasion version might not be easier than the Stackelberg game version. - On the other hand, Bayesian persuasion is a *special* class of Stackelberg games, which means it might exhibit additional structure that makes the computational problem much easier despite the difference in representations. This in particular means the Bayesian persuasion version might not be harder than the Stackelberg game version. Indeed, our proof of hardness directly reduces from Subset Sum. **Comparison with Gan et al.**: below we provide a detailed comparison to the work by Gan et al. We will focus on concrete differences, which are easier to describe and verify. We will also include a shortened version in the main paper (if there's space), as well as the full comparison in an appendix. - Model: allowing the "agent" / "follower" / "receiver" to choose a response that is suboptimal by a given amount is a standard approach in algorithmic game theory when robustness is desired. Both our work and that of Gan et al. 
take this approach. However, the first major difference (both conceptual and technical) already shows in the respective models: the succinct representation of a Bayesian persuasion instance has the additional component of *states*. This in particular means the sender's strategy is a randomized mapping from states to posterior beliefs, which is, superficially speaking, of much higher dimension than a Stackelberg equilibrium. In fact, since the classical revelation principle is no longer valid, one may suspect the former strategy space is infinite-dimensional (we show this is not the case). As we argue below, this has significant technical implications for the computation of an (almost) optimal strategy. - Structure of optimal strategies: our positive results rely crucially on a structural property of optimal strategies that we prove (Lemma 3.2), which doesn't have a counterpart in robust Stackelberg games. The property says that while there are infinitely many possible signals, restricted to optimal strategies, many of them can be grouped, and we only need to consider a finite number of representative signals. This in particular means the effective strategy space of the sender is finite-dimensional. Note that here, the receiver may choose different actions depending on the signal sent. In contrast, the leader's strategy space in a robust Stackelberg game is naturally finite-dimensional, and the follower always chooses a fixed action in response to the leader's strategy. - Algorithm for a fixed number of actions: our algorithm for a fixed number of actions (Proposition 3.3) is a natural combination of the classical LP for Bayesian persuasion and the above structural property. In particular, we solve a single LP for the sender's optimal strategy. For comparison, the fixed-$n$ algorithm by Gan et al. 
generalizes the algorithm by Conitzer and Sandholm ("Computing the Optimal Strategy to Commit to"), which enumerates the follower's response and solves one LP for each possibility. - Algorithm for fixed number of states: here we deviate significantly from existing techniques. In particular, our algorithm relies crucially on the notion of symmetric difference graphs and connectivity therein. To our knowledge, such techniques have not been employed in the context of Bayesian persuasion or Stackelberg games. In contrast, there are no states in Stackelberg games to begin with. Note that the parameter $m$ in Stackelberg games plays an intrinsically different role than the number of states in our model, and Gan et al. present no efficient algorithm when $m$ is fixed (though the comparison itself may not be meaningful in the first place). - Hardness result: first we note that it's not uncommon for a well-motivated problem to be computationally hard, and the fact that both our problem and Gan et al.'s are hard doesn't necessarily mean the two are otherwise similar (also see our response to the first question above). Our hardness result is based on a fundamentally different reduction from Gan et al.'s. In particular, we reduce from the problem of Subset Sum, whereas Gan et al. reduce from Exact Cover by 3-Sets. Details of the two reductions bear virtually no similarity. - Approximation algorithm: our algorithm shares the same high-level idea with Gan et al.'s (as well as many other approximation algorithms involving the probability simplex): one first discretizes the probability simplex into a reasonable number of representative points, and then considers the problem restricted to these points. Despite this high-level similarity, the concrete algorithms are sufficiently different. 
Specifically, the two algorithms are based on their respective exact (and inefficient) versions, which means our algorithm solves a single LP, while Gan et al.'s enumerates the follower's responses and solves one LP for each possibility. Our algorithm also has to deal with additional challenges introduced by the states and the prior distribution. --- Rebuttal Comment 1.1: Comment: I'm not convinced by the authors' response about the intrinsic differences between their work and the robust Stackelberg paper. For example, the fact that you have to deal in principle with infinitely many strategies is obviously also true for Stackelberg games, and in Stackelberg games many of those can also be grouped (as in your second bullet point). However, I agree that the poly-time algorithms for a constant number of actions or states are interesting and should be given more space. That said, I cannot increase my score, which was already somewhat on the positive side. --- Reply to Comment 1.1.1: Comment: Thank you for your response. We would like to highlight the difference in the search spaces of the Stackelberg game and the Bayesian persuasion problem (although both are infinite). In Stackelberg games, the search space is constrained to the simplex of probability distributions over the principal's action set. In contrast, in Bayesian persuasion, the search space of signaling schemes is not pre-defined, because one first needs to define the signal space, and designing the signal space is itself an important part of the problem. We also appreciate the reviewer's acknowledgment of our result for small state/action spaces, and we will include a more detailed discussion in revised versions of the paper.
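The shared high-level idea noted above — discretizing the probability simplex into a reasonable number of representative points — can be sketched as follows. `simplex_grid` is an illustrative helper assuming a uniform grid with step 1/N, not code from either paper:

```python
from fractions import Fraction
from itertools import product

def simplex_grid(m, N):
    """All points of the probability simplex over m states whose
    coordinates are integer multiples of 1/N, i.e. a uniform grid
    with spacing 1/N; there are C(N + m - 1, m - 1) such points."""
    points = []
    for combo in product(range(N + 1), repeat=m - 1):
        if sum(combo) <= N:
            last = N - sum(combo)  # forces the coordinates to sum to 1
            points.append(tuple(Fraction(c, N) for c in (*combo, last)))
    return points

grid = simplex_grid(3, 2)  # posteriors over 3 states on a half-step grid
```

Restricting posteriors to such a grid turns an infinite search space into a finite one, which is the kind of restriction the approximation algorithms discussed above exploit; the grid size grows polynomially in N for fixed m.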
Summary: This paper studies a variant of the Bayesian persuasion problem where, instead of best responding, the receiver $\delta$-approximately best responds to the sender. Specifically, upon receiving a signal, the receiver takes the $\delta$-optimal action that is worst for the sender on the induced posterior belief. The authors study the complexity of computing the optimal robust signaling scheme for the sender, obtaining four results: (1) Unlike the classical best-response model, where direct-revelation signaling schemes suffice for optimality, direct-revelation schemes are sub-optimal by a factor of 2 in the approximate-best-response model. (2) The authors give a linear program of size $O(2^n nm)$ to compute an optimal robust signaling scheme (which is not a direct-revelation scheme); $n$ is the number of actions and $m$ is the number of states. Each signal in this signaling scheme is interpreted as a tuple $\sigma = (A, \tilde a)$ of a set of $\delta$-best-responding actions $A$ and a best-responding action $\tilde a$. (3) Then, based on the observation that the number of feasible tuples $\sigma = (A, \tilde a)$ cannot be more than $O(n^{O(m)})$, the authors design an algorithm to compute the optimal robust signaling scheme with $poly( n^{O(m)} )$ complexity. (4) Finally, for the case of large $n$ and $m$, the authors show NP-hardness of exactly computing the optimal robust signaling scheme and give a quasi-polynomial-time approximation algorithm. Strengths: (1) [Significance] Bayesian persuasion with an approximately-best-responding receiver is a natural extension of the classical best-response model. As shown by the authors, this problem presents significant technical challenges because the classic idea of restricting to direct-revelation schemes no longer works. So, this problem is both conceptually and technically interesting. (2) [Quality] The results are comprehensive and non-trivial. 
Both positive and negative results for the general case are given, as well as positive results for the special cases of a constant number of actions and a constant number of states. (3) [Originality] To design an efficient algorithm for the case of a constant number of states (Section 4), the authors make the key observation that the number of feasible tuples is $O(n^{O(m)})$, proved using a fundamental theorem in computational geometry. This observation is unexpected at first sight, and the connection with computational geometry is interesting. (4) [Quality] The discussion of related works, especially Appendix A, is comprehensive and clear. (5) [Clarity] The writing is clear. The introduction nicely summarizes the technical contributions and high-level ideas. Weaknesses: I don't see significant weaknesses. I only have a minor concern regarding the fit to NeurIPS. Most of the algorithmic game theory papers at NeurIPS are related to machine learning in some way, but this paper is a pure AGT paper with no obvious machine learning components (at least in my opinion). Technical Quality: 3 Clarity: 4 Questions for Authors: (Q1) What is the running time of the QPTAS you design for the general case (Theorem F.1)? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: **Suggestions:** (1) Typo: The equation between Lines 192 and 193: "$r(\mu_\sigma, a)$". (2) Typo: the $s$ in Definition 2.1 should be $\sigma$. (3) Typo: In Remark 2.2, the phrase "that achieves sender's objective" is confusing and can be deleted. (4) It would be better to explicitly write $\sigma = (A, \tilde a)$ in the first constraint of the optimization problem in Figure 1. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed and insightful comments. **Suggestions**: Thank you for the helpful suggestions; we will revise our paper accordingly. **Running time of the QPTAS**: the running time generally depends on the specific algorithm chosen to solve the LP given in Theorem F.1. Algorithms for LP are quickly evolving, but we can apply any such result as a black box to obtain running-time guarantees for our problem. From a quick search, the state-of-the-art algorithm (or something close enough) for general LP seems to run in time $\tilde{O}(n^{2 + 1/18}L)$, where $n$ is the total number of variables and constraints, and $L$ is the number of bits required to encode input numbers. Combining this with the bound on the size of the LP in Theorem F.1, we get a running time of $\tilde{O}(n^4 m^{25\log(2n) / \varepsilon^2} L)$. It might be possible to apply special-purpose LP algorithms that exploit the structure of our LP formulation to get a better running time. In any case, the running time is polynomial in $n$, $m^{\log n / \varepsilon}$, and $L$ (the number of bits required to encode input numbers). --- Rebuttal Comment 1.1: Title: I think this paper has enough technical contribution to the literature Comment: I am happy with the authors' response and keep my rating of 7. Below, I'd like to discuss my support for this paper. I read the other reviews. A common issue raised by other reviewers is the similarity to [Gan et al. (2023), Robust Stackelberg Equilibria]. I am convinced by the authors' response to reviewer zPak that their work is significantly different from [Gan et al. (2023)]. Although Bayesian persuasion and Stackelberg games are similar at a high level (they both belong to a class of Generalized Principal-Agent Problems, as pointed out by reviewer FR8F), they require different techniques. 
For example, in Stackelberg games, the leader only needs to choose one mixed strategy (and this mixed strategy is unconstrained), while in Bayesian persuasion, the sender needs to choose a distribution over posteriors (subject to the constraint that the average of posteriors is equal to the prior) and the support of this distribution might be infinite. These two scenarios are significantly different just as unconstrained and constrained convex optimization problems are significantly different. The authors also mentioned other differences with [Gan et al, 2023] in their rebuttal. Although some of the authors' techniques for robust Bayesian persuasion are inspired by [Gan et al (2023)]'s techniques for robust Stackelberg games, the latter does not apply directly to Bayesian persuasion, as the authors argue in their rebuttal. So, I think this work has enough technical contribution to the literature, although the authors didn't discuss their contributions clearly in their submitted draft. Another contribution of this work that was not emphasized enough by the authors, in my opinion, is that: in the small-state-space case ($m$ is small), the authors obtain a $poly(n^{O(m)})$ algorithm to find the optimal robust signaling scheme, where $n$ is the number of actions of the receiver. But by following [Gan et al 2023]'s approach, one can only get a $poly(2^n, n, m)$ exponential-time algorithm because the algorithm needs to enumerate all the $2^n$ subsets of the $n$ actions to check whether they can be induced as a set of $\delta$-best-responding actions. The authors observe that the number of such feasible subsets cannot be more than $n^{O(m)}$ and design an efficient algorithm (Algorithm 1) to find all of them without enumerating all $2^n$ subsets. This observation is interesting, and it is a significant improvement over [Gan et al 2023] and a good contribution to the literature. --- Reply to Comment 1.1.1: Comment: Thanks for your support of our paper! 
We'll add a more detailed discussion of our technical contributions and highlight our results in the small state space setting in revisions of the paper.
Summary: This paper studies the Bayesian persuasion problem under the condition that the receiver may respond suboptimally. The authors provide a few computational results on the problem, from its computational hardness to approximation algorithms. Strengths: The paper considers an important and realistic problem of how to optimize the sender's information design when the receiver may not respond optimally. The paper provides a few computational results on the problem, which is a valuable addition to the existing literature. Weaknesses: The result of this paper is almost a copy-paste of the paper Robust Stackelberg Equilibria by Gan et al. [2023]. Both the hardness result and the design of the approximation algorithm are identical, and it is unclear what the technical contribution of this paper is, beyond a slight change to the problem setup. Moreover, it has already been noticed by the community that the Stackelberg game and the Bayesian persuasion problem share the same structure, and it is not clear what new insight this paper provides. See e.g. [1]. While the paper by Gan et al. [2023] is cited, none of the similarities in the technical results are discussed in the paper. This can raise an ethical flag. Hence, I urge the authors to provide a thorough discussion of the technical contribution of this paper and how it differs from the existing literature. [1] Jiarui Gan, Minbiao Han, Jibang Wu, and Haifeng Xu. Generalized Principal-Agency: Contracts, Information, Games and Beyond. Technical Quality: 3 Clarity: 2 Questions for Authors: Please address my concern in the weakness section. Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 1 Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments. We respectfully disagree with the view that our paper is "almost a copy-paste" of the work by Gan et al., and that the results are "identical". While we recognize the high-level similarities between Stackelberg games and Bayesian persuasion, we believe that the community continues to find independent interest in each of these problems. In particular: **High-level differences between Stackelberg games and Bayesian persuasion**: one could certainly argue that Bayesian persuasion instances are a particular class of Stackelberg games -- in fact, one could make similar arguments about other important fields of algorithmic game theory / economics, such as dominant strategy mechanism design. However, such a reductionist approach offers little help in understanding the specific structure of Bayesian persuasion even in the most basic settings (see, e.g., "Bayesian Persuasion" by Kamenica and Gentzkow). When it comes to computation (which is the main focus of our paper), yet another issue arises: the computational complexity of a problem depends crucially on its representation. The standard representation of an instance of Bayesian persuasion is typically much more succinct than the flat representation of the corresponding Stackelberg game (where the leader's strategy space is all feasible policies of the sender in the Bayesian persuasion instance), which means an "efficient" algorithm under the latter representation may not be "as efficient" under the former representation. Below we provide a **detailed comparison to the work by Gan et al.** We will focus on concrete differences, which are easier to describe and verify. In particular, it should be clear from the differences below that the results are not "identical". - **Model**: allowing the "agent" / "follower" / "receiver" to choose a response that is suboptimal by a given amount is a standard approach in algorithmic game theory when robustness is desired. 
Both our work and that of Gan et al. take this approach. However, the first major difference (both conceptual and technical) already shows in the respective models: the succinct representation of a Bayesian persuasion instance has the additional component of *states*. This in particular means the sender's strategy is a randomized mapping from states to posterior beliefs, which is, superficially speaking, of much higher dimension than a Stackelberg equilibrium. In fact, since the classical revelation principle is no longer valid, one may suspect the former strategy space is infinite-dimensional (we show this is not the case). As we argue below, this has significant technical implications for the computation of an (almost) optimal strategy. - **Structure of optimal strategies**: our positive results rely crucially on a structural property of optimal strategies that we prove (Lemma 3.2), which doesn't have a counterpart in robust Stackelberg games. The property says that while there are infinitely many possible signals, restricted to optimal strategies, many of them can be grouped, and we only need to consider a finite number of representative signals. This in particular means the effective strategy space of the sender is finite-dimensional. Note that here, the receiver may choose different actions depending on the signal sent. In contrast, the leader's strategy space in a robust Stackelberg game is naturally finite-dimensional, and the follower always chooses a fixed action in response to the leader's strategy. - **Algorithm for fixed number of actions**: our algorithm when the number of actions is fixed (Proposition 3.3) is a natural combination of the classical LP for Bayesian persuasion and the above structural property. In particular, we solve a single LP for the sender's optimal strategy. For comparison, the fixed-$n$ algorithm by Gan et al. 
generalizes the algorithm by Conitzer and Sandholm ("Computing the Optimal Strategy to Commit to"), which enumerates the follower's response and solves one LP for each possibility. - **Algorithm for fixed number of states**: here we deviate significantly from existing techniques. In particular, our algorithm relies crucially on the notion of symmetric difference graphs and connectivity therein. To our knowledge, such techniques have not been employed in the context of Bayesian persuasion or Stackelberg games. In contrast, there are no states in Stackelberg games to begin with. Note that the parameter $m$ in Stackelberg games plays an intrinsically different role than the number of states in our model, and Gan et al. present no efficient algorithm when $m$ is fixed (though the comparison itself may not be meaningful in the first place). - **Hardness result**: first we note that it's not uncommon for a well-motivated problem to be computationally hard, and the fact that both our problem and Gan et al.'s are hard doesn't necessarily mean the two are otherwise similar. Our hardness result is based on a fundamentally different reduction from Gan et al.'s. In particular, we reduce from the problem of Subset Sum, whereas Gan et al. reduce from Exact Cover by 3-Sets. Details of the two reductions bear virtually no similarity. - **Approximation algorithm**: our algorithm shares the same high-level idea with Gan et al.'s (as well as many other approximation algorithms involving the probability simplex): one first discretizes the probability simplex into a reasonable number of representative points, and then considers the problem restricted to these points. Despite this high-level similarity, the concrete algorithms are sufficiently different. Specifically, the two algorithms are based on their exact (and inefficient) versions respectively, which means our algorithm solves a single LP, and Gan et al.'s enumerates the follower's response and solves one LP for each possibility. 
Our algorithm also has to deal with additional challenges introduced by the states and the prior distribution. --- Rebuttal Comment 1.1: Comment: I have read the rebuttal as well as the other reviews. While I appreciated the detailed comparison to the work by Gan et al. that was missing in the paper, I am still not convinced that the contribution of this paper is significant enough over Gan et al. For example, I am not sure whether Lemma 3.2 should be viewed as a positive result (unique to this paper). To me this is an easy result following from Gan et al., where the number of $\delta$-best-response regions in robust Stackelberg games is exactly the number of signals to consider in robust Bayesian persuasion. I do not think this observation is conceptually new. It is also somewhat unrealistic to consider this many signals in the persuasion problem. The hardness results are also expected (despite reductions from different hard problems), because function concavification (in Bayesian persuasion) is at least as hard as function maximization. In summary, I believe the paper needs to be further improved by explicitly exploring the connections and differences between Stackelberg games and the Bayesian persuasion problem. --- Reply to Comment 1.1.1: Comment: Thank you for your response. Lemma 3.2 is a generalization of the revelation principle to the approximate-best-response setting. The key distinction between this lemma and the result (Proposition 1) of Gan et al. is that, in Stackelberg games, the search space (the probability simplex of the principal's strategies) can be directly partitioned into sub-regions in terms of $\delta$-best-response sets, whereas in the Bayesian persuasion problem, there is no pre-defined search space since the signal space needs to be defined first. In terms of techniques, to prove that this signal space suffices, our approach requires an iterative proof that begins with any signal space and merges signals without reducing robust utility. 
This step is unnecessary in the robust Stackelberg games setting. We also want to emphasize that the impracticality of considering this many signals is exactly the motivation for our results with a small state space. We show that one can greatly reduce the number of signals to $n^{O(m)}$ by leveraging structural insights into how $\delta$-best-response regions correspond to polytopes cut by polynomially many hyperplanes in a low-dimensional space. We further abstract the connectivity of those polytopes into a symmetric difference graph, on which we then apply graph algorithms to efficiently search for the $n^{O(m)}$ useful signals. In contrast, the $2^n\cdot poly(m,n)$ complexity of Gan et al.'s algorithm does not benefit from this speed-up because the above structure is unique to the Bayesian persuasion setting.
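As a concrete reference point for the classical LP for Bayesian persuasion mentioned in this exchange, here is a minimal sketch on a toy instance with an exactly-best-responding receiver ($\delta = 0$), where direct-revelation signals do suffice. The instance (a prosecutor/judge example) and all numbers are our illustration, not from the paper; variables are joint probabilities of (state, recommended action), with obedience constraints.

```python
import numpy as np
from scipy.optimize import linprog

# States: 0 = innocent, 1 = guilty; actions: 0 = acquit, 1 = convict.
prior = np.array([0.7, 0.3])
uR = np.array([[1.0, 0.0],   # receiver wants to acquit the innocent
               [0.0, 1.0]])  # ... and convict the guilty
uS = np.array([[0.0, 1.0],   # sender always prefers conviction
               [0.0, 1.0]])

nS, nA = uR.shape
idx = lambda w, a: w * nA + a  # flatten (state, action) -> variable index

# Equality: per state, recommendations marginalize to the prior.
A_eq = np.zeros((nS, nS * nA))
for w in range(nS):
    for a in range(nA):
        A_eq[w, idx(w, a)] = 1.0

# Obedience: following recommendation a beats deviating to a2.
A_ub, b_ub = [], []
for a in range(nA):
    for a2 in range(nA):
        if a2 == a:
            continue
        row = np.zeros(nS * nA)
        for w in range(nS):
            row[idx(w, a)] = uR[w, a2] - uR[w, a]
        A_ub.append(row)
        b_ub.append(0.0)

# Maximize expected sender utility (linprog minimizes, so negate).
c = -np.array([uS[w, a] for w in range(nS) for a in range(nA)])
res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
              A_eq=A_eq, b_eq=prior, bounds=[(0, None)] * (nS * nA))
print(round(-res.fun, 4))  # optimal sender value: 0.6
```

The rebuttal's point is precisely that this direct-revelation formulation breaks down once the receiver only $\delta$-approximately best responds, which is why the paper's LP ranges over signals $\sigma = (A, \tilde a)$ instead.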
Summary: This paper studies Bayesian persuasion settings under approximate best response, where the receiver may choose suboptimal actions based on their beliefs. The authors develop efficient algorithms to compute an (almost) optimal sender commitment. First, they show the failure of the revelation principle. Furthermore, the paper develops polynomial-time exact algorithms for small state or action spaces and a quasi-polynomial-time approximation scheme (QPTAS) for the general problem. It also shows that no polynomial-time exact algorithm exists for the general problem unless P = NP. Strengths: - The paper studies an interesting problem for the Bayesian persuasion community. - The paper shows some interesting results and characterizations. Weaknesses: - The two main results, the QPTAS and the hardness result, are presented at the end of the paper without explanations or intuitions. I believe this aspect should be improved. - Why, in the algorithm for small state spaces, do you need the 'explore' algorithm and the 'symmetric difference graph'? Maybe I am wrong, so please correct me if I am, but I think you can simply take the vertices of the regions $\Delta_{(A,a)}$ (which are clearly exponential in $m$) and instantiate an LP with those vertices in the space of posterior distributions (see, e.g., Section 2.1 in [1]). This approach would require at most half a page and would simplify the current approach. - It is unclear to me how your approach differs from the one used by [2] when either the number of states or actions is fixed. - Finally, the assumption that $\delta>0$ is known limits the contribution of the work. [1] Castiglioni, M., Celli, A., Marchesi, A., and Gatti, N. Online bayesian persuasion. Advances in Neural Information Processing Systems, 33, 2020. [2] Jiarui Gan, Minbiao Han, Jibang Wu, and Haifeng Xu. Robust stackelberg equilibria. In Proceedings of the 24th ACM Conference on Economics and Computation (EC), page 735, 2023. 
Technical Quality: 3 Clarity: 2 Questions for Authors: - Is the approach discussed above with small state spaces a possible approach? - Is your approach employable for a multi-type receiver? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed and insightful comments. **QPTAS and hardness not explained**: we agree this is suboptimal. We made a hard choice here due to the strict page limit. There is a high-level description of the QPTAS in Appendix F. We find the proof of the hardness result particularly interesting from a technical perspective, though we couldn't find a more concise way to describe the main idea due to the nature of such hardness reductions. We will use the extra page afforded by the camera-ready version to provide more explanations in the main paper. **Small state spaces algorithm**: we are not sure if we correctly understand your proposal, but based on what we understand: our algorithm is essentially doing what you proposed. The "explore" subroutine is needed precisely because we need to *find* the vertices of the regions $\Delta_{(A, \tilde a)}$ -- we don't know which $(A, \tilde a)$ pairs to consider a priori, and we can't consider all of them because just the enumeration would cost too much (time exponential in $n$). Therefore, we abstract the structures of polytopes in the “space of posterior distributions” into the “symmetric difference graph”, on which we can utilize the graph algorithms to efficiently find all feasible signals, i.e., “vertices” that you described. We are happy to continue the discussion if you find the above unsatisfactory. **Fixed-parameter algorithms, comparison to Gan et al.**: very superficially, our algorithm has to deal with states, while Gan et al.'s doesn't. In particular, we are not aware of techniques similar to the "explore" subroutine used in the fixed-number-of-states algorithm (given that it is in fact necessary) in similar contexts. 
Even our fixed-number-of-actions algorithm combines the classical LP for Bayesian persuasion with the structural property that we prove (Lemma 3.2), whereas Gan et al.'s algorithm generalizes the classical algorithm for Stackelberg equilibrium by Conitzer and Sandholm ("Computing the Optimal Strategy to Commit to"). As a result, our algorithm solves a single LP (which is in a sense necessary), and Gan et al.'s enumerates the follower's response and solves one LP for each possibility. Please also refer to our response to Reviewer zPak for more detail. **Knowing $\delta$**: the assumption can be partially relaxed since the sender can estimate $\delta$, e.g., through binary search. Once a guess of $\delta$ has been made, the sender can implement the corresponding strategy and determine if the guess is too optimistic by observing whether the realized payoff is lower than expected. Note that here we need certain monotonicity properties: if the guess is too small, then the payoff will be upper bounded by the actual optimal payoff, which is upper bounded by the optimal payoff under the guess; if the guess is too large, then the implemented strategy also works for any smaller $\delta$ (including the actual one), and the resulting payoff will be at least as good. We will discuss this if space permits. **Multi-type receiver**: our results should generalize to the case of a fixed number of types. The idea is to construct the type-free LP for each type separately, and then consider the "product LP" where each product signal is a vector of signals, one for each type. One would also modify the objective function to incorporate the prior distribution over types. --- Rebuttal Comment 1.1: Comment: I thank the authors for their responses. After carefully reading all the reviews and rebuttals, I still believe that the paper presents several technical similarities to the one by Gan et al. (2023). 
This is somewhat expected, given that the two settings share many similarities, but it certainly limits the contribution of the work. For this reason, I will keep my score unchanged.
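The $\delta$-estimation idea from the rebuttal to this review (guess $\delta$, deploy the corresponding strategy, and shrink the guess while realized payoffs match predictions) can be sketched as a monotone binary search. The `deploy` oracle below is hypothetical, standing in for the observation step the rebuttal describes; this is our illustration of the argument, not the paper's procedure.

```python
def estimate_delta(deploy, lo=0.0, hi=1.0, tol=1e-3):
    """Binary-search a guess for delta. Assumes, per the rebuttal's
    monotonicity argument: a too-small guess yields lower-than-predicted
    realized payoff, while any guess >= the true delta performs as
    predicted. deploy(guess) -> True iff realized payoff matched the
    prediction under that guess."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if deploy(mid):
            hi = mid   # guess is large enough; try a smaller one
        else:
            lo = mid   # guess was too optimistic; increase it
    return hi          # smallest guess found to be "safe"
```

The returned guess over-approximates the true $\delta$ by at most `tol`, which by the rebuttal's second monotonicity property costs the sender only the gap between the two robust optima.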
NeurIPS_2024_submissions_huggingface
2024
Separate and Reconstruct: Asymmetric Encoder-Decoder for Speech Separation
Accept (poster)
Summary: This paper proposes a novel transformer-based architecture, SepReformer, for speech separation based on a Siamese decoder network that operates on separated speech signals in the encoded space. Strengths: A major strength of the paper is its state-of-the-art performance on a variety of datasets, reaching as high as 25 dB on WSJ0-2mix. In addition, SepReformer is quite computationally efficient when compared with other baseline methods. The supplementary materials include wav files which show the fidelity of the reconstructed examples. The ablation studies provide good insights into the contributions of different components, such as the depth of the encoder-decoder. Details of the method are provided with a high degree of description and reproducibility. Weaknesses: The first major weakness is the presentation quality of the paper, starting with the abstract, which goes right into a discussion of feature length and computation without setting up the problem and overview. I think a lot of the high-level picture is missing, especially related to the particular insights that help this method work better. Beyond the presentation, a main concern I have about this method is extending beyond 2 sources. The experiments focus on two-speaker separation. It's unclear how well the method would generalize to scenarios with more than two speakers, which is an important real-world use case. It's also particularly important here because the authors propose separating the sources first and then using a Siamese network for decoding the features back to speech. With more than 2 speakers, it's quite likely that the method would have a harder time as the features for each source get separated out earlier in the process. The experiments and comparisons primarily focus on SI-SNRi. I'd like to see other metrics like PESQ or STOI considered, as SNR does not always represent perceptual quality. 
Technical Quality: 3 Clarity: 1 Questions for Authors: Are there any experiments conducted on more than 2 speakers? Are there any metrics considered beyond SNR? Confidence: 3 Soundness: 3 Presentation: 1 Contribution: 3 Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
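For reference, the SI-SNR metric that this review and the rebuttal discuss has a standard definition (scale-invariant signal-to-noise ratio); a minimal sketch follows. The helper name and epsilon guard are ours, not from the paper.

```python
import numpy as np

def si_snr(est, ref, eps=1e-8):
    """Scale-invariant SNR in dB between an estimated and a reference signal.
    The reference is rescaled by the projection of the estimate onto it, so
    the metric ignores overall gain differences."""
    ref = ref - ref.mean()
    est = est - est.mean()
    s_target = (np.dot(est, ref) / (np.dot(ref, ref) + eps)) * ref
    e_noise = est - s_target
    return 10 * np.log10((np.dot(s_target, s_target) + eps)
                         / (np.dot(e_noise, e_noise) + eps))

# SI-SNRi (the improvement metric reported in the paper's tables) is
# si_snr(separated, source) - si_snr(mixture, source).
```

Because the metric is purely signal-level, two estimates with the same SI-SNR can differ perceptually, which is the reviewer's motivation for also asking about PESQ and STOI.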
Rebuttal 1: Rebuttal: **Q1: The first major weakness is the presentation quality of the paper. Starting with the abstract, which goes right into a discussion of feature length and computation without setting up the problem and overview. I think a lot of the high level picture is missing, especially related to the particular insights that help this method work better.** A1: Thank you for your feedback on the presentation quality of the paper. We recognize that the abstract lacks a problem formulation and overview, which can make it challenging for readers to grasp the context and significance of our work. In the revised paper, we will include an earlier explanation of the TasNet concept in the abstract to enhance clarity. We also agree that the TasNet overview should be more detailed; therefore, we will expand on the concept of TasNet and the task of speech separation in the introduction. Moreover, we agree that we did not provide a sufficient high-level explanation of why proactively expanding the speaker dimension with an early split (the ESSD method) is more advantageous than a late split. To show the effectiveness of the proposed method, we will clarify the task and challenge of speech separation, emphasizing that it requires generating more information than is provided in the original input data. We will also clarify that (1) generating the separated features all at once just before the audio decoder (late split) can be overwhelming for the separator, and (2) instead, splitting the features early and using a reconstructing decoder can ease the task for the encoder of the separator. Figure 6 in the appendix demonstrates these points. By clarifying these points, we aim to improve the presentation quality and make the high-level picture and insights of our method more accessible to readers. **Q2: Beyond the presentation, a main concern I have of this method is extending to beyond 2 sources. 
The experiments focus on two-speaker separation. It's unclear how well the method would generalize to scenarios with more than two speakers, which is an important real-world use case. Are there any experiments conducted on more than 2 speakers?** A2: We appreciate your concern about extending our method to more than two speakers. We agree that demonstrating effectiveness in multi-source scenarios is crucial for real-world applicability. While our current paper focuses on two-speaker separation, we recognize the importance of exploring multi-speaker scenarios. We plan to include detailed experimental results and discussions on this topic in our future research, as noted in the conclusion of Section 6. We are confident in the scalability of our method. Our ESSD mechanism is designed to handle multiple speakers by naturally extending the feature dimension and increasing computation in proportion to the number of sources. This contrasts with conventional methods, which do not adjust the feature dimension based on the number of speakers, often limiting their effectiveness as the number of sources increases. We are currently working on experiments using the WSJ-{3,4,5}MIX datasets, which include scenarios with more than two sources. Preliminary results are promising; for example, our SepReformer-B achieved an SI-SNRi of 23.5 dB on WSJ-3MIX, nearing state-of-the-art performance. We anticipate further improvements with larger models and additional research. **Q3: The experiments and comparisons primarily focuses on SI-SNRi. I'd like to see other metrics like PESQ or STOI considered as SNR does not always represent perceptual quality. Are there any metrics considered beyond SNR?** A3: We primarily focused on SI-SNRi because most studies on speech separation (unlike speech enhancement) evaluated separation performance using only signal-level metrics of SNR, particularly for simple instantaneous mixtures without background noise and reverberation. 
However, we agree that reporting additional metrics such as PESQ and STOI for datasets like WHAMR!, which include noise and reverberation, would be valuable. The table below shows PESQ and eSTOI results, demonstrating competitive performance compared to the recent powerful TF-GridNet model [1].

| | SI-SDR (dB) | SDR (dB) | PESQ | eSTOI |
| --- | --- | --- | --- | --- |
| Unprocessed | -6.1 | -3.5 | 1.41 | 0.317 |
| TF-GridNet [1] | 10.6 | 11.7 | 2.75 | 0.793 |
| SepReformer-L | 11.0 | 12.5 | 2.77 | 0.796 |

Therefore, we will include the PESQ and STOI results for the WHAMR! dataset in the appendix. Additionally, evaluating speech recognition accuracy for separated speech using real data like LibriCSS [2] would be meaningful for real-world applications. We will include WER evaluation results for the LibriCSS dataset in the appendix. Although we cannot provide WER results now due to the need for additional training with background noise and reverberation, we include previously evaluated comparison results of an early version of SepReformer with DPRNN for reference (WER on LibriCSS; columns are overlap ratios in %):

| Condition | 0S | 0L | 10 | 20 | 30 | 40 |
| --- | --- | --- | --- | --- | --- | --- |
| Oracle | 4.9 | 5.1 | - | - | - | - |
| Input | 11.8 | 11.7 | 18.8 | 27.2 | 35.6 | 43.3 |
| DPRNN | 10.6 | 10.4 | 12.7 | 16.6 | 20.8 | 23.5 |
| Early version of SepReformer | 9.8 | 10.1 | 10.9 | 12.5 | 14.4 | 17.5 |

Note that the performance of SepReformer is expected to be better than shown. --- [1] Z.-Q. Wang, S. Cornell, S. Choi, Y. Lee, B.-Y. Kim and S. Watanabe, "TF-GridNet: Integrating Full- and Sub-Band Modeling for Speech Separation," in IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 31, pp. 3221-3236, 2023. [2] Z. Chen et al., "Continuous Speech Separation: Dataset and Analysis," ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 2020, pp. 7284-7288.
Summary: This paper presents a novel approach to time-domain speech separation, departing from the conventional chunk-based dual-path processing. The authors introduce an asymmetric encoder-decoder architecture, where the encoder analyzes features and splits them based on the number of speakers. A Siamese decoder reconstructs the separated sequences, learning to discriminate features without explicit speaker information. The use of global and local Transformer blocks for long-sequence processing eliminates the need for chunking, contributing to a more efficient and effective model. Strengths: 1. The paper introduces an innovative asymmetric encoder-decoder framework for time-domain speech separation, deviating from the standard chunk-based dual-path models. 2. Efficient Feature Discrimination: The Siamese decoder enables the model to learn to discriminate features directly without relying on explicit speaker information, leading to a more streamlined and potentially more robust separation process. 3. The proposed model achieves good performance on benchmark datasets while requiring significantly less computation than previous approaches. This demonstrates the potential for this method to be applied in real-world applications where computational resources may be limited. Weaknesses: The paper presents some interesting ideas, but their novelty and significance are questionable: 1. Transformer Usage: While the use of Transformer blocks is highlighted, similar architectures have been successfully employed in previous works like Sepformer, raising questions about the uniqueness of this contribution. 2. Limited Evaluation: The experimental results primarily focus on two-speaker separation, which is considered a relatively solved problem in the current state-of-the-art. The absence of evaluations on more challenging scenarios with three or more speakers limits the generalizability and impact of the findings. 3. 
Incomplete Comparison: The paper's claims of achieving state-of-the-art results are undermined by the lack of comparison with other important papers in the field. Notably, models like the "DIFFUSION-BASED SIGNAL REFINER FOR SPEECH SEPARATION" have reported superior performance (SI-SDR of 23.1dB), raising concerns about the validity of the SOTA claim. Technical Quality: 3 Clarity: 3 Questions for Authors: -- Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: -- Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: Transformer Usage: While the use of Transformer blocks is highlighted, similar architectures have been successfully employed in previous works like Sepformer, raising questions about the uniqueness of this contribution.** A1: Thank you for your comment. We agree that the usage of Transformer blocks is now very common in the speech separation field. The Sepformer model primarily replaces the LSTM in a dual-path structure, similar to DPRNN, with classic Transformer blocks. However, our proposed method does not use a dual-path approach. Instead, we designed Global and Local Transformers with EGA and CLA, replacing the simple multi-head self-attention (MHSA), to handle global and local modeling more effectively. Additionally, we have enhanced the vanilla FFN by integrating it with a gating mechanism to create the GCFN, which efficiently models adjacent frames. Therefore, our approach is substantially different from a straightforward application of the Transformer, and we believe these enhancements represent significant contributions. Experimental results in Tables 3(b), 3(c), and 5 show the effectiveness of the proposed Local and Global Transformers alone, without the SepRe method. **Q2: Limited Evaluation: The experimental results primarily focus on two-speaker separation, which is considered a relatively solved problem in the current state-of-the-art. The absence of evaluations on more challenging scenarios with three or more speakers limits the generalizability and impact of the findings.** A2: We appreciate your concern regarding the extension of our method to scenarios with more than two speakers. Although in this paper we focused on experiments using datasets with noisy and noisy-reverberant mixtures to show our model's generalizability, we are also confident that our ESSD mechanism will perform effectively even with more than two speakers, compared to conventional methods. 
Our ESSD method naturally extends the feature dimension and requires increased computation proportional to the number of sources to be separated. Meanwhile, conventional methods do not adjust the feature dimension based on the number of speakers. This limitation in the conventional methods is significant because the separation difficulty and network requirements vary with the number of sources. To validate this, we are conducting experiments on the WSJ-{3,4,5}MIX datasets, which include scenarios with more than two sources. We expect that our model will still show SOTA performance in these multi-source scenarios. Specifically, on WSJ-3MIX, our SepReformer-B showed an SI-SNRi of 23.5 dB, which is comparable to the SOTA performance, and we expect that the score will increase further for a larger model. Although these results were not included in the paper, the experimental results and their discussion will be included in our future work, as mentioned in the conclusion of Section 6. **Q3: Incomplete Comparison: The paper's claims of achieving state-of-the-art results are undermined by the lack of comparison with other important papers in the field. Notably, models like the "DIFFUSION-BASED SIGNAL REFINER FOR SPEECH SEPARATION" have reported superior performance (SI-SDR of 23.1dB), raising concerns about the validity of the SOTA claim.** A3: Thank you for your comment. We acknowledge the importance of comprehensive comparisons in validating state-of-the-art (SOTA) claims. The paper you mentioned [1] evaluates the separation performance based on the WSJ0-2MIX dataset, reporting an SI-SDRi of 23.1 dB. To address your concern, we have indeed compared our results with competitive models in Table 4 of our paper. - Our SepReformer-S model, which has a smaller model size and lower computational requirements than many recent powerful networks, shows an SI-SDRi of 23.0 dB, which is comparable to the performance of the diffusion-based model you mentioned.
- Furthermore, our SepReformer-L model demonstrates an SI-SDRi of 25.1 dB, which, to the best of our knowledge, represents state-of-the-art performance with a significant margin over existing models. We recognize the diffusion-based model’s performance as reported in [1] and [2], with results of 23.1 dB and 23.9 dB, respectively. However, it is important to note that these models did not include evaluations on noisy and noisy-reverberant datasets, and they did not provide details on model size and computational resources. Therefore, we chose not to include them in our comparison table. Nevertheless, we agree that it is still valuable to provide the results of the diffusion-based model in Table 4 according to your comment. Therefore, we will include the result of [2], which shows a more competitive result than [1]. Thank you again for your valuable feedback. [1] Hirano, Masato, et al. "Diffusion-based Signal Refiner for Speech Separation." *arXiv preprint arXiv:2305.05857,* 2023. [2] Lutati, Shahar, Eliya Nachmani, and Lior Wolf. "Separate and Diffuse: Using a Pretrained Diffusion Model for Better Source Separation." The Twelfth International Conference on Learning Representations. 2024.
Summary: The paper proposes SepReformer, an efficient time-domain separation network. The model is an encoder-decoder architecture that splits the output features of the encoder based on the number of speakers before feeding them to the decoder. Both encoder and decoder networks are comprised of transformer blocks that capture global and local characteristics of the signal in different time-scales. The proposed approach achieves state of the art results in 3 well-established datasets. Strengths: 1. Paper well-written and easy to follow. 2. Efficient architecture that produces state of the art results. 3. Thorough experimentation and ablation analysis of their approach. Weaknesses: 1. Testing on clean data. It would be interesting to see how the proposed model performs on noisy datasets. 2. Evaluation of the approach to the two-speaker separation problem. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How does the early split strategy affect the overall computational efficiency compared to late split and other conventional methods? 2. Why was the alpha value set to 0.4, and have you experimented with other values to determine the optimal setting? 3. How does the model handle varying lengths of input sequences, and what is the maximum sequence length it can effectively process? 4. Are there any limitations or failure modes of the proposed method that have been identified during experimentation? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: 1. The paper does not discuss potential failure modes or limitations observed during experimentation, Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: Testing on clean data. It would be interesting to see how the proposed model performs on noisy datasets.** A1: Thank you for your comment. We performed experiments on noisy and noisy-reverberant datasets in Table 4 to show our model’s generalizability. Please refer to it. **Q2: Evaluation of the approach to the two-speaker separation problem.** A2: We appreciate your concern regarding the extension of our method to scenarios with more than two speakers. However, we are confident that our ESSD mechanism will perform effectively even with more speakers compared to conventional methods. Our ESSD method naturally extends the feature dimension and requires increased computation proportional to the number of sources to be separated. This approach is more reasonable compared to conventional methods, which do not adjust the feature dimension based on the number of speakers. This limitation in conventional methods is significant because the separation difficulty and network requirements vary with the number of sources. To validate this, we are conducting experiments on the WSJ-{3,4,5}MIX datasets, which include scenarios with more than two sources. We expect our model to show SOTA performance in these multi-source scenarios. Specifically, on WSJ-3MIX, our SepReformer-B showed an SI-SNRi of 23.5 dB, which is near SOTA performance, and we believe the performance will increase further for larger models. Although these results were not included in the paper, the experimental results and their discussion will be included in our future work, as mentioned in the conclusion of Section 6. **Q3: How does the early split strategy affect the overall computational efficiency compared to late split and other conventional methods?** A3: Thank you for your valuable comment. First, even with the same number of parameters, using the ESSD structure will approximately increase the computation by a factor of the number of speakers in the decoder.
Therefore, simply changing a late split structure to an ESSD structure can potentially increase the computation significantly if the channel size is kept constant. Adding a Cross-Speaker module would further increase the computation slightly. However, with the ESSD structure, we can reduce the channel size while still achieving higher performance, which would significantly improve computational efficiency relative to performance. As shown by comparing the second and third rows in the table below, even with a substantial reduction in channel size and computation, the performance improved. Comparing with traditional methods, the late split method in the first row shows competitive performance against models like Conv-TasNet, SuDoRM-RF, and DPRNN, achieving better performance with less computation using the Global-Local Transformer. The ESSD + CS (SepRe) structure in the third row demonstrated comparable or better performance with significantly reduced computation. Additionally, the proposed base model outperforms TF-GridNet with much smaller computation. This highlights the efficiency of the proposed SepRe method. We will make sure to clearly highlight these points in the revised paper to provide a better understanding of how the early split strategy impacts computational efficiency.

| Case | F (scale) | Params. (M) | MACs (G/s) | SI-SNRi (dB) |
| --- | --- | --- | --- | --- |
| Ours + Late Split + origin dec | 64 (Tiny) | 3.3 | 5.1 | 19.0 |
| Ours + Late Split + origin dec | 128 (Base) | 11.6 | 18.3 | 21.6 |
| Ours + Early Split + shared dec + CS | 64 (Tiny) | 3.7 | 10.4 | 22.4 |
| Ours + Early Split + shared dec + CS | 128 (Base) | 14.2 | 39.8 | 23.8 |
| Conv-TasNet | - | 5.1 | 10.5 | 15.3 |
| SuDoRM-RF | - | 6.4 | 10.1 | 18.9 |
| DPRNN | - | 2.6 | 88.5 | 18.8 |
| Sepformer | - | 26.0 | 86.9 | 20.4 |
| SFSRNet | - | 59.0 | 466.2 | 22.0 |
| ISCIT | - | 58.4 | 252.2 | 22.4 |
| TF-GridNet | - | 14.5 | 460.8 | 23.5 |

**Q4: Why was the alpha value set to 0.4, and have you experimented with other values to determine the optimal setting?** A4: We did not perform experiments with other values for the alpha parameter. While further tuning might potentially improve performance, we did not consider this aspect critical because the alpha value also decays as the training epoch proceeds. Therefore, we did not conduct extensive experiments to optimize this parameter. **Q5: How does the model handle varying lengths of input sequences, and what is the maximum sequence length it can effectively process?** A5: Thank you for this valuable question. During training, we fixed the input length to 4 seconds for efficiency. However, during evaluation, the model processes inputs of varying lengths in one batch, similar to most existing studies. We apologize for not explicitly stating this in the paper and will ensure to clarify this point in the revised version. The average length of test samples in the WSJ0-2Mix dataset is about 6 seconds, with a maximum length of about 15 seconds. The model can effectively separate sources within this range without significant issues. For sequences longer than 10-20 seconds, we can split the input into smaller segments of 5-10 seconds for processing.
**Q6: Are there any limitations or failure modes of the proposed method that have been identified during experimentation?** A6: As mentioned in the conclusion, our method still struggles with cases involving varying numbers of speakers due to the fixed split layer. We believe addressing this is important because, in practice, identifying the number of speakers in advance can be cumbersome. Our model still has limitations in this regard because it relies on a split layer with fixed input and output shapes. This means the network requires prior knowledge of the number of speakers to be separated, and individually trained models must have corresponding split layers to address different numbers of speakers. We will make sure to clarify these potential limitations in the revised paper. --- Rebuttal Comment 1.1: Comment: Thank you for your responses. After considering your rebuttal, I maintain my score.
Summary: The authors propose a neural network architecture to separate a speech mixture containing 2 speakers. The proposed U-Net based architecture replaces the inter- and intra-chunk processing - a popular method for speech separation - with global and local attention mechanisms. They further propose a mechanism to reduce the computational cost of the model. The model does the speech separation early in the network and uses a decoder with a low parameter count - all thanks to its weight-sharing strategy - to do the speech separation. Evaluation on simulated data shows improvement over state-of-the-art methods. Strengths: 1. The Efficient Global Attention (EGA) component of the model, as discussed in Section 3.3, appears to be a cost-effective method for using the global context. This is achieved by initially subsampling to a reduced number of frames, thereby reducing the computational cost associated with transformers, and subsequently upsampling to a larger number of frames. The anticipated loss incurred from the downsampling process is compensated through the implementation of a gating mechanism. 2. Results show that all the proposed mechanisms, namely early split, multi-loss, shared decoder parameters, and the EGA module design, give improvements in SI-SDR. 3. Interestingly, when the proposed methods were applied to existing architectures such as Conv-TasNet and Sepformer, an improvement in SI-SNR was observed. 4. Appendix D also shows the computational effectiveness of the model. 5. Attached samples clearly showed the quality of separated speech using the proposed network. Weaknesses: 1. The authors have shown good results on a bunch of datasets, and the separated audios in the supplemental file are of high quality. But all these results are based on simulated data, so it makes you wonder how well the model would do with real data.
It’d be great if the authors could show how the model performs on the Chime-6 dataset or Libricss, maybe through WER metrics or even objective speech perception measures after separation. Or at least, they could show us some samples after separation on real data. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Looking at Figure 2, I’m wondering if it’s really a good idea to use the same split network for all ‘r’ values. Would it be better to have a different ‘spk’ split for each ‘r’ ? 2. I believe Eq(1) should be a minimum of the Si-SNR and \tau since you refer to it in line 201? 3. In line 213, shouldn’t the multi loss be (1-\alpha / R) L + \alpha \sum_r L_r/R ? 4. It is not very clear why the authors refer to the decoder as a Siamese decoder. Is this because they share the same weights across all speakers? This is typically how a speech separation network is structured. I don't see any contrastive learning loss functions as part of the training loss to discriminate the speakers. Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Lack of evaluation on real data is a limitation of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: The authors have shown good results on a bunch of datasets, and the separated audios in the supplemental file are of high quality. But all these results are based on simulated data, so it makes you wonder how well the model would do with real data. It’d be great if the authors could show how the model performs on the Chime-6 dataset or Libricss, maybe through WER metrics or even objective speech perception measures after separation. Or at least, they could show us some samples after separation on real data.** A1: Thank you for your valuable feedback. We appreciate your point regarding the importance of evaluating the model on real-recorded datasets. We will include word error rate (WER) evaluation results for the LibriCSS dataset in the appendix. Additionally, we previously evaluated an early version of SepReformer on LibriCSS, similar to the proposed Ours+U-Net in Table 5. Although we cannot provide the WER from the latest SepReformer at this moment as it requires additional training with background diffuse noise and reverberation, we can share a comparison of an early version of SepReformer with DPRNN, using the Librispeech dataset and Room-Impulse-Response (RIR) simulations with background noise.

WER on LibriCSS:

| Overlap Ratio in % | 0S | 0L | 10 | 20 | 30 | 40 |
| --- | --- | --- | --- | --- | --- | --- |
| Oracle | 4.9 | 5.1 | - | - | - | - |
| Input | 11.8 | 11.7 | 18.8 | 27.2 | 35.6 | 43.3 |
| DPRNN | 10.6 | 10.4 | 12.7 | 16.6 | 20.8 | 23.5 |
| Early version of SepReformer | 9.8 | 10.1 | 10.9 | 12.5 | 14.4 | 17.5 |

Please note that SepReformer is expected to perform better than the early version. Both DPRNN and our model were trained for stable separation and denoising without dereverberation. Furthermore, as our work focuses on speech, reporting metrics such as PESQ and STOI for datasets like WHAMR!, which include noise and reverberation, would be valuable.
We will include PESQ and STOI results for the WHAMR! dataset in the appendix.

| WHAMR! | SI-SDR (dB) | SDR (dB) | PESQ | eSTOI |
| --- | --- | --- | --- | --- |
| Unprocessed | -6.1 | -3.5 | 1.41 | 0.317 |
| TF-GridNet [1] | 10.6 | 11.7 | 2.75 | 0.793 |
| SepReformer-L | 11.0 | 12.5 | 2.77 | 0.796 |

**Q2: Looking at Figure 2, I’m wondering if it’s really a good idea to use the same split network for all ‘r’ values. Would it be better to have a different ‘spk’ split for each ‘r’?** A2: Thanks for your insightful comment. In our early version, we initially used different split layers for each stage. We then considered using a shared split layer, assuming that consistently separated feature sequences could benefit the reconstruction decoder. Interestingly, we observed more stable convergence and no significant difference in separation performance. To simplify our model and reduce the number of parameters, we opted for the shared split layer. However, as you suggested, we will add experiments comparing the two approaches in the appendix, as exploring shared split layers for skip connections in the U-Net structure is indeed worth investigating. **Q3: I believe Eq(1) should be a minimum of the Si-SNR and \tau since you refer to it in line 201?** A3: We really appreciate your detailed comment and apologize for the typo. As you commented, Eq. (1) should be the minimum of the SI-SNR and $\tau$. We will correct it. **Q4: In line 213, shouldn’t the multi loss be (1-\alpha / R) L + \alpha \sum_r L_r/R ?** A4: If all the losses at each stage, including $\mathcal{L}$, should be weighted equally, the multi-loss $\hat{\mathcal{L}}$ can be set to $(1-\alpha) \mathcal{L} /R + \alpha \sum_r \mathcal{L}_r/R$. However, we consider the final loss $\mathcal{L}$ alone to be as important as the summation of all the auxiliary losses $\mathcal{L}_r$. Therefore, we set the multi-loss to $(1-\alpha) \mathcal{L} + \alpha \sum_r \mathcal{L}_r/R$.
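To make the loss formulas discussed in A3 and A4 concrete, here is a minimal NumPy sketch of a thresholded SI-SNR objective and the multi-loss weighting. The function names and the default values of `tau` and `alpha` are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def si_snr(est, ref, eps=1e-8):
    # Scale-invariant SNR in dB, computed on zero-mean signals.
    est = est - est.mean(axis=-1, keepdims=True)
    ref = ref - ref.mean(axis=-1, keepdims=True)
    scale = (est * ref).sum(-1, keepdims=True) / ((ref ** 2).sum(-1, keepdims=True) + eps)
    proj = scale * ref                      # projection of est onto ref
    noise = est - proj
    return 10 * np.log10((proj ** 2).sum(-1) / ((noise ** 2).sum(-1) + eps))

def thresholded_si_snr_loss(est, ref, tau=30.0):
    # Negative SI-SNR clamped at tau, matching the corrected Eq. (1):
    # the loss stops rewarding improvements beyond tau dB.
    return -np.minimum(si_snr(est, ref), tau).mean()

def multi_loss(final_loss, stage_losses, alpha=0.4):
    # (1 - alpha) * L + alpha * (1/R) * sum_r L_r, as stated in A4.
    R = len(stage_losses)
    return (1 - alpha) * final_loss + alpha * sum(stage_losses) / R
```

With `est == ref`, the SI-SNR saturates at the threshold, so the loss equals `-tau`; the multi-loss simply blends the final loss with the mean of the intermediate-stage losses.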
**Q5: It is not very clear why the authors refer to the decoder as a Siamese decoder. Is this because they share the same weights across all speakers? This is typically how a speech separation network is structured. I don't see any contrastive learning loss functions as part of the training loss to discriminate the speakers.** A5: Thank you for your valuable comment, and we apologize for the confusion caused by misuse of the term. As you pointed out, we named the weight-sharing part across all speakers the Siamese decoder. We did not use an additional loss to discriminate the speakers using speaker identities. Instead, we trained the model directly with a separation objective based on the Permutation Invariant Training (PIT) loss, which is conventional in speech separation networks. Most speech separation networks are designed with a single feature sequence that is processed before a late split layer. However, in our approach, given early separated feature sequences, whose similarity is relatively high (as indicated by the grey line in Figure 6(c)), the weight-sharing blocks can learn to discriminate speech sources (as indicated by the orange and blue lines in Figure 6(c)). This suggests that discriminative learning is enhanced by the weight-sharing (or what we thought of as Siamese) structure. We believe that the separation loss with PIT itself can operate as a kind of discriminative learning with the ESSD structure, given the early-split feature sequences. Nevertheless, we acknowledge that using the term ‘Siamese’ may be misleading since we did not apply a typical contrastive loss. Moreover, we concluded that prematurely using the term "Siamese" could be risky, as it may hinder considering scenarios involving more than two individuals in the future. Therefore, we will replace the term ‘Siamese’ with ‘weight-shared’ in the revised paper. [1] Z. -Q. Wang, S. Cornell, S. Choi, Y. Lee, B. -Y. Kim and S.
Watanabe, "TF-GridNet: Integrating Full- and Sub-Band Modeling for Speech Separation," in IEEE/ACM TASLP, vol. 31, pp. 3221-3236, 2023 --- Rebuttal Comment 1.1: Title: Thanks for addressing the comments. Comment: I thank the authors for providing their rebuttal which address my concerns. I will retain the Accept score.
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their helpful comments and suggestions. We sincerely appreciate the time and effort they have dedicated to reading and commenting on the paper. Below, please find our point-by-point response to all the comments. We believe that the paper is greatly improved by incorporating the reviewers' suggestions and comments.
NeurIPS_2024_submissions_huggingface
2024
Noise-Aware Differentially Private Regression via Meta-Learning
Accept (poster)
Summary: The paper proposes a meta learning gaussian dp algorithm. It uses the same framework as ConvCNP but replaces the encoder during train and meta test with a noisy encoder obtained by adding gaussian noise. The paper is not particularly well written and I have trouble following the notation in the main paper and appendix. I have significant doubts about the correctness of the method due to conflation between continuous results (the integral of a gaussian kernel) with discrete results (the sum of the kernel at discrete points). Note: after feedback, the authors addressed my concerns about correctness. Strengths: The paper considers the use of meta learning to avoid an unfortunate trend in the dp literature which assumes the existence of large scale non private data with a similar distribution to a private dataset. Meta learning is a convenient possible alternative if one can create a good enough synthetic dataset. Assuming everything is correct, the technique outperforms some prior work on simple synthetic and real datasets. Weaknesses: The experiments consider alternatives that use similar methods (GP or DP-SGD) but not alternatives that use different methods for similar problems (regression/kernel density estimates, clustering, etc.). I believe the technique would outperform the alternatives anyway, but their omission is noteworthy. The experimental datasets are very simple and not comparable to the ones where public datasets are often assumed (images and NLP). Since the technical contribution lies in replacing r with a noisy r, there should be more challenging experiments and discussions on how to create simulated data for them. A few typos: - line 94, r should be theta - line 130, missing word after predictive - line 224 references eq 6 instead of 7 The delta value of 10^-3 is large compared to the literature. 
Setting it at 1/N puts it into the well-known privacy violating regime where one can return a random record with no noise, violating the privacy of someone with probability 1. The paper seems to be taking too much credit for minor things, like using Gaussian DP as a drop-in replacement in the work of Hall et al. (literally taking a theorem that was meant as a drop-in replacement of prior formulas and using it as a drop-in replacement for those formulas). That is not a contribution. The most important issue for last. I am not convinced that the algorithm is differentially private at the claimed parameter levels. The paper is missing a detailed pseudocode that puts everything together, but my understanding is that nothing changes except that wherever r is needed, a noisy version is used instead. The privacy properties should be provable from first principles (without relying on abstract kernel properties or an RKHS). Plug in the form of the kernel you are using and directly compute the sensitivity accounting for all of the discretized points being used. The reason I am asking for this is because the sensitivity calculation detailed in the appendix appears to be incorrect (it is not very well written and the notation is not explained well, so I am not 100% sure). The key to the proof seems to be that the Gaussian kernel integrates to 1, but this is used to replace a discrete summation (i.e., $\int_{-\infty}^{\infty} \exp(-(x-y)^2)\,dy$ is very different from $\sum_{i=1}^n \exp(-(x-y_i)^2)$). The summation can diverge depending on what the $y_i$ are chosen to be. I would like to see a detailed privacy proof (with carefully explained notation) that explicitly uses the discretized point set (explaining exactly how the discretized points are chosen) and an explicitly defined covariance matrix.
Technical Quality: 3 Clarity: 2 Questions for Authors: I would like to see a detailed privacy proof (with carefully explained notation) that explicitly uses the discretized point set (explaining exactly how the discretized points are chosen) and an explicitly defined covariance matrix. For more challenging problems, such as text and images, how would one create simulated data so that DPConvCNP would be competitive? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors addressed limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The delta value of 10^-3 is large compared to the literature. Setting it at 1/N puts it into the well-known privacy violating regime where one can return a random record with no noise, violating the privacy of someone with probability 1.

This would be true if we considered any $(\epsilon, \delta)$-DP mechanism. However, for our specific mechanism, we can always adjust $\delta$, and obtain a different $\epsilon$, which will be finite, so we will not be in the privacy violating regime you describe. Making this adjustment would only change the $\epsilon$ values in our plots for DPConvCNP. We would also need to make the same adjustment for DP-SVGP, which will not be identical, since DP-SVGP is based on DP-SGD. However, we expect these differences to be small, so the adjustment would not change the conclusions from our results. To illustrate this, if we had used $\delta = 10^{-5}$ instead, which is the recommended value by NIST, we would adjust the DPConvCNP $\epsilon$ values we use according to the following table:

| Initial $\epsilon$ | Adjusted $\epsilon$ |
| --- | --- |
| 1.0 | 1.51 |
| 3.0 | 4.20 |

However, please also note that we have conducted additional experiments, attached to the pdf in the global response, which should help put your concerns about the value of $\delta$ to rest. > The paper seems to be taking too much credit for minor things, like using Gaussian DP as a drop-in replacement in the work of Hall et al. (literally taking a theorem that was meant as a drop-in replacement of prior formulas and using it as a drop-in replacement for those formulas). That is not a contribution. We believe our formulation in the Introduction and the rest of the paper gives appropriate credit to prior work, but are happy to hear if you can suggest a better formulation. We also point out that both reviewers Eg3Y and AqSn found this contribution to be meaningful and listed it as a strength of our work.
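For reference, the kind of $\delta \leftrightarrow \epsilon$ adjustment tabulated in this response can be computed from the standard conversion between $\mu$-GDP and $(\epsilon, \delta)$-DP (Dong, Roth & Su, 2022). The sketch below is illustrative, not the authors' code; the helper names and bisection bounds are assumptions:

```python
import math

def phi(x):
    # Standard normal CDF.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def delta_for_eps(eps, mu):
    # (eps, delta) curve of a mu-GDP mechanism:
    # delta(eps) = Phi(mu/2 - eps/mu) - e^eps * Phi(-mu/2 - eps/mu).
    return phi(mu / 2 - eps / mu) - math.exp(eps) * phi(-mu / 2 - eps / mu)

def eps_for_delta(delta, mu, hi=100.0):
    # Invert delta(eps) by bisection (delta is decreasing in eps).
    lo = 0.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if delta_for_eps(mid, mu) > delta:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def mu_for_eps_delta(eps, delta, hi=100.0):
    # Find the GDP parameter mu matching a given (eps, delta)
    # (delta is increasing in mu for fixed eps).
    lo = 1e-9
    for _ in range(200):
        mid = (lo + hi) / 2
        if delta_for_eps(eps, mid) < delta:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

One would first recover $\mu$ from the reported $(\epsilon, \delta)$ pair, e.g. `mu = mu_for_eps_delta(1.0, 1e-3)`, then read off the adjusted $\epsilon$ at the new $\delta$ with `eps_for_delta(1e-5, mu)`.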
> The most important issue for last. I am not convinced that the algorithm is differentially private at the claimed parameter levels. We break down this paragraph and answer each of the points below. > my understanding is that nothing changes except that wherever r is needed, a noisy version is used instead. This is correct, but we also add clipping to the signal channel to bound sensitivity. > The sensitivity calculation detailed in the appendix appears to be incorrect… The only properties of the kernel we use in the sensitivity calculation are that $0 \leq k(x, y) \leq 1$ and $k(x, y) = k(y, x)$, which are looking at point evaluations, not integrals or sums. The sums in Eq. (19) and (27) disappear because the two datasets in the supremum only differ in one datapoint, so all but one term of the sums are zero. The transition from the functional form that the functional mechanism releases to the evaluations on the discretised point set follows from Proposition 5 of Hall et al. (2013). We will add a corollary to Theorem 4.2 that clarifies this. We would be happy to improve the notation if you could point out any specific part that is unclear. > The privacy properties should be provable from first principles… I would like to see a detailed privacy proof [...] that explicitly uses the discretized point set… Here is a proof that uses the Gaussian mechanism on the discretised points directly. Let $k(x_1, x_2) = \exp(-\frac{(x_1 - x_2)^2}{\lambda})$, and let the discretised points be arbitrarily chosen $x_1, \dotsc, x_m$. Let’s look at the density channel $r_d$ first. DPConvCNP computes the vector $[r_d(x_1), \dotsc, r_d(x_m)]$, and adds Gaussian noise with covariance $\sigma_d^2 M$, where $M$ is an $m\times m$ matrix with $M_{ij} = k(x_i, x_j)$. By Lemma A.4, it suffices to find the upper bound $\Delta$ in Eq. (14) for the matrix $M$ and the vector $r_d$ taking the role of the function $f$ of the lemma. By Proposition 8 of Hall et al. (2013), the sensitivity in Eq. 
(14) is bounded by the RKHS sensitivity we look at in Appendix A.4, which gives the privacy guarantee we claim. The privacy of the signal channel is proven in the same way, and releasing both of them is a composition. > For more challenging problems, such as text and images, how would one create simulated data so that DPConvCNP would be competitive? These domains are likely too high-dimensional for the ConvCNP model to be applied. More concretely, while ConvCNPs have been applied to image-based data before (see e.g. Gordon et al. (2020)) these applications have been confined to image in-painting rather than, for example, image classification. In image in-painting with meta-learning, the context set is a single partially observed image consisting of context pixels. The ConvCNP can be applied to this setting, but preserving data-point (i.e. pixel) privacy is meaningless. In image classification with meta-learning, the context set consists of several images, to which the ConvCNP cannot be readily applied. On the other hand, some amortized adaptation models (see Requeima et al. (2019)) have been developed, however these involve entirely different architectures and representations to the ConvCNP, so the functional mechanism does not apply. Overall, while very interesting and important, tackling higher dimensional data is well beyond the scope of this paper. Reference: James Requeima, Jonathan Gordon, John Bronskill, Sebastian Nowozin, Richard E. Turner, Fast and flexible multi-task classification using conditional neural adaptive processes, NeurIPS 2019. Thank you for your feedback. We hope we have settled your concerns regarding the validity of our theory. Provided our comments and amendments have satisfied your concerns, we would like to invite you to consider increasing your score. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications. I am more confident in the correctness of the results and adjusted my score accordingly.
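To make the discretised construction in the proof above explicit, here is a minimal NumPy sketch of releasing the density channel on an arbitrary grid with correlated Gaussian noise $\mathcal{N}(0, \sigma_d^2 M)$, $M_{ij} = k(x_i, x_j)$. The function names and the Cholesky jitter are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def rbf(x1, x2, lam=1.0):
    # k(x1, x2) = exp(-(x1 - x2)^2 / lam), bounded in [0, 1] and symmetric.
    return np.exp(-((x1[:, None] - x2[None, :]) ** 2) / lam)

def dp_density_channel(context_x, grid_x, sigma_d, lam=1.0, rng=None):
    """Release the density channel r_d on a discretised grid, with noise
    covariance sigma_d^2 * M where M_ij = k(x_i, x_j), following the
    functional-mechanism structure of Hall et al. (2013)."""
    rng = np.random.default_rng(rng)
    K = rbf(grid_x, context_x, lam)      # (m, n) kernel evaluations
    r_d = K.sum(axis=1)                  # density channel on the grid
    M = rbf(grid_x, grid_x, lam)         # grid kernel matrix (covariance / sigma_d^2)
    # Small jitter keeps the Cholesky factor numerically well-defined.
    L = np.linalg.cholesky(M + 1e-8 * np.eye(len(grid_x)))
    noise = sigma_d * L @ rng.standard_normal(len(grid_x))
    return r_d + noise
```

With `sigma_d = 0` this reduces to the noiseless kernel density channel; the privacy guarantee comes from calibrating `sigma_d` to the RKHS sensitivity bound discussed in the proof.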
Summary: This paper equips the meta-learning framework with DP guarantees. Specifically, datasets are split into context and target subsets respectively, where an encoder learns good representations from abundant context data, and is able to generalize to a limited amount of target data that may be sensitive. Since the output embedding (function) of the encoder is a sum of kernel functions indexed by the context dataset, functional DP mechanisms are considered to protect individual privacy in the dataset. Particularly, the authors extend the classical functional DP mechanism to Gaussian DP (GDP), which has theoretical benefits compared to alternatives. Experiments on both synthetic and real data show its effectiveness. Strengths: 1. Theoretically, the authors extend Gaussian DP to functional outputs, which may pave the way for follow-up works in related fields. 2. Empirically, the experiments are complete and extensive, i.e. on both synthetic and real tasks with comparison to existing works. 3. The paper is well-organized and easy to follow. Figure illustrations are clear and informative. Weaknesses: I am not familiar with the meta-learning framework (and related works), so it's likely I misunderstood some parts and had confusion about the meta-testing part. Normally, a DP classifier, regressor, or generative model only introduces noise perturbation in the training (e.g. DP-SGD). Once the training is complete, the trained DP model can be applied to test data without worrying about breaching DP guarantees, as ensured by the post-processing theorem. In this work, however, the noise is injected in both training and test stages, where the authors explained that it accounts for mismatch between training and test. This paradigm does not look optimal to me. So here I have a few questions: 1. Can you explain why adding noise to the test stage is necessary?
For example, compared to pretraining on the synthetic data then DP fine-tuning on private data, what is the benefit of the current method? 2. It looks to me that if we need to add noise in the test stage, we need to accumulate the total privacy budget $\epsilon$ as more test data are coming in. Is it true? If so, how do you determine the $\epsilon$ in the experiment (e.g. eps=1 or 3 in Fig 5)? Technical Quality: 4 Clarity: 3 Questions for Authors: 1. I am not familiar with the neural process either. By reading the paragraphs starting at line 118, I don't see how it is different from a normal neural network. Particularly, how it is able to produce well-calibrated prediction, and how this is evaluated in the experimental results? 2. For the conditional NP, I don't see where is the conditional input, or what does the conditional mean here? 3. In Figure 4, I wonder how these lines are drawn. Do you use any analytical forms to calculate, or some libraries/packages to compute? Also, are they converted into the same $(\epsilon, \delta)$-DP notion? Because the $\epsilon$ in different DP notions are not the same (although they share the same notation). Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > I am not familiar with the meta-learning framework (and related works), so it's likely I misunderstood some parts and had confusion about the meta-testing part. Meta-learning has two phases, meta-training and meta-testing. You should view meta-testing as analogous to supervised learning: the input is a dataset, and the output is a function (model) $f$ that can make predictions at arbitrary inputs. Meta-training is the process of learning an algorithm which, when given a new dataset, produces a function (model) that can be queried at arbitrary inputs to make predictions. In the neural process literature, this is achieved using the encoder-decoder architecture of Eq. (1). With this understanding let us address your points below. We will also clarify these points in the revised paper. > Can you explain why adding noise to the test stage is necessary? Yes, here is why adding noise and clipping are necessary in the test stage. During the test stage, the DPConvCNP takes a previously unseen private dataset as its context set $D^{(c)}.$ (This would be the training data set in regular supervised learning.) Then, the DPConvCNP converts $D^{(c)}$ into a representation $r$ which is “published.” Once the representation is published, it can be passed through the rest of the architecture (the decoder) together with arbitrary test (target) inputs $x^{(t)}$ to make predictions for the corresponding test (target) outputs $y^{(t)}.$ If we do not apply noise and clipping in the test stage, then publishing $r$ would have no guarantees at all, and that could completely breach user privacy. By clipping and adding noise, we ensure that $r$ can be published with DP guarantees, and that arbitrarily many predictions (at arbitrary inputs) can be made using the decoder, without incurring any further privacy cost. 
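The clip-and-noise release described above can be illustrated with a minimal sketch (all names and values hypothetical, not the paper's DPSetConv): each context point's contribution to the representation is clipped to an L2 norm bound $C$, and Gaussian noise proportional to $C$ is added before the representation is published.

```python
import numpy as np

def dp_release_representation(contributions, clip_threshold, noise_scale, rng):
    """Clip each context point's contribution and add Gaussian noise.

    Illustrative sketch: with per-point contributions clipped to L2 norm
    clip_threshold, the sum changes by at most clip_threshold when one
    point is added or removed, so Gaussian noise with standard deviation
    noise_scale * clip_threshold yields a Gaussian-mechanism-style DP
    guarantee for the published representation.
    """
    clipped = []
    for c in contributions:
        norm = np.linalg.norm(c)
        scale = min(1.0, clip_threshold / norm) if norm > 0 else 1.0
        clipped.append(c * scale)
    r = np.sum(clipped, axis=0)
    sigma = noise_scale * clip_threshold
    return r + sigma * rng.standard_normal(r.shape)

rng = np.random.default_rng(1)
contributions = [rng.standard_normal(8) for _ in range(5)]
r_private = dp_release_representation(contributions, clip_threshold=1.0,
                                      noise_scale=0.5, rng=rng)
```

Once `r_private` is published, passing it through the decoder at arbitrarily many target inputs is post-processing and incurs no further privacy cost.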
We will add a corollary to Theorem 4.2 which states that Algorithm 2 with the DPSetConv encoder is DP with respect to the meta-testing context set, in order to clarify precisely which part of the algorithm is DP, to better address this point. > [...] noise is injected in both training and test stages, where the authors explained that it accounts for mismatch between training and test. This paradigm does not look optimal to me. From our reply above, we hope it is clear why adding noise is necessary at test-time in order to preserve privacy. Let us now address meta-training time. At meta-training, we do not strictly need to apply noise or clipping, because the meta-training data are synthetic, so training the model on them does not incur a privacy cost. For example, we can train a standard ConvCNP model on synthetic data, and then replace its SetConv encoder with a DPSetConv encoder. The $r$ published from this model would still be $(\epsilon, \delta)$-DP at test time. However, applying noise and clipping changes the statistics of the representation $r$ in a way that this model is not trained for, so its predictions are catastrophically poor. Our observation is that we can address this issue by simply training the model with noise and clipping in place (i.e. with the DPSetConv) at meta-train time, teaching the model to account for these operations, and eliminating the mismatch. We will clarify which data is simulated and which is real in Algorithms 1 and 2. > It looks to me that if we need to add noise in the test stage, we need to accumulate the total privacy budget epsilon as more test data are coming in. Is it true? In our case, we assume that the meta-test context dataset is available all at once and does not change over time. This assumption is analogous to a supervised learning setting, where the training dataset (on which the supervised algorithm is applied) is fixed at the start. 
Just as in supervised learning, we can still make an arbitrary number of predictions at arbitrary locations, without any further privacy cost. It would be interesting to consider the scenario where more test data come in one by one or in batches, but this is beyond the scope of our current work. > For the conditional NP, I don't see where is the conditional input, or what does the conditional mean here? The term “conditional” neural process was introduced by Garnelo et al. (2018) to capture the fact that this meta-learning model models the conditional predictive distribution $p(y^{(t)} | x^{(t)}, D^{(c)}).$ We emphasize that we did not introduce this term, but rather the original authors of the CNP. > In Figure 4, I wonder how these lines are drawn. Do you use any analytical forms to calculate, or some libraries/packages to compute? Also, are they converted into the same (epsilon, delta)-DP notion? Because the epsilon in different DP notions are not the same (although they share the same notation). Yes, all of the noise levels in Figure 4 are calculated via closed form expressions and, if necessary, lightweight numerical solution routines. All of them use $(\epsilon, \delta)$-DP from Definition 3.1. In more detail, the classical line uses Theorem 3.7. The GDP line uses Definition 3.2 to convert the $(\epsilon, \delta)$-bound into a GDP $\mu$-bound by numerically solving $\mu$ from Eq. (5), and finds $\sigma$ with Theorem 4.1. For the RDP line, we get an RDP guarantee from Corollary 2 of Jiang et al. (2023), which we convert to $(\epsilon, \delta)$ with Proposition 3 of Jiang et al. (2023). RDP has the $\alpha$ parameter that can be freely chosen, so we optimise $\alpha$ to minimise $\epsilon$ for a given $\sigma$, and finally solve for $\sigma$, which gives the plotted values. Both the optimisation and finding $\sigma$ can be done analytically in this case. > I am not familiar with the neural process either. [...] 
Due to the limited character count of the rebuttal, we have answered this in a separate comment below. Thank you for your valuable feedback. If our comments and amendments have satisfied your concerns, we would like to invite you to consider increasing your score. --- Rebuttal 2: Title: Additional Response Comment: > I am not familiar with the neural process either. [...] how it is different from a normal neural network. [...] how it is able to produce well-calibrated prediction, and how this is evaluated in the experimental results? Here we give a brief answer to your question following the standard exposition from the literature (see e.g. Garnelo et al. (2018)). While a neural process is parameterised by neural networks (the encoder and the decoder), it differs from standard supervised neural network models in two main ways: the architecture and the training method. In terms of architecture, all neural processes involve an encoder-decoder architecture where (a) the encoder is designed to be invariant with respect to permutations of the context points in $D^{(c)},$ i.e. permuting the order of the entries in the dataset $D^{(c)}$ leaves the output representation $r$ invariant; and (b) the decoder is constructed such that, given the representation $r$, the predictions for each target $y^{(t)}_i$ depend only on the corresponding inputs $x^{(t)}_i$ and no other variable (see Eq. (1)). These design choices are important for ensuring _Kolmogorov’s extension theorem_ is satisfied, in order to define a valid stochastic predictive process. We won’t delve further into this, but refer you to Kolmogorov’s extension theorem in Oksendal (2013) if you are further interested. For our purposes here, we can summarise this by saying that NPs have a particular architecture, chosen to satisfy Kolmogorov’s extension theorem. In terms of training, a neural process is again substantially different from a simple supervised neural network. 
A supervised network is trained on a single supervised dataset, and as such it is prone to overfitting. By contrast, a neural process is trained on a (possibly infinite) collection of datasets. During meta-training (see Algorithm 1), the neural process is trained to make predictions for an unseen target set $D^{(t)}$ given an observed context set $D^{(c)}.$ Because $D^{(t)}$ are unseen, the model must learn to produce not just an accurate mean prediction, but also a sensible confidence interval for these. Our experimental results validate that the neural process predictions are well calibrated both qualitatively (e.g. see the calibrated confidence intervals in Figures S.2 to S.5 in the appendix) as well as quantitatively: in Figure 6 we see that the DPConvCNP performance approaches that of the perfectly calibrated oracle predictors, which can only happen if the DPConvCNP predictions are also well-calibrated. We stress that well-calibrated predictions have been demonstrated extensively in the neural process literature (see e.g. confidence intervals for the ConvCNP in Gordon et al. (2020)), and our results suggest the DPConvCNP also produces well-calibrated predictions. B. Oksendal, Stochastic differential equations: an introduction with applications (2013) --- Rebuttal Comment 2.1: Comment: Thanks for the reply, I have raised my rating
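As an aside on the noise-level computations discussed in this exchange: converting an $(\epsilon, \delta)$ target into a GDP $\mu$-bound amounts to numerically inverting the Gaussian trade-off relation $\delta(\epsilon) = \Phi(-\epsilon/\mu + \mu/2) - e^{\epsilon}\,\Phi(-\epsilon/\mu - \mu/2)$ from Dong et al. A generic stdlib-only sketch (the paper's Eq. (5) is assumed to encode this duality; the bisection bounds are illustrative):

```python
from math import erf, exp, sqrt

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def delta_from_mu(eps, mu):
    """delta(eps) achieved by a mu-GDP mechanism (Dong et al. duality):
    delta = Phi(-eps/mu + mu/2) - exp(eps) * Phi(-eps/mu - mu/2)."""
    return Phi(-eps / mu + mu / 2.0) - exp(eps) * Phi(-eps / mu - mu / 2.0)

def mu_from_eps_delta(eps, delta, lo=1e-6, hi=20.0, iters=200):
    """Numerically invert delta_from_mu by bisection; for fixed eps,
    delta is increasing in mu (larger mu = weaker privacy)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if delta_from_mu(eps, mid) < delta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

mu = mu_from_eps_delta(eps=1.0, delta=1e-3)
```

Given the solved $\mu$, the Gaussian mechanism's noise scale then follows from the sensitivity, e.g. $\sigma = \Delta / \mu$ in the finite-dimensional case.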
Summary: The authors propose DPConvCNP, a meta-learning model with a functional DP mechanism. This model is a modification to the SetConv procedure, applying clipping and work from [Hall et al. 2013] with a tighter Gaussian mech. analysis from [Dong et al. 2022] to conduct a sensitivity analysis and privatize the algorithm. The authors then demonstrate empirically that DPConvCNP provides both performance and efficiency boosts relative to the baseline of applying DP-SGD to a neural GP process on both some synthetic tasks and the !Kung dataset. Strengths: Overall: The paper follows a standard format for a contribution in differential privacy for machine learning: adapt an existing ML approach (SetConv) in a DP setting through sensitivity analysis and the application of a DP mechanism (in this case, an adapted version of the functional mechanism given by Hall et al.). Then, they show that this approach outperforms a naive application of DP to some other standard method (in this case, applying DP-SGD to a standard neural implementation of a GP, which would be an obvious first attempt at the problem). In the Appendix, I reviewed the proofs and lemmas for Theorems 4.1 and 4.2; they checked out and the analysis is well presented. I did not spend time checking Proposition B.1; it seems like the main trick there is noticing that you can apply linearity of expectation over the grid. It would be good if another reviewer has reviewed this. S1. I find the quality of this submission’s presentation, from the intro to the supplementary appendix/proofs, to be exemplary. I commend the authors for the clarity of writing, notation, proofs and figures - it is refreshing, and worth highlighting. This is minus my minor nitpicks below, which will be easy to address. S2. I find the approach intuitive, and the analysis sound. The experimental results are compelling, and this method is clearly extensible, as the authors hint at in their limitations section. S3. 
The authors' application of the [Dong et al.] result to the [Hall et al.] functional mechanism is indeed a nice contribution, and should be useful for future work. Weaknesses: I am deferring more substantial weaknesses to the questions section - broadly, I would like to raise my score, given adequate answers to my questions. Nit: the translation-equivariant abbreviation TE should be introduced in the section where it is used heavily e.g. Section 3.2, not in the intro, along with a citation, which is confusing and gets lost for the reader. Nit: in Section 5, line 259, “we make the GP variational…” is maybe missing a word or two. Nit: In paragraph “Gradient based vs amortized…” in Section 3, the “on one hand…on the other hand” construction is difficult to follow, please restructure. Technical Quality: 3 Clarity: 4 Questions for Authors: Q1 : My main question/concern with the work is *how* important is leveraging a powerful hyperparameter tuning library (like BayesOpt from Optuna) to adjust hyperparameters for the empirical performance of the proposed method DPConvSet? Having worked with DP-SGD variants (like Opacus’s private engine for optimizers), it’s clear to me that minor fluctuations in hyperparameters affect performance drastically. Recent work [Papernot et al. https://arxiv.org/pdf/2110.03620], [Mohapatra et al., https://arxiv.org/pdf/2111.04906], and [Koskela et al., https://arxiv.org/pdf/2301.11989] touches on the importance of private tuning. Can the authors discuss the delta between untuned and tuned versions of their algorithms, to be more “honest” about the effects of hyperparameter tuning on both their baseline and their method? An experiment (even anecdotal) would be nice, although isn’t necessarily required. Q1.5 : Related to Q1, in section 4.3, you include privacy parameters in the meta training of DPConvCNP (as far as I can tell) by re-parameterizing. 
Can you explain how this maintains the ($\epsilon,\delta$)-DP guarantee if the iterative meta-learning procedure is conditioned on prior trainings? Q2 : It seems like a missed opportunity to discuss the shortcomings of standard supervised learners on small datasets, even a simple standard private regression (many open source implementations available, for example https://diffprivlib.readthedocs.io/en/latest/modules/models.html#linear-regression). Perhaps comparisons between standard learners and meta learners is not worth it in the data scenarios you explore? It’d be helpful for me if you could discuss this (a light experiment if appropriate). Q3 : Can the authors justify their choice of privacy hyperparameters in the text? I acknowledge that $\epsilon$ of 1.0+ is definitely reported on in the literature and used practically, but it’s good to contrast this with $\epsilon < 1.0$, as this is the more theoretically comfortable private regime, in a strict sense. Adding results for this wider range of privacy parameters would strengthen the contribution. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: The authors do a very nice job of highlighting the limitations of their work, alongside broader impacts. This is greatly appreciated! Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Q1 : My main question/concern with the work is how important is leveraging a powerful hyperparameter tuning library (like BayesOpt from Optuna) to adjust hyperparameters for the empirical performance of the proposed method DPConvSet? We only use Optuna and BayesOpt on the DP-SVGP baseline. We did not perform any BayesOpt tuning on any hyperparameters of the DPConvCNP. Instead, rather than performing BayesOpt, the DPConvCNP “tunes” the parameters (but no hyperparameters such as model size) of the backbone network as well as the NNs for parameterising the DP clipping threshold $C$ and noise weight $t$ (Section 4.3) by performing gradient descent on synthetic data. We found our procedure to work well out-of-the-box and did not make any further efforts to optimize the DPConvCNP. > Q1.5 : Related in Q1, in section 4.3, you include privacy parameters in the meta training of DPConvCNP (as far as I can tell) by re-parameterizing. Can you explain how this maintains the (epsilon, delta)-DP guarantee if the iterative meta-learning procedure is conditioned on prior trainings? During meta-training, DPConvCNP only uses simulated data, so meta-training does not have any impact on the privacy of the real data. We include the noise addition, clipping and other privacy-related computations during meta-training in order to make the model learn the same task it will do during meta-testing, either for single privacy parameters or over a range of privacy parameters. In turn, at meta-testing, the DPSetConv ensures that the data representation of the private data is $(\epsilon, \delta)$-DP. > Recent work [...] touches on the importance of private tuning. Can the authors discuss the delta between untuned and tuned versions of their algorithms, to be more “honest” about the effects of hyperparameter tuning on both their baseline and their method? 
Both the DPConvCNP and the DP-SVGP are trained on simulated data without any privacy cost, so the work on private hyperparameter tuning is not relevant to our setting. > Q2: It seems like a missed opportunity to discuss the shortcomings of standard supervised learners on small datasets, even a simple standard private regression [...]. Perhaps comparisons between standard learners and meta learners is not worth it in the data scenarios you explore? Standard DP supervised learning algorithms would likely not work very well in our settings. For example, linear regression, which seems to be the only regression algorithm in the library you linked, would clearly need some nonlinear features to have any chance of working in our settings. Designing appropriate nonlinear features that work well with DP would be a non-trivial effort, to the point that we would effectively be designing a new baseline. In non-private settings, GPs are the state-of-the-art in the low-data regime, so we think the DP-SVGP is the most reasonable baseline we could use. We will further explain our choice of baseline in the revised version. > Q3: Can the authors justify their choice of privacy hyperparameters in the text? [...] Adding results for this wider range [$\epsilon < 1$] of privacy parameters would strengthen the contribution. As you acknowledge in your question, our privacy parameter settings are standard in the literature. Relatively few papers go below $\epsilon = 1.$ However, following your advice we have run some further experiments for $\epsilon < 1,$ as well as smaller $\delta.$ We have attached these results in the additional rebuttal pdf of the "global response." There, we observe that the DPConvCNP performs well even in such stricter privacy settings, given enough data. Thank you for your valuable feedback. If our comments and amendments have satisfied your concerns, we would like to invite you to consider increasing your score. 
--- Rebuttal Comment 1.1: Comment: I thank the authors for their clarifications and additional experiments at lower $\epsilon$ values, the effort is appreciated. Specifically, I feel as though their responses have fully helped me understand how their method preserves privacy; this indicates to me that they will be able to make clarifying changes to their paper were it accepted. I have a further clarification, which I hope the authors can help me with. > During meta-training, DPConvCNP only uses simulated data, so meta-training does not have any impact on the privacy of the real data. We include the noise addition, clipping and other privacy-related computations during meta-training in order to make the model learn the same task it will do during meta-testing, either for single privacy parameters or over a range of privacy parameters. In turn, at meta-testing, the DPSetConv ensures that the data representation of the private data is $(\epsilon, \delta)$-DP. Maybe I should have been more specific: is the simulated data making assumptions (e.g. that this information is public) about the domain/range of the input data it is simulating? Or is it inferring the domain/range directly? Or does it just use arbitrary data? Whatever the scheme, this needs to be made more clear; if the assumption is not that this information was public, for example, there would need to be a DP range estimation embedded in the algorithm. --- Reply to Comment 1.1.1: Comment: Thank you for your response. We are happy you have found our clarifications useful. With regards to your questions: > is the simulated data making assumptions (e.g. that this information is public) about the domain/range of the input data it is simulating? Or is it inferring the domain/range directly? Or does it just use arbitrary data? 
> Whatever the scheme, this needs to be made more clear. The simulated meta-training data should ideally have similar statistics to the real data: the closer these statistics are, the better the sim-to-real approach will work. This can be achieved, for example, by generating diverse enough synthetic data to ensure that some of the meta-training datasets have similar statistics to the real data. Importantly, however, the simulating generative process has to be picked without looking at the real data to avoid privacy leakage. With the Dobe !Kung data, we normalise the data so that age is in [-1, 1] and height and weight have zero mean and unit variance. We assume that the required statistics for these normalisations are public. In case the statistics were not public, they could easily be released with additional privacy budget. Inaccurate normalisations would only increase the sim-to-real gap and reduce utility, not affect the privacy analysis. The simulator produces the normalised data, and we picked its hyperparameters based on rough estimates of the weights, heights and ages of individuals we might expect to see in reality. We appreciate your point that the paper would benefit from a lengthier explanation of these nuances. We can easily fit this in the space afforded by the camera-ready version. We hope the above helps clarify any remaining questions you might have had about the paper. Please let us know if you’d like any further clarification.
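The normalisation described above (age mapped to [-1, 1]; height and weight standardised to zero mean and unit variance using assumed-public statistics) can be sketched as follows. The specific range and moment values below are illustrative, not the paper's.

```python
import numpy as np

# Assumed-public statistics (illustrative values, not the paper's):
AGE_MIN, AGE_MAX = 0.0, 90.0
HEIGHT_MEAN, HEIGHT_STD = 140.0, 30.0

def normalise_age(age):
    """Map age linearly from [AGE_MIN, AGE_MAX] to [-1, 1]."""
    return 2.0 * (age - AGE_MIN) / (AGE_MAX - AGE_MIN) - 1.0

def standardise(x, mean, std):
    """Zero-mean, unit-variance normalisation with public statistics."""
    return (x - mean) / std

ages = np.array([0.0, 45.0, 90.0])
norm_ages = normalise_age(ages)           # -> [-1.0, 0.0, 1.0]
heights = np.array([110.0, 140.0, 170.0])
norm_heights = standardise(heights, HEIGHT_MEAN, HEIGHT_STD)
```

Because the constants are fixed public values rather than quantities computed from the private context set, the normalisation itself spends no privacy budget; a badly chosen constant only widens the sim-to-real gap.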
Rebuttal 1: Rebuttal: Some reviewers had questions regarding how meta-learning differs from standard supervised learning, and the consequences of these differences from the point of view of privacy. To summarise, meta-learning has two phases, meta-training and meta-testing. Meta-testing is analogous to training in supervised learning: the input is a dataset, and the output is a function (model) that can make predictions at arbitrary inputs. Meta-training is the process of learning an algorithm which, when given a new dataset, produces a function (model) that can be queried at arbitrary inputs to make predictions. We assume that the data used during meta-training comes from a simulator, so using it has no implications on privacy. On the other hand, we consider the meta-test data to be private, and design DPConvCNP to guarantee its privacy. Since the model from meta-testing is made private, we can use the model to make an arbitrary number of predictions. We will make changes in the revision to further clarify these points, especially the distinction between private and simulated data. In particular, we will update the descriptions of Algorithms 1 and 2 in the following way to make it clear whether their input data is simulated or private (added words in italics): **Algorithm 1** **Input:** *Simulated* datasets $(D\_m)\_{m=1}^{M}$, encoder $\mathrm{enc}\_\phi$, decoder $\mathrm{dec}\_\theta$, iterations $T$, optimiser $\mathrm{opt}$. **Algorithm 2** **Input:** *Real* context $D^{(c)}$, $\mathrm{enc}\_\phi$, $\mathrm{dec}\_\theta$ We will also add the following corollary to Theorem 4.2 to make it clear which parts of DPConvCNP are private: **Corollary.** Algorithm 2 with the DPSetConv encoder from Algorithm 3 is $(\epsilon, \delta)$-DP with respect to the real context set $D^{(c)}$. 
Lastly, we would like to raise to the reviewers’ attention, particularly reviewers Eg3Y and aFHD, that we ran additional experiments with stricter DP $(\epsilon, \delta),$ following their advice that this would strengthen our manuscript. Our results show that the DPConvCNP produces sensible predictions even in strict privacy settings, given enough data. Pdf: /pdf/1b84eff735e7a554c0d30f0eaf2d55b96d32b179.pdf
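The meta-train / meta-test split described in this global response can be sketched in miniature. Everything here is a stand-in (the encoder, decoder, and loop structure are not the paper's actual architecture); the point is the privacy accounting: meta-training (Algorithm 1) touches only simulated data, meta-testing (Algorithm 2) accesses the real context set exactly once through the DP encoder, and all downstream predictions are free post-processing.

```python
import numpy as np

def dp_encode(context, clip, noise_scale, rng):
    """Stand-in for a DP encoder: clip each context point's contribution
    elementwise and add Gaussian noise before releasing the representation."""
    feats = np.stack([np.clip(np.asarray(x, dtype=float), -clip, clip)
                      for x in context])
    r = feats.sum(axis=0)
    return r + noise_scale * clip * rng.standard_normal(r.shape)

def decode(r, x_target):
    """Stand-in decoder: any function of the released r is post-processing,
    so it incurs no additional privacy cost."""
    return float(r.mean() * x_target)

rng = np.random.default_rng(0)

# Meta-training (Algorithm-1-style): simulated data only -> no budget spent.
for _ in range(3):
    simulated_context = [rng.standard_normal(4) for _ in range(5)]
    _ = dp_encode(simulated_context, clip=1.0, noise_scale=0.5, rng=rng)
    # ... gradient updates of encoder/decoder parameters would go here ...

# Meta-testing (Algorithm-2-style): the single DP access to the real context.
real_context = [rng.standard_normal(4) for _ in range(5)]
r = dp_encode(real_context, clip=1.0, noise_scale=0.5, rng=rng)

# Arbitrarily many predictions from r are privacy-free post-processing.
predictions = [decode(r, x) for x in np.linspace(0.0, 1.0, 100)]
```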
NeurIPS_2024_submissions_huggingface
2024
Enhancing Chess Reinforcement Learning with Graph Representation
Accept (poster)
Summary: The paper introduces a new variant of AlphaZero, AlphaGateau, with a neural network architecture based on GNNs, that allows the model to generalize across board sizes. The new architecture outperforms the original AlphaZero architecture under certain conditions. Strengths: The paper follows a promising direction, using GNNs in an AlphaZero-like model to utilize the graph nature of many board games. AlphaGateau is an original model that shows potential in terms of data efficiency and fast training. It learns much faster than AlphaZero during the first 100 training iterations, saturating at a much higher Elo compared to that of AlphaZero, which does not improve much in this timeframe. It is clear that when both models are tested on equal grounds, in the setting used, AlphaGateau is significantly more data efficient, and improves its Elo score much faster than AlphaZero. These are significant improvements on the original AlphaZero model if they hold under rigorous evaluation. Weaknesses: Evaluation & comparison with AlphaZero: While figure 5 clearly shows that AlphaGateau learns much faster than AlphaZero at early training, the paper does not provide any information on how the two models compare when they are fully trained. AlphaGateau seems to reach a performance plateau within 100 steps, but the AlphaZero model trained by the authors would likely keep on improving for orders of magnitude more training steps. For comparison, the authors used a model size roughly 10% of Silver et al.'s AlphaZero, but trained for 100 steps compared to the original 700,000 steps. It is not clear if AlphaGateau's performance will be comparable with AlphaZero, given enough training steps. I am aware that training the models for 10^5 steps is extremely expensive and not feasible for this paper. 
Even so, the correct solution would have been to fit the model size to the training budget and train a significantly smaller AlphaZero model, for a longer time and on smaller batches (or to heavily reuse training data between optimization steps). Doing a comparison of smaller-scale models could show if AlphaGateau is a comparable model to AlphaZero. In the paper's current form, the reported Elo gap does not give the reader any knowledge about how AlphaGateau compares to a fully-trained AlphaZero, since Elo calculation is unreliable when using such large gaps. For example, the gap between the best-performing AlphaGateau and best AlphaZero is about 1,500 Elo, meaning that AlphaZero would win about 1 game in every 7,500 games. This implies that the AlphaGateau Elo score is based purely on comparisons with earlier AlphaGateau training checkpoints. Another example is the gap between the first and second checkpoints, which is hard to measure by eye due to the plot format but looks roughly equal to 800. This gap implies the weaker model wins 1 game in 100, suggesting that the error of this gap is very high unless the authors use thousands of test games between this specific checkpoint pair to calculate Elo. I couldn't find the number of Elo games in the paper, and would recommend stating it clearly. Considering that the best model's Elo is probably based purely on comparisons to its earlier versions, and considering the possible error of the Elo estimate, it is possible that AlphaGateau's Elo rating will drop significantly if fully-trained AlphaZero models are added to the pool of players. This paper will benefit significantly from fitting the size of the experiments to the compute resources available to the authors. I would suggest training significantly smaller models for 100x more training steps, using the same compute budget. 
In its current form, the paper only showcases the advantage AlphaGateau has in terms of data-efficiency at early training, in a regime where AlphaZero is severely undertrained. Technical Quality: 1 Clarity: 3 Questions for Authors: - What hyperparameters are used to calculate Elo scores? (number of models tested, number of games between each model pair, who played against who) Three minor suggestions: - It would be helpful for the reader if figures 5 and 6 had FLOPs on the x-axis instead of steps, or if training compute was reported. Plotting only steps, it is hard to understand if one training step of AlphaGateau is comparable to a training step of AlphaZero in terms of compute. - It would be easier and more convincing to use traditional methods for calculating Elo scores, such as BayesElo, which was used by Silver et al. That said, the authors provide clear explanations on their Elo calculation method. - The correct spelling of the metric is "Elo", named after Arpad Elo, rather than "ELO", which is a common typo. Confidence: 4 Soundness: 1 Presentation: 3 Contribution: 3 Limitations: No limitations other than those specified above regarding evaluation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
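The Elo-gap arithmetic in the review above (an 800-point gap implying the weaker player wins roughly 1 game in 100) follows from the standard logistic Elo model; a small sketch:

```python
def expected_score(rating_a, rating_b):
    """Standard logistic Elo model: expected score of player A against
    player B (1 = win, 0.5 = draw, 0 = loss)."""
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

# An 800-point underdog scores 1/(1 + 10^2) = 1/101 per game on expectation.
p_800 = expected_score(1000.0, 1800.0)

# A 1,500-point gap gives 1/(1 + 10^3.75), roughly 1 in 5,600; the
# review's "1 in 7,500" corresponds to a slightly larger gap (~1,550).
p_1500 = expected_score(1000.0, 2500.0)
```

This also shows why Elo estimates across huge gaps are fragile: at such expected scores, thousands of games are needed before the weaker player is expected to score even a single point.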
Rebuttal 1: Rebuttal: # Remarks > the paper does not provide any information on how the two models compare when they are fully trained. AlphaGateau seems to reach a performance plateau within 100 steps, but the AlphaZero model trained by the authors would likely keep on improving for orders of magnitude more training steps Initial tests on the AlphaZero implementation of PGX seemed to indicate that the model converges in less than 400 steps, with marginal improvements after 100 steps, which is why we limited our experiments to 100 steps. However, we didn't test the runs in the paper for more than 100 steps. We started new longer runs on the AlphaZero model, to include either in the main paper or in the appendix. > For comparison, the authors used a model size roughly 10% of Silver et al.'s AlphaZero, but trained for 100 steps compared to the original 700,000 steps. It is not clear if AlphaGateau's performance will be comparable with AlphaZero, given enough training steps. > This paper will benefit significantly from fitting the size of the experiments to the compute resources available to the authors. I would suggest training significantly smaller models for 100x more training steps, using the same compute budget. In its current form, the paper only showcases the advantage AlphaGateau has in terms of data-efficiency at early training, in a regime where AlphaZero is severely undertrained. In Silver et al., they trained for 700,000 steps; however, due to their hardware availability, they were using 5000 TPUs to generate games, and 64 TPUs to train the network. As such, they could have the TPUs in charge of generating the games always be on and constantly generate games, while the TPUs in charge of training could just grab a random batch of positions from the last 1,000,000 generated positions to do one "step" of gradient update. In contrast, for our experiments, we used the same 8 GPUs alternating between generating data and training the neural network. 
When training, we sample as many batches as required, without replacement, to fully cover the current frame window. What we call in the paper one step is the generation of around 130,000 positions, followed by around 500 to 1,000 batches to cover the full frame window (depending on the size of the frame window we used). As such, our 100 steps comprise up to 100,000 batches. This is still an order of magnitude less than Silver et al., but our model is also around an order of magnitude smaller. We will update the paper in order to clear up this confusing choice of term on our part. > or to heavily reuse training data between optimization steps As mentioned in the previous answer, we used a frame window, such that on average, each data point is reused around 7 times. We mentioned with Figure 7 part of our experiments on the impact of reusing training data, by varying the amount of data generated, and the amount of data reuse. > In the paper's current form, the reported Elo gap does not give the reader any knowledge about how AlphaGateau compares to a fully-trained AlphaZero, since Elo calculation is unreliable when using such large gaps. Each model is initially assumed to have an Elo of 1000, and plays 5 matches of 60 games each against 5 opponents. After each match the Elo of all the models is recomputed, and each opponent is chosen to be as close as possible to the current Elo evaluation of the model. Models with a large difference in Elo are effectively not appropriate to evaluate the Elo ratings, but this is reflected in the plotted confidence intervals. We included in the global PDF response a distribution plot of the Elo ratings of all the models that are included in the linear regression described in Eq (11), which shows that the ratings are quite continuous, without leaving any gaps that could bias the rating estimation. 
Each player played at least 300 games (150 as white and 150 as black), which was enough to keep the confidence intervals smaller than 100 Elo points.

# Questions

> What hyper-parameters are used to calculate Elo scores? (number of models tested, number of games between each model pair, who played against who)

At the time of submission, we had evaluated 255 players, and have since evaluated 977 players (pairs of (model, parameters)). Each player played at least 60 games against each of 5 other players that were already evaluated (such that the graph of matches is 5-edge-connected). As the opponents of each player depended on their successive rating evaluations, they are not easy to list. However, the full list of matches, as well as their outcomes (wins, draws, losses), is included in the supplementary material zip in `rankings.json`.

> It would be helpful for the reader if figures 5 and 6 had FLOPs on the x-axis instead of steps, or if training compute was reported. Plotting only steps, it is hard to understand if one training step of AlphaGateau is comparable to a training step of AlphaZero in terms of compute.

We do not easily have access to FLOPs data for our previous runs; however, we do have access to timing data, which should loosely correlate with FLOPs, as the same hardware was used for all experiments. We will include the corresponding plots in the appendix.

> It would be easier and more convincing to use traditional methods for calculating Elo scores, such as BayesElo which was used by Silver et al. That said, the authors provide clear explanations on their Elo calculation method.

Thanks, we were not aware of the existence of BayesElo. We compared the Elo from our method with the Elo from BayesElo (see plot in the global PDF) and got relatively consistent results, except that weaker players were rated higher by our method and, similarly, stronger players were rated lower, compressing the Elo range when compared to BayesElo.
The main effective difference is that a small number of players were rated around 10 to 20 points out of sync with the players closest to them in Elo.

--- Rebuttal 2: Title: AlphaZero Elo Comment: Thank you for the detailed reply, I still need to go through all of it but wanted to clarify one point first.

Reading the match-related numbers you specify, together with figure 1 in the PDF, I tend to believe your reported Elo scores are reliable, at least for the pool of AlphaGateau agents. It's also nice to see your Elo is not that different from that of BayesElo, making your results more robust.

My problem with the comparison to AlphaZero still stands, though. You claim (and I believe you) that your AlphaZero agents showed signs of saturation already after 100 'steps', meaning that according to figure 5 in the paper your AlphaZero models can only improve <300 Elo above an agent at 'step' 1, which is almost a purely random agent (right?). That means your AlphaZero still loses 15% of the time against an almost random agent, so clearly it failed to learn well.

It is not necessary to train with a DeepMind-sized compute budget to get a good chess agent. One can train a small model tuned to the budget size to get very quick gains in Elo, which will saturate quicker the smaller the model is. I honestly don't know what went wrong with your training; you clearly have enough compute to go through a sufficient number of batches. Is the data-generating model only updated each 'step'? That could partially explain the problem, as doing only 100 model updates is critically few in my experience. Although it doesn't explain why the model would saturate so quickly.

Could you specify how AlphaZero was trained? Specifically: how many games were played between optimization steps, how many batches were used each optimization step, and how often were the model weights updated (for the data-generation agent)?
To summarize, I think your internal Elo rating is solid, but these Elo numbers are not anchored to any competent non-AlphaGateau agent, which makes it impossible to make any claim about AlphaGateau's performance.

--- Rebuttal Comment 2.1: Comment: With regards to our assessment of AlphaZero, we could have been clearer in the previous answer. It doesn't saturate after only 100 iterations, but around 400. "With marginal improvements after 100 steps" was a poorly worded way of saying that the model stayed significantly worse than AlphaGateau, eventually reaching a rating between 300 and 500 Elo, compared to the more than 1500 of AlphaGateau. Seeing these results, we decided to only train AlphaGateau for 100 iterations, as it seemed that the behavior of the model during these early iterations was the most important part, with both AlphaZero and AlphaGateau improving significantly less afterwards. However, we now agree with your criticism that the asymptotic behavior is also important to include. We will include 500 steps of AlphaZero training in the revised version of the paper. We are currently running this experiment with our up-to-date code, with Elo ratings evaluated up to iteration 300, where the rating still only climbs from around 100 Elo at iteration 100 to around 450 by iteration 300.

We will also note that the AlphaGateau models don't saturate by iteration 100 either. This was not clear in our initial figures, as we only evaluated the Elo rating once every 5 iterations, but it is clearer in our new figures evaluated every second iteration, such as the one in the global rebuttal PDF. We would further like to extend the runs of the AlphaGateau models to at least iteration 200, and will include those new runs in the revised paper.

Yes, we only updated the data-generating model once every step, and chose how many games to generate in one step so as to saturate our GPUs. We could generate fewer games per step, but we would waste computing resources by doing so.
We experimented with running more than one full epoch between each generation step, but didn't notice any significant improvement beyond the first 10 steps, so we decided to stick to only one epoch per step (around 500-1,000 batches), as the training already took the majority of our compute time.

For the details of the AlphaZero training: we generated 256 games at each step, and used batches of 2,048 positions, for a total of 488 batches each step (except the first 6-7 steps, while the frame buffer is filling up). Each set of 256 generated games used newly updated network parameters. We also only used 128 MCTS simulations in each of our experiments.

We didn't attempt to anchor our Elo ratings to other real agents, as our main results are the improvements of our architecture compared to its AlphaZero basis. We did, however, let our latest 6-layer AlphaGateau model, fine-tuned for 20 steps, play around 600 blitz and bullet games on lichess, against mostly other bots, ending with an approximate Elo of 1800 in blitz and 2000 in bullet. However, we had to adjust the number of MCTS simulations to make our model take an appropriate, mostly constant amount of time to evaluate each move, which was often different from the number used during training, so these ratings are only indicative of the approximate abilities of that model.
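As an aside, to make the regression-based rating procedure discussed in this thread more concrete: after a logit transform, the Elo model is linear in the ratings, so all ratings can be fit jointly by least squares. The `fit_elo` helper below is a hypothetical minimal sketch of that idea only; our actual Eq (11) and implementation are not reproduced here.

```python
import numpy as np

def fit_elo(matches, n_players, anchor=0, anchor_rating=1000.0):
    """Fit Elo ratings jointly by linear regression on match outcomes.

    matches: list of (player_a, player_b, win_fraction_of_a) tuples.
    The Elo model P(a beats b) = 1 / (1 + 10**((R_b - R_a) / 400))
    implies 400 * log10(w / (1 - w)) = R_a - R_b, which is linear in
    the ratings, so least squares recovers them all at once.
    """
    X = np.zeros((len(matches), n_players))
    y = np.zeros(len(matches))
    for i, (a, b, w) in enumerate(matches):
        X[i, a], X[i, b] = 1.0, -1.0
        y[i] = 400.0 * np.log10(w / (1.0 - w))
    # Ratings are only identified up to an additive shift: pin one player.
    y = y - X[:, anchor] * anchor_rating
    keep = [j for j in range(n_players) if j != anchor]
    coef, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
    ratings = np.empty(n_players)
    ratings[anchor] = anchor_rating
    ratings[keep] = coef
    return ratings
```

With noiseless win fractions generated from known ratings, such a regression recovers them exactly, up to the arbitrary anchor.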
Summary: The paper explores a novel approach to reinforcement learning for Chess by utilizing a graph-based representation of the game state instead of the traditional grid-based representation. This method is based on GNNs and aims to overcome the limitations of CNNs used in previous models like AlphaZero. Specifically, the paper introduces the Graph Attention neTwork with Edge features from Attention weight Updates (GATEAU) layer, which enhances the classical GAT layer by incorporating edge features. The primary contributions of the paper include demonstrating that the new architecture outperforms previous models with a similar number of parameters and significantly accelerates the learning process. Additionally, the paper provides evidence that the model trained on a smaller variant of chess (5x5) can be quickly fine-tuned to perform well on the standard 8x8 chessboard, indicating promising generalization capabilities. The authors have made their code available, supporting the reproducibility of their experiments and findings. Strengths: **Novel Graph-based Approach**: This paper introduces a creative way to represent game states using a graph-based model instead of the traditional grid-based models. By employing Graph Neural Networks and the Graph Attention neTwork with Edge features from Attention weight Updates, the authors address significant limitations found in Convolutional Neural Networks used in models like AlphaZero. **Improved Learning Efficiency**: The proposed AlphaGateau model demonstrates much faster learning compared to traditional CNN-based models. Experiments show that AlphaGateau can achieve a significant increase in playing strength in a fraction of the training time required by AlphaZero, which is a notable improvement in learning efficiency. **Reproducibility**: The authors have made their code publicly available, supporting the reproducibility of their experiments. 
Weaknesses: **Limited Generalization Evidence**: The paper claims promising generalization capabilities, but the experimental evidence is limited to a small variant of chess (5x5) and standard chess (8x8). It would strengthen the paper to include additional experiments on other games or variants, such as Shogi or other board games with similar complexity, to demonstrate the broader applicability of the proposed architecture. **Comparative Analysis**: The paper lacks a thorough comparative analysis with other state-of-the-art models beyond AlphaZero. Including comparisons with recent advancements in graph-based reinforcement learning models would provide a clearer picture of the relative performance and innovations of the proposed method. **Real-world Applicability**: While the focus is on reinforcement learning in chess, discussing potential real-world applications of the proposed graph-based approach would broaden the impact of the work. Highlighting areas where this approach could be applied, such as other strategic games or decision-making problems in different domains, would make the contributions more compelling. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How does AlphaGateau compare to other graph neural network-based reinforcement learning models, such as those presented by Ben-Assayag and El-Yaniv (2021) [1]? 2. Have the authors considered applying AlphaGateau to other strategic games beyond chess? If so, what preliminary results or observations can the authors share? 3. What potential real-world applications besides chess do the authors envision for the graph-based approach proposed in this paper? [1] Ben-Assayag, S., & El-Yaniv, R. (2021). Train on small, play the large: Scaling up board games with alphazero and gnn. *arXiv preprint arXiv:2107.08387*. 
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors have acknowledged some limitations of their work, such as the restricted computational resources which prevented the training of deeper networks, and their focus solely on chess without extending experiments to other games. They also discussed challenges related to the reproducibility of their results due to the non-deterministic nature of parallelized GPU code. However, the paper could benefit from a deeper discussion on the broader implications of these limitations, particularly how they might affect the generalizability and robustness of the proposed method. Regarding potential negative societal impacts, the paper does not explicitly address this aspect. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > 1. How does AlphaGateau compare to other graph neural network-based reinforcement learning models, such as those presented by Ben-Assayag and El-Yaniv (2021) [1]?

We are not aware of GNN-based RL models that can be applied to chess besides a slightly adapted version of ScalableAlphaZero (from Ben-Assayag and El-Yaniv (2021)). ScalableAlphaZero differs from AlphaZero mainly in that it replaces the CNN layers with GIN (Graph Isomorphism Network) layers, using a chess king grid as the graph. This should mainly be less expressive than AlphaZero, as the aggregation cannot treat the 8 surrounding squares differently, whereas a CNN convolution kernel can. The main advantage of ScalableAlphaZero is that it is able to scale, which allows samples of differently sized boards to be given during training and testing. This allows for training on scaled-down variants of Gomoku and Othello. As such, we believe that ScalableAlphaZero would perform similarly to AlphaZero under the same constraints, or possibly a little worse, due to the more complex functions to evaluate and the lower expressivity. We were not able to perform a fine-tuning experiment with ScalableAlphaZero to compare with AlphaGateau yet, as the ScalableAlphaZero code is not available, and the model would require substantial changes to adapt to the action space of chess being linked to the graph edges rather than the graph nodes. We did not compare AlphaGateau's performance to ScalableAlphaZero's on Othello or Gomoku either, as those games do not seem like they would benefit much from a graph-based representation.

> 2. Have the authors considered applying AlphaGateau to other strategic games beyond chess? If so, what preliminary results or observations can the authors share?

We have considered applying AlphaGateau to Shogi and Risk.
We haven't tested Shogi yet, as we were more familiar with chess while lacking knowledge and intuition for Shogi, and so could more easily interpret the results and play patterns of the model in chess. We plan to address Risk in the future, but several challenges must be solved first, including handling more than 2 players, randomness (which can be handled by improvements to AlphaZero such as DeepNash), hidden information, and more complex turns.

> 3. What potential real-world applications besides chess do the authors envision for the graph-based approach proposed in this paper?

We believe that these kinds of methods help in handling graph-based tasks, and could potentially be used, in this kind of adversarial setting, as a basis for electric network optimization or traffic minimization, for example.

--- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal and I will maintain my score!
Summary: The authors demonstrate a GNN that works on chess and is amenable to generalization. Strengths: # originality This is the first chess GNN approach that performs well that I'm aware of in the literature, and it requires some innovations in the edge representation. # quality The results look good, although the computational limitations of the work mean that a larger test would be good to demonstrate that this model scales up. # clarity The paper is clear and readable. # significance This is a new result in chess AI and looks like it could matter to the larger RL community. I'm not sure this will impact chess engines much in practice, though, as the GNN is likely much harder to optimize and implement than CNNs or heuristics; the memory usage of the attention mechanism, for example, would make implementing this in a high-performance search code-base difficult. Weaknesses: The authors only look at self-play for model training; the original AlphaGo models were trained on human games to establish that the algorithm performed well, before starting the much more expensive self-play training. The lack of supervised training evaluations makes the claims of strong performance limited to a single axis (sample efficiency). This result mostly seems to arise from chess-specific optimizations applied to an existing algorithm [18]; while the authors suggest that it could be applied to other games, they only provide evidence in chess. The fine-tuning results require more training time (and thus, I'm assuming, compute) than the initial training, which suggests the pre-training, while beneficial, is of limited efficacy unless it can be made more efficient. I'm also concerned with the attention-based pooling leading to degraded performance when the graph gets larger, as chess is a game that cares much more about the max than the average.
I would have liked to see a more complete training run, but I understand that RL is very compute intensive, so am not letting this lack negatively impact my score. Technical Quality: 3 Clarity: 4 Questions for Authors: Is the MCTS process identical between the AlphaGateau and AlphaZero models? Or does the graph representation change it? Could the graph representation be leveraged to allow a different search algorithm? Could the authors provide PGNs with model predictions and some discussion of the differences between the AlphaGateau and AlphaZero models? There are patterns that CNN-based models are good at learning and some that are more difficult (see [1] for examples), and showing differences between the models would strengthen this result. Does the model handle the white bishop well when scaled up? If it does, that would be additional evidence of generalization when scaling up, since the moves would not have been seen before. Can you provide more details on the Elo calculation? You use the terms ELO and elo and seem to refer to a variation of the Elo rating system created by Arpad Elo, but don't give many details beyond the optimizer. Are you using an Elo- or Glicko(-2)-type system or something else? Section A has no citations, so it's unclear. [1] McGrath, T., Kapishnikov, A., Tomašev, N., Pearce, A., Wattenberg, M., Hassabis, D., Kim, B., Paquet, U. and Kramnik, V., 2022. Acquisition of chess knowledge in AlphaZero. Proceedings of the National Academy of Sciences, 119(47), p.e2206625119. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Discussed above Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Remarks

> The results look good, although the computational limitations of the work mean that a larger test would be good to demonstrate that this model scales up.

We are aware that the reduced scale compared with the original AlphaZero is an issue; we have since run a fine-tuning experiment with a model of depth 8 that will replace the model of depth 6 in the initial paper. The corresponding plot is included in the global PDF.

> the memory usage on the attention mechanism for example would make implementing this in a high performance search code-base difficult.

We do not compute attention between every pair of nodes, but only where an edge is already present. As such, it shouldn't represent a significant memory sink: there is only one attention coefficient per edge, and each edge already carries a feature vector of size 128.

> This result mostly seems to arise from chess specific optimizations applied to an existing algorithm [18], while the authors suggest that it could be applied to other games they only provide evidence in chess.

Yes, this paper only experimented on chess. It would be relatively straightforward to extend the model to Shogi, as the rules are quite similar besides piece-dropping, which could provide insights into the capacity of the model to learn general game concepts if a hybrid chess/Shogi model were trained in some way. We also plan to look into extending this model to the game of Risk, which is even more suited to graph representations, but several challenges will need to be solved, as Risk has more than 2 players, hidden information, and randomness, which are not currently well handled by AlphaZero (although there exist extensions that focus on those issues, like DeepNash).

> The fine tuning results require more training time (and thus I'm assuming compute) than the initial training; this suggests the pre-training, while beneficial, is of limited efficacy unless it can be made more efficient.
The fine-tuning on 8x8 chess is slower than the initial training on 5x5 chess, but the fine-tuning starts with a model that is around 1,000 Elo points higher than a randomly initialized model, which shows that it can quickly transfer the learning from a smaller variant to the regular one. It still requires a significant amount of training to achieve optimal performance, but it should be possible to train a large model with strong hardware to provide a good baseline that can then be fine-tuned with more modest hardware on a wide range of chess variants, such as chess960, king of the hill, or others (in a similar way to how LLMs are used).

> I'm also concerned with the attention-based pooling leading to degraded performance when the graph gets larger as chess is a game that cares much more about the max than the average.

I'm not quite sure I understand the issue raised. In our experiments, the graph remains relatively small, containing 64 nodes and 1858 edges for 8x8 chess (25 nodes and 455 edges for 5x5 chess).

> I would have liked to see a more complete training run, but I understand that RL is very compute intensive so am not using this lack to negatively impact my score.

The original AlphaZero paper trained for 700,000 "steps". However, this is confusing, as we also use the term step in our paper (and will clarify this point in the paper) but apply it to something else. AlphaZero's steps are equivalent to batches in our paper, and we evaluate between 500 and 1,000 batches in one of our steps (depending on the size of the frame window). As such, our 100 steps comprise up to 100,000 batches. This is still an order of magnitude less, but our model is also around an order of magnitude smaller. Following the reviews, we are however also working on a base AlphaZero run with 500 steps, to assess whether it tapers off or continues to increase in performance.

# Questions

> Is the MCTS process identical between AlphaGateau and AlphaZero models?
> Or does the graph representation change it?

> Could the graph representation be leveraged to allow a different search algorithm?

Yes, we use the same MCTS algorithm as Gumbel MuZero, which uses the value and policy neural network oracle as a black box. We did not find a way to use the graph representation to improve it.

> Could the authors provide PGNs with model predictions and some discussion of the difference in the AlphaGateau and AlphaZero models? There are patterns that CNN based models are good at learning and some that are more difficult (see [1] for examples) and showing differences in the models would strengthen this result.

We did not analyse the AlphaZero model in detail, as it performed quite poorly. For illustration, after 170 steps, the AlphaZero model's distribution of first moves over 240 games as white is `b2b3: 48, Ng1f3: 44, f2f3: 35, g2g4: 26, Nb1a3: 25, b2b4: 19, c2c4: 13, Ng1h3: 9, a2a4: 4, c2c3: 4, h2h3: 3, d2d4: 3, d2d3: 3, a2a3: 2, Nb1c3: 1, g2g3: 1`.

> Does the model handle the white bishop well when scaled up? If it does that would be additional evidence of generalization when scaling up since the moves would not have been seen before.

The model seems to have a reasonable grasp of 8x8 chess, even though it was only trained on 5x5 positions. We can see from its distribution of first white moves, `b2b3: 16, e2e4: 15, g2g3: 13, a2a3: 12, h2h3: 11, f2f4: 11, c2c3: 11, a2a4: 11, e2e3: 9, d2d3: 8, h2h4: 7, f2f3: 7, b2b4: 6, g2g4: 5, c2c4: 3, Ng1f3: 2, d2d4: 2, Nb1c3: 1`, that it is drawn to playing double pawn moves, which weren't available in 5x5 chess. We also included a PGN of a game it played as white where it was able to use and understand the white bishop, but it also probably undervalued the knight, as it was quite a restrained piece in 5x5.

> Can you provide more details on the Elo calculation?
We use the base Elo rating model, but instead of updating the ratings after each game, we periodically run a linear regression following Eq (11) to fit Elo ratings to the players.

--- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. As I said in my review, I am not judging the work by the amount of compute. I think this is an informative paper on GNNs for game play by itself, and while I would like a more detailed examination of its scaling with respect to other models, I don't think that is a requirement for acceptance; the other reviewers disagree, however, so I will continue to watch the discussion. For now I maintain my score.

Also, to explain my point about scaling: the GNN presented here does not use max pooling; instead it uses a weighted average (via attention). This will limit the depth at which the model can "see" important nodes, since information about them will be lost as it passes up the network. This is different from the PUCT algorithm, which maintains information about the best lines even if they are very deep, by counting the number of times each node has been searched and using that as its deciding factor instead of the approximated Q value, which is an average. Thus I suspect that there will be scaling issues with this model as the networks increase in scale.
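To make the memory argument from the rebuttal above concrete: attention coefficients exist only where an edge does, so the cost is proportional to the number of edges, not to the square of the number of nodes. The following is a simplified, hypothetical GAT-style sketch of such edge-restricted attention; it is not the actual GATEAU layer, which additionally carries and updates a feature vector on each edge.

```python
import numpy as np

def edge_attention_layer(H, edges, W, a):
    """Sparse GAT-style attention with one coefficient per existing edge.

    H: (n, d) node features; edges: (m, 2) array of (src, dst) pairs;
    W: (d, d_out) projection; a: (2 * d_out,) attention vector.
    Memory is O(m) because scores are computed only along edges.
    """
    Z = H @ W
    src, dst = edges[:, 0], edges[:, 1]
    scores = np.concatenate([Z[src], Z[dst]], axis=1) @ a
    scores = np.where(scores > 0, scores, 0.2 * scores)  # LeakyReLU
    # Softmax over the incoming edges of each destination node.
    alpha = np.exp(scores - scores.max())
    denom = np.zeros(H.shape[0])
    np.add.at(denom, dst, alpha)
    alpha = alpha / denom[dst]
    # Weighted aggregation of source features into each destination.
    out = np.zeros_like(Z)
    np.add.at(out, dst, alpha[:, None] * Z[src])
    return out
```

With a zero attention vector the layer reduces to mean aggregation over each node's incoming edges, which makes it easy to sanity-check; the reviewer's point is that any such averaging aggregation, unlike a max, dilutes information about a single critical node as the graph grows.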
Rebuttal 1: Rebuttal: We thank the reviewer for their time and thoughtful opinions. As pointed out by reviewer abiC, we were unclear in our terminology and caused confusion when compared to Silver et al. (2017) with regards to what we called steps, leading to a first impression of our experiments being smaller than they actually were. We will clarify this point in our paper.

Since the initial submission, we have been able to train a deeper model, with 8 layers instead of 5 and 6, which further confirms our initial results, and we included a revised version of Figure 6 in the joint PDF. We also retrained the base 5-layer model in order to be able to evaluate it every second iteration rather than every fifth, as it was initially. In addition, we found a bug in our implementation that made some of the models presented in the figure have slightly different hyper-parameters than reported. We have since re-run all affected models, resulting in similar or slightly improved results. The paper will be updated with the new 8- and 5-layer runs, as well as the fixed reruns.

We also plan to include in the appendix a version of each plot with `running time` on the x-axis rather than `iteration`, as a proxy for FLOPs, following the recommendation of reviewer abiC. A PGN of a game played as white on 8x8 chess by a model trained only on 5x5 chess was also included in the joint PDF, to show the model's ability to learn general patterns and rules in a simplified version of the game. We will add more PGNs to the appendix of the full paper.

We also recognize that our method of computing Elo ratings could have been better justified. We included in the joint PDF two plots showing that the collection of models tested didn't leave gaps in the rating range, and that our method produced results consistent with ratings estimated using BayesElo.
We would be happy to engage in further discussion if you feel there are any remaining questions that we left unanswered, or if our previous answers were lacking in some way. Pdf: /pdf/d508b3641f55c4833595de4dbb00ce529e437a3d.pdf
NeurIPS_2024_submissions_huggingface
2024
RandNet-Parareal: a time-parallel PDE solver using Random Neural Networks
Accept (poster)
Summary: This paper proposes a new method for sequentially predicting and correcting numerical simulations by introducing random neural networks. The RandNets are single-layer feed-forward neural networks in which only the output layer is trained. The numerical experiments have shown the improved training efficiency and scalability of the proposed method compared to the baseline models. Strengths: - This paper is well-written and well-organized. The theoretical guarantee of RandNet-Parareal is provided. - Different types of PDEs have been used to evaluate the model performance. Weaknesses: - The studied problem is domain-specific, which might not be of general interest to the scientific machine-learning community. - In Sec 5 Numerical Experiments, this paper provides the comparison of speed-ups/runtimes. It would be better to also show the varying solution accuracy with different setups. - Sections 2 and 3 might be shortened and some of these parts can be moved to the Appendix. The Robustness study (Appendix C) could be moved to the main text since it highlights the benefits of the proposed method. Technical Quality: 3 Clarity: 3 Questions for Authors: In lines 7-8, is it a character "x" or a symbol "\times" for `x125`? Probably a symbol is preferred. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: N/A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for recognizing that our method shows improved training efficiency and scalability compared to the baseline methods. We also appreciate your acknowledging that our paper is well-written and well-organized, contains theoretical developments, and exemplifies RandNet-Parareal on different types of PDEs. In the paragraphs below, we reply in detail to all the weaknesses (W) you raised. We hope we can convince you that the paper merits a higher score, especially in view of our explanations of the importance of advancing the state of the art on (parallel-in-time, PinT) numerical solvers and the implications of our work for the scientific computing community. We also advise you to look at the additional theoretical results (computational complexity, Tbl. A [pdf]) and empirical evidence (two more challenging examples, the 2D and 3D Brusselator PDEs, Fig. C [pdf]) that we added in response to other referees' comments and questions during the rebuttal period, which we will incorporate in the camera-ready version of the paper.

**W1: The studied problem is domain-specific, which might not be of general interest to the scientific machine-learning community.**

**A**: We divide our answer into the following two parts:

- _Importance of PinT schemes for scientific machine learning (ML)/computing and science in general_: Time- and space-efficient solving of (O/P)DEs remains one of the most important research directions in science and engineering. The broader availability of more affordable multicore high-performance computing resources makes parallelization increasingly attractive. Much interest in the literature is concentrated on new numerical schemes that allow for sub-linear in-time parallelization, surpassing slow sequential solvers, which is especially crucial for long time horizons. Innovation in the area of PinT schemes is actively supported by research funding agencies (e.g.
the ExCALIBUR cross-cutting research project "Exposing parallelism: Parallel in Time" involving the Met Office and UKAEA). Our proposed PinT algorithm leverages the approximation and generalization properties of a flexible class of random NNs, advancing the state of research on numerical Parareal solvers. Our work matches the NeurIPS Primary Area topic "Machine learning for physical sciences", opens the door to future research on randomized PinT numerical methods, and has implications for scientific computing, physics, and beyond [20,21,24]. Potential extensions of RandNet-Parareal include solvers for SDEs, coupled ODEs, and PDEs with noisy boundary conditions. We pursue some of these directions in our ongoing work.

- _Scientific PinT computing for ML_: efficient ODE solvers have started to gain much attention in the context of Neural ODEs, which are used as continuous-time models of certain NNs [22]. Other notable examples of ML models where PinT solvers could be game-changers are diffusion and generative models [23], for which the dedicated solvers that have been developed could be further parallelized in time.

**W2: In Sec. 5 Numerical Experiments, this paper provides the comparison of speed-ups/runtimes. It would be better to also show the varying solution accuracy with different setups.**

**A**: Thank you for the question, which is similar to Q3 of Ref FREq and W1 of Ref Eu5i. The fine solver $\mathcal{F}$, chosen by the user, defines the accuracy of the competing methods. All Parareal PinT schemes target the solution provided by $\mathcal{F}$. For any given $\epsilon>0$, the solutions of all converged algorithms are $\epsilon$-close to the solution of $\mathcal{F}$ (eq. (4) [paper]); $\epsilon=5e^{-7}$ is set for all the examples considered.

_Empirical evidence_: In the table below (as in our response to Q3 of Ref FREq), we report the accuracies and runtimes (shown in parentheses) for RandNet-Parareal, Parareal, and nnGParareal. The accuracy is measured as the max. abs.
error (mean across intervals) w.r.t. $\mathcal{F}$ (run sequentially). RandNet-Parareal has the best accuracy-cost trade-off, and one can easily see that, under the same time budget for all approaches, RandNet-Parareal achieves higher accuracies than the benchmarks.

|PDE|RandNet-Parareal|Parareal|nnGParareal|
|-|-|-|-|
|Burgers' $d=128$|$1.06e^{-8}$ (1h 2m)|$1.85e^{-8}$ (8h 54m)|$1.32e^{-7}$ (1h 39m)|
|Diffusion-Reaction $d=7.2e^2$|$3.56e^{-8}$ (23m)|$1.83e^{-8}$ (1h 40m)|$5.71e^{-7}$ (1h 11m)|
|Diffusion-Reaction $d=3.3e^3$|$8.56e^{-10}$ (33m)|$2.45e^{-8}$ (7h 52m)|not converged|
|Diffusion-Reaction $d=2.5e^4$|$8.09e^{-11}$ (1h 57m)|$7.43e^{-9}$ (9h 50m)|not converged|
|SWE $d=3.1e^4$|$6.75e^{-8}$ (4h 9m)|$5.15e^{-8}$ (15h 43m)|not converged|
|SWE $d=6.1e^4$|$8.54e^{-9}$ (12h 34m)|$2.84e^{-8}$ (19h 30m)|not converged|

To address comments from other Referees, we derived the theoretical complexity of RandNets and compared it to nnGParareal (see Table A [pdf]). Figs. A-B [pdf] plot the theoretical _model_ and _total_ costs (in $\log_{10}$(hours)), respectively, across dimension $d$ (and cores/subintervals $N$). To calibrate the constants in the complexity bounds, we used the total empirical cost in Fig. 1 [paper] and its breakdown in Tbl. 6 [paper]. Fig. A shows the significantly superior scalability of RandNet-Parareal w.r.t. nnGParareal. Fig. B shows that, with the cost of $\mathcal{F}$ added, our results are fully consistent with those presented in the paper and in the table in our response to Q3 of Ref FREq. **W3: Sections 2 and 3 might be shortened and some of these parts can be moved to the Appendix. The Robustness study (Appendix C) can be moved to the main text since it highlights the benefits of the proposed method.** **A**: Thank you for this helpful suggestion. We will introduce this change in the camera-ready version of the paper. **Minor Q. In lines 7-8, is it a character ``x'' or a symbol ``$\times$'' for x125?
Probably a symbol is preferred.** **A**: Thank you for mentioning this. We will correct it accordingly. --- Rebuttal Comment 1.1: Title: Response for rebuttal Comment: Thanks for your rebuttal, especially the response to **W2**. I still have some reservations about the significance of the work. I will be maintaining my score. --- Rebuttal 2: Title: Follow-up to Reviewer awZW's comment Comment: Thanks for your comment on our rebuttal and W2. We would appreciate it if you could read what we wrote as a comment to Reviewer FREq about the impact of our paper on AI, which we report below for your convenience. We hope this may reduce some of your reservations about the significance of our work. Properly communicating the relevance and impact of our method is very important for us. Our work exemplifies how ML methods can advance the various fields of science and engineering where solving (O/P)DEs is needed; see also our response to Ref FREq's further question. We mentioned these applications in our initial rebuttal as our submission belongs to the Primary Area of *ML for Physical Sciences*. However, AI is also one of the fields that could greatly benefit from progress on efficient ODE solvers, and thus from our method. This is why we gladly provide examples of how our approach can also assist ML and AI, and will add these points in the camera-ready version of the paper. ODEs are a crucial building block of several relevant techniques, such as *Diffusion models (DMs)*, *Neural ODEs*, *Optimal control and reinforcement learning*, and *Optimization for ML models*, to name a few. Below, we provide an example of how our work could lead to substantial advancements for DMs.
- _Solvers for DMs_: In the context of DMs, the continuous-time reversal process can be described by a probability flow ODE, defined by the score function, usually approximated with deep (convolutional) NNs (note that the ODE formulation offers significant advantages over SDEs in high dimensions). Faster ODE solvers can improve sampling speed, yielding faster image synthesis. The main existing directions in the literature involve using classical sequential solvers [30], developing faster, dedicated ones such as DDIM [24], DDPM [29], DPM-Solver [25], and Heun [26], and parallelizing the autoregressive sampling process of DMs (by using Picard-Lindelöf iterations for the ODE/SDE) [27,28]. The latter emerged mainly from the need for solutions that avoid the bottlenecks of sequential solvers. To the best of our knowledge, modulo this particular parallelized sampling, PinT schemes have not yet been used in this context. **This is the gap this paper could fill, as the existing successful sequential dedicated solvers could be embedded into our proposed RandNet-Parareal**. We expect this to be straightforward, since the goal is to collect samples via solving the corresponding (diffusion) ODEs on $[0,T]$, where RandNet-Parareal immediately finds its purpose. We refer to our latest comment to Reviewer FREq for an additional discussion on the GPU implementation, which could further improve the efficiency of our method. We hope these clarifications provide you with evidence of the expected impact/implications of our method in the context of generative AI models and beyond. **References used above:** [24] J. Song, C. Meng, S. Ermon. Denoising diffusion implicit models. 2020. [25] C. Lu, Y. Zhou, F. Bao, J. Chen, C. Li, J. Zhu. DPM-solver: A fast ODE solver for diffusion probabilistic model sampling in around 10 steps. 2022. [26] T. Karras, M. Aittala, T. Aila, S. Laine. Elucidating the design space of diffusion-based generative models. 2022. [27] A. Shih, S. Belkhale, S.
Ermon, D. Sadigh, N. Anari. Parallel sampling of diffusion models. 2023. [28] Z. Tang, J. Tang, H. Luo, F. Wang, T.-H. Chang. Accelerating parallel sampling of diffusion models. 2024. [29] J. Ho, A. Jain, P. Abbeel. Denoising diffusion probabilistic models. 2020. [30] Y. Song, J. Sohl-Dickstein, D.P. Kingma, A. Kumar, S. Ermon, B. Poole. Score-based generative modeling through stochastic differential equations. 2021.
Summary: This paper proposes a method to accelerate the simulation of partial differential equations (PDEs) by converting them into systems of ordinary differential equations (ODEs). It then utilizes a framework that merges random neural networks and the parareal approach, termed RandNet-Parareal. For validation, three complex systems of PDEs are considered: the Burgers equation, the diffusion-reaction equation, and the shallow water equation. Results show gains in computational cost compared to other methods. Strengths: 1. The proposed method accelerates neural PDE simulations by combining random neural networks with parareal concepts. 2. The considered PDEs are prototypical and complicated and exhibit the efficacy of the proposed approach. Weaknesses: 1. The authors have not mentioned the trade-off between accuracy and computational cost, which is essential for high-fidelity simulations. 2. State-of-the-art comparisons are lacking, restricting the evaluation of the proposed method against random neural network-based PDE solvers. 3. The method is based on conventional numerical solvers, and hence extending its applicability to complex geometries in higher dimensions may suffer from the curse of dimensionality. 4. It is often the case that random neural networks do not scale well to deep networks, and not much gain is achieved when using a deep network. The paper does not discuss extending the method to deep neural networks and what gains could be achieved. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. The performance of the proposed method and the compared methods is not quantified through a metric. It would be interesting to see the trade-off between accuracy and computational cost. 2. The method is not compared with other neural PDE solvers, specifically random neural network-based PDE solvers aiming to accelerate PDE simulations. Assessing the gain achieved compared to the methods proposed in the literature would be interesting. 3.
As the method depends on traditional mesh-based numerical solvers, can the authors provide insights on how the method would perform in complicated geometries, particularly in higher dimensions? 4. Can the authors comment on the performance of their method when using a deep network instead of a shallow network? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors have mentioned the limitations of the work related to coarse numerical solvers and stiff systems. There are no social or ethical issues concerning the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for acknowledging that our method succeeds in accelerating the solutions of complex PDEs. We appreciate that you recognize the efficacy of our approach and see no limitations beyond those we mentioned in the paper. In the paragraphs below, we reply in detail to your questions (Q) and weaknesses (W), grouped by topic. We hope to convince you that our paper merits a higher score, as it pushes the boundaries of numerical solvers in the field of scientific computing by proposing a new direction of randomized parallel-in-time (PinT) numerical (O/P)DE solvers. **W1 & Q1: Discussion of the trade-off between accuracy (w.r.t. some metric) and computational cost, essential for high-fidelity simulations** **A**: Thank you for the question, which is similar to Q3 of Ref FREq and W1 of Ref Eu5i. The fine solver _F_, chosen by the user, defines the accuracy of the competing methods. All Parareal PinT schemes target the solution provided by _F_. For any given $\epsilon>0$, the solutions of all converged algorithms are $\epsilon$-close to the solution of _F_ (see eq. (4) in the paper, where another accuracy criterion could be used). _Empirical evidence_: In the table in our response to Q3 of Ref FREq (not copied here due to the space limit), we report the accuracies and runtimes for RandNet-Parareal, Parareal, and nnGParareal. The max. abs. error (mean across intervals) w.r.t. _F_ (run sequentially) is reported with the runtimes in parentheses. $\epsilon=5e^{-7}$ is chosen for the examples in the paper. One can clearly see that RandNet-Parareal has the best accuracy-cost trade-off. _Theoretical results_: To further address your comments and W1 of Ref Eu5i on the computational costs, we derived the theoretical complexity of RandNets and compared it to nnGParareal (see pdf for the details). Figs. A-B plot the theoretical _model_ cost and _total_ cost (in $\log_{10}$(hours)), respectively, across dimension $d$ (and cores/subintervals $N$).
To calibrate the constants in both complexity bounds, we used the total empirical cost in Fig. 1 and its breakdown in Tbl. 6 [paper]. Fig. A shows the significantly superior scalability of RandNet-Parareal w.r.t. nnGParareal, and Fig. B shows that, once the cost of _F_ is added, our results are fully consistent with the empirical results in the paper and the table in our response to Q3 of Ref FREq. **W2 & Q2: Comparison with other state-of-the-art and neural PDE solvers, specifically RandNets-based ones, and assessment of gains achieved.** **A**: We kindly refer you to our extensive responses to the similar questions Q1 of Ref FREq and W2 & Q2 of Ref Eu5i, which we cannot repeat here due to space limits. We detailed why we focus exclusively on PinT (i.e., Parareal, nnGParareal) and not sequential (neural PDE) benchmarks. Importantly, RandNet-Parareal is the first PinT scheme in general, and the first Parareal algorithm in particular, to exploit random NNs, so no other NN-based PinT benchmark could be included in our comparative study. As the RandNet-Parareal construction is valid for any fine solver, any (neural) (O/P)DE solver can be readily incorporated as _F_ without loss of generality. **W3 & Q3: The method is based on mesh-based numerical solvers. Comment on applicability to complex geometries in higher dimensions (curse of dimensionality (CoD)).** **A**: We split the answer to this interesting question as follows:
- _CoD in the learning_: Our goal is to approximate correction functions in high dimensions. NN functions are universal approximators (UA) (for example, they are dense, in the sup norm, in the set of continuous functions defined on compacta). Random NNs with random inner weights are provably UA for functions of certain regularity (related to the Radon-wavelet integral representation in [4]; see [12] for a more general setup). See also our reply to your W4 & Q4 below.
Notably, under specific regularity assumptions on the correction function, it is possible to establish whether RandNets suffer from the CoD (i.e., whether $M$ needs to grow exponentially in $d$ to maintain the same approximation accuracy) [4]. This opens avenues for provable (potentially CoD-free) speed and uncertainty quantification improvements with RandNet-Parareal, some of which we pursue in our ongoing work.
- _CoD in the cost_: Despite numerous attempts, state-of-the-art Parareal algorithms still suffer from an unfavourable cost dependence on the dimension. We argue in our answer to W1 of Ref Eu5i and our answer to your W1 & Q1 that our main competitor, nnGParareal, recently proposed in [1] to reduce the GP training cost, also has this issue (see pdf), while RandNet-Parareal needs no hyperparameter tuning and has linear-in-$d$ complexity. We note that the matrix $X$ of neurons activated with the ReLU activation can be sparse. This implies that RandNets' cost can be further improved, since the complexity of sparse operations is proportional to the number of nonzero matrix entries.
- _Complex geometries in higher dimensions_: _F_ can be a symplectic, variational, energy-preserving, or any other integrator preserving the system's geometric properties. Using RandNet-Parareal with such _F_ is straightforward. **W4 & Q4: Discussion of poor scaling of RandNets to deep architectures and possible gains of using a deep network instead of a shallow one** **A**: The poor scaling of RandNets to deep architectures is not always observed. The statistical properties of RandNets (or random feature NNs) have been studied in a sizeable body of work [18,19]. Asymptotic characterizations of the test error of _shallow_ RandNets have been derived [15,16,17]. Similar studies in the deep case [13,14] show the equivalence of deep RandNets to deep linear Gaussian models and, importantly, provide instances where testing performance _improves_ as a function of depth.
Nevertheless, deep RandNets incur a higher training cost, so using _shallow_ RandNets may still be preferable in our method. We explore this in our ongoing work. --- Rebuttal Comment 1.1: Title: Last day for discussion Comment: Reviewer nFhC, today is the last day for discussion. I hope you can take a moment to respond to the authors' rebuttal. --- Rebuttal Comment 1.2: Title: Raising my score Comment: Thank you so much for providing detailed answers. Considering all the answers, I have raised my score from 5 to 6. It would be nice to include the trade-off results in the revised paper.
Summary: The authors introduce a numerical algorithm that computes the solution to a large system of ordinary differential equations (ODE) "parallel in time". The main idea of the solver is based on the existing "Parareal", which introduces parallelism by running a sequential, fast, and inaccurate ODE solver and then corrects it in parallel with a more accurate correction scheme. The novelty in this manuscript is that this correction scheme is done using random neural networks, i.e., networks where the hidden weights and biases are chosen at random and then fixed. The benefit of this is that training time for the network is reduced significantly, as only the last layer must be approximated, which is possible using a linear solve. The authors demonstrate this efficiency and the parallel in time property on several nonlinear, time-dependent partial differential equations, discretized to obtain the required large ODE system. Strengths: Using random neural networks in a parallel-in-time setting is (to my knowledge) a new approach, and helps to mitigate issues with earlier versions of algorithms (e.g. the GP version of ParaReal, as presented). To that end, the work is a combination of well-known techniques to solve a problem in a new way. The authors discuss and demonstrate how their approach differs from earlier work, mostly from ParaReal with GP, fine-scale solvers, and a classical ParaReal approach. The paper is very well written, with clear descriptions of the algorithm and numerical results, and even includes scaling results for multi-core systems. The authors also include a robustness study of the approach in the appendix. Weaknesses: 1) There is no theoretical analysis beyond restating existing results from the literature. For example, there is no complexity analysis for the random network setting, which should have been relatively straightforward (given that it involves only a single linear solve, for which complexity results are available). 
2) The numerical scenarios chosen in the paper are not particularly challenging on their own. It seems to me that the parallel in time setting itself is what makes them challenging, not the high number of degrees of freedom (e.g., 10^5 is not very large), nor the particularly high spatial dimension (one and two), or the complexity of the PDE (mildly nonlinear). Of course, for parallel-in-time algorithms these are challenging problems nonetheless, but the examples do not demonstrate why parallel in time in general makes more sense than just solving the system with a better algorithm (e.g. higher order, other basis functions, etc.). For example: it is not reasonable that solving Burgers equation on a 1D line takes 13 hours on the "fine scale" (table 1). This looks like an unnecessary difficulty for a problem that should be simple to solve, especially with only 128 degrees of freedom (variable d). 3) It seems to me that the main novelty of this paper is to replace the (numerically suboptimal) Gaussian process with a neural network where the internal weights are randomly chosen. It is not clear to me why this particular change is any better than improving the numerics for the GP setting, or using any other possible solution (e.g., just a linear mapping as approximation, or running a fine-scale solver on multiple scales, or using polynomial regression, ...). It is probably not possible to perform these additional experiments during the rebuttal time, but the manuscript does not even state any other possibilities that were ruled out. 4) Similarly, the state of the art focuses very much on ParaReal, and not on any other parallel in time solver. While this may be acceptable if ParaReal was the most advanced PinT solver available, it is not clear (because it is not stated) if that is the case in general (or at least for the chosen examples). A few minor issues: 1) I would avoid using "nns" for nearest neighbors (l.35), because it is confusing with "NNs" (neural networks). 
2) l.87: it is strange to say "24 core processors, 48 cores". It is clear that 24*2=48; I hope that is what the authors mean. Technical Quality: 3 Clarity: 3 Questions for Authors: 1) The authors stress that training a GP is fundamentally more expensive than training the random network. However, I fail to see this: if we do not compute a full kernel matrix of N by N (N being the number of data points), but just randomly choose M center points and then solve a linear system of N by M, the complexity is exactly the same as in the random network case. The only difference is that instead of M neurons we have M kernel functions (k(x_i, *)), essentially forming a "radial basis neural network". This has been discussed at length in the GP literature, and many methods for this exist (e.g. Nystroem kernel approximations, inducing points methods, etc.). Why do the authors only compare to the full kernel case, which obviously is subpar in performance? 2) Why do the authors compare to clearly sub-optimal numerical solvers in the PDE setting? I may have missed the reasoning why even the simplest PDE settings take so much time to solve. 3) Is Parareal the only parallel-in-time solver available? The paper focuses mostly on different Parareal settings and approaches, but not on different parallel-in-time solution methods. 4) How does the high ambient-space dimension d of the state U affect the nearest neighbor search? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors are missing a discussion that "nearest neighbor" algorithms do not work well in high-dimensional settings (as given here, with $d \gg 1000$ in many cases). They also do not discuss the suboptimal scaling of the linear solution for the random neural networks (caused by the linear solve necessary for the outer weights). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for the critical assessment of our work. We identified these main criticisms: (a) lack of a computational complexity study; (b) comparison to a GP with a full kernel and lack of alternatives to the GP solution; \(c\) use of a suboptimal fine solver _F_. We addressed (a) by conducting a detailed complexity study. We believe that (b)-\(c\) are based on a misconception and provide our explanation together with answers to questions (Q), weaknesses (W), and limitations (L). We hope it is evident that the points raised do not warrant a rejection of our work. **W1: Lack of complexity analysis**
- The complexity of RandNet-Parareal is reported in Tbl. A [pdf] and is linear in $d$, while nnGParareal scales as $d^2$.
- The theoretical computational costs as a function of $d$ and $N$ are plotted in Figs. A-B [pdf], confirming the superior empirical scalability of RandNet-Parareal shown in Fig. 1 [paper].
- The speed-up of each approach can be immediately obtained as the ratio of the sequential runtime of _F_ to the parallel runtime, as in [1]. **Q1: The proposed approach is only compared to a GP based on a full-rank matrix** This statement is not correct. The competing approach does not use the GP with the full $N\times N$ kernel matrix ($N$ is the sample size), but only its reduced $m \times m$ version with $m \ll N$ nearest neighbors (nnGP [1]). This is the only existing PinT scheme tackling the full GP kernel issue. **Q2 & W2: Choice of suboptimal numerical solvers and high running times** Although valuable for sequential solvers, these comments are not pertinent to PinT schemes. We will add the following comment to the camera-ready version to clarify the customary empirical design of PinT schemes.
- *Goal of PinT schemes:* There is some confusion regarding this point. The essence of any Parareal method is to obtain, in a PinT manner, an $\epsilon$-close solution to that produced sequentially by a generic _F_.
Each Parareal variant is agnostic to the choice of _F_, and "canonical" off-the-shelf numerical solvers are standard in the PinT literature (here, Runge-Kutta, as in [1]). Without loss of generality, a more appropriate (more accurate/faster) _F_ can be used. This does not conflict with our framework: RandNet-Parareal allows the seamless replacement of _F_ and inherits its properties.
- *Contribution and choice of slow _F_*: As we propose a new PinT scheme, we compare it with existing PinT approaches rather than ad-hoc sequential solvers, focusing on speed-up rather than the absolute performance of _F_. Burgers' PDE is particularly slow because we used the same high-accuracy setup as [1].
- *Additional challenging examples*: Despite the short rebuttal time, we ran 2 more challenging examples (the 2D and 3D Brusselator PDEs), with results in Fig. C [pdf]. Should we add them to the Suppl. Material? **W3: Choice of RandNet instead of better numerics for the GP/alternative solutions** A recent work [1] replaced the numerically suboptimal GP from [3] with nearest neighbor GPs. While improving the GP numerics, it still suffers from quadratic complexity in $d$ (see Q1) and is sensitive to hyperparameter tuning (Tbl. A [pdf]). We propose abandoning the GP framework and using RandNets. Alternative choices, e.g. linear/polynomial functions, do not offer the desirable learning quality:
- *Universal approximation (UA) properties:* Linear functions do not form a UA class, and so do not guarantee high-accuracy learning. Polynomials are UA, but (1) their order depends on the problem; (2) they are known to yield ill-conditioning in the regression setting. Instead, RandNets (as neural network functions) are UA [4] and do not suffer from (1)-(2).
- *Overcoming the curse of dimensionality (CoD):* Under specific assumptions, one can determine whether RandNets suffer from the CoD [4] (see also Q3 of Ref nFhC). This opens multiple avenues for provable speed and uncertainty quantification improvements with RandNet-Parareal.
Despite the limited rebuttal time, we tested $p$-order polynomials on the Diffusion-Reaction PDE, finding that they lack sufficient learning accuracy ($p=1$ is the linear case). The table displays the number of iterations to convergence $K$ and the parallel speed-up (in parentheses). For all polynomial models $K=N=64$ (so they converge serially), while for RandNet-Parareal $K=12$, with a speed-up of 5.36 times (Tbl. 6 [paper]).

|PDE|$p=1$|$p=2$|$p=3$|$p=5$|$p=7$|
|-|-|-|-|-|-|
|$d=722, N=64$|64 (0.98)|64 (0.95)|64 (0.92)|64 (0.88)|64 (0.83)|

**W4 & Q3: Is Parareal the state of the art among PinT schemes?** Yes. Due to space constraints, we kindly refer you to our detailed answer to Q1 of Ref FREq, who posed a similar question. We will add the needed background to the revised introduction. **W5: nns (nearest neighbors) vs NNs (neural networks)** This may indeed be confusing. In the literature, nearest neighbor GPs are typically denoted as nnGPs, while neural networks are denoted as NNs. We welcome alternative suggestions. **W6: 24 core processors, 48 cores** In the updated version, we will change this to read "24 core processors, _yielding a total of_ 48 cores." **L1 & Q4: Issues of nearest neighbor algorithms in high dimensions**
- *Suboptimal scaling of the linear solution for RandNets:* As shown in Tbl. A [pdf], it contributes as $M^3$ ($M$ is the number of neurons) and has a limited effect on performance for sensible values of $M$ in our (O/P)DE examples, spatial dimensions, and degrees of freedom.
- *Complexity in high dimensions:* A naive linear search for the nearest neighbor is linear in $N$ and $d$, i.e. $O(Nd)$, which we took into account in the complexity analysis in WC1. Approximate nearest neighbors would allow for further improvements [2].
- *Performance in high dimensions:* Fig. D [pdf] shows that the nearest neighbors (closest points in L2 distance) of target points are recovered for 2 high-dimensional systems with $d>1e^4$, supporting the validity of our approach.
This nn structure is a feature of Parareal -- the dataset is _not made of random_ observations but of initial conditions converging in $k$. --- Rebuttal Comment 1.1: Comment: I highly appreciate the additional results, both experimental and theoretical. I very much like the general idea of the paper but was unsure about the soundness in these two aspects, both of which were addressed. I will raise my score to 7. * "Should we add [the new experiments] to Suppl. Material?" Yes, please. --- Reply to Comment 1.1.1: Title: Follow-up to Reviewer Eu5i's comment Comment: We are glad you appreciated our additional theoretical and numerical results. We thank you again for your constructive criticisms; we believe the revised paper will greatly benefit from them. We are also extremely grateful that you have raised your score.
Summary: The paper introduces their method RandNet-Parareal, which is a method to solve differential equations. Their method can be categorized as a Parallel-in-time technique that aims at parallelizing solvers in the temporal domain. They extend the Parareal algorithm by using RandNets which learn the difference between a coarse and a fine solver. They show experimentally a speed-up of their method compared to previous methods and provide theoretical guarantees for their method. Strengths: The paper evaluates their method on three different PDEs. The authors show that their method yields a significant speed-up. The authors provide theoretical guarantees of their method. The authors claim that the algorithm can be used as a convenient out-of-the-box algorithm. Weaknesses: Paper Structure: The authors provide experimental details within the introduction (lines 85-90) that should belong to the experimental section. The paper is not structured as most papers in the conference. I think Section 2 should be put into a Section called “Problem Description”. Section 3 should be included in Section 2 or into a “Related Work” section. The paper misses a related work section. To me, it is questionable if this paper is of interest to the audience of the conference. The paper replaces a subpart of a numerical algorithm with a two-layer Neural Network where only the last layer is trained. Therefore, while the contribution is sound the impact of the work to other fields of machine learning is very limited. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Can you put your paper into a broader perspective? This paper only compares methods that are based on the Parareal algorithm. Are there also alternatives and how do they compare to your method? 2. What is the “Fine” algorithm in Table 1 and in Fig. 1? I could not find its definition. 3. How can the Runtime in Figure 1 be negative? Is log(Runtime) shown in this plot? 4. 
Is there a way to measure the solution quality of the solver? Is the solution quality fully defined by the converged initial condition $\epsilon$ and therefore the same for all methods? If the solution quality can be quantified, how does the solution quality of your method compare to other methods? 5. Are the subscripts $k$ and $i$ dropped in Equation 6? If so this is a bit confusing since these are used in the previous Section and then dropped without a notice. Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors addressed some limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate that you recognize how the proposed method yields a significant speedup compared to competing ones. We believe we have successfully addressed all your questions (Q) and weakness comments (W). We hope that you will reconsider your score based on the new information provided. **Q1: Lack of background on existing parallel-in-time (PinT) schemes** **A**: This is a very important point, which we will address by adding the relevant background to the revised introduction. There are three general approaches to PinT computation: parallel-across-the-problem, parallel-across-the-step, and parallel-across-the-method. [8,11] provide another classification: multiple shooting, methods based on waveform relaxation and domain decomposition, multigrid approaches, and direct time-parallel methods. Parallel-across-the-step methods, in which solutions at multiple time-grid points are computed simultaneously, include Parareal (an approximation of the derivative in the shooting method), the Parallel Full Approximation Scheme in Space and Time (PFASST) (a multigrid method) [5,10], and Multigrid Reduction in Time (MGRIT) [6,7] (see [9] for details). Among them, Parareal has received the most attention in the literature, with extensive theoretical analyses, improved versions, and empirical applications [8,11]. Only limited theoretical results are available for MGRIT and PFASST, with a few extensions and empirical applications. Interestingly, combined analyses have shown equivalences between Parareal and MGRIT, and connections between MGRIT and PFASST. It is well acknowledged that comparing PinT methods based on different working principles is extremely hard; [11] is a recent survey article which nonetheless does not offer a systematic comparison either. Quoting [11], "caution should be taken when directly comparing speedup numbers across methods and implementations.
In particular, some of the speedup and efficiency numbers are only theoretical in nature, and many of the parallel time methods do not address the storage or communication overhead of the parallel time integrator". [9] is one of very few recent attempts to systematically compare different PinT classes. However, it is limited exclusively to the Dahlquist problem. Thus, it has become conventional to compare new techniques to the existing state-of-the-art methods within the same group of solvers. This is what we do in our paper, comparing RandNet-Parareal with the original Parareal and the recently improved version nnGParareal [1]. A broader comparison with alternative PinT approaches would be insightful, but it is beyond the scope of this work. **Q2: What is the “Fine” algorithm in Table 1 and in Fig. 1?** **A**: The fine solver $\mathcal{F}$ is the accurate solver defined on Page 1, line 28 of our submission. $\mathcal{F}$ is typically chosen to be a higher-accuracy method (e.g. Runge-Kutta 8) compared to the coarse solver (e.g. Runge-Kutta 2). **Q3: How can the Runtime in Figure 1 be negative? Is log(Runtime) shown in this plot?** **A**: Yes, it is the $\log_{10}$ runtime, as specified on the y-axis of Fig. 1 [paper]. **Q4: Is there a way to measure the solution quality of the solver? How does the solution quality of the method compare to other methods?** **A**: The fine solver $\mathcal{F}$ is chosen by the user, and it determines the accuracy of the solution. All Parareal-type PinT schemes target this solution, with closeness controlled by $\epsilon$, i.e., all converged solutions will be $\epsilon$-close to that of $\mathcal{F}$. In the table below, we report the maximum absolute error committed by RandNet-Parareal, Parareal, and nnGParareal with respect to the fine solver (run sequentially), averaged over intervals, together with the runtime in parentheses. 
|PDE|RandNet-Parareal|Parareal|nnGParareal|
| - | - | - | - |
|Burgers' $d=128$|$1.06e^{-8}$ (1h 2m)|$1.85e^{-8}$ (8h 54m)|$1.32e^{-7}$ (1h 39m)|
|Diffusion-Reaction $d=7.2e^2$|$3.56e^{-8}$ (23m)|$1.83e^{-8}$ (1h 40m)|$5.71e^{-7}$ (1h 11m)|
|Diffusion-Reaction $d=3.3e^3$|$8.56e^{-10}$ (33m)|$2.45e^{-8}$ (7h 52m)|not converged|
|Diffusion-Reaction $d=2.5e^4$|$8.09e^{-11}$ (1h 57m)|$7.43e^{-9}$ (9h 50m)|not converged|
|SWE $d=3.1e^4$|$6.75e^{-8}$ (4h 9m)|$5.15e^{-8}$ (15h 43m)|not converged|
|SWE $d=6.1e^4$|$8.54e^{-9}$ (12h 34m)|$2.84e^{-8}$ (19h 30m)|not converged|

**Q5: Are the subscripts $k$ and $i$ dropped in Eq. 6?** **A**: Yes, they are indeed dropped, since we first present our method in a generic setting and then apply it to the specific inputs, reintroducing the indices on lines 237-238. **W1: Paper structure** We did not find any specific guidelines on paper structure, and we have seen numerous papers with a structure similar to ours. Note that what you call "Related work" is in fact Section 3, where we introduce two recent works improving on Parareal. We will nevertheless take your comments on board when preparing the camera-ready version. **W2: Lack of interest for the audience of the conference** We disagree, as this is not the first contribution on PinT methods to appear at NeurIPS. You write that *"while the contribution is sound, the impact of the work to other fields of machine learning is very limited"*. However, this is not a NeurIPS requirement, as long as the paper matches the subjects and topicality of the Primary Area. Nevertheless, we believe that our new randomized PinT numerical solver has far-reaching implications for scientific computing and other fields where efficient numerical (O/P)DE solvers are needed. We kindly refer you to our detailed answer to W1 of Ref awZW. 
There, we discuss the importance of PinT schemes for scientific machine learning (ML)/computing and science in general and the use of scientific PinT computing in other fields of machine learning (e.g. diffusion models, neural ODEs, optimization, normalizing flows, to name a few). --- Rebuttal Comment 1.1: Title: Answer to Rebuttal Comment: **AQ1:** I appreciate the authors' clarification on the background and encourage the authors to add a corresponding section to the paper. **AQ2:** I think this should be defined more clearly in the experiment section. **AQ3:** The y-axis label is incorrect, as it is denoted in hours but should be log10(hours). It would be better to scale the y-axis logarithmically while keeping the y-axis ticks in hours, as this is easier to interpret than log10(hours). **AQ4:** The authors should not drop indices without mentioning it, and they should add a footnote for clarification. **AW2:** While this may not be a NeurIPS requirement, the reviewers should still consider the potential impact of the method in AI when assigning ratings. The reviewers noted the method's potential applicability to ODE solvers used in machine learning, such as in diffusion models. The paper would benefit from a section discussing this potential application in more detail. **Further Question:** Are PinT schemes used to solve the reverse ODE in Diffusion Models? If not, which ODE solvers are used? What would be necessary to apply the authors' method in this setting? Likely it would need to be implemented on a GPU? Demonstrating the benefit of the authors' method in the context of diffusion models would significantly increase the relevance of this paper to the NeurIPS community. --- Reply to Comment 1.1.1: Title: Follow-up answer to Reviewer FREq's comments Comment: **AQ1-3**: Thank you for your feedback. We will carefully take it into account when preparing the final version. We welcome any comment on our response to **Q4** on the solution quality of the solver. 
**AQ5: Dropping indices without prior mention.** Please note that we specify that on Line 195 in [paper], which reads "Prior to that [i.e., defining RandNet for Parareal], we define how RandNets work in a general setting with input $\boldsymbol{U}$". Indices are introduced later, upon combining RandNets with Parareal, in line 237. Should you find this still unclear to the reader, we could change it to "Prior to that, we define how RandNets work in a general setting with input $\boldsymbol{U}$, *before going back to the input of interest $\boldsymbol{U}_i^k$ within the Parareal framework*". **AW2 and Further Q: On the potential impact of RandNet-Parareal on ML/AI, and, specifically, on Diffusion Models (DMs).** Thank you for engaging in further conversation about this matter. Properly communicating the relevance and impact of our method is very important to us. Our work exemplifies an instance in which ML methods enable advances in various fields of science and engineering where solving (O/P)DEs is needed; see also our response to W2 above, and W1 of Ref awZW. We mentioned these applications as our submission belongs to the Primary Area of *ML for Physical Sciences*. However, AI is also one of the fields that could greatly benefit from progress on efficient ODE solvers, and thus from our method. This is why we gladly provide examples of how our approach can also assist ML and AI, and will add these points to the camera-ready version of the paper. ODEs are a crucial building block of some relevant techniques, such as *DMs*, *Neural ODEs*, *Optimal control and reinforcement learning*, and *Optimization for ML models*, to name a few. - _Solvers for DMs_: In the context of DMs, mentioned in your answer, the continuous-time reversal process can be described by a probability flow ODE, defined by the score function, usually approximated with deep (convolutional) NNs (note that the ODE formulation offers significant advantages over SDEs in high dimensions). 
Faster ODE solvers can improve sampling speed, yielding faster image synthesis. The main existing directions in the literature involve using classical sequential solvers [30], developing faster and dedicated ones such as DDIM [24], DDPM [29], DPM-Solver [25], Heun [26], and parallelization of the autoregressive sampling process of DMs (by using Picard-Lindelöf iterations for ODE/SDE) [27,28]. The latter emerged mainly from the need to avoid the bottlenecks of sequential solvers. To the best of our knowledge, modulo this particular parallelized sampling, PinT schemes have not yet been used in this context. **This is the gap this paper could fill, as the existing successful sequential dedicated solvers could be embedded into our proposed RandNet-Parareal**. We expect this to be straightforward, since the goal is to collect samples by solving the corresponding (diffusion) ODEs on $[0,T]$, where RandNet-Parareal immediately finds its purpose. - _GPU Implementation_: There are two increasingly sophisticated approaches for implementing RandNet-Parareal on GPUs: (i) parallel implementation of the solvers (model parallelism); (ii) parallel implementation of Parareal (in-time parallelism). For (i), both the solver and the RandNet computation can be carried out on the GPU, see e.g. [31]. In-time parallelism would then be implemented across GPUs. Otherwise, under (ii), both model and time parallelism can be implemented concurrently on the same hardware [32]. The latter guarantees a better use of resources at the cost of increased implementation complexity. We hope these clarifications provide you with evidence of the expected impact/implications of our method in the context of generative AI models and beyond. **References used above:** [24] J. Song, C. Meng, S. Ermon. Denoising diffusion implicit models. 2020. [25] C. Lu, Y. Zhou, F. Bao, J. Chen, C. Li, J. Zhu. DPM-solver: A fast ODE solver for diffusion probabilistic model sampling in around 10 steps. 
2022. [26] T. Karras, M. Aittala, T. Aila, S. Laine. Elucidating the design space of diffusion-based generative models. 2022. [27] A. Shih, S. Belkhale, S. Ermon, D. Sadigh, N. Anari. Parallel sampling of diffusion models. 2023. [28] Z. Tang, J. Tang, H. Luo, F. Wang, T.-H. Chang. Accelerating parallel sampling of diffusion models. 2024. [29] J. Ho, A. Jain, P. Abbeel. Denoising diffusion probabilistic models. 2020. [30] Y. Song, J. Sohl-Dickstein, D.P. Kingma, A. Kumar, S. Ermon, B. Poole. Score-based generative modeling through stochastic differential equations. 2021. [31] A.N.Budko, M. Möller, C. W. J. Lemmens. Time integration parallel in time. 2020. [32] A.Q. Ibrahim, S. Götschel, D. Ruprecht. Parareal with a physics-informed neural network as coarse propagator. --- Rebuttal 2: Title: Follow-up answer to Reviewer FREq's comments and raised score Comment: We are glad that our rebuttal comments were helpful in addressing your concerns. We are also grateful for reflecting this in your score, and for your further suggestions. We notice that there are still some lingering misconceptions about our work, which we believe are important to point out. Our work does not "present an improvement to a numerical ODE solver". Instead, it improves existing _Parallel-in-Time_ (PinT) methods for solving ODEs and PDEs, as written in the paper abstract, summarized by the other reviewers, and discussed at length during the rebuttal. 
We do, ultimately, solve an (O/P)DE with a numerical scheme, but our proposed scheme (1) solves DEs by parallelizing in time, thus belonging to a different class than that of the "simple" numerical ODE solvers; (2) models the correction term using particularly suited **random neural networks**, allowing us to achieve significant speed-up compared to existing PinT techniques; (3) can incorporate existing (O/P)DE solvers in its architecture; (4) is not meant to improve a specific ODE solver, but instead to make PinT techniques more effective and, hence, more appealing to practitioners in the field. Moreover, our new approach (even if considered to be an improvement to an ODE solver, an oversimplification with which we disagree) is far from "solely relevant to obtaining training data for PDE machine learning methods". Numerical methods for solving DEs have profoundly impacted scientific progress since the last century, extending far beyond the generation of training data. In fact, once again, **our method can incorporate existing PDE machine learning solvers, rather than merely generating training data for them.** Independently of that, landmark achievements such as weather forecasting, space exploration, and advancements in nuclear energy are just a few examples of the broad and transformative influence of numerical DE solvers. Commenting on the relevance of designing efficient PinT solvers beyond AI, the following non-exhaustive list contains some other areas of natural sciences, social sciences, and engineering where the availability of efficient (O/P)DE solvers could be a game-changer leading to extremely important advances: - *Atmospheric Dynamics:* Accurate solutions can mitigate climate change, enhance disaster preparedness, and optimize agriculture. - *Biomedical Processes:* Real-time simulations of biological processes, such as tumor growth and blood flow, can aid in personalized treatment planning and targeted interventions. 
- *Earthquake Engineering:* Refined modeling of seismic wave propagation and structural responses enhances earthquake predictions and leads to more resilient building designs. - *Financial Modeling:* Improved pricing accuracy and real-time risk assessment can stabilize financial markets and reduce systemic risk. We believe our work is another step toward providing faster numerical solutions for DEs in any field where this may be required, whether in the natural sciences, engineering, machine learning, or artificial intelligence. We thank you for raising the score and hope that our further comments help convey the main scope of our contributions.
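The fine/coarse predictor-corrector structure referenced throughout this thread (Q2, Q4, and points (1)-(3) above) can be illustrated with a minimal sequential sketch of classical Parareal. All choices here (RK4 as the fine solver, explicit Euler as the coarse one, step counts, tolerance) are illustrative assumptions, not the paper's configuration, and a real implementation runs the fine solves in parallel:

```python
import numpy as np

def rk4_step(f, u, t, dt):
    # one classical fourth-order Runge-Kutta step
    k1 = f(t, u)
    k2 = f(t + dt / 2, u + dt / 2 * k1)
    k3 = f(t + dt / 2, u + dt / 2 * k2)
    k4 = f(t + dt, u + dt * k3)
    return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def euler_step(f, u, t, dt):
    # one explicit Euler step (cheap, low accuracy)
    return u + dt * f(t, u)

def run(step, f, u, t0, t1, n_steps):
    dt = (t1 - t0) / n_steps
    t = t0
    for _ in range(n_steps):
        u = step(f, u, t, dt)
        t += dt
    return u

def parareal(f, u0, T, N, K, n_fine=100, n_coarse=1, tol=1e-8):
    ts = np.linspace(0.0, T, N + 1)
    F = lambda u, a, b: run(rk4_step, f, u, a, b, n_fine)      # accurate fine solver
    G = lambda u, a, b: run(euler_step, f, u, a, b, n_coarse)  # cheap coarse solver
    U = [u0]
    for i in range(N):                       # initial guess: sequential coarse sweep
        U.append(G(U[i], ts[i], ts[i + 1]))
    for k in range(K):
        # the N fine solves are independent and run in parallel in practice
        Fu = [F(U[i], ts[i], ts[i + 1]) for i in range(N)]
        Gu = [G(U[i], ts[i], ts[i + 1]) for i in range(N)]
        U_new = [u0]
        for i in range(N):
            # predictor-corrector update; per the rebuttal, RandNet-Parareal
            # models the F - G correction with a random neural network
            U_new.append(G(U_new[i], ts[i], ts[i + 1]) + Fu[i] - Gu[i])
        if max(abs(a - b) for a, b in zip(U_new, U)) < tol:
            return U_new, k + 1
        U = U_new
    return U, K

# example: dU/dt = -U on [0, 1] with N = 10 time slices
U, iters = parareal(lambda t, u: -u, u0=1.0, T=1.0, N=10, K=10)
```

At convergence the returned trajectory matches the sequential fine solve up to the tolerance, which is the sense in which all Parareal-type schemes target the fine solution, as explained in the Q4 answer above.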
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their constructive feedback and suggestions. We are happy to read that the paper "is well-written and well-organized" (awZW), "very well-written" (Eu5i), "contains clear descriptions and robustness study" (Eu5i) and that the proposed method is a "new approach" (Eu5i) with "provided theoretical guarantees" (FREq) which "yields a significant speed-up" compared to previous methods (FREq) and "helps to mitigate issues with earlier versions of algorithms" (Eu5i). We are particularly encouraged by the observation that "considered PDEs are ... complicated and exhibit the efficacy of the proposed approach" (nFhC). We are glad that all reviewers agree on our contribution being the first one that embeds NNs within parallel-in-time (PinT) schemes. We hope this work will prompt further research in this area, ideally combining the recent advancements in neural PDEs solvers with PinT schemes. We are grateful to the reviewers for suggesting several improvements, which will notably enhance the revised version of the paper. Two common objections are (i) the lack of background on alternative parallel-in-time (PinT) schemes and (ii) a discussion on the computational complexity of the proposed approach (with respect to both competing methods and accuracy). We addressed point (i) in the rebuttal comments, while for (ii) we derived the theoretical complexity of our method and compared it to the recently introduced nnGParareal (see Table A in pdf). Additionally, during the rebuttal period, following suggestions of Eu5i, we tested our method on two new challenging examples, the 2D and 3D Brusselator PDEs, which are known to exhibit complex behavior, including oscillations, spatial patterns, and chaos. We also compare RandNets with polynomial functions, showing the latter's suboptimal performance. The attached pdf (cited as [pdf]) includes additional figures and the complexity table. 
In the camera-ready version, we will place our contribution within existing PinT schemes, add a discussion on computational complexity, and include results obtained during the rebuttal period in the Supplementary Material if the reviewers deem it advisable. We believe that our new randomized PinT numerical solver has far-reaching implications for the scientific computing field specifically and for science/engineering more broadly, where time- and space-efficient solving of (O/P)DEs remains one of the most important research directions. By combining randomized neural networks with existing PinT methodology, we provide a framework that couples theoretical and robustness guarantees with a drastic reduction of computational requirements, as demonstrated through extensive simulations. Finally, our method can offer speedups for numerous instances where solving ODEs in the machine learning realm is needed (for example, in the context of diffusion models, neural ODEs, and many others). We are deeply grateful for the helpful feedback received from the reviewers, which will make the final version of our manuscript (especially its introduction) more comprehensive and refined. **References used in the rebuttals:** [1] G. Gattiglio, L. Grigoryeva, M. Tamborrino. Nearest neighbors GParareal: Improving scalability of Gaussian processes for parallel-in-time solvers. 2024. [2] P. Indyk, R. Motwani. Approximate nearest neighbors: towards removing the curse of dimensionality. 1998. [3] K. Pentland, M. Tamborrino, T. Sullivan, J. Buchanan, L. Appel. GParareal: a time-parallel ODE solver using Gaussian process emulation. 2023. [4] L. Gonon, L. Grigoryeva, J.-P. Ortega. Approximation bounds for random neural networks and reservoir systems. 2023. [5] M. Emmett, M. Minion. Toward an efficient parallel in time method for partial differential equations. 2012. [6] R. Falgout, S. Friedhoff, T. Kolev, S. MacLachlan, J. Schroder. Parallel time integration with multigrid. 2014. [7] S. Friedhoff, R. 
Falgout, T. Kolev, S. MacLachlan, J. Schroder. A multigrid-in-time algorithm for solving evolution equations in parallel. 2012. [8] M. Gander. 50 years of time parallel time integration. 2015. [9] M. Gander, T. Lunet, D. Ruprecht, R. Speck. A unified analysis framework for iterative parallel-in-time algorithms. 2023. [10] M. Minion. A hybrid parareal spectral deferred corrections method. 2010. [11] B. Ong, J. Schroder. Applications of time parallelization. 2020. [12] A. Neufeld, P. Schmocker. Universal approximation property of random neural networks. 2023. [13] D. Schröder, H. Cui, D. Dmitriev, B. Loureiro. Deterministic equivalent and error universality of deep random features learning. 2023. [14] D. Bosch, A. Panahi, B. Hassibi. Precise asymptotic analysis of deep random feature models. 2023. [15] S. Mei, A. Montanari. The generalization error of random features regression: Precise asymptotics and the double descent curve. 2022. [16] H. Hu, Y. Lu. Universality laws for high-dimensional learning with random features. 2022. [17] F. Gerace, B. Loureiro, F. Krzakala, M. Mezard, L. Zdeborova. Generalisation error in learning with random features and the hidden manifold model. 2020. [18] J. Lee, Y. Bahri, R. Novak, S. Schoenholz, J. Pennington, J. Sohl-Dickstein. Deep neural networks as Gaussian processes. 2018. [19] A. De G. Matthews, J. Hron, M. Rowland, R. Turner, Z. Ghahramani. Gaussian process behaviour in wide deep neural networks. 2018. [20] F. Hamon, M. Schreiber, M. Minion. Parallel-in-time multi-level integration of the shallow-water equations on the rotating sphere. 2020. [21] D. Samaddar, D. Coster, X. Bonnin, L. Berry, W. Elwasif, D. Batchelor. Application of the parareal algorithm to simulations of ELMs in ITER plasma. 2019. [22] R. Chen, Y. Rubanova, J. Bettencourt, D. Duvenaud. Neural ordinary differential equations. 2018. [23] G. Papamakarios, E. Nalisnick, D. Rezende, S. Mohamed, B. Lakshminarayanan. 
Normalizing flows for probabilistic modeling and inference. 2021. Pdf: /pdf/6b8b7bc5c2ec0bd332cc3a6b1069b262a3c75735.pdf
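For readers unfamiliar with the RandNets referenced in the rebuttals (see [4,12]): they are random-feature networks, in which the hidden weights are sampled once and frozen, and only the linear readout is fitted by regularized least squares. A minimal sketch, in which the tanh activation, width, sampling scales, and ridge parameter are illustrative assumptions rather than the paper's settings:

```python
import numpy as np

def fit_randnet(X, Y, width=200, scale=1.0, reg=1e-6, seed=0):
    """Random-feature network: frozen random hidden layer + ridge readout."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(0.0, scale, (d, width))     # sampled once, never trained
    b = rng.uniform(-1.0, 1.0, width)
    H = np.tanh(X @ W + b)                     # fixed random features
    # only the linear readout beta is trained, via regularized least squares
    beta = np.linalg.solve(H.T @ H + reg * np.eye(width), H.T @ Y)
    return lambda Xnew: np.tanh(Xnew @ W + b) @ beta

# fit a smooth 1-d map, a stand-in for the F - G correction term in Parareal
X = np.linspace(-1.0, 1.0, 100)[:, None]
Y = np.sin(3.0 * X)
model = fit_randnet(X, Y)
```

Because training reduces to a single linear solve, refitting the correction model at each Parareal iteration is cheap, which is the source of the speed-up over Gaussian-process-based alternatives claimed in the rebuttals.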
NeurIPS_2024_submissions_huggingface
2024
Pearls from Pebbles: Improved Confidence Functions for Auto-labeling
Accept (poster)
Summary: This paper introduces a threshold-based auto-labeling (TBAL) method called Colander to maximize TBAL performance by finding the optimal labeling confidence function and thresholds. In order to find the optimal confidence function, Colander treats the auto-labeling objective as an optimization problem that maximizes the coverage under a label error constraint. They use a neural network as the confidence function, and design a surrogate for the optimization problem which can then be optimized using gradient-based methods. Strengths: S1: The paper turns auto-labeling into an optimization problem, and proposes a disciplined solution that can be solved by gradient methods. S2: The proposed optimization surrogate may be adopted by other auto-labeling methods. S3: The paper presents extensive experiments to compare with existing methods, and the results are promising. Weaknesses: W1: The paper is overall well-written, but the discussion of the thresholds is confusing or missing details. Specifically: - In line 89, it says "the vector _**t**_ denotes scores over _k_ classes", and I suppose score means the predicted scores (or probabilities) of the _k_ classes, but later _**t**_ is defined as a vector of thresholds. - In line 144 (P1), it is unclear what _T^k_ stands for, and how the set of thresholds, _T_, is determined. - In Algorithm 1, Colander produces the estimated confidence function and thresholds hat _ti'_ (line 14), but the threshold is not used, and it relies on Algorithm 2 to estimate the threshold for each class. There is no discussion of the difference between these two thresholds. W2: There is no discussion of the computation overhead of Colander. Technical Quality: 3 Clarity: 3 Questions for Authors: Q1: How do you determine the threshold set of _T_? Is it a grid of _k_ classes * _n_ steps in [0,1]? Q2: Why not use the thresholds produced by Colander (line 14 of Algorithm 1)? 
Q3: My understanding is that the confidence function is parameterized by a neural network and searched using gradient descent, while the thresholds are searched by iterating the training process over the threshold set T. But can we also make the thresholds a neural network (meaning that thresholds would be sample-specific)? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The computation cost of Colander increases linearly with the size of the set of confidence thresholds _T_, but it seems the process requires finer granularity in _T_ in order to get a better estimate of the optimal confidence function and thresholds. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
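The constrained objective this review summarizes (maximize coverage subject to an auto-labeling error constraint, relaxed into a penalized surrogate optimized with gradients) can be sketched on synthetic validation data. The sigmoid relaxations, penalty weight, linear confidence model, and finite-difference optimizer below are illustrative assumptions, not Colander's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical validation set: a 1-d feature per point (e.g. a classifier
# margin) and whether the model's predicted label was correct (1) or not (0)
n = 400
margin = rng.normal(0.0, 1.0, n)
correct = (margin + rng.normal(0.0, 0.5, n) > -0.5).astype(float)

def surrogate_loss(params, eps=0.05, lam=10.0, tau=0.1):
    """Penalty-form relaxation of: maximize coverage s.t. error <= eps."""
    w, b, t = params
    conf = 1.0 / (1.0 + np.exp(-(w * margin + b)))    # learned confidence g(x)
    sel = 1.0 / (1.0 + np.exp(-(conf - t) / tau))     # soft version of 1{g(x) >= t}
    coverage = sel.mean()
    err = (sel * (1.0 - correct)).sum() / (sel.sum() + 1e-9)
    return -coverage + lam * max(0.0, err - eps)

# crude finite-difference gradient descent; a real implementation uses autodiff
params = np.array([1.0, 0.0, 0.5])
for _ in range(200):
    grad = np.zeros(3)
    for j in range(3):
        d = np.zeros(3)
        d[j] = 1e-4
        grad[j] = (surrogate_loss(params + d) - surrogate_loss(params - d)) / 2e-4
    params -= 0.05 * grad

w, b, t = params
conf = 1.0 / (1.0 + np.exp(-(w * margin + b)))
coverage = (conf >= t).mean()                         # hard coverage at the end
```

The soft selection mask is what makes the coverage and error terms differentiable, which is the core idea behind optimizing a confidence function with gradient methods rather than grid search alone.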
Rebuttal 1: Rebuttal: We appreciate your feedback and acknowledgment of the paper's strengths. Our response: **Clarification on thresholds** The vector $\mathbf{t}$ denotes the thresholds over $k$ classes. $T^{k}$ stands for the space of threshold vectors $\mathbf{t}$. Because we solve a relaxed version of the optimization, using $\hat{\mathbf{t}}'_i$ does not guarantee auto-labeling error below $\epsilon_a$. We estimate the thresholds again in line 16 to ensure the auto-labeling error constraint is strictly satisfied by the estimated thresholds. We have updated the draft to further clarify this point. **Discussion on computation overhead** The wall-clock time of our method is similar to that of other post-hoc methods. For instance, a single run in the CIFAR-10 setting on an NVIDIA RTX A6000 takes roughly 1.5 hours with post-hoc methods and roughly 1 hour without them. We have included a discussion of the computation overhead in the paper. **Answers** **A1.** Yes, our current implementation uses a grid in $[0,1]$ as the set $T$. We emphasize that this is not a necessity. We chose it to favor a simple implementation. It boils down to evaluating a number of thresholds equal to the number of validation points given in the round, so the effective size of $T$ is the number of validation points $N_v$. Moreover, a binary-search-based implementation can reduce the effective size to $\log(N_v)$ for each class. **A2.** See the above discussion on thresholds. **A3.** It is an interesting idea for future work to make thresholds instance-specific. We chose the current thresholding technique as it is backed by theoretical guarantees on auto-labeling error and coverage [1]. **Limitation.** Please see our response A1 above. [1] Vishwakarma et al., Promises and Pitfalls of Threshold-based Auto-labeling, NeurIPS 2023. --- Rebuttal 2: Comment: Thanks for the clarification on the thresholds and the additional experiments! 
My questions and concerns regarding the proposed method have been addressed and I decided to raise the presentation score.
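The threshold search described in A1 above can be sketched concretely: since only thresholds equal to observed validation confidences change the selected set, it suffices to scan the $N_v$ sorted confidences (and a binary-search variant would shrink this further, as the rebuttal notes). This is a hedged sketch of the idea for a single class, not the paper's Algorithm 2:

```python
import numpy as np

def estimate_threshold(conf, correct, eps):
    """Smallest threshold whose selected validation points have empirical
    auto-labeling error <= eps, thereby maximizing coverage.

    Candidate thresholds are the validation confidences themselves, so the
    effective search size is the number of validation points N_v.
    """
    order = np.argsort(-conf)                       # sort by confidence, descending
    conf_s = conf[order]
    # error of the top-m prefix, for every prefix size m = 1..N_v
    err_prefix = np.cumsum(1.0 - correct[order]) / np.arange(1, conf.size + 1)
    feasible = np.where(err_prefix <= eps)[0]
    if feasible.size == 0:
        return np.inf                               # nothing can be auto-labeled
    # largest feasible prefix = lowest admissible threshold = most coverage
    return conf_s[feasible.max()]

conf = np.array([0.9, 0.8, 0.7, 0.6, 0.5])          # toy validation confidences
correct = np.array([1.0, 1.0, 1.0, 0.0, 1.0])       # 1 = prediction was correct
t = estimate_threshold(conf, correct, eps=0.1)      # selects the top-3 points: 0.7
```

Running this per class yields the per-class thresholds; relaxing `eps` to 0.2 in the toy example admits all five points (one mistake out of five), lowering the threshold to 0.5.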
Summary: This paper proposes a novel auto-labeling method, called Colander. In contrast to existing works, Colander models the objective of finding an optimal confidence function as a constrained optimization problem (the confidence function should have maximum coverage whilst obtaining a sufficiently low error rate; both components are controlled with a penalty term). Colander is evaluated and compared to baselines on four datasets covering vision and language tasks: MNIST, CIFAR-10, Tiny-ImageNet and 20 Newsgroups. The obtained results clearly indicate that the method improves upon existing baselines, both in terms of coverage and error rate. Strengths: * The paper proposes a novel approach to identifying confidence functions for auto-labeling. As indicated in the paper, finding confidence functions for auto-labeling is challenging, and this paper tackles this problem elegantly via constrained optimization. * The paper’s results are promising, showing that the introduced method improves upon existing baselines. * Overall, I believe that this paper provides a solid contribution to the research area of auto-labeling. Weaknesses: * The paper focuses heavily on formally introducing Colander, and the experimental content is comparatively thin. To provide the reader with a better notion of robustness, it would for example have been useful to provide additional details on the hyperparameter search in the main paper, and how selecting those differently affects both the coverage and error rate. Likewise, it would have been useful to provide additional details on the impact of the introduced penalty term. * It would also have been interesting to better understand the relationship between chosen model architecture for a dataset and auto-labeling performance. * The paper scarcely discusses limitations in the conclusion section, yet I would have expected a more detailed discussion of where and how this approach is limited. 
* Related to that, the paper does not touch upon future work / research questions that the obtained results create. Technical Quality: 3 Clarity: 3 Questions for Authors: * Did you conduct additional experiments evaluating how different architectures affect auto-labeling performances for different datasets (i.e., how does changing model architectures for each of the four analyzed datasets affect auto-labeling performance)? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: As discussed in the Weaknesses, the paper mentions a limitation in the conclusion section but I'd encourage the authors to further elaborate on how and where their approach is limited, and how such limitations can potentially be addressed in future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful review and for recognizing the key strengths of our paper. Our responses: **Details and effects of hyperparameter selection** Due to space constraints, we deferred a detailed discussion on hyperparameter search to Appendices C.3 to C.7. We plan to incorporate more details in the main paper given the additional space in the camera-ready version. In the rebuttal pdf (Table 5) we provide results with various choices of the penalty term $\lambda$. We observe our method is robust to a wide range of $\lambda$, except around extreme values. **Effect of model architecture choice** In TBAL it is not a priori clear what model the practitioner should use. The overall system is **flexible enough to work with any chosen model class**. Our focus is on evaluating the effect of various training-time and post-hoc methods designed to improve the confidence functions for any given model. To answer the query, **we ran experiments with ResNet18 and ViT models in the CIFAR-10 setting** (see Table 6 in the rebuttal pdf). As we expected, there are variations in the baselines' results due to model choices, but **our method maintains high performance irrespective of the classification model used**. This is due to its ability to learn confidence scores tailored for TBAL. **Limitations and future work** We discussed these in the conclusion section and briefly reiterate them here. Colander, like other post-hoc methods, relies on validation data to learn the confidence function. The use of validation data is in general a requirement in TBAL systems, as noted in [1]. Reducing or eliminating this dependence is an important direction for future work. It is also related to the general problem of model evaluation in machine learning, where solutions based on active testing have emerged [2]. We have updated the draft with additional discussions. [1] Vishwakarma et al., Promises and Pitfalls of Threshold-based Auto-labeling, NeurIPS, 2023. 
[2] Kossen et al., Active Testing: Sample-Efficient Model Evaluation, ICML, 2021. --- Rebuttal Comment 1.1: Title: Thanks for the detailed response! Comment: Thank you for providing such a detailed response and the additional results and insights. I will maintain my score and recommend acceptance as indicated in my initial review.
Summary: This paper discusses threshold-based auto-labeling functions aimed at identifying a large subset of unlabeled instances where the auto-labeling error remains below a specified threshold. The authors observed that standard temperature-scaled softmax scores are inadequate for effectively thresholding labeled instances. To address this, they propose learning a confidence scoring function and a threshold based on a subset of the validation data. This confidence function is a two-layer neural network trained on the logits from the last two layers of the base model. Extensive experiments were conducted on both image and text datasets using various training-time strategies to optimize the base model, demonstrating the efficacy of their approach. Strengths: 1. The paper is well written, with a variety of experiments conducted to demonstrate the applicability of their proposed method. The choices of strategies and hyperparameters are documented well. 2. The performance of the proposed method is quite significant, achieving a lower error rate and higher coverage than the other methods, especially on the harder datasets. Weaknesses: 1. The whole TBAL procedure is a hybrid mixture of iterative training/self-training and active learning. While the authors explained the differences between TBAL and self-training + active learning (ST+AL), the difference seems small (mostly, TBAL aims to identify a low-error auto-labeled dataset and ST+AL aims for a good classifier). These two goals do not seem fundamentally different, and can be easily translated to each other. Given the similarity of the framework, I feel it is still necessary to include some ST/AL works in the experiments. 2. A lesser weakness is that due to iterative training, the proposed method and compared methods all went through rounds of data selection, which makes the comparison indirect. For example, it would be hard to tell the exact improvement in thresholding quality the method brings. 
Technical Quality: 3 Clarity: 4 Questions for Authors: n/a Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: The authors acknowledge that the limitation of their work is the requirement of validation data. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed feedback and for noting the strengths of our paper. Our responses are the following: **On ST/AL works in the experiments** Our response involves the following three points: 1. The focus of our paper is to study confidence functions for existing TBAL techniques [1,2,3]. Our main contribution is a method to learn the **optimal confidence function for TBAL**. Given this focus, we have used various natural and widely used choices of confidence functions as baselines. In particular, we have considered both post-hoc methods and train-time methods used for calibrating confidence functions and thoroughly evaluated our proposed method for learning the confidence function vs. the combination of train-time and post-hoc methods. ST/AL is not a method for learning or calibrating confidence functions, and hence cannot be used as a baseline in our setting. 2. However, given the overlapping components, we addressed the fundamental differences between TBAL and ST/AL in our paper. Our experiments demonstrate that methods like ST/AL, which first aim to learn the best possible classifier from a given function class before performing auto-labeling, **can be severely constrained by the choice of function class**, especially when it does not contain a high-accuracy classifier. In contrast, **for TBAL it is not necessary to learn a classifier with high accuracy**. Indeed, TBAL can iteratively auto-label and cover much of a dataset with classifiers that are moderately accurate. 3. Our aim is **not to establish TBAL as the state-of-the-art method** for data labeling or learning a good model with less data. For this reason, the comparison with ST/AL and other methods is not useful in our work: **what we seek to know is whether TBAL itself can be improved with our new confidence function approach**. An extensive evaluation of TBAL, ST/AL, and other similar techniques would make for interesting future work.
The difference in goals (labeling data with reliability guarantees versus learning the best model in the class with fewer labels) is an important distinction, and it is discussed in detail along with thorough empirical comparisons in [1]. **On indirect comparison due to multiple rounds** We agree that multiple rounds of auto-labeling and data selection make the comparison indirect. However, it is evident that the gains are due to using Colander since we keep all the other components as they are and only change the confidence functions. To make this clearer, we have run experiments in the single-round (passive learning) setting as well, and our results are consistent with the multi-round setting. Table 4 in the pdf shows results in the single-round setting, reinforcing that the performance improvements are due to Colander. The evolution of coverage and error over multiple rounds in the multi-round setting is shown in Figure 1. [1] Promises and Pitfalls of Threshold-based Auto-labeling, Vishwakarma et al., NeurIPS, 2023. [2] MCAL: Minimum Cost Human-Machine Active Labeling, Qiu et al., ICLR, 2023. [3] AWS Sagemaker Ground Truth https://aws.amazon.com/sagemaker/data-labeling/
Summary: This paper addresses the challenges of overconfident scores in threshold-based auto-labeling (TBAL) systems. It critiques existing confidence scoring and calibration methods and introduces Colander, a new post-hoc method tailored to optimize TBAL performance. Strengths: - The paper identifies the problem well, namely the overconfidence of existing TBAL confidence functions, and it proposes a novel framework to find the optimal confidence function. - The paper conducts extensive experiments on both image and text data. - The paper compares various post-hoc functions. - The paper is well written, with great detail in the Appendix. Weaknesses: - The authors rightly mention limitations (“A limitation of Colander is that similar to other post-hoc methods it also requires validation data to learn the confidence function. Reducing (eliminating) this dependence on validation data could be an interesting future work.”) in the conclusion. So, does this mean that they use gold data for validation? If that is the case, that might be a weakness of TBAL in general. The following paper talks about it in detail: https://aclanthology.org/2023.acl-long.796/ Technical Quality: 4 Clarity: 4 Questions for Authors: - Not a question, but the authors can consider the following related paper for comparison. It also works with confidence scoring: https://aclanthology.org/2021.naacl-main.84.pdf Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive and encouraging review. We are delighted with your assessment and recognition of our work’s contributions. Our response: **Dependence on validation data.** Yes, Colander, like other post-hoc methods, relies on validation data to learn the confidence function. In fact, this is in general a requirement in TBAL systems as noted in [4] and your reference [1]. Reducing or eliminating this dependence is a useful direction for future work. It is also related to the general problem of model evaluation in machine learning where solutions based on active testing have emerged [3]. **Question on a related work.** We appreciate the suggestion of the related paper [2]. This work uses softmax scores in a similar pipeline augmented with external knowledge sources. In our experiments, we have a comparison with the softmax scores. On the other hand, we believe using scores from Colander could be helpful in [2]. We have included a discussion on this idea in the draft. [1] https://aclanthology.org/2023.acl-long.796.pdf [2] https://aclanthology.org/2021.naacl-main.84.pdf [3] http://proceedings.mlr.press/v139/kossen21a/kossen21a.pdf [4] https://openreview.net/pdf?id=RUCFAKNDb2 --- Rebuttal Comment 1.1: Title: Thank You! Comment: Thank you for your response. I enjoyed reading your paper and learned a lot. A few things regarding the reviewer YvdP. As I differ a lot in scores, I felt I should comment on a few of them. - The presentation of the paper is really good. I do believe that Figure 1 was useful while I was reading the paper and two of my colleagues also agree with that. - Many comments are unreasonable for example "There is no theoretical analysis and understanding of mathematical properties" Good luck with the other reviewers!
Rebuttal 1: Rebuttal: We thank all of the reviewers for their insightful and positive feedback. We have used their suggestions to improve our draft, added experiments, and improved the clarity of our work. Before providing individual responses, we (1) summarize the strengths highlighted by reviewers, (2) provide a pair of common responses, and (3) describe some new experiments that strengthen our work. ## 1. Strengths **Motivation and novelty of our method (7Dad, Nkxd, oXkX, ffq7, YvdP):** Reviewers appreciated our **novel and well-motivated approach** to addressing the issue of overconfidence in auto-labeling tasks. The reviewers appreciated our mathematically grounded framework to learn the optimal confidence function for TBAL. They also noted our optimization framework could be useful for a variety of auto-labeling methods. **Significant performance improvements over the vanilla TBAL (Nkxd, oXkX, ffq7):** Integrating our method into TBAL provides significant performance improvements over the vanilla version used in the previous works on TBAL [1]. **Thorough empirical evaluation (7Dad, Nkxd, ffq7):** The evaluations covered diverse datasets, including text and image data, showcasing the proposed method's applicability across different domains. We provide comparisons against several train-time and post-hoc methods commonly used to reduce the overconfidence issue. **Clarity (7Dad, Nkxd, oXkX):** Most reviewers found our paper well-written and easy to follow, and they appreciated the illustrations. They also liked our step-by-step discussion for designing the objective function, which leads to a tractable optimization problem. ## 2. Response to common questions * **On contributions and practical significance**. We are motivated by the fact that TBAL is widely used in practice to create labeled datasets [2], including by industry giants like Amazon (in its SageMaker Ground Truth product). 
TBAL has also recently been the subject of theoretical study in exciting works such as [1,3]. Given this interest---both practical and theoretical---our goal is to understand the role of **confidence functions, a key component in TBAL that has not been previously studied**. As our work shows, TBAL systems are heavily affected by the choice of confidence function. This led us to develop a new approach for these that **leads to substantial improvements in TBAL systems**. Given its strengths, we anticipate that our new method will become the standard for TBAL systems. * **On active learning/self-training and TBAL**: The focus of our work is on understanding and improving confidence functions for TBAL. A prior work [1] studies TBAL theoretically and shows the fundamental differences between active learning and TBAL. It also provides extensive experiments illustrating this difference. Since there are some overlaps in these methods, we have a brief discussion and a simulation in our paper to clarify the fundamental differences between them. ## 3. Additional Experiments We provide a brief summary of the additional experiments and results here. The tables and figures are in the attached rebuttal pdf. The details are deferred to individual reviewer responses. 1. (YvdP) We run TBAL with the additional calibration baseline Adaptive Temperature Scaling (AdaTS) [4] and report the results in Table 1. The results are **consistent with the main paper and our expectations**. 2. (YvdP) We run TBAL with five values of $\epsilon_a \in \{0.01, 0.025, 0.05, 0.075, 0.1\}$ and report the results in Table 2. **As expected**, the auto-labeling error is high with larger values of $\epsilon_a$ and smaller with small $\epsilon_a$. 3. (YvdP) The results with $C_1 \in \{0.0, 0.25\}$ are in Table 3. Using $C_1=0.0$ leads to higher variance in the auto-labeling error. This is consistent with prior work [1]. 4. 
(Nkxd) We further demonstrate that the **performance gains are due to the use of Colander**, even when methods use multiple rounds. To do so, we show the evolution of coverage and error over multiple rounds in Figure 1 in the rebuttal pdf. The effects of using Colander are visible from the first round itself, and the following rounds improve performance further. We also run a single-round (passive) variant of TBAL where we sample all the human-labeled points for training ($N_t$) randomly at once, train a classifier, do auto-labeling, and then stop. This setting avoids confounding due to multiple rounds. We observe that using Colander yields significantly higher coverage in comparison to the baselines (see Table 4 in the pdf). This reinforces the fact that the gains in multi-round TBAL are directly due to Colander, and that multiple rounds of data selection, training, and auto-labeling remain superior to doing everything in a single round. 5. (oXkX) We run TBAL with ViT and ResNet18 models. The results are in Table 6. We see that the choice of classification model affects baseline performance, but TBAL with Colander remains robust to the model choice. 6. (oXkX) The results with $\lambda$ variation are in Table 5. We see Colander is robust to a wide range of values of $\lambda$, except extreme values. **References** [1] Vishwakarma et al., Promises and Pitfalls of Threshold-based Auto-labeling, NeurIPS 2023. [2] https://aws.amazon.com/sagemaker/data-labeling/ [3] Qiu et al., MCAL: Minimum Cost Human-Machine Active Labeling, ICLR, 2023. [4] Joy et al., Sample-dependent Adaptive Temperature Scaling for Improved Calibration, AAAI 2023. Pdf: /pdf/407907b697c705a7dba8d4401ad278f0218f792c.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper introduces Colander for the threshold-based auto-labeling (TBAL) pipeline. Colander trains a flexible confidence function using latent information from the classifier model, instead of using fixed confidence functions. The optimization problem of Colander is based on the objective of TBAL, that is, maximizing coverage while minimizing auto-labeling errors. The authors consider differentiable surrogates for the 0-1 variables using the sigmoid function to make this problem tractable with gradient-based methods. Strengths: This paper is well-motivated. The proposed method is explained with simplicity and clarity. Weaknesses: Contribution: Although this paper proposes Colander for the TBAL pipeline, there is no theoretical analysis or understanding of its mathematical properties. Completeness: - The paper includes unknown notation, which is used without being defined beforehand. - In Section 3.3, undefined procedures exist: RandomQuery, RandomSplit, and ActiveQuery. In the case of the Random methods, readers can guess the meaning, but in the case of ActiveQuery, a detailed explanation is needed. - There are a few expressions that make it difficult to figure out the intent. Additionally, some keywords used in the paper need to be unified. - What is the difference between the score (or scoring) function and the confidence function? - What is the difference between ‘inherent tension’ (in line 33) and trade-off? - What is the difference between a post-hoc method and a calibration method? - The reviewer is unsure if Figure 1 is necessary for the reader: it contains acronyms that are not defined in advance, such as ERM, and certain formulas (h) are left undefined in the introduction. - Line 174, line 186: The reader does not know which Appendix to look at because Appendix information is omitted. - Appendix, Algorithm 2, line 8: The formula where the C_1 weight is considered is not defined in the text. - Appendix, Table 3: there are typos.
- This paper is considered to have no significant contributions compared to previous research about TBAL [1]. [1] Vishwakarma, Harit, et al. "Promises and pitfalls of threshold-based auto-labeling," Neural Information Processing Systems, 2023. Technical Quality: 3 Clarity: 1 Questions for Authors: Active Learning Vs. TBAL - Active learning and TBAL fundamentally play the role of labelers for exploiting unlabeled raw data. Lines 100-104 include these differences, but it is hard to get the point. Please explain these differences in detail. - Although the paper includes a performance comparison with active learning (and self-training), an algorithmic setup of active learning is not provided. Please provide the experimental settings so readers can judge whether the comparison is fair. - Why does the TBAL process require a human labeler at every iteration? Is it impossible to use labeled and unlabeled data completely separately in advance? Colander (proposed solution) - In Algorithm 1 (line 14), Colander extracts a confidence function $\hat{g}_i$ and $\hat{t}'_i$ through an optimization problem. To the reviewer's knowledge, $\hat{t}'_i$ is different from the $\hat{t}_i$ obtained in the threshold estimation process. Where is $\hat{t}'_i$ used? - The optimization problem (P1,2,3) is solved using a given classifier. It means that this optimization process only works when the classifier's performance is guaranteed. Is this right, and is it a correct assumption? - The reviewer understood that the proposed solution configures the confidence function as a neural network, so it is flexible. But what is the meaning of 'choice of confidence function'? Does Colander learn multiple confidence functions and choose one of them? - Why did the authors select exactly two layers of the classifier as inputs to the confidence function? There are various options: a single layer, three layers, and inclusion of output information. - Is it right to learn a new classifier/confidence function every round?
If so, how long does the entire process take? If not, some expressions in the algorithm (pseudocode) and Section 3.3 need to be changed. Post-hoc method - Most baseline methods were proposed before 2020, except one algorithm. Please check the following methods. - R. Verma, et al. "Learning to defer to multiple experts: Consistent surrogate losses, confidence calibration, and conformal ensembles." AISTATS, 2023. - L. Tao, et al. "Calibrating a deep neural network with its predecessors." IJCAI, 2023. - T. Joy, et al. "Sample-dependent adaptive temperature scaling for improved calibration." AAAI, 2023. - The reviewer would appreciate it if the authors could provide some insight/intuition into the problems with previous methods. The experimental sections just show empirical results and lack discussion. Empirical results - It is important to provide accurate settings when performing partial improvements in the overall pipeline. Have you checked the performance of various procedure changes, such as ActiveQuery? - Are there any performance analysis results regarding changes in confidence scores? - The experimental results clearly show that the algorithm has high performance in the provided experimental settings. What about time efficiency? Have you checked the wall-clock time? - In the optimization problem (P3), how did the authors select the upper-bound error parameter ϵ? How do the experimental results change depending on parameter changes? - The value of C_1 in line 8 of Algorithm 2 seems to be fixed at 0.25. What results can be seen when it changes? Confidence: 3 Soundness: 3 Presentation: 1 Contribution: 2 Limitations: In this paper, there was not enough discussion about the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
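The sigmoid surrogate described in this review's summary (replacing the 0-1 selection indicator 1[g(x) >= t] with a sigmoid so that coverage and error become differentiable in g and t) can be sketched as below. This is a penalized, illustrative form rather than the paper's exact constrained problem P3; `lam`, `tau`, and the toy numbers are our assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def surrogate_objective(scores, wrong, t, lam=1.0, tau=0.1):
    """Differentiable relaxation: reward coverage, penalize the soft
    auto-labeling error among (softly) selected points."""
    soft_sel = sigmoid((scores - t) / tau)    # relaxes 1[g(x) >= t]
    coverage = soft_sel.mean()
    soft_err = (soft_sel * wrong).sum() / (soft_sel.sum() + 1e-8)
    return -coverage + lam * soft_err         # lower is better

scores = np.array([0.9, 0.8, 0.3, 0.2])       # toy confidence scores g(x)
wrong = np.array([0.0, 0.0, 1.0, 1.0])        # 1 where the classifier errs
loss = surrogate_objective(scores, wrong, t=0.5)
```

Because every term is smooth, both the confidence function's parameters and the threshold can be updated with gradient-based methods, which is the tractability the summary refers to.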
Rebuttal 1: Rebuttal: We appreciate the detailed review. We have updated our work to account for points on notation and writing. Our response: ### Contributions We believe the reviewer has missed our paper's key contribution, specifically **with respect to prior work on TBAL [1]**. Our work **does not reiterate [1]**. Instead, *the weaknesses of the TBAL technique in [1] inspired our work*. That is, [1] uses a simple, fixed confidence function that leads to poor performance in terms of error rate and coverage. Our goal is to understand the impact of this choice and to produce **innovations that resolve these weaknesses**, as illustrated in our draft's Figure 2. This is important given wide industry adoption of TBAL [2]. We: - demonstrate the importance of confidence functions in TBAL, evaluate several choices of functions, and find them to be **severely limited---as they are not well-aligned with TBAL objectives**. - propose a framework to learn the optimal confidence function and provide a practical version of it. - provide extensive empirical evaluations, showing that using confidence functions learned via Colander in TBAL improves auto-labeling coverage significantly. ### Theoretical Analysis Our main contribution lies in demonstrating the issues with common choices of confidence functions in TBAL, proposing a principled solution to learn the optimal confidence functions for TBAL, and showing its effectiveness empirically. A rigorous theoretical analysis of our method is left as future work. ### Completeness * We have fixed notation and typos. * ActiveQuery is discussed in Appendix B.3 (lines 583-603). We have added a detailed algorithm. * We use scoring and confidence functions synonymously. * The terms inherent tension and trade-off indeed refer to similar behaviors. * Train-time and post-hoc calibration methods are standard terms: Train-time methods modify the training procedure while post-hoc methods operate on trained classifiers. * Fig. 
1 shows a simplified workflow of the standard procedure, highlighting the role of confidence functions. We have set out to make the figure readable without the notations as well; we kept the notations for consistency with later figures. * It is in Appendix B.2 (line 571 onwards). * $\hat{\sigma}$ is the standard deviation of the error estimate. We have included the formula. ### Active Learning vs. TBAL Our answer: 1. The details of the AL setup and experiment are in Appendix A.1. We added further detail. 2. The aim of the AL experiment is to **highlight the fundamental difference between TBAL and procedures (such as AL)** that seek to first learn the best possible classifier from the given function class and then do auto-labeling. These methods are severely limited by the choice of function class. On the other hand, TBAL succeeds since it iteratively auto-labels and eliminates the auto-labeled space. The comparison is fair as the methods use the same amount of labeled data, the same function classes, and the same training procedure to learn the model. 3. We adhered to the workflow of TBAL from [1]. While other ways to involve the human labeler are possible, these would not permit a comparison to [1]. ### On Colander 1. Because we solve a relaxed version of the optimization, $\hat{\mathbf{t}}'_i$ is not guaranteed to ensure auto-labeling error below $\epsilon_a$. We estimate the thresholds again in line 16 to ensure the auto-labeling error constraint is strictly followed by the estimated threshold. 2. The optimization process does not require any specific performance assumptions on the classifier. It has to be better than a random classifier to get any meaningful output from the procedure; this is not a strong requirement. 3. Colander does not learn multiple functions to choose from. In each round, it takes the current classifier and learns a confidence function by solving P3. 4. Colander can use any function class for $g$. 
In experiments, we chose 2-layer nets and successfully used the same architecture across all datasets; thus, we may not need an exhaustive architecture search. Intuitively, we do not need a large network for $g$ since $h$ already performs the heavier representation learning work. As a result, simple models are preferable to avoid overfitting and to reduce training time (since post-hoc methods should be fast). 5. TBAL learns a new classifier in every round, thus the confidence function also needs to be relearned. We comment on the time below. ### Baselines We use prominent and recent post-hoc and train-time baselines. Thank you for the references. Of these, [3] fits our setting. We provide results in the attached pdf (Table 1); these are consistent with the draft. The key insight is that the baselines are not tailored to the TBAL objective, whereas Colander is designed to maximize TBAL performance. Figure 2 and its discussion make this point. ### Empirical results 1. Our primary focus is on confidence functions, and to avoid confounding, we have chosen not to compare various active querying strategies within this work. Including multiple variables would obscure the effects of our method. 2. Could you please clarify this question? 3. The wall-clock time of our method is similar to other post-hoc methods: a single CIFAR-10 run on an NVIDIA RTX A6000 is roughly 1.5 hours (post-hoc) and 1 hour (no post-hoc). 4. Our framework is flexible enough to work with any choice of $\epsilon_a \in (0,1)$. In experiments, we used a fixed $\epsilon_a=0.05$. We provide additional results in Table 2 in the pdf with varying $\epsilon_a$ to demonstrate its effect. 5. Results with $C_1 \in \{0.0, 0.25\}$ in the 20-newsgroup setting with vanilla train-time and all post-hoc methods are in Table 3. $C_1=0.0$ leads to higher variance in auto-labeling error, consistent with previous works [1]. [1] Vishwakarma et al., Promises and Pitfalls of Threshold-based Auto-labeling, NeurIPS 2023. 
[2] https://aws.amazon.com/sagemaker/data-labeling [3] Joy et al., Sample-dependent Adaptive Temperature Scaling for Improved Calibration, AAAI 2023. --- Rebuttal Comment 1.1: Comment: Thank the authors for their hard work! After an additional explanation, some of my concerns were addressed. So, I have raised my score. Clarification regarding my score. +) This work is well-motivated. I believe the community would appreciate this work, which can be very practical. Additional results provided in the rebuttal period strengthen the claim. -) The manuscript is not ready to be published. Notations can be revised, and the figure can be better described. The text size in the figure is too tiny. The critical limitation of this work is outdated baselines. The authors added a new baseline in the rebuttal period, but overall evaluations are quite limited. So, I think this work is around the borderline, but I lean to the reject side. In terms of presentation and evaluation, I strongly believe this manuscript is not ready to be published.
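The rebuttal's "On Colander" response mentions that $g$ is a small 2-layer network over the classifier's activations. A minimal forward-pass sketch follows; all shapes, initializations, and names are hypothetical, and only the overall structure (concatenated activations from the classifier's last two layers, a small net squashed to a confidence in (0, 1)) reflects the discussion above:

```python
import numpy as np

rng = np.random.default_rng(0)

def two_layer_confidence(features, w1, b1, w2, b2):
    """g(x): map classifier activations to a confidence score in (0, 1)."""
    h = np.maximum(0.0, features @ w1 + b1)   # ReLU hidden layer
    z = h @ w2 + b2
    return 1.0 / (1.0 + np.exp(-z))           # squash to (0, 1)

# Concatenate activations from the last two layers of the classifier h.
penultimate = rng.normal(size=(8, 64))        # hypothetical penultimate features
logits = rng.normal(size=(8, 10))             # hypothetical output logits
features = np.concatenate([penultimate, logits], axis=1)   # shape (8, 74)

w1 = rng.normal(scale=0.1, size=(74, 16)); b1 = np.zeros(16)
w2 = rng.normal(scale=0.1, size=(16, 1));  b2 = np.zeros(1)
scores = two_layer_confidence(features, w1, b1, w2, b2)
```

As the rebuttal argues, $g$ can stay small because the classifier already does the heavy representation learning; in practice its weights would be trained on validation data by optimizing the surrogate of P3 rather than drawn at random.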
Implicit Multimodal Alignment: On the Generalization of Frozen LLMs to Multimodal Inputs
Accept (poster)
Summary: The authors propose an exploration of the Implicit Multimodal Alignment (IMA) phenomenon in frozen Large Language Models (LLMs): when exposed to perceptual tokens (e.g. from image or audio features), the authors show that those tokens are implicitly aligned with text tokens, even though the LLM has only been trained on text previously. The study argues this phenomenon is a fundamental mechanism in LLMs, and that it is relevant (i) to better understand task performance (e.g. using that implicit alignment as a proxy) and (ii) to better understand model visual faithfulness (i.e. absence or presence of hallucinations). Based on their findings, the authors also propose architecture changes such as skipping specific operations on perceptual tokens in order to decrease inference cost. Strengths: - This is a very original and in-depth work to understand the internal mechanisms appearing in MLLMs. Many/Most of the currently published MLLMs are based on variations of a pre-trained LLM + a pre-trained vision encoder with a smaller VL connector in between. The insights from this study could be applied to many of those works, especially to improve the visual faithfulness of those models. Weaknesses: - This is not commonly an important weakness, but for this work it might be: the presentation of the results and the overall quality of the layout organization is hurting the understanding. All the figures are pixelated bitmap/jpeg screenshots and are barely readable. This doesn't change the pertinence of the work, but significantly hurts how well its content is conveyed. I urge the authors to revisit this. Many solutions exist today, such as TikZ or exporting figures as vectorized PDF to better include them in the final document. Lack of space in a limited-page paper is understandable, but not using TikZ or vectorized figures is not. Figure 5 for instance shouldn't be as is in this work. I challenge other readers to read/decipher the labels of the x and y axes on the heatmaps. 
- In many details, the work could have been improved. See the "Typos" list below. As well, some claims are not backed anywhere. For instance, Figure 1 and Figure 12 both claim a 70% sparsity. Where is that introduced in the main text? Consistency is not respected either: in some contexts, P means Prompt, while in other contexts it means Perceptual. Is Prompt == Perceptual? This is not stated in Section 2. Prompt usually refers to the text and image prompts, before preprocessing and tokenization. The formatting is off as well. Sometimes, $P$ is used, sometimes plain-text P. Same for $T$ vs. T. - Evaluation could be more complete for the Multi Task setup with the use of common benchmarks such as LLaVABench-in-the-Wild, MMVet, etc. As well, hallucination benchmarks such as MMHALBench and HallusionBench should be used. The authors quote [46], which is the very work that introduces MMHALBench. Typos: - Line 90: Define k? - Line 94: FC1 and FC2 should be $FC1$ and $FC2$. Same for g, LN1, LN2, SA. - Line 92: What is (B)? It is not defined in eq (2). - Line 101: s/We/we/ - L105: Reformulate "In the paper, we focus no LLaVA-1.5-4 as it is most similar to the ST setup, and analyse other variants in App. E."? s/ no / on / ? - L106: (e.g. Sec. 3) ? Do you mean (see Sec. 3)? Why e.g.? - Section/Equation/Appendix references are not following the same format. Sec. is used in place of Section, same for App. and Appendix, but Equation is fully written. What about adopting a consistent terminology? - Figure 2 is barely readable, same for Figure 3 and Figure 4. Figure 5 as well; it is not vectorized. Consider using TikZ or export your figures as PDF and include those cropped PDFs in your tex document. The Bitmap/JPEG/PNG used here lead to poor and pixelated results, hurting the readability. 
- Figure 1: s/Analysing/Analyzing/ - Figure 1: s/LMM/LLM/ (twice) - Figure 1: s/perceputa/perceptual/ - Figure 1: "Computation efficiency", do you mean "Computational efficiency" or "Computation and efficiency"? - Figure 1: What is the point of the right block named "Framework"? It doesn't seem related to the work or the caption of Figure 1. - L136: The claim "Fig. 2 shows a clear narrow cone effect for textual and perceptual tokens." is not verifiable when looking at Figure 2. - Figure 10: What is the difference between Cosine Similarity and Sim? Aren't they the same as defined in Equation 3? Why use different terminology? Technical Quality: 3 Clarity: 1 Questions for Authors: - The authors claim that the alignment between Image <-> Text tokens helps the model perform well on VQA tasks and hallucination tasks. It would be interesting to know if this is a cause (i.e. performance on vision tasks is high thanks to that alignment), or a consequence (i.e. alignment is high alongside performance on vision tasks thanks to the vision-language training). A way to verify that could be to compare alignment among different off-the-shelf models (e.g. LLaVA-1.5, LLaVA-1.6, Qwen-VL-Chat, Phi3-Vision) and see if there are differences in their respective alignment scores. With such a population of models and scores, it could be possible to rely on statistical testing to establish it. Do you see a covariation between alignment and performance among that population? Can you control the alignment independently of the vision-language training itself? E.g. manipulating P and T directly such that the similarity is higher/lower, and still observing that covariation between the alignment and performance variables in that population? - As well, the authors claim that an LLM can be seen as a residual stream with refinement blocks acting as steering blocks, and that this architecture is what helps the LLM to implicitly have an alignment between text and image tokens. 
It seems hard to support this claim without contrasting their experiments with another architecture aligning image and text. For instance, using a simple pre-trained CNN to process the image, and a simple pre-trained RNN (e.g. an LSTM or GRU) to process the text input, and then concatenating/pooling the outputs with a simple connector (e.g. a c-abstractor or just a plain linear projection) could be used to contrast their findings. Is that something planned by the authors? Confidence: 4 Soundness: 3 Presentation: 1 Contribution: 4 Limitations: - One important limitation is the presentation of this work. The majority of the figures are pixelated and not readable. This is often a minor inconvenience in other works, but in this paper all the results are presented through figures. Figure 5 for instance should be removed or substantially reworked. Lack of space in a limited-page paper is understandable, but not using TikZ or vectorized figures is not. This is hurting the overall understanding of an interesting work trying to address interesting questions. This work should not be published as is. - The authors claim that the alignment between Image <-> Text tokens is what helps the model perform well on VQA tasks and hallucination tasks. Something that is not tested is to what extent this is a cause (i.e. performance on vision tasks is high thanks to that alignment), or a consequence (i.e. alignment is high alongside performance on vision tasks thanks to the vision-language training). See the "Questions" section of this review. - The authors claim that an LLM can be seen as a residual stream with refinement blocks acting as steering blocks, and that this architecture is what helps the LLM to implicitly have an alignment between text and image tokens. It seems hard to support this claim without contrasting the work with another architecture. See the "Questions" section of this review. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the detailed feedback on our work. The comprehensive comments reflect a significant effort in reading our paper and providing feedback, which we highly appreciate. We recognize the reviewer's intention to help us further improve our paper and are grateful for their constructive input. ## **Weaknesses:** - **The presentation of the results and the overall quality** We appreciate the reviewer's suggestion to improve the readability of the figures. We have made significant efforts to ensure all numbers in the figures are as clear as possible (sometimes by zooming due to space constraints). To clarify, each figure in our submission is generated by a dedicated Python script that performs model inference, calculates metrics, creates the figures, and saves them at a very high resolution. We received initial feedback from several readers about the large PDF file size and slow navigation of the PDF due to the high-resolution figures. Consequently, we compressed the figures to balance clarity and efficient navigation. After inspection, we did not find any figure whose lack of clarity makes a claim ambiguous. Regarding Figure 5, we acknowledge that readers may need to zoom in to read it. To address this, we have included a clearer version of Figure 5 in the appendix. Using TikZ packages would be an ideal solution for figure clarity; however, it requires significant effort to adapt our code, rerun the experiments, and reproduce the figures with TikZ. We will consider this for future papers. - **In many details, the work could have been improved (Claims and Typos).** Figure 1/12: We changed this to 50%. We didn't put the 70% sparsity results in the paper because Wanda pruning was not effective in this sparsity regime (see response to hn2C for more details). Prompt == Perceptual? Yes, we use them interchangeably as mentioned in L85 in Section 2.
In our setup the prompt refers to tokens prepended before the textual ones. We unify the formatting to be $T$ and $P$ instead of T and P. - **Evaluation could be more complete.** The benchmarks that we use are typically correlated with other multimodal benchmarks. We agree that adding more recent ones will complete the work, and we consider this for future iterations of the paper. - **Typos:** We tried as much as possible to follow the reviewer's suggestions. We update the paper accordingly and provide answers to the raised questions: **What is the point of the right block named "Framework"?** To clarify the family of models that we analyze and to show that we focus on the LLM and look at the representations in the residual stream, between and inside the blocks. **The claim "Fig. 2 shows a clear narrow cone effect for textual and perceptual tokens." is not verifiable when looking at Figure 2.** We use the cosine similarity as an indicator of the cone effect. The unimodal tokens have high cosine similarity scores (P vs P) and (T vs T). This score is significantly larger than the cross-modal score (P vs T) or (P vs P in ST across different perceptual modalities). As high scores mean there is a cone effect, the narrow cones exist. As the cross-modal cosine similarity is small, the multimodal cones are distinct. ## **Questions:** ### **Author claim about the alignment between Image <-> Text** - First, we would like to clarify that our paper discusses positive "correlation" and provides initial observations suggesting that the IMA score "could" serve as a proxy metric for task performance (framed as questions in the paragraph titles L238 and L243). We don't claim that alignment leads to better performance or that this alignment should be maximized. Nonetheless, this section occupies only half a page. - Despite being conservative in our claims, we have enough results to consider the IMA score a good candidate metric.
We conducted a controlled experimental study by comparing different LLaVA variants (MT setup) and models trained for varying numbers of steps. These experiments indeed showed that training improves both alignment and performance. - Additionally, to further support our hypothesis, we add the following experiments (for the ST setup). The experiment consists of training LMMs with different image encoders: text-aligned (CLIP), self-supervised (MAE), and trained for ImageNet classification (ViT). We report the task performance and the IMA score (Fig. 1 and Tab. 1 in the uploaded pdf). Here we also observe that the alignment correlates with performance: encoders with higher IMA scores (e.g. CLIP) also demonstrate better performance. - Rigorously studying this alignment across different setups and models is an interesting direction that we leave for future work. We thank the reviewer again for the very interesting suggestion. ### **Claim that an LLM can be seen as a residual stream with refinement blocks acting as steering blocks** - Contrasting the LLM architecture with other architectures would be necessary if we claimed that this is a unique property of the LLM architecture, which we do not. We hypothesize that generalization to non-textual tokens is related to the architecture and provide observations to support this hypothesis. - The presence of similar architectural inductive biases (e.g., residuals or alternatives to attention modules) in other models does not change the main findings of the paper. Instead, it generalizes and solidifies the proposed hypothesis across a wider range of architectures. - While studying RNNs might be important, these architectures are not common in the multimodal community. Therefore, we prefer to focus on architectures that are widely adopted. - We thank the reviewer for this suggestion and leave pursuing this study to future work.
## **Limitations:** - Please see answer to weakness 1 - Please see answer to question 1 - Please see answer to question 2 --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their response. It is unclear why producing TikZ figures would require rerunning experiments as mentioned in the rebuttal. I assume the authors have kept all the outputs of their experiments in a readable format (csv, json, or even better wandb) for future consultation. My other concerns have been addressed. I keep my ratings unchanged.
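As a side note on the cone-effect indicator discussed in the rebuttal above (high intra-modal cosine similarity, low cross-modal similarity): a minimal sketch of such a comparison is shown below. This is illustrative only; `mean_pairwise_cosine` is a hypothetical helper, not code from the paper, and it assumes token features are stacked as rows of a matrix.

```python
import numpy as np

def mean_pairwise_cosine(A, B):
    """Mean cosine similarity between every row of A and every row of B."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return float((A @ B.T).mean())

# High intra-modal means (P vs P, T vs T) indicate narrow cones;
# a low cross-modal mean (P vs T) indicates the modality cones are distinct.
```

With perceptual features clustered around one direction and textual features around another, the intra-modal scores are close to 1 while the cross-modal score stays low, which is the pattern the rebuttal describes for Fig. 2.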
Summary: The paper explores how frozen Large Language Models generalize to multimodal inputs without the need for extensive re-training. It introduces the concept of Implicit Multimodal Alignment, which suggests that despite the distinct representations of perceptual and textual tokens, there exists a form of implicit alignment facilitated by the architecture of LLMs. The study leverages experimental setups across single-task and multitask environments to validate the IMA phenomenon and discusses its implications for computational efficiency and model performance. Strengths: 1. The analysis spans various setups and modalities, providing a comprehensive look at how LLMs process multimodal inputs. 2. This study can quickly extend large semantic models to the multimodal domain, effectively reducing computational costs and enhancing the generalizability of LLMs. 3. The results are meaningful, proving the effectiveness of the method. Weaknesses: 1. The concept of alignment within neural networks, although well-explored here, does not offer a groundbreaking methodological advance. The novelty lies more in the application context rather than in the development of new techniques or models. 2. The generalizability of the results is restricted to a subset of model architectures and sizes, potentially limiting the broader applicability of the findings. 3. The visualization experiments are not clear. Current figures are somewhat generic and do not adequately convey the unique aspects of the IMA effect. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. How does the IMA effect influence the overall performance of LLMs on standard multimodal tasks? Are there performance trade-offs associated with the alignments observed? 2. Are there indications that these findings could be applicable to other types of models or architectures not covered in the study? Confidence: 2 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: 1. The paper lacks implementation details. 2. 
The study focuses primarily on a specific range of LLMs. More domains and tasks should be included. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the feedback. In the following, we answer the reviewer’s concerns. ## **Weaknesses:** - **The concept of alignment within neural networks, although well-explored here, does not offer a groundbreaking methodological advance. The novelty lies more in the application context rather than in the development of new techniques or models.** As far as we know, the significant gap between perceptual and textual tokens within LLMs and the ability of LLMs to generalize to non-textual tokens has not been studied before. While we respectfully disagree with the reviewer's categorization, this novelty does not diminish the importance of the messages conveyed by the paper. - **The generalizability of the results is restricted to a subset of model architectures and sizes, potentially limiting the broader applicability of the findings.** The primary motivation for our paper stems from the observation that simply connecting unimodal encoders to LLMs achieves unprecedented performance. This led us to consider two typical setups that use LLMs to tackle multimodal tasks. In the paper, we explore different LLMs (OPT, Vicuna v1.5), visual encoders (ViT, CLIP, MAE), and modalities (image, video, audio and text) as seen for example in Figs. 23, 26, 27, 28, 29, 32. We believe this covers a wide range of multimodal LLM approaches. Regarding the model size, 7B LLMs are currently widely used and have been shown to outperform many larger LLMs, making them an appealing choice for studying and understanding their capabilities. However, we agree that extending our study to other setups beyond LLMs is an interesting direction for future work, as stated in the discussion in Appendix B. - **The visualization experiments are not clear. 
Current figures are somewhat generic and do not adequately convey the unique aspects of the IMA effect.** We would appreciate more clarification on what the reviewer means by "not adequately convey the unique aspects of the IMA effect." This comment is too general, and it is unclear to us how to respond. ## **Questions:** ### **How does the IMA effect influence the overall performance of LLMs on standard multimodal tasks? Are there performance trade-offs associated with the alignments observed?** As we show in Figure 9, there is a positive correlation between the IMA score and performance on standard multimodal tasks; models with higher performance also have the highest IMA scores, and vice versa. Intuitively, better multimodal alignment leads to better understanding of different modalities. However, we do not claim that the two modalities must be fully aligned, and we leave this investigation for future work. ### **Are there indications that these findings could be applicable to other types of models or architectures not covered in the study?** Our experiments cover several LLMs with different encoders and connectors, trained on various multimodal datasets. We believe that these observations are generalizable to the broader category of multimodal LLMs (MLLMs). We leave conducting a more in-depth investigation of this for future work. ## **Limitations:** - **The paper lacks implementation details.** Due to space constraints, we included most of the details in the appendix. Could the reviewer point out details that should be considered in the main paper? - **The study focuses primarily on a specific range of LLMs. More domains and tasks should be included.** The current multimodal tasks we cover include question answering and text generation conditioned on images (i.e. image captioning), which cover many multimodal tasks (most of the tasks can be cast as question answering or instruction following). We address four modalities in our work. 
However, there are other domains of applications, such as 3D vision and speech, that we do not cover, but we believe that our findings should generalize to additional modalities and tasks. --- Rebuttal Comment 1.1: Comment: Dear reviewer, we hope our clarifications can address your concerns, and we sincerely hope that you can reconsider our work in light of these clarifications. If you have any further comments, please do not hesitate to contact us. We greatly appreciate your contributions to the community.
Summary: This paper conducts an in-depth study on the generalization capabilities of LLMs when handling multimodal inputs without multimodal fine-tuning. It reveals the implicit multimodal alignment (IMA) effect between perceptual and textual tokens within LLMs and finds that this effect is closely related to the model architecture. The IMA effect contributes to enhanced task performance and reduced inference costs. Additionally, the paper proposes methods for model compression and reducing computational overhead, providing valuable insights for the future design and optimization of multimodal models. Strengths: Conducted an EXTENSIVE series of experiments to validate four hypotheses, providing an insightful understanding of how MLLMs perceive multimodal information. Based on the four findings, the paper offers relevant implications that explain the effects of IMA on tasks and hallucinations. Proposes a novel approach to model compression by retaining a subnetwork. Experiments are comprehensive and were conducted on various large language models, including OPT, Llama, and Vicuna. Weaknesses: The textual descriptions are somewhat difficult to understand, with limited explanation of the figures. Insufficient explanation of the subnet part. It is unclear whether the similar performance after 50% sparsity is due to the effectiveness of the WANDA method or the extraction of the subnet. Technical Quality: 4 Clarity: 3 Questions for Authors: I do not fully understand line 247. How is the degree of hallucination measured? What is the relationship between cosine similarity and this measurement? Out of personal curiosity, how long did these experiments take? Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: See weakness and question Flag For Ethics Review: ['No ethics review needed.'] Rating: 9 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the very positive feedback about the paper and appreciate that they find it novel, supported by an extensive series of experiments, and carrying insightful messages. This feedback encourages us to push further for similar interesting work. In the following we try to address all of the reviewer's remaining concerns. ## Weaknesses: - **The textual descriptions are somewhat difficult to understand, with limited explanation of the figures.** We agree that some figures are not exhaustively explained in the captions. Due to space limitations, we tried to explain them in the text or in the appendix instead. We would appreciate it if the reviewer could point to specific figures that need an elaborated explanation. - **Insufficient explanation of the subnet part. It is unclear whether the similar performance after 50% sparsity is due to the effectiveness of the WANDA method or the extraction of the subnet.** Based on our experiments, the significant drop in performance when exceeding 50% sparsity is attributed to Wanda. To support this, we provide additional scores when pruning the model with Wanda (OPT LLM with a CLIP encoder):

| **Sparsity** | **COCO** | **VQAv2** | **GQA** | **OKVQA** |
|--------------|----------|-----------|---------|-----------|
| **0.5**      | 126.81   | 55.28     | 24.72   | 42.00     |
| **0.7**      | 84.47    | 16.41     | 10.70   | 1.80      |

These results demonstrate a substantial performance decrease with increased sparsity. It is also worth noting that similar observations have been reported in concurrent work [1] (Fig. 4a). [1] Sung, Yi-Lin, Jaehong Yoon, and Mohit Bansal. "Ecoflap: Efficient coarse-to-fine layer-wise pruning for vision-language models." arXiv preprint arXiv:2310.02998 (2023). ## Questions: ### **I do not fully understand line 247. How is the degree of hallucination measured?
What is the relationship between cosine similarity and this measurement?** We follow standard practices for measuring multimodal hallucinations (specifically object hallucinations), which occur when the model describes or refers to objects not present in the input image. **Hallucination Metrics:** - COCO: On the COCO image captioning dataset, the model is asked to describe the images. We compute the CHAIR metrics based on the generated captions and the ground-truth annotations of all objects in the image. If a caption contains non-existent objects, we classify it as a hallucinated caption. The CHAIRs score is the ratio of hallucinated captions to the total number of captions. Additionally, we calculate the ratio of hallucinated objects to the total number of objects across all captions, which is referred to as CHAIRi. A CHAIR score of 0 indicates no hallucinations. In the paper, we report (1 - CHAIR) × 100, so a higher score indicates fewer hallucinations. - POPE: This is a question-answering task involving questions about the existence of objects in images. The metric used is accuracy; the fewer the hallucinations, the higher the accuracy. We add these details to the appendix of the revised paper. **Link to image-text similarity**: In the paper, we hypothesize a link between the degree of cross-modal alignment and hallucinations. Our intuition is that for a textual model to understand the details in images, the image features should be well aligned with the textual domain. The better the alignment (measured here with cosine similarity), the better the model is at reasoning about it. Misaligned representations can be considered loosely as out-of-distribution/modality samples, which cause the model to be less confident and more likely to hallucinate. ### **Out of personal curiosity, how long did these experiments take?** The experiments involve both training models and analyzing them.
Training LLaVA models typically takes less than one day on 8 GPUs, while ST models require a few hours, depending on the size of the dataset. Computing similarity at the token level also takes a few hours on a single GPU. However, due to the exploration of many different ideas, of which only a portion are included in the paper, the project consumed a significant amount of GPU hours. Generally, such a project can be conducted with an academic-level compute budget (8 GPUs).
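To make the CHAIRs/CHAIRi definitions from the rebuttal above concrete, here is a minimal sketch. It matches objects by simple word lookup against a vocabulary; the actual CHAIR implementation additionally maps caption words and synonyms to COCO object categories, which is omitted here, and `chair_scores` is a hypothetical helper name.

```python
import re

def chair_scores(captions, gt_objects_per_image, object_vocab):
    """CHAIRs = hallucinated captions / all captions;
    CHAIRi = hallucinated object mentions / all object mentions."""
    hallucinated_captions = 0
    hallucinated_objects = 0
    total_objects = 0
    for caption, gt_objects in zip(captions, gt_objects_per_image):
        words = set(re.findall(r"[a-z]+", caption.lower()))
        mentioned = words & object_vocab   # objects the caption talks about
        fake = mentioned - gt_objects      # mentioned but absent from the image
        if fake:
            hallucinated_captions += 1
        hallucinated_objects += len(fake)
        total_objects += len(mentioned)
    chair_s = hallucinated_captions / max(len(captions), 1)
    chair_i = hallucinated_objects / max(total_objects, 1)
    return chair_s, chair_i
```

With this convention, a caption mentioning only objects annotated in the image contributes 0 to both scores, and the paper's reported (1 - CHAIR) × 100 grows as hallucinations decrease.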
Summary: This work aims to understand multi-modality representation within MLLMs. It provides some interesting findings about how LLMs generalize to non-textual tokens and what helps LLMs to generalize to multimodal tokens. Additionally, several implications are proposed based on these findings. Strengths: 1. The authors present many interesting findings about how LLMs generalize to non-textual tokens. These findings could help in understanding MLLMs and inspire future research. 2. Based on these findings, the authors propose several implications on performance, safety and efficiency. 3. The figures are well-illustrated and effectively support the findings. Weaknesses: **About experiments on α-SubNet.** --Figure 12 appears incomplete. What does "Avg" mean in this table? --The comparison between task-specific pruning approaches (Wanda) and the proposed task-agnostic approach is not comprehensive. For instance, what is the performance of Wanda sub-network pruning on COCO across other multimodal tasks? In other words, I'm curious about the generalization of the task-specific methods. Including these results would help in understanding the significance of task- and modality-agnostic pruning methods. --Additionally, the results in Table 1 of the Appendix should be incorporated into the main paper for better readability. --Overall, I recommend reorganizing the experiments related to α-SubNet. **Others**. --line 1192 is unfinished. Technical Quality: 4 Clarity: 4 Questions for Authors: In the single-task (ST) setting (Fig. 2), the authors use unimodal encoders that are not text-aligned. Figure 2 illustrates "Multimodal cones: different narrow cones for different modalities." However, I wonder if this phenomenon is related to the unimodal encoders themselves—i.e., different modality features from unimodal encoders might have very different feature distributions, leading to different narrow cones in LLMs.
Could the authors conduct new experiments using aligned encoders, such as ImageBind [1], as different modality encoders? This would help determine whether the different narrow cones in LLMs are due to different modalities or different encoders. Ref: 1. ImageBind: One Embedding Space To Bind Them All. CVPR 2023 Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: As stated by the authors, the generalization of the findings, to larger and more powerful models, with different architectures, including proprietary ones remains to be seen Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: The authors would like to thank the reviewer for the positive feedback and appreciate that they find the paper interesting, insightful, and well presented. In the following we try to address all the remaining concerns: ## **Weaknesses** ### About experiments on α-SubNet. - **Clarification about Figure 12**: The "Avg" in Figure 12 represents the average score across all multimodal tasks (in this case, VQAv2 and COCO). We will update the figure to reflect this more clearly. - **Generalization of Task-Specific Methods**: This is an important point. We have included experiments in Figure 6, as well as Figures 26, 27, 28, and 29 in the appendix, that address the generalization of pruned masks/subnetworks to other tasks and modalities. All these results involve task-specific pruning methods (i.e., Wanda). For example, the performance of Wanda sub-network pruning on COCO across other multimodal tasks is as follows (Figure 6): 59.23 (vs 59.69) on VQAv2, 45.19 (vs 44.09) on GQA, and 29.30 (vs 30.89) on OKVQA, which indicates very good generalization, as the scores are comparable to pruning masks coming from the same datasets (indicated in parentheses here). The generalization of task-specific pruning to other tasks was a main part of the motivation for α-SubNet. - **Incorporation of Table 1 into the Main Paper**: We agree that Table 1 provides a stronger signal regarding the effectiveness of α-SubNet. However, due to space limitations, we included only a subset of the results. We update the revised paper by moving more benchmarks from additional modalities into the main paper. ### Others **--line 1192 is unfinished.** Thanks for spotting this. This paragraph is redundant as it is detailed before in line 1169. We remove this line in the revised paper. ## Questions: ### **Whether the different narrow cones in LLMs are due to different modalities or different encoders?** We can break this question into several sub-questions: 1.
Does the modality gap still exist for text-aligned encoders? Yes. To demonstrate that the modality gap (or different multimodal "cones") persists even for text-aligned features, we conduct a comprehensive comparison between different encoders (new Fig. 1 in the uploaded pdf). This includes text-aligned encoders such as CLIP, unaligned encoders like MAE, and encoders trained for classification (e.g., ViT on ImageNet). Our findings reveal that: (1) the modality gap persists even for encoders aligned with text; (2) CLIP encoders produce features that are more closely aligned with LLM textual features, while MAE produces the most misaligned features. 2. What causes different narrow cones in LLMs? Previous work [1] has shown that even with the same model architecture and training data, varying the random seed can lead to differences in encoded representations (i.e., different narrow cones). Thus, differences among encoders, or the same encoders trained on different data or modalities, all contribute to having different narrow cones. 3. Does this affect the main paper message? Our goal is not to show that there is a gap between different multimodal encoders, but rather to highlight that this gap still exists within LLMs even after projecting all modalities into the same LLM input space. Despite this gap between perceptual and textual features, the LLM is still able to generalize effectively. Whether the multimodal aspect is the sole cause of this gap or there are additional contributing factors, our main conclusions and messages remain valid. [1] Liang, Victor Weixin, et al. "Mind the gap: Understanding the modality gap in multi-modal contrastive representation learning." Advances in Neural Information Processing Systems 35 (2022): 17612-17625. --- Rebuttal Comment 1.1: Comment: Thank the authors for the detailed response. All my concerns have been addressed. I'd like to keep my rating.
Rebuttal 1: Rebuttal: - We would like to thank the reviewers for the very detailed and positive feedback. We appreciate that they find our work original (vaQc, hn2C, dcgF, 69kn), providing valuable and interesting insights (vaQc, hn2C, dcgF, 69kn), supported by extensive experimentation (hn2C), and well presented (vaQc). We find the feedback very encouraging, with many interesting and relevant suggestions. - Based on the reviewers' suggestions, we made changes to the paper; some of them are mentioned in this rebuttal and the uploaded PDF, and others are integrated directly into the main paper. The changes mainly cover adding new experiments, clarification, and improving the overall presentation (e.g. figures, typos). We detail the changes in our response to each reviewer. - In the following we try to address all reviewers' concerns. Pdf: /pdf/a21d7e0b58f2bae187405132b1d47a972ba5cfbc.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Autonomous Driving with Spiking Neural Networks
Accept (poster)
Summary: Although SNNs hold the potential of neuromorphic computing for sustainable and safety-critical autonomous technology, they still lack evidence in complex real-world computer vision applications. In this work, the authors propose a unified end-to-end SNN called SAD that consists of three models to generate safe trajectories, which presents improved energy efficiency. The researchers validate the method on the nuScenes dataset, which not only verifies its good performance but also demonstrates its effectiveness to a certain extent. Strengths: The proposed idea is novel and relevant to the NeurIPS community. 1. SNNs have attracted a lot of attention in recent years due to their better performance and low energy consumption, but their application potential remains to be explored. In this work, the authors investigate an end-to-end SNN-based approach applied to autonomous driving, which shows impressive performance similar to traditional neural network approaches. This increases the impact of the work. 2. The proposed method can help reduce energy consumption and plan a safe path, which shows that SNNs have great potential to be applied in the field of autonomous driving systems. 3. In experiments, the authors compare the proposed method with traditional deep learning works from the last five years, and obtain competitive results. Weaknesses: Comparisons with state-of-the-art works from the last three years are lacking in the results of perception, prediction, and planning in Section 4.1. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In related works, the authors should add a relevant description of the connection between this research and End-to-end Autonomous Driving, as in the previous paragraph "We push...in this paper". 2. In Section 4.1, the authors mention that "The results, as summarized in Tab. 1, show that our SAD method, ..., competes favorably against state-of-the-art, non-spiking artificial neural networks (ANNs)".
But I only see a specific figure of 7.43%. Please add descriptions to support it. 3. This work lacks a discussion of the robustness of the proposed method; I would recommend adding experiments on more datasets. 4. In Table 4 and Table 5, the authors do not show complete ablation experiments, e.g., SEW+SP in Table 4, SA+SR in Table 5, etc. Please provide the necessary details. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Some details in the manuscript are not clearly stated, and the algorithmic performance needs more convincing evidence. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough and constructive feedback on our manuscript. We appreciate the positive comments on the novelty and relevance of our work, as well as the recognition of its potential impact in the field of autonomous driving. We have carefully considered all the points raised and have addressed them in detail in the following sections. We kindly ask the reviewer to find our responses to their specific questions and concerns in the content below. >In related works, the authors should add a relevant description of the connection between this research and End-to-end Autonomous Driving, as in the previous paragraph "We push...in this paper". Response: Thank you for pointing this out. We will add a paragraph that connects our research to end-to-end autonomous driving in the final version. A draft passage is provided here: "While previous work has shown SNNs' effectiveness in autonomous control tasks, our research extends to the more complex challenge of end-to-end autonomous driving. We build upon these foundations, pushing SNNs to handle the challenging, real-time decision-making required for full autonomy in dynamic, real-world environments. This work bridges the gap between simplified control tasks and the holistic approach needed for true end-to-end autonomous driving." >In Section 4.1, the authors mention that "The results, as summarized in Tab. 1, show that our SAD method, ..., competes favorably against state-of-the-art, non-spiking artificial neural networks (ANNs)". But I only see a specific figure of 7.43%. Please add descriptions to support it. Response: Thank you for bringing this to our attention. You're right that we should provide more comprehensive support for our statement about competing favorably against state-of-the-art ANNs. We'll revise this section to include more detailed comparisons. A draft update is provided here: "The results, as summarized in Tab.
1, show that our SAD method, which is fully implemented with spiking neural networks (SNNs), competes favorably against several state-of-the-art, non-spiking artificial neural networks (ANNs). Specifically: Our SAD method achieves a mean IoU of 35.62%, which outperforms VED (28.19%), VPN (30.36%), PON (30.52%), and Lift-Splat (34.61%). While more recent methods like IVMP (36.76%), FIERY (40.18%), and ST-P3 (42.69%) achieve higher mean IoUs, our SNN-based approach comes close to their performance, especially considering the inherent energy-efficiency of using SNNs." >This work lacks a discussion of the robustness of the proposed method, I would like to recommend adding experiments on more datasets. Response: We appreciate your suggestion. We agree that testing on additional datasets would strengthen our work. While time constraints prevent us from conducting these experiments for this paper, we acknowledge this limitation and plan to address it in future work. Specifically, we aim to extend our experiments to include the CARLA dataset. >In Table 4 and Table 5, the authors do not show complete ablation experiments, e.g., SEW+SP in Table 4, SA+SR in Table 5, etc. Please provide the necessary details. Response: Thank you for your observation. We appreciate the suggestion for more comprehensive ablation experiments. However, we would like to clarify that our ablation studies are designed to compare individual changes in configuration against our proposed model (shown in the last row of each table). The purpose of these experiments is to demonstrate the impact of each specific component or strategy. Combining multiple changes simultaneously (e.g., SEW+SP in Table 4 or SA+SR in Table 5) would make it difficult to isolate the effect of individual components. Moreover, as our results show, changing one configuration already leads to a decrease in performance. 
It's reasonable to expect that altering multiple configurations simultaneously would likely result in an even greater performance drop. We thank the reviewer once again for their valuable feedback and insightful questions. We believe that addressing these points has helped to clarify and strengthen our work. We have provided additional context for our research, expanded on our results, acknowledged limitations, and explained our experimental design choices. We hope that these responses adequately address the reviewer's concerns and further demonstrate the significance and potential impact of our work in advancing SNN-based approaches for autonomous driving. We are committed to incorporating these improvements in the final version of our paper, should it be accepted. We appreciate the reviewer's time and expertise in evaluating our submission. --- Rebuttal 2: Comment: Dear Reviewer XRtX, With the discussion period drawing to a close, we wanted to extend our heartfelt appreciation for your insightful comments and the positive feedback on our work, which has been incredibly helpful to us. We would be immensely grateful if you could kindly inform us whether our responses have resolved your concerns or if you have any additional questions that we can assist with. --- Rebuttal Comment 2.1: Comment: Dear Reviewer XRtX, We're writing to kindly remind you that today is the final day of our author-reviewer discussion period. If you have any remaining concerns, we would be most grateful if you could share them with us as soon as possible. Thank you once again for your positive review. Your insights have been invaluable, and we sincerely appreciate your time and expertise.
Summary: This paper introduces Spiking Autonomous Driving (SAD), an end-to-end spiking neural network (SNN) designed for autonomous driving. SAD integrates perception, prediction, and planning into a unified neuromorphic framework. It achieves competitive performance on the nuScenes dataset and significantly outperforms traditional artificial neural network (ANN) methods in terms of energy efficiency. The authors anticipate that this work will spur further research into neuromorphic computing for sustainable and robust autonomous driving solutions. Strengths: 1. Autonomous driving can be seen as an energy-limited scenario, so I think it is meaningful to introduce SNNs to this task. To the best of my knowledge, this paper is among the first to apply SNNs to AD. 2. This paper provides a unified framework on how to apply SNNs to AD. The authors employ a dual-pathway architecture, and I think this is an innovative design in the use of spiking neurons. 3. SAD achieves competitive results on the BEV segmentation IoU and semantic segmentation IoU, indicating the practical viability of the proposed method. The results in Table 3 really surprised me, for their great energy efficiency. Weaknesses: 1. Notation typos. It seems that the authors make mistakes in the equations in Lines 148, 155, 157, and 169. For example, I think $X \in \{0,1\}^{C\times T\times L\times H\times W}$ instead of $X \in \mathbb{R}^{C\times T\times L\times H\times W}$, because $X$ should be a spike train rather than a floating-point matrix. 2. I think energy efficiency is the key to this paper. However, the authors talk little about this issue in Section 4.1. Can you provide more detail about how you calculate the energy consumption, and your detailed energy experiments for all 3 steps (i.e., perception, prediction, and planning)? Technical Quality: 3 Clarity: 3 Questions for Authors: 1. I wonder if the numbers in your proposed SNNs are spikes or not. $\mathbb{R}$ stands for the real number field. 2. 
Can you show the detailed energy results of all steps in Table 3? I think it is incomplete to show the planning step only. 3. In Line 238, I wonder why you chose GRU to refine the trajectory. Does GRU have unique advantages on this issue? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors mentioned limitations in the Conclusion Section, focusing on further validation through real-world on-vehicle testing. However, it would be better for authors to discuss limitations in more detail. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
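An aside on the notation point raised in Weakness 1 above: the distinction between a real-valued tensor and a binary spike train can be made concrete with a small sketch. The shapes, threshold, and random membrane potentials below are hypothetical illustrations, not values from the paper.

```python
import numpy as np

# Hypothetical dimensions: channels, time steps, camera views (L), height, width.
C, T, L, H, W = 3, 4, 6, 8, 8
rng = np.random.default_rng(0)

# Membrane potentials are real-valued...
membrane = rng.normal(size=(C, T, L, H, W))

# ...but the emitted spike train is binary: a Heaviside step at a threshold.
threshold = 0.5
spikes = (membrane >= threshold).astype(np.uint8)

# So the correct domain is X in {0,1}^(C x T x L x H x W), not R^(...).
assert set(np.unique(spikes).tolist()) <= {0, 1}
print(spikes.shape, spikes.mean())  # mean of a binary tensor = firing rate
```

The last line also shows why the firing rate used in SNN energy estimates is simply the fraction of ones in the spike tensor.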
Rebuttal 1: Rebuttal: Thank you for your thorough review and insightful comments on our manuscript. We appreciate the time and effort you've invested in providing valuable feedback. We have carefully considered your points and are pleased to address them below. >I wonder if the numbers in your proposed SNNs are spikes or not. $\mathbb{R}$ stands for the real number field. Response: Thank you for your observation. You are correct; the SNNs indeed represent spikes. We should clarify that the domain should be $\{0,1\}$ instead of $\mathbb{R}$, accurately reflecting the spiking nature of the neurons. >Can you show the detailed energy results of all steps in Table 3? I think it is incomplete to show the planning step only. Response: We apologize for any confusion. To clarify, Table 3 presents the overall energy results, not just a single stage. Here's a detailed breakdown of the energy consumption for each stage:

| Step       | Energy (mJ) |
| ---------- | ----------- |
| Perception | 41.09       |
| Prediction | 4.53        |
| Planning   | 1.30        |
| Overall    | 46.92       |

>In Line 238, I wonder why you chose GRU to refine the trajectory. Does GRU have unique advantages on this issue? Response: We chose GRU for its effectiveness in handling temporal and spatial mixing. While both GRU and LSTM are suitable for this task, GRU offers a slight advantage in terms of computational efficiency. It has lower token-mixer complexity than LSTM, as it uses one less hidden state. This makes GRU faster while still providing comparable performance for our specific use case. We hope these clarifications and additional details address your concerns and improve the quality of our manuscript. Once again, we sincerely appreciate your valuable feedback, which has helped us enhance the clarity and completeness of our work. --- Rebuttal Comment 1.1: Comment: I have thoroughly reviewed the authors' response, and they have addressed my concerns. 
I also read their replies to other reviewers and noticed that some reviewers had the same doubts about the energy computation of this paper as I did, but my doubts about how to calculate energy consumption were resolved by the authors' second reply to reviewer EVdw. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you for your thorough review and for taking the time to read our responses to all reviewers. We are grateful for your increased rating of our paper and value your contribution to improving our work, and we appreciate your careful consideration of our explanations, especially regarding the energy computation.
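As a quick consistency check on the per-stage energy breakdown given earlier in this thread (values in mJ, taken directly from the authors' table):

```python
# Per-stage energy figures (mJ) from the authors' breakdown.
stages = {"Perception": 41.09, "Prediction": 4.53, "Planning": 1.30}
overall = 46.92

# 41.09 + 4.53 + 1.30 = 46.92, matching the reported overall figure.
total = round(sum(stages.values()), 2)
assert total == overall
print(f"total = {total} mJ")
```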
Summary: The authors adapted ST-P3 into a version with a binary spiking neuron and then stated that autonomous driving based on a spiking neural network can address the energy challenges. The authors stated that this neuromorphic technology can be a step toward sustainable and safety-critical automotive technology. Strengths: This work has completed the end-to-end SNN-based autonomous driving pipeline, which is a heavy job. The work has a clear architecture. This work is easy to follow. The idea of using SNN in driving technology is interesting. Weaknesses: The evaluation is not as effective as its reference work, ST-P3, which has both open-loop and closed-loop validation. The implementation plan on neuromorphic hardware is not clear in this paper, which I think is the most important aspect for energy-efficient applications. No comparison with recent state-of-the-art ANN models (later than ST-P3), as the necessity of autonomous driving using SNN is an open question. It is better to include more comparisons and further discussion. Technical Quality: 2 Clarity: 3 Questions for Authors: What is the advantage of the SNN solution from the perspective of a safety-critical application? As this work is oriented towards this specific application, it is better to clearly state the advantages. Do you have specific safety considerations in the design of this model? Do you include any related work on SNN robustness in this paper? Dual pathways and other architecture designs manifest a problem of mapping on neuromorphic hardware. I suspect that SGRU is as energy efficient as simple LIF. Eqs. 6, 7, 8, and 9 imply heavy matrix computation. Can you provide an explanation for this? ANN seems more robust than SNN (Figs. 5, 7). In the figures of ANN outputs, are the vehicle markers clearer and more accurate? What do you think causes SNN to produce this result? Do you have any solutions to this? Can you include experiments on this to address this problem? 
(This question is important.) Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: See questions and weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough examination of our work and their insightful comments. We acknowledge the challenges highlighted in your review and appreciate the opportunity to address them. In the following responses, we aim to clarify our contributions, explain our methodological choices, and discuss the implications of our findings for the field. > What is the advantage of the SNN solution from the perspective of a safety-critical application? As this work is oriented towards this specific application, it is better to clearly state the advantages. Do you have specific safety considerations in the design of this model? Do you include any related work on SNN robustness in this paper? Response: Thank you for your question. We'd like to clarify our focus and address your points: The primary aim of our work is to demonstrate the potential of SNNs to handle the complex requirements of low-power autonomous driving. Our current research is centered on establishing the feasibility and efficiency of SNN-based models for this application. To our knowledge, there is ongoing research into robust SNNs [R1], demonstrating their potential in safety-critical applications. This existing work provides strong evidence for the viability of SNNs in scenarios where safety is paramount, such as autonomous driving. Prior to this work, the application of SNNs to autonomous vehicle planning had not been achieved, and so integrating robustness, interpretability, and safety-critical features is a logical next step beyond efficiency. Moreover, we appreciate your suggestion about including related work on SNN robustness. This is a valuable addition that we plan to incorporate in the final version of our paper. > Dual pathways and other architecture designs manifest a problem of mapping on neuromorphic hardware. I suspect that SGRU is as energy efficient as simple LIF. Eqs. 6, 7, 8, and 9 imply heavy matrix computation. Can you provide an explanation for this? 
Response: Thank you for highlighting this point. The dual pathways and other complex architectural designs present challenges when mapping onto neuromorphic hardware. However, it's worth noting that designs similar to ResNet have been extensively explored in the field of Spiking Neural Networks (SNNs) and successfully implemented in hardware. This suggests that dual-pathway architectures are quite straightforward to realize in neuromorphic systems. Given that most modern neuromorphic hardware utilizes many smaller cores, parallel layers would be executed in separate cores, with their results merged in another core - quite similarly to residual/skip connections. Regarding the energy efficiency of SGRU compared to simple LIF neurons, your intuition may be correct. The key lies in understanding the nature of the computations in Equations 6, 7, 8, and 9. While these equations might initially appear to involve heavy matrix computations, the actual implementation in SGRU is more efficient due to the binary nature of the signals. In SGRU, the input $x_t \in \{0,1\}$ represents spikes. This binary representation simplifies the computations $W_{ir}x_t$, $W_{iz}x_t$, and $W_{in}x_t$. Similarly, since $h_t = (1 - z_t) \odot n_t + z_t \odot h_{t-1}$, where $z_t \in \{0,1\}$ and $n_t \in \{0,1\}$, it follows that $h_t$ is also binary. Consequently, the terms $W_{hr}h_{t-1}$, $W_{hz}h_{t-1}$, and $W_{hn}h_{t-1}$ become spike-driven operations. --- Rebuttal 2: Title: Extended Rebuttal Comment: > ANN seems more robust than SNN (Figs. 5, 7). In the figures of ANN outputs, are the vehicle markers clearer and more accurate? What do you think SNN will do to lead to this result? Do you have any solutions to this? Can you include experiments on this to address this problem? (This question is important.) Response: Thank you for this important observation. 
You're correct that the ANN outputs appear more robust than the SNN outputs in Figures 5 and 7, with clearer and potentially more accurate vehicle markers. This difference highlights a crucial challenge in the field of neuromorphic computing. The primary reason for this disparity lies in the fundamental nature of SNNs. While SNNs offer significant advantages in terms of energy efficiency due to their discrete, spike-based processing, this same characteristic can lead to information loss during training and inference. This trade-off between efficiency and performance is a well-known challenge in the SNN domain. The discretization of information into spikes, while beneficial for efficiency, can result in a reduction of fine-grained details, potentially leading to less clear outputs compared to traditional ANNs. This performance gap is something the entire field is working to overcome. However, it's important to note that our work represents a significant step forward in applying SNNs to complex, real-world tasks like autonomous driving. We've made substantial efforts to bridge this performance gap: 1. We've developed novel architectural approaches that are specifically tailored to maintain spike-driven processing while improving performance in complex tasks. 2. Our training methods have been carefully designed to maximize the information carried by spikes, helping to mitigate some of the inherent limitations of spike-based computation. 3. We've introduced innovative components, such as our unique Spiking Token Mixer, which have allowed us to achieve competitive performance without relying on traditional ANN elements. These efforts have made it possible to apply SNNs to autonomous driving tasks, which was previously considered extremely challenging. While there's still room for improvement, our work demonstrates that SNNs can be viable for such complex applications. 
Regarding robustness, you've highlighted an important point that, while not the primary focus of this paper, is crucial for the practical application of SNNs in safety-critical systems like autonomous driving. Improving SNN robustness is indeed a vital direction for future research. There are several promising approaches to enhance SNN robustness, such as developing more advanced neuron models, exploring different surrogate gradient methods, or applying adversarial training techniques. These methods could potentially be integrated with our current work to further improve performance and reliability. In conclusion, while the current performance gap between SNNs and ANNs is evident in our results, our work represents a significant advancement in applying SNNs to complex, real-world tasks. We've laid a foundation that future research can build upon, not only to further improve performance but also to address critical aspects like robustness. This opens up exciting possibilities for the future of energy-efficient, neuromorphic computing in autonomous driving and other demanding applications. --- [R1] Ding J, Yu Z, Huang T, et al. Enhancing the robustness of spiking neural networks with stochastic gating mechanisms [AAAI] --- Rebuttal 3: Comment: Dear Reviewer j4mM, As we near the conclusion of the discussion period, we wanted to extend our sincere gratitude for your valuable feedback. We would be most appreciative if you could let us know whether our responses have resolved your concerns or if there are any areas where you feel further elaboration would be beneficial. --- Rebuttal 4: Comment: Dear Reviewer j4mM, This is a kind reminder that today is the last day of the author-reviewer discussion period. If you have any concerns, please let us know as soon as possible so that we can address them.
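As a supplementary note to the SGRU efficiency discussion in our first response above, the following minimal sketch (hypothetical layer sizes, not our actual implementation) shows why a binary spike input reduces a product such as $W_{ir}x_t$ to pure accumulation: multiplying by 0/1 entries amounts to summing the weight columns where a spike occurred.

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_hid = 16, 8
W_ir = rng.normal(size=(d_hid, d_in))  # hypothetical SGRU reset-gate weights

x_t = (rng.random(d_in) < 0.3).astype(np.uint8)  # binary spike input

# Dense MAC view: multiply-accumulate over all inputs.
mac = W_ir @ x_t.astype(float)

# Spike-driven view: no multiplications -- just accumulate the columns
# of W_ir at the positions where a spike occurred.
acc = W_ir[:, x_t == 1].sum(axis=1)

assert np.allclose(mac, acc)
```

The same argument applies to $W_{hr}h_{t-1}$ and the other gate products once $h_{t-1}$ is binary, which is the basis of the spike-driven operation claim above.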
Summary: This paper presents an end-to-end SNN model for autonomous driving to address the energy challenges. This model consists of three main modules: perception, prediction, and planning. The model is evaluated on the nuScenes dataset. Strengths: 1. This paper introduces the first SNN designed for end-to-end autonomous driving, integrating perception, prediction, and planning into a single model. 2. Well-written. Weaknesses: 1. The novelty is limited. The paper just applies the SNN to autonomous driving and does not provide any special designs. 2. Only one dataset is used. 3. The energy computation does not consider data movement. Technical Quality: 3 Clarity: 3 Questions for Authors: see weaknesses. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the time and effort the reviewer has dedicated to evaluating our work. Your feedback is valuable in helping us improve and clarify our research. >The novelty is limited. The paper just applies the SNN to autonomous driving and does not provide any special designs. Response: We appreciate the reviewer's comments but respectfully disagree with the assessment regarding novelty and the lack of special designs. Our research makes several significant contributions to the field: Firstly, unlike tasks such as object detection, semantic segmentation, and classification that have been previously addressed by SNNs, we tackle autonomous driving - a task widely recognized as highly complex in the visual domain. To our knowledge, this application of SNNs to autonomous driving is the first of its kind in the field. If there are existing papers on this topic, we kindly request the reviewer to point them out. Secondly, the claim that there are no special designs is inaccurate. The module described in Section 3.1, "Distinct Temporal Strategies for Encoder and Decoder," combines Sequential Alignment with Sequential Repetition, which is a novel approach in the SNN domain while maintaining a spike-driven architecture. Furthermore, the training process of the Spiking Token Mixer, a core component of our Encoder, is entirely unique. As detailed in Appendix B.1, we avoided using any attention or convolutional components, as well as MLP-Mixer type token mixers. Instead, our approach achieved a performance of 72.1% on ImageNet, surpassing other Spiking Transformers. Thirdly, the design of the Spiking GRU is an original contribution not previously proposed in other papers. Our ablation study demonstrates how our architecture outperforms alternative designs, some of which struggle to achieve meaningful performance. Contrary to the reviewer's assertion, we adhere to Occam's razor principle, seeking the most appropriate design rather than pursuing novelty for its own sake. 
We firmly believe that truly effective designs, not merely novel ones, are what advance the machine learning community. > Only one dataset is used. Response: We'd like to address this concern from several angles. Firstly, comparable datasets for end-to-end autonomous driving are scarce. Several prominent works in this field, such as [R1] and [R2], have also exclusively utilized the nuScenes dataset, highlighting its significance and broad acceptance in the research community. Moreover, it's not entirely accurate to say we only used one dataset. As detailed in Appendix B.1, we also tested the performance of the crucial part of our method on ImageNet to verify its robustness and generalizability. We believe that the combination of in-depth analysis on a specialized autonomous driving dataset and performance verification on a general, large-scale dataset provides a comprehensive evaluation of our method's effectiveness and potential for broader applications. >The energy computation does not consider data movement. Response: Thank you for your observation. You're correct that our energy computation does not consider data movement. This approach is consistent with standard practice in SNN power estimation. Typically, these estimations focus on computation using FLOPS [R1][R2][R3]. It's important to note that this is a common limitation in the SNN field as a whole. However, the field is evolving, and new SNN hardware based on in-memory computing is emerging [R4]. This development is part of the broader trend in neuromorphic hardware design, which aims to address energy efficiency issues, including those related to data movement. In conclusion, we would like to emphasize that our work represents a novel and significant contribution to the field of SNNs and autonomous driving. We have introduced unique architectural designs, applied SNNs to a complex real-world problem, and provided comprehensive evaluations on both specialized and general datasets. 
While we acknowledge that there is always room for improvement and further exploration, we believe our research pushes the boundaries of what is possible with SNNs in autonomous driving applications. We hope that our clarifications have addressed the reviewer's concerns and demonstrated the value and novelty of our work. We remain open to further discussion and are committed to advancing this important area of research. --- [R1]: Zhou, Zhaokun, et al. "Spikformer: When spiking neural network meets transformer." arXiv preprint arXiv:2209.15425 (2022). [R2]: Yao, Man, et al. "Spike-driven transformer." Advances in Neural Information Processing Systems 36 (2024). [R3]: Zhu, Rui-Jie, et al. "SpikeGPT: Generative pre-trained language model with spiking neural networks." TMLR 2024. [R4]: El Arrassi, Asmae, et al. "Energy-efficient SNN implementation using RRAM-based computation in-memory (CIM)." 2022 IFIP/IEEE 30th International Conference on Very Large Scale Integration (VLSI-SoC). IEEE, 2022. --- Rebuttal Comment 1.1: Title: Thanks for your response Comment: Thanks for your response. My second concern has been well addressed. However, I still feel the novelty is limited and the energy computation is biased. I have raised my score. I hope the authors could give a more detailed analysis of the energy computation. --- Rebuttal 2: Comment: Thank you for considering our rebuttal and raising the score. **Regarding energy consumption, we have provided a detailed explanation in Appendix F**. We'd like to further elaborate on two main aspects of energy consumption: ### **Memory Access** Memory access is critical for model inference and training latency. Techniques like Flash Attention [R1] use fused kernels to reduce memory access frequency, significantly improving the efficiency of attention kernels. However, memory access is not directly proportional to energy consumption; it's also related to the number of synaptic operations (SOPs) [R2] when running inference on CPUs and GPUs. 
SOPs refer to the number of operations performed in neural networks. As we mentioned in the rebuttal, in neuromorphic hardware, a new paradigm called in-memory computing is emerging. Designs like IBM's NorthPole [R3] and Intel's Loihi [R4] partially adopt this concept. In these systems, synaptic weights between neurons are determined by synapse strength, greatly reducing latency and energy consumption while fully utilizing sparsity. ### **Compute Energy** Unlike memory access, compute energy is highly correlated with the SOPs in neural networks, even though it's not the primary bottleneck for training/inference latency. As shown in [R5], the number of neural network SOPs is almost directly proportional to energy consumption. SNNs gain energy efficiency mainly through two aspects: **a) Binary activations** In ANNs, the SOPs primarily involve MAC (Multiply-Accumulate) operations; but in SNNs, all MAC operations can be performed without multiplication, using only addition. This can be represented mathematically as: $\text{ANN: } y = \sum_{i} w_i x_i$, where $x_i \in \mathbb{R}$; $\text{SNN: } y = \sum_{i} w_i x_i$, where $x_i \in \{0,1\}$. **b) Sparsity** The binary nature of activations results in many zeros, which neuromorphic chips can exploit to create event-driven sparsity. These two factors—addition-only operations and sparsity—form the foundation of SNNs' superior energy efficiency. In modern neuromorphic chips like Loihi 2 [R4] and Speck [R6], sparsity is the primary factor in reducing computational energy. They leverage the graph-like nature of neural networks, constructing each neuron as a router rather than representing synapses as matrices, as in traditional GPUs. ### **Energy Consumption Calculation** To quantify the energy efficiency of our SNN architecture, we calculate the theoretical energy consumption using the following methodology: 1. 
Calculate Synaptic Operations (SOPs) for each block: $$\operatorname{SOPs}(l) = fr \times T \times \operatorname{FLOPs}(l)$$ where: - $l$ is the block number - $fr$ is the input spike train firing rate - $T$ is the neuron time step - $\operatorname{FLOPs}(l)$ are the floating-point operations in the block 2. Compute SNN energy consumption: $$E_{SNN} = E_{MAC} \times \mathrm{FLOP}^1_{\mathrm{SNN\,Conv}} + E_{AC} \times \left(\sum_{n=2}^N \mathrm{SOP}^n_{\mathrm{SNN\,Conv}} + \sum_{m=1}^M \mathrm{SOP}^m_{\mathrm{SNN\,FC}}\right)$$ where: - $E_{MAC} = 4.6\,\text{pJ}$ (MAC operation energy cost) - $E_{AC} = 0.9\,\text{pJ}$ (AC operation energy cost) - $N$ and $M$ are the number of Conv and FC layers, respectively - The first Conv layer uses direct encoding, employing MAC operations, so FLOPs are used for its energy calculation. 3. For comparison, ANN energy consumption: $$E_{ANN} = E_{MAC} \times \mathrm{FLOP}_{\mathrm{ANN}}$$ This approach allows us to directly compare the energy efficiency of our SNN with ANN models. We hope this explanation provides a more comprehensive justification for our energy consumption calculations. Thank you once again for your constructive comments and for giving us the opportunity to improve our paper. --- [R1]: Dao, Tri, et al. "FlashAttention: Fast and memory-efficient exact attention with IO-awareness." Advances in Neural Information Processing Systems 35 (2022). [R2]: Tripp, Charles Edison, et al. "Measuring the Energy Consumption and Efficiency of Deep Neural Networks: An Empirical Analysis and Design Recommendations." arXiv:2403.08151. [R3]: Modha, Dharmendra S., et al. "Neural inference at the frontier of energy, space, and time." Science 2023. [R4]: Orchard, Garrick, et al. "Efficient neuromorphic signal processing with Loihi 2." 2021 IEEE Workshop on SiPS. [R5]: Lahmer, Seyyidahmed, et al. "Energy consumption of neural networks on NVIDIA edge boards: an empirical model." IEEE, 2022. [R6]: Richter, Ole, et al. 
"Speck: A smart event-based vision sensor with a low latency 327k neuron convolutional neuronal network processing pipeline." (2024).
Dataset source: NeurIPS_2024_submissions_huggingface (conference year 2024)
Summary: This paper presents a Spiking Neural Network to reduce the energy consumption of normal neural networks in autonomous driving. The network has an end-to-end architecture including perception, prediction, and planning. It evaluates the model using the nuScenes dataset and achieves comparable performance in the three modules while drawing much lower energy consumption. Strengths: The authors propose a good architecture and describe all of the modules in detail: perception, prediction, and planning. I enjoyed the reading and found the detailed description of the architecture very good, especially the results, visualization, and ablation study. In general, I think this stands as a good paper. The reason I rate it at 5 is that I am not familiar with the work on energy consumption using spiking neural networks. Weaknesses: I do not write this due to not being familiar with the work. Technical Quality: 3 Clarity: 3 Questions for Authors: - Do the authors know if any self-driving company uses spiking neural networks in their real operation? This can justify the impact of this work. - Energy is saved, but how about memory and inference time? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I think the work addresses the proposed question about reducing energy consumption. Besides, I do not have expert knowledge in spiking neural networks, thus I shall refrain from talking about limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful questions regarding the practical implementation of SNN in self-driving systems. While it's true that SNNs are not yet widely deployed in commercial self-driving operations, our research aims to pave the way for their future adoption by demonstrating their viability and potential advantages in this domain. > Do the author know if any self-driving company use spiking neural network in their real operation? This can justify the impact of this work. Response: While there are several companies exploring the use of SNNs in autonomous driving, the exact information on their progress is currently not in the public domain. To the best of our knowledge, this is the first work to use an SNN to perform complex motion planning given multi-camera input. Mercedes Benz has begun [implementing neuromorphic chips](https://www.eetimes.com/mercedes-applies-neuromorphic-computing-in-ev-concept-car/) in its concept car, EQXX, specifically for voice detection. Similarly, [BMW has announced plans](https://www.greaterzuricharea.com/en/news/synsense-and-bmw-exploring-intelligent-cockpits) to use neuromorphic chips in their vehicles. However, we've observed that in most cases, these chips are being used to solve relatively simple tasks. This aligns with the limitations of SNNs that we discussed in our paper, where they are often perceived as unable to solve complex problems. Secondly, Intel's simultaneous development of both the Loihi 2 neuromorphic chip and Mobileye's self-driving processors presents a unique opportunity for integrating SNNs into autonomous driving systems. This combination of technologies within the same company creates a potential pathway for implementing SNN-based self-driving models on neuromorphic hardware. Lastly, the primary advantage of deploying SNN models on neuromorphic chips like Loihi 2 for self-driving applications would be significantly reduced power consumption. 
This is crucial for electric and autonomous vehicles, where energy efficiency directly impacts range and overall performance. Therefore, while not yet widely implemented, the potential benefits of SNNs in self-driving technology make it a promising area for future development and application. > Energy is saved but how about memory, inference time? Response: You raise an important point about memory usage and inference time. In our current stage of research, we've primarily validated the performance of our SNN-based self-driving model on standard hardware like GPUs. At this point, SNNs don't show significant advantages in terms of memory usage or inference time compared to traditional neural networks on these platforms. However, the real potential of SNNs for self-driving applications is in their deployment on neuromorphic hardware such as Intel's Loihi 2 chip. The asynchronous data processing capabilities of Loihi 2 are expected to greatly enhance efficiency across multiple dimensions, including memory usage and inference time. It's worth noting that previous work has already demonstrated the feasibility and benefits of implementing self-driving tasks on neuromorphic hardware. For instance, [R1] successfully applied SNNs to a self-driving classification task on the Loihi chip. While this was a specific subtask of autonomous driving, it provided strong evidence that SNNs can indeed work effectively on neuromorphic hardware for automotive applications. In future stages of our research, we plan to implement our model on Loihi 2 or a similar neuromorphic platform. This transition from standard GPUs to specialized neuromorphic hardware is expected to unlock the full potential of SNNs for self-driving applications, addressing current limitations in memory usage and inference time while maintaining the energy efficiency advantages. We appreciate the reviewer's thoughtful questions and comments. 
Your feedback has allowed us to clarify important aspects of our work and its potential impact on the field of autonomous driving. --- [R1]: Viale A, Marchisio A, Martina M, et al. CarSNN: An efficient spiking neural network for event-based autonomous cars on the Loihi neuromorphic research processor, 2021 IJCNN. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you, authors, for providing a detailed response. I am satisfied with your answers. This is an important piece of work to reduce the energy consumption of autonomous driving, and I believe it is quite novel, as this is the first time I have read about an energy-efficient architecture using SNNs. I keep my score at 5 due to my low confidence in the work. I appreciate the authors' responses to the other reviewers and hope they can improve the overall quality of the paper regardless of the decision. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you for your kind words and valuable feedback. We are deeply grateful for your response, which motivates us to continue our research in energy-efficient SNN architectures for autonomous driving.
Summary: This paper introduces a unified Spiking Autonomous Driving (SAD) system based on Spiking Neural Networks (SNNs). The system is trained end-to-end and comprises three main modules: a perception module that processes inputs from multi-view cameras and constructs spatiotemporal bird's-eye views; a prediction module that forecasts future states using a novel dual-pathway approach; and a planning module that generates safe trajectories based on the predictions. The authors evaluated SAD on the nuScenes dataset, where it demonstrated competitive performance in perception, prediction, and planning tasks. By leveraging its event-driven and energy-efficient nature, SAD effectively addresses the energy challenges faced by autonomous driving systems. Strengths: 1. This paper constructs the first end-to-end spiking neural network designed specifically for autonomous driving. By integrating perception, prediction, and planning into a single network, it enhances efficiency and reduces the complexity associated with managing these components separately. 2. The experimental results are comprehensive and persuasive, with a thorough comparison with other works, demonstrating the significant potential of SNNs in complex real-world applications. Weaknesses: 1. Energy Consumption Analysis: The main motivation for implementing autonomous driving tasks with SNNs in this paper is to enhance energy efficiency, with a corresponding energy efficiency analysis provided in Appendix F and Table 3. Nonetheless, I have reservations about the energy consumption calculations presented for the following reasons: - Since both ANNs and SNNs process the same input data (with SNNs employing direct encoding at the first layer), the difference in energy efficiency can only stem from differences in memory access or the energy consumption of MAC/AC operations. This paper only offers a quantitative comparison of energy consumption during computation (L748-L768). 
- It is well-known that energy consumption is primarily determined by memory access rather than FLOPS or IPS [1r], yet this aspect is not discussed in the paper, nor is it considered in the energy consumption calculations. This should at least be acknowledged, as it is a common shortcoming in publications claiming reduced energy consumption. Could the authors comment on how memory access impacts the energy consumption of SNNs? - ANN networks typically exhibit similarly sparse ReLU activations. Previous research has shown that even on general-purpose CPU hardware, the sparsity of ReLU can be utilized to accelerate (and reduce energy consumption of) inference [2r]. How would the comparison results between SNNs and ANNs change if ANN sparsity were considered? [1r]: An Analytical Estimation of Spiking Neural Networks Energy Efficiency. ICONIP 2022 [2r]: Inducing and Exploiting Activation Sparsity for Fast Neural Network Inference. ICML 2020 2. Inference Speed: In comparisons with six ANN methods, SNN performance only surpassed half of the existing methods. Assuming SNNs have lower energy consumption, we would consider using SNNs for autonomous driving tasks. What about their inference speed in practical deployments? Autonomous driving tasks rely on high dynamic resolution and timely responsiveness, such as detecting and avoiding obstacles on highways. Technical Quality: 3 Clarity: 2 Questions for Authors: Referencing the Weakness section, I recognize that some evaluation methods are commonly used within the SNN community. However, due to differences in task formulations, explanations and clarifications are necessary. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
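The operation-count energy comparison this weakness refers to (MAC vs. AC energies, computation only, ignoring memory access) can be sketched as follows. This is an illustrative model, not the paper's calculation: the per-operation energies are commonly cited 45 nm process estimates, and the function names, spike rate, and timestep count are assumptions.

```python
# Illustrative operation-count energy model (computation only; as the
# reviewer notes, memory access is ignored here and often dominates).
# Per-op energies are commonly cited 45 nm estimates, not paper values.
E_MAC = 4.6e-12  # joules per multiply-accumulate (ANN)
E_AC = 0.9e-12   # joules per accumulate (SNN, binary spikes)

def ann_energy(macs):
    """Computation energy of an ANN layer given its MAC count."""
    return macs * E_MAC

def snn_energy(macs, spike_rate, timesteps):
    """Computation energy of an SNN layer: only spiking synapses
    trigger accumulates, repeated over the simulation timesteps."""
    return macs * spike_rate * timesteps * E_AC

# Toy comparison for one layer with 1e9 synaptic operations:
ann_j = ann_energy(1e9)
snn_j = snn_energy(1e9, spike_rate=0.15, timesteps=4)
ratio = ann_j / snn_j
```

Under these assumed numbers the SNN comes out ahead, but note the model's blind spot raised by the reviewer: a memory-access term would have to be added to both sides for a fair comparison.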
Rebuttal 1: Rebuttal: We sincerely appreciate the thoughtful comments and questions raised by the reviewers. Your feedback has provided us with valuable insights that will help improve our manuscript. We are grateful for the opportunity to address these points and clarify certain aspects of our work. In the following responses, we aim to address each of your concerns comprehensively while highlighting the strengths and potential impact of our research on SNNs for autonomous driving tasks. > Since both ANNs and SNNs process the same input data (with SNNs employing direct encoding at the first layer), the difference in energy efficiency can only stem from differences in memory access or the energy consumption of MAC/AC operations. This paper only offers a quantitative comparison of energy consumption during computation (L748-L768). It is well-known that energy consumption is primarily determined by memory access rather than FLOPS or IPS [1r], yet this aspect is not discussed in the paper, nor is it considered in the energy consumption calculations. This should at least be acknowledged, as it is a common shortcoming in publications claiming reduced energy consumption. Could the authors comment on how memory access impacts the energy consumption of SNNs? Response: Thank you for your insightful question. We acknowledge that this issue indeed exists. The field of SNNs has not specifically focused on memory access, despite the fact that data movement is indeed the primary bottleneck in modern deep learning. We will certainly address this point in our revised manuscript. SNNs utilize binary activations, which naturally lead to activation sparsity. This sparsity can be leveraged to reduce memory access frequency and enable more efficient inference, potentially allowing for partial offloading to CPU DRAM [R1]. Additionally, the lower precision of activations in SNNs contributes to reduced memory bandwidth requirements. 
Furthermore, neuromorphic hardware inspired by in-memory computing concepts, such as RRAM-based architectures, shows promise in eliminating traditional memory access costs altogether when paired with SNNs [R2]. These factors combined contribute significantly to the overall energy efficiency of SNN implementations, beyond what is captured by comparing only the MAC/AC operations. Collectively, these characteristics of SNNs and their potential hardware implementations suggest that the energy efficiency gains may be even more substantial when considering memory access, though we agree that this aspect warrants more thorough investigation and quantification in future research. > ANN networks typically exhibit similarly sparse ReLU activations. Previous research has shown that even on general-purpose CPU hardware, the sparsity of ReLU can be utilized to accelerate (and reduce energy consumption of) inference [2r]. How would the comparison results between SNNs and ANNs change if ANN sparsity were considered? Response: Thanks for your question. In our examination of ANNs, we focused on ST-P3, the state-of-the-art ANN for these tasks. We found that the main challenge in utilizing sparsity across the entire network stems from the varied use of activation functions. Specifically, only the encoder and decoder head employ ReLU, which exhibits sparsity. Other crucial components such as parts of the decoder, temporal module, and planning module use different activation functions like tanh (for GRU) and GeLU (for partial decoder blocks). These functions are not inherently sparse, resulting in a model that is more dense than sparse overall. This heterogeneity in activation functions makes it difficult to fully leverage the potential benefits of sparsity throughout the ANN. In contrast, our SNN model utilizes sparse and binary activations across all layers, enabling us to fully exploit both the sparsity and binary nature of the network. 
This comprehensive approach to sparsity in SNNs provides a more consistent basis for comparison and potential performance advantages. > Inference Speed: In comparisons with six ANN methods, SNN performance only surpassed half of the existing methods. Assuming SNNs have lower energy consumption, we would consider using SNNs for autonomous driving tasks. What about their inference speed in practical deployments? Autonomous driving tasks rely on high dynamic resolution and timely responsiveness, such as detecting and avoiding obstacles on highways. Response: Thank you for your question. As stated in our paper, our primary goal is to demonstrate the potential of SNNs to handle the complex requirements of low-power autonomous driving. Neuromorphic chips can achieve extremely low latency (about 200 times less) compared to conventional chips, while also maintaining low power consumption. For example, Speck [R3] can achieve <0.1 ms latency when implementing SNN on-chip, compared to 24.7 ms latency when not implemented on-chip. Intel's Loihi chip also shows superior latency when executing similar tasks [R4]. We plan to implement our algorithm on these chips to fully utilize their capabilities. In conclusion, we thank the reviewers for their insightful comments, which have allowed us to provide a more comprehensive view of our work. --- [R1]:Song, Yixin, et al. "Powerinfer: Fast large language model serving with a consumer-grade gpu." arXiv preprint arXiv:2312.12456 (2023). [R2]:El Arrassi, Asmae, et al. "Energy-efficient SNN implementation using RRAM-based computation in-memory (CIM)." 2022 IFIP/IEEE 30th International Conference on Very Large Scale Integration (VLSI-SoC). IEEE, 2022. [R3]: Yao M, Richter O, Zhao G, et al. Spike-based dynamic computing with asynchronous sensing-computing neuromorphic chip. Nature Communications, 2024. [R4]: Viale A, Marchisio A, Martina M, et al. 
Carsnn: An efficient spiking neural network for event-based autonomous cars on the loihi neuromorphic research processor, 2021 IJCNN. --- Rebuttal Comment 1.1: Comment: Dear Reviewer x6bz, As the discussion period is drawing to a close, we wanted to reach out and kindly inquire if our rebuttal has adequately addressed the concerns you raised. We would be grateful for any feedback on whether our responses have clarified the points in question or if you have any additional queries we can assist with.
Robust Guided Diffusion for Offline Black-box Optimization
Reject
Summary: The paper proposes RGD, a novel method for integrating classifier guidance into classifier-free-guidance diffusion models for solving offline MBO problems. Experiment results and ablation studies validate that the method outperforms state-of-the-art baselines and that each proposed component is reasonable. Strengths: - The idea is intuitive and easy to follow - The motivating example in the introduction makes it easy for the reader to understand the limitations of the prior method and the advantages of the proposed method - Strong experiment results and detailed ablation studies make the proposed method more convincing Weaknesses: - For the diffusion-based proxy refinement part, it seems that there are several estimations to compute the distance between $p_{\phi}(y\vert \hat{x})$ and $p_{\theta}(y\vert \hat{x})$. Furthermore, it incurs an additional hyperparameter $\alpha$, which should be carefully tuned. Technical Quality: 4 Clarity: 3 Questions for Authors: - For the diffusion-based proxy refinement part, have the authors tried other approaches for estimating $p(y)$? I wonder if the choice of estimation affects the performance significantly. - To compute the regularization loss term in Eq (15), we need to collect samples from the adversarial distribution. I cannot find the detailed procedure for collecting adversarial samples ($M$, $\eta$, ...). Could the authors elaborate more on that part? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: There are a few minor comments on the manuscript. - In figure 2, it seems that $\tilde{s}(x_T, y, \omega)$ should be written as $\tilde{s}(x_T, y, \hat{\omega})$. Furthermore, at first it confused me that RGD conducts classifier guidance; however, that misleading impression was resolved after reading the manuscript. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
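For context on the guided score $\tilde{s}(x_T, y, \hat{\omega})$ mentioned in the figure comment: assuming RGD's guidance takes the standard classifier-free-guidance form of Ho and Salimans (this exact combination and all names below are assumptions for illustration, not taken from the paper), it can be sketched as:

```python
import numpy as np

def guided_score(s_cond, s_uncond, omega):
    """One common classifier-free-guidance combination: blend the
    conditional and unconditional scores with strength omega.
    omega = 0 recovers the conditional score; larger omega pushes
    samples further toward the condition y."""
    return (1 + omega) * s_cond - omega * s_uncond

# Stand-in score values at some x_t (illustrative, not a real model):
s_cond = np.array([1.0, -0.5])
s_uncond = np.array([0.2, 0.1])

s_tilde = guided_score(s_cond, s_uncond, omega=2.0)
```

The review's point is then a notation issue only: once the strength is learned or adapted, it should be written $\hat{\omega}$ rather than $\omega$.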
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you very much for your thorough and constructive feedback. Your insights are immensely valuable and provide essential guidance as we seek to enhance the quality and clarity of our manuscript. We truly appreciate the time and effort you have invested in reviewing our work, and we are committed to carefully considering and incorporating your suggestions in our revisions. ## Weaknesses: > For the diffusion-based proxy refinement part, it seems that there are several estimations to compute the distance between $p_{\phi}(y\vert \hat{x})$ and $p_{\theta}(y\vert \hat{x})$. Furthermore, it incurs an additional hyperparameter $\alpha$, which should be carefully tuned. Yes, the diffusion-based proxy refinement involves three computational estimates: $p(x)$, $p(x|y)$, and $p(y)$. We compute $p(x)$ and $p(x|y)$ using the learned SDE, as detailed in https://anonymous.4open.science/r/RGD-7DBB/likelihood.py, and estimate $p(y)$ using Gaussian kernel-density estimation. These methods are aligned with common practices in the field. Regarding the hyperparameter $\alpha$, it is not manually tuned but is instead optimized based on the validation loss, as detailed in Appendix B. > For the diffusion-based proxy refinement part, have the authors tried other approaches for estimating $p(y)$? In addition to the Gaussian kernel-density estimation discussed in our paper, we also experimented with Gaussian Mixture Models (GMM) for estimating $p(y)$. The distribution $p(y)$ estimated by GMM was quite similar to that obtained using Gaussian kernel-density estimation. When we incorporated the GMM-based $p(y)$ into the diffusion-based proxy refinement module, the results remained consistent, with a performance of 0.968 using the original method and 0.964 with GMM on the Ant task. This similarity in outcomes underscores the robustness of our estimator choice. We will incorporate this discussion in Section 4.5 Ablation Studies. 
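The Gaussian kernel-density estimate of $p(y)$ described in the response can be sketched in plain NumPy. This is a minimal illustration, not the authors' implementation: the function name and the use of Silverman's bandwidth rule are assumptions.

```python
import numpy as np

def gaussian_kde_pdf(y_train, y_query, bandwidth=None):
    """Gaussian kernel-density estimate of p(y) at the query points.
    Falls back to Silverman's rule for the bandwidth (an assumed
    default, not necessarily the paper's choice)."""
    y_train = np.asarray(y_train, dtype=float)
    n = y_train.size
    if bandwidth is None:
        bandwidth = 1.06 * y_train.std() * n ** (-1 / 5)
    # Pairwise standardized distances, shape (num_queries, n):
    diffs = (np.asarray(y_query, dtype=float)[:, None] - y_train[None, :]) / bandwidth
    kernel = np.exp(-0.5 * diffs**2) / np.sqrt(2 * np.pi)
    # Average the kernels and rescale by the bandwidth:
    return kernel.mean(axis=1) / bandwidth

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=5000)   # stand-in property values y
density = gaussian_kde_pdf(samples, np.array([0.0, 3.0]))
```

With standard-normal samples, the estimate near the mode should be close to $1/\sqrt{2\pi} \approx 0.399$ and much smaller in the tail, which is what the two query points check.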
> To compute the regularization loss term in Eq (15), we need to collect samples from the adversarial distribution. I cannot find the detailed procedure for collecting adversarial samples. Could the authors elaborate more on that part? Thank you for your inquiry about collecting samples from the adversarial distribution. We employ gradient ascent to generate these samples. For a comprehensive explanation of this process, please refer to the "Adversarial Sample Identification" section in the global response. ## Limitations > Notation s(x, y, $\hat{\omega}$) and s(x, y, $\omega$) Thank you for pointing out the notation inconsistency. Strictly speaking, we should use $s(x, y, \hat{\omega})$ instead of $s(x, y, \omega)$. We opted to use $s(x, y, \omega)$ in the paper as the symbol $\hat{\omega}$ had not been introduced at that point. > Furthermore, at first it confused me that RGD conducts classifier guidance; however, that misleading impression was resolved after reading the manuscript. Regarding your initial confusion about classifier guidance in RGD, we appreciate your feedback. To clarify, we will insert the sentence "Our framework is based on proxy-free diffusion" at Line 57 to better communicate this aspect from the outset. ## Overall Does our response resolve your concerns? We value your detailed feedback and look forward to more discussions in the rebuttal phase. Thank you for your contributions. --- Rebuttal 2: Comment: Thank you for your detailed feedback; I keep my positive rating. There are some minor comments. > For the diffusion-based proxy refinement part, have the authors tried other approaches for estimating $p(y)$? As the authors say, ablation studies on the choice of the estimator for $p(y)$ strengthen the robustness claim for the proposed method. I also recommend that the authors conduct the ablation study across at least two tasks to support the claim. --- Rebuttal 3: Title: Thanks for your prompt feedback and continued support. 
Comment: Thank you for your prompt feedback and continued support. To further assess the robustness of our method against the choice of p(y), in addition to the Ant task, we have conducted experiments on the TFB8 and TFB10 tasks. The performance was 0.974 with the original method and 0.975 with GMM on the TFB8 task, and it was 0.694 with the original method and 0.692 with GMM on the TFB10 task. These consistent results reinforce the robustness of our estimator choice. We will ensure that all discussions are meticulously incorporated into the final manuscript, as suggested.
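The gradient-ascent collection of adversarial samples referenced in the rebuttal (with step count $M$ and step size $\eta$, whose actual values are in the paper's global response) can be sketched on a toy differentiable proxy. Everything below is an illustrative stand-in: the quadratic proxy, the step count, and the step size are assumptions, not the paper's settings.

```python
import numpy as np

def proxy(x):
    """Toy differentiable proxy f(x) = -||x - 2||^2, peaked at x = 2
    (a stand-in for the learned surrogate, which is not specified here)."""
    return -np.sum((x - 2.0) ** 2)

def proxy_grad(x):
    """Analytic gradient of the toy proxy."""
    return -2.0 * (x - 2.0)

def collect_adversarial(x0, steps=50, eta=0.05):
    """Gradient ascent on the proxy: repeatedly move the design in the
    direction that increases the proxy's prediction. The endpoints serve
    as candidate adversarial (over-optimistic) samples for Eq (15)."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):  # steps ~ M, eta ~ step size (illustrative)
        x = x + eta * proxy_grad(x)
    return x

x_adv = collect_adversarial(np.zeros(3))
```

In practice the gradient would come from automatic differentiation of the trained proxy network rather than a closed form, but the ascent loop has the same shape.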
Summary: In this paper, the authors propose to combine both classifier guidance and classifier-free guidance for offline black-box optimization. In addition, the authors propose a Proxy Refinement procedure that minimizes the KL divergence between the proxy distribution and the diffusion distribution regarding $y$. Strengths: 1. The paper is well-written and well-organized. 2. The paper introduces several refinement procedures to boost the offline optimization performance. The proposed Diffusion-based Proxy Refinement procedure is interesting. Weaknesses: 1. **Technical contribution seems to be incremental** Employing diffusion models for offline black-box optimization is not new. The technical contribution of this paper seems to be incremental. The draft extends the paper "Diffusion Models for Black-Box Optimization" [1]. However, detailed discussions about the relationship between the proposed method and the paper [1] are missing. [1] Siddarth Krishnamoorthy, Satvik Mashkaria, and Aditya Grover. "Diffusion Models for Black-Box Optimization." ICML 2023. 2. **Part of the technical details are not clear.** (a) In Equation (12), the concrete computation procedure of $p_\theta (\hat{\boldsymbol{x}} | y)$ and $p_\theta (\hat{\boldsymbol{x}})$ via the diffusion model is not clear. (b) The derivation of Equation (10) is not given. It seems that Equation (10) is from the forward pass of the diffusion model. However, the forward pass (Eq.32-32 in [10]) concerns the distribution, whereas the concrete $\boldsymbol{x}_t$ is constructed via the backward pass with $s_\theta(\boldsymbol{x}_k,k)$ for $k \in \{T,\cdots, t+1\}$. In addition, how to choose $\mu(t)$ and $\sigma(t)$ in Equation (10) is not clear. 3. **The additional proxy training, sample refinement procedure and proxy refinement procedure increase the computation cost** The additional proxy training, sample refinement procedure and proxy refinement procedure increase the computation cost. 
However, the time comparison with baselines is missing. 4. **The additional proxy training, sample refinement procedure and proxy refinement procedure bring many additional hyperparameters, which may overfit the offline BBO task** In the offline BBO tasks, the offline dataset is provided. The evaluation is the black-box function value at the generated query at one time. The long-term convergence properties and exploration/exploitation balance are not considered. As a result, there are risks of overfitting the evaluation metric for the offline tasks. The paper introduces many additional hyperparameters, which increases the overfitting risk. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Please provide more discussion about the technical details in Weakness 2. 2. What is the computation time of the proposed method? Please provide time comparisons with baselines. 3. Please provide explanations about the overfitting issue. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Additional computation cost and overfitting risk may be additional limitations besides the limitations discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, We greatly appreciate your insightful feedback. Your suggestions are crucial for enhancing our manuscript, and we are dedicated to meticulously revising our work in accordance with your recommendations. ## Weakness > Technical contribution seems to be incremental. Employing diffusion models for offline black-box optimization is not new. The technical contribution of this paper seems to be incremental. The draft extends the paper "Diffusion Models for Black-Box Optimization" [1]. However, detailed discussions about the relationship between the proposed method and the paper [1] are missing. We acknowledge the contributions of DDOM in applying diffusion models to offline model-based optimization. As noted, we have initially discussed the relationship between RGD and DDOM from Line 349 to Line 351. To clarify further, we will add this discussion: "DDOM integrates diffusion models into offline model-based optimization without specifically addressing how to extrapolate from an existing offline dataset to obtain high-scoring samples—it relies solely on conditioning on the maximum value found in the static dataset. In contrast, our work introduces a proxy-enhanced sampling module that incorporates explicit guidance into the sampling process, enabling effective extrapolation. Furthermore, we have developed a diffusion-based proxy refinement module that leverages diffusion-specific priors to refine the proxy. This approach represents a novel advancement not previously explored in the literature." These additions will be integrated immediately following Line 349 to provide a detailed comparison and highlight the novel aspects of our methodology. > Part of the technical details are not clear. (a) We have trained a score function and then use the official library for score-based SDEs to compute the likelihood. Details are provided at https://anonymous.4open.science/r/RGD-7DBB/likelihood.py. 
The conditional probability $p(x|y)$ is calculated by inputting a specific $y$ label, while the unconditional probability $p(x)$ is computed using a zero label. (b) Equation (10) employs Tweedie's formula, which is used to transform noise into clean data. We will include the citation "Robbins H. E., 'An empirical Bayes approach to statistics', in Breakthroughs in Statistics: Foundations and basic theory, Springer New York, 1992, pp. 388-394." in the manuscript to provide a reference for this formula. For the settings of $\mu$ and $\sigma$, we adhere to the setting provided in Appendix C, "SDES IN THE WILD," from the paper "SCORE-BASED GENERATIVE MODELING THROUGH STOCHASTIC DIFFERENTIAL EQUATIONS." > The additional proxy training, sample refinement procedure and proxy refinement procedure increase the computation cost. However, the time comparison with baselines is missing. We have detailed the computational costs in Appendix D. The expenses are justified, considering the performance improvements and the typically high costs of real-world experiments. > The additional proxy training, sample refinement procedure and proxy refinement procedure bring many additional hyperparameters, which may overfit the offline BBO task. In the offline BBO tasks, the offline dataset is provided. The evaluation is the black-box function value at the generated query at one time. The long-term convergence properties and exploration/exploitation balance are not considered. As a result, there are risks that overfit the evaluation metric for the offline tasks. The paper Introduces lots of additional hyperparameters, which increases the overfitting risks. We acknowledge the introduction of additional hyperparameters in our approach. However, it's important to note that in the offline BBO tasks we address, access to the oracle black-box function is unavailable, precluding the direct exploration-exploitation considerations. 
Furthermore, we do not use the black-box oracle function to fine-tune these hyperparameters, thereby avoiding overfitting. For instance, the hyperparameter $\alpha$ is solely adjusted using the validation set included in the offline dataset. This approach ensures that there is no overfitting risk associated with our method. ## Questions See Weakness. ## Overall Have we adequately addressed your concerns? We truly appreciate your comprehensive feedback and anticipate further dialogue during the rebuttal phase. Thank you for your insights. --- Rebuttal Comment 1.1: Title: Reply to Authors' rebuttal Comment: Thanks for the authors' detailed clarification and responses. Most of my concerns have been addressed. I still have some concerns regarding the overfitting risk. I acknowledge the different focuses of offline black-box optimization and online black-box optimization and explain why the authors preclude the exploration-exploitation considerations for long-term behavior. However, I am not sure what the key component among the several proposed ones is that makes the whole model more robust against the overfitting. In addition, what is the key component to achieve a better score beyond the maximum in the dataset and outperforms baselines? --- Rebuttal 2: Title: Please reply to the rebuttal. Comment: Dear Reviewer, Please reply to the rebuttal. AC. --- Rebuttal 3: Title: Clarification on Model Robustness and Key Components Comment: We appreciate the reviewer’s detailed feedback and are glad that most of the concerns have been addressed. We are uncertain about your use of the term 'overfitting.' We interpret this as the possibility that our proposed designs might overfit the trained proxy. Please let us know if we have misunderstood your concern. **The first key component** enhancing our model's robustness is the use of proxy-free diffusion guidance. 
In this framework, the denoising step cannot be interpreted as a gradient-based adversarial attack, since it does not rely on proxy gradients to directly modify the input design. This concept is thoroughly discussed in the seminal work [A], particularly in Equation (6). However, proxy-free diffusion guidance alone is insufficient, as it lacks direct guidance from the proxy, limiting its ability to extrapolate effectively. This limitation is illustrated in Figure 1 of our manuscript, titled 'Motivation for Explicit Proxy Guidance.' To address this, we introduce explicit proxy guidance as **the second key component**. This component aims to direct the sampling process toward high-property regions. Direct application of proxy gradients to the input space would result in out-of-distribution samples, potentially leading to what might be perceived as overfitting. Therefore, we apply proxy gradients to the scalar strength parameter $\omega$, which modulates both condition and diversity. This approach of optimizing scalar parameters rather than the design itself is explored in the ICML 2023 paper [B], specifically in Section 4.5 on Adaptive-$\gamma$. It demonstrates how supervision signals from the proxy can effectively update scalar hyperparameters, thereby enhancing robustness without directly modifying the input design. In essence, our model combines (1) the robustness afforded by proxy-free diffusion, which does not rely on proxy gradients, with (2) the targeted guidance from a trained proxy, which influences only the scalar strength parameter $\omega$ to mitigate overfitting risk. **These two key components** form our **proxy-enhanced sampling module**, which is crucial for the model's resilience against overfitting and instrumental in achieving superior outcomes. Additionally, in our **diffusion-based proxy refinement module**, we propose utilizing diffusion-derived distribution priors to refine the proxy, an approach not previously explored. 
This is also **an important component**. While prior work, such as COMs and ROMA, employed simple intuitive priors like conservative estimation and smoothness to refine the proxy, our method leverages diffusion-derived distributions, providing a more relevant and impactful signal for refining the proxy, which has proven to be more effective. Detailed comparisons with COMs and ROMA are provided in Appendix E, 'Further Ablation Studies'. Have we adequately addressed your concerns? We truly appreciate your comprehensive feedback. [A] Ho J, Salimans T. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022. [B] Chen C, et al. Bidirectional Learning for Offline Model-based Biological Sequence Design. ICML 2023. --- Rebuttal Comment 3.1: Comment: Thanks for the authors' detailed response. I now have a better understanding of how each proposed component works and how they relate to one another. I am not certain about the experiments, but I think the proposed method makes sense. Therefore, I decided to increase my score. --- Rebuttal 4: Title: Thanks for your support Comment: Thank you for your supportive feedback. We are pleased that our explanations have helped clarify these aspects of our work. We will ensure that these discussions are incorporated into the revised manuscript. --- Rebuttal Comment 4.1: Title: Additional question about Eq.(10) in the draft Comment: Dear authors, I found an issue when trying to derive Eq.(10) using Tweedie's formula as suggested by the authors. Note that $X_0 \sim p_0(X_0)$ and $X_t = X_0 + \mathcal{N}(0, \sigma(t)^2 \boldsymbol{I})$ from the forward pass of the diffusion model (corresponding to the VE SDE, Eq.(31) in [r1]). 
From the Tweedie’s formula, we achieve the following Equation: $$ \mathbb{E}[X_0|X_t=x_t] = x_t + \sigma(t)^2 \nabla _{x_t} \log p_t (x_t) $$ It can then be approximated using the trained score function $s_\theta(x_t)$ as $$ \mathbb{E}[X_0|X_t=x_t] \approx x_t + \sigma(t)^2 s_\theta(x_t) $$ However, the Eq.(10) in the submission drops the conditional expectation w.r.t. $X_0$ and directly obtain $$ x_0 \approx x_t + \sigma(t)^2 s_\theta(x_t) $$ Did I miss anything to achieve Eq.(10)? Could the authors explain more about how to derive Eq.(10)? [r1] Song et al. Score-Based Generative Modeling through Stochastic Differential Equations. 2021. --- Rebuttal 5: Title: Clarification on Eq.(10). Comment: Thank you for your insightful question regarding Eq.(10). We will now derive Eq.(10) step by step to address any concerns. We follow Eq (33) from [r1] where $p(x_t|x_0) = N(x_t; \mu(t) x_0, \sigma^2(t) I)$. Given this, we can sample $x_t$ from $x_0$ using: $x_t = \mu(t) x_0 + \epsilon \sigma(t)$. To recover $x_0$ from $x_t$, we need to know $\epsilon$, which approximates as $\epsilon \approx -\sigma(t) \cdot s_{\boldsymbol{\theta}}(x_t)$. Using this approximation, we derive $x_0 = \frac{x_t - \epsilon \sigma(t)}{\mu(t)} \approx \frac{x_t + s_{\boldsymbol{\theta}}(x_t) \cdot \sigma^2(t) }{\mu(t)}$. This approach originates from [r1], and we utilize the implementation framework detailed in another seminal work [r2]. Specifically, our code, available at https://anonymous.4open.science/r/RGD-7DBB/lib/sdes.py , implements this process as follows: - Line 24 implements $\mu(t)$ - Line 27 implements $\sigma^2(t)$ - Line 37 describes the sampling process: $x_t = \mu(t) x_0 + \epsilon \sigma(t)$ - Line 112 optimizes: $\epsilon \approx -\sigma(t) \cdot s_{\boldsymbol{\theta}}(x_t)$, where $\epsilon$ is the target, $\sigma(t)$ is the std, and $a$ is $s_{\boldsymbol{\theta}}(x_t)$. Apologies for any confusion. 
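The derivation above can be checked numerically. In this minimal NumPy sketch (all names illustrative), a perfect score model, which by the rebuttal's training target satisfies $s_{\theta}(x_t) = -\epsilon/\sigma(t)$, recovers $x_0$ exactly through the Eq.(10) identity $x_0 = (x_t + \sigma^2(t)\, s_{\theta}(x_t))/\mu(t)$:

```python
import numpy as np

rng = np.random.default_rng(1)
x0 = rng.normal(size=4)            # clean sample
mu_t, sigma_t = 0.7, 0.5           # illustrative values of mu(t), sigma(t)
eps = rng.normal(size=4)           # forward-process noise
x_t = mu_t * x0 + sigma_t * eps    # x_t ~ N(mu(t) x0, sigma(t)^2 I)

# A perfect score model predicts the training target -eps / sigma(t):
s_theta = -eps / sigma_t

# Eq.(10): x0 ≈ (x_t + sigma(t)^2 * s_theta(x_t)) / mu(t)
x0_hat = (x_t + sigma_t**2 * s_theta) / mu_t
```

With a trained network the recovery is only approximate, since $s_{\theta}$ matches $-\epsilon/\sigma(t)$ only up to the score-matching error; the exact recovery here reflects the idealized score.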
Considering that most readers and reviewers come from an offline MBO background, these advanced concepts from diffusion models can be challenging. We will add a statement following Line 178: "For a more detailed derivation, please refer to the Appendix." In the Appendix, we will include the discussion outlined above. Have we adequately addressed your concerns? References: - [r1] Song et al. Score-Based Generative Modeling through Stochastic Differential Equations. 2021. - [r2] Huang, C.-W., Lim, J. H., and Courville, A. C. A variational perspective on diffusion-based generative models and score matching. Advances in Neural Information Processing Systems, 2021, 34: 22863-22876. --- Rebuttal 6: Comment: Additionally, it's worth noting that Eq.(10) in our submission aligns closely with Eq.(15) from the seminal work DDPM [r3]. In DDPM, they present the equation $x_0 \approx \frac{x_t - \sqrt{1-\bar{\alpha}_t} \epsilon_{\boldsymbol{\theta}}(x_t)}{\sqrt{\bar{\alpha}_t}}$, which is derived in a discrete setting. [r3] Ho, J., Jain, A., and Abbeel, P. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 2020, 33: 6840-6851. --- Rebuttal 7: Comment: Thanks for the authors' detailed response. From the authors' new response, I find that the derivation of Eq.(10) does not come from Tweedie's formula, as suggested by the authors in the previous rebuttal. Tweedie's formula calculates the posterior expectation of $X_0$ given $X_t = x_t$, i.e. $\mathbb{E} [X_0 | X_t = x_t]$, instead of a concrete sample $x_0$. The derivation of Eq.(10) is actually through reparametrization of $X_t$ and relies on an approximation of the Gaussian noise $\epsilon \approx -\sigma(t) \cdot s_\theta(x_t)$. I now follow the derivation. Thanks again for the authors' detailed response. My concern is well addressed. --- Rebuttal 8: Title: Thank you for your continued engagement Comment: Thank you for your continued engagement and for raising these important points regarding the derivation of Eq.(10).
The reference to Tweedie's formula in our previous rebuttal was intended as a high-level citation, acknowledging the foundational idea of recovering samples from imperfect data. It was not directly used to derive Eq.(10), but rather to highlight the conceptual framework. To enhance understanding, we will include a citation of the seminal work DDPM [r3], where similar concepts are more explicitly detailed. Regarding the approximation $\epsilon \approx -\sigma(t) \cdot s_{\boldsymbol{\theta}}(x_t)$, the essence of diffusion models is to learn a model that can predict the noise vector, thereby enabling the denoising of samples from pure noise to realistic data. In our specific case, we train the diffusion model $s_{\boldsymbol{\theta}}(x_t)$ to predict $-\frac{\epsilon}{\sigma(t)}$. This is operationalized by optimizing the loss function mentioned in Line 112 of our sdes.py, where $\epsilon$ is the target, $\sigma(t)$ is the std, and $a$ is $s_{\boldsymbol{\theta}}(x_t)$. In essence, starting with $x_0$, we sample a noise vector $\epsilon$ as the target, add the corresponding noise to $x_0$ to generate $x_t$, and then train $s_{\boldsymbol{\theta}}(x_t)$ to more closely approximate $-\frac{\epsilon}{\sigma(t)}$. ~~Have we adequately addressed your concerns?~~ Based on the latest feedback, it appears that the derivation is now clear, so we have struck through this question. Thank you again for your thoughtful engagement with our work.
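The training recipe described in this thread — sample $\epsilon$, form $x_t = \mu(t)x_0 + \epsilon\sigma(t)$, and regress $s_{\boldsymbol{\theta}}(x_t)$ onto $-\epsilon/\sigma(t)$ — can be sketched in a few lines. The sketch below is our own toy illustration (1-D Gaussian data, a linear score model fit by least squares at a single fixed $t$), not the submission's implementation; for Gaussian data the optimal score is linear, so the fitted slope should approach $-1/(\mu^2(t)s_0^2 + \sigma^2(t))$.

```python
import numpy as np

# Toy denoising-score-matching sketch (illustrative, not the submission's code):
# regress a score model s(x_t) onto the target -eps/sigma(t) at one fixed t.
rng = np.random.default_rng(0)
m, s0 = 2.0, 0.5        # data distribution: X0 ~ N(m, s0^2) (toy choice)
mu, sigma = 0.8, 0.3    # mu(t) and sigma(t) at the fixed t (toy choice)

n = 200_000
x0 = rng.normal(m, s0, n)
eps = rng.normal(0.0, 1.0, n)
x_t = mu * x0 + sigma * eps      # forward sampling: x_t = mu(t) x0 + eps sigma(t)
target = -eps / sigma            # regression target for the score model

# For Gaussian data the optimal score is linear in x_t, so a linear
# "score network" s(x) = a*x + b suffices; fit it by least squares.
A = np.stack([x_t, np.ones(n)], axis=1)
(a, b), *_ = np.linalg.lstsq(A, target, rcond=None)

# The marginal p_t is N(mu*m, mu^2 s0^2 + sigma^2), whose score has
# slope -1/var_t and intercept mu*m/var_t; the fit recovers both.
var_t = mu**2 * s0**2 + sigma**2
assert abs(a - (-1.0 / var_t)) < 0.05
assert abs(b - mu * m / var_t) < 0.1
```

With such a fitted $s_\theta$, the recovery $x_0 \approx (x_t + \sigma^2(t)s_\theta(x_t))/\mu(t)$ from the exchange above can be applied directly.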
Summary: The paper proposes a framework called Robust Guided Diffusion for the problem of Offline Black-box Optimization. The key idea is to formulate the solution as conditional generation of high-performance designs using a diffusion model which has explicit guidance from a proxy (surrogate) model. This proxy model is also refined/updated via a proxy-free diffusion procedure. Experimental analysis is shown on multiple tasks from the design-bench benchmark. Strengths: - Overall, I like the paper because it includes two simple changes to an existing approach (DDOM) that show improved performance, and the changes are validated by ablation choices. Weaknesses: - One major premise (repeated multiple times in the paper) is that proxy-guided conditional generation is more robust than updating the design with standard gradient ascent on the proxy. However, it is not immediately clear why this should be true, and the justification for this key point is somewhat limited. If true, this would be a much bigger insight, going beyond black-box optimization. If it is only about the exploration/exploitation balance driven by w, we could also make standard gradient ascent have this property by optimizing an upper/lower confidence bound on the objective. Please describe why this is the case, either via some empirical experiment or theoretical insight. Also, in equation 11, we might evaluate the proxy far away from the training data depending on the values of s_\theta(x_t), \sigma(t), \mu(t). - The related work coverage and corresponding experimental analysis of the paper can be improved. This problem has seen an extensive body of work recently. Please see the references below and discuss/compare them appropriately. Some of them are included in the references but not compared in the experiments ([1], [2], [3]): - [1] Yuan, Ye, et al. "Importance-aware co-teaching for offline model-based optimization." Advances in Neural Information Processing Systems 36 (2023). - [2] Kim, Minsu, et al. 
"Bootstrapped training of score-conditioned generator for offline design of biological sequences." Advances in Neural Information Processing Systems 36 (2023). - [3] Nguyen, Tung, Sudhanshu Agrawal, and Aditya Grover. "ExPT: Synthetic pretraining for few-shot experimental design." Advances in Neural Information Processing Systems 36 (2023). - [4] Chemingui, Yassine, et al. "Offline model-based optimization via policy-guided gradient search." *Proceedings of the AAAI Conference on Artificial Intelligence*. Vol. 38. No. 10. 2024. - [5] Yao, Michael S., et al. "Generative Adversarial Bayesian Optimization for Surrogate Objectives." arXiv preprint arXiv:2402.06532 (2024). Technical Quality: 2 Clarity: 2 Questions for Authors: Some of the tasks in the design-bench benchmark have errors, which makes them less informative for evaluation. For example, the offline dataset in the superconductor task has multiple copies of the same inputs but with different outputs. As a result, the random forest oracle which is fit on this offline data is not reliable. It is mentioned that "This issue has now been rectified by the development team." How is it fixed? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Please see weaknesses section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you for your thoughtful feedback. We are committed to incorporating your suggestions in our revisions. ## Weaknesses > One major premise in the paper is that proxy guidance conditional generation is more robust. Let's clarify some concepts. Our discussion from Lines 26 to 30 categorizes offline BBO methods into forward and reverse approaches. (1) Employing standard gradient ascent on a proxy represents a forward approach, which encounters the OOD issue due to proxy inaccuracies on unseen designs. (2) Proxy diffusion guidance, a reverse approach, maps high values into high-scoring designs using diffusion steps and proxy gradients, but faces adversarial solutions due to its reliance on proxy gradients. (3) Proxy-free diffusion guidance, another reverse approach, similarly maps high values to high-scoring designs but does not rely on the proxy gradients. The core premise of our paper is that proxy-free diffusion guidance (3) outperforms both proxy gradient ascent (1) and proxy diffusion guidance (2) in robustness due to its independence from explicit proxy gradients on inputs, mitigating the risk of adversarial manipulation. This significant insight aligns with findings from [A], which argues that proxy-free diffusion guidance surpasses proxy diffusion guidance in robustness. Proxy diffusion guidance is akin to a gradient-based adversarial attack, whereas proxy-free diffusion is not, as it lacks a proxy for the diffusion process. [A] Ho, J. and Salimans, T. Classifier-free diffusion guidance > equation 11, we might evaluate the proxy far away from the training data. Our diffusion-based proxy refinement module addresses this by identifying adversarial samples located beyond the training data. It then refines the proxy by reducing its distance to the diffusion distribution for these outliers. This refinement enhances the proxy's accuracy for samples distant from the training data. 
We demonstrated the superior effectiveness of this method over COMs and ROMA in Appendix E "Further Ablation Studies." Additionally, we optimize only the scalar strength $\omega$, which has been empirically shown to provide greater robustness compared to complete design optimization in Section 4.5 "Ablation Studies" of BIB [B]. [B] Bidirectional learning for offline model-based biological sequence design. ICML 2023. > Related work. We have incorporated $14$ baselines in our study, including recent methods like DDOM, BONET, and BDI. To address your specific points, we conducted additional experiments with [1, 2, 4]. The focus of [2] is on biological sequence design, so we specifically compared against [2] on the TF8 and TF10 tasks: | Method | TF8 | TF10 | |--------|-----|------| | BOOTGEN | $0.970 \pm 0.001$ | $0.670 \pm 0.052$ | | RGD | $0.974 \pm 0.003$ | $0.694 \pm 0.018$ | Additionally, we present results for RGD alongside ICT and PGS: | Method | Superc | Ant | DKitty | Rosen | TF8 | TF10 | NAS | |--------|-----------------|----------------|----------------|----------------|----------------|----------------|----------------| | ICT | $0.505 \pm 0.014$ | $0.958 \pm 0.008$ | $0.960 \pm 0.025$ | $0.778 \pm 0.012$ | $0.957 \pm 0.010$ | $0.688 \pm 0.020$ | $0.665 \pm 0.072$ | | PGS | $0.475 \pm 0.048$ | $0.748 \pm 0.049$ | $0.948 \pm 0.014$ | $0.740 \pm 0.019$ | $0.968 \pm 0.019$ | $0.693 \pm 0.031$ | N/A | | RGD | $0.515 \pm 0.011$ | $0.968 \pm 0.006$ | $0.943 \pm 0.004$ | $0.797 \pm 0.011$ | $0.974 \pm 0.003$ | $0.694 \pm 0.018$ | $0.825 \pm 0.063$ | Our results confirm that RGD generally performs better than these methods. We opted not to include [3, 5] in our comparisons due to their experimental focus on a few-shot setting, and to prevent overcrowding the experimental section with an excessive number of baselines (already 14 + 3 = 17 in total). Here, "N/A" indicates that we lacked the resources to finish this run in time, but this does not affect our overall conclusion. 
We will integrate these results into Section 4.4 of our manuscript. In the related work of our revised manuscript, we will discuss the mentioned [1, 2, 3, 4, 5]. ## Questions > benchmark errors The original SuperC task presented two issues: (1) the offline dataset contained multiple instances of the same inputs with different outputs, and (2) the oracle generated inconsistent predictions for identical inputs due to randomness in the code. A similar issue was reported earlier, which we initially believed was related to the second point. This was rectified by the development team, who removed the randomness in the code. Regarding the first issue, we consulted with the Design-Bench authors, who advised retaining the duplicate entries as they represent distinct observations for the same inputs. To provide clear validation, we conducted further experiments after removing duplicates to reassess our method: | Method | BO-qEI | CMA-ES | RL | Grad | COMs | ROMA | NEMO | IOM | BDI | CbAS | Auto | MIN | BONET | DDOM | RGD | |--------|-------------|------------|------------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------| | SuperC | $0.362$ | $0.380$ | $0.399$ | $0.390$ | $0.396$ | $0.407$ | $0.404$ | $0.409$ | $0.405$ | $0.414$ | $0.371$ | $0.402$ | $0.371$ | $0.404$ | $0.410$ | These results demonstrate that our method continues to perform effectively in this adjusted scenario. We will incorporate these experimental results into the Appendix. Additionally, we will add a sentence after Line 211 of the main text stating: "We removed duplicates in SuperC and reran the experiments, with details provided in the Appendix." ## Overall Have we adequately addressed your concerns? We are eager to continue this dialogue during the rebuttal phase. Thank you. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thanks for taking the time to respond to my questions. 
Please see points related to your response below: - Regarding `This significant insight aligns with findings from [A], which argues that proxy-free diffusion guidance surpasses proxy diffusion guidance in robustness. [A] Ho, J. and Salimans, T. Classifier-free diffusion guidance.` Unless I am missing something, the main premise in [A] is to "attain a trade-off between sample quality and diversity **similar** to that obtained using classifier guidance". I am not sure how this reference paper [A]'s findings convey that `proxy-free diffusion guidance surpasses proxy diffusion guidance in robustness.` Moreover, guiding diffusion models to generate images for a specific class is a relatively easier/different problem than guiding towards the optima of a function. In the former, we are interested in sampling any point from a class-conditional distribution, whereas in the latter, we explicitly want to find the optima (a rare sample in the distribution). - Thanks for the ablation comparing the diffusion-based proxy refinement module with the COMs/ROMA strategy. Since the ablation shows final evaluation performance on the tasks, it is useful only if the proxy refinement part of the three methods is changed and everything else is kept the same. For example, this requires fixing either gradient ascent or proxy-enhanced sampling for searching/generating candidates for evaluation. Is this the case? - Thanks for including the discussion about new related work and fixing the error in the superconductor task. --- Rebuttal 2: Title: Please reply to the rebuttal. Comment: Dear Reviewer, Please reply to the rebuttal. AC. --- Rebuttal 3: Title: Thank you for your detailed feedback Comment: Thank you for your detailed feedback, which provides constructive insights for refining our paper. We will incorporate these points into the revised version. 
- In the paper [A], searching for the term 'adversarial' reveals critical arguments supporting the robustness of proxy-free diffusion guidance over proxy diffusion guidance. The text notes, > Furthermore, because classifier guidance mixes a score estimate with a classifier gradient during sampling, classifier-guided diffusion sampling can be interpreted as attempting to confuse an image classifier with a gradient-based adversarial attack. as mentioned in the second paragraph of the introduction. Additionally, the description of Equation (6) states, > Eq. (6) has no classifier gradient present, so taking a step in the $\epsilon$ direction cannot be interpreted as a gradient-based adversarial attack on an image classifier. These points suggest that proxy-free diffusion guidance surpasses proxy diffusion guidance in robustness. While we acknowledge that generating images for a specific class is simpler than guiding towards the optima of a function, the latter scenario necessitates even more robust guidance mechanisms. This is where proxy-free guidance excels: directly using proxy diffusion guidance may lead to out-of-distribution issues, whereas proxy-free diffusion guidance is more robust. We will incorporate these discussions at Line 121 of our manuscript, where we introduce guided diffusion. - Thank you for your comment. Yes, in our ablation study, we kept all elements except the proxy refinement part constant across the three methods. We will emphasize this at Line 482 of our manuscript, where we perform the further ablation studies. Have we adequately addressed your concerns? --- Rebuttal Comment 3.1: Title: Response Comment: Thanks for the response. I am not fully convinced by the robustness-of-proxy-free-diffusion-guidance argument, but I don't want to nitpick now and would be happy to increase the score towards acceptance. Please add these points in the main paper along with a proper discussion of the related work. 
--- Rebuttal 4: Title: Thanks for your prompt response and support. Comment: Thank you for your willingness to consider our arguments and for your constructive suggestions. We will be sure to add these points and enhance the discussion of related work in the main paper.
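For reference, the proxy-free (classifier-free) guidance rule from [A] that is debated in this thread combines a conditional and an unconditional score estimate with a scalar strength $w$, with no proxy gradient taken on $x_t$. The sketch below is our own toy illustration with hypothetical linear score functions, not the submission's code.

```python
# Toy sketch of the classifier-free guidance rule from Ho & Salimans [A]
# with hypothetical linear score functions (illustrative only).
def guided_score(s_cond, s_uncond, x_t, y, w):
    """Mix conditional and unconditional scores with guidance strength w:
    s_tilde = (1 + w) * s_cond(x_t, y) - w * s_uncond(x_t).
    No proxy/classifier gradient on x_t appears, so the update cannot be
    read as a gradient-based adversarial attack on a separate proxy."""
    return (1.0 + w) * s_cond(x_t, y) - w * s_uncond(x_t)

# Toy scores of N(0, 1) (unconditional) and N(y, 1) (conditional on y).
s_uncond = lambda x: -x
s_cond = lambda x, y: -(x - y)

x_t, y = 0.5, 2.0
# w = 0 reduces to purely conditional sampling ...
assert guided_score(s_cond, s_uncond, x_t, y, w=0.0) == s_cond(x_t, y)
# ... while larger w pushes the update harder toward the condition y.
assert guided_score(s_cond, s_uncond, x_t, y, w=2.0) > s_cond(x_t, y)
```

The key point under discussion is visible in the formula itself: the guidance signal is baked into the learned conditional score rather than obtained by differentiating a separate proxy at sampling time.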
Summary: The paper introduces a robust guided diffusion framework for offline black-box optimization, combining proxy and proxy-free diffusion for conditional generation. Key improvements include proxy-enhanced sampling and diffusion-based proxy refinement to address out-of-distribution issues. Experiments on the Design-Bench benchmark show the method outperforms existing techniques, validated by ablation studies. Strengths: - The regularization of the proxy using the diffusion model is interesting. Additionally, optimizing the alpha parameter in an offline manner aligns well with the offline setup, enhancing the method's consistency and applicability. - Experiments and ablations on four continuous and three discrete tasks validate the effectiveness of the proposed RGD method, showing improved performance and robustness. Weaknesses: - The paper lacks comparison with relevant approaches like ICT [1] and TRI-mentoring [2]. Despite referencing the latter in the related work section, it’s overlooked in the results. - It is unclear why the results without proxy-enhanced sampling still achieve competitive outcomes, surpassing the dataset y_max. This contradicts the claims in lines 40-46. Where does the out-of-distribution (OOD) problem arise then? What is the distribution of the generated 128 candidates with and without the sampling? - The BDI reported results are significantly lower than in the original paper, especially for the ANT and TFBIND8 tasks. This also seems to be the case for BONET results. Did the authors change the evaluation setup? [1]: Importance-aware Co-teaching for Offline Model-based Optimization, https://arxiv.org/abs/2309.11600 [2]: Parallel-mentoring for Offline Model-based Optimization, https://arxiv.org/abs/2309.11592 Technical Quality: 2 Clarity: 3 Questions for Authors: - How are the initial 128 designs selected? Do you generate N designs and then use the proxy to select 128? - Are the diffusion parameters like T the same for DDOM? 
- Can you show, for a task, how the values (proxy/oracle) of the generated designs progress throughout the diffusion steps? - How were the discrete tasks handled? Were logits used, or were they kept discrete? - Previous methods typically evaluate on the Hopper task; why was it removed and Rosenbrock is added instead? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors address the limitations and potential negative impacts in their paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## General Reply Dear Reviewer, Thank you for your valuable feedback. Your insights are instrumental in improving our paper, and we are committed to thoroughly revising our work based on your suggestions. ## WEAKNESSES > The paper lacks comparison with relevant approaches like ICT [1] and Tri-mentoring [2]. Thank you for pointing out the omission of comparisons with ICT and Tri-mentoring. Our initial experiments did not include these methods due to their use of ensemble techniques, contrasting with our single proxy approach. To address this, we have now conducted comparative experiments with both methods using the same benchmarks: | Method | Superc | Ant | DKitty | Rosen | TF8 | TF10 | NAS | |--------|------|-----|------|-----|------|-----|------| | ICT | $0.505 \pm 0.014$ | $0.958 \pm 0.008$ | $0.960 \pm 0.025$ | $0.778 \pm 0.012$ | $0.957 \pm 0.010$ | $0.688 \pm 0.020$ | $0.665 \pm 0.072$ | | Tri-mentoring | $0.510 \pm 0.014$ | $0.946 \pm 0.010$ | $0.950 \pm 0.015$ | $0.780 \pm 0.006$ | $0.968 \pm 0.002$ | $0.689 \pm 0.014$ | $0.760 \pm 0.092$ | | Our Method | $0.515 \pm 0.011$ | $0.968 \pm 0.006$ | $0.943 \pm 0.004$ | $0.797 \pm 0.011$ | $0.974 \pm 0.003$ | $0.694 \pm 0.018$ | $0.825 \pm 0.063$ | These results validate our method's effectiveness relative to ensemble-based approaches. We will include this data in Tables 1 and 2 of the revised manuscript. > It is unclear why the results without proxy-enhanced sampling still achieve competitive outcomes, surpassing the dataset y_max. Thank you for your observation regarding the unexpected competitive outcomes of our model without proxy-enhanced sampling. The results surpassing dataset $y_{\text{max}}$ can be attributed to two key factors: 1. **Conditioning on Maximum Labels**: The diffusion model is conditioned with the label $y_{\text{max}}$, naturally guiding the sample generation to orbit around $y_{\text{max}}$. 2. 
**Diversity of Generation**: The inherent diversity of the diffusion model contributes to the possibility of occasionally surpassing $y_{\text{max}}$ even in the absence of explicit guidance. This observation does not contradict the statements made in lines $40$-$46$, where we discuss the model's struggles without explicit guidance: 1. **Comparative Performance**: While samples without proxy-enhanced sampling can exceed $y_{\text{max}}$, the results with explicit guidance consistently outperform those without, confirming the benefits of proxy-enhanced approaches as mentioned. 2. **Frequency and Quality of High-Performance Samples**: The diffusion model without explicit guidance does produce high-performance samples, but this occurs less frequently and with lower average performance compared to when explicit guidance is employed. To provide a clearer picture, most of the $128$ candidates generated without explicit guidance indeed perform below $y_{\text{max}}$, and their average performance significantly trails that of candidates generated with proxy-enhanced sampling. We will elaborate on these aspects at Line $283$ of the revised manuscript. > The BDI reported results are significantly lower than in the original paper, especially for the ANT and TFBIND8 tasks. This also seems to be the case for BONET results. Thank you for noting the discrepancies in the BDI results. We referenced the BDI results from the Tri-mentoring paper [2], where a modified network architecture was used due to computational constraints. Specifically, instead of the original $6$-layer MLP kernel, a more manageable $3$-layer MLP was used. We will add this in Line 239. Regarding BONET, a deviation was noted in its candidate selection process, which evaluates $256$ candidates rather than the typical $128$. To align with standard practices, we reran BONET's code under standardized conditions. We will add this in Line 249. 
## Questions > How are the initial 128 designs selected? Do you generate N designs and then use the proxy to select 128? In our generative model approach, we do not select from a predetermined set of "initial designs" as seen in traditional methods like gradient ascent. Instead, our method generates designs directly from pure noise, bypassing the typical optimization of existing designs. This aligns with DDOM. > Are the diffusion parameters like T the same for DDOM? Yes, we have aligned key diffusion parameters with DDOM. For instance, the number of diffusion time steps $T$ is set consistently at 1000 in both our RGD and DDOM models to ensure comparability. > Can you show, for a task, how the values (proxy/oracle) of the generated designs progress throughout the diffusion steps? We have documented the progression of values (proxy/oracle) for the Ant and DKitty design generation throughout the diffusion steps in Figure 1 of the global response PDF. The data demonstrate how the design is gradually guided towards higher scores via the proxy. > How were the discrete tasks handled? In handling discrete tasks, we utilize the "map_to_logits" function provided by the design-bench library. This function converts discrete task inputs into logits. > Previous methods typically evaluate on the Hopper task; why was it removed and Rosenbrock added instead? We excluded the Hopper task due to inconsistencies between the offline dataset values and those obtained from the oracle, as discussed in DDOM [3] under "A.4. HopperController". To enhance the robustness and credibility of our method, we introduced the Rosenbrock task instead. ## Overall Have we addressed your concerns with our response? Your thorough feedback is highly valued, and we welcome continued discussion in the rebuttal phase. Thank you. [1] Ye Yuan et al. Importance-aware co-teaching for offline MBO. NeurIPS 2023. [2] Can Chen et al. Parallel-mentoring for offline MBO. NeurIPS 2023. [3] Siddarth et al. 
Diffusion models for black-box optimization. ICML 2023. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed responses. While you've addressed most of my concerns, two key issues are troubling: (1) the use of BDI results from the TRI-mentoring paper without proper citation, coupled with the initial omission of TRI-mentoring from the benchmarks, and (2) the results from a reduced network architecture, which leads to an unfair comparison that may inadvertently favor your method. These issues affect the credibility of the work, prompting me to revise my score to a rejection. I hope these concerns can be addressed in any future submissions to ensure a fair and accurate presentation. --- Rebuttal 2: Title: reply to the rebuttal Comment: Dear Reviewer, Please reply to the rebuttal. AC. --- Rebuttal 3: Comment: Thank you for your detailed feedback. Regarding the use of BDI results from the TRI-mentoring paper, we have now included proper citations and incorporated TRI-mentoring into our benchmarks for a comprehensive comparison. We initially viewed TRI-mentoring primarily as an advanced ensemble method, which seemed distinct from our model's approach, and thus only included it in the related work section. However, recognizing the importance of clarity and completeness, we have addressed this in our revised submission. As for the reduced network architecture, we followed the specifications outlined in the published NeurIPS 2023 TRI-mentoring paper and used its reported results. Our approach was consistent with the standard three-layer MLP used across all methods in this comparison, ensuring a fair and uniform basis for evaluation. We hope these clarifications address your concerns. Could you please let us know if there are any further issues or suggestions you have? 
--- Rebuttal Comment 3.1: Title: Appeal for Reassessment Based on Review Feedback Comment: Given your recognition that we have addressed most of the initial concerns: > While you've addressed most of my concerns, we were surprised by the decision to downgrade our submission from 'borderline reject' to 'reject,' particularly since the remaining issues were, in our assessment, relatively minor. We are confident that the revisions we have implemented thoroughly address the issues raised. In light of these efforts, we kindly request a reevaluation of our paper. --- Rebuttal 4: Comment: Thank you for your prompt feedback. As previously mentioned, we acknowledge the TRI-mentoring method and have discussed it in the related work section. We initially did not include a direct comparison with TRI-mentoring because (1) it employs an advanced ensemble approach, distinctly different from our methodology with a single proxy, and (2) our analysis already included a comprehensive set of 14 baselines. Following your insightful recommendation, we have now incorporated a comparative analysis with TRI-mentoring. We also recognize and regret the inadvertent omission of the citation for the BDI results from TRI-mentoring, which has been duly corrected. We wish to clarify that **our omission was limited to the citation of results; we did not manipulate the baseline results, and thus this should not be considered as selective reporting.** We appreciate the emphasis on transparency and fully agree with its importance. We understand that **peer review should focus on the scientific content and contributions of a manuscript**. While unintentional oversights are unfortunate, they have been rectified and should not overshadow the substantive scientific evaluations of the work. We hope that the changes implemented demonstrate our commitment to transparency and scientific rigor.
Rebuttal 1: Rebuttal: Dear Reviewers, We appreciate your detailed evaluation and insightful comments on our manuscript. Acknowledging your feedback, we have addressed one primary concern highlighted in your reviews within this response. ## Adversarial Sample Identification > (Reviewer a4Cg) i) Algorithm 1, Line 4, how to identify the adversarial examples? From Lines 187-188, it looks like gradient ascent is utilized to find the x that maximizes y; it is unclear to the reviewer how to determine if the obtained x is an adversarial example. > (Reviewer hUhz) To compute the regularization loss term in Eq (15), we need to collect samples from the adversarial distribution. I cannot find the detailed procedure for collecting adversarial samples. We utilize a vanilla proxy to perform $300$ gradient ascent steps, identifying samples with unusually high prediction scores as adversarial. This method is based on the limited extrapolation capability of the vanilla proxy, as demonstrated in Figure 3 of COMs [1]. These unusually high predictions indicate deviations from the normal data distribution, validating their classification as adversarial examples. We will include these details in Line 188 of our manuscript to enhance understanding. Best, Submission 2262 Authors. [1] Brandon Trabucco, Aviral Kumar, Xinyang Geng, and Sergey Levine. Conservative objective models for effective offline model-based optimization. In Proc. Int. Conf. Machine Learning (ICML), 2021. Pdf: /pdf/222d42015234893351e06d47fa36506996857a2d.pdf
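The identification procedure described in this global response — gradient ascent on a vanilla proxy, then flagging designs with unusually high predicted scores — can be sketched as follows. This is our own toy illustration (1-D data, a linear proxy, and an arbitrary flagging margin), not the submission's implementation.

```python
import numpy as np

# Toy sketch of adversarial-sample identification: run gradient ascent on a
# vanilla proxy and flag designs whose predicted scores are unusually high
# relative to the offline data (illustrative constants throughout).
rng = np.random.default_rng(0)

# Toy offline dataset on [0, 1.5]; the oracle is bounded but the proxy is not.
X = rng.uniform(0.0, 1.5, size=256)
y = np.sin(X)

# "Vanilla proxy": a linear least-squares fit, which extrapolates without bound.
w, b = np.polyfit(X, y, deg=1)
proxy = lambda x: w * x + b
proxy_grad = lambda x: w            # constant gradient of the linear proxy

# 300 gradient-ascent steps from random starts, as stated in the rebuttal.
x = rng.uniform(0.0, 1.5, size=128)
for _ in range(300):
    x = x + 0.05 * proxy_grad(x)

# Flag samples whose predictions far exceed the best offline score
# (the margin below is our own illustrative choice).
threshold = y.max() + 0.5
adversarial = x[proxy(x) > threshold]
assert len(adversarial) == 128      # every ascent endpoint is flagged here
```

Because the linear proxy keeps climbing off-support while the true objective is bounded, every ascent endpoint lands far outside the data with an inflated prediction, which is exactly the failure mode of limited extrapolation that the flagging rule exploits.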
NeurIPS_2024_submissions_huggingface
2024
Summary: The paper introduces a new method, named RGD, for Offline Black-box Optimization (BBO). RGD incorporates an improved proxy to guide the previous proxy-free method (i.e. DDOM [4]). Key technical innovations include (1) improving the robustness of the proxy function against adversarial samples by consistency regularization with the diffusion process; (2) dynamic per-sample reweighting between proxy-guided and proxy-free sampling. Compared to previous approaches, RGD demonstrates superior performance on Design-Bench [3]. Strengths: Methodology: RGD integrates forward and reverse approaches for BBO in a way that they help each other (e.g. using the forward proxy to guide the reverse sampling and using the diffusion process to improve the forward proxy), which is technically sound and interesting. Experiment: RGD demonstrates superior performance on Design-Bench, compared to the baselines. Ablation: Ablations on different components of RGD are provided. Weaknesses: The reviewer would prefer some clarifications on the method and the experiments. i) Algorithm 1, Line 4, how to identify the adversarial examples? From Lines 187-188, it looks like gradient ascent is utilized to find the x that maximizes y; it is unclear to the reviewer how to determine if the obtained x is an adversarial example. ii) Algorithm 1, Line 7, refine the proxy function via eq 15. It would be best if the authors could provide further details on how to optimize eq (15), e.g. the number of validation and adversarial samples, and the number of iterations for the bi-level optimization discussed in Appendix B. iii) Algorithm 1, Line 13, optimizing \omega. Again, it would be best if the authors could provide extra info on how to optimize \omega. From Algorithm 1, it looks like \omega is time dependent and optimized for each time step. How many training iterations are required for each time step? The reviewer also wonders if the obtained \omega are dramatically different between different time steps. 
iv) From Lines 257-258, it looks like the baselines shown in Tables 1 & 2 were re-implemented. If this is the case, the authors are encouraged to include more implementation details, e.g., the model architecture for the score function. This could help follow-up works reproduce the reported results. The reviewer also wonders whether the source code will be made public. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Limitations have been discussed in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## General Reply Dear Reviewer, We sincerely appreciate the time and effort you have invested in providing such a constructive review of our manuscript. Your insights and suggestions are invaluable, and we are truly grateful for the guidance you have provided. We are fully committed to carefully considering and incorporating all your feedback to enhance the quality and clarity of our revised manuscript. ## Weaknesses > i) Algorithm 1, Line 4, how to identify the adversarial examples? Refer to the global response "Adversarial Sample Identification". > ii) Algorithm 1, Line 7, refine the proxy function via eq 15. It would be best if the author could provide further details on how to optimize eq (15), e.g. number of validation and adversarial samples, number of iterations for the bi-level optimization discussed in Appendix B. We refine the proxy function using batch optimization. Each batch comprises 256 training, 256 validation, and 128 adversarial samples. The bi-level optimization process, outlined in Appendix B, involves a single iteration for both the inner and outer levels to adjust the hyperparameter $\alpha$. We will include these details in Appendix B. > iii) Algorithm 1 Line 13, optimizing \omega. Again, it would be best if the author could provide extra info on how to optimize \omega. From Algorithm 1, it looks like \omega is time dependent and optimized for each time step. How many training iterations are required for each time step. The reviewer also wonder if the obtained \omega are dramatically different between different time steps. $\omega$ is indeed optimized time-dependently, updated once per time step using an Adam optimizer with a learning rate of 0.01. We will include this specification in Line 168 of the revised manuscript. Regarding its variability, we have already discussed the variability of $\omega$ in Figure 3 of our manuscript, which exhibits significant changes between different time steps. 
> iv) From Line 257-258, it looks like the baselines shown in Table 1 & 2 were re-implemented. If this is the case, the authors are encouraged to include more implementation details, e.g. the model architecture for the score function, etc. This could help follow-up works to reproduce the reported results. The reviewer also wonders if the source code will be made public. We only modified settings where necessary. For example, since DDOM/BONET generate $256$ candidates, unlike the typical $128$ from most methods, we reran those experiments to ensure comparable conditions. This ensures our results are directly comparable across all methods. We plan to make all source code publicly available upon acceptance of the paper, facilitating easy reproduction of our results by the research community. ## Overall Does this response address your concerns? We appreciate your feedback and look forward to further discussions during the rebuttal phase. Thank you for your input. --- Rebuttal Comment 1.1: Comment: Dear Reviewer, As the discussion period is nearing its conclusion, we kindly ask you to engage in the discussion and provide notes on any concerns that have not yet been addressed, along with the reasons why. Thank you for your attention to this matter. AC. --- Rebuttal 2: Title: reply to the rebuttal Comment: Dear Reviewer, Please reply to the rebuttal. AC.
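The gradient-ascent search discussed in this thread (Algorithm 1, Line 4: ascending the proxy's prediction to obtain high-scoring candidates, some of which may be adversarial) can be sketched as below. The proxy here is a hypothetical toy function standing in for the learned model, and the actual adversarial-sample identification rule is in the authors' global response, not reproduced here.

```python
import numpy as np

def proxy(x):
    # Hypothetical stand-in for the learned proxy f(x) ~ y:
    # a concave quadratic whose maximizer is x = 2.
    return -np.sum((x - 2.0) ** 2)

def proxy_grad(x):
    # Analytic gradient of the toy proxy above; with a neural proxy
    # this would come from autodiff instead.
    return -2.0 * (x - 2.0)

def ascend_proxy(x0, steps=100, lr=0.1):
    # Plain gradient ascent on the proxy, as described in Lines 187-188:
    # candidates with high predicted y but far from the data are the ones
    # the method treats as potentially adversarial.
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        x = x + lr * proxy_grad(x)
    return x
```

In this toy setting the iterates converge to the proxy's maximizer; in the paper's setting the ascent instead surfaces candidates on which the proxy may be over-optimistic.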
Boosting Alignment for Post-Unlearning Text-to-Image Generative Models
Accept (poster)
Summary: This paper presents a novel perspective on model unlearning for text-to-image generative models, treating it as a constrained optimization problem. It introduces a new loss function for the unlearning process, which combines the remaining loss and the forgetting loss. The paper's key technical contribution is the concept of the "restricted gradient." Additionally, the proposed method uses an "LLM in the loop" approach. Strengths: - The paper offers a novel and interesting perspective on gradient surgery, viewing it as constrained optimization. - The "LLM in the loop" approach sounds catchy. Weaknesses: - The use of the proposed loss function isn't sufficiently justified. The unlearning process employs a combination of the remaining loss and the forgetting loss, with the latter being the negative of the loss used to train the diffusion model. Making a random prediction without considering image fidelity would minimize the forgetting loss. In this work, the proposed method seems to work due to the balance between the two losses and the restricted gradient. Nonetheless, justifying the use of the forgetting loss remains challenging. - The diversification process largely depends on the performance of the LLM. For the class-conditional tasks, the stratified sampling technique can work since the number of classes is quite limited. For the target concept removal tasks, it is unclear how to obtain diverse prompts that can retain the performance of the diffusion models (since there will be more than a million concepts in reality). - This paper's primary technical contribution revolves around the concept of the "restricted gradient." However, no experimental results support this technical contribution. Technical Quality: 3 Clarity: 3 Questions for Authors: - When training is complete, the remaining loss is zero or close to zero. Therefore, the constraints defined in Definition 3 might be violated. What action does the optimizer take under such a condition? 
- What is the performance of the baseline diffusion model without any unlearning method in Table 1? Due to the classifier performance, the baseline may not achieve 100% accuracy for both UA and RA. - How robust is the CLIP alignment score? Is the CLIP alignment score well aligned with human judgment, especially in the context of artist removal? - Do you have any detailed ablation studies on the "LLM in the loop" process? Does the number of prompts used to generate D_f and D_r affect the overall performance? If so, what would be your practical suggestions for this process? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are well addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful feedback. We have provided detailed responses to your questions and concerns below. Should you have any remaining concerns or questions, we would welcome further discussion. If our responses have adequately addressed your initial concerns, we would be grateful if you would consider adjusting your evaluation accordingly. **Justifying the proposed loss function. In particular, the forget loss may favor random predictions.** We appreciate your excellent point. We would like to start by clarifying our objective and the goal of unlearning. *Problem Formulation:* Our objective is to find an unlearned model with new optimal weights $\theta_u$, given a pre-trained model with original weights $\theta$, a forget set $D_f$, and a retain set $D_r$, such that the unlearned model has forgotten $D_f$ while maintaining its utility on $D_r$. Formally, we aim to: - Maximize the forget error on $D_f$, represented by $L_f(\theta)$ - Minimize the retain error on $D_r$, represented by $L_r(\theta)$ We formulate this as: $\min_\theta L_r(\theta) + L_f(\theta)$ (where $L_f$ denotes the forgetting loss, i.e., the negated diffusion loss on $D_f$, so minimizing it maximizes the forget error), which is presented in line 122 in our paper. The goal of unlearning is application-dependent and should be defined and evaluated depending on the application, as described in [1]. In our applications, unlearning means the unlearned model can no longer generate outputs for "undesirable prompts" while preserving utility for non-target-related prompts. For example, for CIFAR-10: inability to generate target-class images, while still being able to generate non-target-class images. For nudity or style removal: inability to generate any form of exposed body parts or target styles, while still being able to generate images for non-target concept-related prompts. Our choice of forget loss function for unlearning, which has also been widely used in [2,3,4], is specifically designed to maximize the loss for the forget set. 
This approach is justified by our goal of making the model unable to generate content related to the target concepts. **The diversification process largely depends on the performance of the LLM.** Thank you for your insightful feedback. This is indeed an important consideration in our approach. We acknowledge that we cannot cover all concepts as a retain set, which is why it's crucial to design the retain set systematically. For target concept removal tasks, we take a different approach to address the challenge of "millions of concepts": *Generating diverse initial prompts:* We generate a retain set of prompts ($D_r$) that cover a wide range of diverse dimensions unrelated to the target concept we aim to erase. These dimensions include activities, environments, times, moods, and others suggested by a Large Language Model. *Preserving broader categories:* We aim to maintain the model's knowledge of broader categories that could potentially be affected by the unlearning process. For instance, when unlearning nudity-related concepts, we strive to preserve the model's understanding of the broader category of "a person". For example, prompts in $D_r$ might include "A person walking in a forest at sunset." To create the forget set ($D_f$), we incorporate target-related concept words into these diverse prompts. For instance, "A nude person walking in a forest at sunset". We use 'LLM in the loop' to help generate diverse prompts, but our core strategy focuses on ensuring both broader categories and diverse dimensions. This approach is based on the assumption that not all concepts are equally affected by the unlearning process. Our empirical results show that alignment scores for COCO 10K (SD: 0.334, ESD: 0.322, Δ: 0.012) have a smaller Δ than those from $D_r$ (SD: 0.352, ESD: 0.329, Δ: 0.023). This motivates our work and raises questions about the need for a detailed design of the retain set. 
**Experimental results do not support the primary technical contribution, the restricted gradient.** Thank you for your feedback. We would like to clarify the following points: In Table 1, GradDiff represents the method without the restricted gradient, while RG represents the method with the restricted gradient without diversification. The results demonstrate improved performance in terms of Unlearning Accuracy (UA), Remaining Accuracy (RA), and Fréchet Inception Distance (FID) when using the restricted gradient. **The restricted gradient in Definition 3 – The constraints may be violated when the loss is zero. Is this an issue?** We thank the reviewer for pointing this out. Indeed, the direction $\mathbf{v}$ will not exist when $\mathbf{x}$ is a maximizer of $L_{\alpha}$ and $L_{\beta}$. However, this situation does not occur in our stochastic learning setting for two reasons: 1. The losses $L_{\alpha}$ and $L_{\beta}$ are generally intractable and only approximable by stochastic minibatches. Therefore, the model parameters $\mathbf{x}$ will almost never be the maximizer on any particular minibatch, meaning that there will always be a valid direction on each training iteration. 2. During unlearning, we perform gradient ascent on the diffusion loss, which is a quadratic function with no global maximizer. Empirically, we have never reached a point where this issue has arisen. We have added a note in our manuscript to discuss these details. [1] Towards Unbounded Machine Unlearning [2] Eternal sunshine of the spotless net: Selective forgetting in deep networks [3] Unrolling SGD: Understanding factors influencing machine unlearning [4] Knowledge Unlearning for Mitigating Privacy Risks in Language Models --- Rebuttal 2: Title: Additional Questions Comment: We thank the reviewer for those questions. These are great comments and feedback. **Performance of the baseline diffusion model in Table 1.** Thank you for this important point. 
We've added the baseline SD performance on UA/RA/FID as follows: - **UA (↑):** 0.052 - **RA (↑):** 0.955 - **FID (↓):** 3.815 We will incorporate this result in the revised version. **Robustness of the CLIP alignment score. Is it aligned with human judgment?** This is a very insightful question. We thank the reviewer for this comment. We acknowledge that there might be a gap between the clip alignment score and human judgment. Therefore, we additionally conducted the human evaluation and attached the results here. | | Human Judgment on $D_r$ (↑) | AS on $D_r$ (↑) | |----------|----------------|-----------| | SD | 3.4 | 0.348 | | ESD | 3.0 | 0.330 | | Salun | 1.1 | 0.280 | | RGD (Ours) | 3.2 | 0.352 | For human judgment, we collected responses from 9 subjects who were asked to score a test set. The scoring range was from 1 (least aligned with a given prompt) to 5 (most aligned with a given prompt). Our results demonstrate that our method achieves judgment scores close to those of SD, while Salun performs poorly. The relative ranking on the retain set aligns well with the CLIP alignment score. We will incorporate these results into our revised version. Thank you again for your feedback. **Detailed ablation studies on the LLM in the loop process. Does the number of prompts affect performance?** We thank the reviewer for this great question. We acknowledge that it is important to study the impact of dataset size from “LLM in the loop” and therefore, we have investigated how varying the size of $D_f$ and $D_r$ affects performance. Here are our key findings: Consistency: Across different sizes (400, 800, 1200), our method maintains higher alignment scores. Optimal Size: The size of 800 (reported in our paper) shows the best balance of performance. It matches the setting used in [1], allowing for fair comparison. Smaller Set (400): While still effective, this size shows a slight decrease in alignment scores. 
This is likely due simply to the increased number of iterations on a smaller dataset. Larger Set (1200): This size can achieve high alignment scores comparable to 800 if we reduce $\alpha$ from 1.5 to 1.15 to balance the increased gradient ascent steps. Our practical suggestion is therefore that, in general, it is beneficial to include more diverse samples for unlearning in order to maintain model utility. **Ablation on the size of $D_f$ and $D_r$ for Nudity Removal.** | AS (↑) | $D_{r, train}$ | $D_{r, test}$ | |------------------------|------------------------------|------------------------------| | SD | 0.357 | 0.352 | | RGD (Ours) \|$D_r$\| = \|$D_f$\| = 400 | 0.336 (0.021) | 0.339 (0.013) | | RGD (Ours) \|$D_r$\| = \|$D_f$\| = 800 | 0.354 (0.003) | 0.350 (0.002) | | RGD (Ours) \|$D_r$\| = \|$D_f$\| = 1200 | 0.352 (0.005) | 0.346 (0.006) | We believe this paper demonstrates the importance of proper usage of 'LLM in the loop', and a more comprehensive study of the design will be valuable for future work. [1] SALUN: EMPOWERING MACHINE UNLEARNING VIA GRADIENT-BASED WEIGHT SALIENCY IN BOTH IMAGE CLASSIFICATION AND GENERATION --- Rebuttal Comment 2.1: Comment: Thanks for the additional experiments and further clarification. After reading the rebuttal and other reviews, I decided to maintain my original score.
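The restricted gradient at the center of this exchange can be illustrated with a short PCGrad-style sketch. This is our assumption about the general mechanism (projecting away the conflicting component of each gradient so the combined step does not increase either loss to first order); the paper's exact projection may differ, and `g_retain` / `g_forget` are placeholder gradient vectors.

```python
import numpy as np

def restricted_update(g_retain, g_forget):
    # PCGrad-style combination (an assumption about the exact form of the
    # paper's restricted gradient): when the two task gradients conflict
    # (negative inner product), remove from each the component along the
    # other, so the summed direction has non-negative alignment with both
    # objectives.
    def drop_conflict(g, h):
        dot = g @ h
        if dot < 0:  # conflict: remove the component of g along h
            g = g - (dot / (h @ h)) * h
        return g
    return drop_conflict(g_retain, g_forget) + drop_conflict(g_forget, g_retain)
```

When the gradients do not conflict, this sketch reduces to the plain sum of the two gradients, i.e., the GradDiff-style update discussed above.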
Summary: This work addresses the issue that powerful generative models may generate harmful or undesired content that should be unlearned. It proposes to balance the unlearning objective and the text-image alignment on the remaining data by identifying a gradient direction that achieves a monotonic decrease of both objectives (with theoretical analysis), along with an LLM-powered augmentation to encourage diversity in the data to be unlearned. The experiments show the proposed method's capacity for unlearning, emphasizing its retention of text-image alignment. Strengths: This paper addresses an important issue in machine unlearning, at a time when powerful generative models can produce harmful or undesired content that should be mitigated. The idea of finding a gradient direction that is good for both objectives is sound and interesting, although similar ideas have been explored in other works. Overall, this paper is easy to follow, and the experiments as well as the theoretical analysis are provided in balance. The quality is good and the authors should be praised for that. Weaknesses: While this work claims to be able to do unlearning both for forgetting harmful concepts / copyrighted styles and for forgetting individual objects (at least as the introduction suggests), it overall seems to be limited to the former. For example, diversity is obtained through LLM-produced examples, thus limiting the approach to concepts that are expressible in text, rather than those supported by image examples. Also, the experiments do not show removing identities (e.g. celebrities), which is demonstrated in previous works. The improvements over SalUn are shown mostly in quantitative results. Qualitatively, one may argue the difference is small. This should be better explained. Technical Quality: 3 Clarity: 3 Questions for Authors: N/A Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes. 
This work addresses the negative impact of generative models, for which the work should be commended. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful feedback. We have provided detailed responses to your questions and concerns below. Should you have any remaining concerns or questions, we would welcome further discussion. If our responses have adequately addressed your initial concerns, we would be grateful if you would consider adjusting your evaluation accordingly. **The authors do not provide a study on forgetting individual objects.** We appreciate the reviewer's point and concern. We would like to clarify that our experiments do not focus on forgetting individual objects, but rather on: 1. Class-wise object forgetting in diffusion models (demonstrated through CIFAR-10 experiments). 2. Concept removal, specifically harmful content ("nudity") and copyrighted styles ("artist style") in stable diffusion models, as described in lines 13-15. **The experiments do not show removing identities (e.g., celebrities). Do the authors have any results here?** We appreciate the reviewer's observation. We would like to emphasize that not all prior works address all applications. For example: - [1] focuses primarily on harmful content removal and object removal. - [2] addresses style and harmful content removal but not class-wise removal or celebrity removal. However, we acknowledge the importance of identity removal, particularly for celebrities, as demonstrated in previous works. Based on your valuable suggestion, we have conducted additional experiments on celebrity removal. We have also performed a human judgment evaluation to collect quantitative results on celebrity removal. For this evaluation, we gathered responses from 9 subjects who were asked to score whether the generated images contained any information regarding the target celebrity (i.e., Elon Musk). The scoring range was from 1 (Not contained) to 5 (Most contained). As shown in the following tables, we observe that our method can effectively erase the target concept, similarly to ESD. 
However, our CLIP alignment scores on $D_r$ demonstrate better alignment after unlearning, indicating superior utility preservation. | AS (↑) | $D_{r,train}$ | $D_{r,test}$ | |------------|-----------|----------| | SD | 0.332 | 0.334 | | ESD | 0.294 | 0.299 | | Salun | 0.303 | 0.300 | | RGD (Ours) | 0.338 | 0.338 | | Human Judgment (↓) | $D_f$ | |-------------|-----------------------| | SD | 3.3 | | ESD | 1.0 | | Salun | 2.7 | | RGD (Ours) | 1.0 | We will incorporate these results and experiment details in our revised version. Thank you again for your great suggestion. **Qualitative improvements over SalUn appear to be small. Can the authors explain?** We appreciate your insightful feedback. We would like to clarify several points regarding our findings: - **Figure 3 Observations:** Salun generates visually similar images, regardless of the different forget prompts used. - **Figure 1 Observations:** Salun causes the unlearned model to forget many important semantic concepts, such as "quiet beach" and "sunrise". Additional qualitative examples can be found in Figure 8 in the appendix. - **Figure 5 Observations on Style Removal:** In style removal tasks, Salun loses closely related concepts such as Monet's style and da Vinci's style. These observations suggest that Salun's performance in reducing NSFW risk may be attributed to "overfitting" due to the uniformly designed forget and retain sets, which motivates our study on the importance of diversification in the unlearning process. Additionally, our experiments using the Salun method on SD v3 show further decreased alignment scores (Please find results in the general response). To further substantiate the qualitative comparison, we conducted a human evaluation. 
The results are attached below: | | Human Judgment on $D_r$ (↑) | AS on $D_r$ (↑) | |----------|----------------|-----------| | SD | 3.4 | 0.348 | | ESD | 3.0 | 0.330 | | Salun | 1.1 | 0.280 | | RGD (Ours) | 3.2 | 0.352 | For human judgment, we collected responses from 9 subjects who were asked to score a test set. The scoring range was from 1 (least aligned with a given prompt) to 5 (most aligned with a given prompt). Our results demonstrate that our method achieves judgment scores close to those of SD, while Salun performs poorly. We will incorporate details about this setup in our revised version. Thank you for your insightful question. [1] SALUN: Empowering Machine Unlearning via Gradient-Based Weight Saliency in Both Image Classification and Generation [2] Erasing Concepts from Diffusion Models --- Rebuttal Comment 1.1: Comment: The authors' rebuttal addressed some of my concerns. Based on that and the conversation between the authors and other reviewers, I updated my rating.
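The CLIP alignment score (AS) used throughout these tables can be made concrete with a minimal sketch, assuming the standard cosine-similarity form between CLIP image and text embeddings. The embeddings below are placeholders; in practice they would come from a pretrained CLIP encoder.

```python
import numpy as np

def clip_alignment_score(image_emb, text_emb):
    # Cosine similarity between an image embedding and a text embedding.
    # Higher values mean the generated image aligns better with its prompt;
    # the tables report this averaged over a prompt set such as D_r.
    image_emb = np.asarray(image_emb, dtype=float)
    text_emb = np.asarray(text_emb, dtype=float)
    return float(
        image_emb @ text_emb
        / (np.linalg.norm(image_emb) * np.linalg.norm(text_emb))
    )
```

An unlearned model is then scored by generating an image per retained prompt, embedding both, and averaging this similarity.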
Summary: This paper addresses the problem of unlearning in generative text-to-image models. They formulate a training objective that improves upon the commonly-used one that simultaneously minimizes the loss on the retain set while maximizing it on the forget set (referred to as GradDiff here); i.e. gradient descent on the retain set and ascent on the forget set, simultaneously. Specifically, as is well-established in the literature, there are trade-offs inherent in this optimization problem, e.g. maximizing the loss on the forget set may lead to “accidentally” destroying information that is permissible, with unwanted consequences like overly compromising on utility. The authors in particular emphasize a different unwanted consequence of not handling this trade-off well, that is specific to the context of text-to-image diffusion models: the generated images are no longer as well aligned with the text prompt after unlearning, compared to before. They measure this via an alignment score based on CLIP. They show that SalUn, for instance (a state-of-the-art method) actually has the lowest alignment score after unlearning, compared to other baselines. Motivated by these issues, the authors propose a modified objective that builds on GradDiff but leads to a solution that is more balanced between the two objectives. Instead of summing the retain set gradient and the (negated) forget set gradient directly, they sum two quantities which are the projection of the former gradient on top of the normalized gradient of the latter, and the other way around. This approach (which they refer to as the “restricted gradient”) is closely related, as they discuss, to tricks used in the multi-task learning literature to jointly optimize competing objectives. In addition, they discuss issues relating to the selection of the subset of the retain set that is used when operationalizing the training objective. 
They find that explicitly encouraging diversity there is important for avoiding issues relating to “overfitting” and maintaining the ability to generate diverse images that align well with textual prompts. They show results on CIFAR-10 and using Stable Diffusion, comparing against different unlearning baselines, for removing a class in CIFAR-10 or a concept (e.g. nudity) from SD models. They report qualitative results (showing generations for different prompts) as well as some quantitative results, like the accuracy of the unlearned model on different sets (based on using pretrained classification models, when given as input images generated by the unlearned model), a perceptual metric for the quality of generated images as well as the alignment score. They show that their method outperforms previous baselines in terms of unlearning effectiveness (according to the particular metric) with the smallest sacrifice to alignment score and utility compared to those baselines. Strengths: - The paper tackles an important problem and uncovers previously-unknown failure modes of existing algorithms (e.g. the lack of diversity of generations coming from SalUn, indicating potential overfitting to the retain set), the drop in alignment scores after unlearning. - The proposed objective is grounded in the multi-task literature and seems appropriate for handling competing objectives for unlearning too. - The empirical investigation is thorough from the perspective of datasets, metrics for utility, and the authors also conduct ablations and sensitivity analyses for their method. - The paper is for the most part well-written and easy to follow (see below for some exceptions). Weaknesses: - The paper is missing a clear problem formulation. Section 3 defines notation, but the objective isn’t stated. What is the goal of unlearning? What does it mean to unlearn something? 
The authors say this can be used for privacy or harmful generations or copyright, but it’s not clear how these all connect under a unified formulation and how the proposed method (and experimental design) actually addresses these problems. There is no formal definition and one must rely on the empirical metrics presented to infer how success is measured. Other types of unlearning (e.g. for privacy) are defined formally in the literature, e.g. [1,2]. Why is this definition not relevant here, since privacy is listed as one of the applications? See the below point too. - The authors don’t present any comparisons with retrain-from-scratch which is assumed to be the gold standard for unlearning, at least according to some definitions, as discussed above (and thus serves as a reference point for the desired level of performance, e.g. how high the accuracy on the forget set should be). Several metrics have been developed, like membership inference attacks and other statistical tests according to this definition [3,4]. The authors also state that retrain-from-scratch is the gold standard solution (and that privacy is one of the applications of interest) but don’t use these metrics. It would be great to either expand the experiments accordingly or adjust the scope of claims to include only the unlearning formulation (and applications) that best corresponds to the experiments conducted here. - Related work and baselines: [5] seems closely related to this work. It would be great to discuss the differences and compare empirically. It would also be great to compare against Safe Latent Diffusion [6]. In the experiments, how were the baselines chosen? Why not include also (Zhang et al, 2023), that was mentioned in section 2.2? Some claims are not well substantiated: - “these models, trained on vast amounts of public data, inevitably face concerns related to privacy” – if the training data is public, what are the privacy considerations? 
- “In this study, we aim to address the crucial concerns of harmful content generation and copyright infringement [...] by focusing on the removal of target classes or concepts” – it is not clear to me how copyright infringement can be addressed by removing classes or concepts. Can you please elaborate on the setup that you have in mind? Clarity issues relating to the experimental setup: - For UA, why is it that higher is better? I would have thought the opposite: unlearning a concept results in poorer accuracy on classifying images of that concept. This concern is also tied to the fact that the paper is missing a problem formulation, making it hard to understand how success is defined. - What is the rationale of evaluating model utility on the retain set only, and not the test set (e.g. in Table 1)? - the authors discuss the size of the retain set and how it’s sampled but not the size / sampling approach of the forget set. In general, several experimental details are lacking about the benchmarks used (if they are in the appendix, please refer to the relevant sections from the main paper). In CIFAR-10, for instance, is the forget set comprised of all and only the examples of a particular class? If so, which one? In the SD experiments, how is the forget set generated? - Table 1 caption “The metrics are averaged across all 10 classes” - the setup here is unclear; is there a different class that is unlearned each time, so UA is computed according to that one class, and then this process is repeated for unlearning different classes, and all UA’s are finally averaged? - In Table 2, how are D_{r,train} / D_{r, test} generated for the different applications? - “prompted by I2P” – what is I2P? Please add a citation if possible. Minor - “competitive nature” → “competing nature”? - “generative reply” → “generative replay”? - The sentence at the end of line 227 is incomplete “To fairly compare,”. References [1] Making AI Forget You: Data Deletion in Machine Learning. 
Ginart et al. 2019. [2] Descent-to-delete: Gradient-based Methods for Machine Unlearning. Neel et al. 2020. [3] Inexact Unlearning Needs More Careful Evaluations to Avoid a False Sense of Privacy. Hayes et al. 2024. [4] Are we making progress in unlearning? Findings from the first NeurIPS unlearning competition. Triantafillou et al. 2024. [5] One-dimensional adapter to rule them all: concepts, diffusion models and erasing applications. Lyu et al. 2023. [6] Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models. Schramowski et al. 2023. Technical Quality: 2 Clarity: 3 Questions for Authors: - I don’t understand how the proposed approach for diversifying the retain set in the case where the textual input is unconstrained actually achieves diversification (or to what extent). If I understand correctly, some prompts are first generated that relate to the target concept to erase, and then, to generate the retain set, these text prompts are modified for the purpose of removing words related to the target concept, and then retain images are generated from these modified prompts. But given that the prompts were initially generated to relate to the target concept, even with erasing some words, perhaps these prompts maintain some relationship to the target concept; or at least don’t uniformly cover the space of possible prompts. It seems that generating an initial set of prompts that aren’t necessarily related to the target concept (instead, they can relate to a wide range of diverse concepts) would yield more diversity. Did the authors consider this? In what ways do the authors view this approach as diversification? How does it relate to the approach usually taken in the literature for constructing the retain set? - How different do the authors think the accuracy results would be if using a different pretrained model to assess this? Would the overall conclusions or relative rankings between methods be the same? 
- The authors claim that their solution offers a monotonic improvement of each task. Is there any empirical evidence to support this claim?

Regarding my overall score / assessment, I am leaning towards weak reject due to issues with the problem description, the motivation and associated applications, describing / justifying design decisions in the experimental protocol, and tying the experiments in a little better with related work (see the weaknesses section). I look forward to the authors' rebuttal, especially on those points.

Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: I encourage the authors to think about the potential societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful feedback. We have provided responses below and hope they clarify the points you raised. Should you have any remaining concerns or questions, we would welcome further discussion. If our responses have adequately addressed your initial concerns, we would be grateful if you would consider adjusting your evaluation accordingly.

**Can the authors provide a formal definition for unlearning?**

We appreciate these insightful questions, which highlight important concerns for clarification. We acknowledge that our current presentation lacks a clear problem formulation and will address this in the revised version.

*Problem Formulation:* Our objective is to find an unlearned model with new optimal weights $\theta_u$, given a pre-trained model with original weights $\theta$, a forget set $D_f$, and a retain set $D_r$, such that the unlearned model has forgotten $D_f$ while maintaining its utility on $D_r$. Formally, we aim to:
- Maximize the forget error on $D_f$, captured by a forget term $L_f(\theta)$, which is defined so that minimizing $L_f$ corresponds to maximizing the forget error (gradient ascent on the raw forget loss)
- Minimize the retain error on $D_r$, represented by $L_r(\theta)$

We formulate this as $\min_\theta L_r(\theta) + L_f(\theta)$, which is presented in line 122 of our paper. This allows us to simultaneously pursue both objectives: forgetting specific data while preserving utility on the remaining data.

**Meaning of unlearning:** In our context, unlearning means the model can no longer produce outputs for "undesirable prompts" while preserving utility for non-target-related prompts. Our definition of unlearning is intentionally application-specific, aligning with the argument presented in [1]. This application-specific framework allows us to tailor our unlearning objectives to the particular needs of each use case. For example:
- For CIFAR-10: inability to generate target-class images, while still being able to generate non-target-class images.
- For nudity or style removal: inability to generate any form of exposed body parts or target styles, while still being able to generate non-target concept-related prompts. Our formulation directly addresses the concerns of harmful content generation and copyright infringement. By maximizing forget error on $D_f$, we ensure the model "forgets" how to generate harmful content or copyrighted styles. Simultaneously, by minimizing retain error on $D_r$, we maintain the model's overall utility on desired tasks. **Relevance to other formal unlearning definitions for privacy:** Other types of unlearning definitions (for privacy) often rely on model indistinguishability -- comparing models trained on full dataset vs those with specific data removed. However, recent literature [2,3] challenges the effectiveness of defining unlearning success solely through this lens. These critiques argue that indistinguishability from retrain-from-scratch models may be neither sufficient nor necessary for effective unlearning across various applications. As argued in [1], it's crucial to define 'forgetting' in an application-dependent manner. For example, a privacy application might prioritize defending against Membership Inference Attacks (MIAs), potentially at the cost of some model performance. However, in the case of removing harmful concepts like nudity, maintaining model performance is a core priority. In this case, the amount of data being forgotten becomes less critical, as long as the harmful concepts are effectively removed while maintaining overall model performance. This shift necessitates different evaluation metrics, as also discussed in [1]. Therefore, our application-specific approach to unlearning addresses two key challenges: 1. **Impracticality:** Collecting all training data related to a concept (e.g., "nudity") is often infeasible, computationally expensive, and requires additional assumptions about data access. 2. 
**Outcome-focused:** In concept removal, success is measured by the model's inability to generate target-related output, regardless of the specific parameters achieved.

For these reasons, we adopt an application-specific definition and goal of unlearning, which necessitates tailored evaluation metrics to assess its effectiveness.

**Lack of “retrain-from-scratch” comparisons, which are the gold standard for unlearning.**

We thank you for bringing up this excellent point and for the constructive suggestions. We acknowledge that our initial framing may have been overly broad. Our primary application is concept removal and safe generation, rather than privacy-focused applications. We will revise the paper to clarify this focus and remove potentially misleading references to privacy as a primary application. Since our unlearning definition is application-specific and outcome-oriented, in line with the literature [4,5], our chosen metrics directly measure the effectiveness of concept removal and the preservation of model utility; comparing with training from scratch is not necessary for evaluation. Moreover, retraining from scratch is not feasible at the scale of models of practical interest. For example, as described in [6], training a class-guidance diffusion model takes around 2 days on 8 Tesla V100 GPUs, and we would have to repeat it 10 times. For stable diffusion models it is even more impractical given the number of parameters and the amount of training data, which is why recent works do not compare with retrain-from-scratch for diffusion models [4,5,7].
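As a toy illustration of the joint objective $\min_\theta L_r(\theta) + L_f(\theta)$ and of how a "restricted" gradient step can avoid trading one objective against the other, the sketch below applies a PCGrad-style projection to NumPy gradient vectors. This is our own hypothetical construction for intuition only; the paper's actual restricted gradient (Eq. 4) may be defined differently, and all names here are illustrative.

```python
import numpy as np

def restricted_update(g_r, g_f, lr=0.1):
    """Illustrative 'restricted' gradient step for min_theta L_r + L_f.

    g_r: gradient of the retain loss L_r.
    g_f: gradient of the forget term L_f (defined so that descending it
         maximizes the forget error, i.e., gradient ascent on the raw
         forget loss).

    If the two descent directions conflict (negative inner product),
    project each gradient onto the normal plane of the other
    (PCGrad-style surgery, used here purely as an illustration), so that
    to first order neither loss increases for a small enough step size.
    """
    g_r_adj, g_f_adj = g_r.astype(float), g_f.astype(float)
    if np.dot(g_r, g_f) < 0:
        g_r_adj = g_r - (np.dot(g_r, g_f) / np.dot(g_f, g_f)) * g_f
        g_f_adj = g_f - (np.dot(g_f, g_r) / np.dot(g_r, g_r)) * g_r
    return -lr * (g_r_adj + g_f_adj)

# Conflicting toy gradients (negative inner product triggers the projection):
g_r = np.array([1.0, 1.0])
g_f = np.array([-1.0, 0.5])
d = restricted_update(g_r, g_f)
# The first-order change of each loss along d is d . g; both are non-positive.
```

Under this construction, both directional derivatives `np.dot(d, g_r)` and `np.dot(d, g_f)` are guaranteed non-positive, which is one concrete way to obtain the monotonic-improvement property discussed in Rebuttal 6 below.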
[1] Towards Unbounded Machine Unlearning
[2] On the Necessity of Auditable Algorithmic Definitions for Machine Unlearning
[3] Evaluating Inexact Unlearning Requires Revisiting Forgetting
[4] Erasing Concepts from Diffusion Models
[5] SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation
[6] Elucidating the Design Space of Diffusion-Based Generative Models (EDM)
[7] Forget-Me-Not: Learning to Forget in Text-to-Image Diffusion Models

---

Rebuttal 2: Title: Comparison to other baselines
Comment: We appreciate the reviewer's thoughtful suggestions regarding baseline comparisons. Our baseline selection was guided by two main principles:

**Complementarity:** In inexact unlearning, we can broadly categorize approaches into two categories:
1. **Modifying the parameters:** These methods directly alter the model's weights to remove target knowledge.
2. **Modifying the inference:** These approaches change the inference process without altering the original model.

Our work falls into the first category, focusing on modifying the model's parameters. Safe Latent Diffusion (SLD) [1] modifies the inference process to prevent certain concepts from being generated, which falls into the second category. We consider works from the second category as complementary to our approach. Moreover, the method proposed in [2] trains only an adapter and can be applied as a plug-and-play solution to other pre-trained models, which differs from our parameter-level modifications; we believe this also falls into the second category. Therefore, we did not include these methods in our comparisons, as they could potentially be used in conjunction with our method rather than as direct alternatives. However, since [6] falls into the first category, we have added additional comparison results in the general response.
**State-of-the-art performance:** We prioritized comparisons with methods that represent the current state of the art in the field. For example, both [1] and [3] have been outperformed by more recent methods such as [4] and [5], which we included in our comparisons.

[1] Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models
[2] One-Dimensional Adapter to Rule Them All: Concepts, Diffusion Models, and Erasing Applications
[3] Forget-Me-Not: Learning to Forget in Text-to-Image Diffusion Models
[4] Erasing Concepts from Diffusion Models
[5] SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation
[6] Ablating Concepts in Text-to-Image Diffusion Models

---

Rebuttal 3: Title: Some claims are not well substantiated:
Comment: **What are the privacy considerations of training on public data?**

We thank the reviewer for highlighting this point. Upon reflection, we agree that our statement about privacy concerns in the context of models trained on public data might be too broad for the scope of our paper, given that our work focuses on concept removal rather than privacy. We will revise the paper to recalibrate the framing.

**Elaboration on how copyright infringement can be addressed by removing concepts from a model.**

Thank you for your insightful question regarding the motivation and associated applications of our work. Our focus on copyright infringement is motivated by recent developments in generative AI. As described in [1], companies like Stability AI and MidJourney are facing civil lawsuits for training their models on artists' work without consent, enabling their models to replicate the style and work of these artists. In response to this issue, we study "style removal" as a practical application of our unlearning method to the copyright infringement setting.
For example, we aim to demonstrate that our method can effectively remove a particular artist's style from a model's capabilities. This removal would prevent the model from generating images in the style of the artist whose work was used without permission, thus addressing the copyright infringement issue. [1] On Copyright Risks of Text-to-Image Diffusion Models --- Rebuttal 4: Title: Clarity issues relating to the experimental setup: Comment: **UA – how come higher is better on this metric?** We thank the reviewer for pointing out the potential confusion regarding the UA metric. We define UA as 1 - accuracy of the unlearned model on $D_f$ (as noted in line 232 of our paper). This definition aims to measure the model's inability to generate or correctly classify forgotten concepts. Higher UA values indicate better unlearning performance, as they represent a greater error rate on the forget set. In other words, a higher UA suggests that the model has effectively 'forgotten' how to generate or recognize the target concepts. We adopted this metric definition to maintain consistency with previous research [1]. **Rationale of evaluating model utility on the retain set only, not the test set.** Thank you for your question about our evaluation. Our evaluation pipeline for Table 1 focuses on two main aspects: the effectiveness of forgetting the target class and the preservation of model utility for the remaining classes. We don't use a separate test set in the traditional sense but rather evaluate on generated images. *Remaining Accuracy (RA)* is used to evaluate whether the unlearned model can still generate the remaining classes correctly after erasing the target class. We ask the unlearned model to generate images belonging to the remaining classes, which effectively serve as a 'test set'. We calculate the classification performance of these generated images to ensure each class of remaining images is well preserved in the unlearned model. 
*Unlearning Accuracy (UA)* evaluates how effectively the model has forgotten the target class. We prompt the unlearned model to generate images of the target class and then use a pre-trained classifier to determine whether these generated images still belong to the target class. We will make this clear in our revised version by including more details. Thank you again for your question.

**Missing experimental details for forget set construction in CIFAR-10 and SD experiments.**

Thank you for raising this point. We acknowledge the lack of detail regarding the forget set in our initial presentation and will incorporate these details in our revised version.

For CIFAR-10 experiments: the forget set $D_f$ comprises all 5,000 images of a particular target class.

For Stable Diffusion (SD) experiments: our forget set contains 800 images with their corresponding prompts. The forget prompts are generated by adding the target concept (e.g., "nudity" or specific artist styles) to the retain prompts ($D_r$).

**Table 2: How are $D_{r,train}$, $D_{r,test}$ generated?**

We appreciate the reviewer's questions regarding the generation of the $D_{r,train}$ and $D_{r,test}$ sets.

*For the nudity removal application:*
a) We use a structured approach to generate diverse prompts for $D_r$, considering multiple dimensions such as activities, environments, times, and moods provided by a Large Language Model (LLM).
b) For each dimension, we use LLMs to suggest multiple subconcepts, incorporating diverse semantics within each dimension, such as walking and sitting under activities.
c) To create $D_{r,train}$ and $D_{r,test}$, we split the subconcepts in each dimension into train and test sets, ensuring that there is no overlap between them.
d) We then combine these subconcepts to generate $D_r$.
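The subconcept split and combination in steps (a)–(d) can be sketched as follows. The subconcept lists below are illustrative placeholders of our own; the paper obtains them from an LLM, and the concrete prompt template is also an assumption:

```python
import itertools
import random

# Hypothetical subconcepts per dimension (the paper sources these from an LLM).
dimensions = {
    "activity":    ["walking", "sitting", "reading", "cooking"],
    "environment": ["in a forest", "on a beach", "in a city", "at home"],
    "time":        ["at sunrise", "at noon", "at sunset", "at night"],
    "mood":        ["calm", "joyful", "pensive", "serene"],
}

def split_subconcepts(dims, train_frac=0.5, seed=0):
    """Split each dimension's subconcepts into disjoint train/test pools,
    so D_r,train and D_r,test share no subconcept."""
    rng = random.Random(seed)
    train, test = {}, {}
    for dim, subs in dims.items():
        subs = subs[:]
        rng.shuffle(subs)
        k = int(len(subs) * train_frac)
        train[dim], test[dim] = subs[:k], subs[k:]
    return train, test

def make_prompts(dims, subject="A person"):
    """Combine one subconcept per dimension into a retain prompt."""
    keys = list(dims)
    return [
        f"{subject} {a} {e} {t}, {m}"
        for a, e, t, m in itertools.product(*(dims[k] for k in keys))
    ]

train_dims, test_dims = split_subconcepts(dimensions)
d_r_train = make_prompts(train_dims)
d_r_test = make_prompts(test_dims)
# Forget prompts simply add the target concept word to the retain prompts:
d_f = [p.replace("A person", "A nude person") for p in d_r_train]
```

Splitting at the subconcept level, rather than the prompt level, is what ensures $D_{r,test}$ probes combinations the unlearned model never saw during training.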
*For the style removal application:* similar to nudity removal, we construct templates with multiple dimensions such as the artist's name, actions, environments, and moods, then fill in each dimension with suggestions from LLMs. The only difference between the retain and forget sets is that in the forget set ($D_f$) we use the name of the target artist we want to unlearn (e.g., Van Gogh), while the retain set ($D_r$) uses other artists' names or fictional names. We will revise the paper to incorporate these details, providing a clearer and more comprehensive explanation of our method for generating $D_{r,train}$ and $D_{r,test}$ in each application.

**Table 1: How are the metrics computed across the 10 classes?**

Thank you for bringing attention to this potentially confusing point. The interpretation provided is correct and reflects our intended meaning. To clarify: the caption "The metrics are averaged across all 10 classes" indeed means that we repeat the unlearning process for each class separately, and the results are averaged to produce the final metrics presented in the table. We will revise the caption to make this clearer.

**What is I2P?**

Thank you for the request for clarification. I2P stands for "Inappropriate Image Prompts", a collection of prompts designed to evaluate the effectiveness of content moderation in text-to-image models. We have cited this work in line 237 of our paper.

[1] SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation

---

Rebuttal 5: Title: Additional Questions
Comment: **Explain the proposed approach for diversifying the retain set. In particular, why not generate an initial set of prompts (that aren't necessarily related to the target concept)?**

Thank you for your insightful question about our approach to diversifying the retain set.
We fully agree with your suggestion that generating an initial set of prompts unrelated to the target concept would yield more diversity. In fact, this is precisely the approach we take. To clarify: our initial set of retain prompts ($D_r$) is generated to cover a wide range of diverse dimensions unrelated to the target concept we aim to erase. These dimensions include activities, environments, times, moods, and others. At the same time, our retain set aims to preserve the model's understanding of a broader category (e.g., "a person") that could be potentially affected by erasing “nudity.” To create $D_f$, we include target-related concept words (e.g., "nude", "naked") in the diverse prompts generated. For example, a prompt in $D_r$ might be "A person walking in a forest at sunset", while the corresponding $D_f$ prompt would be "A nude person walking in a forest at sunset". This approach ensures no direct relationship between the target concept and the retain concepts, as the initial prompts are generated independently of the target concept. We will incorporate the details of generating a diversified retain set in our revised paper. **How does it relate to the approach usually taken in the literature for constructing the retain set?** We appreciate your question about how our approach relates to existing methods in the literature. The design of the retain set has been largely overlooked in previous studies, despite its crucial role. For example, **ESD** doesn't utilize retain set information, yet claims alignment scores comparable to the original SD based on COCO dataset evaluation. However, our evaluation using "person"-related benign prompts reveals a drop in their alignment scores (as shown in Table 2), likely due to the interconnection between erasing "nudity" and general person image generation. 
**Salun** uses single repeated prompts for the forget set (e.g., "a photo of a nude person") and retain set (e.g., "a photo of a person wearing clothes"), which can potentially lead to overfitting. Therefore, we believe we have initiated the discussion on the importance of carefully designing the retain set, and our more systematic approach to retain set design opens up interesting avenues for further research.

**How different would accuracy results be if using a different pre-trained model? Would the overall conclusions or relative rankings between methods be the same?**

Thank you for this important question about the generalizability of our results. To address this, we have conducted additional evaluations using SD v3, the most recent version of the pre-trained model. Our approach and findings are as follows (please refer to the tables in the general response):

*Model Architecture:* SD v3 employs a transformer-based architecture (e.g., Diffusion Transformer models) instead of the UNet-based architecture used in previous versions. This significant change allows us to test our method's performance across different model structures.

*Model Size:* SD v3 offers a range of model sizes, with the largest being nearly 10 times the size of v1.4. We chose a medium-sized model with 2B parameters, approximately 2 times larger than v1.4. This variability enables us to assess how our method performs across different model capacities.

*Evaluation Approach:* We maintained the same hyperparameter settings as in v1.4 to test how easily our method generalizes. We evaluated two baselines alongside our method, observing their performance across multiple hyperparameter settings.

*Results:*
a) Alignment Scores: We observed high alignment scores for both the $D_{r,train}$ and $D_{r,test}$ splits with SD v3, while effectively mitigating harmful output generation.
b) Baseline Comparison: Both baselines showed significant alignment score drops even under multiple hyperparameter tunings, while our method continued to outperform them.

---

Rebuttal 6: Title: Additional Question
Comment: **The authors claim that their solution offers a monotonic improvement of each task. Is there any empirical evidence to support this claim?**

We appreciate your question regarding empirical evidence for our claim of monotonic improvement. Our claim refers to consistent improvement on both objectives—forgetting the target concept (the 'forget' task) and maintaining performance on retained concepts (the 'retain' task)—without sacrificing one for the other. Theoretically, monotonic improvement is guaranteed given access to the true loss gradients in Eq. 4 and as the step size tends towards zero. In practice, there are two considerations:
1. The step size cannot be infinitely small and must be balanced against the computational budget of the training algorithm, as smaller step sizes result in longer training times.
2. The true expected loss (i.e., the expectation of the loss over the entire data distribution) is not tractable, and we must approximate it via the empirical batch-wise loss. This results in minor deviations at each step and a stochastic approximation of the gradients in Eq. 4.

To verify our claim in the empirical setting, we perform unlearning with our approach and the baseline for the class-wise forgetting experiment; please see Figure 2 in the PDF:
- **Forget Losses Comparison:** The left figure shows the loss for the 'forget' task. Our method maintains a higher loss compared to the baseline. This higher loss indicates better 'forgetting' of the target concept, as we want the model to perform poorly on this task.
- **Retain Losses Comparison:** The right figure shows the loss for the 'retain' task.
Our method maintains a relatively stable and low loss throughout the iterations, especially compared to the baseline, which shows a sharp increase after 300 iterations. This stability is crucial, as it indicates our method consistently preserves model utility on retained concepts. We note that the retain loss doesn't show a decreasing trend because our primary goal for the retain set is to maintain utility, not necessarily to improve it. The unlearning process naturally tends to increase the retain loss as we perform gradient ascent. However, our "$\min D_r$" objective counteracts this increase (i.e., decreases the retain loss), resulting in the observed stability.

---

Rebuttal Comment 6.1: Title: Response to authors
Comment: Dear authors, Thank you for the thorough responses. I am satisfied with the discussion of the problem definition and with the promised modifications on clarifying the problem definition and the scope of the paper (in particular, discussing the application-specific definition of "unlearning" as defined in the rebuttal, the differences from privacy definitions, and reducing the scope of the claims to exclude privacy applications). Thank you as well for all of the clarifications, including regarding the copyright application, the UA metric (please do emphasize more that it's 1 - the accuracy, as I found this not to be obvious), and the experimental setup. I also appreciated the additional experiments with the different pretrained model, to test generalizability, and the explanation and results re: the "monotonic improvement" question. I will increase my score in light of the above. Please do incorporate these clarifications in the paper (or in a section in the appendix). Thanks!

---

Rebuttal 7: Title: Official Comment by Authors
Comment: Dear Reviewer RWKV, We are happy to hear that our responses have addressed your concerns and questions. We appreciate you taking the time to read our rebuttal and adjust your evaluation accordingly.
We will incorporate all the clarifications, additional experimental results, and suggested modifications discussed during the rebuttal into our revised version. Thank you once again for your valuable, constructive feedback and for your consideration. Best regards, Authors
Summary: This paper tackles the approximate machine unlearning task of target class and concept removal from diffusion models. This work endeavors to improve on the existing literature's output quality and text alignment after unlearning. First, a concept of the restricted gradient is proposed, allowing monotonic decreases of the two losses from the objectives of unlearning quality and remaining-data alignment. Second, data is deliberately processed to improve its diversity, which is beneficial to text alignment. The two components, corresponding to the two aforementioned objectives, have proven effective in the ablation studies and other comparative results.

Strengths:
1. The paper is well-written and easy to follow, and the implementation details are also well-documented.
2. The task is well-motivated and is of practical significance to the field of safe generation using diffusion models.
3. The idea of turning a trade-off between two learning objectives into monotonic optimization is novel and useful.
4. The ablation study is a plus, showing the effectiveness of both design components of the method.

Weaknesses:
1. The presentation in Table 1 is misleading: the baseline method Finetune has the best RA and FID results; however, the results of the proposed RGD method are highlighted in bold.
2. There are two closely relevant baselines [1, 2] that need to be compared, as they tackle the same task of concept removal and safe generation and have been popular baselines for this line of work. Missing such a comparison casts doubt on the proposed method's practical significance.

[1] Ablating Concepts in Text-to-Image Diffusion Models. ICCV 2023.
[2] Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models. CVPR 2023.

Technical Quality: 2
Clarity: 3

Questions for Authors: Please see the Weaknesses section; the missing baseline comparison is my main concern.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations and broader impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful feedback. We have provided detailed responses to your questions and concerns below. Should you have any remaining concerns or questions, we would welcome further discussion. If our responses have adequately addressed your initial concerns, we would be grateful if you would consider adjusting your evaluation accordingly. **Presentation of RA and FID results in Table 1.** Thank you for your careful review and for pointing out this discrepancy in Table 1. You are correct that the presentation could be seen as misleading, as the Finetune method indeed shows the best results for RA and FID metrics. The bolding of RGD results was intended to highlight our proposed method's overall performance across all three metrics. While Finetune excels in RA and FID, RGD shows strong performance in UA while maintaining competitive results in RA and FID. We believe this balance represents a significant advancement, especially considering the trade-offs often encountered in unlearning tasks. However, we acknowledge that our current presentation may not clearly convey this nuanced comparison. We will revise Table 1 to more accurately represent the relative strengths of each method, by highlighting the best result in each column. **Comparison to relevant baselines [1, 2].** We appreciate the reviewer's suggestion to include additional baselines. This feedback is valuable and helps us further demonstrate the significance of our work. In response, we have conducted additional experiments to compare our method with one of the baselines mentioned in [1]. Our findings are as follows: *Comparison with [1]:* We implemented [1] following their "ablating nudity" setup. Using 200 prompts related to the "anchor concept" with nudity and nsfw as caption targets, our results show it struggles to eliminate the "nudity" concept despite achieving similar alignment scores to SD. We note that their paper does not provide results on nudity concept removal. 
Therefore, this application likely requires further study of prompt generation and approach usage.

| I2P Prompts | Female Genitalia | Buttocks | Male Breast | Belly | Male Genitalia | Armpits | Female Breast |
|---|---|---|---|---|---|---|---|
| SD | 16 | 30 | 48 | 136 | 6 | 100 | 262 |
| [1] | 15 | 13 | 20 | 116 | 3 | 80 | 231 |
| RGD (Ours) | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

*Harmful Removal*

| AS (↑) | $D_{r,train}$ | $D_{r,test}$ |
|---|---|---|
| SD | 0.357 | 0.352 |
| [1] | 0.354 | 0.347 |
| RGD (Ours) | 0.354 | 0.350 |

Their style removal approach does not preserve close concept alignments, like Monet, well, as shown in Figure 1, which is also described in their limitations. Also, the alignment score drops after style removal.

*Style Removal*

| AS (↑) | $D_{r,train}$ | $D_{r,test}$ |
|---|---|---|
| SD | 0.349 | 0.348 |
| [1] | 0.340 | 0.339 |
| RGD (Ours) | 0.355 | 0.352 |

*Regarding the baseline [2]:* We note that our baseline selection was guided by two main principles:

**Complementarity:** In inexact unlearning, we can broadly categorize approaches into two categories:
a) Modifying the parameters: these methods directly alter the model's weights to remove target knowledge.
b) Modifying the inference: these approaches change the inference process without altering the original model.

Our work falls into the first category, focusing on modifying the model's parameters, while Safe Latent Diffusion (SLD) [2] modifies the inference process to prevent certain concepts from being generated, which falls into the second category. We consider works from the second category as complementary to our approach. Therefore, we did not include these methods in our comparisons, as they could potentially be used in conjunction with our method rather than as direct alternatives.
**State-of-the-art performance:** We prioritized comparisons with methods that represent the current state of the art in the field. For example, [2, 3] have been outperformed by more recent methods such as [4] and [5], which we included in our comparisons.

[1] Ablating Concepts in Text-to-Image Diffusion Models
[2] Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models
[3] Forget-Me-Not: Learning to Forget in Text-to-Image Diffusion Models
[4] Erasing Concepts from Diffusion Models
[5] SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation

---

Rebuttal Comment 1.1: Comment: I thank the authors for their rebuttal, which has well addressed my concerns. I keep my positive recommendation for acceptance.
Rebuttal 1: Rebuttal: **General response**

We thank the reviewers for their thoughtful feedback. We are glad the reviewers find that:
- **Our paper addresses an important problem in machine unlearning** [h3BB, RWKV, DUTP]
- **Our paper is well-written and clearly presented** [h3BB, RWKV, DUTP]
- **Our proposed method is novel and effective** [h3BB, RWKV, DUTP, 68ka]

Our paper presents a new approach to unlearning in generative models, particularly text-to-image diffusion models, focusing on balancing the unlearning objective and maintaining model utility via a restricted gradient.

**Contributions**

We introduce a novel "restricted gradient" approach that allows for the improvement of both the unlearning and retain objectives. We propose a "diversification" method to incorporate diversity into the retain set. This approach addresses the observed failures of existing methods in maintaining model utility, an aspect that has been overlooked in previous studies. We demonstrate improved performance in unlearning effectiveness while better preserving model utility and text-image alignment compared to existing baselines, through class-wise removal and concept removal experiments, showing the effectiveness of our method in various unlearning applications.
**Paper improvements made in response to feedback**

In response to the reviewers' feedback, we have conducted additional experiments:
- We have added one more baseline for comparison (reviewers h3BB, RWKV)
- We have included experiments with an additional pre-trained model (SD v3) to evaluate the generalization of our method (reviewer RWKV)
- We have conducted experiments to evaluate the improvement of our solution compared with the baseline (reviewer RWKV)
- We have included the celebrity removal experiments, along with a human study (reviewer DUTP)
- We have conducted a human judgment evaluation on style removal for comparison with CLIP alignment (reviewer 68ka)
- We have performed an ablation study on the size of the forget and retain sets (reviewer 68ka)
- We have added the baseline diffusion model's (SD v1.4) UA, RA, and FID metrics (reviewer 68ka)

Moreover, we will incorporate details regarding the experimental setup and provide further clarifications based on the reviewers' suggestions in the revised version. We thank the reviewers for all their insightful comments, constructive questions, and suggestions.

**Additional baseline comparison (reviewers h3BB, RWKV)**

We implemented [1] following their "ablating nudity" setup. Using 200 prompts related to the "anchor concept" with nudity and nsfw as caption targets, our results show it struggles to eliminate the "nudity" concept despite achieving similar alignment scores to SD. We note that their paper does not provide results on nudity concept removal. Therefore, this application likely requires further study of prompt generation and approach usage.
| I2P Prompts | Female Genitalia | Buttocks | Male Breast | Belly | Male Genitalia | Armpits | Female Breast |
|---|---|---|---|---|---|---|---|
| SD | 16 | 30 | 48 | 136 | 6 | 100 | 262 |
| [1] | 15 | 13 | 20 | 116 | 3 | 80 | 231 |
| RGD (Ours) | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

*Harmful Removal*

| AS (↑) | $D_{r,train}$ | $D_{r,test}$ |
|---|---|---|
| SD | 0.357 | 0.352 |
| [1] | 0.354 | 0.347 |
| RGD (Ours) | 0.354 | 0.350 |

Their style removal approach does not preserve alignment with close concepts such as Monet well, as shown in Figure 1, which is also described in their limitations. Also, the $D_r$ alignment score drops after style removal.

*Style Removal*

| AS (↑) | $D_{r,train}$ | $D_{r,test}$ |
|---|---|---|
| SD | 0.349 | 0.348 |
| [1] | 0.340 | 0.339 |
| RGD (Ours) | 0.355 | 0.352 |

[1] Ablating Concepts in Text-to-Image Diffusion Models

**Additional generalization evaluation - evaluate on SD v3 (reviewer RWKV)**

| I2P Prompts | Female Genitalia | Buttocks | Male Breast | Belly | Male Genitalia | Armpits | Female Breast |
|---|---|---|---|---|---|---|---|
| SD v3 | 0 | 1 | 9 | 69 | 4 | 58 | 46 |
| ESD | 0 | 0 | 2 | 10 | 0 | 4 | 6 |
| Salun | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| RGD (Ours) | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

| AS (↑) | $D_{r,train}$ | $D_{r,test}$ |
|---|---|---|
| SD v3 | 0.364 | 0.371 |
| ESD | 0.335 | 0.332 |
| Salun | 0.079 | 0.088 |
| RGD (Ours) | 0.362 | 0.370 |

We have conducted additional evaluations using SD v3, the most recent version of the pre-trained model. Our findings are as follows: 1) We maintained the same hyperparameter settings as in v1.4 to test how easily our method generalizes.
2) We evaluated two baselines alongside our method under multiple hyperparameter tunings. We observed high alignment scores for both $D_{r,train}$ and $D_{r,test}$ splits with SD v3, while effectively mitigating harmful output generation. On the other hand, both baselines showed alignment score drops. Pdf: /pdf/b1266fd545c5f5f55a96994e377e7116c98cb2ae.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
UNION: Unsupervised 3D Object Detection using Object Appearance-based Pseudo-Classes
Accept (poster)
Summary: This work presents an unsupervised 3D object detection method named UNION, which exploits LiDAR, camera, and temporal information jointly for generating pseudo bounding boxes to train existing object detectors. In addition, the authors introduce an appearance-based clustering method to generate pseudo class labels and train the object detector in a multi-class fashion. Strengths: ++ The method has significant improvement on the accuracy compared to existing unsupervised methods. ++ This method does not require the time-consuming multi-round self-training procedure. Weaknesses: -- There are not any qualitative results in the paper. Visualizations on the pseudo box/class generation and the comparative results would help readers understand the method more comprehensively. -- Detailed analysis of failure cases would make this paper stronger. For example, in which scenarios UNION would generate bad pseudo labels, and how each step of UNION contributes to different failure cases. -- Details about the HDBSCAN hyper-parameters are missing. Technical Quality: 3 Clarity: 2 Questions for Authors: -- Could the authors explain why OYSTER and LISO are not included in the multi-class experiments? -- Will self-training help improve the detector trained with UNION pseudo labels? -- For visual appearance encoding, DINOv2 usually can only get a lower-resolution feature map of the input image. How did the authors obtain point-wise features? -- Which variant of DINOv2 is used for the experiments in this paper? -- Could the authors provide the runtime analysis for each step of UNION? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Limitations are discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your valuable feedback. **Qualitative results of pseudo-bounding box generation** We provide qualitative results of the intermediate outputs of the UNION pipeline and the final generated pseudo-bounding boxes **in Figure 2 in the rebuttal PDF**. This figure shows sample 2 from scene-1100, which is part of the nuScenes training dataset. It can be seen that the scene flow can identify multiple dynamic objects, and the appearance clustering can discover static mobile objects, including vehicles and pedestrians, using those dynamic instances. **UNION failure cases** The UNION pipeline consists of multiple components that each have their limitations, and as a result, UNION can fail to generate good pseudo-labels for some cases. The spatial clustering using HDBSCAN fails, for example, to generate correct clusters if multiple objects are close to each other, when objects are partially occluded, or when objects are far from the LiDAR (few points). This may be (partially) solved by using temporal tracking in combination with bounding box refinement. In addition, the self-supervised scene flow may fail to estimate a correct velocity when the spatial cluster contains few points. This also may be solved by using temporal tracking or other sensors such as radar. Finally, the parallax effect sometimes causes problems when computing the appearance embedding, e.g., LiDAR points are projected on the wrong object in the camera image. As a result, objects may be part of the wrong appearance cluster and, thus, may be falsely labeled as mobile or non-mobile. One failure case is shown at the bottom left of the UNION results in Figure 2. According to the ground truth, many pedestrians are located there, but UNION only discovers 2 of them (the bus stop partially occludes some and they have few LiDAR points). **HDBSCAN hyperparameters** We will add an appendix to the paper to list all the hyperparameters we used for UNION. 
For HDBSCAN, we used the implementation from scikit-learn, a Python module for machine learning built on top of SciPy. The minimum cluster size is set to 16 points, and the cluster selection epsilon is set equal to 0.50 meters, i.e., clusters are merged if they are within half a meter of each other. **OYSTER and LISO are not included in the multi-class experiment** The methods OYSTER [1] and LISO [2] are for unsupervised class-agnostic 3D object detection, i.e. they generate class-agnostic pseudo-bounding boxes that can be used for training existing detectors. So, their pseudo-bounding boxes cannot be used for training multi-class detectors as no (pseudo-)class labels are available. The same holds for HDBSCAN; therefore, we assigned class labels to the class-agnostic bounding boxes based on the size of the bounding boxes **(see Table 4 in the submission and Table 2 in the rebuttal PDF)**. However, the source code of OYSTER and LISO has not been released (yet), so we cannot use the size prior for these two methods. Consequently, we cannot compare multi-class UNION to 'OYSTER+Size prior' and 'LISO+Size prior' for the multi-class 3D object detection experiment. **Self-training with UNION** We have experimented with self-training for CenterPoint trained with the class-agnostic pseudo-labels from UNION. We used the following strategy for each self-training round: (1) predict with the trained CenterPoint on the training dataset from nuScenes, (2) filter the predictions using a score threshold, i.e. only keep the prediction if the predicted score is at least equal to the score threshold, and (3) train CenterPoint from scratch using the filtered predictions. We did this for three rounds for three different score thresholds, namely 0.10, 0.20, and 0.30. The average precision (AP) did not improve for any score thresholds. In addition, we noticed that the AP decreased after each round of self-training. 
Especially for score threshold 0.10, the performance drop was significant, i.e. the AP dropped to less than 20, while UNION achieved 38.4 **(see Table 1 in the rebuttal PDF)**. So, we did not experience any benefit from self-training after training with the UNION pseudo-labels. This is in line with the objective of UNION, discovering the static and dynamic mobile instances before training such that the computationally expensive self-training is not needed and that consistency during training enhances the detection performance (e.g. the detector is not penalized for detecting a parked car during training). **Point-wise camera-based features** The large vision transformer we use has a stride of 14, i.e. the spatial resolution of the obtained feature map is 14 times smaller in each dimension. Each LiDAR point is first projected to the image plane. After that, the pixel coordinate is transformed to feature map coordinates (i.e. divided by the stride), and then bilinear interpolation is used to get a camera-based feature vector for each LiDAR point, so that the lower spatial resolution does not constrain us. **DINOv2 version** See global rebuttal text: 'Different image encoders'. **Runtime analysis of UNION** We have computed the runtime statistics for creating the class-agnostic pseudo-labels with UNION, which are used for training the detector (CenterPoint). We tested on a system with 2 Intel Xeon E5-2690 v4 CPUs (56 logical CPUs) and 8 NVIDIA V100 32GB GPUs. The training dataset of nuScenes consists of 700 sequences. In practice, we process the sequences in parallel in 8 threads. As a result, it takes 9, 19, 7, 9, and 2 hours for the ground removal, spatial clustering, motion estimation (scene flow), image encoding (DINOv2), and appearance embedding, respectively. After that, the appearance clustering is done for all sequences together, and this takes 1 hour.
(2024). LISO: Lidar-only Self-Supervised 3D Object Detection. In ECCV. --- Rebuttal Comment 1.1: Comment: Thank you for the responses. I have no further concerns.
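The point-wise feature lookup described in the rebuttal above (project each LiDAR point to the image, divide the pixel coordinate by the ViT stride of 14, then bilinearly interpolate the feature map) can be sketched in numpy; the feature map and coordinates below are synthetic stand-ins, not actual DINOv2 outputs:

```python
import numpy as np

STRIDE = 14  # patch stride of the ViT-L/14 backbone

def sample_pointwise_features(feat_map, pixel_uv):
    """Bilinearly interpolate an (H, W, C) feature map at pixel coordinates.

    pixel_uv: (N, 2) array of (u, v) image coordinates of projected LiDAR points.
    """
    H, W, C = feat_map.shape
    uv = pixel_uv / STRIDE  # image coords -> feature-map coords
    u = np.clip(uv[:, 0], 0, W - 1)
    v = np.clip(uv[:, 1], 0, H - 1)
    u0 = np.floor(u).astype(int); v0 = np.floor(v).astype(int)
    u1 = np.minimum(u0 + 1, W - 1); v1 = np.minimum(v0 + 1, H - 1)
    du = (u - u0)[:, None]; dv = (v - v0)[:, None]
    # interpolate along u on the top and bottom rows, then along v
    top = feat_map[v0, u0] * (1 - du) + feat_map[v0, u1] * du
    bot = feat_map[v1, u0] * (1 - du) + feat_map[v1, u1] * du
    return top * (1 - dv) + bot * dv

# example: one projected LiDAR point on a random 64x114 feature map
feats = sample_pointwise_features(np.random.rand(64, 114, 8), np.array([[700.0, 400.0]]))
```

Because of the interpolation, points falling between patch centers receive a smooth blend of neighbouring patch features rather than a single quantized patch vector.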
Summary: This paper explores the challenge of unsupervised 3D object detection, introducing UNION. UNION leverages camera, LiDAR, and temporal information jointly to train 3D object detectors without relying on self-training. The approach demonstrates strong performance, particularly on the nuScenes dataset. Additionally, the authors tackle unsupervised multi-class 3D object detection by clustering object appearance embeddings and employing these clusters as pseudo-class labels. Strengths: 1. This paper studies a significant problem in unsupervised 3D object discovery. 2. The paper is clearly written, and its motivation is straightforward. 3. This method introduces visual embeddings so that certain static objects can be recovered during object proposal generation. 4. This method achieves good performance on the nuScenes dataset. Weaknesses: 1. Compared to prior work, the method's novelty is weak. The core innovation lies in leveraging self-supervised visual features to aid in object proposal extraction, thereby eliminating the need for self-training to acquire static vehicle data. 2. The core exploration of this work revolves around object appearance embedding. The current study only investigates the features of DINOv2. It remains to be explored how other self-supervised features perform in comparison, or whether combining different self-supervised methods could yield better results. 3. As for training the multi-class detector, I believe the choice of K is crucial. The authors use K=10 as the default. An ablation study on the impact of K should be conducted. 4. As depicted in Table 2, the discussion centers around the challenges in orientation estimation. To enhance clarity, it would be beneficial to include detailed metrics such as ATE, AOE, etc. 5. In spatial clustering, when utilizing point cloud aggregation, how are dynamic objects handled, such as multiple moving cars whose point clouds may overlap? 6. As shown in Table 4, the AP for cyclists is always zero. 
Does this imply limitations of the method for detecting long-tail objects? What are some potential directions for improving detection of these less frequent objects in the future? Technical Quality: 3 Clarity: 3 Questions for Authors: See weakness. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors have discussed the limitations of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your valuable feedback. **The method's novelty** UNION extracts object proposals by spatially clustering the non-ground points from LiDAR. Subsequently, the velocity of each object proposal is estimated using self-supervised scene flow, and the cameras are leveraged to encode its visual appearance into an object appearance embedding. These different types of information are then fused in a novel way to discover mobile objects in 3D space, such as vehicles, pedestrians, and cyclists: the visual appearances are clustered into $K_{1}$ appearance clusters, and the fraction of dynamic object proposals per appearance cluster is calculated. The appearance similarity between static and dynamic mobile objects is utilized to discriminate between static mobile objects (e.g. parked vehicles) and background instances (e.g. trees and buildings), i.e. appearance clusters are labeled as mobile when they contain at least 5 percent dynamic objects. The instances of these mobile appearance clusters together form the set of (class-agnostic) mobile objects and can be used to train any 3D object detector in an unsupervised manner. Compared to existing methods such as OYSTER [1] and LISO [2], we do not rely on computationally expensive self-training to detect static mobile objects because we can discover them entirely unsupervised using their visual appearance. This significantly reduces the training time of the detector, as only a single training run is required instead of multiple training rounds (e.g. sequentially training 5-10 times), while at the same time obtaining a much better detection performance **(see Table 1 in our rebuttal PDF)**. **Different image encoders** See global rebuttal text: 'Different image encoders'. 
**Ablation for number of pseudo-classes $K_2$ in multi-class detection** We have added three hyperparameter configurations for the multi-class detection, namely 5, 15, and 20 pseudo-classes **(see Table 2 in the rebuttal PDF)**. The table shows that UNION with 5 pseudo-classes achieves the highest average precision (AP) and nuScenes detection score (NDS). In general, the multi-class versions of UNION get a slightly lower AP for the vehicle class than 'UNION+Size prior' but a significantly higher AP for the pedestrian class. As a result, multi-class UNION gets a relatively high AP for both the vehicle and pedestrian class compared to the baselines using the size prior. **True positive metrics for nuScenes** We have added the average translation error (ATE), average scale error (ASE), average orientation error (AOE), and average velocity error (AVE) to **Table 1 in the rebuttal PDF**. Note that we set the average attribute error (AAE) equal to 1 by default for the task of class-agnostic object detection because object classes have different attributes in nuScenes, so attributes lose their meaning when all mobile classes are combined into a single class. Therefore, this error is not shown in the table. We use the official nuScenes formula for calculating the nuScenes detection score (NDS). **Dealing with dynamic objects when aggregating point clouds** For a frame, we aggregate the past $M$ and future $M$ non-ground point clouds, i.e. $2M+1$ LiDAR scans are aggregated into a single coordinate system, to obtain a denser point cloud. Consequently, the spatial clustering may create a single spatial cluster for dynamic objects that are close to each other during this time interval. We use $M=7$ for our experiments, resulting in a time interval of 0.70 seconds, and notice that it rarely happens that different objects are merged into a single spatial cluster. 
A potential solution for this would be to estimate the motion vector for each point of the spatial cluster (scene flow), and determine whether the motion across the entire cluster is consistent. If this is not the case, a cluster may be split into multiple spatial sub-clusters. However, this heavily relies on an accurate scene flow estimation. **Cyclist performance for multi-class detection** From the pseudo-classes of multi-class UNION, there are 1, 1, 3, and 4 pseudo-classes assigned to the cyclist class for 5, 10, 15, and 20 pseudo-classes, respectively. So, it is not the case that all pseudo-classes are assigned to either the vehicle or pedestrian class. However, the nuScenes evaluation protocol discards the precision-recall curve where the precision or recall is lower than 10 percent. As a result, the average precision for the cyclist class for multi-class UNION is equal to 0, while cyclists are detected. Therefore, we also evaluated without clipping the precision-recall curve **(see the last column in Table 2 in the rebuttal PDF)**. The results show that UNION with 20 pseudo-classes significantly outperforms the baselines with the size prior. **Potential solutions for improving less frequent objects** UNION assumes that objects are mobile (i.e. have the potential to move) and that mobile objects with a similar appearance occur relatively often. The appearance clusters are made using K-means, meaning that a mobile object type that rarely occurs will most likely be part of an appearance cluster belonging to objects with a dissimilar appearance. As a result, the mobile objects may be considered non-mobile. Some directions to improve this are (1) exploring other clustering mechanisms than K-means for creating appearance clusters and (2) creating learnable appearance embeddings (self-supervised) that split mobile and non-mobile objects in the appearance feature space better. [1] Zhang et al. (2023). Towards unsupervised object detection from lidar point clouds. 
In CVPR. [2] Baur et al. (2024). LISO: Lidar-only Self-Supervised 3D Object Detection. In ECCV. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response, and I have no further concerns.
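As context for the true-positive metrics discussed in the rebuttal above: the official nuScenes detection score combines mAP with the five true-positive errors, and with AAE pinned to 1 its term contributes nothing. A minimal sketch (the metric values below are illustrative, not results from the paper):

```python
def nds(map_score, ate, ase, aoe, ave, aae=1.0):
    """nuScenes detection score:
    NDS = 1/10 * (5 * mAP + sum over the five TP metrics of (1 - min(1, mTP)))."""
    tp_errors = [ate, ase, aoe, ave, aae]
    return (5 * map_score + sum(1 - min(1.0, e) for e in tp_errors)) / 10

# with AAE fixed to 1 (class-agnostic setting), its (1 - min(1, AAE)) term is 0,
# so the attribute error does not affect the score
score = nds(map_score=0.384, ate=0.5, ase=0.4, aoe=0.8, ave=1.2)  # ≈ 0.322
```

Note how errors above 1 (here the velocity error of 1.2) are clipped, so they also contribute nothing.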
Summary: The paper introduces UNION, an unsupervised 3D object detection method designed to detect both static and dynamic objects without manual labels. UNION utilizes spatial clustering and self-supervised scene flow to generate object proposals and employs visual appearance encoding to distinguish between static and dynamic objects. This approach enables a single training process, enhancing performance while reducing computational costs. Experiments on the nuScenes dataset show that UNION significantly improves the state-of-the-art performance in unsupervised object discovery, doubling the average precision score to 33.9. Strengths: 1. The approach is quite novel, successfully eliminating the dependence on self-training. 2. The presentation is clear and easy to understand. 3. The paper effectively highlights the similarities and differences with related literature. Weaknesses: The experiments are relatively limited. 1. Besides DINOv2, how do other image encoders affect the model's performance? 2. In Table 4, the cyclist AP is 0. Is there a more detailed investigation into the reasons for this? Given that, as a small object, the pedestrian class shows a significant improvement in AP with UNION-10pc compared to UNION+Size prior, why does the cyclist class not exhibit a similar improvement? 3. In Fig. 3, 5% was chosen as the threshold. Why was this value selected? The authors should provide more discussion or experimental support for this choice. Technical Quality: 2 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors acknowledge the limitations of their work and plan to address these in future research. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your valuable feedback. **Different image encoders** See global rebuttal text: 'Different image encoders'. **Cyclist performance for multi-class detection** From the pseudo-classes of multi-class UNION, there are 1, 1, 3, and 4 pseudo-classes assigned to the cyclist class for 5, 10, 15, and 20 pseudo-classes, respectively. So, it is not the case that all pseudo-classes are assigned to either the vehicle or pedestrian class. However, the nuScenes evaluation protocol discards the precision-recall curve where the precision or recall is lower than 10 percent. As a result, the average precision for the cyclist class for multi-class UNION is equal to 0, while cyclists are detected. Therefore, we also evaluated without clipping the precision-recall curve **(see the last column in Table 2 in the rebuttal PDF)**. The results show that UNION with 20 pseudo-classes significantly outperforms the baselines with the size prior. **Threshold of 5 percent for the fraction of dynamic instances X** As stated in the global response of the rebuttal, we have further improved the self-supervised scene flow component's recall by also estimating the velocity for spatial clusters with relatively few LiDAR points, and we now use only 20 appearance clusters for filtering the non-mobile instances. This improved recall makes the distinction between non-mobile and mobile appearance clusters more obvious **(see Figure 1 in the rebuttal PDF)**. In this figure, it can be visually observed that the first 13 clusters (indicated in blue) all have a relatively low dynamic instance fraction compared to the other 7 clusters (indicated in orange), i.e. the fraction of cluster 14 (considered a mobile cluster) is roughly three times as high as the fraction of cluster 13 (considered a non-mobile cluster). 
As a result of this observed difference, we selected 5 percent for splitting the fractions into two groups: (1) clusters with very few dynamic instances (non-mobile clusters) and (2) clusters with many dynamic instances (mobile clusters). In addition, we empirically observed that the detection performance is very robust when using the 5 percent threshold while varying, for example, the number of appearance clusters, or when the threshold itself is changed to, for example, 7.5 percent. The intuition is that there should be a clear distinction between the dynamic instance fraction of non-mobile clusters and mobile clusters because non-mobile instances do not have the potential to be dynamic. In contrast, mobile instances move very often, as most are part of the traffic. There are multiple reasons why the non-mobile clusters do have some dynamic instances, including (1) false positives from the scene flow component, i.e. static objects are estimated to be dynamic (e.g. a dynamic tree), and (2) some instances may be part of an appearance cluster while having an appearance dissimilar to the other instances in that cluster because the instance is close to the border of the appearance cluster in feature space. Note that using a threshold equal to 0 percent, i.e. just using all object proposals, is equal to the output of the scene flow component and achieves an average precision (AP) of less than half of the performance of UNION (see Table 3 in our submission). So the appearance clustering component is essential for achieving high performance. --- Rebuttal 2: Comment: I have read all the reviews and the responses of the authors. I appreciate the authors for providing these experiments. My concern is addressed, so I keep my rating.
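The selection rule discussed in this thread (label an appearance cluster as mobile when at least X percent of its instances are dynamic) can be sketched as follows; the cluster assignments and dynamic flags are synthetic stand-ins:

```python
import numpy as np

def mobile_cluster_ids(cluster_ids, is_dynamic, threshold=0.05):
    """Return ids of appearance clusters whose fraction of dynamic
    instances is at least `threshold` (X = 5 percent in the rebuttal)."""
    return [int(c) for c in np.unique(cluster_ids)
            if is_dynamic[cluster_ids == c].mean() >= threshold]

ids = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1])            # appearance cluster per proposal
dyn = np.array([0, 0, 0, 0, 1, 1, 0, 0, 0, 0], dtype=float)  # scene-flow dynamic flag
# cluster 0: 0 percent dynamic -> non-mobile; cluster 1: ~33 percent -> mobile
assert mobile_cluster_ids(ids, dyn) == [1]
```

A threshold of 0 would keep every cluster, which matches the rebuttal's observation that this degenerates to the raw scene flow output.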
Summary: This paper introduces UNION, a novel method for unsupervised 3D object detection that leverages object appearance-based pseudo-classes. This paper addresses the challenge of training object detection models without manual annotations by using spatial clustering and self-supervised scene flow to generate static and dynamic object proposals from LiDAR data. It then encodes the visual appearances of these proposals to distinguish static objects in the foreground and background. The main contribution is the method design that jointly uses camera, LiDAR, and temporal information to train existing 3D object detectors in an unsupervised manner. Strengths: 1. The paper is well-structured, with a clear abstract, introduction, methodology, experiments, and conclusion sections that logically flow from one to the next. 2. The use of figures and diagrams, such as Figure 1, effectively illustrates the process and contributes to the clarity of the UNION method. 3. The use of pseudo-classes based on object appearance for training classifiers is innovative, offering a new way to tackle multi-class object detection without relying on manual annotations. 4. The UNION method introduces a new approach to unsupervised learning by combining spatial clustering, self-supervised scene flow, and visual appearance encoding in a synergistic manner. This represents a creative fusion of existing ideas applied to the problem of 3D object detection. Weaknesses: 1. The claim of being 'the first to do unsupervised multi-class 3D object detection' requires scrutiny. It's noted that other works, such as the one by Wu et al. presented at CVPR 2024, also delve into unsupervised multi-class 3D object detection. If the paper is accepted, it would be prudent to revise this statement to reflect the current state of research accurately and avoid overstating the novelty of the approach. 2. 
The decision to solely utilize the nuScenes dataset for experiments raises questions about the breadth of the evaluation. The Waymo dataset, with its denser point clouds, might offer a better testbed for supervised tasks. Furthermore, the official multi-class metric of the Waymo dataset, which assesses the detection of vehicles, pedestrians, and cyclists, provides a more straightforward comparison to fully supervised detectors. It would be beneficial to consider a comparative analysis using the Waymo dataset to strengthen the paper's findings. The rebuttal period may not allow time for this, but it can be part of future work. 3. The observed zero AP for the cyclist class is concerning and warrants a deeper investigation. It is suggested that the discussion be expanded to address potential solutions to this issue. While the small number of cyclist samples is a contributing factor, it is also essential to explore other possibilities, such as the high similarity in appearance between pedestrians and cyclists, which might lead to misclassification, with pedestrian labels actually being cyclist labels. [1] Wu, Hai, et al. "Commonsense Prototype for Outdoor Unsupervised 3D Object Detection." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. Technical Quality: 3 Clarity: 2 Questions for Authors: See weakness. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors have provided a discussion on the limitations of their work. They acknowledge that the method makes implicit assumptions about the occurrence frequency of objects. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your valuable feedback. **First method to do unsupervised multi-class 3D object detection** Our submission did not compare to [8] as it was not peer-reviewed at the time of submission. Now that it has been published at CVPR'24, we will add the paper to our related work section and Table 1 (the overview) to accurately reflect the current state of the research field. This concurrent work does not use the nuScenes dataset, and their code has not yet been fully released, according to their GitHub page, so we cannot compare with them. Wu et al. use multi-class bounding box templates based on Wikipedia to obtain a set of high-quality bounding boxes used for bounding box refinement, i.e. re-sizing and re-locating. However, there is a fundamental difference between UNION and their method in how the pseudo-classes are used to supervise the detector, i.e. class-agnostic versus multi-class detection. They do not train object classification, as their trained detector only outputs class-agnostic bounding boxes, i.e. they do class-agnostic object detection. In contrast, we not only have a class-agnostic object detection experiment (experiment 1) but also a multi-class experiment, as clarified in the global rebuttal text (experiment 2). In addition, we do not assume any class-specific geometry prior during training and inference. For our multi-class object detection, we create $K_2$ appearance-based pseudo-classes as targets for training object classification and train CenterPoint in a multi-class fashion. We can assign the learned pseudo-classes to real classes after training, i.e. during inference, as we match the appearance prototype of each pseudo-class with an example appearance of the real objects that are relevant during inference, such as vehicles, pedestrians, and cyclists. Therefore, we are actually the first to train an unsupervised multi-class 3D object detector. 
In our paper, we will explain more extensively how our contribution differs from the existing methods, such as [8]. **Results on Waymo** We agree that results on the Waymo dataset [9] would improve the breadth of the evaluation. However, we do not have enough time during the rebuttal to provide these as Waymo is a very large dataset, and it would take more than one week in total to run the UNION pipeline plus training CenterPoint. Therefore, we consider Waymo results as future work. **Cyclist performance for multi-class detection** From the pseudo-classes of multi-class UNION, there are 1, 1, 3, and 4 pseudo-classes assigned to the cyclist class for 5, 10, 15, and 20 pseudo-classes, respectively. So, it is not the case that all pseudo-classes are assigned to either the vehicle or pedestrian class. However, the nuScenes evaluation protocol discards the precision-recall curve where the precision or recall is lower than 10 percent. As a result, the average precision for the cyclist class for multi-class UNION is equal to 0, while cyclists are detected. Therefore, we also evaluated without clipping the precision-recall curve **(see the last column in Table 2 in the rebuttal PDF)**. The results show that UNION with 20 pseudo-classes significantly outperforms the baselines with the size prior. [8] Wu et al. (2024). Commonsense Prototype for Outdoor Unsupervised 3D Object Detection. In Conference on Computer Vision and Pattern Recognition (CVPR). [9] Sun et al. (2020). Scalability in perception for autonomous driving: Waymo open dataset. In Conference on Computer Vision and Pattern Recognition (CVPR). --- Rebuttal Comment 1.1: Comment: I read the comments of other reviewers and the author's reply. The author also addressed my concerns. So I keep my previous recommendation.
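The one-shot pseudo-class association described in this thread (matching each pseudo-class appearance prototype against a single exemplar embedding per real class) might look roughly like the following cosine-similarity matching; the embeddings are tiny stand-ins, not actual appearance features:

```python
import numpy as np

def assign_pseudo_classes(prototypes, exemplars):
    """Assign each pseudo-class prototype to the real class whose single
    exemplar embedding is most cosine-similar (illustrative sketch)."""
    P = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    E = exemplars / np.linalg.norm(exemplars, axis=1, keepdims=True)
    return (P @ E.T).argmax(axis=1)  # real-class index per pseudo-class

protos = np.array([[1.0, 0.1], [0.1, 1.0], [0.9, 0.2]])  # K_2 = 3 pseudo-classes
exemplars = np.array([[1.0, 0.0], [0.0, 1.0]])           # e.g. vehicle, pedestrian
assignment = assign_pseudo_classes(protos, exemplars)
```

Several pseudo-classes may map to the same real class, which is consistent with the rebuttal's counts of pseudo-classes assigned per class.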
Rebuttal 1: Rebuttal: We thank the reviewers (R1:LFQD, R2:ZwY5, R3:Gw7r, R4:JvDj) for their valuable and detailed feedback. We appreciate that our method UNION was overall well-received, e.g. 'represents a creative fusion of existing ideas applied to the problem of 3D object detection' (R1), 'is quite novel' (R2), and 'has significant improvement on the accuracy compared to existing unsupervised methods' (R4). In addition, we are pleased that the paper is considered 'well-structured' (R1), and that it 'effectively highlights the similarities and differences with related literature' (R2) and 'studies a significant problem in unsupervised 3D object discovery' (R3). **Clarification on primary and secondary contributions (class-agnostic discovery, multi-class detection)** OYSTER [1] and LISO [2] only provide class-agnostic pseudo-bounding boxes for training (i.e. no pseudo-class labels). Our first step is similar in that it provides class-agnostic bounding boxes for mobile objects. Object proposals are clustered based on their appearance embedding into $K_1$ appearance clusters, and mobile objects are obtained by selecting the appearance clusters with at least $X$ percent dynamic instances. Unlike OYSTER and LISO, our approach does not require computationally intensive self-training iterations. This is our primary contribution, described in Section 3 (lines 185-252) and evaluated in Section 4.3. In a second step, however, we extend class-agnostic 3D object discovery to multi-class 3D object detection. We do so by splitting the obtained set of (class-agnostic) mobile objects into $K_2$ appearance-based pseudo-classes, using the same object appearance embedding technique used in the first step. During inference, we assign each pseudo-class to a real class using the appearance prototype of the pseudo-class and the appearance of the real classes, requiring a single appearance embedding per real class **(see Figure 1 in rebuttal PDF)**. 
Note that this step requires negligible supervision as we do one-shot association during inference. This is our secondary contribution, described in Section 3 (lines 252-255) and Section 4 (lines 336-345), and evaluated in Section 4.4. We cannot compare to OYSTER and LISO for this task as they do not have pseudo-classes, i.e. they only have class-agnostic pseudo-bounding boxes, and their source code is not released. We will adapt the paper text to make the above points more clear. **Improved results** We meanwhile improved the results for class-agnostic 3D object detection **(see updated Table 1 in the rebuttal PDF)**. Previously, we did not use the velocity direction of dynamic objects to correct the orientation of the fitted bounding boxes and did not use the estimated velocity during training (velocity prediction is part of the nuScenes detection score (NDS)). In addition, we tuned the scene flow estimation, resulting in better performance, i.e. significantly higher recall with slightly lower precision for determining whether an object is dynamic. We also did a parameter optimization on $K_1$ (grid search) and now use 20 appearance clusters instead of 50. This all combined increased the average precision (AP) by 4.5 to 38.4, which is more than 3.5 times higher than OYSTER and LISO. Also, in this rebuttal **(see PDF)**, we present updated results for the multi-class detection, show the bounding box generation pipeline, provide an analysis for the cyclist performance, present results for different image encoders, and provide a runtime analysis for the entire UNION pipeline. **Different image encoders** In our submission, all experiments were conducted with a large vision transformer (ViT-L/14) trained using DINOv2 [3] with registers [4]. Table 3 compares UNION's class-agnostic 3D object detection performance for DINOv2 and I-JEPA [5]. 
DINOv2 uses contrastive learning, which ensures that the representations of different views of the same image are similar while the representations of different images are distinct, whereas I-JEPA predicts the representations of part of an image from the representations of other parts of the same image. DINOv2 can process high-resolution images, such as the camera images from nuScenes. In contrast, I-JEPA can only process square-shaped images of a maximum of 448 by 448 pixels. As a result, the obtained feature maps from I-JEPA have a lower resolution than the ones from DINOv2. From Table 3, it can be seen that UNION with DINOv2 outperforms UNION with I-JEPA by 15.6 in AP and 8.4 in NDS. Please note that our main paper contributions do not depend on specific canonical steps of the UNION pipeline (e.g., image encoding, scene flow estimation). If better approaches become available, UNION can incorporate them. **Outperforming shelf-supervision** Recently, a new method named CM3D [6] was released on arXiv; this method has not been peer-reviewed yet. CM3D is a 'shelf-supervised' 3D object detection method, i.e. off-the-shelf foundation models that were trained with manual labels, such as SAM [7], are used, and it achieves an AP and NDS of 27.9 and 25.0, respectively. In contrast, UNION is *entirely* unsupervised and outperforms CM3D by more than 10 points in AP and more than 5 points in NDS **(see Table 1 in the rebuttal PDF)**, demonstrating the effectiveness of the UNION pipeline, while not relying on manually labeled data sources. [1] Zhang et al. (2023). Towards unsupervised object detection from lidar point clouds. In CVPR. [2] Baur et al. (2024). LISO: Lidar-only Self-Supervised 3D Object Detection. In ECCV. [3] Oquab et al. (2024). DINOv2: Learning robust visual features without supervision. In TMLR. [4] Darcet et al. (2024). Vision Transformers Need Registers. In ICLR. [5] Assran et al. (2023). 
Self-supervised learning from images with a joint-embedding predictive architecture. In CVPR. [6] Khurana et al. (2024). Shelf-Supervised Multi-Modal Pre-Training for 3D Object Detection. arXiv. [7] Kirillov et al. (2023). Segment anything. In ICCV. Pdf: /pdf/b9c8e4c54fd5d71e2aad972f60fab8e90810cb93.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Transformers need glasses! Information over-squashing in language tasks
Accept (poster)
Summary: This paper presents an in-depth analysis of decoder-only Transformers, focusing on their limitations in handling information propagation. The authors identify two key phenomena: "representational collapse" and "over-squashing" (as in Graph Neural Networks). These issues lead to a significant loss of information, especially in tasks involving counting and copying, which require precise handling of individual tokens. The paper combines theoretical analysis with empirical evidence to demonstrate these problems. Strengths: 1. The paper provides a theoretical framework to understand the limitations of decoder-only Transformers. The concepts of representational collapse and over-squashing are well-formulated and offer valuable insights into why Transformers struggle with certain tasks. 2. Very strong theoretical analysis is provided in the paper, while the clarity of the narrative is preserved. The quality of presentation is excellent. 3. The authors provide empirical evidence from contemporary LLMs, specifically Gemini 1.5 and Gemma 7B, supporting their theory. They also provide an analysis of the effect of floating-point precision. 4. The authors carefully provide all the necessary details about the experiments. Weaknesses: 1. The experiments are primarily focused on specific artificial tasks (counting and copying), but the study lacks an analysis of real-world texts. It would be beneficial to include some statistics on the differences between token representations on standard text corpora. 2. The authors provide only one artificial solution: splitting the sequence with a different token. 3. The theoretical analysis makes some simplifying assumptions, such as treating attention weights as independent of the input. While the authors justify these assumptions, it would be beneficial to explore their impact on the results more thoroughly. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. 
Could you please include some statistics on the differences between token representations on standard text corpora? It would be interesting to see whether representational collapse happens in natural texts. 2. How can you deal with representational collapse in real-world scenarios? Wouldn't it be worse to include a random token for the information? 3. Could this theoretical result be generalized to non-causal language modeling (different attention mask)? Also, here are some small remarks: Line 331 typo: summariesd. Line 339 space in "phenomena : representational". Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: 1. Theoretical analysis relies on several simplifying assumptions, which are justified but may not fully capture the complexities of real-world models. 2. Empirical validation is conducted only on two specific models. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are happy to hear that you believe our paper offers valuable insights into Transformers, has a very strong theoretical analysis, and an excellent presentation. We would like to address your questions. **(Q1)** *Could you please include some statistics on the differences between token representations on standard text corpora? It would be interesting to see whether representational collapse happens in natural texts.* We thank you for the thought-provoking question! Having given it consideration, we believe that it is not sufficiently well-defined to measure token representation difference on generic natural text prompt pairs. Generally, representational collapse can only be rigorously talked about w.r.t. a specific family of prompts which are all related by a certain property. Throughout our paper we explore a number of these. Hence, to rigorously study representational collapse in general-purpose corpora, we would first need a method to detect specific subsets of inputs in these corpora that form a relevant family, and we find this to be a non-trivial endeavor beyond the scope of this rebuttal. Randomly-chosen pairs of natural prompts without clear relations to each other are highly unlikely to have collapsing representations due to the high diversity of tokens in both. All this being said, we believe there are many domains where representational collapse, even just over families of prompts, can be a significant issue---some key examples include: (1) finance, where it is common to find numbers with many zeros (share counts, acquisition amounts, etc.); (2) numerical spreadsheets; (3) LLMs operating over bit sequences. This list is of course not exhaustive, but provides some concrete examples in which our analysis would likely be very relevant. **(Q2)** *How can you deal with representational collapse in real-world scenarios? 
Wouldn't it be worse to include a random token for the information?* In general, we believe that it is very hard to avoid the issue of representational collapse without modifying the architecture, or as you hinted in the next question, the attention mask. Solutions such as adding commas between repeated digits could work well for domains such as finance, where this is common practice to help parse large numbers. Similarly, adding “whitespace” or even avoiding repetition due to tokenization (as we touch upon in our paper) will have the same effect. We believe that adding random tokens is likely to do more harm than good, but adding “filler” tokens is instead likely a practical solution, something that has already been observed to help in certain cases [2]. **(Q3)** *Could this theoretical result be generalized to non-causal language modeling (different attention mask)?* This is an excellent point and something that we think would be very important to comment on in our paper. Indeed, different attention masks (such as the commonly used sliding-window attention) are going to behave differently from a full causal mask, both in regards to representational collapse and over-squashing. A sliding-window mask, for instance, can help with representational collapse as the contribution of a single token is no longer forced to monotonically decrease with sequence length. It would also help fix the topological aspect of over-squashing in which tokens at the beginning of the sequence are favored; in fact, it would flip this bias. We will add a more detailed analysis of this in a new section in the appendix, touching upon sliding-window attention and the common alternation between windowed (local) attention and standard causal attention as done in the new Gemma 2 models [1], for instance. We once again thank you very much for this great comment and are happy to provide more details of our analysis. 
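For concreteness, the difference between the two masks can be sketched in a few lines of numpy (a generic illustration, not code from our experiments): under a sliding-window mask each token attends only to a fixed-size window of recent positions, so an individual token's attention weight is not forced to shrink as the sequence grows.

```python
import numpy as np

def causal_mask(n):
    # full causal mask: token i attends to all tokens j <= i
    return np.tril(np.ones((n, n), dtype=bool))

def sliding_window_mask(n, w):
    # token i attends only to itself and the previous w - 1 tokens
    i = np.arange(n)
    return (i[:, None] >= i[None, :]) & (i[:, None] - i[None, :] < w)

m = sliding_window_mask(6, 3)
# row 5 (the last token) attends only to positions 3, 4, and 5,
# so tokens at the start of the sequence are no longer reachable directly
```

Alternating such windowed masks with full causal masks, as in Gemma 2, trades off this locality against long-range access.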
We hope to have addressed your questions and point you to the global comment for additional ablations and discussions. We thank you once again for endorsing our paper. We are looking forward to the rebuttal period. [1] Gemma 2: Improving Open Language Models at a Practical Size. Google DeepMind, ArXiv, 2024. [2] Let’s Think Dot by Dot: Hidden Computation in Transformer Language Models. Pfau et al, ArXiv, 2024. --- Rebuttal Comment 1.1: Comment: Thank you for your quick and very detailed reply. My concerns were answered, I understand now the real-world scenarios, and I admire the filler-token solution supported by references. I believe the score is already high enough, but you have fully answered all the questions. I am looking forward to see the detailed analysis of several types of attention in appendix. Best wishes to your paper, and let me know if I can be of any help. --- Reply to Comment 1.1.1: Title: Thank you! Comment: Thank you for acknowledging our responses and appreciating our work! We are delighted to hear our comments answered your concerns. Should you have any other questions, please feel free to let us know.
Summary: The paper provides a theoretical and empirical analysis of the final representations of transformers, revealing the phenomenon of representational collapse. Strengths: - The paper is well-written. - It highlights a problem in transformers that causes them to fail on a large set of tasks (assuming this extends to addition, etc.). - The paper provides empirical evidence supporting the theoretical claims, demonstrating the real-world relevance of the identified issues. - The analysis of the impact of low-precision floating-point formats is interesting and highly relevant to current everyday practice. Weaknesses: - There is no strong solution to this problem. Although the authors provide a simple approach, it is challenging to understand how to apply this approach in a practical setting. - The problems seem to be tied to the positional embeddings, as the authors stated. NoPE (https://arxiv.org/abs/2305.19466) and no causal mask would make this problem impossible. However, there are no experiments on how certain positional embeddings might be better than others. Technical Quality: 3 Clarity: 3 Questions for Authors: - What do the authors believe is the breadth of tasks affected by this fundamental issue in transformers? - What are the positional embeddings used for the experimental settings? Can these affect the representational collapse? - Minor: Some figures, like Figure 2, have a very small font size, making them hard to read. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See Questions and Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for pointing out that our paper is well written and that you found it interesting and with real-world relevance. We would like to address your questions on the breadth of tasks affected and on positional encodings. **(Q1)** *What do the authors believe is the breadth of tasks affected by this fundamental issue in transformers?* While it is hard to comment on the entirety of the tasks affected by this issue, we believe that copying and counting are fundamental enough to claim that this is a rather widespread issue concerning the repetition of tokens. Copying is especially important as many LLM endpoints today benefit from tool-use, which relies on copying. We suspect tasks that see frequent repetition of tokens to be particularly problematic. For instance, in finance it is common to see many consecutive 0s. For such a case, we believe that our simple solution of including “,” every third digit is rather sensible and practical. We kindly refer to the global comment for a longer discussion on what we believe are the applications of our results. **(Q2)** *What are the positional embeddings used for the experimental settings? Can these affect the representational collapse?* We thank you for the valuable question. Our experiments rely on the positional encodings used by Gemma (RoPE). We believe that RoPE is a widespread PE, for instance also used by Llama 3 [1]. Having said this, there is no particular reason that this issue is one of RoPE specifically. To demonstrate this, in the global comment we describe an ablation we have done showcasing representational collapse occurring with RoPE, Alibi [2], original sinusoidal embeddings (APE) [3], and no positional encodings (NoPE) [4]. We refer to the global comment for further details and to the supplementary PDF for the results. 
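To illustrate why RoPE acts as a *relative* positional encoding, a minimal numpy sketch (a generic textbook-style construction, not the Gemma implementation) shows that the query-key score depends only on the positional offset, not on absolute positions:

```python
import numpy as np

def rope(x, pos, base=10000.0):
    # rotate consecutive dimension pairs of x by position-dependent angles
    d = x.shape[0]
    theta = base ** (-np.arange(0, d, 2) / d)  # per-pair frequencies
    ang = pos * theta
    cos, sin = np.cos(ang), np.sin(ang)
    out = np.empty_like(x)
    out[0::2] = x[0::2] * cos - x[1::2] * sin
    out[1::2] = x[0::2] * sin + x[1::2] * cos
    return out

rng = np.random.default_rng(0)
q, k = rng.standard_normal(64), rng.standard_normal(64)
# the same offset (3 positions apart) gives the same attention score,
# regardless of where in the sequence the pair sits
s_near = rope(q, 10) @ rope(k, 7)
s_far = rope(q, 1010) @ rope(k, 1007)
assert np.isclose(s_near, s_far)
```

Because the score is a function of the offset alone, the effect of the encoding decays with relative distance rather than absolute position, which is the property our analysis relies on.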
**(Q3)** *Minor: Some figures, like Figure 2, have a very small font size, making them hard to read.* We thank you for pointing this out, we have now improved the readability and font size of the figures throughout the paper. We hope that our response and additional experiments answer your questions. We are of course more than happy to keep engaging with you and thank you again for endorsing our paper. [1] The Llama 3 Herd of Models, July 23, 2024. https://ai.meta.com/research/publications/the-llama-3-herd-of-models/. [2] Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation, Press et al. Arxiv, 2021. [3] Attention is all you need, Vaswani et al. Neurips, 2017. [4] The Impact of Positional Encoding on Length Generalization in Transformers, Kazemnejad et al, Neurips, 2023. --- Rebuttal 2: Comment: I thank the authors for their reply. After reading their comments, I will keep my score. I do believe that this paper helps us understand a problem in LLM that is directly seen in a couple of problems such as copying and counting that has an impact in many applications. However, I did not increase my score further because I do not believe that a strong solution was proposed to this problem. --- Rebuttal Comment 2.1: Comment: We would like to thank you once again for your time and consideration. We are pleased that you wish to maintain your acceptance score for our paper. We are very much excited about the potential of our work to contribute to stronger and scalable solutions in the future.
Summary: In the paper, the authors first discuss a phenomenon occurring in LLMs that they call "representational collapse". They provide empirical evidence of the phenomenon in state-of-the-art language models and they provide a theoretical justification for it. They then show that decoder-only transformers exhibit what is known in graph network theory as "over-squashing", which may cause loss of information in the final prediction. Strengths: The paper highlights a curious phenomenon in current language models, which shows their limitations even for extremely simple tasks like counting. The paper is well organized and well written. The observation of the over-squashing phenomenon is interesting. Weaknesses: The observed phenomenon, while interesting, has limited scope. Also, the representational collapse result is hardly surprising to me. It feels quite trivial that in the limit of a very long sequence, the importance of a single token out of many repeated ones becomes negligible. Technical Quality: 3 Clarity: 4 Questions for Authors: 1) While the paper claims that the transformer architecture that they analyze is the most popular in state-of-the-art LLMs, it seems to me that they only consider relative positional embeddings and not absolute ones. To the best of my knowledge, current LLMs like GPT and Llama all use absolute positional embeddings. Would your results also extend to this case? In any case, I would highlight the use of relative positional embeddings more prominently in the main text. 2) In the actual prompts used for the experiments, the sequence does not appear at the end of the prompt but somewhere in the middle. Would your theoretical results somehow generalize if the repeated token is not the last one? Would anything change in the experiments if the sequence appears exactly at the end? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are happy that you found our paper well organized, well written, and the over-squashing phenomenon interesting. We would like to address your questions on positional encodings and on the prompting. **(Q1)** *[...] it seems to me that they only consider relative positional embeddings and not absolute ones. To the best of my knowledge, current LLMs like GPT and Llama all use absolute positional embeddings. Would your results also extend to this case?* We thank you for the excellent comment. We would like to point out that according to Table 3 from the most recent Llama 3 [1] report, Llama 3 seems to be using the relative positional encoding RoPE. As far as we are aware, details regarding positional encodings used in most recent GPT models are not publicly available and as such we cannot comment with certainty on the positional encodings used for such models. Regardless, we agree with you that it is important to add more details on the type of positional encoding used. We clarify that our results still apply for absolute positional encodings as we never make any explicit assumptions on the formulation of the PE. What we do require is that the effect of the positional encoding decays with distance, a common design choice used to encode distance in PEs. We have further emphasized this in the main text and will add a detailed section in the Appendix covering RoPE, Alibi [2], absolute sinusoidal embeddings (APE) [3], and no positional encodings (NoPE) [4]. To provide experimental evidence supporting such claims, we have designed a synthetic experiment showing that representational collapse occurs also for Alibi, sinusoidal absolute positional encodings, and NoPE. We refer to the global comment for the details and results. We thank you for helping to strengthen our work in this direction, which we believe is a very important one. 
**(Q2)** *In the actual prompts used for the experiments, the sequence does not appear at the end of the prompt but somewhere in the middle. Would your theoretical results somehow generalize if the repeated token is not the last one? Would anything change in the experiments if the sequence appears exactly at the end?* We thank you for the great point you raise. In our prompts, we add the formatting instructions at the end of the prompt in order to help with the automatic parsing of the output. We did not notice any qualitative difference in the counting tasks. On the copying tasks, the formatting seems to affect the results more. To explore this, we reformatted the prompt in two ways: 1) "What is the last digit of the following sequence? Please answer exactly as 'The answer to your question is: <ANSWER>'. Here is the sequence: {seq}" 2) "Please answer exactly as 'The answer to your question is: <ANSWER>'. What is the last digit of the following sequence? {seq}" The results are in the supplementary PDF in Figure 2, with the first being called Prompt Type 1 and the second Prompt Type 2. We see that the model encounters the same failure for both prompts, with some prompts failing at a later point than others. The theory applies most directly when the token difference occurs at the end, but it can also be applied to prompts with the formatting instructions coming at the end. This holds due to a recursive argument: once the representation has collapsed, the information for that token will be lost in the subsequent tokens as well. To confirm this, we plot the hidden representations of Gemma under the same prompt 3) used in the original copying experiments and show that the representations collapse in such a case – see Figure 3 in the supplementary PDF. We will add these two additional experiments to the appendix of our work. 3) “Consider the following sequence: {seq} What is the last digit in this sequence? 
Please answer exactly as 'The answer to your question is: <ANSWER>'” We thank you again for your comment and believe that these additional results help to strengthen our paper in this direction. **(W1)** *The observed phenomenon, while interesting, has limited scope. Also, the representational collapse result is hardly surprising to me [...]* While we are happy that you find the phenomenon to be interesting, we respectfully disagree that its scope is limited, as it highlights a fundamental limitation of the Transformer architecture on important tasks. We believe that there are two fundamental observations in our analysis of representational collapse that are valuable. (1) We believe that the idea of analysing what can be contained in the embeddings of the last token of the last layer is a novel approach that may provide an interesting new way of studying decoder-only Transformers. (2) We believe that tying representational collapse to an inability to generalize/solve problems like counting and copying due to floating point arithmetic errors is novel. We believe that connecting (1), (2), and representational collapse together creates an interesting way of understanding decoder-only Transformers. We emphasize that tasks such as copying are fundamental for settings in which an AI agent dispatches part of a computation to external tools, a paradigm that is becoming more and more frequent. We thank you again for the great comments that have helped strengthen our work. We hope that our additional experiments and discussion make you more confident about our contributions. We are of course happy to clarify any further points in more detail and we thank you for endorsing our paper. [1] The Llama 3 Herd of Models, July 23, 2024. https://ai.meta.com/research/publications/the-llama-3-herd-of-models/. [2] Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation, Press et al. Arxiv, 2021. [3] Attention is all you need, Vaswani et al. 
Neurips, 2017. [4] The Impact of Positional Encoding on Length Generalization in Transformers, Kazemnejad et al, Neurips, 2023. --- Rebuttal Comment 1.1: Comment: Thank you for your satisfactory comments to my questions. I will raise my score to 6. --- Reply to Comment 1.1.1: Title: Thank you! Comment: We would like to thank you for acknowledging our responses and increasing your score. We remain available should you have any additional questions during the discussion period!
null
null
Rebuttal 1: Rebuttal: We are delighted to see that our paper has been well-received by all reviewers, with comments on the high quality of the writing and presentation. We would like to summarize the improvements we have made to our paper. We have added a supplementary one-page PDF with the additional experiments in our response as well. - Improved discussion and experiments regarding positional encodings, now also including Alibi [2], Absolute Sinusoidal Embeddings (APE) [3], and no positional embeddings (NoPE) [4] (Reviewers SAa1 and GKd9) - Ablation on the structure of the prompt (Reviewer SAa1) - Added discussion on windowed masked attention (Reviewer vhDf) - Improved figure readability (Reviewer GKd9) In this global comment we address in more detail topics that were touched upon by more than one reviewer. **Effect of different positional encodings** Reviewers SAa1 and GKd9 both asked how different positional encodings (PEs) play a role in representational collapse. The current experiments focus on RoPE, which is widely used in LLMs today, e.g. Gemma and Llama 3 [1]. The theory uses the fact that the effect of the positional encodings decreases with distance, which we believe applies to many of the popular encodings used today. To showcase this, we have designed a synthetic experiment testing the representational collapse using RoPE, Alibi [2], Absolute Sinusoidal Embeddings (APE) [3], and no positional embeddings (NoPE) [4]. The experiment serves as a very controlled setting to ablate positional encodings (absolute and relative) in isolation. In our experiment, we sample the entries of the queries, keys, and values from a standard Gaussian independently and then apply the various encodings, taking into account normalisations due to e.g. layer norm. We sample sequences of length n and create sequences of length n+1 by taking the sequence of length n and repeating the last token. 
We then measure the L1 distance between the two latent vectors of the last token of the two sequences. The result can be seen in Figure 1 in the supplementary PDF, noting that the y-axis is measured in log-scale. We set the hidden dimension to 64, use a single layer, and focus on a single attention head. It is clear that the different positional encodings converge very similarly, supporting our theoretical claims on representational collapse. **Practical implications of our work** All reviewers have asked for us to comment further on the practical implications of our work. We believe that highlighting the brittleness of fundamental operations of a Transformer-based model is important to increase our understanding of the robustness of such models and to ultimately help us improve them. The prompts we chose allowed us to more carefully measure and support our theory, as general-purpose queries may have many more confounding variables. We reiterate that the tasks we study (counting and copying) are essential for future agent-based AI systems. Especially in settings where AI off-loads most of the computation to other tools, it is important to be able to correctly copy the input into such tools. We believe there are numerous real-world applications which this work could prove to be insightful for, particularly for tasks which see frequent repetition of tokens. One example is finance in which numbers could contain a large number of 0s (e.g. number of shares, company valuations, etc.), which was one of the motivations for us to test the introduction of “,” characters to delimit the repetitions. Other examples could be the processing of spreadsheets which could present many repeated values or LLMs operating over bit sequences. Further, we are excited by the prospects of using our work to explain existing observed phenomena in LLMs, for instance, the “lost-in-the-middle” phenomenon. 
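The core of the synthetic collapse measurement described above can be condensed into a short numpy sketch (an illustration in the NoPE case with illustrative dimensions and trial counts, not our exact experiment code): sample Gaussian queries, keys, and values, repeat the last token, and measure the L1 gap between the last-token attention outputs of the two sequences.

```python
import numpy as np

def last_token_output(q, k, v):
    # single-head attention, output for the final position only
    # (the last query attends to every position, so no mask is needed here)
    scores = (q[-1] @ k.T) / np.sqrt(q.shape[1])
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ v

rng = np.random.default_rng(0)
d, trials = 64, 50
gaps = {}
for n in (8, 64, 512):
    total = 0.0
    for _ in range(trials):
        q = rng.standard_normal((n, d))
        k = rng.standard_normal((n, d))
        v = rng.standard_normal((n, d))
        # length n + 1: repeat the last token's query, key, and value
        q2, k2, v2 = (np.vstack([a, a[-1]]) for a in (q, k, v))
        total += np.abs(last_token_output(q, k, v)
                        - last_token_output(q2, k2, v2)).sum()
    gaps[n] = total / trials
# the L1 gap between the two last-token representations shrinks as n grows:
# the repeated token's softmax weight decays roughly like 1/n
assert gaps[512] < gaps[64] < gaps[8]
```

Positional encodings whose effect decays with distance leave this picture qualitatively unchanged, which is what the experiment in the supplementary PDF verifies for RoPE, Alibi, and APE.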
We hope that our work acts as a foundation to understand further issues related to representational collapse and over-squashing in LLMs. We are very much looking forward to the rebuttal period and thank the reviewers for helping improve and strengthen our work with their valuable insights, questions, and comments. [1] The Llama 3 Herd of Models, July 23, 2024. https://ai.meta.com/research/publications/the-llama-3-herd-of-models/. [2] Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation, Press et al. Arxiv, 2021. [3] Attention is all you need, Vaswani et al. Neurips, 2017. [4] The Impact of Positional Encoding on Length Generalization in Transformers, Kazemnejad et al, Neurips, 2023. Pdf: /pdf/26c104552761f7942643b61708ea7227a1935fdb.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Global Distortions from Local Rewards: Neural Coding Strategies in Path-Integrating Neural Systems
Accept (poster)
Summary: This paper develops a model meant to explain recent experimental work showing that the orderly receptive fields of grid cells in the rodent brain can be distorted by spatial landmarks. Here, the model is a biologically-inspired ANN which is trained to path-integrate, with non-uniform weighting across the environment to induce distortion. The resulting distortions due to reward are then quantified, which offers some concrete hypotheses about the corresponding behavior expected from grid cells in the brain. Strengths: * Grid cell distortion is a strong empirical finding that calls into question some key assumptions in systems neuroscience, and developing explanations and models for it will be a major contribution to theoretical neuroscience. * The model the authors consider here is eminently simple and easy to explain, but captures the problem quite well. * The writing is polished, and the paper is clear and easy to read. Weaknesses: * The theory in Section 3 is not very deep. As I understand it, it basically lists a few plausible distortions (diffused, attracted, banded) and considers what impact the first two would have on the network. However, only diffused units are studied in the experiments in Section 4, even though banded units are also found, and an additional type (ring units) identified in the experiments is not accounted for. Overall, I feel that the theory doesn't have much explanation to offer of the empirical phenomena. * The experiments are not particularly in-depth. Some distortions are identified, but only diffusion is carefully examined. How particular changes to the loss correspond to particular distortions is not studied. This is a passable study of how the manifold geometry is warped (which could still benefit from more thorough experiments), but a suitable contribution for this venue would have to involve a more comprehensive analysis of distortion. 
Technical Quality: 2 Clarity: 4 Questions for Authors: * In Fig 3, why does the diffused firing field have different periodicity from the original? * Line 228 states that "we observed the emergence of diffused and band units, as predicted by our theoretical framework". However, I think Section 3 only examines the effect of diffused and attracted units on the manifold; as far as I can tell, it doesn't make any claims about when units with these tunings are expected to appear. * It makes sense to me that attracted units -- which would increase the resolution for highly salient regions of space (correct me if I'm wrong) -- are a relevant distortion. However, they don't seem to be found in Section 4! This is surprising to me. * The text of the introduction strongly suggests that reward will be incorporated into the loss function. However, the actual method simply allows for weighting the prediction error for different locations, which is more generic (this is an advantage, in my opinion) and should be better reflected in the introduction. * It may be helpful to quantify distortion with some other measures (e.g. distribution distance). Confidence: 4 Soundness: 2 Presentation: 4 Contribution: 1 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to read and review our paper. Your feedback was invaluable, as it highlighted that we had not properly emphasized several crucial aspects of our work. In particular, we had not sufficiently explained the generality of our theoretical framework and the fact that our experiments were made possible only when freezing the place cell read-out weights. We address your comments and questions in detail below. **Theoretical contributions** We thank you for this important remark. Motivated by your feedback, we provide additional elements to our theory, as discussed in the global answer to all reviewers. We now explicitly model the emergence of the four deformations: diffused units, band units, ring units, and attracted units. Interestingly, this only required a simple modification of the theory. We look forward to hearing your thoughts about it. **Experimental contributions** You noted that we did not sufficiently study how particular changes in the loss function correspond to particular changes in the rate map distortions. Indeed, the earlier experiments were simple in the sense that differences in reward location, magnitude, or spread were not well explored. We have now performed these experiments; please see our global response for details. As you mentioned, the most relevant modification in our work is the differential weighting of the loss function to incorporate the reward location. Here, we wish to emphasize another of our unique contributions that was missing from your summary. When we first performed the saliency training, we observed that the topology of the neural manifold was not preserved (Fig. 7B). This was a surprising result, one we had not anticipated. Only after we froze the pre-trained output weights to the place cells were we able to observe both the global distortions and the preservation of the topology (Fig. 7C). 
So, when referring to saliency training, we mean not only a simple modification of the loss function, but also this non-trivial aspect of weight freezing. We believe setting up this framework to study the effects of local rewards is an important technical contribution of our work, which future work can build on. **Signal periodicity** In Fig 3, showing results on synthetic neural responses, the diffused units have the same spatial periodicity as the original units – as the diffused units were created by applying a Gaussian convolution to the original units. That is, the locations of the signal peaks are the same. **Wording on diffused and band units** This is a very good point, thank you for bringing it up. As you correctly point out, our theory does not explain why some distortions appear and others do not. Instead, we provide a mathematical formulation of their distortions, and explain the effect of each distortion on the neural manifold. We apologize for the confusing wording, which we propose to correct as follows: "we observed the emergence of diffused, band and ring units, which fall into the category of distortion via isotropic and anisotropic deformations from our theoretical framework". **Attracted units** You mentioned that the lack of attracted units in the trained piRNN was surprising. We also expected to see some local distortions of the grid cell firing patterns, given the standard interpretation of neuroscience experiments like Boccara et al. However, to our surprise, we did not observe this kind of local “magnification” or deformation in any of our experiments. Instead, we observed more global changes in the rate maps – which inspired the title of the work, “Global Distortions from Local Rewards”. We believe this is a strength of our work, not a weakness. Our results challenge the conventional interpretation that reward modulation results in local magnification of the grid cell lattice, which may be an overly simplistic explanation. 
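The Gaussian-convolution construction of diffused units described under **Signal periodicity** can be illustrated with a minimal, self-contained sketch (our own toy example, not the authors' code; the grid pattern, map size, and kernel width are arbitrary assumptions):

```python
import math

# Toy illustration (not the paper's code): Gaussian smoothing of a periodic
# "grid cell" rate map yields a diffused unit whose peaks stay at the same
# locations -- only the contrast drops.
N = 60                                  # toy rate-map size (assumption)
FREQS = [(4, 0), (2, 3), (-2, 3)]       # assumed integer frequency vectors

def rate(i, j):
    # Sum of three plane waves, periodic on the N x N grid; a peak at (0, 0).
    return sum(math.cos(2 * math.pi * (fx * i + fy * j) / N) for fx, fy in FREQS)

orig = [[rate(i, j) for j in range(N)] for i in range(N)]

# Normalized 1D Gaussian kernel, applied separably with periodic boundaries.
sigma, half = 2.0, 6
kern = [math.exp(-d * d / (2 * sigma ** 2)) for d in range(-half, half + 1)]
total = sum(kern)
kern = [w / total for w in kern]

def smooth(img):
    # Horizontal pass, then vertical pass (separable Gaussian convolution).
    rows = [[sum(kern[d + half] * img[i][(j + d) % N]
                 for d in range(-half, half + 1)) for j in range(N)]
            for i in range(N)]
    return [[sum(kern[d + half] * rows[(i + d) % N][j]
                 for d in range(-half, half + 1)) for j in range(N)]
            for i in range(N)]

diffused = smooth(orig)

peak_o = max(max(row) for row in orig)
peak_d = max(max(row) for row in diffused)
print(abs(orig[0][0] - peak_o) < 1e-9)      # (0,0) is a peak of the original
print(abs(diffused[0][0] - peak_d) < 1e-9)  # ...and remains a peak when diffused
print(peak_d < peak_o)                      # smoothing only lowers the contrast
```

Because each plane wave is an eigenfunction of the periodic convolution, smoothing rescales the amplitudes but leaves the phases (and hence the peak locations and spatial periodicity) untouched, matching the rebuttal's claim.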
Indeed, in hindsight, the original findings might have been inconsistent with the conventional wisdom that grid cells provide a global code for space. Thus, our results provide the first evidence that reward modulation may result in global deformations of spatial responses in MEC, and the local distortions may arise due to interactions with other experimental variables not controlled in Boccara et al., 2019, or a reasonable but incorrect interpretation of the data. It may also be that future changes to the saliency training recapitulate the experimental findings. Our work is just the first step (which sets up the framework) to study this problem theoretically and in detail. **Rephrasing introduction.** We agree with the suggestion of rephrasing the introduction so that it no longer suggests that “reward will be incorporated into the loss function”. Indeed, as you mention, the actual method is simpler. We plan to rephrase as follows: “we modify the loss function to differentially weigh position estimation error based on a saliency map of space.” Thank you for the suggestion! **Quantifying distortion** This is an excellent suggestion! To address this, we performed statistical tests with the structural similarity index and Wasserstein distances. Specifically, we computed these distances between images before and after distortion for each class of distortions, and now define the distorted units as a result of this statistical analysis. Overall, we believe your feedback was very helpful for increasing the impact of our work. To address your concerns, we have performed additional analyses (leading to new results, see our general response!) and expanded on our theory. If you believe we have adequately addressed your concerns, would you consider increasing your score to support the acceptance of our work? --- Rebuttal Comment 1.1: Comment: Thank you for the thorough response to my comments. I sincerely appreciate that they've been taken seriously. 
I am convinced that you've addressed my concerns on the experimental side of things, and I'm increasing my score to a 5 (from 3) accordingly. Thank you also for drawing my attention to the necessity of freezing the readout weights, which is both (i) an important starting point for future work in this direction and (ii) a prediction of this model for neuroscience that should be highlighted. While I believe the new theoretical results as described will significantly strengthen the paper, I'm unwilling to evaluate them sight unseen and so am not incorporating them into my revised score. --- Reply to Comment 1.1.1: Comment: Thank you very much for your feedback throughout the process, taking our rebuttal into consideration, and engaging with us constructively. Thanks to you, our results are much better presented and the paper is strengthened!
Summary: The overarching goal of this paper is to provide a theoretical framework to understand the experimentally observed effects of rewards on grid cells, which have historically been known to encode spatial information. To do so, they focus on path-integrating RNNs whose units behave like grid cells post training. They show that modeling the influence of rewards as a diffeomorphism of the environment (with constant firing rate budget per neuron) results in changing firing rates of grid cells such that the underlying manifold (toroid) remains unchanged in topology while its geometry gets distorted. That is, grid cells are able to preserve spatial information as before, while accounting for rewards through a global distortion. They further fine-tune piRNNs with a modified loss that captures rewards, and show that under the right training scheme the resultant grid cell firing patterns change in a way that alters only the geometry and not the topology of the neural manifold. In particular, they observe ring-like structures, diffused firing grids, and band-like structures in the presence of rewards in the environment. Strengths: 1. Firstly, the paper is well-written and very clear. 2. The motivation is very strong: understanding what grid cells encode is indeed an important scientific question, with recent discoveries showing changes in grid cell firing patterns in the presence of environmental cues. 3. The theoretical framework describing the effect of rewards as a diffeomorphism and the consequent effect on the neural manifold is solid, and a strong conjecture regarding coding in grid cells. 4. The addition to the loss function, and the fine-tuning schemes that preserved the toroid topology, are indeed unique contributions. Weaknesses: 1. My main concern here is that while the theoretical ideas underlying this work make sense, in practice we do not see the firing patterns that have been observed empirically. 
I understand that the diffeomorphism theory can in principle explain the attracted firing patterns that have been experimentally observed; however, the path-integrating models trained by the authors assuming a circular reward function do not show such patterns. As a result, while intuitively the theory makes sense, it seems to me that it is crucial to address the disconnect between the observed results and the experimental findings. I am curious to hear the authors' thoughts on this. 2. I also wish that the experiments were more thorough, for example investigating a range of salience functions potentially uncovering distinct firing rate patterns. Perhaps a different salience map would lead to attracted firing patterns? Technical Quality: 3 Clarity: 4 Questions for Authors: 1. With respect to continual learning, would this scheme be capable of training a salience-trained piRNN on a new, changed reward function? I am curious as to whether a salience remapping can be observed in such a setting, while preserving the same spatial encoding. (To be clear, I am not suggesting this experiment be run during the rebuttal phase, but I am curious to hear the authors' thoughts on this). 2. While the authors claim global distortion in the presence of rewards, in my understanding references 16, 17, 18, which experimentally observe firing rate distortions, report them as local changes as opposed to global distortions. How do the authors reconcile their findings with these experimental works? 3. Is there a link to be made between the reward-based loss function and the diffeomorphism theory presented earlier? I.e., does this loss function encourage such a mapping? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: The main limitation of this paper seems to be its disconnect with existing experimental work that observed reward-based firing rate distortions in grid cells. Since this is a paper attempting to explain a scientific phenomenon, it is important to address this. 
I am relatively torn: I think the theory itself is a novel contribution and one that deserves to see the light of day, and I would appreciate it if the authors could answer my questions in the weaknesses and questions sections. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your thorough review. We appreciate your attention and address your comments and questions below. **Main concern: link between theory, and firing patterns observed in piRNN experiments versus real experiments** Thank you for raising the point that distortions of firing patterns in real rodent experiments differ from those observed in piRNN experiments. We think that this is an important point, and it is actually one of our most interesting findings, which is answered below in response to your question #2. **On piRNN experiments** We agree that more thorough experiments would be beneficial to this work. This concern was shared across several reviewers. Consequently, we have run additional experiments. These experiments include a wider range of saliency functions as per your suggestion. Please see our global answer where we present and discuss their results. **Answer to question 1. on continual learning** You asked whether this scheme would be capable of training a salience-trained piRNN on a new, changed reward function. This is a very interesting question! In the same way that we pretrained a network with uniform saliency (no reward) and then fine-tuned the network with non-uniform saliency s(x), a good starting point would be to introduce additional phases of fine-tuning with different saliency maps, and analyze the resulting deformations after each phase. We expect that the neural responses will adapt by deforming in the ways we have described (developing a mixture of band units, diffused units, and ring units), while phases of training without rewards would see a convergence back to the typical hexagonal grid cells. **Answer to question 2. on global distortions** You noted that previous works reporting experimental observations of firing rate distortions interpret them as local changes as opposed to global distortions. 
This is what we also expected to see a priori, given the claims from neuroscience experiments in Boccara et al. However, to our initial surprise, we did not observe this kind of local “magnification” or deformations in any of our experiments. Instead, we observed more global changes in the rate maps – which inspired the title of the work, “Global Distortions from Local Rewards”. Thus we challenge the conventional interpretation that reward modulation results in local magnification of the grid cell lattice as too simplistic an explanation. Indeed, because grid cells provide a global code for space, we propose that reward modulation should result in global deformations of spatial responses in MEC, as observed in our piRNN experiments. **Answer to question 3. on link between loss function and diffeomorphism theory** You raise a thought-provoking question about how a grid cell deformation, modeled as a diffeomorphism of the 2D environment, depends on the chosen loss function. Dorrell & Latham et al., 2023 showed that hexagonal grid cell representations result from minimizing a particular loss function subject to biological and representational constraints. Their loss function encourages the separation of neural representations of different positions in space, and includes a term that weighs the importance of separating any pair of positions x and x’. This term is directly analogous to the saliency map we introduce in the loss of our piRNN. Because Dorrell & Latham et al. do not study grid cell deformations, we believe that the framework we introduce in our paper for studying grid cell deformations opens a new route to make progress on this question, complementing existing work in the literature. We note that this question is of utmost importance to neuroscience and AI: given some task and objective function, can we predict and explain the representations learned by neural systems? 
**Conclusion** We thank you for your thorough review and specifically for prompting us to discuss the disconnect between the results from the piRNN experiments and the real experiments, as well as its link with our theoretical framework. We hope that we have addressed your concerns and we look forward to discussing these topics further. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thank you for your response. I appreciate the new experiments. Thanks also for discussing the disconnect between the two findings. I hope that this will be clarified in the manuscript as well. As I said earlier, the theory itself is a novel contribution and I hope going forward this will help us understand experimental findings. I am raising my score to a 6 in response to the authors' detailed rebuttal. --- Reply to Comment 1.1.1: Comment: Thank you very much for your feedback throughout the process and taking our rebuttal into consideration. We highly appreciate your detailed questions and constructive criticism -- we believe our paper is stronger thanks to you!
Summary: The paper investigates the phenomenon of grid cell distortion in rewarded environments, a topic of interest in neuroscience. The authors propose a theoretical framework to understand how the 2D firing fields of grid cells deform while preserving their topological structure in high-dimensional neural space. By providing a position-dependent weighted loss term, the spatial saliency loss, the experiments showed that global distortions (ring, diffused, and band patterns) appeared in the firing patterns of grid cells. The authors also provided detailed analyses of piRNNs under various conditions, including frozen and trainable place cells. Strengths: The paper is well written, with a comprehensive review of related works, and makes a good point on an interesting topic. The authors propose a theory which offers a potential explanation of the deformation. It shows that global distortion (geometry) can be added while maintaining the toroidal topology for path integration. This finding could inspire future research in the field. I really like the simplicity of this work. The introduction of the spatial saliency loss (Eq. 7) is elegantly simple, building upon previous literature in a clear and interpretable way. The authors leverage the flexibility of piRNNs and specify an explicit scale at different locations while preserving the topological properties. Weaknesses: The work is based entirely on simulated results and is evaluated mostly on changes of the firing patterns. It would be better for the authors to show some comparison with the "real" data from rodent brains, i.e., whether the diffused or band units are also found in biology experiments. If the simulation results can match the experiments on some metrics, it would greatly enhance the confidence in this work. More experiments could be provided with changed settings, such as different numbers of grid cells or changes to s(x), to show the generality of the theory and method. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. 
In the experiment, s(x) is defined as a Gaussian whose mean is at the center of the map, which gives more weight to the transformation in the center area. I would expect to see firing patterns that magnify towards the center. Do you have any insights on why the distortion seems to appear uniformly across the whole map? 2. For s(x), have the authors tried changing the scale or the position of the distribution to see how the patterns change? 3. I'm curious whether the authors tried different numbers of grid cells or modules in the experiments. Does the distortion need a larger number of cells to form? 4. In Figure 5, grid cells in the same module may exhibit different distortion patterns. For example, in module 2, there are diffused and ring units. Are there any insights into why this happens? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitation of this work is well illustrated in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you very much for the thorough review and for your time. We address your comments and questions below. **On real neuroscience experiments:** You make the very important point that linking the theory with “real” data from rodent brains is important to increase the confidence in this work. Thank you for bringing it up. We think that this point deserves additional attention and we discuss it below. The existing experimental neuroscience data on such reward-modulated deformations of grid cell firing patterns is limited in number and in the strength of the conclusions. We are aware of the results from [Boccara et al. 2019, Butler et al. 2019, Krupic et al. 2018, and Wang et al. 2021]. Boccara et al. 2019 in particular motivated our approach, and we explained and discussed its results within our theoretical framework. To summarize: Boccara et al. observed an attraction of grid cell firing fields towards the reward – a phenomenon called “attracted units” in our paper. Furthermore, band units are well described in the literature – see Krupic et al. 2012. Are you aware of any other recent public datasets that may be relevant? If so, we would be happy to discuss them as well. Due to the currently limited experimental data, we chose to study in silico ANN-based experiments inspired by the existing, real neuroscience experiments. Therefore, the scope of our work is as follows: we investigate how the representations learned by artificial path-integrating RNNs (piRNNs) change to account for “saliency” in the environment, in a way that is (1) interpretable, and (2) inspired by neuroscience experiments. We observe that after training with saliency conditions, our piRNN learns a more diverse set of neural representations (ring neurons, diffused neurons, band neurons) compared to those observed earlier in Xu et al. and Gao et al. – while still maintaining the toroidal topology. 
We then build a theoretical framework to explain how different deformations in the individual rate maps affect the geometry of the neural representation manifold, which is novel in the literature. Unfortunately, being a theory lab, we cannot perform additional experiments on real rodents ourselves. Yet, we believe that our geometric study of piRNNs already provides interesting results for the study of RNNs, which is an important field of research in AI/ML. Additionally, we hope that our work will inspire experimental neuroscientists to perform corresponding experiments on rodents. From that perspective, it might even be preferable that theory and experiments come from two separate research teams, as the final validation on real data would be even stronger. **On additional piRNN experiments:** Thank you for proposing this more comprehensive set of experiments. We have run them; please see the global response on experiments. We believe that these additional results strengthen our paper and we appreciate your suggestion. **Answer to question 1.): why such global distortions?** You asked why the distortions of grid cell firing patterns appear uniformly, while the reward (saliency) is local and localized at the center of the space. This is a great question! We also expected to see some local distortions of the grid cell firing patterns, given the claims from neuroscience experiments in Boccara et al. However, to our initial surprise, we did not observe this kind of local “magnification” or deformation in any of our experiments. Instead, we observed more global changes in the rate maps – which inspired the title of the work, “Global Distortions from Local Rewards”. Thus we challenge the conventional interpretation that reward modulation results in local magnification of the grid cell lattice as too simplistic an explanation. 
Indeed, because grid cells provide a global code for space, we propose that reward modulation should result in global deformations of spatial responses in MEC, as observed in our piRNN experiments. **Answer to question 2. on piRNN experiments that vary the saliency map** Thank you for proposing additional experiments with variations to the saliency map s(x)! Please see our results in the experiments section of the global response. **Answer to question 3. on the link between distortion and number of grid cells/modules** While we have not run experiments varying the number of grid cells, this is an interesting modification we will investigate in the near future. We would like to emphasize that the number of modules, as we have defined them, is an emergent property of the trained network and thus we do not have direct control over it. How the number of emergent modules depends on various experimental parameters is a very interesting question that needs to be further explored, but is out of scope here. **Answer to question 4. on why different distortion patterns appear** Thank you for the great thought-provoking question. We had not considered why different distortion patterns jointly appear, i.e., why we observe both diffused and ring units after the introduction of the reward. Your question also prompts the follow-up question: what role can diffused units, band units, and ring units play? To bring some insight to this question, in our new Fig. N4, we show that the grid cells contributing to the coding of the origin (the most visited location), and not the center of the saliency map (the reward location), are robust to global distortions during the saliency training. In other words, as a secondary objective is introduced, grid cells with projections to the place fields at the edges are more likely to become distorted. **Conclusion** We hope that we were able to address your comments and resolve your questions. 
These discussions have been beneficial for us, and we will gladly incorporate them into the final version of the paper. If you have any remaining questions, we'd be happy to discuss further! --- Rebuttal Comment 1.1: Comment: Thank you for your detailed rebuttal. The results are indeed inspiring. I hope the authors can have theoretical breakthroughs along this direction. I believe this paper deserves to be accepted and would like to maintain my score. --- Reply to Comment 1.1.1: Comment: Thank you very much for your thorough engagement with our work -- your feedback and questions have helped us strengthen the paper. We appreciate your encouraging words!
Summary: This paper provides a mathematical theory of how spatially local reward distortions can lead to global representational distortions in grid cells. Strengths: The paper is extremely well-written, self-contained, nicely motivated from biology, and mathematically elegant. As someone who does not know the grid cell theory literature that well, I really appreciated learning from this paper. Weaknesses: The paper seems to be somewhat similar to previous work in the literature (e.g., Xu et al, Conformal Isometry of Lie Group Representation in Recurrent Network of Grid Cells), impacting its novelty. Similarly, a look at the literature shows that there are hundreds of theory papers on grid cells (which makes sense, they are a beautiful and simple phenomenon), so the paper is not particularly original in this regard. Technical Quality: 4 Clarity: 4 Questions for Authors: - Could the authors please clarify equations (1) and (3)? I am confused by them. In particular, (1) seems to suggest that $\phi(x) = Qr(x)$. But equation (3) suggests that they are not equal (otherwise $L_{error}$ would always be zero). Perhaps relatedly, where does the position estimate $\hat{x}$ enter into the loss? Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for the thoughtful review, and address your comment and questions below. We look forward to hearing from you in case you have any additional comments! **Answer to weakness on Originality/Novelty:** We agree that the originality and novelty of our approach could have been made clearer in the original paper, which did not emphasize these points enough. In the final version, we propose to add a subsection 2.3 in Section 2 on Background & Related Works to explain the following: In this work, we use the supervised piRNN approach as a framework to study the deformations in grid cell firing patterns reported in the experimental neuroscience literature. We do so by modifying the piRNN’s supervised loss in an interpretable way to account for varying saliency of space. Particularly, our saliency training allows for systematic study of two highly entangled behavioral variables: i) where the rewards are located and ii) which locations the agents have the tendency to visit the most. To our knowledge, this approach is completely novel in the literature. In fact, in our new Fig. N4, we show that the grid cells contributing to the coding of the origin (the most visited location), but not the saliency map (the reward location), are robust to global distortions during the saliency training. In other words, as a secondary objective is introduced, grid cells with projections to the place fields at the edges are more likely to become distorted. To our knowledge, we are the first to make this prediction, which can be tested experimentally, for instance, by introducing both reward and landmark locations. Furthermore, we introduce a mathematical, geometric framework to explain and quantify the firing patterns’ deformations observed in our experiments; this is also novel and original. 
Finally, we show “that global distortion (geometry) can be added while maintaining the toroidal topology for path integration” (citing reviewer yMbD), which is true once the read-out weights to place fields are frozen, but not otherwise! One related work is Nayebi et al., “Explaining heterogeneity in medial entorhinal cortex with task-driven neural networks” (NeurIPS 2021). The authors were also motivated by results from experimental neuroscience on reward-modulated neural responses by grid cells. While not the central question of their work, Nayebi et al. propose a modification of training piRNNs to account for reward modulation: they modify the way training paths are generated in the presence of reward. Specifically, the authors modified the generated training paths to be either (1) direct movement toward a reward location, called “pure exploitation”, (2) a random walk, as is standard in the piRNN literature, called “pure exploration”, or (3) an intermediate policy consisting of a mix of movement toward reward and random walk exploration. Their results are consistent with the existence of nontrivial reward-modulated response changes in piRNNs. However, they did not seek to characterize the specific deformations to the neural responses, as we do in our work. **Answer to question on Equations (1) and (3):** Thank you for pointing this out. On the one hand, there is $\phi(x_t)$, which is the vector of place cell activity corresponding to position $x_t$. This is an “encoding” of 2D position into a place cell code. On the other hand, we have $\hat{\phi}_t = Qr_t$, which is a linear readout of the recurrent grid cell activity $r_t$. We apologize for the typo in Equation (1), which thus should have been: $\hat\phi = Qr$ and $\hat x = \text{argmax} \, \hat \phi.$ Successful path integration here means that $\hat \phi_t \simeq \phi(x_t)$, captured in the $L_{error}$ term. 
To further highlight this point, we can rewrite Equation (3) as $$L_{error} = \sum_{t=1}^T \|\phi(x + …) - \hat\phi(x + …)\|^2 = \sum_{t=1}^T \|\phi(x + …) - Qr(x + …)\|^2.$$ We thank you for catching this. **Conclusion** We once again thank you for your time and attention. We hope that we were able to resolve your concern. If not, we look forward to discussing these topics further! If you believe we were able to adequately address your concerns, we hope that you will consider supporting our work with a strong acceptance! --- Rebuttal 2: Title: Concerns Addressed, Confusion Cleared Comment: I thank the authors for addressing my concerns, and for clearing up my confusion regarding Equation (3). I have accordingly increased my score to a Strong Accept. --- Rebuttal Comment 2.1: Comment: Thank you very much for your kind words, time, and consideration. We appreciate your encouraging words and please let us know if you end up having additional questions!
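For concreteness, the corrected relationship between the place-cell encoding $\phi(x)$, the readout $\hat\phi = Qr$, the decoded position $\hat x$, and $L_{error}$ can be sketched numerically (toy dimensions, 1D positions, and random weights/states of our own choosing; not the actual piRNN implementation):

```python
import math, random

# Minimal sketch (toy sizes; not the actual piRNN): phi(x) encodes position x
# into place-cell activity; the readout is phi_hat = Q r; the decoded position
# is x_hat = argmax phi_hat; and L_error = sum_t || phi(x_t) - Q r_t ||^2.
random.seed(0)
P, G = 8, 5                               # place cells, grid cells (toy)
centers = [p / P for p in range(P)]       # assumed 1D place-field centers

def phi(x):                               # place-cell encoding of position x
    return [math.exp(-(x - c) ** 2 / 0.02) for c in centers]

Q = [[random.gauss(0, 1) for _ in range(G)] for _ in range(P)]  # readout matrix

def readout(r):                           # phi_hat = Q r
    return [sum(Q[p][g] * r[g] for g in range(G)) for p in range(P)]

def decode(r):                            # x_hat = argmax over place cells
    ph = readout(r)
    return centers[max(range(P), key=ph.__getitem__)]

xs = [0.1, 0.3, 0.5, 0.7]                                  # trajectory x_t
rs = [[random.gauss(0, 1) for _ in range(G)] for _ in xs]  # random states r_t

L_error = sum(sum((a - b) ** 2 for a, b in zip(phi(x), readout(r)))
              for x, r in zip(xs, rs))
print(L_error > 0)  # True: with untrained r_t, phi(x_t) != Q r_t in general
```

Note that $\hat x$ does not enter the loss directly: training drives $\hat\phi_t$ toward $\phi(x_t)$, and $\hat x$ is only decoded from $\hat\phi_t$ afterwards, which mirrors the rebuttal's answer to the reviewer's question.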
Rebuttal 1: Rebuttal: We thank the four reviewers for their time and attention. For this rebuttal, we ran additional experiments to draw a more complete picture of the phenomenon of firing pattern distortions, and expanded on our theory to address reviewers’ concerns. Along the way, we also found some new exciting results! **Additional experiments** Following suggestions from all 4 reviewers, we performed additional experiments by varying: 1. the location of the reward [$x_*$ in Eq. 8], with $x_*$ in [(0.5,0.5), (0.8,0.8), (0.5,0.8)] 2. the overall magnitude of the reward [$s_0$ in Eq. 8] in [1,10,100] 3. the spread of the reward [$\sigma_*$ in Eq. 8] in [0.05,0.1,0.2,0.5] These experiments, coupled with a new statistical analysis (explained in Figs. N1-N3), confirmed the consistency of our finding that about one-third of the toroidal cells become distorted during the saliency training, regardless of the hyperparameters. Moreover, we also realized that the distortions did not emerge in random populations of cells, but rather in those that did not preferentially send read-out weights to the place field at the origin (Fig. N4). Notably, even when the saliency map was placed in a location other than the origin (reward location), the global distortion likelihood still decreased towards the origin (most visited location), providing an interesting contrast to the prior work that has observed local distortions towards reward locations. To date, prior experimental work had not distinguished the tendency of the animals to visit familiar locations from the reward valence. Thus, our synthetic training allowed us to disentangle these aspects and bring forth new biological predictions, i.e., that global (but not necessarily local) distortions to grid cells may be signs of continual learning towards secondary objectives.
Future explicit tests of these ideas can be performed experimentally by introducing both reward and landmark locations to the path-integrating RNNs and/or freely behaving animals! Overall, after saliency training across several hyperparameters, we observed that - the toroidal topology of the neural manifolds is preserved (data not shown), - diffused, band, and ring neurons consistently emerged as the leading distortions (Figs. N1-N3 for _the new quantification analysis_), and - the distortions did not emerge in random populations of cells, but rather in those coding locations furthest away from the origin (Fig. N4, _new result_). Finally, we want to note that we ran slightly more than 650 experimental configurations (saliency training), which took on average around 3 hours per experiment. Overall, we used 2400 GPU hours, not counting the additional analyses we performed on top of the saliency training. We will make all of these models and our analyses publicly available, which we believe is an important step forward for the field. We kindly ask that the reviewers consider this as a contribution! **Link between theory, results on piRNN experiments versus results on real experiments** In addition to our experimental efforts, we have also performed additional theoretical investigations and expanded on our geometrical framework (to address primarily the concerns of Reviewers STz6 and BFFn). Notably, we complemented our current theory with a framework of global distortions based on diffusion processes. This represents a simple modification, and mathematical refinement, of Section 3.2, “Diffused Units: Smoothing of Firing Fields”. Yet, despite its simplicity, it now characterizes all three phenomena: diffused units, band units, and ring units.
Below, we provide the intuition behind the new framework that can model ring, band, and diffused deformations of the grid cell lattice: - *Diffused units*: In this work, we show that diffused units, observed in practice in our piRNN experiments, lead to smaller neural manifolds in neural state space and leave the energy budget unchanged. Diffused units can be explained by 2D isotropic smoothing of the original grid lattice. - *Band units*: These units emerge as anisotropic smoothing of the firing fields, which can be useful, for example, to incorporate direction information. We show how band units, if they emerge in large fractions, can lead to a collapse of the toroidal manifold into a circle. In our experiments, we observe only a small fraction of these, and thus the toroidal manifold structure is preserved. - *Ring units*: These units emerge after angular smoothing of firing fields, which can be useful, for example, to encode angle-invariant distance information. Similar to before, the energy budget is unchanged. Notably, we observed ring units even when the saliency map was not at the origin, hinting that these units may be more relevant for coding information about frequently visited space (here, the origin), which is also angle-independent. **Conclusion** To conclude, our theory does not explain which firing patterns should or should not emerge (which we now explicitly discuss in the paper). Instead, we introduce a rich mathematical framework that (1) quantifies all four firing field deformations, observed across piRNN and real experiments, with group actions, and (2) relates the firing field deformations to neural manifold deformations. However, we recognize that one part of the theory (group action by diffeomorphisms) explains the results observed in real experimental data, while another part (group action by diffusion) explains the results observed in our piRNN experiments.
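As a purely illustrative sketch of the isotropic versus anisotropic smoothing described above (the synthetic lattice and smoothing widths are our own assumptions, not the authors' code; a square lattice is used for simplicity instead of a hexagonal grid):

```python
import numpy as np

def gaussian_kernel(sigma):
    # normalized 1D Gaussian kernel, truncated at ~3 sigma
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth2d(field, sigma_x, sigma_y):
    """Separable Gaussian smoothing with wrap-around (toroidal) boundaries."""
    def circ(v, sigma):
        n = len(v)
        # tile 3x and keep the middle to emulate circular convolution
        return np.convolve(np.tile(v, 3), gaussian_kernel(sigma), 'same')[n:2 * n]
    out = np.array([circ(row, sigma_x) for row in field])      # smooth along x
    return np.array([circ(col, sigma_y) for col in out.T]).T   # smooth along y

# hypothetical periodic firing lattice on a 64x64 arena
n = 64
t = np.linspace(0, 2 * np.pi * 3, n, endpoint=False)
xx, yy = np.meshgrid(t, t)
grid = np.cos(xx) + np.cos(yy) + np.cos(xx + yy)

diffused = smooth2d(grid, 3.0, 3.0)  # isotropic smoothing -> diffused unit (lower contrast)
band = smooth2d(grid, 0.5, 8.0)      # anisotropic smoothing -> band unit (y-structure washed out)
```

Isotropic smoothing lowers the field's contrast while preserving its layout (diffused units); smoothing strongly along one axis washes out the variation along that axis, leaving stripes (band units). Angular smoothing around a center point yields ring units in the same way.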
Accordingly, results from real and piRNN experiments do not agree: the former finds locally attracted units, while the latter finds global distortions. While we were shocked by this disconnect at first, we now believe that it represents one of our most interesting findings in this work. All in all, we believe that truly uncovering these mysteries will require a collaborative effort. Since all our datasets will be made public, we also look forward to input from the full community! Pdf: /pdf/14421c0908a6b3d008aa8b0ae53197fc90099f1f.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Cluster-Learngene: Inheriting Adaptive Clusters for Vision Transformers
Accept (poster)
Summary: Cluster-Learngene is an innovative approach for initializing Vision Transformer models. It works by inheriting adaptive clusters from a large pre-trained ancestry model. The key features of this method include: - Adaptive Clustering: Cluster-Learngene adaptively clusters the attention heads and feed-forward networks within each layer of the ancestry model based on their density characteristics, forming compact and information-dense modules called "learngenes." - Efficient Initialization: The method uses priority weight-sharing and learnable parameter transformations to expand these learngenes, enabling efficient initialization of descendant models with varying scales and resource constraints. - Resource Customization: Cluster-Learngene tailors the model initialization to the downstream task resources, offering a more efficient alternative to the standard pretraining and fine-tuning approach. - Empirical Validation: Extensive experiments on multiple datasets demonstrate that Cluster-Learngene outperforms traditional initialization strategies and is competitive with more resource-intensive fine-tuning methods. - Scalability and Efficiency: The approach achieves faster convergence and higher data efficiency compared to models initialized from scratch, making it effective for scenarios with limited data. - Broad Applicability: The paper shows the effectiveness of Cluster-Learngene across different model scales and various downstream tasks, showcasing its versatility. In summary, Cluster-Learngene presents a novel and efficient strategy for initializing Vision Transformer models, addressing the challenge of resource constraints in downstream tasks. Strengths: This paper contributes to the field of deep learning with several key strengths: - Originality: It proposes an innovative method for initializing Vision Transformer models by adaptively clustering attention heads and FFNs, introducing the concept of learngenes.
- Quality: The research is thorough, with a well-defined methodology and robust empirical validation across multiple datasets. - Clarity: The paper is clearly structured and articulated, making the complex methodology understandable. - Significance: It addresses an important problem in AI (efficient model scaling for different resource constraints), with potential far-reaching impacts in various application domains. - Creative Combination: The paper successfully combines existing concepts in a novel way, leading to a unique and effective model initialization approach. - Removing Limitations: It offers a solution that overcomes the limitations of traditional model initialization, enhancing flexibility and efficiency. Weaknesses: - Lack of comparison to other clustering methods: The paper uses a density-based clustering approach, but does not compare this to other clustering methods like k-means. - Experiments focused on image classification: The experimental evaluation is primarily on image classification tasks. Testing on a broader range of vision tasks like object detection or segmentation would better demonstrate the generalizability of the approach. - The paper format can be improved: Table 2 should underline the highest result. Tables 4 and 5 should be three-line tables. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Can you offer more experiments on other architectures (ConvNeXt, hybrid architectures, etc.), datasets (VTAB, FGVC, etc.), or even modalities (like NLP tasks and models) to further validate the effectiveness of Cluster-Learngene? This, I believe, would be powerful proof of its generalizability. 2. There have been other methods in model merging; can you compare Cluster-Learngene with these methods [1,2] to highlight its advantages, in experimental and theoretical ways? [1] Yang, Enneng, et al. "Adamerging: Adaptive model merging for multi-task learning." arXiv preprint arXiv:2310.02575 (2023). [2] Yang, Enneng, et al. "Representation Surgery for Multi-Task Model Merging."
arXiv preprint arXiv:2402.02705 (2024). 3. How computationally expensive is the clustering step compared to the savings in downstream fine-tuning? Is there a break-even point in terms of number of downstream tasks where this becomes beneficial? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors provide a good discussion of limitations in the appendix, acknowledging that the performance of descendant models depends heavily on the quality of the ancestry model. The discussion of broader impacts is quite limited. Given that the method aims to make transfer learning more efficient, some discussion of potential positive impacts (e.g. democratizing access to good models with less computational resources) as well as negative impacts (e.g. potential to amplify biases present in large pre-trained models) would be valuable. The authors could expand on mitigation strategies for some of these risks. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive feedback. Below, I will respond to your questions. > Q1: Lack of comparison to other clustering methods. R1: Your question is thoughtful. In **Section 4.3.1**, we have compared clustering methods such as k-means and provided different values for the number $k$ of cluster centroids in k-means. The results are as follows: | *k-means*, $k=1$ | *k-means*, $k=2$ | *k-means*, $k=3$ | Ours | | ----------------- | ----------------- | ---------------- | --------- | | 80.12 | 81.02 | 81.41 | **85.38** | Cluster-Learngene not only outperforms in clustering efficiency but also adaptively adjusts the number of cluster centroids for each model layer, unlike *k-means*, which requires a predefined number of cluster centroids. > Q2: Experiments focused on image classification. R2: Thanks for your suggestion. We conduct additional segmentation experiments on ADE20K [A]. We set the base learning rate to $10^{-3}$ and train for 160K iterations with a batch size of 8. The results are as follows: | Method | T-Params (M) | I-Params (M) | FLOP (G) | mIoU | | ---------------------- | ------------ | ------------ | -------- | --------- | | Pretraining-Finetuning | 98.5 | 83.2 | 105 | 47.08 | | Heuristic-Learngene | 57.8 | 42.5 | 105 | 40.12 | | Cluster-Learngene | 74.7 | 59.4 | 105 | **48.30** | Here, “T-Params" and “I-Params" correspond to the **T**otal number of parameters in the downstream/descendant models and the number of parameters **I**nherited into the downstream/descendant models, respectively. We validate the efficient cross-task initialization capability of our Cluster-Learngene. On the other hand, Pretraining-Finetuning exhibits inferior performance due to the risk of negative transfer, compared to our Cluster-Learngene. [A] Bolei Zhou, et al. "Semantic understanding of scenes through the ADE20K dataset." Int. J. Comput. Vis., 127(3):302–321, 2019. > Q3: The paper format can be improved.
R3: We thank the reviewer for their constructive feedback and will enhance the paper by underlining the highest result in Table 2 and reformatting Tables 4 and 5 into three-line tables for improved clarity and readability. > Q4: Can you offer more experiments? R4: Thanks for your suggestion. (1) We have further supplemented the results for the **hybrid architecture**, such as LeViT-192 [B], which integrates convolutional and transformer components. We perform clustering and expansion on the MSA and FFN in stages 1-3 of LeViT-192. The experimental results are presented in the table below: | Method | CIFAR-100 | ImageNet | | ---------------------- | --------- | --------- | | Pretraining-Finetuning | 85.11 | 69.08 | | From-Scratch | 74.06 | 65.12 | | Heuristic-Learngene | 78.22 | 66.65 | | Auto-Learngene | 80.96 | 67.91 | | Cluster-Learngene | **86.27** | **71.60** | We fine-tune all downstream models for 500 epochs on CIFAR-100 and 50 epochs on ImageNet. It can be observed that our method, when extended to the hybrid architecture, also achieves the best results on both CIFAR-100 and ImageNet. [B] Graham, Benjamin, et al. "LeViT: a vision transformer in ConvNet's clothing for faster inference." *ICCV*. 2021. (2) For Section 4.2.3, in addition to the existing results on FGVC (CUB-200-2011, Stanford Cars, Oxford Flowers) and VTAB (CIFAR-100, Flowers102) datasets, we have also supplemented results from **FGVC** datasets such as Stanford Dogs and NABirds, and four **VTAB** datasets: SVHN, DTD, EuroSAT, and Resisc45.
The results are presented in the table below: | Method | Stanford Dogs | NABirds | SVHN | DTD | EuroSAT | Resisc45 | | ---------------------- | ------------- | --------- | --------- | --------- | --------- | --------- | | Pretraining-Finetuning | 75.41 | 78.33 | 97.57 | 84.88 | 97.32 | 96.53 | | From-Scratch | 61.45 | 64.99 | 91.23 | 73.74 | 92.49 | 90.60 | | Heuristic-Learngene | 71.68 | 73.12 | 94.81 | 77.30 | 94.88 | 94.25 | | Auto-Learngene | 73.53 | 73.65 | 95.16 | 79.85 | 95.02 | 94.37 | | Cluster-Learngene | **76.37** | **80.76** | **97.92** | **85.30** | **98.24** | **96.85** | Our Cluster-Learngene outperforms Pretraining-Finetuning and other previous Learngene methods, thereby demonstrating the strong initialization capability and generalizability of our approach. > Q5: Can you compare Cluster-Learngene with model merging [1,2]? R5: Thank you to the reviewer for providing these papers. We will cite them in the 'Related Work' section. At the conceptual level, Multi-Task Model Merging [1,2] aims to merge models for collaborative multi-task processing, whereas our Learngene is designed to **independently initialize** diverse models and generate **scalable sizes** to meet specific task demands and resource limitations. Empirically, we compare our method with the two Multi-Task Model Merging methods [1,2] on the Cars dataset. Our method achieves a performance of **89.87%** on DeiT-Small, outperforming the Multi-Task Model Merging methods [1,2], which show performances of 69.6% and 72.0% on the larger-scale ViT-B/32, respectively. > Q6: Computational expense and break-even point. R6: (1) For the largest model, DeiT-Base, the computational time for the clustering step is **less than 6 minutes** on a single GeForce RTX 3090 GPU, which is negligible compared to the several hours or even a full day required for fine-tuning downstream tasks.
(2) Indeed, our Cluster-Learngene reaches a break-even point with as few as two downstream tasks, indicating that the conditions for our algorithm to be beneficial are quite lenient. --- Rebuttal 2: Comment: Thank you for addressing many of the concerns I initially had with your manuscript. However, I maintain my original score due to two persisting issues: Performance in Practical Settings: While the theoretical contributions of your work are clear, the practical performance does not appear to significantly surpass the traditional industry-standard approach of pre-training followed by fine-tuning. The advantages of your method over existing techniques are not clearly demonstrated, and the problem statement and solution approach described in the abstract seem somewhat superficial. For your work to have a substantial impact, it would be beneficial to provide more concrete evidence or comparative analysis demonstrating its superiority in real-world applications. Reproducibility Concerns: The complexity of your proposed method and the lack of critical implementation details raise concerns about reproducibility. Without these details, it may be challenging for readers to replicate your results or compare them with their work, which could undermine the credibility and utility of your paper. I would encourage you to include these essential details or simplify the approach to facilitate better understanding and reproducibility by the wider research community. I hope these points are helpful for refining your paper. --- Rebuttal Comment 2.1: Comment: Thank you very much for your timely response and constructive suggestions. Below, I will address the questions that you have newly raised: > Q1: Performance in Practical Settings We compared against the traditional industry-standard approach of pre-training followed by fine-tuning and found that although our approach inherits fewer parameters, the performance is better.
Therefore, this result needs to be viewed comprehensively and the representative results are shown in the table below: | Method | Params (M) | iNat-2019 | Cars | **NABirds** | | ---------------------- | ---------- | --------- | --------- | ----------- | | Pretraining-Finetuning | 10.5 | 68.48 | 86.81 | 78.33 | | Cluster-Learngene | **7.5** | **71.09** | **89.87** | **80.76** | In response to your request, we are pleased to report that in our exploration of real-world applications, we have extended our presentation beyond the classification results featured in the paper. Specifically, we have supplemented our experiments with segmentation results in the rebuttal phase, as per your guidance. Moreover, to initialize $N$ downstream models of different scales, Pretraining-Finetuning would require pre-training $N$ times. However, our method only needs to utilize **one** pre-trained model, from which we can cluster and expand to initialize $N$ downstream models of different scales, saving the pre-training time for $N-1$ models. > Q2: Reproducibility Concerns Due to the organizing committee's policy against providing anonymized code links directly in the rebuttal response, we have sent the anonymized code link to the Area Chair (AC) as per the instructions. We kindly ask the AC to forward it to you, in the hope that this will address your reproducibility concerns.
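To illustrate the contrast drawn in R1 above between k-means (fixed $k$) and density-based clustering (data-driven number of centroids), here is a toy 1D sketch; the gap rule, the values, and the per-head statistic are our own illustration, not the paper's algorithm:

```python
import numpy as np

def density_clusters_1d(values, eps):
    """Cluster 1D values by splitting wherever the gap between sorted
    neighbours exceeds eps; the number of clusters is data-driven."""
    values = np.asarray(values, dtype=float)
    order = np.argsort(values)
    v = values[order]
    # new cluster starts at every gap larger than eps
    sorted_labels = np.concatenate(([0], np.cumsum(np.diff(v) > eps)))
    labels = np.empty(len(v), dtype=int)
    labels[order] = sorted_labels
    return labels

# hypothetical per-head statistic for one layer (e.g. a density characteristic)
heads = np.array([0.11, 0.12, 0.13, 0.55, 0.57, 0.91])
labels = density_clusters_1d(heads, eps=0.1)  # 3 clusters emerge, no k chosen in advance
```

With k-means, the centroid count would have to be fixed per layer in advance; the density-based rule above adapts it to the data, which is the property the rebuttal highlights.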
Summary: This paper seeks to address the issue of overgeneralizing the applicability of large pre-trained models in deep learning, particularly when faced with task-specific resource constraints. The authors propose Cluster-Learngene, a novel method that clusters critical internal modules from a large ancestry model and uses them to initialize descendant models of various scales. By adaptively clustering attention heads and feed-forward networks (FFNs) based on their density characteristics, Cluster-Learngene creates a flexible and efficient initialization process. The method also incorporates priority weight-sharing and learnable parameter transformations to expand the learngene. Extensive experiments demonstrate that Cluster-Learngene outperforms other initialization methods in efficiency and in customizing models to fit the resources available for downstream tasks. Strengths: 1. The problem studied in this paper is interesting and valuable. The paper is well-structured and clearly written, making it accessible and comprehensible to a broad audience. 2. The approach of adaptively clustering attention heads and FFNs based on density characteristics is innovative and adds a novel dimension to model initialization techniques. The introduction of priority weight-sharing and learnable parameter transformations effectively addresses the need for models to adapt to varying resource constraints. 3. The authors have conducted an extensive set of experiments to validate the effectiveness of Cluster-Learngene, including initializing descendant models of elastic scales and evaluating initialization results on different downstream tasks, both of which show significant improvements compared to baselines. Weaknesses: 1. The paper lacks a complexity analysis or specific time cost assessments for Cluster-Learngene, which are crucial for understanding its computational efficiency. 2. 
Regarding the choice of hyperparameters, the authors could provide an ablation study, particularly for parameters like ε, to better understand their impact on the model's performance. 3. Will the code be available for open source in the future? Technical Quality: 3 Clarity: 3 Questions for Authors: See Weaknesses. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: The authors have addressed the limitations and potential negative societal impacts of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive feedback. Below, I will respond to your questions. > Q1: The paper lacks a complexity analysis or specific time cost assessments for Cluster-Learngene, which are crucial for understanding its computational efficiency. R1: For the largest model, DeiT-Base, the computational time for the clustering step is **less than 6 minutes** on a single GeForce RTX 3090 GPU, which is negligible compared to the several hours or even a full day required for fine-tuning downstream tasks. > Q2: Regarding the choice of hyperparameters, the authors could provide an ablation study, particularly for parameters like ε, to better understand their impact on the model's performance. R2: In the **Appendix A.6**, we have analyzed the varying settings of the hyperparameter Eps (ε). The following table displays the results for different values of ε: | $\varepsilon=1$ | $\varepsilon=10$ | $\varepsilon=100$ | | --------------- | ---------------- | ----------------- | | 85.27 | **85.38** | 80.45 | $\varepsilon=1$ implies no FFN clustering, potentially causing negative transfer. $\varepsilon=100$ means clustering all FFNs in adjacent layers with identical head centroid counts, resulting in an excessive cluster of FFNs and subsequent degradation in the performance of initialized descendant models. Our Cluster-Learngene ($\varepsilon=10$) strikes a good balance between these issues. > Q3: Will the code be available for open source in the future? R3: We expect to release our code by late Sep.
Summary: The paper introduces Cluster-Learngene, a novel approach for initializing Vision Transformers (ViTs). This method clusters attention heads and position-wise feed-forward networks (FFNs) from a large "ancestry" model to form a condensed initialization core, termed "learngene." By leveraging the density characteristics of attention heads and FFNs, Cluster-Learngene effectively customizes models of elastic scales to meet the resource constraints of various downstream tasks. The approach not only preserves critical knowledge but also enhances efficiency by reducing redundancy and facilitating weight-sharing in descendant models. Strengths: - Originality: The clustering approach for attention heads based on density characteristics is innovative. - Quality: The empirical validation is extensive, comparing the proposed method against several state-of-the-art initialization techniques. - Clarity: Despite some dense sections, the overall exposition of concepts is clear. - Significance: The method addresses critical issues in the scalability and resource efficiency of model deployment, making it highly significant for applications in resource-constrained environments. Weaknesses: Insufficient Downstream Task Evaluation: The dataset assessments for downstream tasks are currently limited to classification challenges. The generalizability of this method to other tasks such as segmentation and detection remains unknown. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. The Cluster-Learngene method is primarily designed for models with full attention mechanisms. Given the emergence of attention variants aimed at reducing computational costs, such as sparse or low-rank attention, it remains to be seen whether this method can be effectively transferred and applied to these types of models. 2. The hyperparameter Eps (ε) is currently set manually and plays a crucial role in the effectiveness of the clustering. 
Could the authors provide ablation study data to explore the robustness of the algorithm with varying settings of this parameter? Confidence: 2 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Please refer to weakness and questions Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive feedback. Below, I will respond to your questions. > Q1: Insufficient Downstream Task Evaluation R1: Thanks for your suggestion. We conduct additional segmentation experiments on ADE20K [A]. We set the base learning rate to $10^{-3}$ and train for 160K iterations with a batch size of 8. The results are as follows: | Method | T-Params (M) | I-Params (M) | FLOP (G) | mIoU | | ---------------------- | ------------ | ------------ | -------- | --------- | | Pretraining-Finetuning | 98.5 | 83.2 | 105 | 47.08 | | Heuristic-Learngene | 57.8 | 42.5 | 105 | 40.12 | | Cluster-Learngene | 74.7 | 59.4 | 105 | **48.30** | Here, “T-Params" and “I-Params" correspond to the **T**otal number of parameters in the downstream/descendant models and the number of parameters **I**nherited into the downstream/descendant models, respectively. We validate the efficient cross-task initialization capability of our Cluster-Learngene. On the other hand, Pretraining-Finetuning exhibits inferior performance due to the risk of negative transfer, compared to our Cluster-Learngene. [A] Bolei Zhou, et al. "Semantic understanding of scenes through the ADE20K dataset." Int. J. Comput. Vis., 127(3):302–321, 2019. > Q2: Given the emergence of attention variants aimed at reducing computational costs, such as sparse or low-rank attention, it remains to be seen whether this method can be effectively transferred and applied to these types of models. R2: Thanks for your thoughtful question. In general, sparse and low-rank attention mechanisms reduce the computational load through specific acceleration techniques. Sparse attention achieves this by limiting each query to compute attention with only a local set of keys. Meanwhile, low-rank attention decomposes the attention weight matrix into a low-rank form, such as $\mathbf{A} \approx \mathbf{P}^{\top} \mathbf{Q}$, where $\mathbf{P}$ and $\mathbf{Q}$ are low-rank matrices.
However, they still produce complete attention head outputs. Therefore, our Cluster-Learngene can still calculate the Mean Attention Distance and apply it to these sparse or low-rank attention mechanisms. > Q3: The hyperparameter Eps (ε) is currently set manually and plays a crucial role in the effectiveness of the clustering. R3: In the **Appendix A.6**, we have analyzed the varying settings of the hyperparameter Eps (ε). The following table displays the results for different values of ε: | $\varepsilon=1$ | $\varepsilon=10$ | $\varepsilon=100$ | | --------------- | ---------------- | ----------------- | | 85.27 | **85.38** | 80.45 | $\varepsilon=1$ implies no FFN clustering, potentially causing negative transfer. $\varepsilon=100$ means clustering all FFNs in adjacent layers with identical head centroid counts, resulting in an excessive cluster of FFNs and subsequent degradation in the performance of initialized descendant models. Our Cluster-Learngene ($\varepsilon=10$) strikes a good balance between these issues. --- Rebuttal 2: Comment: Dear Reviewer 2s46, This paper received mixed ratings. I would really appreciate it if you could check the authors' responses and post your further concerns (if there are still remaining concerns). Thank you so much! Your AC
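For reference, the Mean Attention Distance statistic mentioned in R2 can be sketched as follows (the name follows its common usage for ViTs; the paper's exact formula may differ, and the 1D token positions and low-rank toy head are our own assumptions):

```python
import numpy as np

def mean_attention_distance(attn, positions):
    """Average query-to-key distance, weighted by attention probabilities.

    attn: (n, n) matrix whose rows sum to 1 (softmax output).
    positions: (n,) token positions."""
    d = np.abs(positions[:, None] - positions[None, :])  # pairwise token distances
    return float((attn * d).sum(axis=1).mean())

# toy 4-token head; even a low-rank head A ~ P^T Q yields a full (n, n) attention map
rng = np.random.default_rng(1)
n, rank = 4, 2
P, Qm = rng.normal(size=(rank, n)), rng.normal(size=(rank, n))
logits = P.T @ Qm                                        # low-rank attention logits
attn = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # row-wise softmax
mad = mean_attention_distance(attn, np.arange(n, dtype=float))
```

This is the point made in R2: because the (sparse or low-rank) head still produces a complete attention map, the statistic used for clustering remains computable.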
Summary: This paper introduces Cluster-Learngene, a weight initialization method for initializing downstream models of various sizes with pre-trained models. The proposed method is based on Learngene and includes the following two improvements: MSA and FFN centroids, which extract critical parameters and reduce redundancy; and priority weight-sharing, which is able to initialize downstream models with varying numbers of attention heads. Strengths: (1) The paper is overall well-written with sound illustrations; (2) As a weight initialization method, the proposed method makes sense and holds many potential applications; (3) Compared with Learngene, the proposed method can better adapt to downstream models of different scales. Weaknesses: (1) Section 3.1 and L183-196 seem to have messy formatting of formulas; for example, some letters are missing. This reduces the readability and overall quality of the paper; (2) Some key experiments are insufficient. For example, the counterparts of this work are the pre-existing Learngene methods, whereas in Table 1, only training from scratch is compared, and in Table 2, results on ImageNet are also lacking; (3) Some details of the proposed method are not explained clearly enough (see Questions). Technical Quality: 3 Clarity: 2 Questions for Authors: (1) After Adaptively Learngene Clustering, the number of layers obtained is variable. How to deal with that if it is not equal to the number of layers in the downstream model? (2) In priority weight-sharing (Figure 2), what if the number of head centroids is less than the number of heads in the downstream model? (3) Are all parameters of the downstream model (inherited or not) updated (or fixed) during training? (4) In L222, “The learnable parameters in Eqns. (7) are implemented through a nonlinear mapping such as a neural network with the rectified linear units (ReLU)”, does this mean that $\hat{W}_t$ is more than just a parameter?
Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The limitations have been discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive feedback. Below, I will respond to your questions. > Q1: Section 3.1 and L183-196 seem to have messy formatting of formulas; R1: We appreciate the reviewer's keen observation regarding Section 3.1 and lines 183-196. We apologize for any inconvenience caused and assure the reviewer that we will thoroughly revise this section to ensure all formulas are correctly formatted and legible. Below are the complete expressions of Equations (1) and (2) in Section 3.1: $$\mathbf{A}^h=\text{Attention}(\mathbf{Q}_h, \mathbf{K}_h, \mathbf{V}_h) = \text{softmax}\left(\frac{\mathbf{Q}_{h}\mathbf{K}_{h}^\top}{\sqrt{d_k}}\right)\mathbf{V}_{h} \quad (1)$$ $$\text{MultiHead}(\mathbf{Q}, \mathbf{K}, \mathbf{V}) = \text{Concat}(\mathbf{A}^1, \ldots, \mathbf{A}^H)\mathbf{W}^O \quad (2)$$ Here are the revisions made to L183-196: - When $H_d$ is divisible by $c_l$: The weights of head centroids are shared $\frac{H_d}{c_l}$ times in sequence. For instance, centroids of weights $\mathbf{A}^{(L,1)}$ and $\mathbf{A}^{(L,2)}$ each share their weights across four attention heads, which are then directly assigned to eight attention heads of the descendant model in layer $L$. - When $H_d$ is not divisible by $c_l$: The weights of the head centroids are sequentially shared $\lfloor \frac{H_d}{c_l}\rfloor$ times, followed by appending $\mathbf{A}^{(l,1)}, \ldots, \mathbf{A}^{(l,H_d \mod c_l)}$ at the end. As an illustration, we share the centroids of weights $\mathbf{A}^{(1,1)}, \ldots, \mathbf{A}^{(1,5)}$ once and then append $\mathbf{A}^{(1,1)}, \ldots, \mathbf{A}^{(1,3)}$, thus initializing eight attention heads of the descendant model in the first layer. For the attention heads in the descendant models, we introduce the hyperparameter $\omega = \frac{H_a}{H_d}$ to denote the factor by which the number of attention heads is reduced compared to the ancestry model. ...
According to the adjustments in the number of attention heads, the weights $\mathbf{W}^O$ of the projection layer are also proportionally pruned and then inherited by the descendant models. > Q2: Some key experiments were insufficient. R2: We thank the reviewer for the suggestion. (1) We have added a strong comparative method called 'Pretraining-Finetuning' in Table 1, as shown in the table below: | Model | $H_d$ | $L_d$ | Params (M) | FLOPs (G) | From-Scratch | Pretraining-Finetuning | Ours | | ----- | ----- | ----- | ---------- | --------- | ------------ | ---------------------- | --------- | | Tiny | 3 | 12 | 5.7 | 1.2 | 61.44 | 66.36 | **70.28** | | Small | 6 | 12 | 22 | 4.6 | 68.56 | 75.01 | **78.43** | | Base | 12 | 12 | 86.6 | 17.5 | 77.22 | 78.13 | **79.50** | From-Scratch trains models for 100 epochs, while Pretraining-Finetuning and our method only fine-tune the downstream models for 50 epochs. Compared with the other two methods, our approach demonstrates the best initial performance across different scales. (2) In Table 2, we have supplemented the results on ImageNet, as shown in the table below: | Method | I-Params | ImageNet | | ---------------------- | -------- | --------- | | Pretraining-Finetuning | 10.5 | 68.85 | | From-Scratch | 0 | 64.91 | | Heuristic-Learngene | 5.6 | 66.44 | | Weight-Transformation | 10.5 | 68.20 | | Auto-Learngene | 10.5 | 67.78 | | Cluster-Learngene | 7.5 | **71.36** | Our method consistently outperforms the baselines on the ImageNet dataset as well. > Q3: After Adaptively Learngene Clustering, the number of layers obtained is variable. How to deal with that if it is not equal to the number of layers in the downstream model? R3: This is a thoughtful question.
We have already addressed this issue in **Appendix A.3**: "According to the adjustments in the number of attention heads, the weights $\mathbf{W}^O$ of the projection layer are also proportionally pruned or expanded with the hyperparameter $\omega$ and then inherited by the descendant models." > Q4: In priority weight-sharing (Figure 2), what if the number of head centroids is less than the number of heads in the downstream model? R4: Even if the number of head centroids is fewer than the number of heads in the downstream model, we can initialize the heads in the downstream model multiple times with priority weight-sharing of head centroids. Therefore, our priority weight-sharing strategy does not require any particular size relationship between the number of head centroids and the number of heads in the downstream model. > Q5: Are all parameters of the downstream model (inherited or not) updated (or fixed) during training? R5: All parameters of the downstream model (inherited or not) are updated during training. However, because we leverage weight-sharing, the computational cost of our descendant models remains smaller than fine-tuning the entire pre-trained model. > Q6: In L222, "The learnable parameters in Eqns. (7) are implemented through a nonlinear mapping such as a neural network with the rectified linear units (ReLU)", does this mean that $W^t$ is more than just a parameter? R6: We apologize for any confusion caused by the description in our paper. The symbol $\mathbf{\widehat{W}}_{t}$ indeed represents the newly introduced learnable parameters that are used to expand the $t^{th}$ feedforward network (FFN). However, when implementing this FFN-expansion process, we not only apply a simple transformation with these learnable parameters $\mathbf{\widehat{W}}_{t}$, but also incorporate the ReLU (Rectified Linear Unit) activation function to provide an additional non-linear transformation.
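The priority weight-sharing rule described in R1 and R4 above (repeat the head centroids in sequence, then append the first $H_d \bmod c_l$ centroids) can be sketched in a few lines. This is an illustrative reconstruction from the rebuttal text, not the authors' implementation; the function name and the string placeholders standing in for centroid weight tensors are hypothetical, and the sequential interleaving of repeated centroids is an assumption based on the phrase "shared in sequence".

```python
def priority_weight_share(centroids, num_heads):
    """Initialize `num_heads` descendant attention heads from `centroids`.

    The centroid list is repeated in sequence floor(num_heads / len(centroids))
    times; if num_heads is not divisible by the centroid count, the first
    (num_heads mod len(centroids)) centroids are appended at the end,
    mirroring the two cases described in the rebuttal.
    """
    c = len(centroids)
    heads = centroids * (num_heads // c)   # full sequential repetitions
    heads += centroids[: num_heads % c]    # leftover heads, filled in priority order
    return heads

# Rebuttal's non-divisible example: 5 centroids initializing 8 heads in layer 1,
# i.e. one full pass over A^(1,1)..A^(1,5), then A^(1,1)..A^(1,3) appended.
layer1 = priority_weight_share(["A11", "A12", "A13", "A14", "A15"], 8)
# Divisible example: 2 centroids, each ending up shared across 4 of 8 heads.
layerL = priority_weight_share(["AL1", "AL2"], 8)
```

With this sketch, the Q4 scenario (fewer centroids than downstream heads) is handled by the same code path: the centroid list simply cycles more than once.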
--- Rebuttal 2: Comment: Dear Reviewer GgZj, This paper received mixed ratings. I would really appreciate it if you could check the authors' responses and post any remaining concerns. Thank you so much! Your AC
Rebuttal 1: Rebuttal: > Q1: Experiments focused on image classification: The experimental evaluation is primarily on image classification tasks. Testing on a broader range of vision tasks like object detection or segmentation would demonstrate the generalizability of the approach better. R1: Thanks for your suggestion. We conduct additional segmentation experiments on the ADE20K dataset [A]. We set the base learning rate to $10^{-3}$ and train for 160K iterations with a batch size of 8. The results are as follows: | Method | T-Params (M) | I-Params (M) | FLOPs (G) | mIoU | | ---------------------- | ------------ | ------------ | -------- | --------- | | Pretraining-Finetuning | 98.5 | 83.2 | 105 | 47.08 | | Heuristic-Learngene | 57.8 | 42.5 | 105 | 40.12 | | Cluster-Learngene | 74.7 | 59.4 | 105 | **48.30** | Here, "T-Params" and "I-Params" denote the **T**otal number of parameters in the downstream/descendant models and the number of parameters **I**nherited into the downstream/descendant models, respectively. These results validate the efficient cross-task initialization capability of our Cluster-Learngene. In contrast, Pretraining-Finetuning exhibits inferior performance compared to our Cluster-Learngene, due to the risk of negative transfer. [A] Bolei Zhou, et al. "Semantic understanding of scenes through the ADE20K dataset." Int. J. Comput. Vis., 127(3):302–321, 2019. > Q2: The hyperparameter Eps (ε) is currently set manually and plays a crucial role in the effectiveness of the clustering. R2: In **Appendix A.6**, we have analyzed the varying settings of the hyperparameter Eps ($\varepsilon$). The following table displays the results for different values of $\varepsilon$: | $\varepsilon=1$ | $\varepsilon=10$ | $\varepsilon=100$ | | --------------- | ---------------- | ----------------- | | 85.27 | **85.38** | 80.45 | $\varepsilon=1$ implies no FFN clustering, potentially causing negative transfer.
$\varepsilon=100$ means clustering all FFNs in adjacent layers with identical head centroid counts, resulting in excessively large clusters of FFNs and subsequent degradation in the performance of the initialized descendant models. Our Cluster-Learngene ($\varepsilon=10$) strikes a good balance between these issues. > Q3: How computationally expensive is the clustering step compared to the savings in downstream fine-tuning? Is there a break-even point in terms of number of downstream tasks where this becomes beneficial? R3: (1) For the largest model, DeiT-Base, the computational time for the clustering step is **less than 6 minutes** on a single GeForce RTX 3090 GPU, which is negligible compared to the several hours or even a full day required for fine-tuning downstream tasks. (2) Indeed, our Cluster-Learngene reaches a break-even point with as few as two downstream tasks, indicating that the conditions for our algorithm to be beneficial are quite lenient. Considering that real-world applications always involve multiple tasks, our approach provides clear benefits. > Q4: The discussion of broader impacts. R4: In practice, learngene strives to preserve the core knowledge from the original model, avoiding the redundancy found in large pre-trained models. Consequently, descendant models initialized with the learngene reduce the biases that are present in the original large pre-trained models.
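To make the role of $\varepsilon$ in R2 concrete (a small $\varepsilon$ keeps every FFN separate, while a very large one merges everything into a single cluster), here is a generic distance-threshold clustering sketch. It is not the paper's actual clustering procedure; the distance values and function name are hypothetical, and it only illustrates how an eps-style radius, like DBSCAN's neighborhood parameter, controls cluster granularity.

```python
def threshold_cluster(values, eps):
    """Greedy 1-D clustering over a sorted sequence: start a new cluster
    whenever the gap to the previous value exceeds eps.  A generic stand-in
    for a density-based eps-neighborhood criterion such as DBSCAN's."""
    clusters = [[values[0]]]
    for v in values[1:]:
        if abs(v - clusters[-1][-1]) <= eps:
            clusters[-1].append(v)   # within eps of the cluster's last point: join
        else:
            clusters.append([v])     # gap larger than eps: open a new cluster
    return clusters

# Hypothetical per-layer FFN distances (sorted).  A tiny eps keeps every layer
# in its own cluster (no FFN clustering); a huge eps merges all layers.
dists = [0, 4, 9, 30, 33, 80]
small = threshold_cluster(dists, eps=1)
large = threshold_cluster(dists, eps=100)
```

An intermediate eps (e.g. `eps=10` on these values) produces a mid-grained grouping, which is the regime the rebuttal argues balances negative transfer against over-coarse clustering.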
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Bandit-Feedback Online Multiclass Classification: Variants and Tradeoffs
Accept (poster)
Summary: The work studies multiclass classification with a finite label set $\mathcal{Y}$ and a model class $\mathcal{H}$ of $\mathcal{Y}$-valued functions. The analysis focuses on optimal mistake bounds as opposed to optimal regret. In particular, the goal is to find tight relationships between the optimal mistake bounds $\mathrm{opt}(\mathcal{H})$ for deterministic and randomized algorithms, against oblivious and adaptive adversaries, in the full and bandit feedback models. The main contribution is an upper bound relating $\mathrm{opt}(\mathcal{H})$ for randomized algorithms and adaptive adversaries in the bandit model to $\mathrm{opt}(\mathcal{H})$ for randomized algorithms in the full feedback model. The main result is proven through an auxiliary technical result proving a nearly optimal randomized mistake bound for the problem of prediction with expert advice in the $r$-realizable setting (where $r$ is the mistake bound of the best expert). Strengths: The work provides new results characterizing the price of bandit information in the optimal mistake bound for multiclass classification. The topic has been intensively studied in the literature on learning theory and is still of interest for the community. The technical contribution is original and nontrivial. The presentation is clear and the results are well placed in the context of previous works. It is obvious that the authors are very familiar with the topic. The connection with prediction with expert advice is interesting. Weaknesses: The contribution is more technical than conceptual. Many of the proof techniques are extensions of previous work, but the work is honest about it. The relationship between mistake bounds and regret bounds for the same setting could have been fleshed out better. It would be useful to mention absolute (as opposed to relative) upper and lower bounds on the various $\mathrm{opt}(\mathcal{H})$ whenever they are available.
Technical Quality: 3 Clarity: 3 Questions for Authors: Can you elaborate more on the relationship between mistake bounds and regret bounds for bandit multiclass classification? I am also referring to the extent to which proof techniques can be re-used in the other setting. Can you list the available bounds on the various $\mathrm{opt}(\mathcal{H})$ that only depend on the hypothesis class and the cardinality of the label class? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: There is not an explicit section about limitations, but the authors adequately point out the open questions. The work is theoretical with no immediate societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Response to weakness: The main goal of the paper is to measure the role of natural and well-studied resources given to the learner/adversary, and such bounds are inherently relative. However, it is well-known that $\mathsf{opt}_{\operatorname{full}}^{\operatorname{det}}(\mathcal{H})$ (and up to a factor of $2$ also its randomized counterpart) is exactly quantitatively characterized by the Littlestone dimension [1,2], which is a combinatorial dimension depending only on the class $\mathcal{H}$. Therefore, the bounds of Theorem 1.1 can be seen as absolute, and we agree that we should clarify this in the paper. Further, the upper bounds in Theorems 1.2 and 1.4 are also absolute, as witnessed by Equation (1). The absolute bounds implying the lower bound of Theorem 1.2 are explicitly written in lines 93-97. The absolute bounds implying the lower bound of Theorem 1.4 are explicitly stated and proved in Appendix F (Theorem F.1). In the next version of the paper, we will add the statement of Theorem F.1 to the main text. Response to question #1: The relationship between a mistake bound and its matching regret bound is "regret bound = mistake bound - $r$", where $r$ is the number of mistakes made by the best hypothesis in the class. Therefore, every mistake bound can be converted to a regret bound and vice versa via this equation. We will add this formal equation to the paper, to make this relationship clearer. Response to question #2: See response to weakness. References: [1] Nick Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2(4):285–318, 1988. [2] A. Daniely, S. Sabato, S. Ben-David, and S. Shalev-Shwartz. Multiclass learnability and the ERM principle. In COLT, 2011. --- Rebuttal Comment 1.1: Comment: Thanks for addressing my questions. Your "response to weakness" addresses my comment very well.
Your "Response to question #1" is missing the point: I wanted to know the extent to which existing proof techniques for bounding regret may be re-used for proving your mistake bounds. I am asking this because it would help relate your results with the existing results for regret in bandit multiclass classification. --- Reply to Comment 1.1.1: Comment: Thank you for responding and clarifying. In the context of prediction with expert advice, we discuss this in detail in lines 296-313. The main message of those lines is that, to the best of our knowledge, known regret bounds are proved for a different definition of $r$-realizability than ours, one that is in some sense easier for the learner. Our definition only requires that the best expert is inconsistent with the adversary's feedback for at most $r$ many rounds. Known bounds use a definition requiring that the best expert makes at most $r$ many mistakes. In our definition, even the best expert might make many more than $r$ mistakes. This definition is useful when proving Theorem 1.1, as explained in Section 2.1. However, if the adversary is oblivious, then both definitions of $r$-realizability coincide. The reason we cannot simply convert a regret bound in this setting to a tight mistake bound is that known regret bounds depend on the number of rounds in the game. Mistake bounds obtained from such regret bounds could only be tight in specific regimes where $r$ is relatively large. As for learning concept classes, our mistake bounds, as well as known regret bounds [1], are proved via a reduction to prediction with expert advice, so a similar discussion holds for this problem as well. [1] Amit Daniely and Tom Helbertal. The price of bandit information in multiclass online classification. In Conference on Learning Theory, pages 93–104. PMLR, 2013.
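For readers wanting a concrete reference point for the mistake-bound discussion in this thread: in the simplest case (full-information feedback and $0$-realizability, i.e. a perfect expert exists), the classic Halving algorithm already achieves a $\log_2 N$ mistake bound over $N$ experts with binary predictions. The sketch below implements only that textbook baseline; it is not the paper's algorithm, which handles bandit feedback and the harder $r$-realizable setting.

```python
from collections import Counter

def halving_mistakes(expert_predictions, true_labels):
    """Classic Halving algorithm for prediction with expert advice in the
    0-realizable, full-information setting: predict the plurality vote of the
    surviving experts, then discard every expert that disagreed with the true
    label.  With binary predictions, each learner mistake at least halves the
    surviving set, so a perfect expert implies at most log2(#experts) mistakes."""
    alive = set(range(len(expert_predictions)))
    mistakes = 0
    for t, y in enumerate(true_labels):
        votes = Counter(expert_predictions[i][t] for i in alive)
        guess = votes.most_common(1)[0][0]
        if guess != y:
            mistakes += 1
        alive = {i for i in alive if expert_predictions[i][t] == y}
    return mistakes

# 4 binary experts, one of them perfect, so at most log2(4) = 2 mistakes.
experts = [[0, 1, 0, 1],   # perfect expert
           [1, 0, 1, 0],
           [1, 1, 1, 1],
           [0, 0, 0, 0]]
labels = [0, 1, 0, 1]
m = halving_mistakes(experts, labels)
```

The gap the rebuttal highlights is precisely that under bandit feedback and the authors' weaker notion of $r$-realizability, no such simple vote-and-eliminate scheme applies directly.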
Summary: This paper studies the mistake bounds of multiclass classification with adversaries. It provides the mistake bound gaps between the bandit feedback setting and the full information setting, between adaptive and oblivious adversaries for randomized learners under the bandit feedback setting, and between randomized and deterministic learners. Strengths: - This work considers various settings. - The established gaps between the adaptive and oblivious settings and between randomized and deterministic learners are nearly optimal. - Several future directions are discussed. Weaknesses: - For the agnostic setting with oblivious adversaries, the mistake bound is not tight for large $r^*$. - The randomized algorithm proposed for the expert setting involves a minimax calculation, which is inefficient. Technical Quality: 2 Clarity: 2 Questions for Authors: N/A Confidence: 2 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: No potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Response to weakness #1: As stated in lines 310-313, we are mainly interested in the experts setting as a means for proving Theorem 1.1. We believe that the paper already explores a sufficient range of problems and variations within the setting of learning hypothesis classes. Therefore, we intentionally leave the oblivious adversary case of the experts setting for future work, as solving it may require a fundamentally different approach. Response to weakness #2: It is indeed an interesting open question to find a more natural and efficient algorithm for the experts setting. This question is discussed in detail in the open questions section. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thanks for your response, and I have no further questions.
Summary: This paper considers online multiclass classification in the adversarial setting with a focus on the worst-case expected number of mistakes as a performance measure. The paper studies the effects of information (i.e. feedback provided to the learner), adaptivity (of the adversary) and randomness (of the learner), and derives new upper and lower bounds relevant to each of these considerations. For information, the paper solves an open problem from Daniely and Helbertal (2013) and provides results showing the price of bandit versus full-information feedback for randomised learners. The proof of this theorem is the main technical contribution of the paper and relies on a reduction of the problem to a particular instance of prediction with expert advice, and the derivation of new bounds for the prediction with expert advice problem. The results give upper and lower bounds on the number of mistakes which are tight up to logarithmic factors. Similarly, near-matching bounds are derived showing the cost of facing an adaptive adversary rather than an oblivious one, and of following a deterministic policy rather than a randomised one. Strengths: The paper answers a number of open questions in the area, providing a comprehensive piece of work which covers several important aspects of the important problem of online classification. The theoretical contributions are non-trivial and well explained, particularly the sketches of the key ideas of Theorems 1.1 and 1.5 in Section 2, which highlight the main novel contributions of the paper. The connections to existing literature are very well established within the text. Weaknesses: The main weakness of the paper is its readability. A lot of key concepts are deferred to the appendices, presumably to highlight the most technically impressive contributions sooner, and I feel that the paper would be more accessible if more space were given to describing fundamental aspects of the problem in the introductory sections.
See the questions section for some specific suggestions, but a general commitment to making the paper easier to understand would be welcome. Technical Quality: 4 Clarity: 2 Questions for Authors: Lines 21-29 would facilitate easier understanding of the subsequent sections if more explicit detail of the problem setup were given here. Perhaps some of the definitions from lines 434-442 could be ported to here, or some additional details and a reference to Appendix A? Lines 66-69 describe the concepts of inconsistency and realisability only very briefly, in a way that is likely to be challenging for the unacquainted reader. Can you perhaps add some further details here? Line 153: the list of related literature preceding this is dense and it is hard to parse which papers contribute what from it. It is then not absolutely clear what the precise novelty is at this point: is this the first paper to derive any bounds for multi-class prediction with expert advice and bandit feedback? Line 135: maybe these new results should also be highlighted in the abstract if they are of independent interest? Confidence: 2 Soundness: 4 Presentation: 2 Contribution: 4 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Contribution of Theorem 1.1: We first want to make sure that the contribution of Theorem 1.1 is completely clear. In the summary, the following statement is made with respect to Theorem 1.1: "The results give upper and lower bounds on the mistakes which are tight to logarithmic factors." However, Theorem 1.1 establishes a tight upper bound on the role of information. This is especially important because proving a bound which is tight up to a logarithmic factor is possible even without using randomness, as stated in Equation (1). It is also known that shaving this extra log factor is impossible if the learner is deterministic, as stated in Equation (2). The open question that we solve is whether this extra log factor can be shaved by leveraging randomness in the prediction method. Theorem 1.1 answers this question affirmatively. Response to Weakness: We agree that some important concepts and discussions appear only in the appendix. Since our paper studies various problems and variations, this is unfortunately unavoidable due to space constraints. We sincerely tried to keep the most important discussions in the main paper. However, we completely agree with your valuable suggestions written in the questions section. We will incorporate those in the next version of this paper, and will also make another pass to see if any other changes can improve the paper's readability. We thank you for helping us to improve the paper! Response to question/suggestion #3: Many previous works studied regret bounds for prediction with expert advice with bandit feedback. However, converting those regret bounds to mistake bounds (which are our focus in this work) results in bounds which are generally extremely sub-optimal, as they depend on the number of rounds in the game. More details can be found in lines 306-310. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you for your response and clarifications.
I am inclined to retain my score and confidence score for the paper: it remains the case that I am not the most familiar with the setting it studies (so I am not fully versed in its significance and some technical details), but what I have been able to verify appears accurate and substantive, and will be suitably well explained with the promised clarifications.
Summary: This paper studies online multi-class classification in the mistake bound model. It focuses on understanding how various resources affect the optimal mistake bounds of the learner. These resources concern feedback models (bandit feedback vs full information), adversarial models (adaptive vs oblivious), and learning strategies (randomized vs deterministic). The paper provides a collection of nearly tight upper and lower bounds and addresses some open problems. To prove one of the results, the paper also presents new results for the problem of prediction with expert advice under bandit feedback. Strengths: - The paper addresses several important questions in online learning. Some of these questions have previously been studied for different settings, and this paper fills some of the gaps in the literature. The paper presents an interesting collection of nearly tight upper and lower bounds that are relevant to the learning theory community. In particular, it shows that in the bandit feedback setting, the adaptivity of the adversary and the randomness of the learner play a bigger role in the mistake bound than in the full feedback setting. - The results are both clean and thorough, and the paper generalizes some of them to the agnostic setting. - The new techniques for prediction with expert advice under bandit feedback can be interesting on their own. - The results lead to some interesting open questions and future directions. Weaknesses: - There are still some gaps in the provided upper and lower bounds. For example, in Theorem 1.2, the lower bound only applies to certain hard pattern classes. Also, the algorithm proposed for the prediction with expert advice problem does not seem practical. - The paper is highly technical and may be challenging for non-expert readers. Many important discussions are also deferred to the appendix.
- The results are interesting from a learning-theoretic perspective; however, it would be nice if the paper could discuss the practical implications of the findings. Technical Quality: 3 Clarity: 3 Questions for Authors: - Are there any unique challenges in the multi-class setting as opposed to binary classification? - Can you further discuss the implications of the lower bound in Theorem 1.4? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The settings are properly discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Response to weakness #1: It is indeed an interesting open question to find a more natural and efficient algorithm for the experts setting. This question is discussed in detail in the open questions section. Response to weakness #2: We agree that some important concepts and discussions appear only in the appendix. Since our paper studies various problems and variations, this is unfortunately unavoidable due to space constraints. We sincerely tried to keep the most important discussions in the main paper. In the next version of this paper, we will incorporate the suggestions of reviewer x7iL, and will make another pass to see if any other changes can improve the paper's readability. Response to question #1: Yes. Even with full-information feedback, multiclass learnability was only fully characterized recently, in [1]. Also, in binary classification there is no difference between bandit and full-information feedback, so the challenging nature of bandit feedback is not manifested in binary classification. Response to question #2: The implications of the lower bound in Theorem 1.4 are explained in detail in lines 121-134. The significant consequence of this bound is the fact that for certain classes, while exactly quantifying the deterministic mistake bound, the bandit-Littlestone dimension can be quadratic in the randomized mistake bound. On the other hand, there are some classes for which this dimension quantifies both deterministic and randomized mistake bounds. The conclusion is that a combinatorial characterization of randomized learners (which are very common in online learning), if it exists, cannot be obtained solely from the bandit-Littlestone dimension. References: [1] Steve Hanneke, Shay Moran, Vinod Raman, Unique Subedi, and Ambuj Tewari. Multiclass online learning and uniform convergence. Proceedings of the 36th Annual Conference on Learning Theory (COLT), 2023.
--- Rebuttal Comment 1.1: Comment: Thank you for the responses. I intend to maintain my score. The results seem interesting, but it would be really nice if the paper could be made more readable.
Rebuttal 1: Rebuttal: We thank the reviewers for taking the time to carefully read our work, and for their thoughtful comments and suggestions to improve it. We will make our best efforts to incorporate their valuable suggestions in the next version of this paper. We respond to specific issues raised by each reviewer in a comment to each review.
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Evidence of Learned Look-Ahead in a Chess-Playing Neural Network
Accept (poster)
Summary: The authors examine whether chess NNs encode future moves in their activations. The results suggest that Leela predicts future self-play moves and that the activations can be manipulated to change the predicted moves. Strengths: # originality Linear probes for different concepts are well known in the chess literature, but the activation patching and explicitly looking at future moves as concepts are novel. # quality The analysis looks good, but the lack of consideration of alternative hypotheses or attempts to disprove the results weakens the results. # clarity The paper is clear and flows well. # significance I think this work is interesting and presents some good ideas for chess AI and the broader XAI community. Weaknesses: My main concern with this paper is that it does not test the methods outside of a very limited scope. Even looking at puzzles where Leela gets the next move wrong would be a good first step, i.e., what is the accuracy of your move predictor when the model is wrong, and can you patch to get the correct move? I'm also not that surprised by this result and don't think the main motivation is addressed. The Leela NN is trained to predict the future actions of itself, and those are based on an explicit tree model of the game (MCTS). The model internally doing a depth-3 search seems very plausible, and I don't think it proves the motivating claim of "look-ahead in neural networks", as the heuristics could simply be on the depth-3 tree instead of the depth-1 tree. One option for this analysis would be to look at a chess NN that's not trained on self-play data (Maia, CrazyAra, etc.). Technical Quality: 2 Clarity: 3 Questions for Authors: Could you give a more formal definition of "look-ahead in neural networks" and explain how this work proves it? Does the move probe work on human games or non-puzzle positions? Figure 8, what is the accuracy measuring? There are 64 squares on a chess board, how is random getting above 2% accuracy?
What would a falsification look like in this method? The patching, for example, seems to show that you can disrupt the chosen square, but this assumes the activations are encoding this linearly. If they were non-linear or only partially in the patch, how would that change these results? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: Discussed above Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and feedback! > looking at puzzles where Leela gets the next move wrong would be a good first step, i.e. what is the accuracy of your move predictor when the model is wrong, or can you patch to get the correct move. We focus on puzzles that Leela gets right simply so we can apply our interpretability methods. For example, for the move predictor probes: what should be the “ground truth” for the 3rd move in cases where Leela gets the puzzle wrong? We could see if the probe can still predict the 3rd move of the correct continuation, but given that Leela got the puzzle wrong, there’s no reason to expect this. Leela might still use look-ahead along some incorrect continuation, so we could ask how good the probe is at predicting the 3rd move in this incorrect line. But of course, there are many different incorrect lines (vs one correct line), and we don’t have ground truth for which, if any, Leela is considering. For patching, it will massively depend on how invasive an intervention we perform. For example, patching the right squares in the final layer could just override the move output to be whatever we want. > The model internally doing a depth-3 search seems very plausible and I don't think it proves the motivating claim of "look-ahead in neural networks" We don’t follow; if the network is doing a depth-3 search (over possible move sequences), we’d certainly consider that an instance of look-ahead in neural networks. Note that we are not claiming look-ahead arbitrarily many steps into the future; is that the point of confusion? > Could you give a more formal definition of "look-ahead in neural networks" and explain how this work proves it? By look-ahead in neural networks, we mean internally representing future moves (of the optimal line of play) and using those representations to decide on the current move (see line 45).
Our probing results (section 3.3) are our most straightforward evidence for representations of future moves, though the residual stream patching experiments (section 3.1) also suggest this already. The patching experiments in section 3.2 give evidence of such representations being used the way we’d expect, and in particular to decide on the current move. For example, L12H12 seems to move information from future move target squares back to the target square of the next move, and a very targeted ablation in this head has outsized effects on the output. > Does the move probe work on human games or non-puzzle positions? All puzzles in the Lichess dataset we use come from human games; they are simply selected to be tactically interesting states. In many chessboard states, there are many plausible lines of play, so predicting several moves into the future is fundamentally much less feasible. Using puzzles means that at least there is a clear ground-truth correct line, and we can then check whether Leela represents it. > Figure 8, what is the accuracy measuring? There are 64 squares on a chess board, how is random getting above 2% accuracy? The “probe on random model” line in Fig. 8 is a probe trained on a randomly initialized model, which is different from a randomly initialized probe (or random predictions). Even the activations of a randomly initialized model contain information about the input; they just don’t contain more sophisticated features. So this baseline checks how well future moves can be predicted just by the probe itself, without interesting features from Leela. > What would a falsification look like in this method? If the experiments we ran didn’t yield non-trivial effects, we’d consider that evidence against look-ahead. For example, a probe trained to predict future moves could have achieved accuracy not much better than the baseline probe, or patching on future move squares could have had effects no bigger than patching on other relevant squares.
> the lack of consideration of alternative hypotheses or attempts to disprove the results weakens the results. We did implicitly consider alternative hypotheses in every one of our experiments. For example, for the residual stream patching experiment (section 3.1), we wondered whether future move target squares only have big patching effects because these squares tend to be “obviously”/“heuristically” important in the puzzle starting state. That is why we used a very strong baseline, by taking the maximum effect over other squares. While future move target squares might often be important for heuristics, so are many other squares, and under a non-look-ahead hypothesis, it seems likely that the maximum over other squares should typically be larger than the importance of one specific future move square. We have similar baselines to rule out simple alternatives in all our experiments. We realize that this reasoning might not always be apparent in the paper, so we’ll include the motivation for baselines more explicitly. Thanks for drawing attention to this! Of course, we can never fully prove that there's no alternative explanation for our results; we can only rule out specific alternative hypotheses. We don’t know of any concrete hypotheses not involving look-ahead that predict or explain our results, but we would be very curious to hear if you have a specific one in mind. > The patching, for example, seems to show that you can disrupt the chosen square, but this assumes the activations are encoding this linearly. If they were non-linear or only partially in the patch, how would that change these results? There seems to be a misunderstanding here. We do not assume any form of linearity for our patching results. We do operate under the hypothesis that information about future moves is localized to their squares, but this is something we argue for with our results rather than an assumption (see line 128). 
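As a generic illustration of the activation-patching operation discussed above (our own sketch, not the paper's code), the core step is: run the model on a corrupted input, overwrite one internal site with the corresponding activation from the clean run, and measure how much a scalar output metric recovers. The tiny two-layer "model", the patch site, and the indices below are purely illustrative assumptions.

```python
# Illustrative activation-patching sketch on a toy two-layer network
# (stand-in for patching a residual-stream site in a transformer).
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(16, 16))
W2 = rng.normal(size=(16, 1))

def forward(x, patch_from=None, idx=()):
    """Forward pass; optionally overwrite hidden units `idx` with
    activations taken from another run (the "patch")."""
    h = np.tanh(x @ W1)          # the internal site we can patch
    if patch_from is not None:
        h = h.copy()
        h[:, list(idx)] = patch_from[:, list(idx)]
    return (h @ W2).item()       # scalar effect metric (e.g. a move logit)

x_clean = rng.normal(size=(1, 16))
x_corrupt = x_clean + rng.normal(size=(1, 16))   # "corrupted board state"
h_clean = np.tanh(x_clean @ W1)                  # cached clean activations

out_clean = forward(x_clean)
out_corrupt = forward(x_corrupt)
out_site = forward(x_corrupt, h_clean, idx=range(4))   # patch a few "squares"
out_full = forward(x_corrupt, h_clean, idx=range(16))  # patch everything

# Patching effect at the site: how far the patched run moves away from the
# corrupted output. Large effects at specific sites (relative to baselines
# such as the maximum over other sites) suggest the information stored there
# matters for the output.
effect = abs(out_site - out_corrupt)
print(out_clean, out_corrupt, out_site, effect)
```

Patching the entire hidden layer recovers the clean output exactly, which is a useful sanity check; the interesting measurements are the partial patches, compared against baselines like the maximum effect over other sites.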
--- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. I continue to be concerned that this work is not falsifiable as presented. You only look at features in a highly specific dataset (Lichess puzzles that the model is successful on) and never checked to see if the features are absent when not looking at the dataset. This is also a result of your definition, which considers only the positive case, i.e., a model that represents all lines equally before discarding the results and deciding independently would also meet your definition. This is the _"Leela might still use look-ahead along some incorrect continuation"_ scenario that you mentioned. In particular, I am concerned by the lack of "real" games; the Lichess puzzles dataset only contains positions that meet certain easy to evaluate heuristics. It is impossible to determine if you are observing these heuristics or more robust features in this test. I would be much more confident of a claim of general look-ahead if you showed the effects outside of a cherry-picked set of samples. Explicitly, presenting a hypothesis for this work and explaining how your tests would disprove it would greatly improve things on this front; currently, a negative result simply suggests you're not searching hard enough. --- Reply to Comment 1.1.1: Comment: Thank you for the follow-up! > I continue to be concerned that this work is not falsifiable as presented. You only look at features in a highly specific dataset (Lichess puzzles that the model is successful on) and never checked to see if the features are absent when not looking at the dataset. Is your concern (1) that we should check that we get *negative* results on inputs where there is no look-ahead, as a sanity check/baseline? Or (2) that we should present *positive* results on a wider distribution? If (1), it's unfortunately unclear how to find inputs where we can rule out that look-ahead is happening. But we do have other types of baselines in each experiment. 
If (2), as described above, we only claim look-ahead on this specific distribution. So in this case, the question in our mind would be how interesting this claim is, rather than whether our evidence for it is sound. Please let us know in case we misunderstood your point! We unfortunately still don't see why you think our work "isn't falsifiable." > a model that represents all lines equally before discarding the results and deciding independently would also meet your definition. This is the "Leela might still use look-ahead along some incorrect continuation" scenario that you mentioned. We don't think so. If a model discarded results from considering future lines, then the representations related to those future lines would not influence the output of the model. So we would not consider this look-ahead under our definition (because it's lacking the "using those representations to decide on the current move" part). We perform interventions to check this in sections 3.1 and 3.2, so for such a model, we also wouldn't get most of the positive evidence of look-ahead we present. The "Leela might still use look-ahead along some incorrect continuation" scenario is different: here, Leela would not be *discarding* the look-ahead along incorrect lines, but instead would be using it to output an (incorrect) move. Please let us know in case we misunderstood your concern. > the Lichess puzzles dataset only contains positions that meet certain easy to evaluate heuristics. It is impossible to determine if you are observing these heuristics or more robust features in this test Could you clarify what you mean by "easy to evaluate heuristics?" 
In case you mean chess heuristics to easily find the best move (without look-ahead), then (1) as we've described in the paper (section 2.2), we've specifically tried to make the puzzles *not* solvable by simple heuristics, and expect they are much more difficult than typical chess states, and (2) our experiments give evidence of look-ahead without assuming that look-ahead is required to solve those puzzles (see also the beginning of our response to SpQ5). > Explicitly, presenting a hypothesis for this work and explaining how your tests would disprove it would greatly improve things on this front; currently, a negative result simply suggests you're not searching hard enough. Could you say more about what kind of hypothesis you are looking for here? A complete hypothesis in the sense of pseudocode for how Leela might play chess this well without look-ahead is clearly infeasible: all known algorithms for this level of chess performance involve explicit look-ahead/search, so if Leela wasn't using look-ahead, it would have to use some method or set of heuristics that haven't been discovered in decades of chess engine development. (This is certainly plausible a priori, but we naturally couldn't specify any such hypothesis concretely.) That is why we've focused on hypotheses to "explain away" our specific results in our response above (see the alternative explanation we considered and falsified for our results in section 3.1). As mentioned, we now intend to include this in the paper to make it explicit. If this kind of hypothesis is not what you are suggesting, it would be very helpful if you could elaborate a bit more. Thank you!
Summary: **Update after rebuttal:** To me personally, the authors' response clarifies all open questions and misunderstandings and adds interesting new results. I remain convinced that the work is ready for publication and interesting to the NeurIPS audience, which means my score remains 'Accept'. The paper investigates whether it is possible to reliably identify functional signatures of look-ahead in a Leela Chess Zero (policy) network. This analysis in a complex domain (chess) with a relatively large network is challenging. To do this, the paper uses three interpretability techniques for transformers: activation patching to measure the effect of specific interventions on neuron-activations and the sub-network that they influence, analysis and structured ablation of attention patterns, and training of simple read-out probes to test whether certain information is represented in the internal state or not. Additionally, the paper comes up with a method to automatically collect a large, high-quality dataset of challenging chess situations (that simpler chess engines cannot solve) that have a short and unique solution. This allows reliably identifying candidate-targets to aim the look-ahead investigation at. It also allows automatically constructing highly non-trivial interventions on the board state, which are crucial for meaningful activation patching. The paper finds clear and convincing evidence (from multiple angles and with various control experiments) that at least in these situations, the Leela Chess Zero network performs a one-step look-ahead of its next move (an opponent move is in-between, making it a two-step look-ahead in terms of game steps). Strengths: * Very well written paper, with clear intro, and explanation of the fairly involved methodology, and great supporting figures. 
* Very high-quality case-study of interpretability and analysis of a concrete capability/mechanism “in the wild” (meaning in contrast to networks trained on synthetic data to facilitate or simplify the analysis). By its very nature, interpretability work is typically bespoke, and while some general techniques can be developed, I think the field greatly benefits from a body of well executed case studies that provide at least an abstract recipe and approach for others to adapt. * Great experimental work, with convincing evidence, important controls, and multiple investigations aimed at the same question to provide strong evidence. Despite the challenging setting, experiments are conducted with care and rigor, and attention to ruling out some obvious alternatives that could lead to the same findings. * The filtered dataset is a contribution in itself, as well as the particular model used in this paper, which may easily be picked up by follow-up research. Weaknesses: * By the nature of a case study, the paper’s findings are limited to the particular network and the particular domain. The authors have managed to successfully identify and exploit aspects of the architecture (due to the network architecture, the internal state seems to map well onto a 2D chess board) and the data (filtered puzzle set with unique and short solution trajectories). I think this is crucial for the success of the current study and is highly non-trivial and original work, but naturally this means that the method cannot be straightforwardly applied e.g., to a LLM. I do believe though that the work here is highly inspiring to adapt the techniques to other models and domains. * (minor) Throughout the paper I was wondering why the policy network and not the value network was used. As the appendix says, both networks share the same torso and simply add relatively shallow heads, meaning the findings in the paper also apply to the value network. 
I think it would be good to mention this and a few more details in the main paper (such as training via supervised learning, and fine-tuning to remove history-dependence). I will list these below in ‘Improvements’. **Verdict:** The paper is a great case-study in interpretability and analysis, and I greatly enjoyed reading it. The question tackled in the paper is challenging, and I personally believe that the paper does a great job at producing a number of convincing pieces of evidence, with well chosen control experiments (such as a random network for the probes, and control-ablations for activation-patching and attention-ablations). While there is a challenge to transfer the method to other settings, the paper helps with this by clearly explaining the reasoning behind the various steps. Ultimately, I appreciate a well executed domain-specific case-study over a sloppy execution in a more general setting. Interpretability research is hard, particularly mechanistic interpretability, and I think that the field will be a composition of general techniques (which are also used in the paper) and exemplary case-studies for others to follow and apply to their research questions and domains. Therefore, I think the current paper may well have significant impact beyond the specific findings, which are limited to a particular network and a particular task. I currently recommend acceptance of the paper: all main claims are critically investigated and supported by evidence, the work is novel and very original, and I think is interesting to a wide part of the NeurIPS audience and may be quite impactful for follow-up work. **Improvements:** 1. I think it would be nice to mention a few more details about Leela in the main paper. In particular that the torso is the same as for the value network (and the value-head is relatively shallow and has no additional attention mechanism, so findings also hold for the value network). 
Maybe something like L425-428 in the appendix plus a brief description of the training (supervised on a set of high-quality trajectories, plus fine-tuning to remove history-dependency). 2. It would be interesting to see how the findings regarding look-ahead evolve during training. This is completely optional, and beyond the scope of the paper (and the authors may not have access to the original training procedure / checkpoints). But it would be interesting to see whether attention heads and the activation patching results develop gradually or sharply and whether that can be related to a similar increase in performance on the filtered puzzle dataset used in the paper. Technical Quality: 4 Clarity: 4 Questions for Authors: 1. Are there situations where the weaker Leela gets the puzzle right, but the stronger Leela does not? Or is the stronger Leela strictly better? 2. Related: how has the weaker Leela been trained? Is it plausible that, e.g. stronger MCTS during training of the weaker Leela means look-ahead is much less important, hence why it does not develop look-ahead (or does look-ahead only develop at sufficiently large and strong networks)? [It is perfectly fine to respond that you do not have the answers for these questions and not spend any more time on this; I am just curious.] 3. The probe accuracy in Fig 8 is relatively high in early layers - how does this fit with the activation patching results that show the largest effect only in medium-to-late layers 6-11? 4. As far as I am aware the precise details about the training process of this variant of Leela have not been published (in an academic venue). While this is beyond the scope of this paper, having these details (or as many as possible), e.g, in the appendix, would greatly help the scientific community to use this version of Leela as something that can be reliably reproduced. This may help boost Leela’s popularity for *scientific* research using openly available chess models. 
Similarly, the model and dataset used in this paper (with the fine-tuning) are a nice contribution. 5. L225 typo: “are seem”. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Limitations are nicely discussed in Sec 5. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful review and great suggestions! > Throughout the paper I was wondering why the policy network and not the value network was used. Great question, there was no particular reason for this choice except that we decided to focus on only one head for the purposes of exposition. We have re-run all our experiments with the value head (i.e. using the log odds of the win probability instead of the log odds of the top move as the effect metric). As you suspected, the results are qualitatively the same as for the policy head, see the PDF attachment to the general response. We’ll include these in the paper and also clarify that the network has a shared body and two small heads. > I think it would be nice to mention a few more details about Leela in the main paper. Thank you for the suggestion, we agree this would help readers and will move some information from the appendix to the paper! > It would be interesting to see how the findings regarding look-ahead evolve during training. We agree this would be very interesting to study, but also that it’s out of scope for this paper. There are public training checkpoints for at least some versions of Leela, so in principle, this would be feasible to study in future work. Applying our exact methods would require finetuning away the history requirement for each checkpoint, which should be possible (albeit somewhat computationally intensive if we want to use many checkpoints in order to even detect potential sudden transitions). > Are there situations where the weaker Leela gets the puzzle right, but the stronger Leela does not? Or is the stronger Leela strictly better? The weaker Leela very occasionally does better than the strong version “by luck.” For example, there could be a move that looks good to the strong Leela but is, in fact, very bad for a subtle reason that even the strong model misses. 
If the reasons in favor of this move are themselves somewhat subtle, then the weak model might not even consider it, and just play a mediocre move, rather than the strong model’s bad move. But apart from such edge cases, the difference in playing strength is quite large, and we would be surprised if there are any classes of states where the weak model is systematically stronger. > how has the weaker Leela been trained? Is it plausible that, e.g. stronger MCTS during training of the weaker Leela means look-ahead is much less important, hence why it does not develop look-ahead (or does look-ahead only develop at sufficiently large and strong networks)? Both models were trained using supervised learning on MCTS rollouts (so the loss functions are the same as in MCTS training, but the data comes from an existing strong network, rather than the network under training). Given that MCTS trains the network to predict the results of tree rollouts, we don’t think strong MCTS would make look-ahead less important (this would only be true if the network was optimized end-to-end to maximize the playing strength of the overall MCTS process). We’d indeed guess that look-ahead becomes more prevalent for stronger networks, but of course, this would require future work to actually test. > The probe accuracy in Fig 8 is relatively high in early layers - how does this fit with the activation patching results that show the largest effect only in medium-to-late layers 6-11? Good question, we are not entirely sure what the answer is. One possibility is that there is a collection of different mechanisms involved in look-ahead, and not all of them activate in the same cases or are placed in the same layers. Probes might learn to exploit look-ahead mechanisms that are present in early layers, whereas the more targeted activation patching experiments only pick up on a subset of look-ahead mechanisms. 
L12H12 is already an example of a mechanism that is sometimes involved in look-ahead but is important less often than look-ahead mechanisms in general (such as the piece movement heads, contrast figs. 5 and 7). So it’s plausible that there are also some mechanisms that *only* probing picks up on. > As far as I am aware the precise details about the training process of this variant of Leela have not been published (in an academic venue). While this is beyond the scope of this paper, having these details (or as many as possible), e.g, in the appendix, would greatly help the scientific community to use this version of Leela as something that can be reliably reproduced. Indeed, most information about Leela is only available on the Leela Discord (which is public but naturally not designed to convey all that information compactly). We’d be happy to extend appendix A with some additional information (but a full description of Leela’s training details would fill its own paper). --- Rebuttal 2: Title: Thank you for the detailed responses Comment: Thank you for answering my clarifying questions and commenting on my (mostly optional) suggestions for improvement. I am positively surprised to see a full analysis of the value-network too. Overall I am happy with the authors' responses and additional results. I stand by my original score - I think this is a great interpretability case study with interesting results, that is ready for publication, well executed, and interesting to the NeurIPS audience. I have also read the other reviewers' comments and authors' responses and consider all raised issues sufficiently addressed (though I will happily take into account reviewers' future comments in case they disagree). abVe and p7Jw seem to be mainly concerned by the dataset filtering and only showing results on this filtered dataset - while this is understandable criticism at first, I think constructing such a dataset in a reasonable manner is a contribution in itself. 
The difficulty is that a large neural action/value predictor behaves quite differently from a search-based chess algorithm; in many board states it may be that the neural system does not use look-ahead-search but instead relies on memorization and exploitation of (statistically) similar patterns. It is thus crucial to first identify states where look-ahead-search may be needed. I am also not too worried about the use of the puzzle dataset - previous works (e.g. the 'Grandmaster-level chess without search' paper that abVe mentioned) have found strong correlation between puzzle performance and actual game-playing strength across a variety of models. The paper claims (and is very clear about this) that convincing evidence of look-ahead-search can be found *in these situations*. It does not claim that this is the main mechanism that the network uses at all times. Without proper filtering, the evidence for this mechanism might quickly "drown" within the (potentially large) number of situations where no look-ahead search is performed. While it would be interesting to know "how often" the network uses its learned search, this question is beyond the scope of the paper, whose goal is to clearly establish that the network has learned to use look-ahead search at least sometimes (which I consider a highly non-trivial result). --- Rebuttal Comment 2.1: Comment: Thank you for your positive feedback about our additional results, and for taking the time to write detailed and insightful responses to our rebuttal and the overall discussion on our work so far.
Summary: This paper conducts a mechanistic interpretability analysis of Leela and shows evidence that it learns to look ahead in the network. Strengths: This paper provides a convincing answer to an important question: are networks like Leela's analogues of System 1-style pure intuition, or do they encode some amount of System 2-style calculation? The technique of corrupting the state and using activation patching is clever and effective. It's clear from the experiments that yes, there is some look-ahead being done. The authors are also careful to not overstate their claims: an actual broad search isn't necessarily being done, but at least some 1- or 2-step look-ahead is happening. Weaknesses: The paper is essentially an existence proof of lookahead, which is a useful contribution, but it would be nice to know something about the prevalence of lookahead. The dataset is chosen to maximize the probability that lookahead is happening. How often does it happen? I thought the claim that "look-ahead or other sophisticated algorithms should pick up on the importance of this difference, but shallow heuristics should mostly ignore it" required more justification. Some of the tactical patterns picked up by the method are so common that it's conceivable that a "shallow heuristic" is designed for them, rather than look-ahead. The lack of results/explanations regarding why the squares of the 2nd move aren't important weakens the argument somewhat. Why aren't there pawn or king heads, similar to the other piece types? Nitpick: the motivation in the first paragraph about chess being different from other domains because in other domains "models can solve their tasks with a single simple algorithm" also applies to chess. Technical Quality: 4 Clarity: 4 Questions for Authors: How often does look-ahead occur? What is the evidence that look-ahead is occurring as opposed to a shallow heuristic designed to pick up on common tactical patterns? 
Why aren't the squares of the 2nd move similarly important? Why aren't there pawn or king heads, similar to the other piece types? Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback and questions! We answer them below and clarify potential misunderstandings. > the claim that "look-ahead or other sophisticated algorithms should pick up on the importance of this difference, but shallow heuristics should mostly ignore it." required more justification. Some of the tactical patterns picked up by the method are so common that it's conceivable that a "shallow heuristic" is designed for them, rather than look-ahead. We’d like to clarify that this claim is not a load-bearing assumption for our results. The sentence you quote only motivates why we use corrupted inputs that are similar to clean inputs (in the sense that a weak model gives the same output): if we used corrupted inputs that differ too much from the clean ones, then too many network activations would differ and drown out the evidence of look-ahead. If our argument for look-ahead was behavioral, i.e., the fact that Leela can solve these supposedly difficult puzzles, then it would indeed be an issue if some of them are also solvable by shallow heuristics. But our argument instead rests on observations about internal representations, and these observations suggest look-ahead irrespective of what exactly the corrupted inputs are. For example, if we could show that future move target squares are unusually important under random corruptions, this would be just as convincing in our mind. In summary, we agree that shallow heuristics could likely pick up on some of the tactics, but we don’t think this weakens our arguments. > What is the evidence that look-ahead is occurring as opposed to a shallow heuristic designed to pick up on common tactical patterns? We think all three lines of evidence we present favor look-ahead over shallow heuristics. For example, a shallow heuristic (which decides on the immediate next move based on matching to common patterns) would not need to explicitly represent which future moves will actually be played. 
But we find in section 3.3 that such representations of future moves exist, as a look-ahead hypothesis would predict. We similarly think that hypotheses that don’t involve look-ahead would not predict or explain any of our other results, but please let us know if you have specific concerns about this. > How often does look-ahead occur? Good question, but unfortunately difficult to answer precisely, for two reasons: - It depends on where we’d draw the boundary for “look-ahead” (for example, which effect size would be sufficient in our various experiments). While we think our results are strong enough to conclude that look-ahead is involved in at least some inputs (those with high effect sizes), it’s much less clear where exactly to set the threshold. - Our methods can only establish a lower bound on how often look-ahead occurs. For example, throughout the paper, we consider representations on future move target squares. In principle, additional look-ahead mechanisms that use entirely different pathways could exist, which we might not pick up on. With those caveats in mind: Our dataset is 2.5% of the initial Lichess dataset (see lines 83 and 91). On this dataset, we think look-ahead is involved in most states (but as discussed, it depends on where we draw the boundary). However, these 2.5% of states were, of course, not directly selected for high effect sizes; rather, we only used a behavior-based filtering procedure. So the true number of states where look-ahead is important is likely significantly larger than this among Lichess puzzles. > Why aren't the squares of the 2nd move similarly important? As discussed in line 163: “We are unsure why the squares of the 2nd move aren’t similarly important. 
This may simply be because the opponent’s move is typically “obvious” in our dataset or because suppressing the opponent’s best response doesn’t reduce the quality of the 1st move.” If the two hypotheses we give there are right, the apparent unimportance of the 2nd move would largely be an artifact of our experimental setup (studying tactics puzzles where the current player is winning). But we can’t rule out that there is some interesting deeper reason that would require a more detailed understanding of Leela’s internal mechanisms. > Why aren't there pawn or king heads, similar to the other piece types? Great question! There are, in fact, pawn and king heads; they just involve a few complications, and so we decided to ignore them for simplicity. But we see that not mentioning them was a mistake; thank you for drawing attention to that! We’ll rectify this by adding a few comments to the paper: - A few king heads exist (based on attention patterns). But our dataset doesn’t contain many puzzles where the starting player moves their king on the first move, so we wouldn’t have much data for our piece movement head ablation. This is simply because king moves are much rarer in tactics puzzles. - Pawns move in different ways depending on context—they usually take one step forward, but they capture diagonally, and they can take two steps at once if they’re in their starting position. Based on attention patterns, it seems that there are distinct heads for these different types of movement, so we decided to ignore pawns to simplify the presentation in the paper. Again, we recognize this omission is also liable to lead to confusion, so we will bring this up in the paper. > Nitpick: the motivation in the first paragraph about chess being different from other domains because in other domains "models can solve their tasks with a single simple algorithm" also applies to chess. Thank you for pointing this out; our phrasing here is unfortunate and we’ll improve it. 
What we meant to gesture at is simply the difference in complexity between playing chess well and e.g., finding the path from the root of a tree to a specific leaf (which is the task studied in the most similar prior work). --- Rebuttal Comment 1.1: Comment: Thanks for your detailed responses.
Summary: This paper closely examines, using various forms of activation patching and probes, the inner workings of the state-of-the-art policy network of Leela Chess Zero (for the game of Chess). The authors find several different pieces of evidence that suggest it is likely that the network has learned to carry out some form of look-ahead search, at least in complex states that require this to arrive at the correct solution. Strengths: I found this to be a very interesting paper, and have little to remark on it. It's always good to see a paper that is actually trying to improve our understanding (and doing a good job at that), rather than just being "here is our new technique and it has bigger numbers". It is well written, using good examples and illustrations. Weaknesses: My only "important" criticism is that I object to the use of "existence proof" in both the abstract and the conclusion. I think this is too strong, and would suggest rephrasing to "evidence". I do find the evidence provided in the paper to be compelling, but still do not think it can be definitively described as proof rather than evidence. --- Minor nitpicky comments: - line 86 has a random period in the middle of a sentence (right before the footnote) - I feel like it would be more natural for the paragraph of lines 222-224 to be moved a bit earlier. It's essentially describing a part of the experiment setup, but after some of the results have already been presented and discussed. - Line 225: "are seem" Technical Quality: 3 Clarity: 4 Questions for Authors: 1) Probably a difficult question, but curious to hear your thoughts if you have any: is it definitively possible to draw a hard line between "heuristics" and look-ahead? Especially depth-limited look-ahead (like what you seem to be finding evidence for in this paper)? Here's what I'm thinking. 
If you only have a very basic heuristic (say, only material count), then you need at least 1 ply of search on top of that to find which moves let you capture something. But you could just add a smarter heuristic that counts the values of pieces your pieces can attack in one move, and then you no longer need one ply for this. With the most basic heuristic, you would need at least 2 plies of search to also consider the responses that the opponent can make. But, you could add a heuristic that counts which pieces defend their own friendly pieces, and that could partially account for what the second ply would typically be searching. Is there a limit to this? Could we conceivably create heuristics that detect the 3-ply situations explored in this paper, without search? 2) As a philosophical follow-up to the above: can we ever really distinguish between look-ahead search and heuristics, if the look-ahead search is depth-limited (I suppose any "learned look-ahead" in non-recurrent neural networks would always be depth-limited)? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: There is a good discussion of limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and insightful questions! We agree that “existence proof” was poor wording on our part and will rephrase that to “evidence” in both places, thank you for the suggestion. Thanks also for the minor comments, which we’ll incorporate. > is it definitively possible to draw a hard line between "heuristics" and look-ahead? This is a great question, and as you say, a full answer is probably a difficult philosophical problem. We give a few initial thoughts below. Drawing a very precise line around “heuristics” might not be possible (in the sense that the space between ad-hoc heuristics and general reasoning algorithms might be relatively continuous and it’s not exactly clear where the boundary is). Even so, we can probably say that certain algorithms do involve look-ahead and certain others don’t (while leaving the question open for some borderline cases). Implementing look-ahead by simply counting attacks/defenses in the current state, as you describe, is an interesting example of blurring this boundary. But capturing all relevant considerations this way likely only works well for 1- or perhaps 2-ply look-ahead. In the 3-ply case (which the paper focuses on), a piece might be moved twice (and already in the 2-ply case, heuristics would need to be very careful about counting defenders correctly to account for lines closing or opening up as pieces are moved). So while in some sense, there is no limit to what heuristics can implement—in the extreme case, a look-up table could implement arbitrary policies—we expect algorithms that explicitly use look-ahead to be far more efficient. 
> can we ever really distinguish between look-ahead search and heuristics, if the look-ahead search is depth-limited The thoughts outlined above suggest that a depth limit does not preclude a distinction between simple heuristics and look-ahead: while given enough capacity, both might be able to implement the same policies, they can differ in how much capacity that takes (and for sufficient depths, avoiding explicit look-ahead might be infeasible). Of course, our paper does not rest on these speculative conceptual claims; we aim to give evidence of explicit look-ahead rather than argue for its inevitability. (As one simple example, the method of counting attacks/defenses would not need to explicitly represent which future moves will actually be taken, so it would not predict our probing results.) --- Rebuttal Comment 1.1: Comment: I acknowledge that I have read the rebuttal, and thank the authors for still taking the time to engage in an interesting discussion when there were also other reviewers' comments present for which it was probably more urgent to respond. I do not plan to further raise my score (which was already very high), but must say I am very puzzled by some of the perceived "weaknesses" described in some other reviews. I will argue against them in the Reviewer-AC discussion if necessary.
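The heuristics-vs-search blurring discussed in this thread can be made concrete with a toy example: a 1-ply search over a basic evaluation is behaviorally identical to a "smarter" static heuristic that folds the search into the evaluation. The toy game and all names below are hypothetical illustrations, not anything from the paper:

```python
def one_ply_search(state, moves, apply_move, basic_eval):
    """Pick the move that maximizes the basic heuristic one ply ahead."""
    return max(moves(state), key=lambda m: basic_eval(apply_move(state, m)))

def folded_static_eval(state, moves, apply_move, basic_eval):
    """A 'smarter' static heuristic that absorbs the 1-ply search:
    it evaluates the current state as the best reachable successor value."""
    return max(basic_eval(apply_move(state, m)) for m in moves(state))

# A trivial "game": the state is an integer, a move adds its value.
moves = lambda s: [-1, 2, 3]
apply_move = lambda s, m: s + m
basic_eval = lambda s: -(s - 3) ** 2  # prefer states close to 3

print(one_ply_search(0, moves, apply_move, basic_eval))      # 3
print(folded_static_eval(0, moves, apply_move, basic_eval))  # 0
```

As the rebuttal notes, this folding trick stops scaling cleanly past one or two plies: a 3-ply folded heuristic must enumerate move sequences internally, at which point it is arguably doing look-ahead.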
Rebuttal 1: Rebuttal: Thank you for your detailed reviews and suggestions! We’re encouraged that you found the results *“interesting and inspiring”* (abVe), thought the paper gave a *“convincing answer to an important question”* (SpQ5), and said it has *“great experimental work, with convincing evidence, important controls, and multiple investigations aimed at the same question”* (ScDd). **Value head experiments:** In response to feedback from reviewers abVe and ScDd, we’ve added versions of all our experiments that use Leela’s value head instead of its policy head to measure the effect of interventions. The results are similar to the policy head results in the paper (likely because most of the network is a shared body). We’ve attached the new results as a PDF; the interpretation of all figures is exactly the same as for the corresponding figures in the paper. The only difference is that we use the log odds of the win probability rather than the log odds assigned to the best move. Note that the probing results (section 3.3) are not specific to the value or policy head, since they operate entirely on the shared body. We’re also making a few minor changes for clarity based on feedback from reviewers. We mention these in response to individual reviews, along with answering questions and clarifying a few misconceptions. Pdf: /pdf/959dc8e593be6b3d878e1032c104e16595d4490c.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: The paper tries to discover and analyze evidence of learned look-ahead in a chess-playing network. The authors run experiments on a filtered chess puzzle dataset and the policy network of Leela Chess Zero. The claims are mainly analyzed from three perspectives: activation patching, attention head analysis, and probing experiments. Strengths: 1. The finding of the paper is interesting and inspiring. I like the idea of getting inside the network to understand how the policy network or transformer works to predict the optimal action. 2. The paper is easy to follow and well-written. 3. Experiments from different perspectives help verify the conclusion. Weaknesses: 1. The analysis seems strongly tied to the specific setting: chess and transformers -- where each grid square can be considered as a token. This can somehow limit the generality of the analysis method. 2. The main weakness comes from the experiments. Since this is mainly a paper for discovering and analyzing a phenomenon, with no newly proposed algorithm and no demonstration of how this finding can help anything, the standard for a NeurIPS paper requires it to have comprehensive experiments to validate the solidness of the finding. I do like this new finding, but the experiments are not enough to make it solid -- for instance, here are several experiments I think are necessary for the authors to present: 2.1. Present how this phenomenon happens without explicitly designing the test/evaluation dataset. The authors do a lot of work in section 2.2 on filtering the testing samples to make it more likely to have this phenomenon -- and they use this to claim that the policy network has this phenomenon. This is quite weird since you also somehow optimize the evaluation dataset to help you find the claimed phenomenon.
If this is not a general finding or conclusion, it makes the argument pretty weak and it may only work on some human created settings -- an extreme case is that I can manually design some test cases that can definitely present this ability but it lacks generality. I agree with the argument that not all boards present this phenomenon -- but at least the authors need to present more, such as in which conditions this phenomenon is likely to happen and in which it will not, and how/why this can happen. Is there any key issue that prevents or supports the emergence of this phenomenon? Also the authors filter out all puzzles that Leela Chess may fail. This is really weird and can largely weaken the conclusion since if you have already tested it to solve the puzzle, it is quite normal that the representation of policy network can have something closely related with future optimal action (in the activation patching experiments or the probing one). 2.2 Is this conclusion generalizable to other transformer-based chess policies? For example, in "Grandmaster-Level Chess Without Search" they also have a policy trained by supervised learning, with no RL or MCTS as Leela Chess uses. Will this setting affect the emergence of this behavior? 2.3 Does only the policy network have this phenomenon, or can the same thing happen in the critic network? For example, you can test it on the Leela Chess/Stockfish evaluation function. A lot of ablation studies on modules or hyperparameters are also needed to make the conclusion solid. So again, solid and comprehensive experiments and ablation studies are necessary for such an analysis/discovery paper, while this paper doesn't offer enough evidence. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. There are several papers about transformer/Chess/AlphaZero that are also related work to this paper, including: (1) Zahavy, Tom, et al. "Diversifying ai: Towards creative chess with alphazero." arXiv preprint arXiv:2308.09175 (2023). (2) Feng, Xidong, et al.
"Chessgpt: Bridging policy learning and language modeling." Advances in Neural Information Processing Systems 36 (2024). (3) Noever, David, Matt Ciolino, and Josh Kalin. "The chess transformer: Mastering play using generative language models." arXiv preprint arXiv:2008.04057 (2020). (4) Stöckl, Andreas. "Watching a language model learning chess." Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021). 2021. 2. Can the authors explain more about the activation patching? Is it something like: you replace one neuron in one layer with the neuron at the same position from the forward pass of the corrupted board? Then I am wondering how you choose this corrupted board. Also, in Figure 3, why the 1st/3rd but not the 2nd action? 3. I'd also suggest the authors include more illustrative examples/figures in the main paper or appendix to help readers (and those not familiar with chess) understand the phenomenon. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough review and suggestions! **Policy vs value network (2.3):** Great suggestion, thank you! We have now run all our experiments on Leela’s value head and will include these in the paper. We used the log odds of the win probability as the effect size for patching experiments. (Probing experiments operate only on the shared network body so don’t differ between heads.) All our findings for the policy head still hold for the value head; see the PDF attached to our general response. The only new finding is that L14H3 seems very important to the value head (Fig. 2 in the attached PDF), but L12H12 is still just as important as it was for the policy head (log odds reduction of 0.49 when we ablate it). **On dataset filtering (2.1):** We appreciate that filtering our dataset can seem strange. However, we think this filtering process is essential to test our hypothesis and doesn’t negatively affect the validity of our results. We clear up potential misunderstandings below. > This is quite weird since you also somehow optimize the evaluation dataset to help you find the claimed phenomenon. Investigating our claimed phenomenon fundamentally requires creating a suitable evaluation dataset *somehow*. As discussed (e.g. line 80), our claim is only that Leela often uses look-ahead *in certain types of states*, namely tactically complex ones. Such hypotheses specific to a certain input distribution are omnipresent in mechanistic interpretability (e.g. [1, 2, 3, 4]). To test this hypothesis, we naturally evaluate it on those states for which we claim Leela uses look-ahead.
Our filtering process formalizes the vague notion of “tactically complex states.” > may only work on some human created settings -- an extreme case is that I can manually design some test cases that can definitely present this ability but it lacks generality Indeed, manually creating inputs would be very suspect, so we want to emphasize that we start with an existing dataset of inputs and then apply a simple automated filtering step. Importantly, we also do not use internal representations in any way during filtering. > Also the authors filter out all puzzles that Leela Chess may fail. This is really weird and can largely weaken the conclusion We focus on puzzles that Leela solves simply so we can apply our interpretability methods. All our methods check whether Leela represents a specific future line. In correctly solved puzzles, we can look for representations of the *correct* continuation. In puzzles that Leela fails to solve, this wouldn’t tell us as much: Leela may be representing an incorrect continuation (and hence fail the puzzle), but there are many incorrect lines (vs only one correct one). So we can’t test for look-ahead like we can for correctly solved puzzles. We briefly explain this in line 89 but realize that this deserves more space, so we’ll incorporate this explanation in the updated paper. As to whether this weakens the conclusion: we want to explain why Leela often solves puzzles correctly and don’t claim to explain why it sometimes fails. We’ve tried to be transparent about this (e.g. line 11 in the abstract) but will do another editing pass to make this clear. > if you have already tested it to solve the puzzle, it is quite normal that the representation of policy network can have something closely related with future optimal action Crucially, we don’t think that Leela solving a puzzle necessarily implies the claims we make about internally represented look-ahead. 
Drawing such mechanistic conclusions from behavioral evidence is dubious, which is why interpretability is needed in the first place. We agree that the model’s ability to solve difficult puzzles might *lead us to expect* that its representations would have something to do with future actions. Our paper's contribution lies in actually testing a more precise version of this hypothesis. **Testing other transformer-based chess-playing networks (2.2):** This would be an excellent future direction! Unfortunately, the model weights you mention were released only a bit over a month before the NeurIPS deadline. Transferring our experiments to new models would be a non-trivial effort, e.g., since we need to add instrumentation for our patching experiments. Regarding RL vs. supervised learning, note that the version of Leela we use was, in fact, trained using supervised learning (see line 422 in the appendix for details). The difference is just that it uses data from rollouts of an MCTS-trained Leela, rather than Stockfish evaluations. **Answers to questions:** 1. Thank you for the references; we will cite these as additional examples of transformer-based chess models. 2. That’s right, except we patch entire activation vectors on a square, rather than individual neurons. Our procedure for finding corrupted inputs is described in line 120 and appendix D. In brief, we randomly generate corruptions and then pick one that doesn’t change a weak model’s output, but does change Leela’s output a lot. In Fig. 3, we omit the 2nd move target since it is very often the same as the 1st move target (see line 93). Figure 9a demonstrates that the effect is mainly about the 1st and 3rd rather than the 2nd move, which is why we show the 1st instead of the 2nd move. 3. Are there specific aspects that you think are difficult to follow and would benefit from additional figures? That would help us a lot! [1] Wang et al, 2022.
Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 small [2] Hanna et al, 2023. How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model [3] Lieberum et al, 2023. Does Circuit Analysis Interpretability Scale? Evidence from Multiple Choice Capabilities in Chinchilla [4] Nanda et al, 2023. Fact Finding: Attempting to Reverse-Engineer Factual Recall on the Neuron Level --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal! I am satisfied with the additional explanations and results. But I am still concerned with the dataset filtering -- I appreciate the authors' transparency about this. But if this is a conclusion that only applies to 20k puzzle boards, it largely weakens my surprise at the conclusion, especially since the title seems to ignore this limited scope. I will keep my score for now -- but may increase it after my discussion with the other reviewers and ACs. Thanks for your engagement again. --- Reply to Comment 1.1.1: Comment: Thank you for your response! We appreciate you taking the time to read through our rebuttal. > if this is a conclusion that only applies to 20k puzzle boards We wanted to mention that the lichess puzzles come from real games played on the lichess website. These games were analyzed via Stockfish at 40 meganodes (which, according to the lichess team, "took more than 50 years of CPU time"), and we further filtered them down using a simple filtering method (discarding board positions that a smaller model can solve). Therefore, our dataset is a subset of board positions from real games, filtered by our description of hard positions that might require look-ahead to solve. Thank you for engaging with our discussion again; we really appreciate your responses!
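For readers unfamiliar with the technique, the activation-patching recipe described in this thread (swap the whole activation vector on one square with the vector from a corrupted board's forward pass, then measure the drop in log odds of the best move) can be sketched on a toy model. Everything below is a hypothetical stand-in, not Leela's actual architecture or API:

```python
# Toy activation-patching sketch (hypothetical model, not Leela).
import math
import random

random.seed(0)
SQUARES, DIM, MOVES = 4, 3, 2

def forward(acts, readout):
    """Toy policy head: pool per-square activation vectors, then
    softmax over candidate moves."""
    pooled = [sum(acts[s][d] for s in range(SQUARES)) for d in range(DIM)]
    logits = [sum(p * w for p, w in zip(pooled, row)) for row in readout]
    top = max(logits)
    exps = [math.exp(x - top) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def log_odds(p):
    return math.log(p / (1 - p))

readout = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(MOVES)]
clean = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(SQUARES)]
corrupted = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(SQUARES)]

probs = forward(clean, readout)
best = probs.index(max(probs))
baseline = log_odds(probs[best])

# Patch: swap in the corrupted activation *vector* on square 0 only
# (whole vectors per square, not individual neurons, as in the rebuttal).
patched_acts = [corrupted[0]] + clean[1:]
effect = baseline - log_odds(forward(patched_acts, readout)[best])
print(round(effect, 3))
```

In the real setup, the corrupted board is additionally screened so that a weak model's output stays unchanged while Leela's changes a lot; that screening step is omitted here.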
CV-VAE: A Compatible Video VAE for Latent Generative Video Models
Accept (poster)
Summary: The paper introduces CV-VAE, a 3D VAE that is trained through latent space regularization using an existing two-dimensional VAE decoder. This design facilitates seamless integration with 2D VAE-based diffusion models. The concept, while simple and straightforward, is notably efficacious and offers practical utility. Moreover, relative to conventional 2D VAEs, the proposed framework significantly compresses the latent space dimensionality by integrating temporal downsampling within the 3D VAE architecture. Strengths: 1. The method is novel and delicate, effectively compressing video latents while maintaining the latent distribution. 2. Experiments and visualizations demonstrate that CV-VAE performs on par with the SD VAE on image-based/frame-based encoding and falls marginally behind on video encoding, with little cost of fine-tuning SD models. Weaknesses: 1. As presented in Figure 6, CV-VAE suffers from more severe flickering problems than the original SVD, which may be due to the gap between the latent distributions of CV-VAE and SD-2.1. In the paper, the authors also pointed out that text-to-image generation may experience differences in color. The authors may convince readers that such flaws can be overcome through further fine-tuning, e.g., fine-tuning the full parameters of SVD, or fine-tuning for longer iterations. 2. Missing details about training: The authors didn’t present details about the training batch size, augmentations, video fps, etc. Also, whether the default CV-VAE displayed in the experiment section is trained on video and image data separately or jointly (as the VAE of Open-Sora) remains unknown. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Please refer to the weaknesses. 2. In Table 3 and Figure 6, the authors may also present the metrics of SVD on 97 frames directly or by using the frame interpolation model of SVD. 3. The authors may specify whether the decoder-based regularization is still employed when training with images as inputs for CV-VAE. 4.
The authors may also present a metrics comparison between SVD and SVD + CV-VAE on text-to-video generation. 5. Can the method be supported by theoretical proof? For example, can we minimize the distribution distance between the video latents of CV-VAE and those of the corresponding image latents encoded by the 2D VAE? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1: Flickering problems of CV-VAE** Our optimization loss is a trade-off between compatibility and better reconstruction quality. Therefore, there is still a domain gap between CV-VAE and diffusion models, which leads to color shifts or flickering problems. This gap can be alleviated by further fine-tuning. **W2: Missing details about training** Thank you for your comments. Actually, we provided the training details in the manuscript. (1) In the "Training Details" paragraph of Section 4.1, we introduced the training datasets, resolution, batch size, learning rate, and other augmentations. We also mentioned joint training with two image datasets of different resolutions (256x256, 512x512) and two video datasets of different resolutions (9x256x256, 17x192x192), with batch sizes of 8, 4, 1, and 1 for the four settings, respectively. (2) We will add some details about the video fps. During the training process, we randomly used a value between 1 and 4 as the frame stride for sampling. **Q2: Comparison between CV-VAE and the interpolation model** Thank you for your suggestion. As SVD only released versions for 14 and 25 frames, direct inference of 97 frames would cause the video to collapse. Therefore, we provide comparative results for the frame interpolation model in the attached file **(Table 3)**. We use RIFE [4] as the frame interpolation model for comparison, as it is popular and has 4.2K stars on GitHub. Our model outperforms RIFE in 2 out of 3 metrics, further validating the effectiveness of our method. **Q3: Decoder-based regularization during training with images** During the training process, we use both images and videos for joint training, so decoder-based regularization is also employed when taking images as inputs. **Q4: Comparison between SVD and SVD + CV-VAE on text-to-video generation** Thank you for your suggestion. Regrettably, SVD only released the image-to-video version.
As an alternative, we compared the performance of VideoCrafter2 and VideoCrafter2+CV-VAE on text-to-video in Figure 10. Additionally, we provide the quantitative metrics and extra visual results for text-to-video in the attached file **(Table 4, Figure 2)**, which further validates the compatibility of CV-VAE with various video generation models. **Q5: Can the method be supported by theoretical proof?** Currently, the effectiveness of this method is demonstrated through empirical evidence, and latent regularization can also be seen as a form of knowledge distillation [1], which is applied in different fields [2][3]. Providing a detailed theoretical proof is quite challenging and can be a direction for future exploration. [1] Distilling the knowledge in a neural network, arXiv 2015. [2] Adversarial diffusion distillation, arXiv 2023. [3] MiniLLM: Knowledge distillation of large language models, ICLR 2024. [4] Real-Time Intermediate Flow Estimation for Video Frame Interpolation, ECCV 2022. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you for your rebuttal; my concerns have been addressed. Accordingly, I have raised my rating to 6.
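The latent-regularization idea discussed in this thread (decode the 3D encoder's per-frame latents with a frozen pretrained 2D decoder, so the 3D latent space is pulled toward the 2D VAE's) can be illustrated with a linear toy model. All shapes and the linear "networks" below are assumptions for illustration, not the paper's architecture:

```python
# Linear toy sketch of frozen-2D-decoder latent regularization (not CV-VAE).
import numpy as np

rng = np.random.default_rng(0)
T, PIX, LAT = 32, 16, 4   # frames, flattened pixels per frame, latent dim

dec2d = rng.normal(size=(LAT, PIX))        # frozen pretrained 2D "decoder"
enc3d = 0.1 * rng.normal(size=(PIX, LAT))  # trainable 3D "encoder" (per frame)
video = rng.normal(size=(T, PIX))

def reg_loss(enc):
    """Encode each frame, decode with the *frozen* 2D decoder, and
    penalize the reconstruction error -- the regularization term."""
    latents = video @ enc
    recon = latents @ dec2d
    return float(np.mean((recon - video) ** 2))

# The regularizer pushes the encoder toward latents the frozen 2D decoder
# can decode; in this linear toy, its optimum is the decoder's pseudo-inverse.
aligned = np.linalg.pinv(dec2d)
print(reg_loss(enc3d) > reg_loss(aligned))  # True: aligned encoder reconstructs better
```

The design point this illustrates: because the decoder stays frozen, minimizing this loss cannot move the latent space away from the 2D VAE's, which is what makes the resulting video VAE compatible with existing diffusion models.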
Summary: This paper designs a 3D VAE consistent with the 2D VAE, whose output mode can losslessly switch between 2D and 3D while retaining the characteristics of both the 2D VAE and 3D VAE. This allows for obtaining performance similar to the original 2D VAE while increasing the frame count of the 3D output. Strengths: 1. Designing a 3D VAE with 2D-latent-consistent characteristics is very meaningful, and this paper successfully achieves this. 2. The bidirectional VAE alignment method designed in this paper is very clever. A similar design is found in [1], but the scenario used in this paper is completely different. 3. Achieving effective performance in both 2D and 3D simultaneously is of high practical value to the community, for example, supporting pre-training and frame interpolation in video diffusion models. 4. The writing is fluent, and the experimental design is excellent. [1] Christensen, Anders, et al. "Image-free classifier injection for zero-shot classification." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. Weaknesses: 1. Figure 10 attempts to demonstrate the effect of the 3D VAE on frame count enhancement. I understand that the conclusion of this figure is actually the most critical value proof of this paper: retaining 2D VAE latent consistency while having the embedding capability of 3D features, such as extending the frame count. This is quite challenging, as the 3D VAE might tend to completely collapse to perform similarly to the original 2D VAE, simply replicating the 2D results. The loss between replicated 2D frames is very small, making optimization difficult. While the distance of the horse's legs in Figure 10 might prove it's not direct replication, the other two cases make it hard to distinguish differences. Can the authors provide more evidence to explain that the model is not just learning simple replication? 2. In Table 1, when comparing, which pretrained 2D VAE is aligned with "Ours"?
Technical Quality: 4 Clarity: 4 Questions for Authors: 1. Would training the VAE with bf16 cause a performance collapse? 2. In Lines 218-219, how are the images of 256 × 256 and 512 × 512, as well as the videos of 9 × 256 × 256 and 17 × 192 × 192 organized during the training cycle? Do you train on images first, then videos, or start with low resolution and then move to high resolution? 3. What impact does training the VAE at low resolution or low frame rate have when performing SVD on high resolution and high frame rate? Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Author added a limitation section in paper and I think there is no obvious potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1: More examples to explain the model is not just learning simple replication** Thank you for your suggestion. **(1)** Due to the size limitation of the paper (50MB), we did not include the text-to-video results as videos in the pdf. The visual results were provided in the supplementary material. **(2)** We also provide more visual results of VideoCrafter2 + CV-VAE and quantitative metrics in the attached file **(Table 4, Figure 2)**. It can be observed that the motion between frames is smooth, and it is not simply a duplication. **W2: Which pretrained 2D VAE is aligned with "Ours" in Table 1?** Thank you for your comments. In Table 1, both of our models (2D+3D, 3D) are aligned and compatible with VAE-SD2.1. Our models are also aligned with many open-source diffusion models in the community that use VAE-SD2.1 as the auto-encoder. **Q1: Would training the VAE with bf16 cause a performance collapse?** Yes, training the model with either bf16 or fp16 would result in numerical collapse (NaN), which might be due to the instability caused by the GAN loss. Therefore, CV-VAE is trained with fp32. **Q2: Joint training between images and videos** We use four settings for joint training: 256 × 256 images, 512 × 512 images, 9 × 256 × 256 video clips, and 17 × 192 × 192 video clips. In each iteration, a batch from one of the four settings is generated with different batch sizes (8, 2, 1, 1), allowing samples of different resolutions to be trained simultaneously. **Q3: Performing SVD on high resolution and high frame rate** Since CV-VAE is composed of convolutional networks and we train it on different resolutions and frame rates, CV-VAE can adapt to higher resolutions and different frame rates with negligible performance degradation. --- Rebuttal Comment 1.1: Title: Thanks for rebuttal. Comment: I have no further questions, and I stand by my rating. Personally, I believe the quality and contribution of this paper generally meet the bar for NeurIPS.
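The joint-training schedule from Q2 above (each iteration draws a batch from one of four resolution/length settings) could be sketched as follows. The sampler and the uniform random choice are illustrative assumptions; the shapes and batch sizes are the ones quoted in Q2, with frames=1 denoting images:

```python
# Sketch of the per-iteration setting sampler described in Q2 (illustrative).
import random

random.seed(0)
# ((frames, height, width), batch size); frames=1 denotes images.
SETTINGS = [
    ((1, 256, 256), 8),
    ((1, 512, 512), 2),
    ((9, 256, 256), 1),
    ((17, 192, 192), 1),
]

def next_batch_spec():
    """Each iteration draws one of the four settings, so samples of
    different resolutions and clip lengths are mixed over the run."""
    return random.choice(SETTINGS)

for step in range(4):
    shape, batch_size = next_batch_spec()
    print(step, shape, batch_size)
```

In a real training loop, the returned spec would select which dataloader to pull `batch_size` samples from; here it just returns the spec.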
Summary: This paper proposes a new video VAE, starting from a pretrained image VAE. It includes several techniques that keep it capable of handling images and do not incur significant computational overhead. First, they use the pretrained weights of a 2D VAE (from Stable Diffusion) as initialization with model inflation. They also use an efficient 3D architecture that converts only half of the 2D convolutions into 3D convolutions to minimize the increase in computation, and apply temporal tiling to handle long videos. They also use a pretrained frozen image decoder to regularize the latent space from the 3D encoder/decoder to ensure the latent space becomes similar to the Stable Diffusion latent space. The paper shows the proposed video encoder can have a compact latent space with a temporal compression factor (of 4) that is very similar to the pretrained image latent space, which can be used for longer and smoother video generation by fine-tuning existing latent video diffusion models in this latent space. Strengths: - The paper is generally well-written and well-motivated. In particular, it tackles the important problem of constructing a compact latent space for videos that is similar to the image space. - The performance drop from fine-tuning does not seem that large and shows good potential for training a latent video generation model. - The paper shows a real use case of this VAE by fine-tuning Stable Video Diffusion. Weaknesses: - The paper lacks a comparison with recent video autoencoders. Specifically, the baselines that the paper provides are quite outdated (e.g., TATS and VQGAN), considering that there are many recent attempts to design a better video autoencoder, to name a few [1, 2, 3]. The authors should discuss what the pros and cons of this approach are compared with these approaches, and if possible, compare the performance as well.
In particular, [2] provides very similar intuition to this paper because they also try to construct a latent space capable of jointly handling images and videos. - Lack of novelty: The proposed method is mainly composed of components that have already been widely used to extend image models to videos. - The paper does not mention the total training time (even though it includes the number of GPUs and the total number of training iterations). [1] Video Probabilistic Diffusion Models in Projected Latent Space, CVPR 2023. [2] Language Model Beats Diffusion: Tokenizer is key to visual generation, ICLR 2024. [3] Efficient Video Diffusion Models via Content-Frame Motion-Latent Decomposition, ICLR 2024. Technical Quality: 3 Clarity: 2 Questions for Authors: - The paper only provides video generation results on image-to-video generation tasks; I wonder if one could naturally achieve a smoother video generation model by fine-tuning existing text-to-video generation models in this latent space (such as ModelScope). - Are you planning to release the code and the model parameters? Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The paper addresses the limitations appropriately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1: Comparison with recent video autoencoders** **(1)** There might be a misunderstanding. We have compared with the latest video VAE in Table 1: VAE-OSP, which was released in May 2024. **(2)** MAGVIT2 [2] and CMD [3] are SOTA methods, but they have not released their weights. Following the reviewer's suggestion to compare with more recent video VAEs, we add new quantitative and qualitative comparisons with Open-MAGVIT2 [4] (released in July 2024) and PVDM [1] in the attached file **(Table 2, Figure 1)**. Open-MAGVIT2 is an open-source project to replicate MAGVIT2. [1] Video Probabilistic Diffusion Models in Projected Latent Space, CVPR 2023. [2] Language Model Beats Diffusion: Tokenizer is key to visual generation, ICLR 2024. [3] Efficient Video Diffusion Models via Content-Frame Motion-Latent Decomposition, ICLR 2024. [4] Open-MAGVIT2: Democratizing Autoregressive Visual Generation, 2024. **W2: Lack of novelty** We would like to clarify that the major contribution of this work is the framework (latent regularization with a 2D decoder) for obtaining a video VAE from a pretrained image VAE that is compatible with existing image and video diffusion models, rather than a novel network component or loss. This compatibility saves the substantial additional effort of training diffusion models to adapt to the VAE and makes CV-VAE compatible with a wide range of community models. For example, Open-Sora-Plan, based on an incompatible video VAE, spent an additional ~7138 GPU hours to obtain the image diffusion model. **W3: Training Time** Training the CV-VAE took approximately 800 A100 GPU hours. We will clarify this in the final version. **Q1: Results of text-to-video diffusion model** Thanks for your concern. **(1)** We have provided the visual results of CV-VAE + VideoCrafter2 (text-to-video) in Figure 10 and the supplementary material. **(2)** We also provide the qualitative results and more visualizations in the attached file **(Table 4, Figure 2)**.
**Q2: Release of code and weights** We will make the code and weights publicly available to promote community development. Following the response rules, we have submitted a preview version of the open-source code to the Area Chair. --- Rebuttal Comment 1.1: Title: Response Comment: Thanks for the clarifications. Many of my concerns are now addressed. Please add this discussion, particularly the comparison with MAGVIT2, because it shares lots of similarities with this work. I have a few more questions: 1. What is the meaning of "Comp" in Table 1 in the rebuttal PDF file? 2. I checked the MAGVIT2 repository and it seems there are two versions available - the first one uses a 16x downsampling factor and the second an 8x downsampling factor for training and inference. Which model is used for the comparison? --- Rebuttal 2: Comment: Thank you for your comments. We clarify that CV-VAE and MAGVIT2 are completely different works: (1) MAGVIT2 represents images or videos as **discrete tokens**, which are used in **autoregressive models** to generate images or videos; our CV-VAE represents images or videos as **continuous latents**, which are used in **diffusion models** to generate images or videos. (2) Our main goal and contribution is to design a Video VAE that is compatible with 2D VAEs, while MAGVIT2 is not compatible with any other VAEs. Here are the responses: **R1:** The term "Comp." has the same meaning as in Table 1 of the main paper, indicating compatibility. This also represents the core contribution of our work, which is to design a Video VAE that is compatible with other 2D VAEs, Image Diffusion Models, and Video Diffusion Models. **R2:** We use the 8x downsampling version of MAGVIT2 for comparison, which matches the downsampling factor of our CV-VAE. --- Rebuttal 3: Title: Response Comment: Thanks for the further clarification - can the authors provide rFID and rFVD (reconstruction FID and FVD) of the proposed method and the other baselines on MSR-VTT?
It seems the MAGVIT2 results show surprisingly high LPIPS and low SSIM scores, considering that Open-MAGVIT2 shows quite good performance on ImageNet and the original MAGVIT2 paper shows surprisingly low LPIPS on UCF-101. --- Rebuttal 4: Comment: The comparison results of FID and FVD metrics on MSR-VTT are as follows. We use the first frame of MSR-VTT to calculate FID.

| Methods | Comp. | FID | FVD |
|:--------------:|:------------:|:------:|:-------:|
| VAE-SD2.1 | - | 1.31 | **7.42** |
| VQGAN | $\times$ | 7.56 | 23.59 |
| TATS | $\times$ | 8.27 | 19.36 |
| VAE-OSP | $\times$ | 1.28 | 9.81 |
| PVDM | $\times$ | 7.74 | 22.38 |
| Open-MAGVIT2 | $\times$ | 2.38 | 12.13 |
| Ours | $\checkmark$ | **1.26** | 8.55 |

Due to quantization error, discrete VAEs usually lose more information than continuous VAEs. For example, MAGVIT2 encodes an image with resolution of $256\times 256$ into **integers** ranging from 0 to 262144 with a size of **$32\times 32$**, while continuous VAEs (SD-VAE2.1, VAE-OSP, CV-VAE) encode an image with resolution of $256\times 256$ into **floating-point vectors** with a size of **$32\times32\times z$**, where $z$ denotes the number of latent channels and equals 4 for the continuous VAEs in Table 1. Therefore, it is not surprising that continuous VAEs can achieve better reconstruction results than discrete VAEs. Moreover, reconstruction quality is not our primary goal in designing CV-VAE, since the reconstruction quality can be easily improved by increasing the size of $z$ without changing the model structure and size [1][2]. For example, in Stable Diffusion 3 [2], the PSNR of the VAE with $z = 4$ is 25.12dB, while the PSNR of the VAE with $z=16$ is 28.62dB. Our primary goal is to design a compatible Video VAE based on the existing 2D VAE, so we must keep the size of $z$ the same as in the 2D VAE ($z=4$ for VAE-SD2.1). [1] Emu: Enhancing Image Generation Models Using Photogenic Needles in a Haystack, arXiv 2023.
[2] Scaling Rectified Flow Transformers for High-Resolution Image Synthesis, ICML 2024. --- Rebuttal Comment 4.1: Title: Response Comment: Thanks for the further clarification. I increased my rating from 4 to 5 accordingly.
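The latent-size arithmetic in Rebuttal 4 above can be checked directly; a short sketch (the 32-bit float width per latent entry is our assumption, not stated in the rebuttal):

```python
import math

# Discrete tokenizer (MAGVIT2, figures from the rebuttal): a 256x256 image
# becomes a 32x32 grid of integer tokens from a codebook of size 262144 = 2**18.
codebook_size = 262_144
discrete_bits = 32 * 32 * math.log2(codebook_size)   # 18 bits per token

# Continuous VAE (SD-VAE2.1 / VAE-OSP / CV-VAE, per the rebuttal): the same
# image becomes a 32x32xz float latent with z = 4 channels; assume float32 storage.
z = 4
continuous_bits = 32 * 32 * z * 32

print(discrete_bits)                      # 18432.0
print(continuous_bits)                    # 131072
print(continuous_bits / discrete_bits)    # ~7.1x capacity gap
```

The roughly sevenfold capacity gap is consistent with the rebuttal's point that quantization, rather than architecture, explains why continuous VAEs reconstruct better at equal spatial resolution.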
Summary: This paper focuses on a compact video VAE suitable for both image and video generation tasks. The motivation stems from the absence of spatial-temporally compressed continuous-latent 3D VAE models for video generation. To effectively utilize a 2D VAE and seamlessly integrate image generation models, the authors propose a latent space regularization and initialization method to maximize the use of the 2D VAE. A 3D CNN-based architecture is also proposed to compress both temporal and spatial dimensions efficiently. The evaluation of the proposed method is primarily based on reconstruction tasks and image/video generation tasks. Strengths: The motivation is clear: current video generation methods may suffer from the lack of a high-quality, high-efficiency spatial-temporal VAE that can be used with existing T2I models, which would also make them more general and effective. The decoder loss seems to be a simple and novel design, which helps the model learn spatial latents efficiently and align with the original 2D VAE. Initializing the 3D VAE with 2D weights seems to provide some insights in practice. The qualitative results are good. Weaknesses: 1. Improvements over the baseline are presented in Table 1, where the authors compare the main reconstruction results with different open-sourced VAE methods. However, I don’t think it is a fair comparison to claim superior performance. The training data is a key factor in this table. The authors use in-domain data, such as Webvid, for both training and evaluation, whereas other methods do not. Therefore, it is unclear if the proposed method can truly outperform other methods when trained on the same data. 2. Many generation models have been released based on the SD VAE. Since compatibility means no further fine-tuning is required, it is crucial to evaluate more models to strengthen the claim. Evaluating only one model in Table 2/3 is not convincing.
Technical Quality: 3 Clarity: 3 Questions for Authors: The term "mapping function" seems to require clarification, as it appears to be more of a simple sampling process from video frames. In Table 7, a higher CLIP score is better. Why does random sampling work best? Providing more exploration and explanation can help readers understand this better. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes, the author discussed the limitations and potential social impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1: Fairness of training and evaluation data** Thank you for your comments on the fairness. **(1)** Unified training data. Since the VAE models in Table 1 are trained on different datasets and some of them did not release their training and dataset details, such as VAE-OSP, it is difficult to train all models on the same dataset. **(2)** Unified testing data. We appreciate you pointing out that evaluating on the in-domain validation set is unfair. Therefore, we conduct additional evaluations of all VAE models on the publicly available MSR-VTT validation set that is not mentioned as the training set of those models. The evaluation results are presented in the attached file **(Table 1)**. Our method performs better than other video VAE models. **W2: Evaluation on different diffusion models** **(1)** We would like to first clarify that we have performed compatibility validation on multiple diffusion models, including quantitative evaluations on SD2.1 (text-to-image), SVD (image-to-video) (Table 1, 2), and visual evaluations on SD2.1, SVD, and Videocrafter2 (text-to-video) (Figure 5, 6, 10). **(2)** To further validate our compatibility with text-to-video models, we have additionally conducted quantitative evaluations on Videocrafter2+CV-VAE and also provided visual results in the attached file **(Table 4, Figure 2)**. **Q1: Clarification of "mapping function"** Thank you for your comments. We will revise "mapping function" to "sampling function" to clarify the expression. We observed that the "1st Frame" results in poorer reconstruction of subsequent frames due to the lack of constraints on other frames, "Average" leads to more motion blur in the reconstructed frames, and "Slice" causes frames without 2D Decoder constraints to be more prone to artifacts. "Random" is proposed based on "Slice" to cover the content in the whole sequence and uses the randomly sampled frames instead of the average frame in a slice to avoid motion blur. 
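As a purely hypothetical sketch of how the four sampling functions described in Q1 might look (the paper's actual implementation may differ; `frames` has shape `(T, H, W, C)` and `t_out` is the number of temporally compressed latent frames):

```python
import numpy as np

def sample_targets(frames, t_out, mode, rng=None):
    """Pick target frames for the 2D-decoder constraint (hypothetical sketch).

    frames: array of shape (T, H, W, C); t_out: number of latent frames.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    T = frames.shape[0]
    slices = np.array_split(np.arange(T), t_out)   # contiguous temporal slices
    if mode == "1st":       # constrain only the first frame
        return frames[:1]
    if mode == "average":   # mean frame per slice -> motion blur in targets
        return np.stack([frames[s].mean(axis=0) for s in slices])
    if mode == "slice":     # deterministic pick: first frame of each slice
        return frames[[s[0] for s in slices]]
    if mode == "random":    # one random frame per slice: whole-sequence coverage
        return frames[[rng.choice(s) for s in slices]]
    raise ValueError(f"unknown mode: {mode}")
```

Under this reading, "random" keeps one sharp frame per temporal slice, covering the whole sequence without the motion blur introduced by slice-averaging.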
--- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: The additional results addressed my concerns.
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful analysis and feedback. We are glad the reviewers find that:
* Our proposed research question is valuable:
  * "Designing a 3D VAE with 2D latent consistent characteristics is very meaningful, and this paper successfully achieves this" - Reviewer U7Gm.
  * "Achieving effective performance in both 2D and 3D simultaneously is of high practical value to the community" - Reviewer U7Gm.
  * "the paper tackles an important problem of constructing a compact latent space of videos that is similar to image space." - Reviewer tAbR.
* Our solutions are novel and effective, and our experiments are well-conducted:
  * "The bidirectional VAE alignment method designed in this paper is very clever." - Reviewer U7Gm.
  * "The method is novel and delicate, which effectively compresses video latents and maintains latent distribution." - Reviewer LvuQ.
  * "The decoder loss seems a simple and novel design, The qualitative results are good." - Reviewer xmxT.
* Our paper is well-written:
  * "The writing is fluent" - Reviewer U7Gm.
  * "The paper is generally well-written" - Reviewer tAbR.

Attached you can find a file containing new experiments suggested by the reviewers:
* Reconstruction comparison results of different VAEs on a public out-of-domain dataset.
* Quantitative and visual comparison results of CV-VAE with some recent methods.
* Quantitative and visual results of CV-VAE in text-to-video generation models.
* Comparison results between CV-VAE and an interpolation model.

Pdf: /pdf/66a1642ac2d8de867fea72d777f00ae9bfb47ac9.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Globally Convergent Variational Inference
Accept (poster)
Summary: The paper studies an alternative objective for variational inference, the expected forward KL divergence. Under some technical assumptions, convexity is shown, which facilitates global optimization. Moreover, a tractable surrogate objective is presented and it is shown that the approximation error can be made arbitrarily small. Finally, an experimental evaluation suggests that global convergence may even occur when the technical assumptions are violated. Strengths: The paper addresses a very interesting problem and the methodology is clever. The technical development seems very careful. Weaknesses: The paper is very technical and dense. It would be helpful to provide more intuition. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. How restrictive are the assumptions? 2. Can you explain the intuition more how your approach achieves convexity? 3. I understand you presented an example where the expected forward KL outperforms the standard ELBO. How does optimization of these two objectives compare more generally in practice? I imagine there are situations where optimizing the ELBO still yields better approximations to the actual posterior. Do you have any insights into these more practical aspects? Confidence: 1 Soundness: 3 Presentation: 2 Contribution: 4 Limitations: Technical limitations are stated but it would be helpful to discuss more prominently how restrictive the assumptions are. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your review. Based on your comments, we will add some clarifying sections to motivate the key intuition further. We aim to address your main points below. > *How restrictive are the assumptions?* We believe our assumptions are mild enough to apply to many practical settings, and are generally less restrictive than those found in related work. Several regularity conditions such as compactness of the data space $\mathcal{X}$, continuity of $\sigma$ and boundedness of $\sigma'$, and positivity of the limiting kernel $K_\infty$ are generally standard prerequisites for proving NTK-type results. Restricting the setting to a particular neural network architecture and initialization is also a standard approach to the problem. Our work stands out from previous works in the generality of the setting: we allow for network outputs of arbitrary dimension, and consider an objective function that is an expectation over an ``infinite dataset'', i.e. $L_F(f) = \mathbb{E}_{P(X)} \ell(X, f(X))$. Although we consider the forward KL integrand for $\ell$, our results actually apply to any $\ell$ that is convex in the network outputs $f(X)$. To our knowledge, existing NTK-based analyses all restrict themselves in some way, only considering i) scalar-valued outputs, ii) mean-squared error loss, or iii) an objective function over a finite training dataset. > *Can you explain the intuition more how your approach achieves convexity?* The main intuition for our work is that we can make use of convexity by conducting the analysis of the forward KL optimization problem in a function space. In parameter space, there is no hope of utilizing convexity arguments -- neural networks may have millions of parameters, and the objective functions used to fit them are highly non-convex in these parameters.
When neural network functions are viewed simply as points in a more general Hilbert space of functions, though, the forward KL objective $L_F$ is in fact convex in its function argument -- this is the result of Corollary 1. The remainder of the analysis utilizes NTK-based analysis to show that the gradient dynamics in parameter space essentially mirror those in function space, i.e. optimization behaves as if we follow a convex objective to its global optimum. > *I imagine there are situations where optimizing the ELBO still yields better approximations to the actual posterior. Do you have any insights into these more practical aspects?* This is a good point, and we have updated the discussion section accordingly to discuss in more detail settings where using the ELBO may be more useful for practitioners. Choosing to optimize the ELBO can be advantageous in several situations: if the likelihood function of the model is (approximately) convex in the main region of interest, optimization should behave well, and in certain situations the ELBO can even be updated with non-stochastic gradients. In the case where the generative model is unknown and needs to be learned simultaneously with the variational approximation, the ELBO can be used as an objective to fit both the model and the variational distribution, which is appealing in practice. Finally, the ELBO can be optimized in a non-amortized fashion -- this may be preferable for simplicity, but this approach is still known to struggle with the same issue of converging to local optima. While these particular situations do arise, minimization of the forward KL can be applied in essentially any setting, and maintains its convexity guarantees even when the model is arbitrarily complex -- this approach can even be used in some settings where ELBO-based training is impossible. 
One large class of settings where the ELBO is unusable is likelihood-free inference, where the generative model typically consists of a highly complex simulator that does not admit a tractable likelihood function. --- Rebuttal 2: Comment: I thank the authors for their response. I do not have further questions.
Summary: This work addresses the common problem of non-convexity when approximating posterior distributions using variational inference. Although using the ELBO as a variational objective is popular, this paper considers a forward KL (FKL) divergence objective. Its first main contribution is to show that when the variational family belongs to the exponential family of distributions, the FKL objective is strictly convex in the variational parameters. In particular, the paper parameterizes the exponential variational family by a neural network. Their second contribution is to show that under certain conditions on the neural architecture, the solution of the FKL objective is only $\epsilon$-suboptimal to the global functional solution of the FKL objective. The paper also demonstrates the efficacy of the proposed method with the help of some experiments. Strengths: 1. Using the expected forward KL with the exponential family is simple and interesting when dealing with the ELBO's non-convexity. 2. The idea of using an expectation with P(X) also seems interesting, as it simplifies the efforts of sampling from the posterior while using the FKL objective. Is this idea novel? If yes, the authors should signpost it. 3. The connection between the NTK and the gradient dynamics of the functional objective is also interesting and novel. 4. The paper is well written. Weaknesses: 1. I think the authors underestimate their findings in Lemma 1, i.e., the convexity of the variational objective when the forward KL is used with the exponential family. Can they demonstrate its efficacy on a toy problem that doesn’t use any neural network? 2. It is slightly misleading to say that the FKL objective with the NTK finds global optima. Since the NN is highly non-convex, the result only shows that the local solution is close to the global one, and that too only in the limit of NN width. 3. I don't understand how Lemma 1 shows that L_F is strictly convex. I believe it just proves convexity.
Technical Quality: 3 Clarity: 3 Questions for Authors: Minor comments: L45: L_p should be L_P. L771: What does the $\eta$ over the equality sign mean? L843: "a,\in" should be removed. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. Below, we attempt to address your main points. > *The idea of using an expectation with P(X) also seems interesting...is this idea novel?* Although it's not novel (e.g. Section 2.1), we do think it is underappreciated. Our main contribution lies in the analysis of this objective function. Previous motivations for using the expected forward KL divergence come from either 1) a setting where the likelihood function is not available or 2) the practitioner's desire for conservative uncertainty quantification, a property of minimizers of this objective. Our analysis suggests a much stronger motivation for using the expected forward KL objective: it behaves like a convex objective and thus yields unique solutions, regardless of random seeds, initializations, etc. The shortcomings of numerical optimization are a major obstacle to the use of VI over related methods such as MCMC in practice, and our work resolves many of these concerns. Beyond popularizing this choice of objective function as an alternative to the ELBO within the VI community, we hope that our results convince other Bayesians (e.g. practitioners of MCMC) of the validity of VI. We help resolve a major concern that one may end up with a local solution of unknown suboptimality. > *Can [the authors] demonstrate [Lemma 1]'s efficacy on a toy problem that doesn't use any neural network?* Lemma 1 in itself is interesting, but not practical. It illustrates the atypical paradigm of having a function that is well behaved (convex), but **not computable**. Neither $\ell(x, \eta)$ nor $\nabla_\eta \ell(x, \eta)$ are computable, or even unbiasedly estimable -- except in certain rare cases where the posterior itself is tractable, which of course makes VI unnecessary in the first place.
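For concreteness, here is a sketch of the computation behind Lemma 1 in our notation (writing the exponential family as $q(\theta; \eta) = h(\theta)\exp\big(\eta^\top T(\theta) - A(\eta)\big)$, which is our parameterization, not necessarily the paper's):

```latex
\ell(x, \eta)
  := \mathrm{KL}\!\big( p(\theta \mid x) \,\|\, q(\theta; \eta) \big)
   = A(\eta) \;-\; \eta^\top \, \mathbb{E}_{p(\theta \mid x)}\big[ T(\theta) \big] \;+\; \mathrm{const},
```

The log-partition function $A$ is always convex and the middle term is linear in $\eta$, so $\ell(x, \cdot)$ is convex for every $x$ (and hence so is any expectation of it over $x$); yet the posterior moment $\mathbb{E}_{p(\theta \mid x)}[T(\theta)]$ is exactly the intractable quantity discussed in the surrounding text.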
On a non-amortized problem where one has a single $x$ and wishes to minimize $\ell(x, \eta)$ in $\eta$, the convexity result becomes an afterthought -- optimization of this objective cannot proceed because we cannot run SGD (we do not have access to unbiased gradients). The corollary implying that the amortized problem $L_F$ is convex is the foundation for the rest of our work. It implies the existence of a global minimum, and the remainder of our analysis shows the gradient dynamics of $L_P$ can converge arbitrarily close to this global optimum. A distinguishing feature of $L_P(\phi)$ is that we can estimate its gradients unbiasedly (e.g., as described in Appendix B). > *It is slightly misleading to say that the FKL objective with NTK finds global optima...the result only shows that the local solution is close to the global one.* Although asymptotic results such as ours will indeed only hold approximately in practice, we contend that many meaningful theoretical results take this form. Even some that might be regarded as standard results, such as the convergence of SGD with diminishing step size, are asymptotic results that are only approximated in practice with finite time and compute. Within the NTK literature in particular, asymptotic results of the nature of ours (i.e., in the limit of an arbitrarily wide network) are difficult to avoid due to the intractability of finite-width analysis. Although our results are asymptotic, we have made them as strong as possible (e.g., almost sure convergence), and have shown in our experiments that the asymptotic behavior (i.e., optimization trajectories similar to those in convex optimization) can actually be obtained in practice. You are correct that one will ultimately obtain a local solution in the parameter space. We show, though, that the "local solution" can be made arbitrarily close to the global minimizer $f^*$ by increasing the network width $p$ -- this allows one to obtain the ``de facto'' global minimizer. 
The ability to alter the width provides the user with a large amount of control over behavior of the optimization problem. Without our analysis, understanding the results of the optimization problem seems intractable: as you correctly point out, in parameter space the objective function is highly nonconvex, and *the degree of suboptimality* of the local solution is totally unknown for any given run of the optimization routine. Our result, on the other hand, bounds the degree of suboptimality, and more importantly shows that the user can shrink this bound by increasing the network width. We can always get $\epsilon$-close to the global optimum for any $\epsilon$. We show this result extends to practice for even modest network widths, and $\epsilon$ appears to be small enough on real problems that one i) converges to the same minimizer regardless of initialization and ii) this minimizer is practically indistinguishable from the true global minimum $f^*$ in the RKHS. > *I don't understand how Lemma 1 shows that $L_F$ is strictly convex. I believe it just proves convexity.* Thank you for catching this; we have added a mild condition to Corollary 1 under which strict convexity holds. Previously, any two functions $f,g$ such that $f \neq g$ but $f = g$ almost surely would violate strict convexity because $L_F(f) = L_F(g)$ even though $f \neq g$. We have added the condition that the domain of $L_F$ is a RKHS $\mathcal{H}$ with respect to the measure $P(X)$. Under this assumption, two functions $f,g$ equal to each other almost surely on $P(X)$ are in fact regarded as the same element in the RKHS (they correspond to the same equivalence class of functions). With this condition, strict convexity holds as any $f \neq g$ in the RKHS differ on a set of nonzero measure, allowing the strict convexity inequality to extend to the integral. > *L771: What does $\eta$ over equality mean?* This notation meant "equality up to constants that do not depend on $\eta$", i.e. 
the right-hand side has the same gradient with respect to $\eta$ as the left-hand side. We have updated the notation to make this more clear and now instead use a generic constant $c$ on the right-hand side. --- Rebuttal Comment 1.1: Comment: I thank the authors for satisfactorily responding to my queries.
Summary: The authors established the global convergence of a particular VI method, which is based on the forward KL and a variational family parameterized by a neural network. The analysis techniques are extended from the widely studied NTK analysis of two-layer neural networks. The authors also conducted experiments to verify the theoretical results. Strengths: The paper provided a rigorous global convergence guarantee of amortized VI based on the forward KL with an exponential variational family and the NTK. Although the results were asymptotic in the width of the neural network, various numerical experiments were conducted to show that the theoretical results seem to hold with finite width. Weaknesses: The major concern is about novelty. What is the difference between the analysis in this paper and existing analyses of two-layer neural networks in the supervised learning setting? Since the authors assumed that the variational family is an exponential family, the optimization objective turns out to be a convex function of the output of a two-layer neural network, which has been widely studied. Although this is new in the literature of VI, the techniques used are not novel and can be easily extended from the supervised learning setting to VI based on the forward KL. Technical Quality: 3 Clarity: 3 Questions for Authors: The setting for variational inference in this paper assumes that data can be simulated during training (or with known latent variables), which is a bit different from standard VI where the data is fixed. How useful would it be for more general VI problems where the forward KL seems to be challenging to use? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The limitations are pointed out in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for the review. Below, we try to answer your main questions and concerns. > *What is the difference between the analysis in this paper and existing analyses of two-layer neural networks in the supervised learning setting...which has been widely studied?* Firstly, let us highlight that applying the NTK to the variational inference setting is novel and a contribution in its own right. Amortization (and the use of neural networks at all) is a relatively recent advancement in variational inference, and even today the main motivation for its use is utility or cost-saving. An analysis of this type is novel in this setting and could have a significant impact towards adoption of amortization in VI. We have shown amortization is more than just a way to save compute, but actually has tangible benefits for optimization. Purely with respect to the NTK literature, our analysis is still novel and innovative in the following ways:
- We allow for network outputs $\eta := f(x; \phi)$ of arbitrary dimension ($\eta \in \mathbb{R}^q$).
- We study a general, convex loss function $\ell$.
- We minimize the population loss $\mathbb{E}_{P(X)} \ell(X; f(X;\phi))$.

Extending to the general setting above was necessary -- in the current literature, existing results could not be applied to analyze the expected forward KL objective. Our generalizations contrast with the restrictive assumptions that are often featured in existing NTK analyses, which are often specifically focused on a particular setting consisting of i) a scalar-valued network and ii) mean-squared error loss. Additionally, virtually all works consider iii) empirical risk minimization, i.e., minimization of $\sum_{i=1}^n \ell(f(x_i;\phi), y_i)$ for finite training data, rather than for an infinite population, as we require. To solve this problem, we have innovated on existing NTK results in the restrictive settings described above.
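In our paraphrase (not the paper's exact statement), the population-level gradient flow being analyzed takes the form

```latex
\partial_t f_t(x)
  \;=\; -\, \mathbb{E}_{X' \sim P(X)} \Big[ K_p(x, X') \, \nabla_{\eta}\, \ell\big(X', f_t(X')\big) \Big],
\qquad K_p \;\longrightarrow\; K_\infty \quad (p \to \infty),
```

where $K_p$ is matrix-valued since the outputs $\eta = f(x;\phi)$ lie in $\mathbb{R}^q$, and the expectation over $P(X)$ replaces the finite sum over training points found in standard NTK analyses.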
One of our results that may be of particular interest to the NTK community is that of *uniform* convergence of the NTK to its limit, i.e. Proposition 2 and Proposition 3 in the appendices (l. 875 and 932, respectively). Previous works relied on pointwise convergence to analyze an empirical loss, but to analyze the population quantity $\mathbb{E}_{P(X)} \ell(X; f(X;\phi))$ we required uniform convergence of kernels and proved these results. Within our proofs, several techniques may also be of interest, in particular the utilization of generalized Gronwall inequalities (e.g., line 820) to bound differences over an interval $[0,T]$. Lemmas 4 and 5 (lines 900 and 921), proved using this approach, are (minor) standalone results that may aid additional work. Finally, we want to emphasize that although we write primarily for a Bayesian audience and restrict our analysis to the expected forward KL objective, our results apply to more general convex loss functions and may be extended to the analysis of any objective that satisfies the convexity conditions, including those in settings beyond variational inference. > *The setting for variational inference in this paper assumes that data can be simulated during training...* We emphasize that the *expected* forward KL objective that we optimize is not challenging to use; ease of implementation of this method for practically any setting is one of the strengths of this approach. A forward KL divergence without the expectation over $P(X)$, on the other hand, is infeasible to optimize -- neither the objective nor its gradient are unbiasedly estimable, as estimating either quantity requires samples from the exact posterior, which cannot be obtained. We touched on this point briefly (line 104), but have now added exposition to contrast the differences between the forward KL and the expected forward KL for clarity. Our analysis of the *amortized* objective in this work may encourage more widespread use of the forward KL.
The amortized objective resolves the point above: expectations over $P(X)$ and $P(\Theta \mid X)$ can be estimated unbiasedly as a single expectation over $P(\Theta, X)$ by using the ``trick'' of combining these as in Appendix B, and using ancestral sampling. The assumption that we can simulate draws from $P(\Theta, X)$ is not restrictive; in fact, this assumption is strictly looser than assuming that the likelihood function $p(\theta, x)$ is readily available, a key assumption for any ELBO-based analysis. Modern probabilistic computing packages such as PyTorch and Pyro ensure that sampling from arbitrary distributions is straightforward via ancestral sampling of $\Theta \sim P(\Theta)$, $X \sim P(X \mid \Theta)$, etc. Expected forward KL minimization generalizes to standard VI settings with a single observable $x_{\textrm{true}}$ as well, with minimal additional overhead -- we discussed such settings briefly in our submission (e.g. lines 68, 99, and 354), but as also requested by other reviewers, we have added more substantial discussion on practical aspects to Section 6. A counterintuitive implication of our work is that simulation-based minimization of the expected forward KL may be useful even for non-amortized VI problems such as the one described above. Because simulation is often computationally inexpensive, and network training rapidly converges to a global minimizer, one can obtain a unique variational approximation $q(\theta; f(x_{\textrm{true}}, \phi))$. ELBO-based training, on the other hand, might yield vastly different posterior approximations to $p(\theta \mid x_{\textrm{true}})$ for different initializations or random seeds. Therefore, our work could simplify VI for practitioners. However, ELBO-based optimization still has many merits.
We have added discussion of additional practical considerations to our discussion section, including cases where ELBO-based training may be preferable (e.g., for fitting model parameters alongside the variational approximation; if one wants to obtain mode-seeking approximations, etc.).
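To make the simulation-based recipe in this rebuttal concrete, here is a minimal sketch on a toy conjugate-Gaussian model (the model, the linear amortization network, and all names are our own illustration, not the paper's): prior $\theta \sim N(0,1)$, likelihood $x \mid \theta \sim N(\theta, 1)$, and amortized family $q(\theta; f(x;\phi)) = N(wx + b, e^{2s})$ with $\phi = (w, b, s)$, for which the Monte Carlo gradient of the expected forward KL is available in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
w, b, s = 0.0, 0.0, 0.0        # phi = (w, b, s): mean weights and log-std
lr, batch = 0.05, 256

for _ in range(3000):
    # Ancestral sampling from the joint P(Theta, X).
    theta = rng.standard_normal(batch)         # theta ~ N(0, 1)
    x = theta + rng.standard_normal(batch)     # x | theta ~ N(theta, 1)
    # Unbiased gradient of L_P(phi) = E[-log q(theta; f(x; phi))],
    # written in closed form for the Gaussian family.
    mu, var = w * x + b, np.exp(2 * s)
    d_mu = -(theta - mu) / var
    w -= lr * np.mean(d_mu * x)
    b -= lr * np.mean(d_mu)
    s -= lr * np.mean(1.0 - (theta - mu) ** 2 / var)

# The exact posterior is N(x/2, 1/2), so training should recover
# w ~ 0.5, b ~ 0, exp(2s) ~ 0.5, regardless of initialization or seed.
```

Rerunning with different seeds or initial $\phi$ converges to the same solution, which is the uniqueness behavior the rebuttal describes.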
Summary: The authors study convergence of forward-KL variational inference in the neural posterior estimation (NPE) setting with an exponential family variational distribution, whose natural parameters are produced by a neural network. For this setting, it is known that the forward-KL is convex in the natural parameters of the variational distribution. The authors show how this extends to the NPE setting, i.e. the expected forward-KL, by linearity of the outer expectation, and establish the existence of a global minimizer for the functional NPE objective. The authors further show that the limiting neural tangent kernel (NTK) of the network can be used to construct a reproducing kernel Hilbert space in which optimization via kernel gradient flow converges to the unique minimizer. Finally, the authors show how this finding can be extended to the parametric setting by noting that, for certain network architectures, the parametric NTK tends to the limiting NTK as the width of the network tends to infinity. Interestingly, the authors demonstrate empirically that in practice infinite width might not be required to converge to a region close to the global minimizer. Strengths: The results are novel and theoretically interesting to the variational inference community. I found the paper, given its theoretical nature, well written and relatively easy to follow. Weaknesses: The result stated in Lemma 1 is well known (unless I am missing something); please consider citing a textbook or relevant review paper for the fact that the forward-KL divergence is convex in the natural parameters of the variational distribution in the exponential family case. The experiments are not directly compared to ELBO/IWBO-based optimization; hence, it is unclear how big the actual gap in performance is. In settings where the true posterior is a member of the variational family, the minimizer of the forward-KL divergence also minimizes the reverse-KL and hence maximizes the ELBO, and vice versa. 
A plot that reports the symmetric-KL over training for both the ELBO and FKL objective would be insightful. In the end, if FKL-optimization indeed finds the minimizer (or close to it), the learned variational distribution should outperform ELBO/IWAE-optimization no matter which divergence (or corresponding bound) is used for comparison. --- Edit --- The authors addressed my concerns by adding appropriate references and providing additional results that demonstrate the utility of their approach. Consequently, I have raised my score to an Accept (7). Technical Quality: 3 Clarity: 3 Questions for Authors: - Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed review. We will incorporate your main suggestions as outlined below. > *The result stated in Lemma 1 is well known...please consider citing a textbook or relevant review paper* Although we agree that Lemma 1 follows easily from the standard properties of the exponential family, we have not been able to find the convexity of the forward KL in the natural parameter stated as such in any relevant paper, despite extensive searching. (We would welcome an exact citation if you are aware of one.) We suspect the result has not been derived before because the non-amortized objective and its gradient are not generally computable, rendering the convexity of the objective irrelevant. Similarly, the amortized objective is non-convex in the neural network parameters that are optimized, so thoughts of convexity have not been considered in-depth until now. Although the result is simple, it is fundamental to our analysis of the amortized problem. The best we can do with what we have found in a literature search is to add a reference to Proposition 2 of Wainwright \& Jordan's *Graphical Models, Exponential Families, and Variational Inference*, which proves that the log-partition function of an exponential family is convex in the natural parameter. Our lemma follows fairly easily from this proposition. > *...it is unclear how big the actual gap in performance is...a plot that reports the symmetric-KL over training for both the ELBO and FKL objective would be insightful.* This is a great suggestion; in the global response, we have provided an analysis of the quality of the solution found by minimizing the expected forward KL via several different objective measures. 
By all measures, we find that the minimizer of the expected forward KL outperforms the local optimum found by minimizing the negative ELBO -- the minimizer of the expected forward KL even has a lower reverse KL divergence to the exact posterior, despite not directly optimizing this quantity. --- Rebuttal Comment 1.1: Title: Re: Rebuttal by Authors Comment: Thank you for your efforts in addressing my concerns! - After spending about 10 minutes searching for references, I indeed found it surprisingly difficult to find more specific ones. However, I believe Wainwright & Jordan is a solid reference. - I also think the additional results are excellent for completing the overall picture and make a compelling case for why the well-posedness of the underlying optimization problem might matter, even though it is often overlooked in common practice. Overall, after considering all reviews and corresponding rebuttals, I’ve decided to raise my score to an Accept (7). I believe this is a technically solid paper that presents novel theoretical results, rigorously extending existing NTK results and applying them to the VI setting. This makes it of interest to both the variational inference community and the broader NeurIPS audience!
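For reference, the exponential-family computation behind the convexity fact discussed in this thread can be sketched as follows (generic notation, not the paper's exact statement):

```latex
% Exponential-family variational distribution with natural parameter \eta:
%   q(\theta; \eta) = h(\theta) \exp\{\eta^\top T(\theta) - A(\eta)\}.
\begin{align*}
\mathrm{KL}\left(p \,\middle\|\, q_{\eta}\right)
  &= \mathbb{E}_{p}\left[\log p(\theta)\right]
   - \mathbb{E}_{p}\left[\log q(\theta; \eta)\right] \\
  &= \underbrace{\mathbb{E}_{p}\left[\log \tfrac{p(\theta)}{h(\theta)}\right]}_{\text{constant in } \eta}
   \;-\; \eta^\top \mathbb{E}_{p}\left[T(\theta)\right]
   \;+\; A(\eta).
\end{align*}
% The middle term is linear in \eta, and the log-partition function A(\eta)
% is convex (the Wainwright & Jordan result cited above), so the forward KL
% is convex in \eta; taking an outer expectation over x preserves convexity,
% which gives the amortized (expected forward KL) case.
```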
Rebuttal 1: Rebuttal: We thank all the reviewers for their time and helpful comments. We are encouraged by your largely positive feedback and appreciate your thoughts on areas of improvement. We have responded to each of your points individually in the review-specific rebuttals. In this global rebuttal, we wish to emphasize the significance of our contributions with respect to two research areas: variational inference and NTK analysis. We also provide additional experimental results (as suggested by Dszp, but likely of interest to all reviewers). **A breakthrough in variational inference.** In the VI community the use of posterior approximations parameterized by neural networks (i.e., amortization) is still a relatively recent phenomenon and is by no means ubiquitous. Our results are a major step towards expanding the use of amortization in variational inference. We show that amortization has benefits beyond reducing computational costs; in fact, targeting the amortized objective can actually be desirable because it admits a unique solution. Through our analysis, we resolve a significant complication for VI that previously seemed impenetrable: that of convergence only to a local optimum of the objective. The objective we consider in this work is convex, and we show that its gradient leads to the global optimum. Our results may help expand the use of VI in general. In practice, MCMC still remains more widespread than VI, in part due to potentially unreliable VI optimization -- our work is a significant step towards resolving this concern. **A technically sophisticated adaptation of the NTK.** We emphasize that our results are not a mere application of existing NTK analyses to a different objective function. Existing NTK-based analyses (e.g., [1]-[5]) are restricted to specific settings that exclude variational inference. 
These restrictions typically include one or more of:
- a scalar-valued network output
- a mean-squared error loss
- a finite training set

These assumptions are simplifying and commonly used in practice for general machine learning, but do not apply in the settings commonly seen in variational inference, which may have i) parameters of arbitrary dimension, ii) diverse loss functions, and iii) population-type objective functions (i.e., an expectation over a continuous distribution). Bridging the gap between existing literature and analysis tailored to the VI objective we consider in this work required generalizing beyond the setting outlined above. To do so, we proved results for arbitrary loss functions and introduced new machinery (e.g., uniform rather than pointwise convergence of the NTK) necessary for establishing convergence over a continuous distribution of inputs.

[1] Generalization ability of wide residual networks, Lai et al.
[2] Gradient descent provably optimizes over-parameterized networks, Du et al.
[3] Linearized two-layer neural networks in high dimension, Ghorbani et al.
[4] Loss landscapes and optimization in over-parameterized non-linear systems and neural networks, Liu et al.
[5] On exact computation with an infinitely wide neural net, Arora et al.

**New experimental results.** As suggested by Dszp, we add a small case study that quantifies the *quality* of the variational approximation found by expected forward KL minimization -- in other words, how well it approximates the true posterior. We use a rotated MNIST example, with angle $\theta \sim \mathrm{Unif}[0, 2\pi]$, and for all $i = 1, \dots, 50$ we have $x_i \mid \theta \sim \mathcal{N}(\mathrm{Rotate}(\mu_i, \theta), \sigma^2)$, where $\mu_i$ is a fixed, synthetic MNIST digit, and $\mathrm{Rotate}(\cdot)$ applies a rotation of $\theta$ radians. The variational distribution is taken to be Gaussian with fixed $\sigma = 0.5$. 
We aim to measure the quality of the solutions found by ELBO-based optimization and expected forward KL minimization; because our approach tends toward the global optimum of the expected forward KL objective, we expect this solution to outperform local optima of other objectives by any metric used to measure the quality of the approximation. In the attached pdf, we plot several quantities across fitting, where fitting either minimizes the negative of the ELBO or minimizes the expected forward KL. The negative ELBO objective was fit to a fixed dataset $x$ of $50$ images drawn from the model above with a common rotation angle of 260 degrees. We denote these images as $x_{\textrm{true}}$, and the true rotation angle $\theta_{\textrm{true}}$. The variational approximation is denoted as $q(\theta \mid x_{\textrm{true}})$. For the negative ELBO objective, the estimated angle converges to about 90 degrees, while expected forward KL minimization (as expected) is centered near the correct angle (Figure 1). This translates to a better held-out negative log-likelihood on the true latent angle value, as expected (Figure 2). We also display both the forward (Figure 3) and reverse (Figure 4) KL divergences across training. These quantities are difficult to estimate exactly -- we compute the forward KL using importance sampling with the prior $p(\theta)$ as the proposal and $K=1000$ importance samples, and the reverse KL is approximated using the ELBO plus an estimate of the log evidence. Perhaps surprisingly, expected forward KL minimization outperforms optimizing the negative ELBO *even with respect to the negative ELBO objective function*. In other words, the variational distribution fit to minimize the expected forward KL turns out to have a lower (better) reverse KL value than the distribution fit to minimize the reverse KL. This arises from the intuition that a global optimum of a different objective may be preferable to a local optimum of one's original target. 
This is the motivation behind our manuscript: the global minimum of the expected forward KL is closer to the exact posterior than a local optimum of the ELBO. Pdf: /pdf/0f7786ebe19e6c4c1fb29720bf00a579cdfa8328.pdf
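The importance-sampling estimate of the forward KL described in this rebuttal can be sketched as follows. This is a 1-D Gaussian stand-in for the rotated-image model (prior $\mathcal{N}(0,1)$, likelihood $\mathcal{N}(\theta,1)$); all names and the specific model are illustrative, not the authors'.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_norm_pdf(z, mu, sigma):
    return -0.5 * ((z - mu) / sigma) ** 2 - np.log(sigma) - 0.5 * np.log(2 * np.pi)

# Toy stand-in model: theta ~ N(0, 1), x | theta ~ N(theta, 1), one observed x.
x_true = 0.8
K = 1000  # number of importance samples, as in the rebuttal

# Importance sampling with the prior p(theta) as the proposal:
theta_k = rng.normal(0.0, 1.0, size=K)            # theta_k ~ p(theta)
log_w = log_norm_pdf(x_true, theta_k, 1.0)        # unnormalized log-weights p(x_true | theta_k)
log_evidence = np.log(np.mean(np.exp(log_w)))     # estimate of log p(x_true)
w_bar = np.exp(log_w) / np.exp(log_w).sum()       # self-normalized weights

def forward_kl_estimate(q_mu, q_sigma):
    """IS estimate of KL(p(theta | x_true) || q) for a Gaussian q."""
    log_post = log_norm_pdf(theta_k, 0.0, 1.0) + log_w - log_evidence
    log_q = log_norm_pdf(theta_k, q_mu, q_sigma)
    return np.sum(w_bar * (log_post - log_q))

# Exact posterior here is N(x_true / 2, sqrt(1/2)); its forward KL should be ~0,
# while a badly mislocated q should score much worse.
kl_exact_q = forward_kl_estimate(x_true / 2, np.sqrt(0.5))
kl_bad_q = forward_kl_estimate(-2.0, np.sqrt(0.5))
```

The reverse-KL approximation mentioned in the rebuttal (ELBO plus an estimate of the log evidence) would reuse the same `log_evidence` term.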
NeurIPS_2024_submissions_huggingface
2024
Conformalized Multiple Testing after Data-dependent Selection
Accept (poster)
Summary: This paper addresses the problem of conformalized multiple testing following data-dependent selection procedures. To manage the distorted distribution resulting from the selection process, the authors propose adapting the calibration set according to the selection rule. Under the assumption that the selection procedure is stable or weakly stable, the authors prove that the guarantee of the Benjamini-Hochberg procedure is still maintained. Experiments on synthetic and real data have shown the effectiveness and efficiency of the method. Strengths: 1. Multiple testing after data-dependent selection in the predictive setting is an important problem. 2. The definition of selection stability and weak stability is formalized and generalized to more selection conditions. The authors provide extensive theoretical proof for the theorem. 3. Adequate experiments demonstrate the effectiveness and efficiency of the proposed method. Weaknesses: 1. The methodological novelty is limited: the idea of adapting the calibration set with the selection strategy to retain exchangeability is not very novel. Although dealing with conformal prediction, [1] also constructs the reference set based on the selection rule. The investigated selection methods are also similar. Overall, the main contribution of this method may be the rigorous proof (which, due to the length, I was unable to verify in every detail within the time constraints). 2. Minor errors: a. Line 28, $D_u = \{Z_i\}$ should be $D_u = \{X_i\}$. b. Line 153, 'label set' should be 'labeled set'. [1] Ying Jin and Zhimei Ren. Confidence on the Focal: Conformal Prediction with Selection Conditional Coverage. Technical Quality: 3 Clarity: 2 Questions for Authors: May I ask if the idea of adapting the calibration set is similar to constructing the reference set in [1]? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors have adequately addressed the limitations of the i.i.d. 
setting and noted that the stability assumptions of the selection procedures are relatively limited. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
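The pipeline summarized in this review -- conformal p-values from a calibration set, then the Benjamini-Hochberg procedure -- can be sketched generically as follows. The scores and data are synthetic, and this shows the standard marginal construction with BH (the `m` parameter is where the paper's $|\hat{\mathcal{S}}_u|$ would enter), not the authors' exact selective construction.

```python
import numpy as np

def conformal_p_values(cal_scores, test_scores):
    """Standard conformal p-values: for each test score, the (smoothed) rank
    among calibration scores drawn under the null."""
    cal = np.asarray(cal_scores)
    return np.array([(1 + np.sum(cal >= t)) / (1 + len(cal)) for t in test_scores])

def benjamini_hochberg(p, alpha, m=None):
    """BH step-up procedure at level alpha. m defaults to len(p); the paper's
    procedure would instead run BH on selective p-values with m = |S_hat_u|."""
    p = np.asarray(p)
    if m is None:
        m = len(p)
    order = np.argsort(p)
    thresh = alpha * np.arange(1, len(p) + 1) / m
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    return np.sort(order[:k])  # indices of rejected hypotheses

rng = np.random.default_rng(2)
cal_scores = rng.normal(0, 1, 500)  # null calibration scores
# 40 null test units plus 10 signal units with shifted scores:
test_scores = np.concatenate([rng.normal(0, 1, 40), rng.normal(3, 1, 10)])
p_vals = conformal_p_values(cal_scores, test_scores)
rejected = benjamini_hochberg(p_vals, alpha=0.2)
```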
Rebuttal 1: Rebuttal: > Weakness 1: The methodological novelty is limited: the idea of adapting the calibration set with the selection strategy to retain exchangeability is not very novel. Although dealing with conformal prediction, [1] also constructs the reference set based on the selection rule. The investigated selection methods are also similar. Overall, the main contribution of this method may be the rigorous proof (which, due to the length, I was unable to verify in every detail within the time constraints). **To W1**: Thanks for your insightful comments on the methodological novelty of our approach. We agree that recent research has explored selective p-values from various perspectives. However, our work differs significantly from these studies, and we would like to provide some clarifications. - The references you mentioned have made valuable contributions by studying the properties of selective p-values, with a focus on individual p-values. However, **our paper addresses multiple testing**, which requires studying the **interactions among all selective p-values rather than focusing on a single p-value**. This distinguishes our work from previous works in the literature. Specifically, in our selective setting, the p-values are intricately correlated and the selection is also data-dependent. This calls the validity of the conventional BH procedure into question and requires rigorous verification. - To address the difficulty arising from data-dependent selection, our main technical contribution lies in **developing a unified analytical framework**. This framework builds upon the conditional calibration framework [2], but it is not trivial due to the challenges imposed by selection in FDR control. **The number of test units $|\hat{\mathcal{S}}_u|$ is a random variable** that can be intricately dependent on the p-values. Even though we can replace $|\hat{\mathcal{S}}_u|$ with $m$ for valid FDR control, it would be too loose and lead to reduced power. 
- To fix the selective randomness in analyzing FDR, we **utilize the stability property of the selection rule**. Our stability condition helps mitigate the randomness in the test number, which is not addressed in a single test, and the dependence between the selective p-values and the final rejection set can also be decoupled through stability. We hope that these interpretations can ease your doubts. If you have any further questions, please feel free to ask us. [1] Jin, Ying, and Zhimei Ren. Confidence on the focal: Conformal prediction with selection-conditional coverage. arXiv, 2024. [2] William Fithian and Lihua Lei. Conditional calibration for false discovery rate control under dependence. AOS, 2022. > Weakness 2: minor errors **To W2**: Apologies for the oversight, and thank you for pointing out these errors. We will correct them in the revision. > Questions: Is the idea of adapting the calibration set similar to constructing the reference set in [1]? **To Q**: Your question is very thoughtful and we would like to discuss it with you in depth. - Indeed, our adaptive strategy has the same intention as the "swapped" strategy, that is, to maintain the exchangeability between the selected calibration set and the test unit. From a theoretical standpoint, the selective p-values constructed by both strategies are valid p-values given the selection event. - **However, the core objectives of our adaptive strategy and the swapped rule in [1] and [2] are fundamentally different.** Ours focuses on conducting multiple testing over these p-values and offering theoretical guarantees. In contrast, the p-values generated by the swapped rule are utilized to construct prediction intervals with selection-conditional coverage, a property specific to individual cases and not reliant on interactions between these p-values. 
- **The motivation of our adaptive strategy is directly related to weak stability**, where the selection rule satisfies $\mathbf{S}_{\mathcal{D}_c,\mathcal{D}_u}(X_i)=\mathbf{S}_{\mathcal{D}_c\cup\{Z_j\},\mathcal{D}_u\backslash\{Z_j\}}(X_i)$ for any $j\in\hat{\mathcal{S}}_u$ and $i\in\mathcal{U}$. Thus, the constructed p-values are based on the selection rule $\mathbf{S}_{\mathcal{D}_c\cup\{Z_j\},\mathcal{D}_u\backslash\{Z_j\}}$, and this is different from the swapping selection rule $\mathbf{S}_{\mathcal{D}_c\backslash\{Z_i\}\cup\{Z_j\},\mathcal{D}_u\backslash\{Z_j\}\cup\{Z_i\}}$. - Moreover, **the swapping selection rules are designed to satisfy a general range of selection scenarios rather than selection rules of specific properties**. For a weakly stable selection rule $\mathbf{S}$, employing the swapping selection rule $\mathbf{S}_{\mathcal{D}_c\backslash\{Z_i\}\cup\{Z_j\},\mathcal{D}_u\backslash\{Z_j\}\cup\{Z_i\}}$ can yield a selection result $\hat{S}_u^{-j}$ that differs from $\hat{S}_u$, unlike our method. This difference complicates the substitution of $\hat{S}_u^{-j}$ with $\hat{S}_u$ to mitigate the randomness of the test number. - **From an empirical standpoint, our strategy is more computationally efficient**, since for each $j\in\hat{\mathcal{S}}_u$ we only need to compute the selection rule once, while the swapping rule requires $|\mathcal{C}_0|$ computations. [1] Jin, Ying, and Zhimei Ren. Confidence on the focal: Conformal prediction with selection-conditional coverage. arXiv, 2024. [2] Bao, Yajie, et al. CAS: A General Algorithm for Online Selective Conformal Prediction with FCR Control. arXiv, 2024. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for addressing my questions. I have raised my score. --- Reply to Comment 1.1.1: Comment: Thank you so much for taking the time to review our work and raising your score. If there's anything else you would like to discuss or inquire about, please don't hesitate to reach out. 
Once again, we appreciate your support.
Summary: This paper proposes a method for multiple testing in the conformal setting that outputs the largest possible rejection set with FDR control contained within a data-dependent selection. The proposed method involves two steps: (1) constructing selective conformal p-values (i.e., p-values solely use points in the calibration set that are chosen by the selection rule), (2) applying the Benjamini-Hochberg (BH) procedure to the selective conformal p-values, but using a total hypothesis size of the selected set $|\widehat{\mathcal{S}}_u|$. The method ensures FDR control only when the selection rule satisfies a form of "stability", which the authors provide several definitions for. Further, they provide several examples of practical selection rules which satisfy the stability condition (e.g., top-K, quantile), which depend either solely on the covariates in the test set $\mathcal{D}_u$, or also on the covariates of the calibration set $\mathcal{D}_c$. Strengths: The paper provides an interesting approach to rejecting a discovery set that is a data-dependent subset of hypotheses. It's a nice combination of the techniques from [1] and [4] to solve a new selection multiple testing problem, and is similar to [3]. Weaknesses: I think the main weaknesses of the paper is that it lacks comparison to two key baselines/prior art. 1) Self-consistent/compliant adjustment: Using the marginal p-values in (1), one can directly achieve FDR control under any data-dependent selection simply by taking the largest self-consistent rejection set, i.e., the largest subset $\mathcal{R}$ s.t. $p_i \leq \alpha' |\mathcal{R}| / K$ for each $i \in \mathcal{R} \subseteq \mathcal{S}$, where $\alpha'$ is the largest value that satisfies $\pi_0 \alpha' (1 + \log(1 / (\pi_0\alpha'))) \leq \alpha$ --- here $\pi_0 = |\mathcal{C}_0| / |\mathcal{C}|$ is the null proportion. This is a direct consequence of Theorem 3 of [5] and the PRDS property of conformal p-values from [2] referred to in your paper. 
Note that this is *not* the same as your AMT baselines --- those work with the selective conformal p-values (which I presume are less powerful than the marginal conformal p-value in (1)). I think understanding the performance of your procedure to this procedure would be key to seeing how the tradeoff between the gain in power from using a smaller BH threshold vs. loss in power from using a selective conformal p-value in your method compares to a method that purely uses marginal p-values. Post-rebuttal update: the authors ran these experiments and their method performs well. I have changed my score to an accept. 2) InfoSCOP [3]: Although InfoSCOP is cited in the paper, it is not compared against --- the FCR guarantee of InfoSCOP directly implies FDR control on a data-dependent selection set simply by making its "informative prediction set" be informative against the null hypothesis being tested (the types of sets it is informative against are precisely the types of null hypotheses you are interested in testing against). In this vein, I think a more detailed comparison of your method and the InfoSCOP method is needed (i.e., what type of selection rules are allowed for each method, and how the power differs, etc.) for this paper to be comprehensive. I think the paper should include comprehensive comparisons to these rather significant baselines/prior art to merit publication. Technical Quality: 3 Clarity: 4 Questions for Authors: The assumption of having access to a calibration set and still trying to perform selective inference in a conformal setup is a bit strange. One can directly estimate a population level threshold for selecting units based (with false positive error control) on the calibration set directly, since every unit (including the test units) is drawn from the same population --- this is what allows the conformal inference to succeed in this setup. 
This is quite different from the outlier detection application introduced in [2], where there are units/hypotheses where the distribution is not the same as the calibration setup, and hence one cannot estimate a population level threshold, since there is no singular notion of population. Could you elaborate on why the conformal setup makes sense here, instead of trying to directly estimate a rejection cutoff for $T(X_i)$ so the false positive rate is controlled at the population level? - [1] Yajie Bao, Yuyang Huo, Haojie Ren, and Changliang Zou. Selective conformal inference with false coverage-statement rate control. Biometrika, 2024. - [2] Stephen Bates, Emmanuel Candès, Lihua Lei, Yaniv Romano, and Matteo Sesia. Testing for outliers with conformal p-values. The Annals of Statistics, 2023. - [3] Ulysse Gazin, Ruth Heller, Ariane Marandon, and Etienne Roquain. Selecting informative conformal prediction sets with false coverage rate control. arXiv:2403.12295, 2024. - [4] Ying Jin and Emmanuel J Candès. Selection by prediction with conformal p-values. JMLR, 2023. - [5] Weijie Su. The FDR-Linking Theorem. arXiv:1812.08965, 2018 Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The limitations are well addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
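The FDR-linking adjustment invoked in this review can be checked numerically as follows. This is a generic admissibility check of the stated condition $\pi_0 \alpha' (1 + \log(1/(\pi_0\alpha'))) \leq \alpha$; the parameter values are illustrative, and `fdr_linking_ok` is a hypothetical helper name.

```python
import math

def fdr_linking_ok(alpha_prime, alpha, pi0):
    """Check the condition pi0 * a' * (1 + log(1 / (pi0 * a'))) <= alpha
    from Theorem 3 of the FDR-linking theorem, as invoked above."""
    u = pi0 * alpha_prime
    return u * (1.0 + math.log(1.0 / u)) <= alpha

# Illustrative values: null proportion pi0 = 0.7, target level alpha = 0.1.
ok_small = fdr_linking_ok(0.025, alpha=0.1, pi0=0.7)  # admissible choice of alpha'
ok_large = fdr_linking_ok(0.05, alpha=0.1, pi0=0.7)   # too large: condition fails
```

Since $u(1 + \log(1/u))$ is increasing in $u$ on $(0, 1)$, the largest admissible $\alpha'$ could also be found by bisecting on this check.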
Rebuttal 1: Rebuttal: > Weakness 1: Comparison with self-consistent adjustment Thank you for the valuable suggestion. We have incorporated theoretical and empirical comparisons with your proposed method. These discussions will be added in a future version of our work. - From the theoretical point of view, we have observed that the power loss associated with utilizing a selective conformal p-value is usually less than that incurred by the FDR-Linking method. To illustrate this, assume $\pi_0=0.7$ and $\alpha=0.1$ as in the simulation setting of quantile selection; then we derive $\alpha'\approx0.025$. The AMT method adjusts the marginal p-value after selection by multiplying by the selection proportion $\hat{\theta}=1/0.7$. This is equivalent to employing the BH procedure on the marginal p-value with $\alpha=0.07$, which evidently yields greater power than SCA. Additionally, AMT does not make full use of the information from the selection procedure. In contrast, our proposed method uses a smaller p-value than AMT, which suggests a greater power increase. - In terms of empirical performance, as demonstrated in both cases from our paper, the SCA method suffers from a power loss, confirming our theoretical analysis.

| | | QUAN | | MEAN | |
| --- | --- | --- | --- | --- | --- |
| | | FDR | Power | FDR | Power |
| Case A | SCA | 3.59 | 88.7 | 2.85 | 84.7 |
| | SPCV | 9.83 | **93.9** | 9.90 | **93.9** |
| | AMT | 6.23 | 92.1 | 8.28 | 92.6 |
| Case B | SCA | 3.79 | 75.7 | 3.38 | 66.4 |
| | SPCV | 9.82 | **84.9** | 9.79 | **81.1** |
| | AMT | 8.71 | 77.0 | 5.81 | 79.5 |

> Weakness 2: Comparison with InfoSCOP Thank you for the constructive comments, and we apologize for overlooking this important reference. As you correctly point out, InfoSCOP involves a procedure for FDR control after selection via applying the BH procedure to selective conformal p-values, which aligns closely with the fundamental approach of our work. 
Notably, the FDR control guarantee in InfoSCOP requires the selection rule to satisfy a specific assumption, which can be transformed into the joint-exchangeable condition in our context. For strongly stable selections, our method can be simplified and degenerates into a form similar to InfoSCOP. We would like to clarify the **following key differences** between our method and InfoSCOP: - Firstly, we provide the FDR control guarantee for general selection rules with strong stability, which is **beyond the joint-exchangeable selection**. For example, the assumption in InfoSCOP is not satisfied by the quantile selection rule based solely on test data. Thus, their theoretical results are not applicable in such cases, while our framework bridges this theoretical gap. - Secondly, **our approach covers a wider range of selection rules**. For instance, when dealing with weakly stable rules, we employ conditional calibration on adaptive p-values to ensure rigorous FDR guarantees. The table below compares the performance of our approach with InfoSCOP under the mean selection rule. InfoSCOP shows reasonable empirical performance, similar to ours. Therefore, it is possible that InfoSCOP may still work under mean selection, making it an interesting topic for theoretical investigation, which remains unexplored in InfoSCOP. In contrast, we provide an FDR control guarantee under a variety of selection scenarios.

| | Case A | | Case B | |
| --- | --- | --- | --- | --- |
| | FDR | Power | FDR | Power |
| InfoSCOP | 9.85 | 94.0 | 9.80 | 78.4 |
| Ours | 9.86 | 93.4 | 9.80 | 78.1 |

- Lastly, our approach and InfoSCOP are **designed for different goals, resulting in different analytical frameworks**. Ours is specifically designed to address the multiple testing problem across various selection rules. From the perspective of conditional calibration, our method is unified, where **the BH procedure for strongly stable selection can be seen as a special case**. 
As a comparison, InfoSCOP is an excellent work for selecting an informative set with FCR control, but it is **not primarily designed for multiple testing after data-dependent selection**. Their FDR guarantee is an extension of FCR control, which limits their method's applicability to different selection rules. Based on your suggestions, we will include a comprehensive comparison of our method with InfoSCOP in a future version. We hope the above interpretation can ease your doubts. > Question: Could you elaborate on why the conformal setup makes sense here? Thank you for the insightful question. We would like to make some clarifications. - Indeed, **our method is versatile and not limited to conformal setups.** For instance, in scenarios where only null-labeled data is available, such as in outlier detection, our approach can still generate selective conformal p-values and perform the appropriate procedures to control the FDR. In such cases, we assume that the distribution of the calibration data is identical to the test data because the joint-exchangeable selection rule is expected to be applicable in this scenario. However, this is not necessary for quantile or mean selection rules, which are based on test data only. - Your comment on population-level selection is indeed insightful and valuable, particularly for scenarios where the goal is to directly select a subset from the original data. However, our framework offers broader applicability beyond this. In many cases, **we are only concerned about the specific selected subgroup, and the FDR over this set should be controlled**. For example, in brain scan experiments, researchers hope to find brain locations for a specific signal with FDR control. Given that there are several encephalic regions, each should be treated separately [1]. In such cases, controlling FDR within these subsets is of concern, and a global cutoff may have no theoretical guarantee. 
[1] Efron, B., Simultaneous Inference: When Should Hypothesis Testing Problems Be Combined? AOAS, 2008. --- Rebuttal Comment 1.1: Comment: Thank you for running experiments comparing the methods and thoroughly explaining the differences with existing work. --- I have updated my score to accept. As a final comment, I think it would be helpful to describe the relationship of your conditional calibration approach with the application of boosting with conditional calibration in Section 6 [1] as applied to conformal multiple testing --- can your procedure be seen as a generalization or specific case of their derandomization (or even randomized) approach to getting FDR control? [1] J. Lee and Z. Ren. Boosting e-BH via conditional calibration. arXiv, 2024. --- Reply to Comment 1.1.1: Comment: > Comment: Describe the relationship of your conditional calibration approach with the application of boosting with conditional calibration in Section 6 [1] as applied to conformal multiple testing. Can your procedure be seen as a generalization or specific case of their derandomization (or even randomized) approach to getting FDR control? **To C**: Thank you once again for updating the score and providing your insightful comments. - Indeed, our procedure can be viewed as a generalization of their approach [1] to the selective scenario. The conditional calibration approach with random pruning is equivalent to the e-BH procedure applied to $\{e_j/\epsilon_j:j\in\hat{\mathcal{S}}_u\}$, where $e_j=\frac{|\hat{\mathcal{S}}_u|\,\mathbb{I}\big(p_j\leq\frac{\alpha\hat{R}_j(\mathbf{p})}{|\hat{\mathcal{S}}_u|}\big)}{\alpha\hat{R}_j(\mathbf{p})}$ and $\epsilon_j$ are independent standard uniform random variables. Additionally, our approach with deterministic pruning is equivalent to the e-BH procedure applied to $\{e_j:j\in\hat{\mathcal{S}}_u\}$. - Under our stability assumption, we can confirm that $e_j$ is a valid e-value in a manner similar to Lemma E.2 in our paper. 
However, exploring this property for general selection rules remains an open problem, as $\hat{\mathcal{S}}_u$ is a random variable correlated with $p_j$. - With this equivalence property, the boosting method [1] can be directly applied to our deterministic pruning approach by constructing new boosted e-values to enhance power. This boosting method enhances the power of e-BH without sacrificing its FDR control or introducing additional randomness. This excellent work can significantly improve the reproducibility of results from our conditional calibration approach. We will add this discussion to the article in a future version. [1] J. Lee and Z. Ren. Boosting e-BH via conditional calibration. arXiv, 2024.
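For concreteness, the generic e-BH procedure invoked in the equivalence above can be sketched as follows — a minimal NumPy illustration of the standard procedure, not the authors' implementation; the example e-values are hypothetical:

```python
import numpy as np

def ebh(e_values, alpha):
    """e-BH: with m e-values, reject the k hypotheses with the largest
    e-values, where k is the largest index such that the k-th largest
    e-value is at least m / (alpha * k)."""
    e = np.asarray(e_values, dtype=float)
    m = len(e)
    order = np.argsort(-e)                 # indices by decreasing e-value
    ks = np.arange(1, m + 1)
    ok = e[order] >= m / (alpha * ks)
    if not ok.any():
        return np.array([], dtype=int)     # nothing rejected
    k_hat = ks[ok].max()
    return np.sort(order[:k_hat])

# Hypothetical e-values: large values are strong evidence against the null.
print(ebh([50.0, 1.0, 0.5, 30.0], alpha=0.1))  # rejects indices 0 and 3
```

The random-pruning variant discussed above would simply call `ebh` on the e-values divided elementwise by independent Uniform(0, 1) draws.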
Summary: This paper considers the problem of sample selection among a pre-specified group. The authors formulate this problem as a multiple testing problem and develop a procedure based on conformal inference, in which special treatment is adopted to find a specific calibration set that is exchangeable with the selected test unit. The proposed method achieves FDR control when the pre-selection rule has some strong exchangeability property. Strengths: 1. The paper indeed considers an interesting problem, and the formulation makes sense. 2. The paper is well-presented and easy to follow. Weaknesses: **Technical contribution.** As introduced in the paper, the technical difficulty of sample selection among the selected units with FDR control lies in (1) constructing valid p-values in the presence of selection and (2) dealing with the dependency between p-values for FDR control. The current paper mainly addresses the first problem, while the second problem is only partially solved for quite restrictive selection rules. In fact, finding the exchangeable group/constructing valid conformal p-values for selected units has already been quite extensively discussed in [1] and [2]. It would be helpful to clarify the technical contribution given this context. 1. Jin, Ying, and Zhimei Ren. "Confidence on the focal: Conformal prediction with selection-conditional coverage." arXiv preprint arXiv:2403.03868 (2024). 2. Bao, Yajie, et al. "CAS: A General Algorithm for Online Selective Conformal Prediction with FCR Control." arXiv preprint arXiv:2403.07728 (2024). Technical Quality: 3 Clarity: 3 Questions for Authors: See the "Weaknesses" section. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The paper has partially discussed its limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Weaknesses for technical contribution: As introduced in the paper, the technical difficulty of sample selection among the selected units with FDR control lies in (1) constructing valid p-values in the presence of selection and (2) dealing with the dependency between p-values for FDR control. The current paper mainly addresses the first problem, while the second problem is only partially solved for quite restrictive selection rules. In fact, finding the exchangeable group/constructing valid conformal p-values for selected units has already been quite extensively discussed in Jin and Ren (2024) and Bao et.al (2024). It would be helpful to clarify the technical contribution given this context. Thanks for your constructive comments on the technical contribution of our work. We acknowledge that our problem also relies on the idea of "finding the exchangeable group/constructing valid conformal p-values for selected units", which has been discussed in previous works [1] and [2]. However, our approach addresses a more challenging multiple testing problem. We would like to provide some clarifications to highlight the contribution of our work and explain how it differs from existing studies. - **First, our problem setup differs significantly from previous works**, which focus on constructing prediction intervals after selection. The goal of this paper is to **conduct multiple testing after data-dependent selection** such that the final rejection set has controlled FDR in a finite sample regime. This process involves the complex interaction among the p-values and the data-dependent selection process. In contrast, [1] and [2] both proposed a swapped strategy to construct valid conformal p-values after selection and then use them to build prediction intervals with selection-conditional coverage. The selection-conditional coverage is an individual notion and only requires the validity of a single p-value.
However, this does not account for the correlations among p-values and thus does not guarantee the validity of multiple testing procedures. - Second, **our main technical contribution lies in developing a unified analytical framework for handling the randomness arising from data-driven selection in the context of multiple testing**. This framework builds upon the conditional calibration framework [4], but we go beyond it by tackling the **challenges imposed by selection** in FDR control. In our approach, the number of test units is denoted as $|\hat{\mathcal{S}}_u|$, which is a random variable that can have complicated dependencies with the p-values. Although it is possible to replace $|\hat{\mathcal{S}}_u|$ with a fixed value $m$ for valid FDR control, doing so would be too conservative and would result in a significant loss of power. - Finally, to address the data-dependent selection effects, we leverage the **stability property** of the selection rule. Through a detailed investigation of the stability properties, we demonstrate that **our procedure can have finite-sample FDR control across many important selection rules**. Notably, this advantageous property of stability was not investigated in [1] and [2]. For example, under the mean selection rule, which we have confirmed to be weakly stable in our framework, [1] and [2] did not fully leverage its stability characteristics, leading them to adopt a different approach for constructing selective conformal p-values. Our approach is based on $\mathbf{S}_{\mathcal{D}_c\cup\{Z_j\},\mathcal{D}_u\backslash\{Z_j\}}$ in Section 3.3, while theirs is based on the swapped selection $\mathbf{S}_{\mathcal{D}_c\backslash\{Z_i\}\cup\{Z_j\},\mathcal{D}_u\backslash\{Z_j\}\cup\{Z_i\}}$. Our adaptive selective p-value is more computationally efficient as it only requires computing the selection rule once instead of $|\mathcal{C}_0|$ times for each test unit.
Also, our p-value is related to the selected subset $\mathbf{S}_{\mathcal{D}_c\cup\{Z_j\},\mathcal{D}_u\backslash\{Z_j\}}$, which has a closer relation to our FDR control analysis. We hope these clarifications are helpful. If you have any further questions, please feel free to reach out. [1] Jin, Ying, and Zhimei Ren. Confidence on the focal: Conformal prediction with selection-conditional coverage. arXiv, 2024. [2] Bao, Yajie, et al. CAS: A General Algorithm for Online Selective Conformal Prediction with FCR Control. arXiv, 2024. [3] Ying Jin and Emmanuel J Candès. Selection by prediction with conformal p-values. JMLR, 2023. [4] William Fithian and Lihua Lei. Conditional calibration for false discovery rate control under dependence. AOS, 2022. --- Rebuttal Comment 1.1: Comment: Thank you for clarifying your contribution! I will raise my score to 6. --- Reply to Comment 1.1.1: Comment: Many thanks for the review and for raising your score. If you have any other questions, concerns, or comments, please let us know. We would be glad to respond and address them in a future revision. Thank you!
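As background for the discussion above, here is a minimal sketch of standard (non-selective) split-conformal p-values combined with the BH procedure. The scores and the larger-is-more-extreme convention are illustrative assumptions; this is not the paper's adaptive selective p-value construction:

```python
import numpy as np

def conformal_pvalues(calib_scores, test_scores):
    """Unsmoothed split-conformal p-values: rank of each test score
    among null calibration scores (larger score = more extreme)."""
    calib = np.asarray(calib_scores, dtype=float)
    n = len(calib)
    return np.array([(1 + np.sum(calib >= s)) / (n + 1) for s in test_scores])

def bh(pvalues, alpha):
    """Benjamini-Hochberg: reject the k smallest p-values, where
    k = max{k : p_(k) <= alpha * k / m}."""
    p = np.asarray(pvalues, dtype=float)
    m = len(p)
    order = np.argsort(p)
    ks = np.arange(1, m + 1)
    ok = p[order] <= alpha * ks / m
    if not ok.any():
        return np.array([], dtype=int)
    k_hat = ks[ok].max()
    return np.sort(order[:k_hat])

# Nine hypothetical null calibration scores and two test points.
pvals = conformal_pvalues([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
                          [0.95, 0.05])
print(pvals)                                  # extreme test score gets p = 0.1
print(bh([0.01, 0.04, 0.2, 0.5], alpha=0.1))  # rejects indices 0 and 1
```

The paper's contribution lies in replacing these plain p-values with selective ones (and BH with a calibrated variant) so that validity survives data-dependent selection.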
Summary: The authors study the validity of a Benjamini-Hochberg-like procedure on conformal p-values computed on data selected with a particular rule. Assumptions on the rules (and examples verifying them) are stated and guarantees demonstrated in those cases. Experiments on synthetic and classical real data are also conducted. Strengths: The presentation is clear and progressive. The problematics are easily understood, and the difficulties are explicitly mentioned (e.g. computing p-values on data selected by a data-dependent procedure). The contribution is interesting for the conformal/statistics community, as there are numerous applications of the result, or at least the arguments, to complex conformal tasks. Weaknesses: I'm surprised by the lack of references to Vovk's work in particular, having studied conformal p-values and conformal testing for a long time. Theoretically, the weakness is due mainly to the weak-rule setting, although it is not a significant issue. However, I think the experimental part is the most limiting here, being limited to a very classical testing setting in a conference where applications related to deep learning would be of interest. Technical Quality: 4 Clarity: 3 Questions for Authors: Could you compare or clarify the contribution with regards to the conditional calibration scheme mentioned in the article? Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: The approach taken in the weak selection rule has a lower power. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Weakness 1: the lack of references to Vovk's work in particular, having studied conformal p-values and conformal testing for a long time. **To W1**: Thank you for the nice suggestion. We will incorporate more of Vovk's work to enhance the literature review. Several works we plan to discuss in a future version: [1] Vovk, V., Gammerman, A., and Saunders, C. Machine-learning applications of algorithmic randomness. ICML, 1999. [2] Papadopoulos, H., Proedrou, K., Vovk, V., and Gammerman, A. Inductive confidence machines for regression. ECML, 2002. [3] Vovk, V., Nouretdinov, I., and Gammerman, A. Testing exchangeability on-line. ICML, 2003. [4] Vovk, V., Lindsay, D., Nouretdinov, I., et al. Mondrian confidence machine. Technical Report, 2003. [5] Vovk, V. Conditional validity of inductive conformal predictors. Machine Learning, 2013. > Weakness 2: the experimental part is limited to a very classical testing setting in a conference where applications related to deep learning would be of interest. **To W2**: We greatly appreciate your feedback on the current experimental limitations, particularly the absence of a modern setting. Your insights have inspired us to explore applications related to deep learning. We apply our method to the field of drug discovery [1] as an initial trial. Based on the DAVIS dataset [2], we aim to identify drug-target pairs with high log binding affinity ($Y$). Our hypothesis is $H_{0,t}: Y_t<9.21$ with target FDR $\alpha=10\%$. To transform the structural information of proteins and chemical compounds into numerical features, we attach a bidirectional recurrent neural network on top of the 1D CNN output to encode them. Subsequently, we train a small neural network with 3 layers for 5 epochs based on the encoded information $X$ and binding affinity $Y$. Below, we present our basic results for the quantile selection rule.
In this rule, the $j$-th sample is selected if $\hat{\mu}(X_j)$ is larger than the 35%-quantile of the predicted values in the test set. The results are outlined as follows, demonstrating that our procedure (SCPV) can control the FDR precisely. | | FDR | Power | | --- | --- | --- | | SCPV | 9.72 | 76.67 | | OMT | 12.30 | 93.42 | Due to limited time, more experiments related to deep learning will be added in a future version. [1] Ying Jin and Emmanuel J Candès. Selection by prediction with conformal p-values. JMLR, 2023. [2] Mindy I Davis, et. al. Comprehensive analysis of kinase inhibitor selectivity. Nature Biotechnology, 2011. > Question: Could you compare or clarify the contribution with regards to the conditional calibration scheme mentioned in the article? **To Q**: Your question is very meaningful. Here we provide a detailed discussion of our contribution relative to conventional conditional calibration. The conventional conditional calibration [1] offers a flexible framework to decouple the dependence between p-values. It requires a carefully constructed quantity $\Phi_j$, such that an appropriate $c_j^*$ can be identified to satisfy the condition $\mathbb{E}[\mathbb{1}\{p_j\leq c^*_j\}/|\hat{\mathcal{R}}_j|\mid \Phi_j]\leq \alpha/m$, where $p_j$ is a p-value under the null, $\hat{\mathcal{R}}_j$ is a substitute for the original rejection set $\hat{\mathcal{R}}$, and $m$ is the number of test units. In our selective setting, the number of test units is $|\hat{\mathcal{S}}_u|$, which can depend in a complicated way on both $p_j$ and $\hat{\mathcal{R}}_j$. Moreover, when analyzing the FDR, the event that the $j$-th sample is selected is also involved. So our primary focus is on ensuring $\mathbb{E}\left[\frac{\mathbb{1}\{p_j\leq c^*_j,\,j\in\hat{\mathcal{S}}_u\}}{|\hat{\mathcal{R}}_j|}|\hat{\mathcal{S}}_u|\mid \Phi_j\right]\leq \alpha.$ The conditional calibration framework primarily focuses on the correlation of p-values.
However, a significant challenge arises because **FDR control in a selective setting involves not only individual p-values but also the selection procedure itself**. Consequently, the selective effects are unavoidable when implementing conditional calibration. To address this, we leverage the stability property of the selection rule, which allows us to conduct the analysis over the selected subset effectively and rigorously. [1] William Fithian and Lihua Lei. Conditional calibration for false discovery rate control under dependence. AOS, 2022. > Limitation: The approach taken in the weak selection rule has a lower power. **To L**: Thank you for pointing this out. The conditional calibration approach loses some power due to the pruning process. Our simulation results shown in the Appendix are based on deterministic pruning, which indeed loses power. However, various techniques exist that can enhance power through randomization. Here we present the improved results of heterogeneous random pruning, which is nearly as powerful as the BH procedure. | | Case A | | Case B | | | --- | --- | --- | --- | --- | | | FDR | Power | FDR | Power | | OMT | 15.8 | 97.1 | 16.1 | 83.8 | | Con | 9.86 | 93.4 | 9.80 | 78.1 | | BH | 9.86 | 94.0 | 9.80 | 78.4 | --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response and addressing my concerns. I have a clearer understanding of conditional calibration as compared to your work. I moreover appreciate the additional experiments. I maintained my rating but increased my confidence in it. --- Reply to Comment 1.1.1: Comment: We greatly appreciate your efforts in reviewing our work and increasing your confidence. If there are any additional insights or suggestions you would like to share, we are eager to hear them. Thank you once again for your support.
Rebuttal 1: Rebuttal: # Response to All Reviewers Dear reviewers, thanks for your great efforts and valuable comments on our paper! We are glad that the reviewers found our paper "considers an interesting problem", "provides an interesting approach" and "provides extensive theoretical proof". Multiple testing after data-dependent selection in the predictive setting is an important problem, and ours is the first systematic investigation to provide a unified solution. First, we would like to emphasize our contributions again. - Conducting multiple testing after data-dependent selection in the predictive setting is crucial in numerous real-world problems and **it is barely considered in previous work**. Our work **represents the pioneering effort in tackling this problem**. - Our proposed procedure is theoretically verified to have **FDR control for data-dependent selection** with stability, which **covers a wide range of selection rules**. - Our method can be easily **integrated with any black-box prediction model for both regression and classification settings**. Extensive numerical experiments indicate the superiority of our method. Next, we aim to expound our main theoretical challenges. - There is more **complex randomness** to handle, such as the number of test units, which need not be considered in a single test. Our approach **incorporates stability conditions designed for the difficulties** encountered in multiple testing. It helps us to **mitigate the randomness** in the number of test units and the correlation between p-values. - The existence of **selections beyond exchangeability brings extra intricate correlations** to the selective p-values and makes it difficult to conduct multiple testing. Our p-values are **constructed to decouple using the stability of the selection**, particularly the p-values under weakly stable selections. This construction can effectively counteract the correlations due to non-exchangeability.
In the rebuttals, we have clarified the unique contributions of our research by highlighting its novelty compared to existing literature and supplemented our findings with additional numerical results to strengthen the validity of our method. In addition, we also explain the technical challenges faced by our approach.
NeurIPS_2024_submissions_huggingface
2024
GO4Align: Group Optimization for Multi-Task Alignment
Accept (poster)
Summary: The paper designs a multi-task optimization method, namely GO4Align, to address task imbalance by aligning optimization processes across tasks. It proposes an adaptive group risk minimization strategy, formulated as a bilevel optimization problem where the lower-level optimization is a task grouping optimization and the upper-level optimization is a weighted optimization over task group losses. Then, this method optimizes the problem by alternating two steps: (1) Dynamical Group Assignment, where tasks are clustered using a dynamic group assignment process, implemented via K-means clustering, to capture beneficial task interactions. (2) Risk-guided Group Indicators: indicators are designed to balance task risks and align learning progress by combining scale-balance and smooth-alignment operations. GO4Align is evaluated on benchmarks, including NYUv2, CityScapes, QM9, and CelebA. The results show that it outperforms existing gradient-oriented (MGDA, PCGRAD, CA-GRAD, IMTL-G, GRADDROP, and NASHMTL) and loss-oriented methods (Linear scalarization, Scale-invariant, Dynamic Weight Average, Uncertainty Weighting, Random Loss Weighting, and FAMO), achieving lower performance drops and better computational efficiency. Lastly, the study explores the contributions of each component of GO4Align, the influence of group assignments, and the role of group weights. It shows that the proposed AGRM principle can integrate with existing MTO methods, further improving their performance. Strengths: - This paper proposes an adaptive group risk minimization strategy to address task imbalance, formulated as a bilevel optimization problem where the lower-level optimization is a task grouping optimization and the upper-level optimization is a weighted optimization over task group losses. - GO4Align is evaluated on benchmarks, including NYUv2, CityScapes, QM9, and CelebA.
The results show that it outperforms 12 existing gradient-oriented and loss-oriented methods, achieving lower performance drops and better computational efficiency. Weaknesses: - There are many existing clustering algorithms that can be potentially used for task grouping, such as spectral clustering and SDP-based clustering [1]. What is the rationale for choosing K-means clustering in the method? It would be better to discuss and ablate the clustering algorithms, as it is an important component of the proposed method. - Another important component is the choice of the group indicators for clustering tasks. This work uses indicators based on task loss trajectories. How about using gradients and model features as group indicators? Would it be worse than the proposed indicators in Section 3.3? - It would be better to explain why the proposed method uses comparable time to linear scalarization. How does the proposed method scale with the number of tasks? It would be better to provide a comparison of the additional computation across methods. [1] Relax, no need to round: integrality of clustering formulations. https://arxiv.org/abs/1408.4045 Technical Quality: 3 Clarity: 3 Questions for Authors: - How is the convergence difference evaluated in Figure 2? How is the $\Delta m$ metric defined? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: This work has discussed its limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *We sincerely thank # Reviewer sLBS for their insightful comments. The following mainly addresses the concerns and answers questions.* --- **1. Clustering methods.** In our main experiments, we employed standard K-means for instantiation. K-means is a widely used clustering approach that worked well in our experiments, so we did not explore this part further (as it was not the main focus). We appreciate your suggestion and have conducted additional experiments to evaluate the impact of using alternative clustering algorithms. We thank you for bringing up the reference [a]; however, due to the absence of open-source code for [a], we explored another SDP-based clustering method [b]. Specifically, we conducted experiments on NYUv2 by substituting the K-means clustering in our method with SDP-based clustering [b] and spectral clustering [c]. As demonstrated in the table below, these alternative clustering methods also outperform state-of-the-art approaches (FAMO, -4.10%), particularly by enhancing the performance of each task over STL. For training efficiency, we implemented K-means using the `kmeans_pytorch` package, which offers GPU support. Interestingly, our experiments show that the K-means clustering algorithm we deployed outperforms both the spectral and SDP-based clustering methods. This is because the hyperparameters for the latter algorithms have yet to be thoroughly investigated. We have presented promising initial results and will include this ablation study and related discussions in the main manuscript.
| Method | Package | GPU support of clustering | Relative runtime $(\downarrow)$ | $\mathbf{\Delta seg.} (\downarrow)$ | $\mathbf{\Delta depth} (\downarrow)$ | $\mathbf{\Delta normal} (\downarrow)$ | $\mathbf{\Delta m} (\downarrow)$ | |----------------------|-----------------|-------------|--------------|---------------|----------|-------------|------------------| | Ours w/ SDP-based clustering [b] | sdp_kmeans | NO | 1.20X | -2.97 | -18.76 | -1.09 | -5.44 | | Ours w/ Spectral clustering [c] | sklearn | NO | 1.17X | -1.78 | -18.58 | -0.06 | -4.56 | | Ours w/ K-means clustering (in the paper) | kmeans_pytorch | **YES** | **1.02X** | **-4.03** | **-20.37** | **-1.18** | **-6.08** | --- **2. Choice of the group indicators.** It is also plausible to use the gradient information as clustering indicators. As shown in **Table 4**, we tried to replace the proposed risk-guided group indicator with gradient-guided task weights (MGDA and NashMTL). Our work outperforms these alternatives. The main reason could be that our group indicator yields better representations of learning information by capturing the differences in the per-task risk scale and exploring the learning dynamics over time. Moreover, from the perspective of training efficiency, the task-specific gradients need to back-propagate through the shared architecture $M$ times, where $M$ is the number of tasks, which increases computational cost linearly with the number of tasks. --- **3. Comparable time with linear scalarization and scale-up number of tasks.** We follow representative MTO works (RLW[39] and FAMO[14]) in choosing linear scalarization (LS) as the relative baseline for training time. This is a good choice because LS is a common MTL baseline, where each task has equal weights without extra loss-oriented or gradient-oriented techniques. As shown in **Fig.
4**, when the number of tasks scales up from 2 to 40, the advantages of reducing computation cost with our method become increasingly significant compared to other gradient-oriented methods, e.g., NashMTL=2.07 versus Ours=1.01 with 2 tasks, NashMTL=2.49 versus Ours=1.01 with 40 tasks. We will add this discussion in **Line 299**. --- **4. Questions.** **Convergence difference in Figure 2**: We evaluate the convergence difference by the standard deviation of the numbers of epochs each task needs to reach convergence. We will polish this description in **Line 90**. **Definition of $\Delta m$**: $\Delta m$ is the average per-task performance drop relative to STL, following representative MTO works (NashMTL and FAMO). We will polish this in **Line 268**. --- **Reference**: [a] Awasthi, P., Bandeira, A. S., Charikar, M., Krishnaswamy, R., Villar, S., & Ward, R. (2015, January). Relax, no need to round: Integrality of clustering formulations. In Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science (pp. 191-200). [b] Tepper, M., Sengupta, A. M., & Chklovskii, D. (2017). The surprising secret identity of the semidefinite relaxation of k-means: manifold learning. arXiv preprint arXiv:1706.06028. [c] Damle, A., Minden, V., & Ying, L. (2019). Simple, direct and efficient multi-way spectral clustering. Information and Inference: A Journal of the IMA, 8(1), 181-203. --- *Thank you for your time and efforts. We hope our experimental results and clarifications have addressed your concerns. Please don’t hesitate to reach out if you have further questions or need more information.*
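To make the dynamic group assignment step concrete, here is a rough sketch that clusters per-task indicators with a plain NumPy k-means; the `gamma` trajectories and the small Lloyd's-iteration implementation are hypothetical stand-ins, not the paper's `kmeans_pytorch`-based code:

```python
import numpy as np

def kmeans(x, k, iters=50, seed=0):
    """Plain Lloyd's k-means on the rows of x; returns cluster labels."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each row to its nearest center (Euclidean distance).
        dists = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):            # update only non-empty clusters
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return labels

# Hypothetical risk-guided indicators for M=3 tasks over two recent steps:
# tasks 0 and 1 evolve similarly, task 2 sits on a very different scale.
gamma = np.array([[1.0, 0.9],
                  [1.1, 1.0],
                  [5.0, 4.8]])
labels = kmeans(gamma, k=2)
# Tasks landing in the same cluster would then share one group weight.
assert labels[0] == labels[1] and labels[0] != labels[2]
```

The grouping is recomputed each round from the updated indicators, which is what makes the assignment "dynamic" in the method's sense.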
Summary: This paper proposes a multi-task optimization method designed to address task imbalance by aligning optimization processes across different tasks. To accomplish this, the authors developed an adaptive group risk minimization strategy, which includes two key techniques: (i) dynamic group assignment, clustering similar tasks based on their interactions, and (ii) risk-guided group indicators, leveraging consistent task correlations and risk information from previous iterations. Extensive experimental results across various benchmarks show that the proposed method outperforms others while requiring even lower computational costs. Strengths: 1. The paper is well-organized and easy to follow. 2. The proposed method outperforms several newly proposed MTO algorithms. 3. Extensive experimental results across various benchmarks show that the proposed method outperforms others while requiring even lower computational costs. Weaknesses: 1. To help readers better understand the application scenarios of the proposed method, the author can provide examples illustrating the phenomenon of task imbalance in MTO tasks. 2. In Eq. (4), what is the optimization method for $w$ and $G$? Is it performing standard k-means on $\gamma$? 3. In Line 172, why is it possible to invert the $K\times M$ matrix $G$? 4. In the introduction of the dataset, there is a lack of explanation regarding the imbalance phenomenon in the relevant learning tasks. Technical Quality: 2 Clarity: 3 Questions for Authors: See "Weaknesses". Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: No. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *We sincerely thank # Reviewer 9PzS for their insightful comments. The following mainly addresses the concerns and answers questions.* --- **1. The phenomenon of task imbalance and application scenarios.** Thanks for your kind suggestion. The phenomenon of task imbalance refers to some tasks being severely under-optimized during multi-task training. This can be observed in **Fig. 2 and Table 1**. (i) In the left subplot of Fig. 2, task “normal” is under-optimized, causing it to converge earlier than the other tasks. (ii) In Table 1, most baselines achieve comparable performance to STL on the segmentation and depth estimation tasks but significantly sacrifice the performance on the surface normal estimation task. We note that on NYUv2, our work is the only method that improves each task’s performance relative to the corresponding STL results, especially for the surface normal estimation task. This demonstrates that our method performs better in alleviating task imbalance. Meanwhile, we would like to specify one typical application scenario of GO4Align. It lies in vision-based autonomous driving, such as Autopilot, which needs to simultaneously handle several tasks (instance segmentation and depth estimation). Significant differences in the improvements across tasks could exist due to diverse scales, different task difficulties, or asynchronous learning dynamics. In this case, the proposed method can be deployed to balance their learning process, particularly without extra training costs. --- **2. What is the optimization method for $\omega$ and $\mathcal{G}$?** We use standard K-means with “Euclidean” distance (*import kmeans_pytorch*) on $\gamma$ to optimize $\omega$ and $\mathcal{G}$. --- **3. Why is it possible to invert the matrix $\mathcal{G}$?** Special thanks for pointing this out. The assignment matrix $\mathcal{G}$ has full row rank.
For simplicity, we use its generalized inverse, specifically the one-sided right inverse, $\mathcal{G}_{R}^{-1}=\mathcal{G}^{\top}(\mathcal{G}\mathcal{G}^{\top})^{-1}.$ We will add a detailed description in **Line 172**. --- **4. Imbalance phenomenon in datasets.** The task-imbalance phenomenon in the MTO literature [1, 27] mostly refers to the imbalanced optimization process rather than data distributions in the task space. We will clarify this in the paper, and specifically, we will add the description of task imbalance in **Line 257**. --- *Thank you for your time and efforts. We hope this has addressed your concerns and answered your questions. Please don’t hesitate to reach out if you have further questions or need more information.*
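The right-inverse formula above is easy to check numerically; the sketch below uses a hypothetical $2\times 3$ assignment matrix (group 0 = tasks 0 and 1, group 1 = task 2), which has full row rank:

```python
import numpy as np

# Hypothetical group-assignment matrix G (K=2 groups x M=3 tasks).
G = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

# One-sided right inverse of a full-row-rank matrix:
# G_R = G^T (G G^T)^{-1}, which satisfies G @ G_R = I_K.
G_R = G.T @ np.linalg.inv(G @ G.T)

assert np.allclose(G @ G_R, np.eye(2))
```

Full row rank guarantees that $\mathcal{G}\mathcal{G}^{\top}$ is invertible, which is exactly the condition the rebuttal invokes.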
Summary: The paper proposes GO4Align, a multi-task optimization approach, which targets the task imbalance issue. Specifically, the authors devise two objectives in multi-task optimization: 1) the first is the dynamical group assignment, which can attribute similar tasks to the same cluster and distinguish different ones. 2) The second is the risk-guided group indicators, which include scale-balance and smooth-alignment. These two objectives are incorporated into the AGRM principle for multi-task optimization. Strengths: From an overall view, this paper is quite well-structured, with clear writing and an organized layout. The primary motivation, which is to optimize task imbalance, is clearly articulated and valuable. The methodology is understandable and easy to reproduce. Also, the experiments show that GO4Align achieves a -6.08% $\Delta m$ with no additional training time. Weaknesses: While I do not have major concerns about this paper, it’s worth noting that the performance improvement of GO4Align over FAMO is marginal, which may limit its broader impact. The concept of grouping-based task interactions is innovative, but not necessarily groundbreaking. Besides, I have some questions. Please refer to the next part. Technical Quality: 3 Clarity: 4 Questions for Authors: - Given the existence of Eq.4, I’m unsure whether Eq.5 or Eq.6 would have a significant impact. How would the performance change if only Eq.4 is included? - The paper asserts that GO4Align has even lower computational costs, yet it is 1.02x that of STL. Does this mean that GO4Align is more efficient compared to the baselines (not the STL)? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *We sincerely thank # Reviewer AXsM for their insightful comments. The following addresses their concerns and provides answers to their questions.* --- **1. GO4Align versus FAMO.** We thank the reviewer for the comment. GO4Align and FAMO are both strong candidates for handling task imbalance in the MTO literature. However, on NYUv2, our work is the only method that improves each task’s performance relative to the corresponding STL results, especially for the challenging surface normal estimation task. This demonstrates that our method performs better in alleviating task imbalance. --- **2. Questions.** **Performance change with only Eq.4**: Eq.4 is a bi-level optimization process, where the lower-level optimization takes the group indicators (Eq.5 and Eq.6) as inputs to update the assignment matrix and group weights. Thus, the optimization process cannot be performed individually without specifying the group indicator. More details can be found in **Code Line 6-10** of Algorithm 1. **Computational cost**: Each method’s training time is computed relative to a common MTL baseline (LS, linear scalarization), the same as used in representative MTO works (RLW[39] and FAMO[14]). LS weights each task equally without extra loss-oriented or gradient-oriented techniques. Moreover, our method has the advantage of computational efficiency over STL. STL incurs O(M) space and time cost, where M is the number of tasks. In contrast, our work uses O(1) space and time due to the shared architecture and loss-oriented mechanism. --- *Thanks again for your time and efforts. We hope this has answered your questions. If you have any other questions, we are happy to discuss them and provide further clarification.* --- Rebuttal Comment 1.1: Comment: Thanks for your response. I have no further question and keep the current rating at this phase.
Summary: This paper presents an approach for multi-task learning aiming to reduce interference among tasks via adaptive loss weighting. Instead of task-specific weights as in existing works, the authors propose to group similar tasks so that all tasks in the same group share the same weight. The paper also includes a new derivation of task-level weights (referred to as the group indicator), which contains two components: scale-balance and smooth-alignment. Evaluation was done using four benchmark datasets, three in computer vision and one in chemistry (predicting properties of molecules). Strengths: • The manuscript is very well written and easy to follow. • The proposed idea is neat. The proposed task grouping is flexible and can easily work with other task-weight deriving methods. • An ablation study was done to evaluate the contribution of the ingredients in the proposed method. Weaknesses: • I see that the technical contributions of this work are twofold. One is the derivation of the group indicator (essentially task-specific weights) and the other is to use the same weight for all tasks in a group instead of just using task-specific weights. I do not see a clear motivation for either. More discussion is needed on why both are expected to be better than existing approaches. • Figure 5 needs more description to make it easier to read. It took me quite some effort and time to figure out that the x-axis in the plots is epochs and that the intensity of the color indicates the weight value (if I am correct). • There are results from only one dataset which contains just three tasks. Results from more datasets, especially those with a larger number of tasks, are desired, given that the proposed method aims at task grouping. • The reported results are not strong.
(1) Based on the results from CityScapes (Table 2), which has only two tasks (meaning there is not much grouping involved and differences in performance should be due to differences in weight calculation), GO4Align is not the best-performing one, implying the proposed weight calculation could be inferior to existing ones. (2) All numbers in Table 2 are positive, implying the models perform worse than those in single-task learning, which defeats the major advantage of MTL: enhancing model generalizability and performance. • I do not see an ablation study that compares grouping to no grouping at all. • More details on the experimental setup are needed to enhance reproducibility, especially how the partition of each dataset was done for training, validation, and testing. Technical Quality: 3 Clarity: 3 Questions for Authors: Refer to weaknesses Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *We sincerely thank # Reviewer FqZ5 for their insightful comments. The following addresses their concerns and provides answers to their questions.* --- **1. Motivations.** Thank you for acknowledging the two technical contributions of our paper. We will clarify the motivations for both in the main manuscript as follows: **Group indicator**: The motivation of the group indicator derivations is to fully utilize the risk information to explore the relationships among tasks, where using risk information avoids the computational cost of gradients. Compared with other loss-oriented methods (such as RLW, DWA, UW, and FAMO), our group indicator can capture the differences in the per-task risk scale and fully utilize the learning dynamics over time, yielding better representations of risk information. Empirically, **Table 4 in Line 340** shows comparisons with *other task-specific weights (MGDA, NashMTL, and FAMO) + the proposed AGRM principle*; our group indicators result in superior performance over other alternatives. **Group-specific weights versus task-specific weights**: The motivation for group-specific weights is that tasks with close group indicators have similar risk behaviors, and similar tasks can benefit from training together by sharing parameters as much as possible [21], even sharing weights in the multi-task objective. With the group-specific weights, our method can align tasks with similar risk behaviors during joint training, alleviating the task imbalance issue. As shown in **Table 3 in Line 307**, our work with group-specific weights outperforms its variants with task-specific weights (without Eq.4). This further demonstrates that our group-specific weights can effectively alleviate the under-optimization of some tasks. --- **2. Caption of Figure 5.** Thanks for your suggestion. We've taken your advice and added the description to the caption of **Figure 5**.
> The x-axis in the subplots denotes the epoch, and the intensity of the color indicates the weight value. --- **3. Datasets with a larger number of tasks.** In the MTL literature, it is common to evaluate on datasets with a small number of tasks, such as NYUv2 (3 tasks) and CityScapes (2 tasks) [6, 13, 14, 16]. As task grouping may be beneficial for larger numbers of tasks, we also included QM9 (11 tasks) and CelebA (40 tasks). On these last two benchmarks, our method achieves better overall performance than other loss-oriented methods, with higher training efficiency than gradient-oriented baselines. The overall results are in **Table 2 in Line 276** and details are in Appendix **Tables 5 & 6**. --- **4. Experimental analysis of Table 2.** We appreciate the reviewer's careful observations of Table 2. We want to provide clarifications on two points: **Weight calculations on CityScapes**: GO4Align achieves comparable performance, securing the third position ($\Delta m$) on CityScapes. It’s critical to note that this benchmark, comprising only two tasks, offers limited grouping options, which constrains the effectiveness of the proposed weight calculation (risk-guided group indicator). Furthermore, compared with other weight calculation methods on NYUv2 (**Table 4 in Line 340**), the proposed risk-guided group indicator surpasses alternative weight calculations. **MTL methods underperform STL in terms of the overall evaluation**: This is a common phenomenon in the MTL literature (CAGrad, NashMTL, and FAMO), caused by the task-imbalance issue, where some tasks are under-optimized during joint training. In particular, we note that: (i) In Appendix **Tables 5 & 6**, which report detailed task-specific performance, we can observe that most baselines achieve performance comparable to STL on some tasks but significantly sacrifice performance on the others.
(ii) MTL still offers advantages such as improved computational efficiency and reduced training time, which significantly contribute to real-world systems. --- **5. Our model with grouping versus without grouping.** We refer the reviewer to the ablations in **Table 3 in Line 307**. As the grouping is performed by Eq.(4), the first two rows in Table 3 are the variants of our method without grouping, and the last row is our method with grouping. Table 3 empirically examines the performance gains of task grouping over models without grouping. We will make this clearer in **Line 307**. --- **6. Detailed experimental setup and open-source plan.** This work follows the same experimental setting used in NashMTL [13] and FAMO [14], including the dataset partition for training, validation, and testing. The benchmark partition is attached below, and we will add this table in Appendix Sec. B.1. We also note that NYUv2 and CityScapes do not have validation sets. Following the protocol in [13, 14], we report the test performance averaged over the last ten epochs. ***Importantly, we will release our code to facilitate MTO research after the final decision.***

| Datasets | Total | Training | Validation | Test |
|------------|----------|----------|------------|--------|
| NYUv2 | 1449 | 795 | N/A | 654 |
| CityScapes | 3475 | 2975 | N/A | 500 |
| QM9 | ~130k | ~110k | 10k | 10k |
| CelebA | 202,599 | 162,770 | 19,867 | 19,962 |

--- *Thank you for your feedback. We greatly appreciate the time and effort you put into reviewing our work. We have carefully considered your comments and made improvements based on your suggestions. We hope you will reconsider your evaluation of our work. Thank you once again.* --- Rebuttal Comment 1.1: Comment: I appreciate the response from the authors. Although I still do not see a clear explanation of why grouping helps, my other comments have been largely addressed.
Considering this together with the general positive ratings from other reviewers, I have increased my rating.
Rebuttal 1: Rebuttal: Dear Reviewers and Area Chair: We sincerely thank you for your time, insightful suggestions, and valuable comments. We are encouraged by your support and positive reviews of our work: + Neat/innovative idea of adaptive task grouping in MTO **[# Reviewers FqZ5/AXsM]** ; + Manuscript well-written/structured and easy to follow **[# Reviewers FqZ5/AXsM/9PzS]**; + SOTA performance with no further training time **[# Reviewers AXsM/9PzS]**; + Extensive experiments with sufficient ablation studies **[# Reviewers FqZ5/9PzS/sLBS]**. To address your concerns, we have been working diligently on improving the paper in several aspects. We summarize the major changes that we will update in the main manuscript: + Add additional conceptual analysis to clarify motivations for our method **[# Reviewer FqZ5]** and the computation efficiency over STL **[# Reviewer AXsM]**; + Add additional experimental analysis for the main results and the ablations, such as "Experimental analysis on Table 2" **[# Reviewer FqZ5]**, "Our model with grouping versus without grouping" **[# Reviewer FqZ5]**, "GO4Align versus FAMO" **[# Reviewer AXsM]**, "The phenomenon of task imbalance and application scenarios" **[# Reviewer 9PzS]**, "Choice of the group indicators" **[# Reviewer sLBS]**, and "Comparable time with linear scalarization and scale-up number of tasks" **[# Reviewer sLBS]**; + Provide additional experimental results to investigate the effects of different clustering methods. 
For more details and discussions, please refer to the response to **[# Reviewer sLBS]**;

| Method | Package | GPU support of clustering | Relative runtime $(\downarrow)$ | $\mathbf{\Delta seg.} (\downarrow)$ | $\mathbf{\Delta depth} (\downarrow)$ | $\mathbf{\Delta normal} (\downarrow)$ | $\mathbf{\Delta m} (\downarrow)$ |
|----------------------|-----------------|-------------|--------------|---------------|----------|-------------|------------------|
| Ours w/ SDP-based clustering | sdp_kmeans | NO | 1.20X | -2.97 | -18.76 | -1.09 | -5.44 |
| Ours w/ Spectral clustering | sklearn | NO | 1.17X | -1.78 | -18.58 | -0.06 | -4.56 |
| Ours w/ K-means clustering (in the paper) | kmeans_pytorch | **YES** | **1.02X** | **-4.03** | **-20.37** | **-1.18** | **-6.08** |

+ Clarify descriptions of figures **[# Reviewer FqZ5]**, experimental setups **[# Reviewers FqZ5/9PzS]**, generalized inverse of the assignment matrix **[# Reviewer 9PzS]**, and evaluation metrics **[# Reviewer sLBS]**. Once again, we thank all reviewers and area chairs. Your efforts and suggestions helped us improve this paper. Please see the **reviewer-specific response** for more detailed information.
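To make the grouping mechanism discussed in this rebuttal more concrete, here is a hypothetical sketch of group-specific task weighting: tasks are clustered by a scalar risk indicator, and all tasks in a group share one weight. The risk ratios, the choice of `k`, the softmax weighting, and all function names are illustrative assumptions, not the paper's Eq.4-6.

```python
# Hypothetical sketch of group-specific task weighting. The real GO4Align
# group indicators (Eq.4-6) are not reproduced here; risk ratios, k, and
# the softmax-style weighting below are illustrative assumptions.
import numpy as np

def kmeans_1d(x, k, iters=50, seed=0):
    """Plain k-means on scalar values; returns a cluster label per value."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False)
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels

def group_weights(risk_ratios, k=2, temperature=1.0):
    """Cluster tasks by risk ratio; all tasks in a group share one weight."""
    risk_ratios = np.asarray(risk_ratios, dtype=float)
    labels = kmeans_1d(risk_ratios, k)
    group_risk = np.array([risk_ratios[labels == j].mean() for j in range(k)])
    # Higher-risk (more under-optimized) groups get larger weights.
    w = np.exp(group_risk / temperature)
    w = w / w.sum() * k
    return w[labels], labels
```

For example, tasks with risk ratios `[0.9, 0.95, 0.3, 0.32]` split into a high-risk pair and a low-risk pair, with the high-risk group receiving the larger shared weight.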
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Private Algorithms for Stochastic Saddle Points and Variational Inequalities: Beyond Euclidean Geometry
Accept (poster)
Summary: This paper studies private SSP beyond Euclidean geometry. They prove a near-optimal bound on the SP-gap for geometries between $\ell_1$ and $\ell_2$. This result is then extended to SVI. Strengths: The results are a solid improvement over previous work. The method for overcoming the generalization issue is novel and interesting. Weaknesses: I have a few concerns about the technical results. (1) What is $\mathcal{A}_{emp}$ in your algorithm? It is the key subroutine, yet it is never formally defined or specified. Can you give a concrete example, as you claimed in line 182? (2) Is there any algorithmic novelty compared with previous work, or is it just an improved analysis? The comparison with BGM23 could be made clearer. (3) Can you give some practical examples to motivate the need for considering non-Euclidean geometry? Otherwise the work looks somewhat incremental. There are numerous typos. Technical Quality: 2 Clarity: 2 Questions for Authors: See weaknesses. Confidence: 2 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. The exact instantiation of $\mathcal A_{emp}$ we use is given by Algorithm 2, Stochastic Mirror Prox. Lemma 3 shows that the implementation satisfies the needed relative accuracy guarantee. We will add the following comment to the ``Algorithm Overview'' section, line 179, to make this clearer. *The saddle point problem defined in each round of Algorithm 1 is solved using some empirical subroutine, $\mathcal A_{emp}$. This subroutine takes as input a partition of the dataset, $S_t$, the regularized loss function for that round, $f^{(t)}$, a starting point, $[\bar w_{t-1},\bar \theta_{t-1}]$, and an upper bound on the expected distance to the empirical saddle point of the problem defined by $S_t$ and $f^{(t)}$. The exact implementation of $\mathcal A_{emp}$, Algorithm 3, will be discussed in the next section. Here, we focus on the guarantees of Recursive Regularization given that $\mathcal A_{emp}$ satisfies a certain accuracy condition.* 2. There is some algorithmic novelty in that we 1) need to use non-Euclidean regularizers and 2) need to implement new DP techniques for the subroutine $\mathcal A_{emp}$. That is, for the purposes of [BGM23], noisy stochastic gradient descent-ascent was sufficient, but our more general setup required a private implementation of the stochastic mirror prox algorithm. With that said, our main claim to novelty is in our analysis, which differs in crucial and non-obvious ways from [BGM23], as we detail in Section 3. 3. Yes, the $\ell_1/\ell_2$ setup is particularly important. This is used to formulate problems that allow an adversary to mix different possible loss functions. Concretely, assume $f_1(w;x),...,f_k(w;x)$ are $\ell_2$-Lipschitz loss functions.
Then one can consider the saddle point problem: $$ F_{\mathcal D}(w,\theta) = \mathbb E_{x\sim\mathcal D}\Big[{\sum_{j=1}^k \theta_j f_j(w;x)}\Big],$$ where $\theta$ is constrained to the standard simplex and $w$ is constrained to some compact, $\ell_2$-bounded set. This setup has been used in agnostic federated learning [MSS19] as just one example. We will add this example to our paper using the extra space given for the revision. [MSS19]: Mehryar Mohri, Gary Sivek, and Ananda Theertha Suresh. Agnostic federated learning. ICML 2019 --- Rebuttal Comment 1.1: Comment: Thank you for your response! The paper can benefit from adding these explanations/discussions. I will maintain my score.
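As a minimal numeric illustration of the mixture setup above (the toy losses and function names are our own assumptions, not from the paper): since $F_{\mathcal D}(w,\theta)$ is linear in $\theta$, the inner maximum over the simplex is attained at a vertex, i.e., the adversary puts all its mass on the worst task.

```python
# Toy illustration: theta lives on the simplex, so the inner maximum of
# F(w, theta) = sum_j theta_j * L_j(w) is attained at a vertex (the worst
# per-task loss). The per-task losses below are made-up numbers.
import numpy as np

def mixture_objective(losses, theta):
    """F(w, theta) for fixed per-task losses L_j(w) and simplex weights theta."""
    return float(np.dot(theta, losses))

def worst_case_mixture(losses):
    """Max over the simplex of the mixture objective, attained at a vertex."""
    losses = np.asarray(losses, dtype=float)
    theta = np.zeros(len(losses))
    theta[np.argmax(losses)] = 1.0
    return mixture_objective(losses, theta), theta
```

So minimizing $\max_\theta F(w,\theta)$ over $w$ amounts to minimizing the worst expected task loss, which is exactly the agnostic federated learning objective of [MSS19].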
Summary: This paper is quite far from my area, so please consider this review accordingly. The paper addresses the problem of private Stochastic Saddle Points and Variational Inequalities. The primary contribution is extending previous work that focused solely on the L2/L2 setup to more general lp/lq settings, where the primal problem follows an lp-setup and the dual problem follows an lq-setup. The main result is the development of an algorithm that achieves optimal excess error measured by the strong SP-gap. Strengths: The paper, in general, feels quite dense. For instance, the second paragraph mentions monotone operators without providing a definition. Additionally, the contribution section is not clear to me. The algorithmic aspect of the work is very similar to [BGM23]. However, the authors needed to make some changes to the analysis, and they did a good job describing these necessary modifications. Weaknesses: The main weakness of the work is its presentation. It is very difficult to parse many parts of the paper. Technical Quality: 3 Clarity: 3 Questions for Authors: n/a Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We provide a definition of monotone operators in the preliminaries section, line 122. We can use the extra page allowed in the final version to provide more background. With regards to the contribution of our work, while some algorithmic changes are needed in comparison to [BGM23], we emphasize that our primary contribution is our analysis technique. There is some algorithmic novelty in that we 1) need to use non-Euclidean regularizers and 2) need to implement new DP techniques for the subroutine $\mathcal{A}_{emp}$. That is, for the purposes of [BGM23], noisy stochastic gradient descent-ascent was sufficient, but our more general setup required a private implementation of the stochastic mirror prox algorithm. With that said, our main claim to novelty is in our analysis, which differs in crucial and non-obvious ways from [BGM23], as we detail in Section 3.
Summary: This work studied stochastic saddle point and variational inequality problems in potentially non-Euclidean cases. For stochastic saddle point problems, they proposed a recursive regularization framework and provided the convergence guarantee and sample complexity for convex-concave problems. They further extended the framework to variational inequalities and incorporated differential privacy. Corresponding convergence guarantees and complexity results are also provided. Strengths: 1. First work on SSPs and SVIs in general non-Euclidean settings. 2. The proposed rate is nearly optimal. Weaknesses: 1. The boundedness assumption is a bit restrictive, regarding many unconstrained problems in practice. 2. Some important assumptions are hidden in the statements of the theorems, for example, the strong convexity of $\|\cdot\|_w$ and $\|\cdot\|_\theta$, while it is not fully rationalized (beyond the $\ell_p, p\in(1,2]$ case), and it may not be satisfied in some important special cases like $\ell_1$; I think the motivation for non-Euclidean 3. The paper flow and main results are a bit similar to [BGM23], which makes it a little incremental. Even though the authors claimed some differences, from the appendix, many proofs are still very similar to those in [BGM23] with minor changes like $\kappa$. But I agree the changes in some parts, like the proof of Property P.2, reveal certain novelty. Typo: 1. In the algorithms, the parameters of $\mathcal{A}_{\text{emp}}(\cdot,\cdot,\cdot,\cdot)$ are not clearly defined. Technical Quality: 3 Clarity: 3 Questions for Authors: / Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: / Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. Assuming the parameter space is bounded is very common in SSPs due to the problems unconstrained domains incur. For example, even for simple bilinear losses, say $f(w,\theta) = \langle w, \theta \rangle$, an unbounded domain means the strong gap is *infinite* at any non-zero point. 2. It is indeed not always the case that $\lVert \cdot \rVert_w^2$ and $\lVert \cdot \rVert_\theta^2$ are strongly convex. However, as we elaborate more in Section 4, this assumption is satisfied for $p\in[1+\frac{1}{\log(d)}, 2]$, where the squared norm is *strongly convex*. Further, for $p=1$ (and more generally, for $p\in[1,1+\frac{1}{\log(d)}]$), we easily solve the problem by instead solving a problem with $p'=1+\frac{1}{\log(d)}$, as we describe in Section 4. We will add more discussion after Theorem 1 so that this point does not feel hidden from the reader. 3. While [BGM23] contains several ideas that serve as a starting point for the current submission, note that this work makes key novel contributions which allow the nontrivial extensions to SVIs and non-Euclidean settings, as acknowledged by reviewer nH8p. Most importantly, our new generalization analysis for these problems (see `key proof ideas' in page 6) permits the use of sequential regularization in SVIs and non-Euclidean settings. To our knowledge, this result is entirely new, and of interest beyond differential privacy. 4. We will clarify the parameters of $A_{emp}$ by adding the following comment to the ``Algorithm Overview'' section, line 179: *The saddle point problem defined in each round of Algorithm 1 is solved using some empirical subroutine, $A_{emp}$. This subroutine takes as input a partition of the dataset, $S_t$, the regularized loss function for that round, $f^{(t)}$, a starting point, $[\bar w_{t-1}, \bar \theta_{t-1}]$, and an upper bound on the expected distance to the empirical saddle point of the problem defined by $S_t$ and $f^{(t)}$.
The exact implementation of $A_{emp}$, Algorithm 3, will be discussed in the next section. Here, we focus on the guarantees of Recursive Regularization given that $A_{emp}$ satisfies a certain accuracy condition.*
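The boundedness remark in point 1 of the response above can be checked numerically: for the bilinear loss $f(w,\theta)=\langle w,\theta\rangle$ on $\ell_2$ balls of radius $R$, the strong SP-gap at $(w_0,\theta_0)$ equals $R(\|w_0\|+\|\theta_0\|)$, which diverges as $R\to\infty$ at any non-zero point. A small sketch (the function name is ours, not from the paper):

```python
# Numeric check of the boundedness remark: for f(w, theta) = <w, theta>
# on l2 balls of radius R, the strong SP-gap at (w0, t0) is
# R * (||w0|| + ||t0||), which grows without bound in R unless (w0, t0) = 0.
import numpy as np

def strong_gap_bilinear(w0, t0, radius):
    """Strong SP-gap of <w, theta> over l2 balls of the given radius.

    sup_{||theta|| <= R} <w0, theta> = R * ||w0|| (attained at theta || w0);
    inf_{||w|| <= R} <w, t0> = -R * ||t0|| (attained at w || -t0).
    """
    w0, t0 = np.asarray(w0, float), np.asarray(t0, float)
    return radius * (np.linalg.norm(w0) + np.linalg.norm(t0))
```

The gap is zero only at the origin, and for any fixed non-zero point it scales linearly in the radius, which is why an unconstrained domain makes the strong gap infinite.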
Summary: The authors study differentially private algorithms for stochastic saddle point (SSP) problems and stochastic variational inequalities (SVI). The proposed method relies on a recursive regularization approach and obtains near-optimal rates for settings where the parameters of interest are constrained to be in a bounded $\ell_p$ ball with $p \in [1,2]$. Strengths: The methods proposed by the authors recover many existing optimal results using different proof techniques. They extend the scope of the existing results with a unified analysis. Weaknesses: The paper does not seem to be self-contained. Some important components of the algorithms are not explicitly described, which makes it difficult to verify some of the claims in this work. The authors should include more details in the supplementary materials. Technical Quality: 3 Clarity: 3 Questions for Authors: I have two main comments: 1. The subroutine $\mathcal{A}_{emp}$ is never explicitly introduced. Here are some sources of confusion for the reader: - Is this subroutine the same one in Algorithms 1 and 3? - In line 6 of Algorithm 1 the subroutine takes 4 inputs, none of which seems to be related to the privacy parameters. However, in lines 258-258 the authors say that the privacy of Algorithm 1 follows from the privacy of $\mathcal{A}_{emp}$. - Does the construction of $\mathcal{A}_{emp}$ in lines 754-757 consist of taking the output of the SVRG algorithm of Palaniappan and Bach (2016) and adding Gaussian noise to it? I think the precise SVRG algorithm the authors have in mind should also be presented. Some non-trivial adaptations seem to be required. 2. Algorithm 1 requires $T=O(\log n)$ since $\lambda\geq \frac{K\kappa}{B\sqrt{n}}$ and $T=\log_2(\frac{L}{B\lambda})$. However, Lemma 3 requires a much larger number of gradient evaluations, roughly $\tilde{\Omega}(n^{3/2})$, to get the claimed accuracy, which in turn is also needed in Corollary 1. Essentially the same comment applies to Theorem 2 and Corollary 2.
Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The nature of this work is theoretical but it is natural to wonder if the methods are easy to implement. Have the authors tried to run any numerical experiments? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. We will add the following text to the ``Algorithm Overview'' paragraph (line 179), as well as additional comments: *The saddle point problem defined in each round of Algorithm 1 is solved using some empirical subroutine, $A_{emp}$. This subroutine takes as input a subset of the dataset, $S_t$, the regularized loss function for that round, $f^{(t)}$, a starting point, $[\bar w_{t-1}, \bar \theta_{t-1}]$, and an upper bound on the expected distance to the empirical saddle point of the problem defined by $S_t$ and $f^{(t)}$. The exact implementation of $A_{emp}$, Algorithm 3, will be discussed in the next section. Here, we focus on the guarantees of Recursive Regularization given that $A_{emp}$ satisfies a certain accuracy condition.* Additionally, regarding your other points: - Yes, we state in the paper that we use Algorithm 3 for the subroutine when discussing both SSPs and SVIs; see lines 241-244 and 321. - The subroutine does indeed depend on the privacy parameters when one is interested in implementing Recursive Regularization in a private way. However 1) there may be value in our algorithm/analysis beyond privacy and 2) the privacy parameters do not change for each run of $\mathcal{A}_{emp}$, and so omitting them reduces cumbersome notation. - We can include the pseudocode for SVRG if the reviewer feels it is necessary, but we are unaware of the non-trivial changes being referred to, as we use it as a black box. Note that [PB16] details their algorithm for monotone operators in their appendix, as we state in our paper; see footnote 2 on page 24. 2. Algorithm 1 runs in roughly $T=O(\log n)$ rounds. Each round runs the subroutine $\mathcal{A}_{emp}$. To obtain Corollary 1, we show an implementation of this subroutine which runs in roughly $n^{3/2}$ gradient evaluations. Thus the overall running time of the algorithm is $\tilde{O}({n^{3/2}})$ gradient evaluations.
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
BoostAdapter: Improving Vision-Language Test-Time Adaptation via Regional Bootstrapping
Accept (poster)
Summary: This paper proposes a training-free test-time adaptation approach for vision-language models. It combines the idea of entropy minimization with a training-free adapter to enhance adaptation performance. The experimental results generally demonstrate the effectiveness of the proposed method. Strengths: 1. The paper effectively combines entropy minimization for storing sample information and a training-free adapter for test-time adaptation on VLMs. This approach proves to be effective in most cases. 2. The figures presented in the paper are clear and easy to follow. The performance figure, in particular, is very intuitive. 3. The method is theoretically supported, which adds credibility to the proposed approach. 4. The experiments are well-conducted and cover a broad range of considerations. Weaknesses: 1. While the proposed method is effective, it lacks significant novel ideas or insights. The contributions could be seen as incremental rather than groundbreaking. 2. **Figure 2 (b):** The double-arrow directions in Figure 2 (b) should be colored in green and orange for clarity. This part is confusing. 3. **Figure 3:** The term "Boosting Cache" in Figure 3, which appears to store both boosting samples and historical samples, is not explained or used elsewhere in the manuscript, leading to confusion. 4. Several typographical errors need to be corrected. 5. The paper misses a parameter study, such as on the threshold $\tau$. 6. In Figure 4b, the setup for how the model performs entropy minimization, updates learnable prompts, or updates the LN/FULL model is missing. This information is crucial for understanding the ablation study. 7. The "Hand-crafted Prompt" is used in this paper, but it is unclear what it entails. Additionally, it is important to investigate whether the method maintains stable performance across different prompts. 8. I am also wondering whether the method should be categorized as TTA, as it maintains low-entropy samples per class in the cache.
Even with such extra information, for a shot capacity of one or two it cannot surpass zero-shot CLIP (Figure 4c). Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What unique insights or techniques does your approach introduce that differentiate it from existing methods? 2. Why can the proposed BoostAdapter not perform as well as CLIP when the total shot capacity is less than 3? Is this a general phenomenon for all datasets? 3. Could you explain why the independent cache is good for some datasets, as shown in Table 7? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: 1. While the method is effective, the paper lacks significant novel insights or groundbreaking ideas. The combination of entropy minimization and a training-free adapter, though practical, may be viewed as an incremental improvement rather than a major innovation. 2. Refer to the weakness and question parts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1, Q9, Q12. Technical insights and differences from existing baselines.** Please refer to Q1 in the Global Response. **Q2, Q3, Q4. Figure 2 (b), Figure 3, and typos.** Thanks for your advice. We will revise the figures, rewrite the corresponding parts, and fix the typos in the revision. **Q5. Ablation study on the threshold $\tau$.** We follow the setting in TPT [1] and empirically set the threshold $\tau=0.1$. To provide a better understanding of the threshold, we provide ablation results on the Aircraft dataset in Figure 1 of the rebuttal PDF. The results are consistent with the conclusion from Figure 4(b) in the original TPT paper, showing that a threshold near $0.1 \sim 0.2$ contributes to the best results, while higher thresholds admit noisy samples that may result in misleading predictions. With the default setting that utilizes 64 augmented views, we find that $6$ high-quality boosting samples are sufficient to bring significant improvements to the cache models. **Q6. More details about entropy minimization.** When implementing entropy minimization for the training-required methods, we follow the pipeline of TPT [1] and update the learnable prompts in the input while freezing the other weights of the model. Taking entropy as a self-supervised objective, we perform gradient descent over both historical and boosting samples. Prompt tuning is more lightweight than full model tuning but still incurs a large computational cost during model optimization. It is observed from Figure 4 (b) of the manuscript that both training-required and training-free methods benefit from both historical and boosting samples, in line with the theoretical analysis provided in Propositions 2 and 3. This further highlights our contribution to bridging the gap between training-required and training-free methods. Thanks for your advice; we will modify these parts to provide a clearer description. **Q7.
Hand-crafted Prompt.** We follow the pipeline of TDA and adopt hand-crafted prompts in the training-free adapters. Taking the action recognition dataset UCF101 for instance, TDA utilizes the prompt "a photo of a person doing \{\}." to better incorporate the prior knowledge of the dataset into the model. To study the influence of these hand-crafted prompts, we further equip CLIP, TDA, and BoostAdapter with different prompts (the standard prompt "a photo of \{\}." and the hand-crafted prompt "a photo of a person doing \{\}.") and compare their performance on the UCF101 dataset. As can be seen from the results in Figure 3 of the rebuttal PDF, BoostAdapter surpasses TDA and CLIP with both the standard prompt and the hand-crafted prompt. Additionally, the hand-crafted prompt brings performance improvements to all the methods to some extent. **Q8, Q10. Comparison with zero-shot CLIP when the shot capacity is less than 3.** Our method can be categorized as TTA since it adaptively makes predictions for streaming data during test time based on feature retrieval of historical and boosting samples. The reviewer also points out that BoostAdapter cannot perform as well as CLIP when the shot capacity is less than 3. However, this is not always the case. As shown in Figure 2 of the rebuttal PDF, BoostAdapter outperforms CLIP across all 4 tasks on the OOD benchmarks. On the Aircraft dataset, the low-entropy samples stored in the boosting cache with a small shot capacity may be biased and not diverse enough, thereby leading to the performance drop. **Q11. Independent cache v.s. joint cache.** The independent cache will retain all the knowledge from both historical and boosting samples, whereas in the joint cache, we update the cache of historical samples with boosting samples. Due to the limited cache size, the historical samples in the joint cache will be replaced by lower-entropy boosting samples when necessary.
In most cases, BoostAdapter performs better with the joint cache than with the independent cache. However, in some cases, when the test sample benefits from a sufficient amount of diverse cross-sample interactions, the replacement of boosting samples in the joint cache may lead to a slight performance drop. Generally, it is preferable to utilize the joint cache rather than the independent cache due to its lower storage cost and better performance on average. **Reference** [1] Shu, Manli, et al. "Test-time prompt tuning for zero-shot generalization in vision-language models." Advances in Neural Information Processing Systems 35 (2022): 14274-14289. --- Rebuttal Comment 1.1: Comment: Thank you for your reply. I appreciate your additional experiments and explanation. Actually, after checking out your rebuttal, I still feel confused about Q1 and the answer in your global reply. I've checked TDA, and the idea of this paper is quite aligned with it. While discussing entropy minimization in TTA (and as you claimed ``Our main contribution lies in the theoretical and experimental connection between training-required and training-free methods.''), for me, it seems like combining effective training-free and training-required techniques. In this case, I would explain this paper in the following way: 1. Using a Tip-Adapter-like (also TDA-like) training-free adapter. 2. Using a memory bank to save reliable (historical) and diverse (augmented) test samples, using the CLIP prediction as evidence. 3. Applying this stored information to ``correct'' the CLIP logits for the final prediction. Please correct me if I have any misunderstanding. I am also confused about the relationship between historical and boosting samples. Will the boosting samples be constructed by augmenting filtered historical test samples? If this is the case, I have seen many similar ideas in online test-time adaptation, on pages 12-13 of [R1]. 
Could you explain further the insights of your strategy and why it is better than the others? How do you differentiate your contribution among them? It would be better to make the whole sample saving or caching process a bit clearer. In the current version, it is hidden among the equations. Another question is, could you specify why you use EATA as the TTA baseline? EATA has a carefully designed FIM module and uses source sample information. It is also better suited for continual TTA. Why not use other TTA methods? Thank you. Reference: [R1]. Wang, Z., Luo, Y., Zheng, L., Chen, Z., Wang, S., & Huang, Z. (2023). In search of lost online test-time adaptation: A survey. arXiv preprint arXiv:2310.20199. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the insightful comments, and we are happy to discuss some implementation details. **Q1. The implementation steps of BoostAdapter.** Your description of the implementation steps is correct. We would like to add some noteworthy points. - The boosting cache (memory bank) in step 2 is instance-adaptive. We create a copy of the historical cache (referred to as the boosting cache) and construct boosting samples for the current test image to update this cache for feature retrieval. - These boosting samples will be discarded and will not be used by other images after step 3. This is rational because the boosting samples are close to the current test image rather than to others, ensuring the bound on empirical risks in Proposition 3. **Q2. The relationship between historical and boosting samples.** For the current test image, the boosting samples are derived only from itself rather than from historical samples. We checked the survey and found that all the relevant methods perform techniques like augmentation and clustering over **only historical samples** in the memory bank, without considering any information of the current test image. 
We would like to point out that these methods may show poor generalization performance, especially in downstream tasks that require fine-grained knowledge or when historical samples share insufficient similarity. So we construct the instance-aware boosting cache to perform information mining over the current test sample and incorporate this knowledge with historical samples during feature retrieval. The survey provides a clear description of memory-bank-based methods, so we will mention it and rewrite the corresponding section for a detailed discussion of these methods in the revision. **Q3. Using EATA as an additional training-required method.** The online test-time adaptation setting discussed in BoostAdapter can be seen as a special case of continual TTA since we deal with samples from the test data stream. Therefore, we previously used EATA [1] simply due to its importance in test-time adaptation and its applicability to our tasks. In practice, we follow the idea of the diversity-based selective strategy proposed by EATA to construct boosting samples in the cache. We further incorporate techniques from more training-required methods, including the Pseudo-Label Probability Difference (PLPD) metric from DEYO [2] and the consistency filter from TSD [3]. Specifically, in the BoostAdapter+DEYO variant, we filter out augmented views with a PLPD lower than 0.2. For the BoostAdapter+TSD variant, we discard augmented views whose cache predictions and CLIP predictions differ, to ensure the consistency of the boosting samples. The results are provided in Table A, and we can observe performance improvements with the help of different training-required methods, demonstrating the versatility of BoostAdapter. 
**TableA: Unification of more training-required methods.**

| | -V | -S | -A | -R | Average |
| --------------------- | ----- | ----- | ----- | ----- | ------- |
| CLIP-ViT-B/16 | 60.86 | 46.09 | 47.87 | 73.98 | 57.20 |
| TDA | 64.67 | 50.54 | 60.11 | 80.24 | 63.89 |
| BoostAdapter | 65.03 | 50.66 | 64.27 | 80.64 | 65.15 |
| BoostAdapter+EATA [1] | 65.27 | 50.82 | 64.83 | 81.15 | 65.52 |
| BoostAdapter+DEYO [2] | 65.51 | 51.01 | 64.57 | 81.11 | 65.55 |
| BoostAdapter+TSD [3] | 65.49 | 51.50 | 64.37 | 81.15 | 65.63 |

**Reference**

[1] Niu, Shuaicheng, et al. "Efficient test-time model adaptation without forgetting." International Conference on Machine Learning. PMLR, 2022.
[2] Lee, Jonghyun, et al. "Entropy is not enough for test-time adaptation: From the perspective of disentangled factors." International Conference on Learning Representations (ICLR), 2024.
[3] Wang, Shuai, et al. "Feature alignment and uniformity for test time adaptation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023.
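To make the view-selection step discussed in this rebuttal (Q5, threshold $\tau=0.1$) concrete, here is a minimal sketch of entropy-based filtering of augmented views. It assumes $\tau$ denotes the fraction of lowest-entropy views kept, which matches the "64 views, 6 boosting samples" numbers above; function and variable names are illustrative and not from the authors' code:

```python
import numpy as np

def softmax_entropy(logits):
    """Per-view prediction entropy computed from raw class logits."""
    logits = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=-1, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def select_boosting_views(view_logits, tau=0.1):
    """Keep the tau fraction of augmented views with the lowest entropy.

    view_logits: array of shape (num_views, num_classes).
    Returns the indices of the selected views and their entropies.
    """
    ent = softmax_entropy(view_logits)
    k = max(1, int(tau * len(ent)))       # 64 views, tau=0.1 -> 6 views kept
    keep = np.argsort(ent)[:k]
    return keep, ent[keep]
```

With this reading of $\tau$, only the most confident crops of the current test image survive to become boosting samples, which is consistent with the ablation discussed above where larger thresholds admit noisy views.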
Summary: This paper studies the problem of test-time vision-language model adaptation. The authors devise a training-free method by maintaining a key-value memory for feature retrieval from both historical and boosting samples. The boosting samples are drawn from regional bootstrapping and capture the knowledge of the test sample itself. Experiments demonstrate the effectiveness of the proposed method. Strengths: The studied training-free adaptation setting is practical, broadening the application scope of test-time adaptation in real-world scenarios. The authors propose incorporating augmentations of each sample into the cache to enable the cache-based classifier to perform better on more fine-grained classifications. This approach is both interesting and technically sound. The authors also provide theoretical analyses to establish connections between training-required and training-free TTA methods. Weaknesses: It would be beneficial for the authors to discuss the detailed technical differences from TDA more thoroughly. From my perspective, the key difference appears to be the introduction of additional augmented views of the same sample into the Boosting Cache for intra-sample interactions. If this is the case, the technical contribution of this work might be a bit limited. The proposed method relies on multiple augmentations, requiring multiple forward passes to achieve better performance than TDA, which sacrifices efficiency. For Figure 4(a), could the authors provide results on more datasets (both OOD domains and cross domains) to demonstrate the sensitivity of the proposed method to this hyper-parameter? This would help verify whether the proposed method can achieve good performance with fewer augmentations across multiple datasets. The improvement on the Cross-Domain Benchmarks with the RN-50 backbone is a bit marginal. 
Technical Quality: 3 Clarity: 3 Questions for Authors: Could the authors provide some computational complexity analyses and comparisons for the proposed method, including wall-clock time and GPU memory consumption? It would also be much better to conduct the analysis in Figure 4(c) on more datasets. Is the proposed method applicable to, or have the authors tested it on, pure CNN/ViT models? If not, I recommend including ‘vision-language model adaptation’ in the title. The authors claim that *“prior methods like TDA only consider inter-sample interactions and may fail to generalize well when the downstream tasks require fine-grained knowledge or there is insufficient similarity across samples.”* Although I acknowledge that the proposed method is technically sound and can perform better on more fine-grained classification, is there any empirical evidence to further justify this? In the Boosting Cache, do the authors store the original samples or their corresponding features? Storing images may pose privacy and additional computation issues. It would be helpful to indicate this in Figure 3. [minor] How about the performance of the proposed method on corruption datasets such as ImageNet-C? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: the potential limitation is the computational efficiency of the proposed method, which can be alleviated if the method works well on multiple datasets with a small number of augmentations Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1. Technical insights and differences with existing baselines.** Please refer to Q1 in Global Response. **Q2, Q5. Computation overhead and efficiency.** Please refer to Q2 in Global Response. **Q3, Q11. Number of augmented views.** The analysis of the augmented views can be found in Table 12 of the Technical Appendix. It can be observed that BoostAdapter shows superior performance on average compared to TDA with 32 views on the OOD benchmark and only 16 views on the Cross-Domain Benchmark. We would like to point out that TDA also utilizes augmentation to obtain high-quality embeddings of test samples to store in the cache, as depicted in Table 1 of the rebuttal PDF. Furthermore, most of the existing TTA methods use 64 views of augmentation by default. Therefore, using 64 views is not considered particularly large and is acceptable from a computational cost perspective. **Q4. Cross-Domain Benchmarks results with RN-50 backbone.** The marginal improvement is mainly due to the limited performance of the RN-50 backbone. To verify the robustness of BoostAdapter, we further provide results with more checkpoints in Tables 2 and 3 of the rebuttal PDF. BoostAdapter shows promising improvements and consistently outperforms TDA in 7, 8, and 7 out of 10 tasks on the Cross-Domain Benchmarks with RN101, ViT-B/32, and ViT-L/14 backbones, respectively. **Q6. Shot capacity.** We provide ablation results of the shot capacity over all four datasets on the OOD benchmarks in Figure 4 of the rebuttal PDF. The results are consistent with the conclusion in the manuscript that the boosting cache will achieve a balance of diversity and relevance as the shot capacity increases. **Q7. Applicability on Vision-Only Backbones.** Yes. We believe it is possible to apply BoostAdapter to vision-only TTA, but transferring might be non-trivial since vision-language TTA and vision-only TTA are generally two distinct research sub-fields with different settings. 
Our paper focuses on the adaptation of vision-language models, while addressing vision-only models could be considered for future work. We appreciate your suggestion on refining the title of this paper and will incorporate it in the revision. **Q8. Fine-grained knowledge.** We provide qualitative results in Figure 5 of the appendix and Figure 4 of the rebuttal PDF to investigate how BoostAdapter leverages boosting samples to extract fine-grained knowledge. As depicted in the figures, boosting samples with low entropy incorporate the prior of the label system to filter out the noisy parts of the test images and guide the model on where to focus. Most importantly, we only need to perform random cropping and random horizontal flipping to achieve this, making BoostAdapter more applicable in real-world scenarios. **Q9. Storage in Boosting Cache.** In practice, we store the features of both the historical and boosting samples to construct the key-value cache, which is privacy-preserving and time-efficient. Thanks for your advice, and we will modify the figure for better clarity in the revision. **Q10. ImageNet-C.** Please refer to Q3 in Global Response. --- Rebuttal 2: Comment: Dear reviewer, We would like to thank you for your insightful feedback. We hope that your concerns are addressed with our rebuttal. As we are getting really close to the deadline of the discussion phase, please let us know if there are any further questions that need clarification. Many thanks, Authors --- Rebuttal 3: Title: Follow up from reviewer Comment: Thanks for the authors’ response. Regarding the computational efficiency, could the authors provide more details of the experimental setup, including but not limited to the GPU and batch size? For FPS, I am confused about why BoostAdapter (64 forward passes of the image encoder) achieves 11.23 fps while CLIP, which needs only 1 forward propagation, achieves only 82.3 fps. Meanwhile, the memory consumption of BoostAdapter and CLIP is the same. 
Do you test Inference Speed (fps) using batch size 64 (64 views also equals batch size 64), and test Memory using a different batch size? Moreover, the performance could also be directly reported in Table 1 of the PDF. --- Rebuttal Comment 3.1: Comment: **Q1. More details.** We follow the setting in TDA and deal with the test samples from the data stream one by one. Therefore, we cannot increase the batch size to handle multiple different test samples simultaneously, but only for different views of the same test sample. Thus, we augment 64 views of the test image as TDA does and use 64 as the batch size to obtain the corresponding features. In BoostAdapter, we utilize a simpler augmentation (random crop and random horizontal flip) than the AugMix in TDA to save time. Additionally, we set num_workers=8 in the dataloader to leverage multiprocessing for acceleration. Furthermore, we perform feature retrieval over the stored features instead of images. The additional retrieval time compared to TDA comes from the operation of updating the cache with boosting samples. All our experiments are conducted with a 64-core CPU and an Nvidia 3090 24GB GPU. **Q2. FPS and memory.** For a better view of the time consumption, we provide the average wall-clock time consumption of each component for 1000 samples in TableA. The total time can be mainly divided into three parts: data augmentation, model forwarding, and feature retrieval. - Regarding augmentation time, note that we set num_workers=8 for the dataloader, so the augmentation takes up a small percentage of the total time. TDA and BoostAdapter take a similar amount of time since they both utilize 64 views of augmentations. - The model forwarding of BoostAdapter takes approximately 8 times longer than CLIP's. This is reasonable since we use parallel forward propagation for the 64 views instead of sequential forward propagation, so the difference in consumption overhead will not be as large as 64 times. 
The only difference is the batch size of the model input (1 for CLIP and 64 for BoostAdapter).
- We build a new cache from the historical cache and update it with boosting samples, so the feature retrieval of BoostAdapter takes slightly longer than TDA's. Overall, the time consumed by feature retrieval is much smaller than that of model forwarding.

**TableA. Computation cost of each component.**

| | Data Augmentation | Model Forwarding | Feature Retrieval | Total |
| ------------ | ---------------------- | ----------------------- | ---------------------- | ---------------------- |
| CLIP | - | 0.01208 seconds (100%) | - | 0.01208 seconds (100%) |
| TDA | 0.00116 seconds (1.4%) | 0.08065 seconds (96.6%) | 0.00167 seconds (2.0%) | 0.08348 seconds (100%) |
| BoostAdapter | 0.00098 seconds (1.1%) | 0.07989 seconds (91.5%) | 0.00644 seconds (7.4%) | 0.08731 seconds (100%) |

The memory consumption of BoostAdapter is similar to TDA's because both utilize 64 views of augmentation, and the model takes the 64 views of the image (batch size 64) as input. This model forwarding part accounts for most of the memory usage. During feature retrieval, we store the features in the cache without significant memory overhead, even with the integration of the boosting samples' features. In the online test-time adaptation setting, we deal with test samples from the data stream one by one, so we cannot increase the batch size by modeling different test samples in parallel. We only perform parallel forward propagation for different views of the same test sample. Therefore, changing the batch size here is not appropriate, as it corresponds to the number of augmented views for the current test sample. **Q3. Efficiency analysis table.** Thanks for your insightful advice; we have added the performance results to the efficiency analysis in TableB. **TableB. 
Efficiency analysis with performance results.**

| | Augmentation | Views | Inference Speed (fps) | Memory (GB) | OOD Benchmarks Results | Cross-Domain Benchmarks Results |
| ------------ | ------------------------------ | ----- | --------------------- | ----------- | ---------------------- | ------------------------------- |
| CLIP | - | - | 82.3 | 1.2 | 57.20 | 63.58 |
| TPT | Augmix | 64 | 0.29 | 4.5 | 60.81 | 65.10 |
| DiffTPT | Diffusion | 64 | 0.10 | 14.4 | 60.52 | 66.92 |
| TDA | Augmix | 64 | 11.89 | 1.2 | 63.89 | 67.53 |
| BoostAdapter | Rand. Crop & Rand. Horiz. Flip | 64 | 11.23 | 1.2 | 65.15 | 68.52 |
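The timing breakdown above shows that the retrieval step itself is cheap compared to model forwarding. As context for why, here is a minimal sketch of a Tip-Adapter-style key-value cache lookup over stacked historical and boosting features; the affinity-to-weight mapping follows Tip-Adapter's published form, but the function names and hyperparameters `beta`/`alpha` are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def cache_logits(feat, keys, values, beta=5.0, alpha=2.0):
    """Key-value cache retrieval: affinity between the test feature and
    cached keys, sharpened and aggregated over one-hot cached labels."""
    affinity = feat @ keys.T                    # features assumed L2-normalized
    weights = np.exp(-beta * (1.0 - affinity))  # Tip-Adapter-style sharpening
    return alpha * (weights @ values)           # per-class cache logits

def boosted_prediction(clip_logits, feat, hist_keys, hist_vals,
                       boost_keys, boost_vals):
    """Combine zero-shot CLIP logits with retrieval from a joint cache of
    historical and boosting features (hypothetical sketch)."""
    keys = np.vstack([hist_keys, boost_keys])
    vals = np.vstack([hist_vals, boost_vals])
    return clip_logits + cache_logits(feat, keys, vals)
```

Because the cache stores only feature vectors and one-hot values, the lookup is a pair of small matrix products, consistent with the small "Feature Retrieval" share reported in TableA.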
Summary: The paper focuses on gradient-free test-time adaptation of the CLIP model with ViT-B/16 and ResNet-50 backbones on out-of-distribution datasets. The authors take inspiration from the augmentations used in gradient-based test-time methods and incorporate this concept into gradient-free, memory (cache) based test-time methods. Previous work considers only historical samples in the cache, whereas this work also includes augmentations of test samples with lower entropy in the cache. Results on the OOD benchmark and cross-domain benchmark show improved average performance compared to prior works. Strengths: 1. Proposed a simple but effective approach to include low-entropy augmentations in the memory. 2. Established theoretical bounds to justify the inclusion of augmentations in the memory and their relation to the minimization of empirical risk. 3. The proposed approach brings significant improvements on the ImageNet-A, Aircraft, and EuroSAT datasets. Weaknesses: I don’t have major concerns about the proposed approach, as the method is simple and straightforward. My concern lies in the increased computation overhead and extended run time due to running the CLIP model on multiple augmentations during test time (as acknowledged by the authors). In addition, it can be noticed that results are comparable on almost all datasets except the ImageNet-A, Aircraft, and EuroSAT datasets. It would be interesting to provide a rationale for why the proposed approach works much better on these datasets. Results are shown on a single ViT architecture; providing results on multiple CLIP-based ViT backbones would be interesting, as the method targets test-time performance. It would also be helpful to evaluate the method on more OOD distributions like corruptions (ImageNet-C) to understand it better. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to Weaknesses above. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Limitations are discussed in the work. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1. Computation overhead and efficiency.** Please refer to Q2 in Global Response. **Q2. Specific Datasets.** As shown in Figure 5 in the appendix, BoostAdapter benefits from boosting samples to capture fine-grained knowledge of the test samples. We further provide more qualitative results on the ImageNet-A, Aircraft, and EuroSAT datasets in Figure 4 of the rebuttal PDF to investigate how BoostAdapter performs on these datasets. ImageNet-A consists of real-world examples that are misclassified by ResNet models, while Aircraft is a benchmark dataset for the fine-grained visual categorization of aircraft, and EuroSAT consists of satellite images for land use and land cover classification. These datasets require fine-grained information for classification, and the boosting samples filter out noisy parts of the test samples while retaining useful information, contributing to significant performance enhancements. **Q3. More backbones.** In order to further validate the robustness of our method, we compare BoostAdapter with baseline models across various backbones including both RN and ViT checkpoints. The results in Table 2 and Table 3 of the rebuttal PDF indicate that BoostAdapter shows strong compatibility and consistently outperforms TDA in most of the cases over different backbones. For instance, BoostAdapter shows superior performance to TDA over 7, 8, and 8 out of 10 tasks with RN-101, ViT-B/32, and ViT-L/14 checkpoints on the Cross-Domain Benchmark, respectively. **Q4. ImageNet-C.** Please refer to Q3 in Global Response. --- Rebuttal 2: Comment: Dear reviewer, We would like to thank you for your insightful feedback. We hope that your concerns are addressed with our rebuttal. As we are getting really close to the deadline of the discussion phase, please let us know if there are any further questions that need clarification. 
Many thanks, Authors --- Rebuttal 3: Comment: Dear authors, I thank you for the responses and the additional experiments provided in the rebuttal. I have read my fellow reviewers' comments and the authors' responses. I find that the method is consistently slightly better than TDA across all benchmarks, and particularly helpful for ImageNet-A. However, my major concern about limited novelty still exists, and hence I tend to keep a borderline rating. I will discuss with my fellow reviewers for a final decision.
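The joint-cache replacement policy discussed in these rebuttals (Q11 of the first thread: historical entries are replaced by lower-entropy boosting samples when the per-class shot capacity is full) could be sketched roughly as follows. This is a hypothetical reconstruction for illustration, not the authors' code:

```python
def update_cache(cache, entry, shot_capacity):
    """Entropy-priority cache update: keep at most `shot_capacity` entries
    per predicted class, evicting the highest-entropy entry when full.

    cache: dict mapping class label -> list of (feature, entropy) pairs.
    entry: (feature, entropy, predicted_class) for the incoming sample.
    """
    feat, entropy, cls = entry
    bucket = cache.setdefault(cls, [])
    if len(bucket) < shot_capacity:
        bucket.append((feat, entropy))
    else:
        worst = max(range(len(bucket)), key=lambda i: bucket[i][1])
        if entropy < bucket[worst][1]:
            bucket[worst] = (feat, entropy)  # lower entropy wins the slot
    return cache
```

Under this reading, an instance-adaptive boosting cache would be a copy of this structure, updated with the current test image's filtered views and discarded after prediction, matching the description in the earlier reply.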
Rebuttal 1: Rebuttal: We thank all reviewers for their valuable feedback and are encouraged by the positive comments on our contributions, including: 1. Soundness and Novelty: - Training-free adaptation broadens real-world applicability (Reviewer a236). - Innovative use of sample augmentations in the cache for fine-grained classifications (Reviewer a236 & YJpm). 2. Theoretical Contribution: - Establishes connections and bounds between training-required and training-free TTA methods (Reviewer a236 & YJpm). - Adds credibility and justification to the approach (Reviewer HZ6e). 3. Solid Experiments: - Significant improvements on the ImageNet-A, Aircraft, and EuroSAT datasets (Reviewer YJpm). - Well-conducted experiments cover a broad range of considerations (Reviewer HZ6e). 4. Presentation: - Clear and intuitive figures, especially the performance figure (Reviewer HZ6e). In the following parts, we first respond to the common questions raised by the reviewers and then address the remaining concerns of each reviewer point by point. We believe the comments & revisions have made the paper stronger, and we thank all the reviewers for their help. ***Please let us know if these address your concerns and if there are any further questions that need clarification.*** **Q1. Technical insights and difference with TDA.** The mainstream of training-required TTA methods is entropy minimization, while TDA serves as a training-free baseline that maintains a key-value cache of historical samples. We argue that our work is far more than merely introducing augmentation views into training-free adapters. **Our main contribution lies in the theoretical and experimental connection between training-required and training-free methods.** Specifically, we provide a bound on the empirical risk as a theoretical guarantee in unifying the two streams. 
We also show that both methods can benefit from each other, enhancing performance through either entropy minimization or feature retrieval of both historical and boosting samples, as illustrated in Figure 4(b) of the manuscript. In practice, we focus on boosting training-free methods rather than training-required ones due to their lower computational cost and better generalization performance. **Additional evidence on this point:** From the unified perspective, we can also enhance training-free adapters with additional training-required methods. Here we take EATA [2], which introduces momentum statistics to combat forgetting in test-time adaptation, as the showcase. When equipping BoostAdapter with the technique of EATA, we observe further improvement and find that training-free adapters can benefit from various boosting techniques of training-required methods. These results can be found in Table 2 of our rebuttal PDF. **Q2. Computation overhead and efficiency.** We agree that efficiency is an important metric, and we have already provided the computational efficiency analysis in Table 16 of the Technical Appendix. More information is available in Table 1 of our rebuttal PDF. We would like to point out that TDA follows TPT [1] and utilizes 64 views with AugMix to obtain high-quality embeddings of test samples, which leads to a computation cost similar to that of the boosting augmentation in BoostAdapter. As can be seen from the results, the inference time of BoostAdapter is slightly longer than TDA's but remains significantly faster than training-required methods such as TPT and DiffTPT. Therefore, considering the performance enhancement it provides, the additional inference cost and comparable memory consumption of BoostAdapter are acceptable. **Q3. More results on ImageNet-C.** To further evaluate the generalization ability of BoostAdapter in new test-time scenarios, we compare BoostAdapter with baseline methods on the ImageNet-C dataset at the highest severity level 5. 
These results are presented in Table 5 of the rebuttal PDF. The key observation from these results is that BoostAdapter consistently outperforms TDA across all 15 corruption types, highlighting its practical applicability in real-world situations. BoostAdapter's superior performance stems from its capability to capture the knowledge of the test sample even under severe corruption. This is achieved with the help of the boosting samples, which effectively filter out noisy parts while retaining useful information in the images. **Reference** [1] Shu, Manli, et al. "Test-time prompt tuning for zero-shot generalization in vision-language models." Advances in Neural Information Processing Systems 35 (2022): 14274-14289. [2] Niu, Shuaicheng, et al. "Efficient test-time model adaptation without forgetting." International conference on machine learning. PMLR, 2022. Pdf: /pdf/4b36b6d20f03fefa83e9b8af644aa536e52caa86.pdf
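The entropy-minimization objective that the training-required branch above optimizes (TPT [1] minimizes the entropy of the prediction averaged over confident augmented views, with gradients flowing only into the prompt) can be written out as a small sketch. The numpy form below only computes the objective and is illustrative; names are not from any specific codebase:

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with a max-shift for numerical stability."""
    z = logits - logits.max(axis=-1, keepdims=True)
    p = np.exp(z)
    return p / p.sum(axis=-1, keepdims=True)

def marginal_entropy(view_logits):
    """Entropy of the class distribution averaged over augmented views.

    Training-required TTA methods in the TPT family minimize this quantity
    with respect to the learnable prompt parameters, keeping the backbone
    frozen.  view_logits: array of shape (num_views, num_classes).
    """
    avg = softmax(view_logits).mean(axis=0)
    return float(-(avg * np.log(avg + 1e-12)).sum())
```

A confident, augmentation-consistent prediction drives this objective toward zero, which is the same low-entropy signal the training-free branch uses to decide which views enter the cache.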
NeurIPS_2024_submissions_huggingface
2024
ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search
Accept (poster)
Summary: This paper proposed ReST-MCTS* to assist large language models in answering reasoning questions. A variant of MCTS, which utilizes the evaluation of the current state as the value function, is employed to automatically annotate the process reward of each intermediate node via a sufficient number of rollouts. A self-refine process is employed to finetune the LLMs. Tested on SciBench, ReST-MCTS* has achieved better performance than existing self-training approaches and reasoning policies. Strengths: 1. This paper builds a value and policy network to assist the reasoning process of the LLM, borrowing from the MuZero framework, and has achieved significant improvements. 2. Theoretical analysis is given to aid the understanding of the ReST-MCTS* algorithm. 3. The experiments are conducted comprehensively and convincingly. 4. The paper is written concisely and clearly. Weaknesses: 1. More discussion about the value function is needed. In reinforcement learning, the value is the expected total reward, rather than the evaluation of the current state. These two definitions are completely different, and I would like to see more explanations. For example: a) Can UCT still converge to the optimal solution when the number of simulations approaches infinity? b) In most cases, $v_k=v_{k-1}+w_{s_k}$ holds for two neighboring nodes in the search tree. Assume $S^k=\\{s_1^k,s_2^k,\cdots, s_N^k\\}$ denotes all children nodes of $s_{k-1}$. All nodes in $S^k$ share the same $v_{k-1}$ value in Equation (1). $v_{k-1}$ and $m_k$ are also the same in Equation (2); only $r_{s_k}$ is different. Considering that MCTS selects the traversed node from the set of nodes with the same parent node while doing simulation, why not predict $r_{s_k}$ directly? c) In the value backpropagation, $v_C$ of $s_t$ seems to be the weighted sum of the estimated values of all expanded nodes in the subtree rooted at $s_t$. What is the meaning of the average of state evaluation values? 
A larger evaluation value does not necessarily indicate a better expansion direction, especially when the distance to the goal is far. d) Using the evaluation score as the value function makes the algorithm more like a multi-step greedy algorithm, instead of a heuristic search algorithm like MCTS. What is your opinion on this matter? 2. Can you provide evidence of the quality of value function training based on prediction errors, in addition to the search results? Sometimes, even if the search results are good, it does not necessarily mean that the value function predictions are accurate. For example, if the estimated values are always set to $0$, $A^*$ search can still perform well in some situations. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Do different algorithms differ in their actual running time, considering that MCTS requires a significant amount of Monte Carlo simulations to estimate action values, often resulting in higher computational requirements? 2. The definition of $r_{s_k}$ needs to be provided in advance. It is mentioned in Line 135, but is defined in Line 181. If I do not know that $r_{s_k}=1-r_{s_k}^{HE}$, Observation 2 will be confusing to me. 3. Why is the assumption made that $w_{s_k}\in [0,1]$ when proving Theorem 1 in Appendix C.1? It is obvious that $w_{s_k}$ can be negative. 4. In lines 648-650, $v_{k-1}\in [0,1]$ and $v_k\in [v_{k-1}, 1+v_{k-1}]$; why does this put $v_k$ in the range $[0,1]$? It seems $v_k$ is in $[0,2]$. Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Please refer to the Weaknesses and Questions parts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for acknowledging our contribution to LLM reasoning and for raising valuable concerns and questions about various aspects of our work. We appreciate the time and effort you have dedicated to thoroughly assessing our work. To address your concerns and questions, we now provide a detailed response to each of them.

```
Q2 & Q3 & Q4: Questions about specific definitions, assumptions, and conclusions.
```

We want to clarify these questions first to help better understand this work. For Q2, we will surely rearrange the sequence of definitions as suggested. For Q3 and Q4, which focus on the deduction details in Appendix B.1, we sincerely apologize for making a mistake. In fact, reaching the conclusion of $v_k \in [0,1]$ does not require the assumption of $w_{s_k} \in [0,1]$. As an important factor measuring the correctness and contribution of a reasoning step, $w_{s_k}$ is designed to be signed and bounded within the range $[-1,1]$, rather than $[0,1]$. We have demonstrated the details in the **Official Comment**.

```
W1: Concerns on the definition of quality value.
```

We appreciate your concerns about the design of the weighted reward and quality value. They remind us that there are a few things we may have neglected that require more explanation. Please refer to the **Global Author Rebuttal** for a general explanation of our approach design. a) **Your concerns regarding the convergence of UCT for our case are profound and thoughtful.** As mentioned in the **Global Author Rebuttal**, our methodology is offline RL and does not face issues with UCT convergence. However, we agree that the convergence of UCT in our setting for online RL is unclear. Considering that our definitions of reward and value are novel, it certainly requires considerable effort to design an appropriate online RL paradigm that aligns with these definitions. 
b) In fact, under our definition, the reasoning distance $m_k$ is the minimum number of reasoning steps required to reach the final answer starting from a node with partial solution $p_k=[s_1, s_2, \cdots, s_k]$. This means that when we generate the children of $s_{k-1}$, the reasoning distances of these nodes, denoted $m_k^i\,(i=1, 2, \cdots, N)$, already differ. Different children make different contributions to the final answer, leading to varied search directions that require different numbers of reasoning steps. Thus, **although $v_{k-1}$ is the same, both $r_{s_k}$ and $m_k$ are different**. c) The value backpropagation process is not compulsory. However, we regard this process as a heuristic that provides insight into the selection of the search direction. Updating the value estimate using the average method generally promotes exploitation in more promising directions. Please see the details in our **Official Comment**. d) We believe the preceding discussion has addressed this question. By adopting UCT and value backpropagation, our algorithm treats exploration as equally important as exploitation. In terms of implementation, the exploration constant also allows tuning the extent of exploration. Although we use a different value definition, the core idea of MCTS has not changed. ``` W2: Concerns on the quality of the value model. ``` We appreciate your concern that our value model may not be truly effective. To address it, we present two pieces of evidence aside from the search results. First, our value model achieves an accuracy of approximately 70% with an absolute tolerance of 0.1 on a test set of 14k data samples, as demonstrated in Appendix D.1. This means the value model already acquires considerable knowledge even before self-training, justifying its quality. Moreover, we also ran an experiment comparing the performance of the critics alone. We use the same policy and the same CoT sampling strategy to generate solutions for GSM8K and MATH500.
These solutions are then evaluated and filtered using different methods. Results shown in Table 3 indicate that the filtering method based on our value model + SC achieves the best accuracy on both datasets, outperforming baselines such as SC, ORM, and the PRM of Math-Shepherd. This also justifies the quality of our value model, since all other confounding factors are eliminated. ``` Q1: Questions on running time and computational requirements of different algorithms. ``` Concerning computational costs, we have already compared the number of tokens consumed by each algorithm to achieve a certain accuracy on MATH and SciBench, as shown in Figure 2. The results reveal that to reach a given expected accuracy, MCTS* generally requires fewer tokens than other algorithms such as SC and ORM+BoN. This means that under the same expected standard, MCTS$^*$ outperforms the other algorithms while maintaining a reasonable computational cost. As for actual running time, we have recorded the average running time of the different algorithms (under our basic experimental settings) on a single question, as shown below.

| Method | CoT+SC | ORM+BoN | PRM+BoN | MCTS* |
| --- | --- | --- | --- | --- |
| Running time (s) | 41 | 43 | 73 | 108 |
| Accuracy on MATH (%) | 37.0 | 33.5 | 34.5 | 41.5 |

Indeed, MCTS$^*$ spends more time on exploration and simulation than simpler algorithms. However, since our method adopts a different design of value, it does not require massive Monte Carlo estimation. This reduces the running time of our algorithm and keeps the time consumption within a reasonable range. Given that MCTS* achieves accuracy that the other algorithms cannot attain even at unlimited cost, we believe this extra time is fairly acceptable. Lastly, we appreciate your in-depth evaluation of our work and thank you for your valuable questions and suggestions, which have greatly contributed to improving our work.
If you believe that our responses have satisfactorily addressed your concerns about these issues, we kindly request that you consider adjusting the final evaluation to reflect this. --- Rebuttal Comment 1.1: Title: Looking forward to your feedback Comment: Dear reviewer S8hx, thank you very much for your valuable feedback. We hope that our responses and clarifications have addressed your questions and concerns. If you believe that our responses have satisfactorily addressed your concerns, we kindly request that you consider adjusting the final rating to reflect this. If there are any remaining concerns or points requiring additional clarification, please let us know. We are looking forward to your reply. Thank you for your time and efforts on this paper. Best regards, Authors of ReST-MCTS* --- Rebuttal 2: Title: More details related to the value backpropagation process and deduction. Comment: Thank you for your valuable questions and thorough evaluation of our work! Here we present some details related to your concerns and questions, as mentioned in our rebuttal. ``` W1: Concerns on the question about the value backpropagation process. ``` We regard the value backpropagation process as a heuristic that provides insight into the selection of the search direction. Since the quality value reflects the valid progress made toward a correct final answer, a higher value generally indicates that the corresponding direction is closer to the answer, more likely to be the right direction, and more promising for the policy to make progress in. Therefore, updating the value estimate using the average method generally promotes exploitation in more promising directions. On the other hand, this does not mean our algorithm is prone to ignore directions that currently receive low value estimates. Since we utilize UCT-based selection criteria, nodes that are overly exploited will not be selected in future rollouts.
Besides, the average value method also helps to adjust inaccurate value estimates. If a node's value is underestimated, its children will still be evaluated in future rollouts. If the children reveal some promising directions, the value update will correct the underestimation, and vice versa. ``` Q2 & Q3 & Q4: Concerns on the problems with the deduction. ``` We want to present the correct deduction here. For the quality value, we can derive the expected boundedness from Inequality 17 in our paper, because $w_{s_k} \leq |1-v_{k-1}|$, $v_k=\max(v_{k-1}+w_{s_k},0)$, and $v_0=0$. Inductively, we can derive that $v_k \in [0,1]$ as long as $v_{k-1} \in [0,1]$, eventually reaching the conclusion that $v_k \in [0,1]$ for every $k$. We will correct these mistakes in the revised manuscript. Thank you for such a meticulous inspection of our paper! --- Rebuttal 3: Title: Thank you for your reply! We have addressed the concerns on the 'evaluation' as a reward. Comment: Dear Reviewer S8hx, thank you for your comments! Regarding the 'evaluation' as a reward, we would like to further explain and justify our approach from the following five aspects. (1) We would like to highlight that **we predominantly focus on complex reasoning scenarios. In fact, for this scenario, the evaluation of intermediate states is very important.** Unlike most traditional RL tasks, where the final reward is regarded as more important, complex reasoning processes rely more on intermediate steps, making the quality of an intermediate step non-negligible. (2) Methods based on traditional RL adopt sparse rewards, where a reward signal is received only when the reasoning is finished. However, these methods neglect an important fact: even when a reasoning trace looks good overall (concerning the final outcome), it may still suffer from intrinsic logical faults [1]. This point is mentioned in lines 32 to 37 of our submission.
Similarly, **although traditional ways of modeling the reward are relatively easy and concise, omitting intermediate rewards makes them a compromise strategy.** (3) To achieve more complex reasoning, **modeling intrinsic rewards may be inevitable, which elevates the significance of Process Reward Models (PRMs).** However, the reward for intermediate reasoning steps is hard to model. A reward should reflect the transition of states and the value of an action. Since the state transition is deterministic once an action is taken in reasoning, the main problem comes down to the second term, which is somewhat unclear for reasoning scenarios. To tackle this issue, we drew on the common process of exam grading. In this process, graders examine the contribution of each solution step and assign a total score (reward) based on the accumulated score of each step; this is analogous to our design of the reasoning process and reward. **In our approach, we not only optimize the policy to attain a higher final reward but also encourage it to explore and generate better intermediate steps through our careful design of the reward and algorithm.** A policy trained with our method learns to seek more promising search directions (high intermediate rewards) and will look for alternatives when it cannot make more progress in the current direction, just like a human test taker in a math exam. (4) Furthermore, **the design of the PRM reward/value in our work is intended to ensure an accurate estimation of the quality of actions.** In our definition, we incorporate the process reward $r_{s_k}$ (evaluating the probability that a single step is correct) and the reasoning distance $m_k$ (evaluating contribution or importance, albeit indirectly). These factors carry information that helps evaluate a step more accurately; thus, we believe involving them in the design of the reward and value improves the effectiveness of our method.
Under this setting, the weighted reward reflects correctness and contribution, while the quality value reflects the progress made in the right direction. An action (step) receives a higher reward when it correctly tackles more components of the problem, which is natural and reasonable. (5) Finally, as shown in Figure 2 and Table 3, experimental results indicate that **our designed PRM outperforms both traditional PRM and ORM in various aspects.** This justifies the validity of our design. Reference: [1] T. Lanham, A. Chen, A. Radhakrishnan, B. Steiner, C. Denison, D. Hernandez, D. Li, E. Durmus, E. Hubinger, J. Kernion, et al. Measuring faithfulness in chain-of-thought reasoning. https://arxiv.org/abs/2307.13702 Once again, we sincerely thank you for your thoughtful questions, which have greatly contributed to improving our work. We believe that the revisions we have made adequately address your concerns and questions regarding the 'evaluation' as a reward in ReST-MCTS*. If you believe that our responses have satisfactorily addressed your concerns about these issues, we kindly request that you consider adjusting the final evaluation to reflect this. --- Rebuttal Comment 3.1: Title: Looking forward to your feedback Comment: Dear Reviewer S8hx, Thank you very much for your valuable comments. As we approach the end of the discussion phase, we would like to know whether our responses have addressed your concerns about the significance of our work and the reward design issue. If you believe that our responses have satisfactorily addressed your concerns, we kindly request that you consider adjusting the final rating to reflect this. If there are any remaining concerns, please let us know. We are more than willing to engage in further discussion and strive to address any remaining issues to the best of our abilities. We are looking forward to your reply. Thank you for your time and efforts on this paper. Best regards, Authors of ReST-MCTS*
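As an aside on the boundedness deduction in Rebuttal 2 above: the inductive argument ($v_0 = 0$, $v_k = \max(v_{k-1} + w_{s_k}, 0)$, with $w_{s_k} \le 1 - v_{k-1}$ and, per Rebuttal 1, $w_{s_k} \ge -1$) can also be checked numerically. The short script below is our own illustration of that invariant, not code from the paper.

```python
import random

def next_value(v_prev: float, w: float) -> float:
    """Quality-value update: v_k = max(v_{k-1} + w_{s_k}, 0)."""
    return max(v_prev + w, 0.0)

random.seed(0)
for _ in range(1000):          # 1000 random reasoning traces
    v = 0.0                    # v_0 = 0
    for _ in range(50):        # 50 steps per trace
        # w_{s_k} is signed: bounded below by -1, above by 1 - v_{k-1}
        w = random.uniform(-1.0, 1.0 - v)
        v = next_value(v, w)
        assert 0.0 <= v <= 1.0  # inductive invariant: v_k stays in [0, 1]
print("v_k stayed in [0, 1] on all sampled traces")
```

Since $v_{k-1} + w_{s_k} \le v_{k-1} + (1 - v_{k-1}) = 1$ and the `max` with $0$ enforces the lower bound, the invariant holds at every step, matching the corrected deduction.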
Summary: This paper proposes a novel approach for self-training large language models (LLMs) that combines process reward guidance with Monte Carlo Tree Search (MCTS). This method generates high-quality reasoning traces and per-step values to train policy and reward models, eliminating the need for manual annotation. Experimental validation on multiple benchmarks shows that ReST-MCTS* outperforms existing self-training methods by achieving higher accuracy on reasoning tasks. Strengths: - The combination of Monte Carlo Tree Search (MCTS) with process reward models represents a novel method for improving self-training in LLMs. This integration allows for the automatic generation of high-quality reasoning traces, which is a significant advancement over existing methods. - The paper provides clear definitions and theoretical support for key concepts such as quality value and weighted reward. This enhances its robustness and effectiveness in the self-training process for reasoning problems. Weaknesses: - This paper introduces a method to evaluate values for each intermediate step in the reasoning process. Providing more evidence to demonstrate the validity and reasonableness of these intermediate values would make the paper more convincing. - Scalability. While the method aims to eliminate manual annotation, the scalability of the proposed approach on extremely large datasets or more complex reasoning tasks might still be a challenge. Complex tasks may require more intermediate steps, and I wonder if the proposed quality value still works in those cases. Providing additional strategies or evidence to support scalability would strengthen the paper. - Although the paper demonstrates improved performance on selected benchmarks, a broader range of datasets and tasks would provide a more comprehensive validation of the method's generalizability and robustness.
Technical Quality: 3 Clarity: 2 Questions for Authors: - Typically, simultaneously training the reward and policy can lead to instability and convergence difficulties. Do you have any concerns about this issue, and how do you address it? Are there any techniques you employ to mitigate these challenges? - The notation is confusing. $V_{\theta}$ in line 149 is the value function, but in line 175 it is the process reward model. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The experiments were conducted on a restricted set of benchmarks, which limits the generalizability of the findings. As mentioned in the weaknesses, the scalability of the proposed approach in handling extremely large datasets or more complex reasoning tasks remains uncertain. Addressing this issue with additional strategies or evidence would strengthen the paper and its applicability. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for acknowledging our contributions to LLM self-training, our clear definitions, and our theoretical support, and for raising valuable concerns and questions about various aspects of our work. We appreciate the time and effort you have dedicated to thoroughly assessing our work. We provide a detailed response to each point below. ``` W1: Concerns on the validity and reasonableness of the proposed reward/value design in ReST-MCTS*. ``` We appreciate the reviewer's attention to this concern. Allow us to delve deeper into it. The formulation of our weighted reward and quality value (as depicted in Equations 1 and 2) is structured with a specific rationale in mind. In our methodology, **we assign varying rewards to different reasoning steps based on their contributions**. This contrasts with previous approaches like Math-Shepherd, which often rely on sparse rewards and the probability of success as values, neglecting the nuanced importance of each step and potentially leading to suboptimal search outcomes. Our approach integrates the process reward $r_{s_k}$ (assessing the probability of correctness) and the reasoning distance $m_k$ (evaluating contribution or importance indirectly). By incorporating these factors into our reward and value design, we aim to enhance the accuracy of step evaluation. The effectiveness of ReST-MCTS$^*$ over MS, as demonstrated in Table 3, underscores the practical benefits and advantages of our design. Furthermore, through extensive experiments detailed in Table 2, we showcase the efficacy of our design across various tasks and LLMs, thereby verifying the scalability and robustness of our method. ``` W2 & L2: Concerns on scalability.
``` While this work mainly addresses mathematical reasoning tasks and shows effectiveness there, it remains suitable for very large datasets or more complex tasks, such as code generation scenarios, which require additional intermediate steps. To enhance scalability, additional strategies can be explored. For instance: (1) For longer and more complex code, higher-level structures can be used as a single reasoning step. These structures could include lines of code, code blocks, or even entire functions. (2) Similar to the approach used in math reasoning tasks, a PRM can provide expert reward feedback for value function estimation. In this way, the MCTS algorithm can always filter out better traces regardless of the policy's competence. Alternatively, if we continue to rely on MC simulations, the policy and an existing reward model could be updated during this process; RL methods such as Q-learning can be adopted. ``` W3 & L1: Concerns on a broader range of datasets and tasks. ``` To further bolster the method's generalizability and robustness, expanding the evaluation to encompass a broader range of datasets and tasks beyond common mathematical reasoning could offer a more comprehensive validation. As highlighted previously, this study primarily concentrates on mathematical reasoning tasks, which is reflected in the evaluation on mathematical benchmarks such as MATH (Table 2, Figure 2), GSM8K (Table 3), MATH500 (Table 3), and various MATH subsets (Table 4). Moreover, the method's performance is also assessed on scientific reasoning benchmarks, including GPQA-Diamond and CEval-Hard (Table 2), as well as SciBench datasets covering Mathematics, Chemistry, and Physics (Table 4, Table 6, Figure 5), and SciEval (Table 7).
Corresponding to W2 & L2, this provides a more comprehensive assessment of the method's effectiveness across a wider spectrum of applications and facilitates a deeper understanding of its generalizability. ``` Q1: Concerns about simultaneously training the reward and policy. ``` We acknowledge the importance of considering this issue in online RL. However, **it is vital to emphasize that our methodology primarily focuses on the offline self-training paradigm**. Within this framework, data is generated via MCTS$^*$ using a static policy and critic within each iteration. The newly synthesized data is verified and filtered according to the ground truth rather than the outputs of the value model. Subsequently, this filtered data is used to train the policy and value models individually, making our algorithm an offline and relatively stable approach. To tackle the instability and convergence challenges inherent in simultaneously training the reward and policy, typical of online RL strategies, and to enhance the overall performance and robustness of the training process, we can employ several techniques, such as experience replay: first, during training, the agent stores transitions in a replay buffer, accumulating experiences over time. Then, instead of using experiences immediately, the agent samples mini-batches of experiences randomly from the replay buffer during training. By sampling randomly from past experiences, experience replay breaks the temporal correlations present in sequential data, which can help prevent the model from becoming biased toward recent experiences. Finally, using experience replay can lead to more stable and efficient learning by providing a diverse set of experiences for the agent to learn from, smoothing out the learning process. ``` Q2: Concerns on $V_\theta$. ``` Thanks for the suggestion; $V_{\theta}$ is indeed the definition of the process reward model. We will fix this typo (line 149) in our manuscript.
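The experience-replay mechanism described under Q1 can be sketched minimally as follows; all names here are hypothetical illustrations for this rebuttal, not part of ReST-MCTS*.

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores transitions and serves random mini-batches, breaking the
    temporal correlations present in sequential experience."""
    def __init__(self, capacity: int = 10_000):
        self.buffer = deque(maxlen=capacity)  # oldest transitions are evicted

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size: int):
        # uniform sampling decorrelates consecutive transitions
        return random.sample(list(self.buffer), batch_size)

# usage sketch: the agent accumulates experience, then trains on random batches
buf = ReplayBuffer()
for t in range(100):                      # placeholder environment interaction
    buf.push(t, f"a{t}", 0.0, t + 1)
batch = buf.sample(32)                    # random mini-batch for a training step
```

Because mini-batches are drawn uniformly over the whole buffer rather than from the most recent rollout, updates are smoothed over a diverse set of experiences, which is the stabilizing effect described above.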
Once again, we sincerely thank you for your thoughtful evaluation and valuable suggestions, which have greatly contributed to improving our work. We believe that the revisions we have made adequately address your concerns and questions regarding the quality value, scalability, and related aspects of ReST-MCTS*. If you believe that our responses have satisfactorily addressed your concerns about these issues, we kindly request that you consider adjusting the final evaluation to reflect this. --- Rebuttal Comment 1.1: Title: Looking forward to your feedback. Comment: Dear Reviewer r9nL, Thank you very much for your valuable comments. As we approach the conclusion of the rebuttal phase, we would like to know whether our responses have addressed your concerns about the significance of the reward/value design, the scalability issue, and the evaluation benchmarks. If you believe that our responses have satisfactorily addressed your concerns, we kindly request that you consider adjusting the final rating to reflect this. The other reviewers have acknowledged our solid theoretical foundations and comprehensive experiments and gave positive evaluations. (One reviewer has raised the rating from 5 to 7.) If there are any remaining concerns that have led to a negative evaluation, please let us know. We are more than willing to engage in further discussion and strive to address any remaining issues to the best of our abilities. We are looking forward to your reply. Thank you for your time and efforts on this paper. Best regards, Authors of ReST-MCTS*
Summary: This paper introduces a novel approach for self-training large language models (LLMs) called ReST-MCTS*. This method integrates process reward guidance with Monte Carlo Tree Search (MCTS) to collect high-quality reasoning traces. These traces are then used to train policy and reward models without relying on manual annotations for every reasoning step. The paper claims that this method outperforms existing self-training techniques in terms of accuracy on several reasoning tasks. Strengths: 1. The application of MCTS to improve the capabilities of LLMs is a highly promising approach. This integration allows for more structured and effective exploration of reasoning paths, leading to better model performance. 2. The proposed method has demonstrated its effectiveness across various reasoning tasks, significantly enhancing the performance of LLMs. Weaknesses: 1. One major issue with the paper is the lack of novelty, particularly due to the absence of comparison with AlphaLLM [1]. AlphaLLM also uses MCTS to enhance LLM performance, and the ideas and implementation in both papers are strikingly similar. The omission of a comparison or even a citation significantly undermines the contribution and novelty of this paper. 2. The method still treats each step's reward equally concerning the final answer, only decreasing based on the distance from the root node. This approach does not effectively differentiate the importance of various steps, failing to allocate different weights to different steps appropriately. Reference: [1]. Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing. https://arxiv.org/abs/2404.12253 Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How does the method avoid the scenario where an incorrect intermediate step leads to the correct final answer as mentioned in the paper? Can the Process Reward Model (PRM) alone handle this issue? 
If an incorrect step during the expansion phase eventually leads to the correct result, would this negatively impact the value function training? 2. Why is the weighted value designed in its current form? Is there any theoretical justification or practical benefits of this design? 3. In the LLaMA-3-8B-Instruct results, is the performance of ReSTEM in the first iteration (3.84) a typographical error? Why is the performance so low? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: 1. Lack of comparison or missing citation of important baseline. 2. Some design details and implementations can be explained in more detail. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and concerns regarding novelty, the design of the reward/value/PRM, and related aspects of our paper. We genuinely appreciate the time and effort you have dedicated to thoroughly assessing our work. We have carefully considered your comments and have made the necessary responses to address these concerns and improve the transparency and credibility of our research. Below, we provide a detailed response to each of your points. ``` W1 & L1: Concerns on the lack of comparison with AlphaLLM. ``` **The concurrent work AlphaLLM appeared on arXiv on April 18th.** We appreciate you bringing this to our attention and will cite AlphaLLM in our paper. As an approach that aims to enhance LLM inference, AlphaLLM utilizes a tailored MCTS algorithm and critic models to provide precise feedback. Even though AlphaLLM also adopts MCTS and critic models for self-improvement, its approach differs from ours in several crucial aspects, as elaborated below. **1) Design of the MCTS algorithm.** Regarding the level of search, AlphaLLM's $\eta$MCTS treats options as actions, with termination signals delivered by a termination function $\beta$. In contrast, we use reasoning steps as actions, which is achieved through tailored prompt design. Concerning critic models, we use a single value model to evaluate intermediate nodes. The model is trained to predict specially designed quality values that reflect the completeness and correctness of partial solutions, rather than estimating the conventional value function of RL. In addition, we also incorporate a self-critic mechanism into the tree search algorithm to provide insights for the policy (Appendix C.1), which AlphaLLM does not adopt.
**2) Definition of reward/value.** Our definition of the weighted reward and quality value is novel, leading to significant differences between our method and AlphaLLM across various processes such as critic model training, data synthesis, and data filtering. Since our design of the quality value involves information on the process reward and reasoning distance, our value model trained on this target can naturally provide sufficient feedback during the search, with no need to implement the other critic models mentioned by AlphaLLM. **3) Self-training algorithm.** Although AlphaLLM also includes iterative self-training, the implementation varies greatly. Most importantly, their critic model is static throughout the iterations, which means they focus more on the improvement of the policy. In comparison, we also consider the impact of self-training on the critic value model. As demonstrated in Algorithm 1, we calculate process rewards and quality values according to the final search tree of questions within each iteration, which are then used as new training data for the value model. ``` W2 & Q2 & L2: Concerns and questions on the design of the weighted reward/quality value. ``` We appreciate the reviewer's concern and query about the validity and effectiveness of our reward/value design. Please allow us to elaborate on this issue. We design the weighted reward and quality value this way (Equations 1 and 2) for two main reasons. Please see our **Official Comment** for more details on this part. 1) First, we believe that **different reasoning steps should receive varied rewards according to the contributions they make.** To achieve this, we incorporate the process reward $r_{s_k}$ (evaluating the probability of correctness) and the reasoning distance $m_k$ (evaluating contribution or importance, albeit indirectly). These factors carry information that helps evaluate a step more accurately, improving the effectiveness of our method.
W2: Actually, each step's reward is equal only when the corresponding reasoning trace is the idealized "perfect" solution, in which all steps are important, necessary, and concise. Otherwise, steps that contribute more to the solution have a smaller reasoning distance and a higher quality value. 2) Another important reason is scalability and accessibility. Unlike some methods, ours can easily obtain an estimate of each node's reasoning distance, as illustrated in Section 3.2. This enables scaling up our self-training process, which is a significant advantage and practical benefit of our design. Furthermore, Table 2 shows that our design is effective across various tasks and LLMs, further verifying the scalability of our method. We believe this answers the reviewer's Q2. ``` Q1: Concerns on the impact of incorrect intermediate steps. ``` We agree that an incorrect intermediate step that reaches the correct answer would be harmful to the iterative training of the value model. However, the probability of this issue occurring in our work is very small. Our value model is first trained on a credible dataset that teaches the model to distinguish correct intermediate steps from false ones before self-training. Therefore, during self-training, the MCTS* algorithm naturally avoids expanding false nodes with high probability. Another measure for this issue is to filter out nodes on a trace that reach the correct answer but obtain lower quality values than their parent node (meaning they have a negative reward). This helps purify the newly generated training data based on the prior knowledge of the PRM itself. ``` Q3: Question on the low performance of LLaMA-3-8B-Instruct. ``` Thank you for raising this question. We have thoroughly reviewed and re-evaluated the results multiple times to ensure the accuracy and reliability of the final result. The result should be 30.84, and we will fix this typo.
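To make the step-differentiation point under W2 & Q2 concrete, a toy weighting such as $w = (1 - v_{\text{prev}})\,(2r - 1)/m$ shows how a smaller reasoning distance yields a higher reward; this functional form is a hypothetical stand-in for illustration only, and the paper's Equation 1 may differ.

```python
def weighted_reward(r: float, m: int, v_prev: float) -> float:
    """Toy weighting: correct steps (r > 0.5) with a small reasoning
    distance m receive a larger reward. Hypothetical form for
    illustration, not the paper's exact Equation 1."""
    return (1.0 - v_prev) * (2.0 * r - 1.0) / m

v_prev = 0.3
# Two children of the same node with equal step correctness r = 0.9,
# but one needs 2 more steps to finish while the other needs 5.
w_near = weighted_reward(r=0.9, m=2, v_prev=v_prev)   # 0.28
w_far = weighted_reward(r=0.9, m=5, v_prev=v_prev)    # 0.112
assert w_near > w_far             # smaller distance => higher reward
v_near = max(v_prev + w_near, 0)  # quality value v_k = max(v_{k-1} + w, 0)
v_far = max(v_prev + w_far, 0)
assert v_near > v_far             # => higher quality value
```

Note that any such weighting with $|w| \le (1 - v_{\text{prev}})/m$ keeps the quality value within $[0,1]$, consistent with the boundedness discussion in the rebuttal to Reviewer S8hx.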
Once again, we sincerely thank you for your thoughtful evaluation and valuable suggestions, which have greatly contributed to improving our work. If you believe that our responses have satisfactorily addressed your concerns about these issues, we kindly request that you consider adjusting the final evaluation to reflect this. --- Rebuttal 2: Title: More details related to the design of the weighted value. Comment: Thank you for your valuable questions and thorough evaluation of our work! Here we present some details related to your concerns and questions, as mentioned in our rebuttal. ``` W2 & Q2 & L2: More details on the design of the weighted reward/quality value. ``` W2: Actually, our method treats each step's reward equally only when the corresponding reasoning trace is the idealized "perfect" solution, in which all steps are important, necessary, and concise. In this situation, it is reasonable and natural to assign equal rewards. In other circumstances, suppose we perform expansion at a node with partial solution $p_{k-1}$; its children will then have different $r_{s_k}$ and $m_k$ depending on the generated step $s_k$. For steps $s_k$ that contribute more to the solution, the corresponding child node requires fewer steps to reach the correct answer. This means it has a smaller reasoning distance, resulting in a higher weighted reward and quality value. Therefore, **our method can differentiate the importance of steps, as long as the critic model can estimate the designed reward accurately**. Q2: It may seem more direct to model the "importance" or "contribution" of a step and train a critic model to predict it. However, it is extremely difficult to acquire sufficient data to train a valid critic model for this target, not to mention that modeling "importance" or "contribution" is already fairly hard. As an alternative, our definition of the reasoning distance reflects these factors in an indirect way.
Through the tree-search-based data synthesis process, we can easily obtain an estimate of each node's reasoning distance using the method illustrated in Section 3.2. This enables scaling up our self-training process, which is a crucial goal of our work. --- Rebuttal 3: Comment: Thanks for your response. My main concerns are addressed. Although I believe that AlphaLLM does not strictly qualify as concurrent work due to the significant time gap between its arXiv posting and the NeurIPS submission deadline, I hope that in subsequent versions the authors can compare their method with AlphaLLM and further clarify the differences between the two approaches. I have raised my rating to 7. Congrats on your great work! --- Rebuttal 4: Title: Thanks for your great reviews! Comment: Thanks for your great review! In our final version, we will provide a thorough comparison with AlphaLLM. Thank you again for the score adjustment!
Summary: The paper introduces ReST-MCTS*, a novel framework for self-training LLMs using MCTS combined with process reward guidance. The core innovation is in addressing the limitations of traditional self-training methods, which often include incorrect intermediate reasoning steps despite producing correct final answers. ReST-MCTS* leverages a modified MCTS algorithm that integrates a process reward model to estimate the probability that each intermediate step contributes to the correct final answer. This allows for the automatic generation of high-quality reasoning traces without requiring dense human annotations. The inferred rewards serve as value targets for refining the process reward model and selecting high-quality traces for self-training the policy model. Experimental results on benchmarks like SciBench and MATH show that ReST-MCTS* not only outperforms previous self-training approaches (e.g., ReSTEM, Self-Rewarding LM) but also enhances the LLMs' accuracy through iterative self-improvement. Strengths: - Originality: - Proposes an innovative integration of MCTS with process reward guidance. - Quality: - Theoretical foundations are robust, and the methodology is supported by extensive experimental validation. - Demonstrates substantial improvements in performance on multiple benchmarks, showcasing the effectiveness of the proposed approach. - Significance: - The approach addresses a critical challenge in multi-iteration LLM self-training, enhancing the quality of generated training data. Weaknesses: - Related Work: While this paper mentions related work, such as Feng et al.'s [1], it does not adequately discuss the differences between its handcrafted value target and TD(λ) in [1]. A more thorough comparison with the above approach would provide better context and highlight the novel contributions of this work. (e.g. 
TS-LLM is missing in Figure 6) - Clarity: The paper suffers from clarity issues, as many definitions and notations are introduced later in the text rather than upfront. Additionally, several important figures are placed in the appendix, making it difficult for readers to follow the key points without constantly referring to supplementary materials. [1] Feng, X., Wan, Z., Wen, M., Wen, Y., Zhang, W., & Wang, J. (2023). Alphazero-like tree-search can guide large language model decoding and training. arXiv preprint arXiv:2309.17179. Technical Quality: 3 Clarity: 2 Questions for Authors: - Other than the experiment in Figure 2 showing that multi-iteration ReST-MCTS* significantly outperforms SC, the other experiments only demonstrate marginal gains over SC. What do you think is the reason behind this? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: NA (Good Enough) Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
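For context on the TD(λ) targets the reviewer contrasts with the handcrafted value target: the λ-return is a standard RL construction (the sketch below is generic, not code from either paper):

```python
def td_lambda_returns(rewards, values, gamma=1.0, lam=0.9):
    """Backward-recursive lambda-returns.

    rewards: r_0..r_{T-1}; values: V(s_0)..V(s_T) (len = len(rewards) + 1).
    G_t = r_t + gamma * ((1 - lam) * V(s_{t+1}) + lam * G_{t+1}),
    bootstrapping from G_T = V(s_T).
    """
    G = values[-1]  # bootstrap from the terminal value estimate
    out = []
    for t in reversed(range(len(rewards))):
        G = rewards[t] + gamma * ((1 - lam) * values[t + 1] + lam * G)
        out.append(G)
    return out[::-1]
```

With `lam=0` this reduces to one-step TD targets, and with `lam=1` it becomes the Monte-Carlo return — both driven by the reward signal, in contrast to a per-step value target handcrafted from quantities such as a reasoning distance.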
Rebuttal 1: Rebuttal: Thanks a lot for acknowledging the strengths of this work as an innovative self-training method, robust theoretical foundations, and extensive benchmarks. Below we give a detailed discussion of the related work, the clarity issues, and the performance of SC. ``` W1: Concerns on lack of comparison between proposed ReST-MCTS* and TS-LLM. ``` In Table 1, this paper initially contrasts TS-LLM with our proposed ReST-MCTS$^*$ concerning reasoning policy and reward guidance. To offer more depth, we have conducted a comprehensive comparison between these two approaches, aiming to offer richer context and underscore the unique contributions of our work. As an approach that aims to enhance LLM inference decoding and training, TS-LLM utilizes a tailored MCTS algorithm and value function to provide precise feedback. Their work showed that TS-LLM is indeed an efficient framework that significantly boosts the performance of LLMs without requiring extra data annotations. Even though TS-LLM also adopts MCTS and a value function for self-improvement, their approach differs from ours in several crucial aspects, elaborated as follows. **(1) Design of MCTS algorithm.** For the level of search, TS-LLM considers each sentence or each token as an action, with termination signals delivered by a termination function $\alpha$. In contrast, we use reasoning steps as actions, which is achieved through tailored prompt design. Our proposed value model is trained to predict specially designed quality values that reflect the completeness and correctness of partial solutions, rather than estimating the conventional value function in RL. Additionally, we incorporate self-critic mechanisms into the tree search algorithm to provide insights for the policy, as mentioned in Appendix C.1, which TS-LLM does not adopt. 
**(2) Definition of reward/value.** Our definition of the weighted reward and quality value is novel, leading to significant differences between our method and TS-LLM across various processes such as critic model training, data synthesis, and data filtration. Since our design of the quality value involves information from $r_{s_k}$, $m_k$, and $v_{k-1}$, our value model trained on this target can naturally provide sufficient feedback during the search, with no need to implement the other critic models mentioned by TS-LLM. **(3) Self-training algorithm.** Although TS-LLM also includes iterative self-training, the implementation varies greatly. Most importantly, their reward model is an ORM throughout the iterations, which means they focus more on the improvement of the policy. In comparison, we consider the impact of self-training on the critic value model. As demonstrated in Algorithm 1 and Appendix C.2, by carefully designing the data synthesis process, we calculate process rewards and quality values from the final search tree of questions within each iteration, which are then used as new training data for the value model. Through comprehensive experiments, we further reveal that both the policy and the critic can be continuously improved over multiple iterations of self-training (Table 2), achieving significant enhancement in search accuracy under reasonable token consumption (Figure 2). In addition, we will add the key differences between TS-LLM and our work to Figure 6 in our revision. ``` W2: Concerns on clarity issues. ``` We sincerely apologize for the inadequate ordering of definitions. We will reorder the definitions according to your suggestion to make our paper more intelligible to readers. Once again, we would like to express our sincere apologies for our negligence regarding these details, and we will certainly correct them in the revised manuscript. Thank you for such a meticulous inspection of our paper! 
``` Q1: Question about the effectiveness of SC. ``` Regarding the performance comparison between SC and ReST-MCTS$^*$, we provide all the results below. In fact, in addition to the experiments in Figure 2 showing that ReST-MCTS$^*$ with multiple iterations is significantly better than SC, other experiments also show that ReST-MCTS$^*$ has a significant improvement over SC. For instance, in Figure 2, ReST-MCTS$^*$ significantly outperforms SC after multiple iterations, which verifies the effectiveness of the self-training algorithm. In Table 4, CoT is in fact the CoT-SC setting; ReST-MCTS$^*$ outperforms CoT-SC on the separate accuracy of most subjects and on the final average accuracy. In Table 3, we first provide the results of ReST-MCTS$^*$ on GSM8K and MATH500, which are 86.8 and 37.4, respectively, and then calculate the relative improvement of the value models (ORM, MS, and ReST-MCTS*) compared to SC.

| Dataset | SC | ORM (% Improv.) | MS (% Improv.) | ReST-MCTS* (% Improv.) |
| --- | --- | --- | --- | --- |
| GSM8K | 83.9 | 86.2 (2.74) | 87.1 (3.81) | 86.8 (3.46) |
| MATH500 | 35.1 | 36.4 (3.70) | 37.3 (6.27) | 37.4 (6.55) |

Overall, ReST-MCTS$^*$ significantly outperforms SC in multiple experimental settings, e.g., multiple iterations, the scientific reasoning benchmark SciBench, and mathematical reasoning benchmarks. Once again, we sincerely thank you for your thoughtful evaluation and valuable suggestions, which have greatly contributed to improving our work. We believe that the revisions we have made adequately address your concerns and questions regarding the comparison with and effectiveness of SC. If you believe that our responses have satisfactorily addressed your concerns, we kindly request that you consider adjusting the final evaluation to reflect this. 
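The "% Improv." figures quoted for Table 3 are plain relative improvements over the SC baseline; as a quick arithmetic check (the helper function is ours, not from the paper):

```python
def rel_improvement(baseline: float, score: float) -> float:
    # Relative improvement over the SC baseline, in percent.
    return (score - baseline) / baseline * 100.0

# GSM8K: SC = 83.9; ORM / MS / ReST-MCTS* = 86.2 / 87.1 / 86.8
# MATH500: SC = 35.1; ORM / MS / ReST-MCTS* = 36.4 / 37.3 / 37.4
gsm8k = [rel_improvement(83.9, s) for s in (86.2, 87.1, 86.8)]
math500 = [rel_improvement(35.1, s) for s in (36.4, 37.3, 37.4)]
```

Rounding to two decimals reproduces the reported 2.74 / 3.81 / 3.46 (GSM8K) and 3.70 / 6.27 / 6.55 (MATH500).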
--- Rebuttal 2: Title: Looking forward to your feedback Comment: Dear reviewer 7P64, thank you very much for your valuable feedback. We hope that our responses and clarifications have addressed your questions and concerns. If you believe that our responses have satisfactorily addressed your concerns, we kindly request that you consider adjusting the final rating to reflect this. If any concerns remain or you require additional clarification, please let us know. We look forward to your reply. Thank you for your time and efforts on this paper. Best regards, Authors of ReST-MCTS*
Rebuttal 1: Rebuttal: Dear ACs and reviewers, thank you very much for your valuable feedback. We list the main issues raised by the reviewers and address them below. ``` The motivation and reason behind the design of ReST-MCTS*. ``` First, the main reason **we design a different reward and value for MCTS lies in the defects of the conventional reward/value in the case of reasoning**. We believe that different reasoning steps (actions) should receive varied rewards according to the contribution they make, while conventional methods like Math-Shepherd often use a sparse reward (only the final step receives a reward) and the probability of success as the value. These methods omit the varied importance of steps and can lead to ineffective searches. On the other hand, modeling intrinsic rewards that follow the conventional RL paradigm is very difficult. Some approaches like AlphaLLM attempt to model the reward of intermediate steps in a descriptive way, but the implementation is complicated and the results are still unsatisfactory. These issues inspired us to look for alternative reward and value designs that may be more suitable for reasoning tasks, stepping outside the conventional RL paradigm. ``` The benefits of the design of ReST-MCTS*. ``` The design of the proposed ReST-MCTS$^*$ has three main benefits: **accurate evaluation of intermediate steps, balanced exploration and exploitation, and scalability**. Firstly, we achieve a better evaluation of steps by incorporating the process reward $r_{s_k}$ (which evaluates the probability that a single step is correct) and the reasoning distance $m_k$ (which evaluates contribution or importance, though in an indirect way). These factors carry information that helps assess a step more accurately, so we believe that involving them in the design of the reward and value improves the effectiveness of our method. 
Secondly, we regard MCTS as a heuristic algorithm that balances exploration and exploitation, since it considers both visit frequency and expected reward in the UCT formula. Despite the differences in the reward/value definition, we adopt the core UCT formula to achieve a balanced search in MCTS$^*$, resulting in better search outcomes. Thirdly, our design enables easy estimation of each node's reasoning distance and quality value, as illustrated in Section 3.2. This enables scaling up our self-training process, which is a significant advantage and practical benefit of our design. Furthermore, Table 2 shows that ReST-MCTS$^*$ is indeed effective across various tasks and LLMs, further verifying the scalability of our method. ``` Different definitions of value functions in traditional RL and our proposed MCTS*. ``` Compared to the expected total reward in traditional RL, our value function focuses on the evaluation of the current state. Even though the settings of reward/value are different in our work, the core of the algorithm can still be preserved and adopted. On the one hand, we want to further explore the nodes that make more progress in the right direction (higher quality value). On the other hand, we cannot simply discard the search directions that, for whatever reason, have not made much progress. As a special search algorithm, MCTS$^*$ inherits the core decision method of MCTS and adapts it to the use of the weighted reward and quality value. This suffices to perform scalable data synthesis for iterative self-training of the policy and value models, achieving significant improvements, as our experiments show. ``` ReST-MCTS* focuses on the offline self-training paradigm. ``` It is crucial to emphasize that our methodology predominantly focuses on the offline self-training paradigm. Within this framework, data is generated through MCTS$^*$ using a static policy and critic within each iteration, as elaborated in Section 3.2. 
The newly synthesized data is verified and filtered according to the ground truth, rather than the outputs of the value model. Subsequently, this filtered data is utilized to individually train the policy and value models, rendering our algorithm offline and relatively stable.
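The UCT-based balance between exploration and exploitation referenced in the rebuttal follows the standard formula, with MCTS* swapping in the weighted reward/quality value as the exploitation term. A generic sketch (the constant `c` and the flat child representation are illustrative, not the paper's implementation):

```python
import math

def uct_score(total_value, visits, parent_visits, c=1.41):
    # Unvisited children are explored first.
    if visits == 0:
        return float("inf")
    # Exploitation (mean value) plus the UCT exploration bonus.
    return total_value / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_child(children):
    """children: list of (total_value, visits) pairs; returns the index to expand."""
    parent_visits = sum(v for _, v in children) or 1
    scores = [uct_score(q, n, parent_visits) for q, n in children]
    return scores.index(max(scores))
```

The exploration term grows for rarely visited children while the exploitation term favors high quality values, which is the balance the rebuttal describes.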
NeurIPS_2024_submissions_huggingface
2024
NeuroClips: Towards High-fidelity and Smooth fMRI-to-Video Reconstruction
Accept (oral)
Summary: The paper introduces NeuroClips - a framework for reconstructing high-fidelity videos from fMRI data. It combines pixel-level and semantic-level visual learning through perception and semantic reconstruction pathways. The Perception Reconstructor (PR) ensures smoothness and consistency by creating a rough, continuous video, while the Semantics Reconstructor (SR) generates high-quality keyframes. These components are integrated into a video diffusion model, resulting in high-quality, detailed videos. Also, no additional post-diffusion training is required. NeuroClips sets a new standard in fMRI-to-video reconstruction, demonstrating significant advancements in semantic precision and pixel-level matching by improving SSIM by 128% and spatiotemporal metrics by 81%. Strengths: - The framework design introduces a dual-component approach with a Semantics Reconstructor and a Perception Reconstructor to handle high-level semantics and low-level perceptual details, respectively, which is a novel design. - The authors provide a comprehensive overview of existing methods for fMRI-to-video and fMRI-to-image tasks. - The paper provides a thorough explanation of the NeuroClips framework, including the components, training procedures, and inference process. The implementation details are well written so that readers can follow the technical aspects of the work. This also ensures reproducibility. - The validation procedure is well structured and clearly explained. The division of the validation metrics into two groups (frames and video flow) allows one to better understand the performance of the NeuroClips framework. - The newly introduced SR of NeuroClips allows the generation of longer videos. The novelty of the NeuroClips pipeline, the high validation metric scores, and the creative approach to dealing with fMRI data make the proposed framework significant. 
Weaknesses: - The multi-fMRI fusion strategy is briefly described, but the implementation details and the rationale behind specific design choices are not fully elaborated. I suppose a scheme of the multi-fMRI strategy in the supplement would increase the clarity of this paragraph. - As can be seen from the examples provided with the code repository, the proposed framework does not account for a change of scene in the video (which was briefly mentioned by the authors in the Limitations). The pipeline with a chosen keyframe might hinder NeuroClips from catching this rapid change in the video. No ablation is done in this direction, which could explain how NeuroClips decodes fMRI signals. - The paper primarily evaluates the method on a specific dataset (with only 3 participants), which, obviously, may not fully capture the diversity of real-world video content and fMRI recordings. This should be at least mentioned in the Discussion (or Conclusion) of the work. - The neurobiological justification of keyframe usage seems ambiguous. Improved clarity of the text and up-to-date references would increase the significance of the work. Technical Quality: 3 Clarity: 3 Questions for Authors: - The paper mentions accounting for the hemodynamic delay (BOLD signal delay of approximately 4 seconds), but there is limited discussion of how this delay specifically impacts the reconstruction quality and temporal alignment with video frames. - The use of ridge regression to map fMRI signals to lower-dimensional embeddings assumes a relatively linear relationship between neural activity and visual stimuli. However, the brain's processing of visual information is highly nonlinear and complex. At the least, this should be mentioned in the Limitations too. - Fig 12: The weights of subject 3 are really similar for the PR and SR tasks; can you elaborate on this? - Fig 2: The example images are too small to perceive even with zoom; I recommend making them bigger (as in Fig 7). 
- I recommend adding the major limitations to the Conclusion section of the main text. - The authors should include in the Supplement section which software was used to build the brain maps with weights and whether any data pre-processing/normalization was applied. - Lines 75-76: "NeuroClips achieves a 128% improvement in SSIM and an 81% improvement in spatiotemporal metrics and also performs better overall on most video semantic-level metrics." - report not only the percentage of improvement, but also the values of the metrics. - In the frame validation the main image quality metrics are PSNR and SSIM. The authors should mention that those metrics have flaws (Zhang et al., The Unreasonable Effectiveness of Deep Features as a Perceptual Metric, 2018). Would it be possible to consider other quality evaluation metrics, for example, visual information fidelity metrics (Sheikh et al., Image information and visual quality, 2006)? It is also interesting why PSNR and SSIM were used for frame-based evaluation but not, with modifications to capture spatio-temporal data (ST-PSNR and ST-SSIM), for video-based evaluation. Another question is why the authors chose to evaluate performance only with "CLIP image embeddings on each frame" and omitted other metrics like Fréchet Video Distance (FVD), MSE, or the already mentioned ST-PSNR and ST-SSIM. - Fig 12: The visualization of voxel weights on a brain flat map to interpret neural activity is a significant strength. However, it is currently difficult to comprehend the differences in weights from the image with the naked eye. The addition of a third row with the difference between the weights of participants would increase the clarity of what the authors are trying to say. - The study focuses primarily on the visual cortex. While this is appropriate for the simplest visual stimuli (i.e., dots, simple geometric shapes, etc.), it may limit the generalization of the approach to other types of brain activity or cognitive functions. 
Discussing potential extensions to other brain regions and types of neural data could provide a more comprehensive neurobiological perspective. - Lines 316-318: "From the quantitative results, it can be seen that there is a trade-off between semantic and perception reconstruction" - there is no clear explanation or reasoning for why there is a trade-off between SR and PR. - Line 38: "However, the visual decoding of MinD-Video significantly diverges from the brain's visual system, exhibiting limitations in perceiving continuous low-level visual details." - a more precise listing of what hinders MinD-Video from perceiving continuous low-level visual details would be more constructive than a comparison with the brain's visual system. - Lines 41-42: "Notably, the human brain perceives videos discretely [8, 9] due to the persistence of vision [10, 11] and delayed memory [12]." 1. discretely -> discreetly 2. Was it intentional to use the 1892 paper [10] for the "persistence of vision" concept? Also, the claim that the brain has a "persistence of vision" has been criticized. In fact, the second citation you used [11] is a critique of this concept (J. Anderson et al., The myth of persistence of vision revisited, 1993). Citations of recent research papers are required to justify this statement. The paper would be significantly improved by a more valid justification of the claimed neurobiological concepts. Minor: - Line 644: "However, these limitations will not be tackled overnight" - this phrase is informal and somewhat colloquial, which may not be entirely appropriate for a scientific paper. - Line 56: "This process is reflected in the fMRI signal" - can you provide a link to a paper supporting this statement? - Line 129: "...loss to train the PR, the overall loss L_{PR} of PR can be donated as..." - "donated" is a confusing word to use; please use another word. 
- Lines 222-223: "Since the technical field of long video generation is still immature, we chose a more straightforward fusion strategy that" - I cannot see why the immaturity of the field can be used as a reason for using a straightforward strategy. Please make the reason for this choice more understandable. - Figure 4 on page 8: I recommend not using the Comic Sans font in images. - Lines 596-597: "It proves that there may be a large difference in the understanding of the video" - please refrain from using the word "prove" here, since it implies comprehensive research or, at least, references to other works that prove it. - Could you report the number of classes in at least the training and testing datasets (for the "N-way top-K accuracy classification test as the semantics-level metric")? - Lines 55-56: "generating high-level images in the cerebral cortex" - it is also known that the brain interpolates seen scenes (Vacher, Jonathan, et al., Texture interpolation for probing visual perception, 2020), which could be used as justification for the keyframe approach. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: An important limitation - the dataset of only 3 participants - was not mentioned. This may result in varying interpretations upon careful review by a neurobiologist. Additional evaluation is needed with other datasets and more participants. Flag For Ethics Review: ['Ethics review needed: Research involving human subjects'] Rating: 7 Code Of Conduct: Yes
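On the review's PSNR/SSIM point: PSNR is a fixed function of the mean squared error, which is part of why it correlates poorly with perceptual quality. A minimal reference implementation of the standard definition (not the paper's evaluation code):

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, max_val: float = 1.0) -> float:
    # PSNR = 10 * log10(MAX^2 / MSE); higher is better, inf for identical inputs.
    mse = float(np.mean((ref - test) ** 2))
    if mse == 0.0:
        return float("inf")
    return float(10.0 * np.log10(max_val ** 2 / mse))
```

For example, for images in [0, 1] with a uniform pixel error of 0.1, the MSE is 0.01 and the PSNR is 20 dB regardless of where in the image the error falls — exactly the insensitivity to perceptual structure that Zhang et al. (2018) criticize.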
Rebuttal 1: Rebuttal: We sincerely appreciate your careful review. In particular, you have provided us with more than twenty constructive suggestions and insightful questions, which is extremely precious at a time when the quality of reviews in the community is deteriorating. We value each of your suggestions and provide the following responses: > **Weakness 1: Details of Multi-fMRI Fusion** We acknowledge and accept your suggestion. Considering that Multi-fMRI Fusion is merely an extra merit of NeuroClips and the page limitations, we were unable to present more details regarding multi-fMRI fusion. Thanks to the design of keyframes in the Semantic Reconstructor (SR), Multi-fMRI Fusion can be effectively achieved. In this study, we obtain the CLIP representations of reconstructed neighboring keyframes and train a shallow MLP based on the representations to distinguish whether two frames share the same class. The exact process is also shown as the GIF in the code repository. This training process operates at the image level. Despite our efforts to assess whether neighboring fMRI frames belong to the same scene class at the fMRI frame level using an MLP with fMRI frames as inputs, aiming for fMRI anomaly detection, our results were suboptimal, as illustrated in the following table. As can be seen, the fMRI level analysis frequently led to excessive false fusions. Conversely, established techniques for categorizing images perform reliably, particularly within the proficient CLIP space we employ. We promise to include more technical details in the final version. Thank you for your suggestion. 
| Fusion-level | Subject 1 | Subject 2 | Subject 3 |
| --- | --- | --- | --- |
| fMRIs | 59.7% | 58.8% | 57.2% |
| Reconstructed keyframes | **86.3%** | **87.1%** | **85.6%** |

> **Weakness 2: Rapid Scene Change** **We present four perspectives on the inability of NeuroClips to perceive rapid scene changes.** * **Regarding the Dataset.** The rapid scene change depicted in the code repository is a **distinct** alteration introduced by **human intervention**, diverging from authentic real-world visuals. In the CC2017 dataset, video sequences were randomly divided into discrete segments, then amalgamated and presented to subjects. Such spliced videos are seldom encountered in everyday visual experience, as scenes typically unfold continuously. This is what we mentioned in the Limitations about *cross-scene fMRI*. * **From the fMRI Data.** We must acknowledge that **continuous** scene transitions occur in actual visual perception, such as when a person turns their head. However, it remains unknown whether and how rapid scene changes occurring within an fMRI frame (e.g., 2 seconds) are reflected in the fMRI signal, due to its intricate nature. Consequently, decoding scene changes from fMRI data poses a significant challenge. * **From the Keyframe Design.** When scene changes occur within one fMRI frame (e.g., 2 s), the design of the keyframes may adequately capture them, although the blurred video would remain continuous. Moreover, the keyframe-based method for guiding the generation of continuous video can serve as a foundation for subsequently decoding two consecutive scenes separated by a rapid scene change. * **From Cross-scene Diffusion Models.** Presently, to the best of our knowledge, most current video-generation diffusion models, SORA included, are not capable of silky-smooth scene switching. 
We believe that, from a technical point of view, the issue can be better alleviated if consecutive semantics can be decoded from a single fMRI frame, or if the temporal resolution of fMRI can be refined. Thanks again for your suggestions. We will provide the results of some of our exploratory experiments and a detailed discussion in the final version. > **Weakness 3: Generalization Capability** **Diverse real-world video content.** The videos in the CC2017 dataset are sufficiently reflective of real-world visual experiences. As the original dataset paper states, 'All video clips were chosen from Videoblocks and YouTube to be diverse yet. For example, individual video clips showed people in action, moving animals, nature scenes, outdoor or indoor scenes, etc' [1]. **Diversity of fMRI recordings.** To ensure consistency with the baselines [2] and to make a fair comparison, we experimented on this dataset, which unfortunately has only 3 subjects. We appreciate you pointing out that this should at least be mentioned in the Limitations, and we will add it in the final version. Publicly available datasets for fMRI-to-video reconstruction are valuable and not easy to find. So far, we have discovered that the Algonauts 2021 [3] dataset can also be used for video reconstruction, but unfortunately this dataset is currently unpublished. To show the generalization capability of our approach, we therefore chose to perform fMRI-to-image reconstruction on the Natural Scenes Dataset (NSD) [4] to assess the keyframe effect of our Semantic Reconstructor (SR). The visual results of the reconstructed and ground-truth images are shown in the **`PDF`** appendix. Notably, our method exhibited satisfactory reconstruction outcomes even when applied to fMRI data with a distinct distribution, signifying the generalization capabilities of NeuroClips. 
**Reference** [1] Neural encoding and decoding with deep learning for dynamic natural vision, *Cerebral cortex 2018* [2] Cinematic mindscapes: High-quality video reconstruction from brain activity, *NeurIPS 2023* [3] The algonauts project 2021 challenge: How the human brain makes sense of a world in motion, *2021* [4] A massive 7T fMRI dataset to bridge cognitive neuroscience and artificial intelligence, *Nature neuroscience 2022* --- Rebuttal 2: Title: Response to Weaknesses of Reviewer oPVf Comment: > **Weakness 4: Neurobiological Justification** Thank you for sharing your expert advice from neuroscience. In numerous studies within **cognitive neuroscience** and **computational neuroscience**, researchers have delved into the mechanisms related to the **brain's visual information processing** and memory functions. Building on these studies, we innovatively propose using keyframes to guide our research, addressing the issue of frame rate mismatch between visual stimuli and fMRI signals, and enhancing the model's accuracy in fMRI-to-Video Reconstruction. Specifically, [1] demonstrate that **'key-frames'** play a crucial role in how the human brain recalls and connects relevant memories with unfolding events. In [2], a novel video abstraction paradigm which use the brain response reflected by fMRI to guide the extraction of visually informative segments from videos was proposed to quantitatively reveal the attentional engagement of human brain in the comprehension of video. In [3], the key frames in a video clip were used to extract these features, with the combined features from all **keyframes** representing the entire **video clip**. [4] provided a framework and explanation for video summarization. We will revise and update our text clearance and references in the final version. 
**Reference** [1] Brain mechanisms underlying cue-based memorizing during free viewing of movie Memento, *NeuroImage 2018* [2] Video abstraction based on fMRI-driven visual attention model, *Information sciences 2014* [3] Bridging the semantic gap via functional brain imaging, *IEEE Transactions on Multimedia 2011* [4] A comprehensive survey and mathematical insights towards video summarization, *Journal of Visual Communication and Image Representation 2022* --- Rebuttal 3: Title: Response to Questions of Reviewer oPVf Comment: > **Question 1: Hemodynamic Delay** Indeed, a considerable number of **exploratory experiments** have been conducted on the topic of **hemodynamic delay**, which has also been considered in Mind-Video. In the current version of NeuroClips, which employs a fixed 4-second delay, it was observed that the semantics of some of the generated keyframes exhibited latency, particularly in instances where the video clips of the scene were of a **greater duration**. Accordingly, a **sliding window** comprising two or three fMRI frames was devised to actively learn the aforementioned delay. However, it was discovered that employing a sliding window resulted in a **notable reduction** in the final evaluation metrics, with a **more pronounced negative impact**, particularly in the case of shorter video clips. It may be the case that longer videos have a more enduring effect on human brain fMRI signals. In light of the experimental outcomes, we ultimately opted to discard this methodology and instead fix the delay. > **Question 2: Ridge Regression** As you mentioned, the human brain processes information in a highly complex and non-linear way. However, empirical evidence [1, 2, 3] underscores the **effectiveness and sufficiency of linear mapping** for achieving desirable reconstruction outcomes. Notably, **complex nonlinear models will easily overfit to fMRI noise**, leading to poor performance in the test set [4]. 
We will add more discussion in the Method Section. **Reference** [1] Reconstructing the mind's eye: fMRI-to-image with contrastive learning and diffusion priors, *NeurIPS 2023* [2] High-resolution image reconstruction with latent diffusion models from human brain activity, *CVPR 2023* [3] MindEye2: Shared-Subject Models Enable fMRI-To-Image With 1 Hour of Data, *ICML 2024* [4] Through their eyes: multi-subject Brain Decoding with simple alignment techniques, *Imaging Neuroscience 2024* > **Question 3: Voxel Weight Visualization** In Figure 12, each column represents the PR and SR visualization weights for the same subject. The voxel distribution is the same in each column since the **voxels initially selected for each subject are fixed**. For each subgraph, the weights of the voxels were visualized after normalization, with **brighter regions** representing **higher weights** and **darker regions** representing **lower weights**. For subject 3, the distribution of voxel weights in PR and SR is actually quite different. For example, in the right region of V4 of the right hemisphere, PR is brighter and SR is darker. We appreciate your suggestion to add another row to visualize and highlight the **difference** between PR and SR. In this way, we can better show the difference in voxel weights between the two modules, and we will add it to the final Supplement. > **Question 4: Larger Example Images** We will convert it to a vector figure and adjust the number of images to make it clearer. > **Question 5: Major Limitations** Since NeurIPS allows an additional page in the final version, we will move the Limitations from the appendix to the main body. Thanks for your suggestion. > **Question 6: Visualization Software** We use **Connectome Workbench** from the Human Connectome Project (*HCP*), and the flatmap templates were 'Q1-Q6_R440.L.flat.32k_fs_LR.surf.gii' and 'Q1-Q6_R440.R.flat.32k_fs_LR.surf.gii'. 
Additionally, the cortical parcellation was manually delineated by **neuroimaging specialists** and **neurologists**, and aligned with the public templates in **FreeSurfer software** with verification. We **normalised** the voxel weights, scaling them to between 0 and 1. Finally, to show a better comparison, the **colorbar** range was chosen to be 0.25-0.75. We will add this note to the final Supplement section. > **Question 7: Metric Improvement** We'll add extra analysis to indicate the improvement values on the metrics. --- Rebuttal 4: Title: Response to Questions of Reviewer oPVf Comment: > **Question 8: More Metrics** The two new metrics ST-SSIM and ST-PSNR you mentioned are very interesting and enlightening, and we have carried out comparative experiments with MinD-Video in terms of the two metrics. We also evaluated NeuroClips with the Visual Information Fidelity (VIF) metric, and the results are shown in the table below (all results are averages over 3 subjects). Notably, NeuroClips' performance at the ST level far exceeds that of MinD-Video, even more so than at the pixel level, which further indicates that NeuroClips has a smoother video reconstruction capability.

| Method | ST-SSIM | SSIM | ST-PSNR | PSNR | VIF |
| - | - | - | - | - | - |
| MinD-Video | 0.489 | 0.171 | 11.595 | 8.662 | 0.113 |
| NeuroClips | **0.785** | **0.390** | **17.200** | **9.211** | **0.170** |

As a commonly used evaluation metric in the field of **video generation**, *Fréchet Video Distance (FVD)* is more often used to assess the performance of video diffusion generation models. However, since we **freeze** the pre-trained parameters of the advanced generation model *AnimateDiff* [1], our video generation backbone is necessarily stronger than previous video models, so evaluation on these video metrics may not be **fair**. Instead, we consider the CLIP representations for evaluation.
Note that consistency in the CLIP representation space is more revealing of the degree of **semantic consistency**, reflecting the superiority of the Semantic Reconstructor (SR) in NeuroClips. In the table above, we conducted an evaluation using the Visual Information Fidelity (VIF) metric instead of the FVD metric. When assessing FVD, all of PyTorch's open-source implementations necessitate a video frame rate exceeding 10 fps due to their need for some level of downsampling. Despite NeuroClips making significant advancements in frame rate, it fails to meet this requirement. [1] AnimateDiff: Animate your personalized text-to-image diffusion models without specific tuning, *ICLR 2024* > **Question 9: Difference Voxel Weight Visualization** We will add a third row with the difference between the PR and SR weights for each subject, as described in *Question 3*, thanks again! > **Question 10: Voxel Selection** In fact, we did **not** select voxels **specifically for the visual cortex**. We calculated the voxel-wise correlation between the fMRI voxel signals of each training movie repetition for each subject. **The significant voxels** (Bonferroni correction, **P < 0.05**) were considered to be **stimulus-activated voxels** and used for subsequent analysis, as described in the pre-processing paragraphs of **Section 4.1**. We agree with you that other brain regions may contribute to video decoding as well, so we expanded the range of significance (**MinD-Video uses P < 0.01**), and NeuroClips uses more voxels than MinD-Video. The following table shows the number of voxels selected by the two methods.

| Method | Subject 1 | Subject 2 | Subject 3 |
| - | - | - | - |
| MinD-Video | 6016 | 6224 | 3744 |
| NeuroClips | 13447 | 14828 | 9114 |

We believe that our voxel-selection paradigm is more easily migrated to other fMRI decoding tasks and provides a more comprehensive neurobiological perspective.
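The selection step described above can be sketched as follows. This is a simplified illustration assuming two repetitions of the training movie; the function and variable names are ours, not from the paper's code:

```python
import numpy as np
from scipy import stats

def select_stimulus_activated_voxels(rep_a, rep_b, alpha=0.05):
    """Keep voxels whose time courses correlate significantly across two
    repetitions of the same movie, with Bonferroni correction.

    rep_a, rep_b: (n_timepoints, n_voxels) arrays. Returns a boolean mask.
    """
    n_voxels = rep_a.shape[1]
    corrected_alpha = alpha / n_voxels  # Bonferroni: divide by number of tests
    mask = np.zeros(n_voxels, dtype=bool)
    for v in range(n_voxels):
        _, p = stats.pearsonr(rep_a[:, v], rep_b[:, v])
        mask[v] = p < corrected_alpha
    return mask
```

A voxel that responds reproducibly to the stimulus will correlate across repetitions and survive the corrected threshold; loosening `alpha` (as when comparing against a stricter P < 0.01 criterion) admits more voxels.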
> **Question 11: Trade-off Between PR & SR** Due to the parallel design of our Semantic Reconstructor (SR) and Perception Reconstructor (PR), SR focuses more on the decoding of **video semantics**, while PR is more geared towards the reconstruction of **pixel-level information**. SR can significantly improve semantic-related metrics, and PR can improve pixel-level metrics. So in the end, when the two are combined, there is a trade-off between semantic and perception reconstruction. During training, the Video Diffusion model achieves a compromise between semantic and perception reconstruction. We will provide more discussion and deeper insights on this compromise effect. > **Question 12: Low-level Visual Details** Thanks for your suggestion. We will modify it to '***MinD-Video lacks design of low-level visual detailing, so it significantly diverges from the brain's visual system, exhibiting limitations in perceiving continuous low-level visual details.***' > **Question 13: Persistence of Vision** Thanks for your suggestion. Regarding the persistence of vision, we will carefully check the references and discussion and supplement more supporting references, outlined below. **Reference** [1] Ultra-High Temporal Resolution Visual Reconstruction From a Fovea-Like Spike Camera via Spiking Neuron Model, *TPAMI 2023* [2] CaRiNG: Learning Temporal Causal Representation under Non-Invertible Generation Process, *arXiv 2024* [3] Persistence Of Vision Display-A Review, *IOSR-JEEE e-ISSN 2015* [4] POV: Persistence of Vision, *International Journal of Ethics in Engineering & Management Education* [5] Persistence of vision: the interplay of vision, *Vision, Memory and Media 2010* --- Rebuttal 5: Title: Response to Minors of Reviewer oPVf Comment: > **Minor 1** We would modify this sentence to "***The alleviation of these limitations will require joint advances in multiple areas and significant further effort.***" Thank you.
> **Minor 2** fMRI measures brain activity by detecting changes in blood flow, and can thus reflect and quantify brain activity evoked by visual stimuli, an approach that has been widely applied in studies. The following are relevant references: **Reference** [1] Reconstructing perceived images from human brain activities with Bayesian deep multiview learning, *TNNLS 2019* [2] Survey of encoding and decoding of visual stimulus via FMRI: an image analysis perspective, *Brain imaging and behavior 2023* [3] Compressive spatial summation in human visual cortex, *Journal of Neurophysiology 2013* [4] fMRI evidence for areas that process surface gloss in the human visual cortex, *Vision research 2015* [5] A comparison of fMRI adaptation and multivariate pattern classification analysis in visual cortex, *Neuroimage 2010* [6] Spontaneous activity associated with primary visual cortex: a resting-state FMRI study, *Cerebral cortex 2008* [7] The human visual cortex, *Annual review of neuroscience 2004* > **Minor 3** Thanks for your advice! We will replace 'donated' with **'described'** in the final version. > **Minor 4** In the context of the current **diffusion model** and **attention-based transformer model** as the dominant models for image generation, the computational overhead required for image generation models is already large. The content of a video grows linearly with the number of frames, so the technical field of long video generation is still immature. We value your opinion and also believe that a clearer explanation is needed here. Therefore, we will revise the above statement to make it more understandable. > **Minor 5** Thank you for your professional suggestion. We will change the font in Figure 4 to 'Times New Roman' in the final version. > **Minor 6** Thanks for your kind reminder. Upon careful consideration, we acknowledge that using **'prove'** is indeed inappropriate since the paragraph is an exposition of the results of the experiment.
We agree with your suggestion and will change the sentence to ***"This may indicate that there were differences in the understanding of the video between subjects."*** > **Minor 7** The classifiers we use are **frozen pre-trained classifiers**, so the total number of categories is fixed, independent of the CC2017 dataset. The image classifier is an ImageNet classifier, pre-trained on ImageNet-1K [1], hence **1000 image classes**. The video classifier, based on VideoMAE [2], is trained on Kinetics-400 [3], an annotated video dataset with **400 classes**, including motions, human interactions, etc. **Reference** [1] Imagenet: A large-scale hierarchical image database, *CVPR 2009* [2] Videomae: Masked autoencoders are data-efficient learners for self-supervised video pre-training, *NeurIPS 2022* [3] The kinetics human action video dataset, *arXiv 2017* > **Minor 8** Thank you for your valuable addition. We will carefully review this literature and include it as a justification for using the keyframe approach in the final version. &nbsp; *** We would like to extend our heartfelt gratitude to you! Your insights and suggestions from a neuroscience perspective have provided us with numerous inspiring ideas, greatly benefiting not only our current work but also our future research endeavors. Additionally, your detailed feedback has made our submission more solid and complete. We hope our response adequately addresses your questions, and we eagerly look forward to further communication and discussions. Once again, we sincerely express our deepest gratitude and highest respect for your effort and time! Best wishes, All authors of Submission 351 --- Rebuttal Comment 5.1: Title: Post-rebuttal thoughts Comment: Thank you for a rebuttal as thoroughly prepared as the submission itself.
Despite the limitation of a low-n study and some debatable conclusions for the neurobiology (which had motivated my original score), AND given the authors keep their 5 rebuttal promises for the camera-ready, I am now willing to give an extra point and be a proponent of this work during the discussion with ACs. Thank you for an interesting read, and good luck! --- Reply to Comment 5.1.1: Title: Heartfelt Thanks Comment: We greatly appreciate your constructive feedback and meticulous review. It is good to know that our response has addressed some of your concerns. Although on some others we have not yet reached mutual agreement, we are committed to resolving them promptly to ensure the high quality of the work. Lastly, we would like to express our sincere gratitude for your increased rating and further support of our work! We hope this paper achieves satisfactory results, so that your efforts and suggestions are not in vain. Best wishes, All authors of Submission 351
Summary: The proposed framework NeuroClips introduces a strong pipeline for fMRI-to-video reconstruction in the field of brain visual decoding. The Perception Reconstructor (PR) maintains the motion of the video and the Semantic Reconstructor (SR) ensures the semantic information of the video. Multi-fMRI Fusion raises the upper limit on video length, and the overall model achieves impressive results. Strengths: 1. The paper is clearly written and easy to read, with clear diagrams and charts. 2. Experiments and discussions are conducted extensively, including rich video reconstruction and neural interpretation visualization content to validate the model's performance, which strengthens the results and the paper in general. 3. The proposed rough-video reconstruction in the Perception Reconstructor (PR) and the strategy of Multi-fMRI Fusion are generally innovative, and have greatly contributed to breaking through the low frame rate and fixed 2-second video length limitations of previous methods. In addition, the designs of NeuroClips for textual semantics and pre-trained diffusion models for video generation are also unique. Considering NeuroClips' powerful results and methods, I think it can be a strong baseline for the emerging fMRI-to-video reconstruction field. Weaknesses: 1. I browsed the anonymous site, and the generated results are impressive. However, the lighting of some of the reconstructed videos varies considerably compared to the ground truth, which can also be seen on the right side of Figure 2, and the authors need to explain this. 2. Existing state-of-the-art video generation models can generate high-frame-rate videos, such as 24 fps for up to 1 minute or even longer; however, NeuroClips at this stage has not yet reached this level. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. As discussed in MinD-Video [1], the nature of the hemodynamic response has been considered and specific modules are designed, which seem not to be included in NeuroClips.
Are there other considerations, or is there no need to account for the BOLD signals? 2. Since a number of cross-subject models already exist in the image reconstruction field [2], does NeuroClips need to train a separate model for each subject? 3. As you mention in the limitation section of the appendix, the CC2017 dataset test set contains too many no-show categories, and I'm curious whether the unsatisfactory results of previous methods are due more to the low quality of the dataset than to the methods themselves? 4. Why is the text contrastive loss placed after the diffusion prior and not before, as in [3]? [1] Chen, Zijiao, Jiaxin Qing, and Juan Helen Zhou. "Cinematic mindscapes: High-quality video reconstruction from brain activity." Advances in Neural Information Processing Systems 36 (2024) [2] Scotti, Paul S., et al. "MindEye2: Shared-Subject Models Enable fMRI-To-Image With 1 Hour of Data." arXiv preprint arXiv:2403.11207 (2024). [3] Sun, Jingyuan, et al. "NeuroCine: Decoding Vivid Video Sequences from Human Brain Activities." arXiv preprint arXiv:2402.01590 (2024). Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for taking the time to review our work. We value each of your suggestions and provide the following responses: > **Weakness 1: Light Variations** We highlight that **light variations** are also a feature that distinguishes video from images. Our initial hypothesis was that this phenomenon was caused by the presence of partial light variations in the blurred video of the Perception Reconstructor (PR). However, subsequent experiments involving the **removal** of the blurred video revealed that the effect persisted. This may be caused by the pre-training of AnimateDiff; a corresponding discussion will be further explored in the final version of the paper. It is still important to emphasise that although the phenomenon exists, it is only observed in a **limited number** of videos. > **Weakness 2: Longer and High-frame Videos** + **From the Dataset.** After counting the video clips in our dataset, we discovered that the longest video spanned merely around 10 seconds. Hence, there is no need to reconstruct videos that extend up to a minute in duration. + **From an availability standpoint.** The most cutting-edge video generation technology has now advanced to create longer videos, like **`Sora`**. However, the quality of the generated content still raises concerns due to the current limitations of the technology. Additionally, it's important to highlight that the majority of these technologies are **not open source.** + **From a Research Perspective.** fMRI-to-video reconstruction is an emerging field. At this critical stage, we believe that the applicability and scalability of innovative methods are of greater consequence. Empirically, once longer blurred videos or video generation tools are available, **NeuroClips** can generate **24-frame and longer videos**. However, this will **remarkably increase the GPU usage**.
From a research standpoint, we believe that the current reconstructed video is impressive and sufficient. > **Question 1: Hemodynamic Delay** Indeed, a considerable number of **exploratory experiments** have been conducted to study **hemodynamic delay**, a topic also considered in MinD-Video. In the current version of NeuroClips, with a fixed 4-second delay, it was observed that the semantics of some of the generated keyframes exhibited latency, particularly in instances where the video clips of the scene were of a **greater duration**. To address this, a **sliding window** comprising two or three fMRI frames was devised to actively learn the aforementioned delay. It was discovered that the application of a sliding window resulted in a **notable reduction** in the final evaluation metrics, with a **more pronounced negative impact** in the case of shorter video clips. It may be that longer videos have a more enduring effect on human brain fMRI signals. In light of the experimental outcomes, we ultimately opted to discard this approach and instead maintain a fixed delay. > **Question 2: Cross-Subject** Yes, it is necessary to train a **distinct** model for each subject when using NeuroClips. As cross-subject approaches in this area still fail to achieve satisfactory results, we ultimately chose to utilise and explore a **single-subject** model in this paper. However, in light of the recent advancements in **fMRI-to-image reconstruction**, a series of cross-subject models have emerged [1, 2, 3]. We believe that extending NeuroClips to the cross-subject setting will be a **promising avenue** for future research. **Reference** [1] Mindbridge: A cross-subject brain decoding framework, *CVPR 2024*. [2] Psychometry: An omnifit model for image reconstruction from human brain activity, *CVPR 2024*.
[3] MindEye2: Shared-Subject Models Enable fMRI-To-Image With 1 Hour of Data, *ICML 2024* > **Question 3: Low Quality of Dataset** Since both NeuroClips and the previous baselines are founded upon the **same dataset**, the ensuing comparison is **equitable**. The assertion that the CC2017 dataset is of poor quality is intended to **highlight** the difficulty of achieving impressive fMRI-to-video reconstruction results in **comparison to fMRI-to-image reconstruction**. The capacity to produce superior outcomes on datasets of inferior quality illustrates the **resilience** of the NeuroClips method. It would be of interest to ascertain whether NeuroClips could achieve even more impressive results with a higher-quality dataset. > **Question 4: Enhancement from Text Modality** Considering that the diffusion prior loss is rooted in MSE at the representation level, this prior is inherently **unstable** and the semantic information is susceptible to **bias**. It is important to recognise that the text modality possesses its own **distinctive characteristics** and **robust semantic support**, which serve to complement the image representation space. To enrich the semantic depth of the representation, we strategically place text assistance subsequent to the prior. &nbsp; *** You have offered many constructive and valuable suggestions, making our submission more solid and complete. Once again, we sincerely express our best gratitude for your effort and time! Best wishes, All authors of Submission 351 --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their responses. It is good to see that my concerns have been sufficiently addressed. I think the submission is solid, and I will raise my rating. --- Reply to Comment 1.1.1: Comment: We greatly appreciate your constructive feedback. We would like to express our sincere gratitude for your increased rating and further support of our work!
We hope that this paper achieves satisfactory results, so that your efforts and suggestions are not in vain. Once again, thank you, Reviewer mcBd. Best wishes, All authors of Submission 351
Summary: This paper proposes NeuroClips, a framework that decodes high-fidelity and smooth video from fMRI. NeuroClips uses a semantics reconstructor for video keyframes to ensure semantic accuracy and consistency, and a perception reconstructor for capturing low-level perceptual details, ensuring video smoothness. Strengths: NeuroClips is a framework that decouples high-level semantics and low-level perception flows for fMRI-to-video reconstruction, achieving high-fidelity and smooth video outputs. The framework addresses the temporal resolution gap between fMRI and video data, ensuring smooth and consistent video outputs through innovative modules like Inception Extension and Temporal Upsampling. Weaknesses: The video captures the semantic meaning well but fails to accurately follow the ground-truth movement. Technical Quality: 3 Clarity: 3 Questions for Authors: - Why is there no model or loss function for movement (motion)? - Are you willing to make the code publicly available? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: As mentioned above, motion reconstruction has not been resolved. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for taking the time to review our work. Your effort has ensured that our submission received adequate attention and review. We address the questions and clarify the issues accordingly as described below. > **Weakness 1: NeuroClips Fails to Accurately Follow the Ground Truth Movement** + Thanks for pointing out your concern. The fMRI-to-video decoding task presents significant challenges related to both high-level semantic comprehension and low-level motion perception. On the one hand, fMRI data is characterized by its high dimensionality, with hundreds of thousands of voxel signals captured across the entire brain. Even after rigorous voxel selection processes, approximately 10,000 voxels remain. On the other hand, fMRI data is heavily influenced by human behavior in response to visual stimuli, resulting in an extremely low signal-to-noise ratio. + The aforementioned hurdles complicate the decoding of precise and comprehensive ground-truth movement from fMRI data in fMRI-to-video reconstruction. Nevertheless, through this endeavor, we present elaborately designed components and loss functions tailored for motion perception. These refinements enable NeuroClips to efficiently capture motion details and show impressive video reconstruction results. It is a valuable and motivating contribution to the fMRI decoding community. Further elaboration on these advancements is presented in the following responses. > **Question 1: No Model or Loss Function for Movement** ***NeuroClips is equipped with a specific model design and a tailored loss function, which guarantee its ability to capture motion information.*** * **Regarding Model Design.** The Perception Reconstructor (PR) module we designed can perceive movement (motion) information.
In this work, we focus on capturing ***generic motion*** within videos by modeling the motion information from ***two perspectives***: perceiving the ***structured information within frames*** (pertaining to objects' shape, position, and orientation) and grasping the ***dynamic information across frame sequences*** (related to the movements of an object or the dynamics of the scene). To accomplish this, we introduce a spatio-temporal attention module within **Temporal Upsampling**, which is explicitly designed to decode spatiotemporal (structural-dynamic) details, i.e., motion cues, from fMRI data. We utilize the cues to guide the Video Diffusion model toward a more nuanced perception of motion dynamics, ultimately guaranteeing the motion-awareness of the video generation of NeuroClips. This is our meticulous design from the model perspective for capturing movement (motion) information. * **Regarding Loss Function.** The Mean Absolute Error (**`MAE`**) part within the loss function (**Eq. 1**) quantifies the disparity between the generated video and the groundtruth video frame-by-frame, facilitating the perception of generic motion within videos. Note that the perception embeddings of video frames, denoted as $\mathbf{E}_\mathcal{X}$, are aligned to the latent space of the Stable Diffusion Variational Autoencoder, which can be the equivalent of the pixel space at the frame level. Numerous recent video generation models have shown that the effective perception of generic motion can be achieved through two fundamental yet straightforward designs: **temporal attention mechanisms** and **frame-level loss functions** [1, 2]. Consequently, the design inspiration is embraced in this study. * **Regarding Experimental Evidences.** In **Figure 3**, the turtle's swimming direction, the airplane's orientation and flight direction, and the motorcycle rider's posture, along with their corresponding motion details, are **accurately reproduced**. 
If NeuroClips were crafted exclusively at the semantic level, it is evident that semantics alone would be inadequate for capturing these motions. In addition, the exceptional accuracy of NeuroClips, as evidenced by the **pixel-level** and **ST-level** metrics in **Table 1**, further corroborates these visualization outcomes. * **Regarding Fine-grained Motion Decoding.** Currently, decoding fine-grained movements from fMRI data poses a **significant challenge** due to its **intricate and noisy nature**. Unlike text, fMRI data lacks straightforward cues such as motion-descriptive words, making motion perception more complex. NeuroClips stands apart from previous text-to-video models that are capable of generating motions closely aligning with textual motion semantics. Note that NeuroClips has achieved significant progress in fMRI-to-video decoding tasks. Certainly, NeuroClips is by no means the final solution for fMRI-to-video decoding tasks; undoubtedly, superior solutions will emerge in the future. Looking ahead, there are **promising prospects** for enhancing the accuracy of decoding **fine-grained** movements from fMRI data. This will be the focal point of our upcoming efforts. > **Question 2: Code Release** We promise to release the code at the earliest opportunity following the acceptance of the paper. We appreciate and support open source because it helps more researchers contribute to the field and advances its development. This is an effective way to further enhance the significance and value of our research. **Reference** [1] Stable video diffusion: Scaling latent video diffusion models to large datasets, *Stability AI* [2] Align your latents: High-resolution video synthesis with latent diffusion models, *CVPR 2023* &nbsp; *** You have offered many constructive and valuable suggestions, making our submission more solid and complete. Once again, we sincerely express our best gratitude for your effort and time!
Best wishes, All authors of Submission 351 --- Rebuttal Comment 1.1: Comment: Thanks for your constructive and persuasive response. I will raise the score. --- Reply to Comment 1.1.1: Comment: We greatly appreciate your constructive feedback. We would like to express our sincere gratitude for your increased rating and further support of our work! We hope that this paper achieves satisfactory results, so that your efforts and suggestions are not in vain. Once again, thank you, Reviewer 85K8. Best wishes, All authors of Submission 351
Rebuttal 1: Rebuttal: We sincerely thank all reviewers, AC, and SAC for their valuable time and selfless dedication. We are very pleased to see that the reviewers recognize the quality of our presentation (mcBd, oPVf), consider our experiments solid and extensive (mcBd, oPVf), and approve the novelty or soundness of NeuroClips (85K8, mcBd, oPVf). In particular, we are greatly encouraged by reviewer oPVf's recognition of the significance of our work. Meanwhile, we deeply value the reviewers' precious suggestions and questions, which we have addressed one by one in the rebuttal. Pdf: /pdf/e75663b601a2e4b2a799869788a7caabea9b85e9.pdf
NeurIPS_2024_submissions_huggingface
2024
Off-policy estimation with adaptively collected data: the power of online learning
Accept (poster)
Summary: This paper presents an approach to estimating linear functionals of the reward function in contextual bandit settings from adaptively collected data. They consider the class of augmented inverse propensity weighted (AIPW) estimators and prove guarantees about the quality of the estimator in terms of the quality of the plug-in estimator of the mean reward. Specifically, they characterize the finite-sample MSE of the AIPW estimator on adaptively collected data. The quality of the estimator (in terms of low MSE) depends on the quality of the plug-in estimator of the mean. They relate online learning of the plug-in estimator of the mean reward to online non-parametric regression; they then prove a regret bound, where regret relates to the quality of the plug-in estimator learned online. ***edited score based on rebuttal. See comment below*** Strengths: This paper's presentation is technically precise and seems mathematically thorough. The authors' approach of relating the estimation problem to online learning seems interesting and creative. Weaknesses: - There is significant missing discussion of relevant related work on finite-sample approaches for off-policy evaluation in contextual bandits. For example, "Anytime-valid off-policy inference for contextual bandits" by Waudby-Smith et al., "Off-Policy Confidence Sequences" by Karampatziakis et al., and "Optimal and Adaptive Off-policy Evaluation in Contextual Bandits" by Wang et al., as some examples. I would recommend the authors compare to these papers both theoretically and ideally experimentally as well. - The main result the authors present that holds for general plug-in estimators of the mean is the derivation of the MSE upper bound for the AIPW estimator (Theorem 3.2) as a function of the "regret" of the plug-in estimator error. Beyond this, the authors provide results for plug-in estimators in tabular data settings (Theorem 3.3) and plug-in estimators that are linear models (Theorem 3.4).
I feel this is a limited contribution in terms of the types of plug-in estimators the authors can provide guarantees for. - The main results presented in this work assume that the behavior policy has a constant exploration rate (Assumption 1). Other works on statistical inference after adaptive sampling can generally prove results even when the exploration rate decays. - The writing is technically precise but extremely dense and often lacks sufficient context. I give explicit examples below: (-) The final result in section 4 was extremely hard to parse. There is a mention of "mis-specification" in the section 4.1 title, but then there is no mention of mis-specification anywhere in the section itself. It is not easy to understand how the result presented in that section is related to mis-specification. (-) There was little discussion as to why the perturbed IPW estimator was introduced in section 3.1, rather than just starting with the AIPW estimator in 3.2 (since there were no examples discussed of the perturbed IPW estimator that were not some version of an AIPW estimator). Furthermore, if the discussion in 3.1 is kept, the connection between the two estimators presented in 3.1 and 3.2 respectively needs to be made more explicit. For example, explicitly stating whether, for a certain choice of $f$, these two estimators are the same. Technical Quality: 3 Clarity: 2 Questions for Authors: - In line 97, I do not understand why $\mu$ is called a treatment effect. It is also called the "reward model" and appears to be the expected reward function (a function of context and action). Why is this called a treatment effect? This is not what is "commonly referred to" as a treatment effect in the causal inference literature. - In line 107 you state that it is assumed the propensities are "revealed"; does this mean known?
- Please define the << notation used in line 117. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: - A severe limitation of this work (especially compared to previous literature) is that they only provide guarantees about the mean squared error of the AIPW estimator and do not provide any approach to constructing confidence intervals for the quantities of interest. I would expect most people interested in using statistical inference methods on adaptively collected data (especially if the data is small enough to warrant using a linear plug-in model or being in a tabular data setting) would care not solely about estimation but also about uncertainty quantification. As a result, the practical utility of this work is very limited. - There are no simulations demonstrating their approach in practice or comparing to other methods. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed review. Below are point-by-point responses. We hope the reviewer can increase the score if our responses address your concerns. 1. Re "There is significant missing discussion of relevant related work on finite sample approaches for off policy evaluation in contextual bandits." Thank you for the suggested related works. We will discuss these in our paper. Our main goal is different from the suggested papers. The first two papers focus on constructing confidence intervals (or sequences) for the off-policy value, while our objective is estimation of the off-policy value. The last paper (which we have cited in the paper) focuses on OPE with i.i.d. data, while our work focuses on OPE with adaptively collected data. Even if we translate the confidence intervals to prediction (in some way), it is hard to compare theoretically as no specific rate is provided for the confidence intervals in the paper “Anytime-valid off-policy inference for contextual bandits” by Waudby-Smith et al. Nevertheless, their methods are interesting, and we will try to compare our method with them empirically. 2. Re "The main result which holds for general plug-in estimators of the mean reward function." We note that our main results (Theorems 3.1 and 3.2) hold in general: Theorem 3.1 holds for any sequence of estimates of the treatment effect, and Theorem 3.2 holds for estimates of the treatment effect resulting from any no-regret learning algorithm. Theorems 3.3 and 3.4 are just two examples (i.e., corollaries) of Theorem 3.2 to demonstrate how to apply the framework of online learning to some concrete classes of outcome models. We also provide results for the case of general function approximation in Appendix B.6 that go beyond the two examples on the tabular setting and linear designs (i.e., the case of linear function approximation). 3. 
Re "The main results presented in this paper assume that the behavioral policies have constant exploration rates (Assumption 1)." Similar to the response above, the strict overlap condition (Assumption 1) is imposed in the examples (Theorems 3.3 and 3.4, and Appendix B.6) of one of our main results---Theorem 3.2. Theorem 3.2 itself is applicable to cases with decaying exploration rates, but for simplicity we didn’t pursue this in this work. We leave this point as future work. 4. Re "The writing is technically precise, but extremely dense and often lacks sufficient context. " Thanks for the critique. We will revise the submission accordingly. Here, we give some clarifications on the terminologies “treatment effect” or “mean reward function.” These two terminologies are used separately in two communities: causal inference, and bandits and reinforcement learning. They can be related as seen in line 111 of our paper. This choice of terminology also aligns with a prior work “Off-policy estimation of linear functionals: Non-asymptotic theory for semi-parametric efficiency” by Mou et al. 5. Re "A severe limitation of this work (especially compared to the existing literature) is that the authors only provide guarantees about the mean-squared error." We cannot agree with the reviewer on this point. We view the estimation task as equally important as uncertainty quantification (e.g., constructing confidence intervals) from both theoretical and practical perspectives. Moreover, our proposal, the doubly-robust (DR) estimator using an online learning algorithm as a sub-routine, is new to the literature, and practically useful. 6. Re "There are no simulation results demonstrating their approach in practice or comparing against the other methods in the literature." We provided preliminary experimental results to corroborate our theory. Please check the attached PDF file. 
We mostly followed the experimental set-up in the paper “Off-policy evaluation via adaptive weighting with data from contextual bandits” by Zhan et al. Our DR estimator using an online learning algorithm as a sub-routine performs competitively against other estimators, while at the same time enjoying provable finite-sample performance guarantees. We will add more numerical simulation results to understand the pros and cons of each estimator. --- Rebuttal Comment 1.1: Comment: Hello, I will raise my score a bit. I appreciate the addition of empirical evaluation and the addition of discussion of related work. This addresses bullet 1 of the weaknesses I stated. Through rewriting it seems like bullets 2 and 4 can be addressed. My remaining concerns are bullet 3, and that the amount of edits needed (adding simulation results, and changing the writing for bullets 2 and 4) amounts to quite a lot of changes/additions that will not be re-reviewed.
Summary: The paper investigates the challenge of estimating a linear functional of the treatment effect from adaptively collected data, commonly found in contextual bandits and causal inference studies. It introduces finite-sample upper bounds for the mean-squared error (MSE) of augmented inverse propensity weighting (AIPW) estimators and proposes a reduction scheme to minimize these bounds. The method is illustrated through three concrete examples. Additionally, the paper establishes a local minimax lower bound, demonstrating the instance-dependent optimality of the AIPW estimator. Strengths: - The paper extends the non-asymptotic theory of AIPW estimators to adaptively collected data. - The paper provides both an upper bound and a local minimax lower bound on the MSE of the off-policy value, which quantifies the similarity between a given target evaluation function $g$ and the treatment effect $\mu^*$. Weaknesses: - Although the paper is primarily theoretical, it would be helpful if the authors could include some simulation experiments to verify the theoretical results. For example, it would be interesting to see how the regret converges in practice in the examples of Section 3.5. - It would be beneficial if the authors could provide more explanations regarding certain definitions. For example, the off-policy value is defined as the expectation of the inner product between $g$ and $\mu^*$, rather than, e.g., the expectation of $g$ itself. What is the rationale behind this definition? In addition, why is the perturbed IPW estimator considered over the traditional IPW estimator? - The paper has generalized the theory developed for i.i.d. data to adaptively collected data and discussed the technical difficulties in Section 3.2. Could the authors compare the results for i.i.d. data with those for adaptively collected data? Is there any efficiency loss when the data is collected adaptively? Technical Quality: 3 Clarity: 2 Questions for Authors: See above. 
Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Limitations have been discussed in the checklist. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the suggestions. Below are point-by-point responses. We hope the reviewer can increase the score if the weaknesses listed in the review have been addressed. 1. Re "it would be helpful if the authors could include some simulation experiments to verify the theoretical results." We provided preliminary experimental results to corroborate our theory. Please check the attached PDF file. We mostly followed the experimental set-up in the paper “Off-policy evaluation via adaptive weighting with data from contextual bandits” by Zhan et al. Our DR estimator using an online learning algorithm as a sub-routine performs competitively against other estimators, while at the same time enjoying provable finite-sample performance guarantees. We will add more numerical simulation results to understand the pros and cons of each estimator. 2. Re "It would be beneficial if the authors could provide more detailed explanations regarding certain definitions." 2.1. Our formulation of the off-policy value using the inner product between the treatment effect $\mu^*$ and the evaluation function $g$ is versatile: As elucidated in lines 109-121 of our paper, our definition of the off-policy value in equation (1) recovers several important quantities of interest including the average treatment effect (ATE), its weighted variants, and the value function in contextual bandits. 2.2. The perturbed IPW estimator reduces the variance of the standard IPW estimator. This can be observed from Proposition 3.1 in our paper. With a proper choice of the collection of auxiliary functions $f$, the perturbed IPW estimator has smaller variance. 3. Re "Could the authors compare the results for i.i.d. data with those for adaptively collected data?" As seen from both the upper bounds and lower bounds, the main difference lies in how the size of the noise is measured: see equations (7) and (2) in our paper. 
In an adaptive data collection model, the weighted $\ell_2$-norm of the noise depends on time-varying history-dependent behavioral policies, while in the i.i.d. data collection model, the weight is fixed over time and history-independent: see the paper “Off-policy estimation of linear functionals: Non-asymptotic theory for semi-parametric efficiency” by Mou et al. --- Rebuttal Comment 1.1: Comment: Thank you for your thoughtful and detailed response to my comments. I have no further questions at this time. As noted by other reviewers, the paper introduces some concepts that are less commonly seen in the literature, making it essential to provide sufficient context and explanation. It appears that the current work is an extension of [1], which indeed provides more background. I recommend including more intuition and context in future revisions to enhance clarity. [1] Mou, W., Wainwright, M.J. and Bartlett, P.L., 2022. Off-policy estimation of linear functionals: Non-asymptotic theory for semi-parametric efficiency. arXiv preprint arXiv:2209.13075.
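The variance-reduction claim in point 2.2 above can be illustrated with a small Monte Carlo sketch. This is an illustrative simulation, not the paper's experiments: the linear reward model, the constant propensity of 0.3, and the target "value of always playing action 1" are all assumptions chosen only for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 500, 2000
e1 = 0.3                  # behavior propensity of action 1 (constant overlap)

def mu(x, a):
    return a + x          # assumed true mean reward mu*(x, a), actions a in {0, 1}

true_value = 1.0          # V = E[mu*(X, 1)] = E[1 + X] = 1 for X ~ N(0, 1)

ipw_vals, aipw_vals = [], []
for _ in range(reps):
    x = rng.normal(size=n)
    a = (rng.random(n) < e1).astype(float)
    y = mu(x, a) + 0.5 * rng.normal(size=n)
    w = a / e1                                   # importance weights for action 1
    ipw_vals.append(np.mean(w * y))              # standard IPW
    # AIPW: plug-in reward model plus an IPW-weighted correction; with a good
    # plug-in the correction only has to absorb the observation noise
    aipw_vals.append(np.mean(mu(x, 1.0) + w * (y - mu(x, 1.0))))

# both estimators are unbiased, but AIPW has markedly smaller variance
print(np.var(ipw_vals), np.var(aipw_vals))
```

Here the plug-in is the true $\mu^*$, the best case; in the paper's setting the plug-in is itself estimated, which is what Proposition 3.1's choice of auxiliary functions addresses.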
Summary: This paper studies the off-policy evaluation problem in the sequential decision setting with adaptively collected data. The authors propose to use the augmented inverse propensity weighting (AIPW) estimator to estimate the policy value and conduct an extensive theoretical analysis of the estimator, including its variance and mean squared error. Based on the analysis, the authors propose methods to learn the function estimator used in the AIPW estimator. Strengths: This paper comprehensively analyzes the properties of the AIPW estimator, including the variance and MSE bounds. Therefore, I think this is a theoretically solid paper, and the connection between the theory and the method is smooth and well-grounded. Weaknesses: I am confused by the claims of "adaptive" and "online learning" in the title. The two terms suggest that the decision-maker can adaptively select actions during the decision process. However, it seems that the records (context, actions, outcomes) are passively observed in this problem, so I am concerned that the paper may be mis-positioned. The technical distinction between the sequential decision setting and the static setting is therefore not clearly presented; it seems to be a trivial extension of the traditional policy evaluation problem in the static setting. Technical Quality: 3 Clarity: 3 Questions for Authors: The formulation of the perturbed IPW estimator in section 3.1 seems different from the doubly robust estimator. Can the authors explain the connection between the perturbed IPW estimator and the doubly robust estimators in other papers, for example, the doubly robust estimators in [1]? [1] Miroslav Dudik, John Langford, and Lihong Li. 2011. Doubly robust policy evaluation and learning. In International Conference on Machine Learning. 1097–1104. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I think the discussion of the limitations in the paper is not sufficient. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments. Below are our point-by-point responses to better position our paper and explain our contributions. We hope the reviewer can increase the score if the confusion about the paper and its contributions has been resolved. 1. Re "I am confused about the claim of "adaptive" and "online learning" in the title." a) Why "adaptive" in the title? We focus on off-policy evaluation (OPE): the problem of estimating the value function based on an offline dataset. In OPE, the word “adaptive” means that in the offline dataset, the actions could be adaptively chosen based on previous samples. This is in contrast to OPE with an i.i.d. dataset, in which actions are drawn depending only on the current context. b) Why "online learning" in the title? We use an “online learning” algorithm (or a no-regret learning algorithm) to iteratively learn the treatment effect function, and then use the resulting estimates in forming the AIPW estimator to solve the OPE problem; see Algorithm 1 in the paper. This is also different from OPE with an i.i.d. dataset, in which an “offline learning” algorithm such as empirical risk minimization (ERM) is applied to learn the treatment effect. This gives rise to the term “the power of online learning” in the title. 2. Re "the technical distinction between the sequential decision setting and static setting is not clearly presented." We hope the response above has already resolved some confusion about sequential vs static. We expand a bit on the challenge and our contributions here. We view OPE as a static decision-making problem as the pre-collected offline dataset is already provided to the learner, and the learner's goal is to estimate the off-policy value. The major challenge here is that the offline data is adaptively collected, that is, in the sequence of observations of the form (context, action, outcome), the actions can be chosen depending on previous data. 
Focusing on OPE with adaptively collected data, we investigate finite-sample guarantees of the class of AIPW estimators, propose a specific AIPW estimator with a sequence of estimates of the mean reward function learned via an online learning algorithm, and show its near-optimality in a minimax sense. All of these are new to the literature. 3. Re "The formulation of the perturbed IPW estimator in Section 3.1 seems to be different from the doubly-robust (DR) estimator. " The AIPW estimator (i.e., the DR estimator) is a special case of the perturbed IPW estimator in which the collection of auxiliary functions takes the form of $f^*$ in equation (5) of our paper, with the treatment effect $\mu^*$ replaced by its empirical estimates. This yields the DR estimator (cf. Eq (1)) in the paper “Doubly robust policy evaluation and learning” by Dudik et al. --- Rebuttal Comment 1.1: Comment: Thanks for your response. I now understand the meaning of "adaptive" and "online learning". However, I am still confused about why this problem needs the machinery of online learning. Why does ERM fail in this setting? Can you give a practical example demonstrating the practical significance? --- Reply to Comment 1.1.1: Comment: We would like to thank the reviewer for engaging in the discussion. The classical empirical risk minimization (ERM) is not an appropriate strategy for estimation of the treatment effect $\mu^*$ in our setting for the following two reasons. First, we would argue that online learning is a more natural strategy for estimation of the treatment effect $\mu^*$ in our setting, compared to the classical ERM. In view of Theorem 3.1 in our paper, we need to aim at building a sequence of estimates of the treatment effect $\mu^*$ that minimizes the weighted average estimation error in equation (11). This objective naturally falls into the realm of online learning, where a sequence of decisions is made to minimize a sequence of loss functions. 
On the contrary, for the i.i.d. data collection model, one only needs to construct a single estimate $\hat{\mu}$ of the treatment effect $\mu^*$ that minimizes a certain weighted mean-squared error; see the equation (11) in [1] for the construction of the estimate. In this case, the classical ERM (the non-parametric weighted least-squares estimate for this case) is more natural. Second, one could consider using the ERM in each step of the framework of online learning, i.e., an “adaptive” ERM. However, it is known that this algorithm may incur a linear regret in the worst case, which motivates us to employ no-regret learning algorithms such as the Follow-The-Regularized-Leader (FTRL; basically using the regularized ERM in each step) or its optimistic variants. [1] Wenlong Mou, Martin J. Wainwright, and Peter L. Bartlett, “Off-policy estimation of linear functionals: Non-asymptotic theory for semi-parametric efficiency”, arXiv preprint arXiv:2209.13075, 2022.
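The recipe described in this thread (a per-step regularized ERM, i.e., an FTRL-style online learner, feeding its running estimate into the AIPW correction) can be sketched as follows. This is a hedged toy sketch, not the paper's Algorithm 1: the linear per-action reward model, the constants, and the epsilon-greedy behavior policy with a constant exploration floor are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
T, lam, eps = 2000, 1.0, 0.2      # horizon, ridge penalty, exploration floor
theta = {0: 0.5, 1: 2.0}          # assumed true slopes: mu*(x, a) = theta_a * x
S = {0: lam, 1: lam}              # per-action running sums of x^2 (plus ridge)
b = {0: 0.0, 1: 0.0}              # per-action running sums of x * y
aipw_sum = 0.0

for t in range(T):
    x = rng.normal(1.0, 1.0)                           # context
    mu_hat = {a: b[a] / S[a] * x for a in (0, 1)}      # current online estimates
    # adaptive behavior policy with a constant exploration rate (Assumption 1):
    # epsilon-greedy on the running reward-model estimates
    greedy = max(mu_hat, key=mu_hat.get)
    p1 = (1 - eps) * (greedy == 1) + eps / 2           # known propensity of action 1
    a = int(rng.random() < p1)
    y = theta[a] * x + 0.3 * rng.normal()              # observed outcome
    e_a = p1 if a == 1 else 1 - p1
    # AIPW term for the value of the target policy "always play action 1";
    # using the estimate from *before* this sample keeps the correction a
    # martingale difference, so the estimator stays unbiased
    aipw_sum += mu_hat[1] + (a == 1) / e_a * (y - mu_hat[1])
    # online update: per-step regularized least squares (an FTRL step with a
    # quadratic regularizer reduces to ridge regression here)
    S[a] += x * x
    b[a] += x * y

estimate = aipw_sum / T   # targets V = E[mu*(X, 1)] = theta_1 * E[X] = 2.0 here
print(estimate)
```

The point of the sketch is structural rather than quantitative: the reward model is re-fit after every sample, exactly the setting where a no-regret learner is natural and a single offline ERM fit is not.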
null
null
Rebuttal 1: Rebuttal: Please check the attached PDF file for preliminary experimental results to corroborate our theory in the paper. Pdf: /pdf/634369da97f80b3c1a2b7cbd01f2762023100dad.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Beyond the Doors of Perception: Vision Transformers Represent Relations Between Objects
Accept (poster)
Summary: The paper aims to answer how Vision Transformers (ViTs) perform tasks requiring visual relational reasoning. The study focuses on two tasks: identity discrimination and relational match-to-sample (RMTS), showing that ViTs process information in two distinct stages: perceptual and relational. Further analyses using mechanistic interpretability approaches validate these results. Overall, this work offers insights into same-different judgement, which is intuitive to humans but has proven hard for AI. Strengths: - The similarity judgement task has long been studied in humans and animals, offering insights into perception and cognition, which is why the main question addressed in this study is of high interest to both the psychology and AI communities - The analyses included support the main claims - The boundaries of the claims, and the questions remaining to be answered, are clearly stated in the discussion Weaknesses: - Related work is poorly presented: Despite the vast literature on both the question and the methodology, only one paragraph was allocated to related work. There are other works suggesting two-phase processing in transformers (see below for an example), and I think discussing them in the context of this work would help put the relational reasoning in a broader context. - Perhaps related to the first point, a theoretical explanation supporting the findings, that is, of the conditions under which these distinct phases appear, would make the work stronger. @misc{cui2024, title={A phase transition between positional and semantic learning in a solvable model of dot-product attention}, author={Hugo Cui and Freya Behrens and Florent Krzakala and Lenka Zdeborová}, year={2024}, url={https://arxiv.org/abs/2402.03902}, } Technical Quality: 3 Clarity: 3 Questions for Authors: I am curious about the potential impact of CLIP's text co-training, in contrast to DINO's image-only training, on their respective performances when fine-tuned for same-different tasks. 
Specifically, I wonder if CLIP's joint training on images and captions forces its embedding space (particularly in the later layers) to deviate from purely visual representations to accommodate textual information as well. Could this difference in training approaches be a significant factor in explaining their divergent behaviors on visual reasoning? In that case, the perceptual/reasoning phases are probably not a property of ViTs (as the title suggests) but a product of transformers combined with multi-modality. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - The related work section is very limited. - Claims on ViTs seem to be confined to CLIP, so a slight revision of the title and text would make the basis of the claims stronger. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review! Find point-by-point replies below: **Weaknesses**: 1. “Related work”: We completely agree! We will use the extra space in the final submission to expand our literature review section to include more insights from both the (vast) cognitive science literature and the mechanistic interpretability literature (see global rebuttal #1 for specific topics). We are also happy to include more related work on the processing pipeline in transformers (including the reference you mentioned). 2. “Theoretical explanation”: We agree, and we are curious to find this out ourselves! While we believe a proper treatment of the learning dynamics that give rise to multi-stage processing in transformers is out of scope for this work, we are highly interested in pursuing this direction in the future. Our current understanding is quoted below and is supported by additional analyses on DINOv2, a vision-only ViT whose pretraining dataset is closer in scale to CLIP than the other models. Despite using a very different training objective, DINOv2 matches CLIP’s performance on discrimination and Relational-Match-to-Sample (RMTS) tasks (99.5% test accuracy on discrimination and 98.2% on RMTS); crucially, it also exhibits similar two-stage processing. We will include these results upon acceptance, though see Figure 1 in the supplemental PDF for an attention pattern analysis on DINOv2. > “Raghu et al. (2021) finds that models pretrained on more data tend to learn local attention patterns in early layers, followed by global patterns in later layers. This might give CLIP a particular incentive to create local object representations, which are then used in relational operations. Future work might test this hypothesis.” **Questions**: 1. “Multi-modality”: We were very interested in this question as well. As mentioned earlier, we analyze DINOv2 to explicitly test this. 
While CLIP does show some differences from DINOv2 (see general rebuttal #2 and the response to reviewer NSpf, question #2), DINOv2 largely recapitulates the results from CLIP. This suggests that data scale drives the adoption of two-stage processing rather than multi-modality. **Citations**: 1. Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., & Dosovitskiy, A. (2021). Do vision transformers see like convolutional neural networks? Advances in neural information processing systems, 34, 12116-12128. --- Rebuttal Comment 1.1: Comment: Thank you for your response. The new experiments on the effect of multi-modality certainly help support the main claim regarding ViTs (although they are not quite there yet), and it's encouraging to see that the authors are willing to address the other limitations regarding prior work in the final version. I raised the confidence score. Looking forward to seeing the improved final version of the work.
Summary: This paper examines how vision transformers (ViTs) process visual relations, focusing on same-different tasks. The paper finds that pretrained ViTs fine-tuned on these tasks often develop a two-stage processing pipeline: a perceptual stage that extracts object features into separate representations, followed by a relational stage that compares these representations. Using interpretability techniques, they show that the perceptual stage creates disentangled object representations, while the relational stage implements somewhat abstract comparisons. The paper demonstrates that failures in either stage can prevent models from learning generalizable solutions. By analyzing ViTs in terms of these processing stages, the authors suggest we can better understand and improve how models handle relational reasoning tasks. Strengths: - Addresses an important drawback of supposedly generalist vision models - Clear demonstration and definition of the tasks that the paper focuses on (e.g. Figure 1) - Thorough analysis of results, both in terms of performance metrics and model mechanisms - Abundant visualization of results - Constructed datasets and tasks that can benefit future studies of this topic Weaknesses: - Focuses on toy settings; lacks evaluation on more advanced/real-world tasks. - While the central claim of dividing ViT processing into a perceptual and a relational stage is inspired by infant and animal abstract-concept learning (line 71), there is a risk of confirmation bias from this analogy. - Plots in the paper require clearer annotations and explanations (e.g. 
Figure 2 has too many acronyms, and it is difficult to understand what conclusion should be drawn from the plot) - Lacks qualitative examples that intuitively describe how the model processes relational examples - While the abstract claims that understanding relational reasoning helps rectify shortcomings of existing and future models (line 19), the paper does not discuss much about what can be done to improve model relational reasoning (especially empirical experiments for improvements). Technical Quality: 3 Clarity: 2 Questions for Authors: - What would the same analysis methods yield when applied to real-world images or more complex reasoning tasks such as ARC-AGI? - Can you compare the mechanistic interpretation results with other text-based interpretability methods such as the logit lens? - How can relational reasoning capability be improved in vision models? What are the implications of such potential improvements? - How would success or failure in relational reasoning impact CLIP in terms of its embeddings' alignment with text descriptions? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors do not discuss limitations (although the paper checklist claims that limitations are discussed in Section 8). Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review! Find point-by-point replies below: **Weaknesses**: 1. “Toy settings”: Fair point! To address this, we created a realistic same-different dataset using 3D models of objects and used it to evaluate our models, similarly to [1]. See Figure 2 in the supplemental PDF for examples of this dataset. We find that CLIP attains a zero-shot test accuracy of 93.9% on this dataset, while all other models attain chance performance. We also find that CLIP (and DINOv2, despite only achieving chance accuracy) exhibits similar two-stage processing on the realistic stimuli without additional fine-tuning on them (albeit with more attention paid to background tokens). This generalizes our results from the toy setting. See Figure 3 in the supplemental PDF for attention pattern analyses on these stimuli. We note that this task is not a perfect analogue to our synthetic setting due to more realistic perceptual variation (i.e. lighting, rotation, occlusion), but hopefully its inclusion can ameliorate concerns about real-world data/tasks! 2. “Confirmation bias”: This is a reasonable concern. However, we try to let the data speak for themselves. Though our attention pattern analysis might be more qualitative and up for subjective interpretation, our analyses in sections 4 and 5 make clear that there is a marked difference between early and late processing along the lines of what we have described as a “two-stage processing pipeline.” 3. “Clearer annotations”: We completely agree. Upon acceptance, we plan to use the extra space in the final submission to make this figure larger and include more detailed annotations. We are happy to clarify our interpretation of Figure 2 as well (especially in the caption). 4. 
“Qualitative examples”: We have some qualitative examples in Figure 11 (in the appendix), but we have created a new figure that gives much more intuition and have also written a greatly expanded appendix outlining this and Figure 11. See Figure 4 in the supplemental PDF for the new figure. 5. “Improve model relational reasoning”: In the submission, we attempted to induce relational reasoning by encouraging models to form local, object-level representations using an auxiliary loss in Section 7. However, your point is very well taken, and we had similar thoughts. In light of this, we have devised a new auxiliary loss derived from the attention head scores in Section 3 that explicitly encourages models to exhibit two-stage processing. We hypothesized that this would improve relational reasoning. We find that training randomly-initialized ViTs on the discrimination task with this additional loss term significantly boosts performance (76.5% to 93.9% test accuracy; +17.4). It also boosts the model’s compositional generalization (75.9% to 92.3% accuracy; +16.4). We have promising preliminary results using this loss to improve performance on the RMTS task and will comment on them when ready. This loss is also quite general, and we believe that it can be easily adapted to and possibly improve performance for a wide variety of relations. We will include these new empirical experiments and details about the loss in the main body of the paper upon acceptance. **Questions**: 1. “Real world images”: See response to weakness #1. 2. “Logit lens”: At your suggestion, we have implemented and run a logit lens analysis on several models, analyzing how different model components contribute to “same” and “different” classifications. The analysis reveals that CLIP seems to process examples in a qualitatively different way compared to the rest of the models, perhaps pointing towards a mechanism for its success on the realistic stimuli. 
In particular, CLIP appears to make greater use of register tokens in the background compared to other models; it also appears to use the CLS token in non-intuitive ways for intermediate computations (rather than just gradually storing the “same” or “different” decision in CLS across layers, as other models do). We will include this analysis in the appendix, though we leave further investigation to future work. 3. “Improve relational reasoning”: See response to weakness #5. 4. “Alignment with text descriptions”: In general, a vision model that cannot perform relational reasoning will not discriminate between closely matched sentences (e.g. “plants surrounding a lightbulb” vs. “a lightbulb surrounding plants”). Indeed, this has been borne out in the literature in the form of Winoground [2] among other datasets. **Citations**: 1. Tartaglini, A. R., Feucht, S., Lepori, M. A., Vong, W. K., Lovering, C., Lake, B. M., & Pavlick, E. (2023). Deep neural networks can learn generalizable same-different visual relations. arXiv preprint arXiv:2310.09612. 2. Thrush, T., Jiang, R., Bartolo, M., Singh, A., Williams, A., Kiela, D., & Ross, C. (2022). Winoground: Probing vision and language models for visio-linguistic compositionality. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 5238-5248). --- Rebuttal Comment 1.1: Comment: Thank you for your response. These additional experiments are convincing, and I have raised my score. I have the following additional questions: 1. Do the stimulus patterns vary across samples and impact model performance? 2. You provide results for model training with the auxiliary loss in Table 1 of the supplementary material. What task is the model trained and evaluated on? --- Reply to Comment 1.1.1: Comment: Thank you for the followup questions! We are happy to answer as best we can. 1. 
I am not exactly sure what you mean by this, but the Gaussian noise applied to each object is randomly sampled independently for each image when we create our datasets. So, all blue X's (for example) have different noise patterns in each image. We explored removing the Gaussian noise earlier in this project, and found that this had little effect on downstream performance. 2. Table 1 contains results from training models on a discrimination task with the auxiliary loss applied. We use identical training and testing datasets with and without including the auxiliary loss. The training sets contain 32 different shape-color pairs, the test set contains new images with the same 32 pairs, and the compositional dataset contains the remaining held-out pairs. These datasets contain metadata defining the color and shape of each object in the image, and we use this metadata to define our auxiliary loss function. In the prose, we reference an analogous version of this experiment on the RMTS task. Table 4 in Appendix J presents these results. We hope this clears things up! Let us know if you have any further followup questions.
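The within-object attention analysis discussed throughout this thread can be sketched in a few lines. The score definition below is an assumed simplification for illustration, not the paper's exact metric: given one head's attention matrix and an object id per token, it measures the average attention mass that each object token sends to tokens of its own object.

```python
import numpy as np

def within_object_score(attn: np.ndarray, obj: np.ndarray) -> float:
    """Hypothetical within-object (WO) attention score.

    attn: (tokens, tokens) row-stochastic attention matrix for one head.
    obj:  (tokens,) integer object id per token (-1 = background/CLS).
    Returns the mean attention mass that foreground queries place on keys
    belonging to the same object. High WO in early layers and low WO in
    deep layers would match the perceptual-then-relational pipeline.
    """
    fg = obj >= 0
    same = obj[:, None] == obj[None, :]      # same-object indicator matrix
    wo = (attn * same)[fg].sum(axis=1)       # per-query within-object mass
    return float(wo.mean())

# toy example: 4 tokens forming two 2-token objects, block-local attention
attn = np.array([[0.45, 0.45, 0.05, 0.05],
                 [0.45, 0.45, 0.05, 0.05],
                 [0.05, 0.05, 0.45, 0.45],
                 [0.05, 0.05, 0.45, 0.45]])
obj = np.array([0, 0, 1, 1])
print(within_object_score(attn, obj))   # near 0.9: strongly local ("perceptual")
```

A between-object (or within-pair) score can be defined analogously by swapping the `same` mask for its complement over foreground tokens, which is presumably the flavor of the WO-vs-WP comparison the reviews refer to.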
Summary: The paper uses techniques from mechanistic interpretability to analyze the algorithms implemented by pretrained ViTs to solve abstract visual reasoning tasks. The authors use two synthetic same-different tasks, discrimination and relational match-to-sample (RMTS), to analyze CLIP-pretrained, DINO-pretrained, and ImageNet-pretrained ViTs finetuned on these tasks, as well as a ViT trained from scratch. Some pretrained ViTs, especially the CLIP-pretrained ones, demonstrated a strong perceptual stage in the early layers, disentangling object representations, followed by a relational stage in the later layers, which implements somewhat abstract same-different relations. The authors also demonstrate that the formation of only the perceptual stage is enough to solve simple relational tasks like discrimination, but not enough for RMTS. Strengths: 1. The paper is well written and easy to follow. 2. Interesting use of mechanistic interpretability to analyze how pretrained ViTs solve same-different tasks: discrimination and relational match-to-sample (RMTS). 3. The paper shows that early layers of CLIP-pretrained ViTs demonstrate local attention for within-object processing (perceptual stage), whereas the later layers demonstrate global attention for between-object processing (relational stage). The two-stage processing doesn’t form strongly for the other pretrained models (DINO and ImageNet), especially the relational stage. 4. The perceptual stage is characterized by disentangled object representations (color and shape). Models trained from scratch can be enforced to have disentangled representations using an auxiliary loss, which is enough for generalization in simple relational tasks like discrimination but not for the RMTS task. Weaknesses: 1. Analysis is limited to simple tasks consisting of simple relations. It remains to be seen whether the results would hold up for more tasks involving higher-order and more complicated relations. 2.
The paper doesn’t give a clear and detailed intuition for why CLIP-pretrained ViTs implement the two-stage processing more strongly compared to other pretrained ViT models. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. According to Fig. 2a,c, deeper layers don’t seem to be dominated by global heads, as the WO attention score is higher than WP? 2. Analyses of some models are missing in Fig. 2 (e.g. DINO Discrimination and ImageNet-pretrained ViTs). 3. Can the authors also perform the same analysis for ViTs pretrained using the masked autoencoding objective [1]? 4. Are the results in Fig. 6 averaged over all models? [1] - He, K., Chen, X., Xie, S., Li, Y., Dollár, P. and Girshick, R., 2022. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 16000-16009). Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors have discussed the limitations in the last section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review! Find point-by-point responses below: **Weaknesses**: 1. “Simple relations”: This is fair! With respect to “higher order relations”, we attempted to explore this using the Relational-Match-to-Sample (RMTS) task—an explicitly hierarchical version of the discrimination task. While this goes beyond much of the same-different literature (which focuses on variations of the discrimination task), we do limit our investigation to the same-different relation (rather than other relations) due to its particular significance in the study of abstract concepts in the cognitive sciences, as well as its conceptual simplicity, which makes it amenable to mechanistic interpretability techniques. 2. “Detailed intuition”: We agree that this description (pasted below) was somewhat speculative in the submission: > “Raghu et al. (2021) finds that models pretrained on more data tend to learn local attention patterns in early layers, followed by global patterns in later layers. This might give CLIP a particular incentive to create local object representations, which are then used in relational operations. Future work might test this hypothesis.” * Since submission, we have rerun our behavioral analysis and attention pattern analysis on a DINOv2 ViT, which is pretrained on about 142M images; this is close in size (in order of magnitude) to CLIP’s pretraining dataset (about 400M images). Despite having a vastly different pretraining objective from CLIP, the DINOv2 model matches CLIP’s performance on both discrimination and RMTS tasks (99.5% test accuracy on discrimination and 98.2% on RMTS). It also exhibits two-stage processing like CLIP. This bolsters our intuition that pretraining data scale (rather than the type of pretraining supervision) is the key. We will include all of these results in the camera-ready version. See Figure 1 in the supplemental PDF for supporting figures. **Questions**: 1.
“WO attention score is higher than WP”: These attention scores refer to the maximum proportion of attention paid by any given attention head to either within-object, within-pair, between-pair, or background tokens. Though the max for WO is higher than the max for WP in later layers, it is more informative to look at the trend within a given head-type rather than between head types. Notably, we find that the peaks of these scores occur in the expected sequence. 2. “Analysis of some models are missing”: Thank you for catching this! We will include these graphs in the appendix of the final submission. 3. “MAE”: Can do! We fine-tune a pretrained ViT-MAE model on the discrimination and RMTS tasks and perform an attention pattern analysis (following Section 3). We find that ViT-MAE achieves very similar performance to ImageNet and DINO ViT. On discrimination, it achieves 98% test accuracy and 94.9% compositional generalization accuracy. On RMTS, it achieves 93.4% test accuracy (interestingly, somewhat higher than ImageNet and DINO) and 85.3% compositional generalization accuracy. Its attention patterns do not demonstrate two-stage processing like CLIP or DINOv2; instead, the local and global heads are mixed throughout the layers. We will add an attention pattern analysis on MAE to the appendix of the paper. 4. “Figure 6”: No—each data point corresponds to a different model, which achieves a different maximum disentanglement score. We will revise the caption for clarity. **Citations**: 1. Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., & Dosovitskiy, A. (2021). Do vision transformers see like convolutional neural networks? Advances in neural information processing systems, 34, 12116-12128. --- Rebuttal Comment 1.1: Title: Official comment by Reviewer DaaT Comment: Thank you for the detailed rebuttal, which has addressed some of my concerns. I have increased the rating to 6.
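For concreteness, the head scores described in answer 1 above can be sketched as follows: for each head, sum the attention mass sent to a token group (within-object, within-pair, between-pair, or background) and average over queries; the reported score for a layer is then the maximum over heads. This is a minimal NumPy sketch with illustrative names, not the paper's code.

```python
import numpy as np

def head_scores(attn, groups):
    """attn: (heads, N, N) attention weights, rows summing to 1.
    groups: dict mapping a group name (e.g. 'within_object') to a
    boolean (N, N) mask marking query-key pairs of that type.
    Returns, per group, each head's mean attention mass to that group."""
    scores = {}
    for name, mask in groups.items():
        # Attention each query sends into the group, averaged over queries.
        scores[name] = (attn * mask).sum(axis=-1).mean(axis=-1)
    return scores
```

The per-layer score for a group would then be `scores[name].max()`, the maximum over heads.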
Summary: This work studies ViTs' learning behavior on relational tasks by experimenting with two same-different tasks: discrimination and RMTS. The authors propose a dataset for this analysis and discover that there are two stages of attention processing in CLIP ViTs, based on attention scores from patches to other patches. They characterize the two stages as a perceptual stage and a relational stage. Strengths: This paper defines a novel avenue, with inspiration from mechanistic interpretability, for understanding the working mechanism of CLIP ViTs, and proposes approaches to study them with attention analyses. Weaknesses: 1. I believe readers in this track would generally not be experts in the concepts of the proposed area of study; the paper should include more literature review for context upfront. 2. I find the writing and the flow of the paper hard to follow; maybe adding a flow chart would help improve readability. 3. The dataset is not well discussed. 4. I think there is a slight mismatch between the claimed study and the experiments. The title suggests the study centers around ViTs, but the work uses CLIP ViTs, leaving me confused about whether the points made hold for the original ViTs. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. ln 23-24, regarding the motivation "little breakthrough progress on complex tasks involving relations between visual entities...": is that true? What about the recent breakthroughs or datasets of vision-language models, Llava, GPT4, etc.? Please give more explanation. 2. ln 41, please define what "algorithms" learned by a ViT are, and whether there are algorithms learned by other architectures (CNNs, LSTMs, etc.). 3. Are there no existing datasets that can study the relations of visual entities? I mean in general, not just RMTS, etc. 4. The motivation for using CLIP ViTs for the study is not clear. Since part of the motivation is from infant learning, why not use a more human-like CNN such as ConvNeXt? Also, CLIP ViT is different from the original ViT as well. 5.
I vaguely remember [1] discussing the use of a better disentangled model for generalization and color; are there new insights from Sec 6? 6. With the two-stage processing, what would you suggest changing in the architecture to build a more "robust" version? --------- [1] Better may not be fairer? [Chiu et al., 2023] Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments and questions. Find point by point responses below: **Weaknesses**: 1. “...Literature review for context…”: We completely agree. Upon acceptance, we will use the extra space to expand our literature review section to include more insights from both the (vast) cognitive science literature and the mechanistic interpretability literature (see global rebuttal #1 for specific topics). 2. “Flow of the paper”: We are happy to add more signposting throughout the sections. Roughly, the paper is structured such that Section 3 motivates the investigation of two stages of processing, Sections 4 and 5 respectively investigate these stages in greater detail, and Sections 6 and 7 discuss the implications of some of our findings on downstream performance. We are happy to state the structure of the paper in the introduction! 3. “Dataset is not well discussed”: We have written a vastly extended appendix detailing the dataset and its construction, which we will include in the final submission. We are also happy to answer any questions about the dataset during the discussion. 4. “CLIP ViTs”: In this work, we investigate abstract relations in a variety of ViTs. In particular, we study ViTs that are pretrained with different kinds of supervision, e.g. CLIP, DINO, and ImageNet. However, as CLIP is the only type of pretraining that achieves near perfect test accuracy for our tasks, we focus on it for the majority of the paper. Since submission, we have added DINOv2 to our analyses, finding that it solves both discrimination and Relational-Match-to-Sample (RMTS) tasks with near perfect test accuracy (99.5% test accuracy on discrimination and 98.2% on RMTS); importantly, it also exhibits two distinct stages of processing similar to those we observe in CLIP. 
This generalizes our findings beyond CLIP and bolsters our intuition that pretraining data scale (rather than something particular to CLIP) provides the visual representations needed to solve these tasks. We will include these results in a new appendix to the paper and reference them throughout the main body. **Questions**: 1. “Little progress”: Great point—we should be more specific. We mean that across a variety of tightly controlled benchmarks, vision models tend to struggle with tasks that involve relations [3, 4], especially compared to tasks that involve semantic computations. Additionally, the models that you mention have demonstrated some progress in processing visual relations, but there are still shortcomings to be addressed. We are happy to add these references to the final manuscript in an updated related work section. 2. “Algorithms”: This terminology is borrowed partially from the mechanistic interpretability literature (i.e. in conceptualizing neural networks as implementing interpretable algorithms), and partially from the cognitive science literature (i.e. Marr’s levels of analysis). When we use the word “algorithm”, we are referring to the series of representations generated by a model to solve a particular task. 3. “Existing datasets”: There are plenty of datasets used to study visual relations! For example: SVRT [1] and CVRT [5]. However, these datasets are not controlled enough to enable us to adapt state-of-the-art techniques from language model mechanistic interpretability for ViTs. Our dataset is explicitly constructed to enable these techniques by e.g. consistently aligning objects within the bounds of ViT patches. Our new dataset appendix details our design choices. 4. “Motivation of using CLIP ViT”: As mentioned above (weakness #4), we use many different ViTs. Your point about CNNs being more human-like is well taken! 
We use ViTs because prior work has demonstrated that they can solve same-different tasks in a robust fashion [2]; they also enable us to use techniques from mechanistic interpretability that have previously been developed for Transformer language models (DAS, attention analysis, linear probing-based causal interventions, logit lens). Since CNNs lack tokens, it is not clear whether it is possible to apply these techniques to them. 5. “Disentangled model for generalization”: We are happy to include this reference in the main body of the paper. Notably, our work demonstrates the benefits of disentangled representations for compositional generalization (rather than standard generalization) in a vastly different setting! 6. “Build a more robust version”: We were very motivated by this question as well! Since submission, we have implemented a new auxiliary loss function derived from the attention head scores in Section 3 that encourages the model to adopt two-stage processing. We find that this helps a randomly-initialized ViT achieve significantly better downstream performance on the discrimination task (76.5% to 93.9% test accuracy; +17.4). It also significantly boosts compositional generalization accuracy for discrimination (75.9% to 92.3% accuracy; +16.4). We currently have promising preliminary results using this loss to improve performance on the RMTS task and will comment on them when they are ready. This loss is also very general and could possibly be used in the future to improve relational reasoning for a wide variety of visual relations. We will include these results and more details about the loss in the main body of the paper upon submission. **Citations**: 1. Fleuret, F., et al. (2011). Comparing machines and humans on a visual categorization test. 2. Tartaglini, A. R., et al. (2023). Deep neural networks can learn generalizable same-different visual relations. 3. Thrush, T. et al. (2022). 
Winoground: Probing vision and language models for visio-linguistic compositionality. 4. Zeng, Y., et al. (2024). Investigating Compositional Challenges in Vision-Language Models for Visual Grounding. 5. Zerroug, A., et al. (2022). A benchmark for compositional visual reasoning. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their rebuttal and acknowledge that some concerns have been addressed.
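The auxiliary attention-pattern loss mentioned in reply #6 is not fully specified in this rebuttal. One plausible formulation, written entirely under our own assumptions, would push the within-object attention mass of early layers toward 1 and that of later layers toward 0, encouraging local-then-global (two-stage) processing. This is an illustrative NumPy sketch, not the authors' loss.

```python
import numpy as np

def attention_pattern_loss(attns, local_mask, layer_split):
    """attns: list of (heads, N, N) attention maps, one per layer.
    local_mask: boolean (N, N) array, True for within-object
    query-key pairs. Encourage local attention before `layer_split`
    and global attention after it."""
    loss = 0.0
    for i, a in enumerate(attns):
        local_mass = (a * local_mask).sum(axis=-1).mean()  # in [0, 1]
        target = 1.0 if i < layer_split else 0.0
        loss += (local_mass - target) ** 2
    return loss / len(attns)
```

Here `layer_split` (the boundary between the intended perceptual and relational stages) is a hypothetical hyperparameter.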
Rebuttal 1: Rebuttal: We thank all of the reviewers for leaving thoughtful, high-quality comments and questions on our manuscript. Here, we address some common themes found in multiple reviews, and also list all of the additional analyses that we have performed (or are planning to perform) to address any outstanding concerns. 1. **Literature Review**: Several reviewers noted that our literature review was somewhat sparse. We agree, and we plan to greatly expand it using the additional page conferred upon acceptance. In particular, we will add an expanded discussion of prior work evaluating visual relations in computer vision models, a discussion of interpretability work identifying processing pipelines in transformers, and an expanded discussion of abstract visual reasoning in humans. 2. **Synthetic Task Concerns**: Several reviewers raise concerns about the relative simplicity of our datasets. We first note that our dataset is constructed to enable the use of state-of-the-art mechanistic interpretability techniques from NLP on a ViT (some for the first time, to our knowledge!). As in the language domain, these techniques require a high degree of dataset control, and they tend to trade task and data complexity for the ability to make precise interpretations. However, we understand these concerns and have run a behavioral evaluation on a realistic same-different dataset (made using 3D models of objects) to demonstrate the generalizability of our findings. We discover that CLIP attains a zero-shot test accuracy of 93.9% on the realistic dataset; all other models achieve only chance accuracy. Furthermore, CLIP exhibits the same two-stage processing on these stimuli without any additional training on them (albeit with greater attention paid to background tokens throughout the model). 
See Figure 2 in the supplemental PDF for example stimuli from the realistic dataset and Figure 3 for attention head scores for CLIP (and a new model, DINOv2; see reply #3 below) on these stimuli. 3. **CLIP ViTs vs. Other ViTs**: Several reviewers note that our analyses mainly focus on CLIP ViTs rather than ViTs in general. In the submission, this is simply because CLIP attains the best performance on our tasks. Thus, since CLIP yields the largest number of accurate responses, focusing on it simplified the application of the mechanistic interpretability techniques we use. However, this raises the question: is two-stage processing a result of pretraining data scale (as we suggest) or multimodal pretraining? To address this, we have since run additional analyses on DINOv2 [1], another ViT that is pretrained without linguistic supervision on a dataset of similar size (in order of magnitude) to CLIP (~142M images vs. CLIP’s ~400M). We find that DINOv2 performs as well as CLIP on both discrimination and RMTS tasks (99.5% test accuracy on discrimination and 98.2% on RMTS); it also demonstrates two-stage processing like CLIP. We will include these results in the paper upon acceptance; we have also included the attention pattern analysis for DINOv2 in Figure 1 in the supplemental PDF. 4. **How to Improve Models**: Several reviewers ask how our results might lead to models that achieve stronger relational reasoning skills. We had the same question! Since submission, we have derived a new auxiliary loss from our attention head scores in Section 3 that encourages models to exhibit the two-stage processing found in CLIP (and DINOv2). We find that introducing this loss during training significantly boosts the performance of a randomly-initialized ViT on the discrimination task (76.5% to 93.9% test accuracy; +17.4)—interestingly, it boosts the model’s compositional generalization as well (75.9% to 92.3% accuracy; +16.4). See Table 1 in the supplemental PDF for more results. 
We have also obtained promising preliminary results using this loss to improve performance on the Relational-Match-to-Sample (RMTS) task and will comment with those results when ready. The formulation of the loss is general (i.e. not specific to same vs. different), so we believe that it could potentially be useful for future work looking to improve performance on other visual relations. We plan to include these empirical results and more details about the loss in the final submission. **List of additional analyses**: - DINOv2 behavioral results and attention analysis (will include Section 4, 5, & 6 analyses on DINOv2 with final submission); Figure 1 in supplemental PDF - Realistic same-different evaluations; Figures 2 and 3 in supplemental PDF - Experiments using an attention pattern loss to improve relational reasoning; Table 1 in supplemental PDF - Logit lens implementation and analyses - ViT-MAE behavioral and attention analysis **Citations**: 1. Oquab, M., Darcet, T., Moutakanni, T., Vo, H., Szafraniec, M., Khalidov, V., ... & Bojanowski, P. (2023). Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193. Pdf: /pdf/2e7e831a77047f4b124843e25a88fff143ad95b2.pdf
NeurIPS_2024_submissions_huggingface
2024
Visual Prompt Tuning in Null Space for Continual Learning
Accept (poster)
Summary: This paper introduces the orthogonal projection into visual prompt tuning for continual learning, which comprehensively considers the full operations of a transformer layer on the interference problem. Moreover, two sufficient consistency conditions for self-attention and an invariant prompt distribution constraint for LayerNorm are theoretically deduced, based on which an effective null-space-based approximation solution is introduced to implement the prompt gradient orthogonal projection for visual prompt tuning. Finally, extensive experimental results demonstrate the effectiveness of anti-forgetting on four class-incremental benchmarks with diverse pre-trained baseline models, and the approach achieves performance superior to state-of-the-art methods. Strengths: 1) The research motivation for the algorithm is reasonable, and the theoretical proof is solid. Constraining the new learnable parameter to be orthogonal to the previous weights is a reasonable way to prevent forgetting of historical knowledge. 2) Extensive evaluation and amazing performance show the superiority of the proposed method. Weaknesses: 1) Eq. (8) seems to be in error. 2) L139: Eq. (8) suggests that if the weight update ∆Θ is orthogonal to the previous input feature Xt during training on the new task, the corresponding output feature will remain unchanged. Moreover, will this degrade the model’s discriminative ability on the current task when the weight update is orthogonal to the previous feature? 3) What is the model’s complexity and running time compared to the baseline VPT? 4) I suggest using a more accurate diagram to describe the algorithm, to help the reader understand it more clearly. Technical Quality: 4 Clarity: 4 Questions for Authors: Please see “Weaknesses” Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1:** The symbol "$\Rightarrow$" may cause confusion in Eq. (8). Our intention was to convey that the left equation can be *simplified to yield* the right equation, rather than the left equation *being a sufficient condition* for the right one. We will correct this to ensure a more precise expression. **2:** The orthogonal projection will degrade the model’s discriminative ability for the current task, since the update direction is constrained into a subspace smaller than the original gradient space. This is a fundamental challenge known as stability-plasticity dilemma [48] in continual learning. The stability indicates the model’s discriminative ability for the old tasks, while the plasticity indicates the model’s discriminative ability for the current (new) task. A stronger stability usually causes a weaker plasticity and vice versa. **The plasticity-stability dilemma objectively exists in continual learning. We cannot completely eliminate this dilemma yet we can improve the overall accuracy by carefully balancing the stability and plasticity.** In our approach, two techniques help to enhance the plasticity of our model. **(1)** As introduced in the "Trade-off between Stability and Plasticity" section, we employ a hyper-parameter $\bar{\eta}$ to control the weight of each projection matrix: $\Delta \mathbf{P}=[\bar{\eta}\mathcal{B}_2 + (1-\bar{\eta})\mathbf{I}] \mathbf{P} _{\mathcal{G}} [\bar{\eta}\mathcal{B}_1 + (1-\bar{\eta})\mathbf{I}]$. When $\bar{\eta}$ is less than 1, the update direction of parameters is not strictly constrained to the orthogonal direction. Instead, the parameters can update in the original direction of the gradients. This relaxation enhances the model’s discriminative ability when learning a new task. As demonstrated in the experimental results of Figure 5, the model can achieve higher accuracy with higher (worse) forgetting as $\bar{\eta}$ decreases. 
This indicates the importance of the trade-off between stability and plasticity, and our approach can make a good trade-off. **(2)** As suggested in [36], the orthogonal projection matrix is constructed from an approximated null space, since an exact null space may not always exist in practice. The orthogonal projection can also be relaxed by the approximation to encourage the model to acquire new knowledge. > [48] Mermillod M, Bugaiska A, Bonin P. The stability-plasticity dilemma: Investigating the continuum from catastrophic forgetting to age-limited learning effects. Frontiers in psychology, 2013, 4: 504. **3:** (1) Complexity Compared to the baseline “VPT-Seq”, our approach introduces the following additional computation: 1) The null-space projections in each optimization step, *i.e.*, Eq. (25). For a group of prompts $\mathbf{P}$ of shape $M \times D$, the complexity of the two projections is $\mathcal{O}\left(DM(D+M)\right)$. Suppose there are $L$ ViT layers equipped with prompts. The batch size and epochs are represented as $n_{batch}$ and $n_{epoch}$, respectively. We denote the total number of samples in all $T$ tasks as $n_{total}$. After training all the tasks, the complexity introduced by the null-space projections is $\mathcal{O}\left( \frac{n_{total} n_{epoch} LDM(D+M)}{n_{batch}} \right)$. 2) The prompt distribution loss, *i.e.*, Eq. (26). We use $E_{dist}$ to represent the computational overhead of the distribution loss between two scalar elements. Considering all the layers and optimization iterations during training, the complexity introduced by the prompt distribution loss is $\mathcal{O}\left( \frac{n_{total} n_{epoch} LDM E_{dist}}{n_{batch}} \right)$. 3) The forward process to obtain $\mathbf{Q} _{X_t} \mathbf{W}_k^\top$ and $\mathbf{S} _{P_t}$ for each sample after the training stage of a task, *i.e.*, line 21 of Algorithm 1 in the appendix. We denote the computational cost of the model’s forward propagation as $E _{model}$.
Thus, the introduced additional computation is $n _{total} E _{model}$. 4) Computation of uncentered covariance matrices, *i.e.*, line 24 of Algorithm 1. The complexity of computing the two uncentered covariance matrices is $\mathcal{O}\left( n _{total}^2 N^2 (D+M) \right)$, where $N$=197 is the number of image tokens in the ViT-B/16 model. 5) Computation of the null-space projection matrices, *i.e.*, line 25 of Algorithm 1. In each task, the two projection matrices need only be computed and updated once. We use $E _{matrices}$ to denote the computational cost in this process. The total additional computation in $T$ tasks is $T E _{matrices}$. Overall, the complexity compared to the baseline VPT is $\mathcal{O}\left( \frac{n _{total} n _{epoch} LDM(D+M+E _{dist})}{n _{batch}} + n _{total} E _{model} + n _{total}^2 N^2 (D+M) + T E _{matrices} \right)$. (2) Running time We report the average running time over three runs for the baseline model and our approach on the four benchmarks in Table VI. Compared to the baseline VPT-Seq, the running time increases by 2\~9 minutes (2.38%\~6.43%) across these benchmarks, with an average increase of 4.5 minutes (3.84%). The additional running time introduced by our approach is acceptable as it constitutes only a small portion of the overall running time. **4:** Thank you for the valuable suggestion. We will add a diagram to describe our algorithm visually. --- Rebuttal Comment 1.1: Title: Please have a discussion Comment: This paper receives mixed reviews. The authors have provided a detailed response. Please give your reply and check whether there are still unclear points for the authors to clarify. --- Rebuttal Comment 1.2: Comment: Thanks for the authors' response, which has addressed most of my concerns.
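For concreteness, the relaxed projection from point 2 of our reply above, $\Delta \mathbf{P}=[\bar{\eta}\mathcal{B}_2 + (1-\bar{\eta})\mathbf{I}] \mathbf{P} _{\mathcal{G}} [\bar{\eta}\mathcal{B}_1 + (1-\bar{\eta})\mathbf{I}]$, can be sketched numerically as follows (an illustrative NumPy sketch; the matrix names follow the formula, the shapes are hypothetical):

```python
import numpy as np

def relaxed_update(P_grad, B1, B2, eta):
    """Interpolate between the strict null-space projection (eta = 1)
    and the raw gradient (eta = 0) to trade stability for plasticity.
    P_grad: (M, D) prompt gradient; B2: (M, M); B1: (D, D)."""
    left = eta * B2 + (1.0 - eta) * np.eye(B2.shape[0])
    right = eta * B1 + (1.0 - eta) * np.eye(B1.shape[0])
    return left @ P_grad @ right
```

With `eta = 0` the update reduces to the unconstrained gradient; with `eta = 1` it is the fully projected update `B2 @ P_grad @ B1`.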
Summary: This paper introduces the orthogonal projection into visual prompt tuning for continual learning, which comprehensively considers the full operations of a transformer layer on the interference problem. They propose two sufficient consistency conditions for the self-attention and an invariant prompt distribution constraint for LayerNorm, based on which an effective null-space-based approximation solution is introduced to implement the prompt gradient orthogonal projection for visual prompt tuning. Extensive experimental results show the superiority of the proposed method. Strengths: 1) The idea in this paper is novel and interesting. They introduce the orthogonal projection into visual prompt tuning for continual learning, which comprehensively considers the full operations in the transformer layer. 2) The experimental results are sufficient and superior to the SOTA methods. 3) Overall, this paper is well organized and well written, which makes it easy for readers to follow. Weaknesses: 1) In figure 1 of the overall framework, it is unclear why the $\mathrm{Q}_P$ term in the affinity matrix can be neglected. The authors should provide clear reasons for this conclusion. 2) In the method part, in order to satisfy $F_{Z_t}=F_{Z_{t-1}}$, the authors transform Eq. (10) into the following Eq. (11) and Eq. (12), which are two sufficient conditions. Why not optimize Eq. (10) directly? Please give detailed reasons. 3) What are the main differences between the single-head and multi-head self-attention for introducing the proposed orthogonal projection into visual prompt tuning for continual learning? 4) In the method optimization part, the authors propose the approximation method as illustrated in Eq. (25). Why not use the optimization method proposed in PGP [26]? What are the main differences? Technical Quality: 4 Clarity: 4 Questions for Authors: See the weaknesses above.
Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1:** In VPT-Deep [13], the output tokens corresponding to the input prompts of the current ViT layer will be replaced by new trainable prompts in the next ViT layer. Therefore, we do not need to compute the output tokens corresponding to the input prompts. According to the forward propagation of ViT, the output prompt tokens are derived from $\mathrm{Q}_P$. Consequently, the $\mathrm{Q}_P$ term can be neglected. Specifically, $\mathrm{Q}_P$ represents the prompt queries in the attention map. After the self-attention operation, the corresponding derived tokens of $\mathrm{Q}_P$ are denoted as $\mathrm{F} _{Q_P}$. Then $\mathrm{F} _{Q_P}$ will undergo another LayerNorm and an MLP to derive the output prompt tokens $\mathrm{Y} _{Q_P}$. $\mathrm{Y} _{Q_P}$ can be neglected due to the replacement of new trainable prompts. To reduce computation overhead, $\mathrm{Q}_P$ can be just neglected in the previous affinity matrix in self-attention. Note that omitting $\mathrm{Q}_P$ has no impact on the output image tokens of the ViT layer, as the subsequent Aggregation, LayerNorm and MLP operations are performed independently for each token. **2:** The reason is that optimizing Eq. (10) directly leads to difficulty in deriving the solution expressed in terms of $\Delta\mathbf{P}$. Eq. (10) introduces non-unique solutions and a quadratic term of $\Delta\mathbf{P}^2$, which is explained in detail as follows. According to Eq. (3), we derive the following equation when directly optimizing Eq. 
(10): $softmax\left(\frac{\begin{bmatrix}Q_{X_t}K_{X_t}^{\top}&Q_{X_t}K_{P_t}^{\top}\end{bmatrix}}{\sqrt{D}}\right)\begin{bmatrix}V_{X_t}\\\\ V_{P_t}\end{bmatrix} =softmax\left(\frac{\begin{bmatrix}Q_{X_t}K_{X_t}^{\top}&Q_{X_t}K_{P_{t+1}}^{\top}\end{bmatrix}}{\sqrt{D}}\right)\begin{bmatrix} V_{X_t}\\\\ V_{P_{t+1}}\end{bmatrix}$\ It can be further expanded as:\ $softmax\left(\frac{\begin{bmatrix}Q_{X_t}K_{X_t}^{\top}&Q_{X_t}[LN(P_t)W_k+b_k]^{\top} \end{bmatrix}}{\sqrt{D}}\right)\begin{bmatrix}V_{X_t}\\\\ LN(P_t)W_v+b_v\end{bmatrix} =softmax\left(\frac{\begin{bmatrix}Q_{X_t}K_{X_t}^{\top}&Q_{X_t}[LN(P_t+\Delta P)W_k+b_k]^{\top}\end{bmatrix}}{\sqrt{D}}\right)\begin{bmatrix}V_{X_t}\\\\ LN(P_t+\Delta P)W_v+b_v\end{bmatrix}$ However, it is hard to simplify the above equation to derive a solution expressed in terms of $\Delta\mathbf{P}$. First, the non-injectivity of the softmax function causes non-unique solutions. That is to say, we cannot derive $\mathbf{a}=\mathbf{b}$ from $softmax\left(\mathbf{a}\right)=softmax\left(\mathbf{b}\right)$. Second, even when we omit the softmax operation, the multiplication between $\mathbf{Q}_{X_t}\left[LN\left(\mathbf{P}_t+\Delta\mathbf{P}\right)\mathbf{W}_k\right]^\top$ and $LN\left(\mathbf{P}_t+\Delta\mathbf{P}\right)\mathbf{W}_v$ yields a quadratic term $LN\left(\mathbf{P}_t+\Delta\mathbf{P}\right)^\top LN\left(\mathbf{P}_t+\Delta \mathbf{P}\right)$, which makes the optimization difficult. Due to the above obstacles, we transform Eq. (10) into Eq. (11) and Eq. (12), rather than optimizing Eq. (10) directly. **3:** The main difference is that, for multi-head self-attention, the parameter update should be orthogonal to the subspace spanned by the concatenated matrices from all heads. Specifically, for single-head self-attention, the parameter update should be orthogonal to the subspaces spanned by $\mathbf{Q} _{X_t} \mathbf{W}_k^\top$ and $\mathbf{S} _{P_t}$ according to Eq. (23) and Eq. (24).
When considering the multi-head self-attention, the subspaces should be spanned by $[\mathbf{Q} _{X_t.1} \mathbf{W}_k^\top; \mathbf{Q} _{X_t.2} \mathbf{W}_k^\top; \cdots; \mathbf{Q} _{X_t.H} \mathbf{W}_k^\top]$ and $[\mathbf{S} _{P_t.1}; \mathbf{S} _{P_t.2}; \cdots; \mathbf{S} _{P_t.H}]$, where $\mathbf{Q} _{X_t.h}$ and $\mathbf{S} _{P_t.h}$ denote the corresponding intermediate activations in the $h$-th head ($h\in\\{1,2,\cdots,H\\}$), respectively. $H$ is the number of heads, and "$[;]$" represents the concatenation of matrices along the first dimension. Therefore, only an additional step of concatenating the corresponding matrices from all heads is required to introduce the proposed orthogonal projection into the multi-head self-attention. **4:** In PGP [26], the matrices to be optimized in their two conditions are element-wise summed. Then, SVD is applied to the summed matrix to obtain the orthogonal projection matrix. To enable the addition of the two matrices, PCA dimensionality reduction is also applied to one of the matrices to align the dimensions. However, this summing approach is not applicable in our method, since the two matrices $\mathcal{B}_1$ and $\mathcal{B}_2$ are multiplied on the right and the left, respectively. Even with dimensionality reduction, they cannot be summed and merged into a single matrix. In fact, the two projection matrices have different meanings: $\mathcal{B}_1$ is a constraint on individual tokens, while $\mathcal{B}_2$ is a constraint on a specific dimension across all tokens. Therefore, it is not suitable to add them directly in terms of their respective meanings. Consequently, the optimization method used in PGP [26] cannot be applied in our case. --- Rebuttal 2: Comment: I appreciate your detailed response. The authors have addressed all the concerns I raised in my initial review, and after considering the other feedback, I kept my score.
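The head-wise concatenation described in point 3 above can be sketched numerically; all shapes and matrices below are hypothetical stand-ins for the per-head activations, not the method's actual quantities:

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-head activation matrices are stacked along the first dimension, and the
# update is projected into the null space of the stacked matrix, making it
# orthogonal to every head's subspace simultaneously.
H, n, D = 4, 2, 16                     # heads, rows per head, feature dim
heads = [rng.standard_normal((n, D)) for _ in range(H)]
M = np.concatenate(heads, axis=0)      # [M_1; M_2; ...; M_H], shape (H*n, D)

_, _, Vt = np.linalg.svd(M)            # null-space basis from the SVD
V_null = Vt[H * n:].T                  # basis of {u : M @ u = 0}
P = V_null @ V_null.T                  # orthogonal projector

u = rng.standard_normal(D)             # a raw update direction
u_proj = P @ u
for Mh in heads:                       # orthogonal to each head's activations
    assert np.allclose(Mh @ u_proj, 0.0, atol=1e-9)
```

Projecting against the single concatenated matrix is equivalent to enforcing orthogonality to each head's subspace one by one, which is why only the extra concatenation step is needed.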
Summary: This work aims to eliminate the interference on previously learned knowledge for visual prompt tuning in the field of continual learning, so that catastrophic forgetting can be mitigated. To this end, it analyzes the conditions for keeping the output features unchanged in the transformer block that features the self-attention mechanism. Two consistency conditions for the self-attention are derived by deducing the proposed two sufficient conditions for the consistency objective. Moreover, a constraint on the distribution of the prompts is proposed to further simplify the LayerNorm operation. Consequently, the interference can be eliminated in theory by achieving the proposed two consistency conditions and the prompt-invariance constraint. The proposed approach implements the two consistency conditions by performing two null-space projections on the prompt gradients during training a new task; the constraint is implemented by an additional loss function that penalizes the drifting of prompt distribution across sequential tasks. Substantial experiments demonstrate the effectiveness of the proposed approach. Strengths: This work comprehensively analyzes the conditions for learning without interference in the transformer-based visual prompt tuning. The proposed two consistency conditions with the constraint provide a theoretical guarantee on eliminating the interference problem. With this solid mathematical support, the proposed approach, which performs null-space projection on prompt gradients, shows significant improvements in reducing forgetting and increasing accuracy. The effectiveness of the approach is validated on extensive experiments, involving four class-incremental benchmarks and various pre-training datasets and paradigms. By visualizing the evolution of training losses, the effectiveness in mitigating interference is demonstrated as well. The proposed approach also achieves state-of-the-art performance on the four benchmarks. 
Besides, the adaptive nullity and plasticity enhancement strategy is also well-motivated and validated to be an effective way of balancing stability and plasticity. The paper is clearly structured and easy to follow. The figures in the paper are clear and well-designed, enhancing the overall readability and comprehension of the paper. Weaknesses: 1. The consistency objective Eq. (10) is decomposed into two sufficient conditions, namely Eq. (11) and Eq. (12). A detailed explanation is necessary to understand why direct analysis and simplification of Eq. (10) is not pursued. 2. The variable Q_p is omitted during the deduction of consistency conditions. Further explanation is needed to clarify why it can be disregarded. 3. In the Adam-NSCL [36] method, the trade-off between stability and plasticity is controlled solely by the nullity. In contrast, this approach involves adding weighted identity matrices to the projection matrices. How do these two methods differ in balancing stability and plasticity? A deeper discussion is expected to highlight the advantage of the proposed trade-off strategy. 4. The number of training epochs in this approach (100) is greater than that in other approaches (e.g., 50 epochs in DualPrompt for ImageNet-R). It would be better to provide a justification for this training setting. 5. As described in line 4 of Algorithm 2, the projection matrix is normalized by a Frobenius norm. There is a lack of explanation for this operation. Technical Quality: 3 Clarity: 3 Questions for Authors: In addition to the parameters of the prompts, the classifier contains parameters that require updates (i.e. the weights and biases) as well. Why not perform an orthogonal projection on the parameters of the classifier to reduce forgetting? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Reasonable discussion on limitations is included in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1:** We do not simplify Eq. (10) directly because it leads to difficulty in deriving a solution expressed in terms of $\Delta\mathbf{P}$. Eq. (10) introduces non-unique solutions and a quadratic term in $\Delta\mathbf{P}$, which is explained in detail as follows. According to Eq. (3), we derive the following equation when directly simplifying Eq. (10): $softmax\left(\frac{\begin{bmatrix}Q_{X_t}K_{X_t}^{\top}&Q_{X_t}K_{P_t}^{\top}\end{bmatrix}}{\sqrt{D}}\right)\begin{bmatrix}V_{X_t}\\\\V_{P_t}\end{bmatrix} =softmax\left(\frac{\begin{bmatrix}Q_{X_t}K_{X_t}^{\top}&Q_{X_t}K_{P_{t+1}}^{\top}\end{bmatrix}}{\sqrt{D}}\right)\begin{bmatrix} V_{X_t}\\\\ V_{P_{t+1}}\end{bmatrix} $\ It can be further expanded as:\ $softmax\left(\frac{\begin{bmatrix}Q_{X_t}K_{X_t}^{\top}&Q_{X_t}[LN(P_t)W_k+b_k]^{\top} \end{bmatrix}}{\sqrt{D}}\right)\begin{bmatrix}V_{X_t}\\\\ LN(P_t)W_v+b_v\end{bmatrix} =softmax\left(\frac{\begin{bmatrix}Q_{X_t}K_{X_t}^{\top}&Q_{X_t}[LN(P_t+\Delta P)W_k+b_k]^{\top}\end{bmatrix}}{\sqrt{D}}\right)\begin{bmatrix}V_{X_t}\\\\ LN(P_t+\Delta P)W_v+b_v\end{bmatrix}$ However, it is hard to simplify the above equation to derive a solution expressed in terms of $\Delta\mathbf{P}$. First, the non-injectivity of the softmax function causes non-unique solutions. That is to say, we cannot derive $\mathbf{a}=\mathbf{b}$ from $softmax\left(\mathbf{a}\right)=softmax\left(\mathbf{b}\right)$. Second, when we omit the softmax operation, the multiplication between $\mathbf{Q}_{X_t}\left[LN\left(\mathbf{P}_t+\Delta\mathbf{P}\right)\mathbf{W}_k\right]^\top$ and $LN\left(\mathbf{P}_t+\Delta\mathbf{P}\right)\mathbf{W}_v$ yields a quadratic term $LN\left(\mathbf{P}_t+\Delta\mathbf{P}\right)^\top LN\left(\mathbf{P}_t+\Delta \mathbf{P}\right)$, which results in difficult simplification. Due to the above obstacles, we propose two sufficient conditions (*i.e.*, Eq. (11) and Eq. (12)), rather than simplifying Eq. (10) directly.
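The non-injectivity of softmax noted above is easy to verify numerically; a minimal NumPy sketch with illustrative values:

```python
import numpy as np

# Adding a constant to every logit changes the input but not the softmax
# output, so softmax(a) = softmax(b) does not imply a = b.
def softmax(z):
    e = np.exp(z - z.max())   # shift for numerical stability
    return e / e.sum()

a = np.array([1.0, 2.0, 3.0])
b = a + 5.0                                  # a different input...
assert not np.allclose(a, b)
assert np.allclose(softmax(a), softmax(b))   # ...with the same softmax output
```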
**2:** In VPT-Deep [13], the output tokens corresponding to the input prompts of the current ViT layer will be replaced by new trainable prompts in the next ViT layer. Therefore, we do not need to compute the output tokens corresponding to the input prompts. According to the forward propagation of ViT, the output prompt tokens are derived from $\mathbf{Q}_P$. Consequently, the variable $\mathbf{Q}_P$ can be omitted. Specifically, $\mathbf{Q}_P$ represents the prompt queries in the attention map. After the self-attention operation, the corresponding derived tokens of $\mathbf{Q}_P$ are denoted as $\mathbf{F} _{Q_P}$. Then $\mathbf{F} _{Q_P}$ will undergo another LayerNorm and an MLP to derive the output prompt tokens $\mathbf{Y} _{Q_P}$. $\mathbf{Y} _{Q_P}$ can be neglected due to the replacement with new trainable prompts. To reduce computation overhead, $\mathbf{Q}_P$ can simply be neglected in the affinity operation in self-attention. Note that omitting $\mathbf{Q}_P$ has no impact on the output image tokens of the ViT layer, as the subsequent Aggregation, LayerNorm and MLP operations are performed independently for each token. **3:** The difference is that Adam-NSCL balances stability and plasticity solely by controlling the nullity, while our method first achieves a near-optimal stability point and then enhances plasticity by weakening the orthogonal constraints. Adding weighted identity matrices to the projection matrices in our approach is more flexible for enhancing plasticity. In our method, the orthogonal constraint is strict when $\bar{\eta}=1$, and the model's stability reaches a near-optimal level. As $\bar{\eta}$ gradually decreases to 0, the orthogonal constraints are progressively relaxed until completely eliminated. This enables the model to achieve maximum plasticity for learning new tasks. Therefore, incorporating a weighted identity matrix into the orthogonal constraints facilitates the model's efficient and effective acquisition of new knowledge.
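The weighted-identity relaxation described in point 3 above can be sketched as follows; the two projectors and the gradient matrix are hypothetical stand-ins for the method's actual projection matrices:

```python
import numpy as np

rng = np.random.default_rng(1)

# eta interpolates between the strict two-sided projection (eta = 1) and an
# unconstrained update (eta = 0), mirroring the relaxation described above.
def projector(n, k):
    Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
    return Q[:, :k] @ Q[:, :k].T          # rank-k orthogonal projector

B2 = projector(6, 3)                      # left (token-level) projector
B1 = projector(4, 2)                      # right (dimension-level) projector
G = rng.standard_normal((6, 4))           # a prompt-gradient matrix

def relaxed(G, eta):
    L = eta * B2 + (1 - eta) * np.eye(6)
    R = eta * B1 + (1 - eta) * np.eye(4)
    return L @ G @ R

assert np.allclose(relaxed(G, 0.0), G)        # eta = 0: constraint removed
strict = relaxed(G, 1.0)
assert np.allclose(B2 @ strict @ B1, strict)  # eta = 1: fully projected
```

Intermediate values of `eta` trade off the two extremes, which is the flexibility argued for above.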
**4:** Orthogonal projection-based methods constrain the update direction of parameters during training. To fully optimize the model parameters and converge to a (local) optimal point, the model usually requires adequate training with more epochs. Therefore, we adopt a larger number of training epochs. **5:** The projection matrix is normalized by its Frobenius norm to provide an upper bound for the scale of gradients after projection. Specifically, the Frobenius norm is sub-multiplicative [47]. For two matrices $\mathbf{A}$ and $\mathbf{B}$, the inequality $||\mathbf{AB}||_F\le||\mathbf{A}||_F||\mathbf{B}||_F$ holds. For convenience, we use $\tilde{\mathbf{U}}_0$ to denote $\mathbf{U}_0 \mathbf{U}_0^\top$ in line 4 of Algorithm 2. Then the projection matrix is denoted as $\mathcal{B}=\frac{\tilde{\mathbf{U}}_0}{||\tilde{\mathbf{U}}_0||_F}$. Consequently, when the gradient matrix $\mathbf{P} _\mathcal{G}$ is multiplied by the projection matrix $\mathcal{B}$, we have $ ||\mathcal{B}\mathbf{P} _\mathcal{G}||_F \le ||\mathcal{B}||_F ||\mathbf{P} _\mathcal{G}||_F=||\mathbf{P} _\mathcal{G}||_F$. This demonstrates that normalizing by the Frobenius norm provides an upper bound for the scale of the projected gradients, thereby preventing excessive gradient magnitudes. > [47] Meyer C D. Matrix Analysis and Applied Linear Algebra. Society for Industrial and Applied Mathematics, 2023. **6:** Response to the question: The reason is that the classifier for each task is trained independently rather than continuously. The classifiers of previously learned tasks remain fixed during training on new tasks, eliminating the need for gradient calculations. Moreover, the classifier of the current task is only updated in this task and is unaffected by the classes of previous tasks, rendering orthogonal constraints unnecessary. As a result, orthogonal projections are not required for the classifiers of any task.
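The sub-multiplicativity argument in point 5 can be checked numerically; the matrices below, including the null-space basis, are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Normalizing the projection matrix by its Frobenius norm gives ||B||_F = 1,
# so sub-multiplicativity bounds the projected gradient: ||B @ G||_F <= ||G||_F.
A = rng.standard_normal((8, 3))
U0 = np.linalg.svd(A)[0][:, 3:]              # a left null-space basis of A
U0U0t = U0 @ U0.T
B = U0U0t / np.linalg.norm(U0U0t, 'fro')     # Frobenius-normalized projector

G = rng.standard_normal((8, 4))              # a gradient matrix
lhs = np.linalg.norm(B @ G, 'fro')
assert np.isclose(np.linalg.norm(B, 'fro'), 1.0)
assert lhs <= np.linalg.norm(B, 'fro') * np.linalg.norm(G, 'fro')
assert lhs <= np.linalg.norm(G, 'fro')       # projected gradient scale bounded
```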
--- Rebuttal Comment 1.1: Title: Please have a discussion Comment: This paper receives mixed reviews. The authors have provided a detailed response. Please give your reply and check whether there are still unclear points for the authors to clarify. --- Rebuttal Comment 1.2: Title: Rebuttal Response Comment: I have read the rebuttals from the authors and the discussions from others. Most of my concerns have been addressed. Thus, I maintain my initial score. Thanks.
Summary: This paper proposes a novel paradigm for continual learning based on prompt tuning. By deriving the constraints for orthogonal projection of prompt gradients in ViTs, the method aims to minimize forgetting during the learning process. Experiments on four benchmarks show that NSP² achieves superior performance by avoiding forgetting. Strengths: 1. The paper focuses on orthogonal projection methods in continual learning, extending CNN-based methods to the ViT architecture, resulting in performance improvements. 2. The method is supported by mathematical derivations of the derived constraints, providing strong theoretical guarantees. 3. The paper addresses continual learning based on pre-trained models, which is a valuable topic. Weaknesses: 1. While orthogonal gradient updates help prevent forgetting in continual learning, they also hinder positive knowledge transfer between tasks. In methods like L2P, similar tasks might select the same prompts, potentially improving performance on previous tasks after learning new ones. 2. The derivation process in the methods section is overly lengthy, which leads to a very brief description of the experimental section. The comparison with existing methods is reduced to just three lines of text. Consider moving some derivations to the appendix. 3. The experimental tables contain many blanks, which is not conducive to a systematic comparison of different methods' performance and weakens the contribution of this method. 4. The paper omits comparisons with some leading methods, such as DAP[1], which is also based on prompt tuning. [1] Jung D, Han D, Bang J, et al. Generating instance-level prompts for rehearsal-free continual learning[C]. Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 11847-11857. 
Technical Quality: 3 Clarity: 2 Questions for Authors: According to my understanding, as the number of tasks increases, the shrinking gradient subspace will make gradient updates increasingly difficult. Therefore, I would like to see the performance of NSP² on longer sequences of continual learning tasks. Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: As mentioned in the paper, additional constraints are introduced to simplify the consistency conditions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1:** In theory, it seems that orthogonal projection methods lack the benefit of backward knowledge transfer. However, this problem can be alleviated by two techniques in our approach. **(1)** We adopt a plasticity enhancement strategy which employs a hyper-parameter $\bar{\eta}$ to control the weight of each projection matrix: $\Delta P=[\bar{\eta}B_2+(1-\bar{\eta})I]P_\mathcal{G}[\bar{\eta}B_1+(1-\bar{\eta})I]$. By incorporating weighted identity matrices, the strict orthogonal update direction can be relaxed to some extent. This relaxation enhances the model’s ability to integrate new knowledge when learning a new task. **(2)** The orthogonal projection matrix is constructed from an approximated null space as suggested in [36], since an exact null space may not exist in practice. The orthogonal projection can also be relaxed by the approximation. We use experiments under both **long-term** and **regular** CL settings to demonstrate the effectiveness of our approach. **(1)** We experiment on 5 benchmarks under the protocols of 50 tasks and 100 tasks to validate that our approach remains effective even within the context of **long-term CL**. The results are presented in **Table I** in the PDF, and we encourage the reviewer to examine them. Despite lacking plasticity enhancement, VPT-NSP2 can outperform existing SOTA approaches and especially surpasses L2P by a large margin. This demonstrates that **forgetting is still the predominant factor affecting performance in long sequences of tasks**. With the plasticity enhancement, VPT-NSP2 achieves a significant increase in accuracy (by 1.1%\~2.9%). **This demonstrates that our plasticity enhancement is effective in learning new knowledge in long-term CL.** **(2)** The experimental results across the four **regular CL** benchmarks are shown in Table II. NSP2 also outperforms L2P significantly even without plasticity enhancement.
It achieves higher accuracy on all the benchmarks when using plasticity enhancement. Figure A shows the effects of the orthogonal projection weight $\bar{\eta}$. Accuracy can be improved with the decrease of $\bar{\eta}$, validating that **the proposed relaxed orthogonal constraints on gradients can promote learning new knowledge to achieve better performance**. The steady decrease in forgetting also verifies that **orthogonal constraints can be relaxed by decreasing $\bar{\eta}$**. In the above experiments, our approach focusing on anti-forgetting outperforms L2P and our baseline (*i.e.*, VPT-Seq) significantly, implying that **forgetting has a greater impact on the performance of continual learning than backward knowledge transfer.** In L2P, prompts from old tasks may be selected and trained without constraints in new tasks, causing interference between tasks and leading to forgetting. Subsequent improvements, such as CODA-Prompt [32] and CPrompt [11], mitigate this issue by freezing prompts from old tasks and incrementally training new prompts for new tasks, thereby preventing interference with old tasks and achieving better performance. The development of technical approaches also reflects that **catastrophic forgetting remains a predominant challenge to be addressed in continual learning**. Overall, the backward knowledge transfer ability is influenced by the plasticity of models. Nevertheless, the stability-plasticity dilemma remains **a fundamental challenge** in the field of CL. Achieving a better trade-off in this dilemma can enhance both anti-forgetting and backward knowledge transfer abilities. **In future work, we will focus on improving the backward transfer ability of our approach while maintaining its anti-forgetting capability to achieve better performance.** **2:** The detailed derivation process in the method section is crucial for explaining the derivation of the two projection matrices used in our approach.
However, we understand the need for a more detailed experimental section. We appreciate the reviewer's suggestion and will consider moving some derivations to the appendix to provide more detailed experimental conclusions. **3:** We have made extensive efforts over the past week to reproduce 6 approaches for a more systematic comparison, including EvoPrompt [18] (AAAI’24), OVOR-Deep [12] (ICLR’24), DualP-PGP [26] (ICLR’24), InfLoRA [20] (CVPR’24), EASE [45] (CVPR’24) and CPrompt [11] (CVPR’24). The results are highlighted in blue in Table III. VPT-NSP2 achieves the highest accuracy, surpassing the second-best method by 0.6%\~2.4% across the four benchmarks. **4:** In Table III and Table 2 in our paper, we compare with 12 prompt-tuning-based methods proposed in 2023 and 2024. DAP has the problem of **batch information leakage, which results in unfair comparisons.** As stated by Zhou et al. [46] in the discussions on comparison fairness, "during inference,... it is equal to directly annotating the task identity. When removing the batch information... a drastic degradation in the performance ..." The reproduced results reported in [46] are shown in Table IV. The accuracy declines by 4.8\~42.6% with an average of 27.3% when eliminating the batch information leakage. Besides, we reproduce DAP on the four benchmarks used in our experiments. We also observe a drastic degradation (by 16.42%\~23.24%) in accuracy, as shown in Table V. The official code of DAP, lines 210-236 in the file vit.py, assumes that test samples are batched and all of them come from the same task, which is an unreasonable assumption. >[46] Zhou D W, et al. Continual learning with pre-trained models: A survey. Preprint arXiv:2401.16386, 2024. **5:** Response to the question: The experimental results for 50 and 100 tasks across 5 benchmarks are shown in Table I.
Our approach surpasses the second-best competitor by 0.4%~6.4% with an average improvement of 2.0%, demonstrating that **our approach has the ability to handle longer sequences of continual learning tasks**. --- Rebuttal Comment 1.1: Title: Please have a discussion Comment: This paper receives mixed reviews. The authors have provided a detailed response. Please give your reply and check whether there are still unclear points for the authors to clarify. --- Rebuttal 2: Title: Concerns addressed Comment: Dear reviewer rV5K, Thank you very much for reviewing our paper and giving us some good questions. We have tried our best to answer all the questions according to the comments. In particular, we conduct experiments on **long-term continual tasks** across 5 benchmarks, and compare our approach with 6 existing SOTA methods. **We kindly ask the reviewer to examine the PDF for the results and comparisons.** The experimental results in Table I show that our approach remains effective and outperforms SOTA methods including L2P under the long-term CL protocol. This demonstrates that our approach can mitigate the effect of the shrinking gradient subspace and learn new knowledge effectively through the proposed relaxation of the orthogonal projections, even under the setting of longer sequences of continual learning tasks. We sincerely hope that our responses can address all your concerns. Is there anything you would like us to clarify further? Thank you again for your hard work. --- Rebuttal Comment 2.1: Title: Concerns partially addressed Comment: I appreciate the detailed response from the authors and the efforts they made over the past week. The authors have partially addressed my concerns as follows: 1. The proposed method NSP² performs well on long-sequence CL tasks, and by adjusting the hyperparameter $\bar{\eta}$, it can further relax the orthogonality constraint to facilitate learning more new knowledge. 2.
More results from comparison methods have been presented, which enhances the credibility and contribution of the paper. However, I still have some concerns: 1. Continual learning should not just involve learning each task separately to prevent forgetting. Like human learning mechanisms, algorithms should perhaps focus more on how to leverage existing knowledge to quickly and accurately master a new skill, as well as how to use new knowledge to better solve previous tasks, rather than learning entirely separately, like separate multi-task learning. Nevertheless, the orthogonal prompt tuning method proposed by the authors for Vision Transformers also provides a solid theoretical foundation for applying ViTs to continual learning, which is a significant contribution. 2. The results provided in Table III of the PDF seem to be lower for the comparison methods than those reported in the original papers, and the differences are substantial. For example, for 20S-CIFAR100, EASE is reported to achieve 91.51 in the original paper, while the authors report only 85.80. Similar discrepancies are observed for 10S-CIFAR100. I would like to understand if there are any differences in implementation or experimental settings. For these reasons, I have increased my score to 5. If the authors can further address my above concerns, I will consider raising the score further. --- Reply to Comment 2.1.1: Comment: Thank you for your kind reply and support for our work. We are glad that we have addressed most of your concerns. Below is our response to the remaining concerns in your comments. **Reply to Q1:** This is a valuable viewpoint on continual learning. It has always been our pursuit to enable deep neural networks to learn new knowledge better by utilizing the experience of previously learned knowledge like human beings. 
In our future work, we will consider measuring the importance of individual parameters for learning old and new tasks, and combining this scheme with anti-forgetting techniques such as orthogonal projection to enhance knowledge transfer capabilities without forgetting. Specifically, the learnable parameters in a network can be divided into three parts according to their contributions to learning new knowledge and preserving old knowledge by the following process. After finishing training on an old task and before learning a new task, we can use a criterion (e.g., gradients) to **measure the importance of different parameters on the old task and the new task**, respectively. Then we divide those parameters into three sets by: 1) **high importance on the old task**, 2) **high importance on the new task**, and 3) **moderate importance on both old and new tasks**. During training on the new task, different strategies are adopted for these three sets of parameters. 1) For the parameters important to the old task, we use **a completely strict orthogonal constraint** on them to reduce forgetting and maintain stability. 2) For the parameters important to the new task, we **do not perform any constraint** on them to maximize plasticity. 3) For the parameters of moderate importance on both old and new tasks, we use **an appropriately relaxed orthogonal constraint** on them. In this way, the whole model can be trained with enhanced knowledge transfer capability while retaining a good anti-forgetting ability. **Reply to Q2:** Following L2P, the accuracies reported in Table III of our PDF for **ALL the methods** are the "**final average accuracy**" (*i.e.*, 85.80 instead of 91.51 for EASE). Therefore, we would like to emphasize that **the comparison in our table is fair.** Specifically, **EASE reports two different metrics** for accuracy in the original paper, which are referred to as "final average accuracy (FAA)" and "mean average accuracy (MAA)" here.
They correspond to $\mathcal{A}_B$ and $\bar{\mathcal{A}}$ in Table 1 of EASE, respectively. The final average accuracy represents the accuracy over all learned tasks when the model finishes continual training on the last task. It is defined as: $\mathcal{A} _B = \frac{1}{B} \sum _{i=1} ^{B} a _{B,i}$ where $B$ is used to denote the number of total tasks to correspond to the symbols used in EASE, and $a_{B,i}$ is the accuracy of the $B$-th model (*i.e.*, the model after training on the last task) on the $i$-th task’s test data. The mean average accuracy is the mean value of $\mathcal{A} _1, \mathcal{A} _2, \cdots, \mathcal{A} _B$: $\bar{\mathcal{A}}=\frac{1}{B}\sum_{i=1}^{B}\mathcal{A}_i$ Since accuracies on early tasks are usually high under the class-incremental learning protocol, *i.e.*, $\mathcal{A}_1 > \mathcal{A}_2 > \cdots > \mathcal{A}_B$ usually holds, MAA is almost always higher than FAA. This explains why the results in the column $\bar{\mathcal{A}}$ are higher than those in the column $\mathcal{A}_B$ in Table 1 of EASE. For a fair comparison with other methods, we report FAA in Table III, which is widely adopted in existing prompt-tuning-based papers. For a clearer comparison **under these two metrics**, we select those methods which report both FAA and MAA from Table III of the PDF, including: InfLoRA [20], EASE [45] and CPrompt [11]. The results are shown in **Table VII**. **The columns of FAA have been reported in Table III (corresponding to "Acc."), and the columns of MAA are newly added for a comparison under the mean average accuracy metric.** It can be seen that our approach surpasses other approaches in both metrics. In particular, **VPT-NSP2 outperforms EASE by an average of 3.86% under the FAA metric and an average of 3.35% under the MAA metric.** **Table VII**: Comparison with the methods that report both final average accuracy (FAA) and mean average accuracy (MAA).
**The italic values are produced by us due to the lack of official results, while the others are from their corresponding original papers.** Best results are highlighted in bold. ||20S-CIFAR-100|20S-CIFAR-100|10S-CIFAR-100|10S-CIFAR-100|10S-ImageNet-R|10S-ImageNet-R|10S-DomainNet|10S-DomainNet| |:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| |Method|FAA|MAA|FAA|MAA|FAA|MAA|FAA|MAA| |EASE|85.80|91.51|87.76|92.35|76.17|81.73|*78.89*|*84.58*| |InfLoRA|*81.42*|*87.42*|87.06|91.59|75.65|80.82|*81.45*|*88.75*| |CPrompt|*83.97*|*90.08*|87.82|92.53|77.14|82.92|82.97|88.54| |VPT-NSP2|**89.89**|**93.75**|**91.74**|**96.02**|**78.88**|**84.84**|**83.54**|**88.94**| --- Rebuttal 3: Title: Concerns addressed Comment: Dear Reviewer rV5K, We sincerely thank you for the meaningful comments and valuable feedback. We really hope that our responses can address all the remaining concerns. Thank you again for your great help and the many good questions and suggestions, which have greatly helped improve the quality of our paper. Please let us know if you have any further concerns. Thanks very much.
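The two accuracy metrics discussed in this thread can be computed as follows; the per-task accuracies below are hypothetical, not results from the paper:

```python
import numpy as np

# a[b][i]: accuracy of the model after training on task b+1, evaluated on
# task i+1's test set (illustrative numbers).
a = [
    [0.95],
    [0.90, 0.92],
    [0.85, 0.88, 0.91],
]

stage_avgs = [float(np.mean(row)) for row in a]   # A_1, ..., A_B
FAA = stage_avgs[-1]                               # final average accuracy A_B
MAA = float(np.mean(stage_avgs))                   # mean average accuracy

# Early-stage accuracies are typically higher, so MAA >= FAA.
assert MAA >= FAA
```

This also illustrates why the two metrics must not be mixed when comparing methods: MAA is almost always the larger of the two.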
Rebuttal 1: Rebuttal: We thank all reviewers for their valuable feedback, with three reviewers (W3C1, 2TYW and 4B73) strongly supporting our work. We are encouraged by the reviewers' assessments of our paper: - **the idea in this paper is novel and interesting** (by Reviewer W3C1); - **the theoretical proof is solid** (by Reviewer 2TYW); - **providing strong theoretical guarantees** (by Reviewer rV5K); - **substantial experiments demonstrate the effectiveness of the proposed approach** (by Reviewer 4B73). The main concern of reviewer rV5K is that orthogonal constraints may hinder learning new knowledge and degrade performance in long-term continual learning. We address this issue by **conducting experiments with respect to 50 and 100 tasks across 5 benchmarks**. The results demonstrate that our approach remains **effective and superior** even within the context of long-term continual learning. Moreover, our experimental analysis on regular CL benchmarks also verifies that orthogonal constraints **can be relaxed by the proposed plasticity enhancement**, which **helps the model learn new knowledge and achieve better performance**. All questions are addressed in reviewer-specific responses. Additionally, please find the PDF attached with helper tables and figures. These are referenced and described in our individual responses to reviewers. Pdf: /pdf/cd6b45238a2ef6f2f6717b7a0e6836dce9e5ea15.pdf
NeurIPS_2024_submissions_huggingface
2024
Provable Benefit of Cutout and CutMix for Feature Learning
Accept (spotlight)
Summary: This paper offers a theoretical explanation for the effectiveness of two practically useful algorithms, Cutout and CutMix, by applying typical feature learning analysis to multi-patched feature and noise data. The authors present negative results for ERM as a comparison and demonstrate positive results for Cutout and CutMix, including positive margins and near-optimal test error. Strengths: This paper provides the first theoretical analysis of two practically useful data augmentation algorithms: Cutout and CutMix. Weaknesses: This paper applies a smoothed leaky ReLU activation function in the neural network for technical reasons, which differs from the model used in practice. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Can the analysis be generalized to non-smooth neural networks? What technical difficulties will you encounter when generalizing the analysis to non-smooth activation functions like ReLU and leaky ReLU? 2. Why is the near convergence result only presented for the CutMix setting but not for the Cutout setting? 3. Can you provide stronger results about the margin besides its positiveness? What is the order/magnitude of the margin? 4. Why does the result for Cutout hold for any iteration T in an interval, while the result for CutMix only guarantees the existence of an iteration that satisfies the properties? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Same as weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to express our appreciation to the reviewer for your valuable and constructive comments. In the following, we address the points raised by the reviewer. ## **W1 & Q1. The use of smooth activation and generalization to non-smooth activation** We note that our activation function differs from those typically used in practice. However, smooth activation functions are widely employed in theoretical studies to analyze the generalization performance of two-layer neural networks. Numerous works have explored the theory of deep learning in similar settings, as discussed in lines 59–71. Therefore, we believe that studying two-layer networks with smooth activations is a valuable approach for bridging the gap between practice and theory. Our theoretical analysis can be generalized to non-smooth activations, such as ReLU or Leaky ReLU, in the cases of ERM and Cutout. In fact, our proof does not rely on the smoothness of activations in these cases. However, the smoothness of the activation plays a crucial role in the analysis of CutMix. The main difference between the analysis of ERM & Cutout and that of CutMix lies in our approach: for ERM and Cutout, we directly investigate the learning dynamics, whereas for CutMix, we characterize the global minimum and demonstrate that CutMix training can achieve a near-stationary point. To show that the CutMix loss achieves a near-stationary point, we use the descent lemma (Lemma 3.4 in [7]), which is only applicable to smooth objective functions. This is why smoothness is necessary for the CutMix analysis. ## **Q4. Why does the result for Cutmix only guarantee the existence of an iteration that satisfies the properties?** The use of the descent lemma is what distinguishes the conditions on the iterations for CutMix from those for ERM & Cutout. 
The convergence of a smooth function $f(x)$ with optimizing variables $x$ is usually guaranteed by showing $\frac{1}{T} \sum_{t=0}^{T-1} \lVert \nabla f(x^{(t)}) \rVert^2 \leq \frac{C}{T},$ for some constant $C$. Therefore, $\min_{t =0,1, \dots, T-1}\lVert\nabla f(x^{(t)})\rVert^2 \leq \frac{C}{T}$ and it guarantees only the existence of an iteration that achieves a near-stationary point. We would like to emphasize that, even though our theory only guarantees this existence, our numerical validation (Section 5, Figure 1) shows consistent convergence behavior. ## **Q2. Why is the near convergence result only presented for the Cutmix setting but not for the Cutout setting?** The difference between ERM & Cutout and CutMix also results in different convergence criteria. We initially adopted training accuracy as the convergence criterion, following [1]. However, as discussed in lines 245–247, evaluating the training accuracy of augmented data is challenging because it uses augmented data with mixed labels. Therefore, we use the loss gradient as the convergence criterion, which can be guaranteed by the descent lemma. We note that we can also prove that ERM and Cutout achieve near convergence using the descent lemma. However, we adopted perfect training accuracy as the convergence criterion because it provides a more intuitive measure of how well the model fits all training data points. Moreover, due to the monotonic nature of ERM and Cutout training, we can show that the guarantees in our theorems hold for any sufficiently large polynomial time. ## **Q3. Can you provide stronger results about the margin besides its positiveness? What is the order/magnitude of the margin?** In our theoretical analysis, we proved that the model achieves an $\Omega(1)$ margin for test data with learned features. If you are interested, you can check line 802 for ERM, line 996 for Cutout, and line 1155 for CutMix. Thanks for your time and consideration. 
Best regards, Authors --- Rebuttal Comment 1.1: Comment: Dear ZwXm, What are your thoughts after reading the rebuttal and other reviews? Best, AC --- Rebuttal Comment 1.2: Comment: Thank the authors for the response. I will keep my score. --- Reply to Comment 1.2.1: Comment: We appreciate your response. If you have any additional questions, please feel free to ask. Best regards, Authors
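The descent-lemma argument described in this rebuttal — the average squared gradient norm along gradient descent decays like $C/T$, so only the *minimum* over iterates is guaranteed small — can be illustrated with a minimal numerical sketch. The toy objective below is our own choice for illustration, not the CutMix loss.

```python
import numpy as np

# Toy smooth non-convex objective (our own choice, not the CutMix loss).
def f(x):
    return np.sin(x) + 0.1 * x**2

def grad_f(x):
    return np.cos(x) + 0.2 * x

x, eta, T = 3.0, 0.1, 500
sq_norms = []
for _ in range(T):
    g = grad_f(x)
    sq_norms.append(g**2)
    x -= eta * g

# The descent lemma controls the *average* squared gradient norm,
# so only the minimum over iterates is guaranteed to be small:
# min_t ||grad f(x_t)||^2 <= (1/T) * sum_t ||grad f(x_t)||^2 <= C/T.
avg = np.mean(sq_norms)
print(min(sq_norms), avg)
```

The inequality `min(sq_norms) <= avg` is exactly the step that yields existence of a near-stationary iterate rather than a last-iterate guarantee.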
Summary: This work theoretically investigates why patch-based data augmentation methods for image recognition, namely, CutOut and CutMix, improve performance based on the framework by [Zou+ICML23]. Specifically, they showed that when a CNN is trained with CutOut and CutMix, it can focus on rare and extremely rare features while ignoring noise. In contrast, a model trained with standard ERM memorizes noise, which results in poorer performance. These findings align with their numerical results. Strengths: - This work theoretically reveals why patch-based data augmentation, specifically CutOut and CutMix, improves the performance of CNNs. These methods are powerful but so far ad-hoc heuristics, so research towards their theoretical understanding is important. - By comparing ERM, Cutout, and CutMix in a unified framework, the authors highlight the superior ability of CutMix to exploit extremely rare features in data while effectively ignoring noise. - The theory aligns well with the numerical experiments on synthetic data. Weaknesses: - The important notions on features, i.e., common features, rare features, and extremely rare features, are not properly defined, making the manuscript difficult to read. I think improving the description in Section 2.1 would resolve this issue. - The technical contributions against [Zou et al. 23] are not clearly stated in the manuscript. Technical Quality: 3 Clarity: 3 Questions for Authors: - If some features are extremely rare, I think they are less likely to appear in test data. Why does learning them improve the performance? In practice, such features are likely to be attributed to systematic mislabeling. Clearly defining common features, rare features, and extremely rare features may resolve this question. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - Although the checklist says the limitations are discussed in Section 2, I could not find the discussion of limitations in the section. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to express our appreciation to the reviewer for your valuable and constructive comments. In the following, we address the points raised by the reviewer. ## **W1. Confusion on the notions of features** We have noticed that the notions of common features, rare features, and extremely rare features were not clearly explained in Definition 2.1. These features have significantly different frequencies, where common features appear much more frequently than rare features, and rare features appear more frequently than extremely rare features. While we describe this in the Assumptions (Section 2.4), we acknowledge that additional discussion near Definition 2.1 would be helpful to readers. We plan to include a more detailed explanation of these feature notions in Definition 2.1 in our next revision. In the global response, we provide clarification on the notions of features and a more detailed motivation for our data model. We hope this resolves any confusion or misunderstanding. We will address the question raised due to this confusion. ## **Q1. Why does learning extremely rare features improve performance?** As we discussed in the global response, extremely rare features in our setting are still frequent enough to appear in a non-negligible fraction of the training set. These features also have a presence in the test distribution and provide valuable information for correct classification. Hence, learning these features contributes to improved performance. ## **W2. The technical contributions against Zou et al. 2023 are not clearly stated.** We have discussed the comparison with Zou et al. 2023 in lines 123-130 and lines 313-318. While we believe this coverage is sufficient, we provide a more detailed comparison here. - Zou et al. 2023 only consider two types of features (common features, rare features) while we consider three types of features (common features, rare features, extremely rare features).
This is because Zou et al. 2023 investigate two training methods (vanilla, Mixup) while we study three training methods (ERM, Cutout, CutMix). - Zou et al. 2023 consider quadratic activation, whereas we focus on smoothed leaky ReLU activation. While our analysis of ERM shares the same spirit as the analysis of vanilla training in Zou et al. 2023, the detailed proofs differ due to differences in the network architecture. - The main difference between the Mixup analysis in Zou et al. 2023 and our CutMix analysis lies in the approach: Zou et al. 2023 prove the benefit of Mixup by directly investigating its learning dynamics, while we show the benefit of CutMix by characterizing the global minimum of the CutMix loss. Zou et al. 2023 prove that Mixup can learn rare features in the early phase since it is “boosted” by the learning of common features. In contrast, our analysis shows that global minimizers learn all kinds of features, indicating that the benefit of CutMix arises as training approaches convergence. This distinction highlights the differences in the underlying mechanisms: Mixup's benefit appears in the early stage of training, while the benefits of CutMix arise from the later stages of training. ## **Limitations. Could not find the discussion of limitations in Section 2** Our work has limitations related to the neural network architecture, specifically single-neuron leaky ReLU CNNs. We discussed this in lines 137-151, but we will add a Limitations section in the next revision to explicitly address these technical limitations. Thanks for your time and consideration. Best regards, Authors --- Rebuttal Comment 1.1: Comment: We thank the authors for the rebuttal. The authors resolved all my concerns. --- Reply to Comment 1.1.1: Comment: Thank you for your response. We are glad to hear that our explanations were helpful and adequately addressed your concerns and questions. If you have any additional questions, please feel free to ask. Best regards, Authors
Summary: - This paper investigates the effectiveness of patch-level data augmentation techniques, specifically Cutout and CutMix, in training two-layer neural networks. - The study compares vanilla training, Cutout training, and CutMix training using a feature-noise data model. - The findings show that Cutout can learn low-frequency features missed by vanilla training, while CutMix can capture even rarer features, resulting in the highest test accuracy. - The analysis reveals that CutMix enables the network to learn features and noise vectors evenly, offering new insights into patch-level augmentation. Strengths: - This paper provides a theoretical analysis of how Cutout and CutMix (i.e., patch-level data augmentation) improve feature learning. - It specifically explains what features each method learns during the training process, offering deep insights beyond empirical results. Weaknesses: - While this paper provides a theoretical analysis of patch-level data augmentation methods, it only analyzes Cutout and CutMix, leaving other related methods for future work. - The theoretical analysis of this paper is expected to offer insights into practical applications and performance improvement, but it lacks detailed discussion in this regard. - It would be beneficial if the paper could provide more meaningful insights to the readers beyond providing a theoretical analysis, which is the most significant shortcoming of this work. - It is suggested to simplify the content of the “assumptions” summarized in Section 2.4 (with detailed explanations moved to the appendix) and add more experimental content in Section 5, particularly experiments related to CIFAR10. Technical Quality: 4 Clarity: 4 Questions for Authors: - What insights can be gained from the theoretical analysis presented in the paper? Can this help in developing better patch-level data augmentation techniques compared to existing ones? 
- The paper specifies that the NN architecture used is a 2-layer CNN, but shouldn’t it be considered a 1-layer CNN? - Additionally, the weights W exist only for each class individually. Is it not possible to perform a more general analysis with C channels, as done by Shen et al.? Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 2 Limitations: There is no specified section for limitations in this paper. The checklist includes a brief explanation, but the corresponding details are not found in the main paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to express our appreciation to the reviewer for your valuable and constructive comments. In the following, we address the points raised by the reviewer. ## **W1. It only analyzes Cutout and CutMix** As the reviewer would also agree, a complete theory is not built in a single day. We chose Cutout and CutMix as a starting point for understanding patch-level augmentation techniques, and even this first step required a considerable amount of effort. Also, we would like to emphasize that there has been limited theoretical work even on CutMix, as we discussed in Section 1.2. ## **W2&3, Q1. More insights beyond theoretical analysis** We agree that exploring practical implications beyond the theoretical analysis of existing methods like Cutout and CutMix is valuable. We also believe that understanding the underlying mechanisms of these methods can lead to the development of even more effective techniques. With this in mind, we would like to discuss some potential directions for future applications. One practical insight we can offer is related to the choice of cutting size $C$ in Cutout. Our main intuition behind Cutout is that it helps by removing the dominant noise patch, or “shortcut,” which does not generalize to unseen data. Real-world image data contains features and noise across several patches. In practical scenarios, a larger cutting size can be effective in eliminating noise but may also remove important features that the model needs to learn. Thus, there is a trade-off in choosing the optimal cutting size. From this intuition, we believe that for images with a larger portion of background (noise), a larger cutting size is likely to be more effective. In our theoretical analysis, we show that Cutout and CutMix can learn rare features while still memorizing some noise. We believe that improving on the underlying mechanisms we have demonstrated can lead to even more effective techniques. 
Below, we list a couple of potential directions that can not only enable effective feature learning, as Cutout and CutMix do, but also improve these methods by preventing the memorization of noise. - One limitation of Cutout is that it does not always effectively remove dominant noise. As a result, dominant noise can still persist in the augmented data, leading to potential noise memorization. Developing strategies that can more precisely detect and remove these noise components from the image input could enhance the effectiveness of these methods. - The main underlying mechanism of CutMix, as we have outlined in Section 4.2, is that it learns information almost uniformly from all patches in the training data, which allows it to capture even rarer features. However, this approach also involves the memorization of noise, which can potentially degrade performance in real-world scenarios. We believe that a more sophisticated cut-and-paste strategy—one that considers the positional information of patches rather than using a uniform approach—could improve the model’s ability to learn more from patches containing label-relevant features and reduce the impact of label-irrelevant noise. Some recent patch-level augmentation methods, such as PuzzleMix [5] and Co-mixup [6], already use this type of strategy. ## **W4. Suggestion regarding paper organization** We appreciate your suggestion regarding the paper’s organization. We agree that moving some of the experimental results from the appendix to the main text could improve our draft. However, we also believe that fully presenting the assumptions in the main text is important for the completeness of our theoretical results. We will carefully consider how to balance these aspects in the next revision, keeping the page limits in mind. 
## **Q2. The paper specifies that the NN architecture used is a 2-layer CNN, but shouldn’t it be considered a 1-layer CNN?** Whether our network is considered a 2-layer CNN or a 1-layer CNN depends on the perspective. Following the existing results in the literature, we regard it as a 2-layer CNN, where the weights in the second layer are fixed at 1 and -1, making only the first layer trainable. Many other works including [1,2,3] also consider neural networks similar to ours as 2-layer CNNs. ## **Q3. Is it not possible to perform a more general analysis with $C$ channels, as done by Shen et al.?** We believe that a more general analysis with multiple neurons, as done by Shen et al. 2022, is also possible, and we numerically validate this scenario in Appendix A.2, Figure 3. As we discussed in lines 144-151, using ReLU activation requires multiple neurons since neurons can be negatively initialized and remain unchanged throughout training. In contrast, leaky ReLU activation always has a positive slope, ensuring that a single neuron is often sufficient. Therefore, for mathematical simplicity, we focus on the case where the network has a single neuron for each output, and the extension to the multi-neuron case may require some additional techniques. ## **Limitation. There is no specified section for limitations in this paper** Our work has limitations related to the neural network architecture, specifically single-neuron leaky ReLU CNNs. We discussed this in lines 137-151, but we will add a Limitations section in the next revision to explicitly address these technical limitations. We hope that our response clarifies your concerns. We would appreciate it if you could consider reassessing our submission. Best regards, Authors --- Rebuttal Comment 1.1: Comment: Dear pSne, What are your thoughts after reading the rebuttal and other reviews? Best, AC --- Rebuttal Comment 1.2: Comment: Thank you for your response. It alleviated my concerns. I'll raise my rating. 
The provided insights will be very important, along with the theoretical analysis, in the paper. --- Reply to Comment 1.2.1: Comment: Thank you for your feedback and for reconsidering the score. We are glad to hear that our response addressed your concerns. We would also be happy to hear if you have any additional thoughts or suggestions. Best regards, Authors
Summary: The paper aims to provide a novel theoretical insight into the training dynamics of data augmentation methods such as Cutout and Cutmix. It also supports theoretically why Cutout and Cutmix perform better than ERM by showing that ERM training is unable to learn rare and extremely rare features and that ERM can fit perfectly on the training data while being random on the extremely rare data. Thus the key contributions can be summarized as: **Comparative Analysis**: The paper reveals that Cutout outperforms ERM by facilitating the learning of rarer features, which ERM fails to do (Theorem 3.1 and Theorem 3.2). CutMix is shown to achieve nearly perfect performance by learning all features (Theorem 3.3). **ERM Limitations**: The authors propose that the negative results for ERM stem from its tendency to classify training samples by memorizing noise vectors instead of learning meaningful features, particularly when features do not appear frequently enough. **Cutout Benefits**: Cutout mitigates the challenge faced by ERM by removing some of the strong noise patches, thus enabling the learning of rare features to some extent. **CutMix Mechanism**: A novel technique to analyse the training dynamics of CutMix. This technique views the non-convex loss as a composition of a convex function and reparameterization, allowing characterization of the global minimum of the loss. It shows that CutMix forces the model to activate almost uniformly across every patch of inputs, facilitating the learning of all features. Strengths: What I like about the paper: + It addresses an important aspect of better understanding data-augmentation methods like Cutout and Cutmix. + It is theoretically well-motivated. + Provides a simple explanation of common features ≫ dominant noises ≫ rare features ≫ background noise ≫ extremely rare features for ERM training dynamics. Weaknesses: - I believe the writing of the current draft version can be further improved. 
(see questions) - The modelling assumptions about the 2-layer CNN are questionable. (see questions for further discussion) - The theory needs further experimental evaluation (the real-world experiment on CIFAR-10 is uninformative to the story, see questions) Technical Quality: 3 Clarity: 2 Questions for Authors: 1. **Definitions**: The definitions of common, rare and extremely rare features are central to the story and contributions of the paper. However, they are not explicitly defined anywhere in the main draft. I assume there must be some condition on probability $\rho_k$ for a feature to be rare and extremely rare. Defining these in the problem formulation will help improve the readability of the paper significantly. 2. **Model Definition**: The current model definition is not a 2-layered CNN in my opinion since it does not contain a hidden layer. I can summarize the model definition as $f(X)=[1, -1]^T\phi(\mathbf{W}\mathbf{X})\mathbf{1}_{P}$, where $\mathbf{W}=[w_{1},w_{-1}]$ and $\mathbf{X}\in\mathbb{R}^{d\times P}$; this is akin to a single-layer network. 3. **Simulation Evaluation**: About the numerical simulations, the common features are defined with frequency 0.8, rare with 0.15 and extremely rare with 0.05. Is there any reasoning or theoretical justification for doing so? What happens when we change the frequencies of these features to skew them further? Is the assumption that the extremely rare feature frequency is low only in training, and that at test time there exists another distribution? Otherwise, why is the low performance of ERM on extremely rare features negative? Also, for Fig 1 (leftmost plot) it is not obvious why Cutmix is non-monotonic here. It rises and then plateaus; perhaps the authors can change the plot scale to appropriately demonstrate this point. 4. **Real World Evaluation**: I find the real-world evaluation for the theory extremely lacking. 
It need not be extensive with many datasets but in the current CIFAR-10 experiments, there is no intuition from the perspective of common, rare and extremely rare features. A similar experiment to simulations on CIFAR-10 would support the point of the paper much better. 5. **Visualisation of extremely rare features**: What would the visualisation of extremely rare and rare features in the case of CIFAR-10 look like? If possible can authors visualise the features learnt by ERM trained and CutOut and CutMix? This would show that extremely rare features learned by cutmix are indeed interpretable, thus important for generalisation as compared to random noise in the dataset. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The major limitations of the work are in its choice of definitions, namely model definition and definition of extremely rare features. 1. Limitation of model definition: I am sceptical of the current conclusions generalising to networks with more layers since the current model definition is not really even 2 layered. 2. Limitations of extremely rare features definition: Based alone on the frequency of a feature in the dataset, one can not distinguish noise from extremely rare features unless we have access to the data-generating process, which is often not the case in the real world. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
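The network form this reviewer writes, $f(X)=[1,-1]^T\phi(\mathbf{W}\mathbf{X})\mathbf{1}_P$, can be sketched concretely as follows. The dimensions and the leaky slope below are placeholder choices rather than the paper's hyperparameters, and the paper's activation is a smoothed leaky ReLU rather than the plain one used here.

```python
import numpy as np

d, P = 8, 4                       # patch dimension, number of patches (placeholders)
rng = np.random.default_rng(0)
W = rng.standard_normal((2, d))   # trainable first layer, rows [w_1; w_{-1}]

def phi(z, slope=0.1):            # plain leaky ReLU (the paper smooths it)
    return np.where(z >= 0, z, slope * z)

def f(X):                         # X: (d, P) array of patches
    a = np.array([1.0, -1.0])     # fixed second-layer weights [1, -1]
    # a @ phi(W X) @ 1_P: activations summed over patches, combined by +/-1
    return a @ phi(W @ X) @ np.ones(P)

X = rng.standard_normal((d, P))
out = f(X)
print(out)                        # scalar logit; its sign gives the predicted label
```

Whether one calls this one or two layers is the perspective debate in the rebuttal below: only `W` is trainable, while the second-layer weights are fixed.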
Rebuttal 1: Rebuttal: We express our gratitude for your valuable comments. In the following, we address the points raised by the reviewer. ## **W1, Q1. The definitions of features** Please refer to our global response. We describe these features in Section 2.4, but we acknowledge that discussion near Definition 2.1 would be helpful to readers. We plan to include a more detailed explanation of these notions in Definition 2.1 in our next revision. ## **W2, Q2, L1. 2-layered CNN** Whether our network is 2-layer or 1-layer may depend on the perspective. Following existing works [1,2,3] in the literature, we regard it as a 2-layer CNN, where the weights in the second layer are fixed at 1 and -1, making only the first layer trainable. In addition, we agree that extension to a deeper neural network is not straightforward. However, we believe that our work sheds light on the benefits of patch-level augmentation, making it a valuable contribution. We will leave the extension to deeper networks as future work. ## **Q3. Numerical simulations** Our choice of frequencies is intended to highlight the distinctions between the three methods. Our findings suggest that applying Cutout and CutMix lowers the “threshold” for the frequency of a feature’s occurrence required for learning it. When we reduce the frequency of a feature from 1 to 0, initially all three methods can learn the feature. As the frequency decreases, ERM may fail to learn the feature while Cutout and CutMix continue to succeed. At even lower frequencies, only CutMix can learn the feature whenever it appears in the training data. ## **Q3, L2. Confusion regarding extremely rare features** We believe there may be some confusion regarding extremely rare features. We have provided explanations with examples for our framework in our global response, and we encourage you to refer to it. We address remaining questions here: >**Is the assumption that .... 
extremely rare features negative?** Our training and test data distributions are identical. As detailed in our general response, learning more features generally improves performance. Although data with extremely rare features makes up a small portion of the test distribution, learning these features can enhance test accuracy on that data, leading to overall test-time performance improvements. Thus, the low performance of ERM on (extremely) rare features is negative for overall performance. >**Limitation. One can not distinguish noise from extremely rare features** We think there may be some confusion here. Extremely rare features are also label-dependent information that can appear nontrivially many times (albeit relatively rarer) across several data points, while similar or identical noise patches hardly reappear across different data points. ## **Q3. Non-monotonicity of CutMix** In Figure 1, the leftmost plot shows that the curve for CutMix initially rises, then slightly decreases before plateauing. This indicates a non-monotone behavior for CutMix, in contrast to the other methods. Let us clarify why: Due to the use of mixed labels, the CutMix loss has global minimizers, and they evenly learn all kinds of features. In the early stages of learning, the model tends to overshoot in learning common features because of their faster learning speed compared to other features. This initial overshooting leads to a temporary rise and subsequent decrease in the curve. This non-monotone behavior is precisely the reason why we had to devise a novel proof strategy different from ERM and Cutout. ## **W3, Q4, Q5. Real world evaluation** We experimented on CIFAR-10 to support our findings and address the reviewer’s concern. We train ResNet18 using vanilla training without any augmentation, as well as with Cutout and CutMix, following the same experimental details described in Appendix A.1, except using only 10% of the training set. 
This data-hungry setting is intended to highlight the benefits of Cutout and CutMix. We then evaluated the trained models on the remaining 90% of the CIFAR training dataset. The reason for evaluating on the remaining training dataset is that we plan to analyze the misclassified data using the C-score [4], which is publicly available only for the training dataset. The C-score measures the structural regularity of data, with lower values indicating examples that are more difficult to classify correctly. In our framework, data with harder-to-learn features (corresponding to rarer features) would likely have lower C-scores. Since directly extracting and quantitatively evaluating features learned by the models is challenging, we use the C-score as a proxy to evaluate the misclassified data across models trained by ERM, Cutout, and CutMix. Figure 2 in the PDF attached to our global response illustrates that Cutout tends to misclassify data with lower C-scores compared to ERM, indicating that Cutout learns more hard-to-learn features than vanilla training. Furthermore, the data misclassified by CutMix has even lower C-scores than those misclassified by Cutout, suggesting that CutMix is effective at learning features that are the most challenging to classify. This observation aligns with our theoretical findings, demonstrating that CutMix captures even more difficult features compared to both ERM and Cutout. Since directly visualizing features learned by a model is challenging, we instead present data that was misclassified by the model trained with ERM but correctly classified by the model trained with Cutout. In Figure 3 of the attached file, we show 7 samples per class with the lowest C-scores, which are considered to have rare features. Similarly, we also visualize data misclassified by the model trained with Cutout but correctly classified by the model trained with CutMix to represent data points with extremely rare features. 
This approach allows us to interpret some (extremely) rare features in CIFAR-10, such as frogs with unusual colors. Thanks for your time and consideration. Best regards, Authors --- Rebuttal Comment 1.1: Comment: Dear mYpt, What are your thoughts after reading the rebuttal and other reviews? Best, AC --- Rebuttal Comment 1.2: Title: Thanks for the rebuttal Comment: I thank the authors for addressing all my concerns. The rebuttal has cleared my original doubts about the work. Therefore, I raise my score. --- Reply to Comment 1.2.1: Comment: Thank you for your response. We are glad to hear that our response was helpful and resolved your concerns. If you have any further questions, please feel free to ask. Best regards, Authors
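For readers unfamiliar with the two augmentations debated throughout these reviews, here is a simplified sketch of standard Cutout (zero out a random square, possibly removing a noise "shortcut") and CutMix (paste a box from a second image and mix the labels by area). It assumes single-channel images and a fixed square size; the box sampling in the original methods is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

def cutout(img, size=8):
    # Zero a random size x size square (removes one patch of the input).
    img = img.copy()
    H, W = img.shape
    r = rng.integers(0, H - size + 1)
    c = rng.integers(0, W - size + 1)
    img[r:r + size, c:c + size] = 0.0
    return img

def cutmix(img_a, img_b, y_a, y_b, size=8):
    # Paste a box from img_b into img_a; mix labels by the area ratio.
    img = img_a.copy()
    H, W = img.shape
    r = rng.integers(0, H - size + 1)
    c = rng.integers(0, W - size + 1)
    img[r:r + size, c:c + size] = img_b[r:r + size, c:c + size]
    lam = 1.0 - (size * size) / (H * W)   # fraction of img_a kept
    return img, lam * y_a + (1.0 - lam) * y_b

a = rng.standard_normal((32, 32))
b = rng.standard_normal((32, 32))
mixed, y_mix = cutmix(a, b, 1.0, 0.0)
print(y_mix)  # lam = 1 - 64/1024, so y_mix = 0.9375
```

The mixed label is what makes evaluating "training accuracy" of CutMix awkward, as the rebuttals above note when motivating the gradient-based convergence criterion.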
Rebuttal 1: Rebuttal: Dear reviewers, We express our gratitude for your time and valuable comments. Before addressing concerns/questions raised by individual reviewers, we would like to re-emphasize the main intuition behind our theoretical framework and findings. ## **Motivation for feature and noise in our data distribution** We would like to provide motivation for our data distribution and clarify each component of the distribution. The figure in the attached file illustrates the core ideas behind our feature-noise data model, which we believe will help reviewers better understand our approach. The main characteristic of image data is that the input contains both information relevant to the image labels (which we refer to as "features," e.g., a cat’s face) and information irrelevant to the labels (which we refer to as "noise," e.g., the background). The key difference between these two components is that features can appear in other data points, while noise typically does not appear in other (unseen) data since it is independent of the label. This distinction motivates our approach, where features are sampled from a set of fixed vectors and noise is sampled from a Gaussian distribution. Both features and noise can be used to correctly classify training data; however, only features are useful for correctly classifying unseen test data. Thus, learning features is important for better generalization. ## **Clarification on the difference between common, rare, and extremely rare features** Next, we would like to clarify the notions of common, rare, and extremely rare features. Different features appear in data with different frequencies. For example, in a cat and dog classification task, the number of occurrences of a cat’s face and a cat’s tail in the dataset might differ significantly, yet both are relevant for the label "cat". The reason we separate features into these three categories is to highlight the distinctions between the three training methods we analyze. 
Additionally, we emphasize that “extremely rare” features are still likely to appear in a nontrivial fraction of the training data with high probability, as outlined in the second bullet point of Lemma B.2, given the assumptions on hyperparameters in Assumption B.1. We hope this clarification addresses any concerns or misunderstandings expressed by some reviewers. We plan to include these discussions in the next revised version of our paper. Additionally, we provide references that will be addressed in individual rebuttals below. We hope our response helps to resolve any concerns and confusion. Best regards, Authors Reference [1] Ruoqi Shen, Sébastien Bubeck, and Suriya Gunasekar. Data augmentation as feature manipulation. In ICML 2022 [2] Zeyuan Allen-Zhu and Yuanzhi Li. Towards understanding ensemble, knowledge distillation and self-distillation in deep learning. arXiv 2020 [3] Difan Zou, Yuan Cao, Yuanzhi Li, and Quanquan Gu. The benefits of mixup for feature learning. In ICML 2023 [4] Ziheng Jiang, Chiyuan Zhang, Kunal Talwar, and Michael C. Mozer. Characterizing structural regularities of labeled data in overparameterized models. In ICML 2021 [5] Jang-Hyun Kim, Wonho Choo, and Hyun Oh Song. Puzzle mix: Exploiting saliency and local statistics for optimal mixup. In ICML 2020 [6] Jang-Hyun Kim, Wonho Choo, Hosan Jeong, and Hyun Oh Song. Co-mixup: Saliency guided joint mixup with supermodular diversity. In ICLR 2021 [7] Sébastien Bubeck et al. Convex optimization: Algorithms and complexity. Foundations and Trends® in Machine Learning 2015 Pdf: /pdf/27c15ab53545a299e78524f0eeb55fcdbbf9cce3.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper theoretically analyzed the benefits of two data augmentation techniques, Cutout and CutMix, for feature learning of two-layer and two-neuron convolutional neural networks. The authors considered an ideal data distribution in high dimension with one feature patch, one dominant noise patch, and some background noise patches, where the feature patch is generated from three types of features distinguished by their frequency in the population: common, rare, and extremely rare features. This paper showed that classical training without data augmentation achieves perfect training accuracy but performs poorly on rare and extremely rare features, leading to low test accuracy; Cutout training achieves perfect training accuracy and learns rare features better than ERM, but still struggles with extremely rare features; and CutMix training achieves near-perfect performance by learning all features and noise vectors evenly, resulting in the highest test accuracy among the three methods. The paper validated the theoretical findings through extensive numerical experiments and real-world data experiments on CIFAR-10, showing that CutMix achieves the best performance in terms of test accuracy. Strengths: The paper provided a novel theoretical framework for understanding the benefits of Cutout and CutMix, filling a gap in the literature where empirical success lacked a theoretical explanation. The presentation of the main theorems (Theorem 3.1, 3.2, and 3.3) is rigorous and clearly shows the differences in learning dynamics and performance between ERM, Cutout, and CutMix. Weaknesses: 1. The use of a feature-noise data model to analyze the effectiveness of different training methods is restrictive. This feature noise patch data ideally separates the noise and feature orthogonally and enables us to consider the dynamics of the weights in feature and noise directions separately.
However, in general data itself may contain some useful nonlinear features, which may not be characterized by this ideal model. It would be better to provide more motivation for this data assumption and present some real-world datasets exhibiting common, rare, and extremely rare features simultaneously. Besides, this analysis only considered the binary classification setting. 2. The asymptotic choice of the hyperparameters in Section 2.4 is not natural. While analyzing high-dimensional limits is necessary for the theoretical analysis, the practical implications and sensitivity to these hyperparameters should be better addressed. Including a more detailed discussion or empirical validation of these assumptions would be better. Additionally, the paper could provide more guidance on how to choose the hyperparameters (or the scaling) in training and data augmentation for practitioners. 3. There is limited discussion on the computational costs associated with implementing Cutout and CutMix compared with ERM. Including a theoretical analysis of memory usage, time complexity, or other computational costs would help in understanding the trade-offs involved. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In Section 1.2, it would be better to include more recent work on feature learning theory. For instance, some references are below. 2. In Cutout training, based on your analysis of training, is there an optimal choice of hyperparameter $C$? Can we set $C=0$ in Theorem 3.2 to get Theorem 3.1? 3. In Section 2.4, why do you assume the number of data points $n$ is much smaller than the feature dimension? Is it because of the over-parameterized model? And how about the large learning rate $\eta$ case for the training dynamics? Many previous papers [Yang and Hu. 2020, Abbe, et al. 2022, Ba, et al. 2022] have shown the benefits of large learning rates to obtain feature learning. Moreover, how do you choose $K$, the number of orthogonal feature directions? 4.
In (4), a typo in $w^{(t+1)}_s$ 5. The proof strategy of CutMix training is to find the global minimizer directly. Does this strategy also work for ERM and Cutout? Can this method generalize to wider neural networks and more general datasets? 6. In Lemma B.3, the statement is not complete. You define the $\gamma_s^{(t)}$ and $\rho_s^{(t)}$ later for different cases, separately. ------------------------------------------------------------------------------------- Yang and Hu. 2020. Feature learning in infinite-width neural networks Abbe, et al. 2022. The merged-staircase property: a necessary and nearly sufficient condition for sgd learning of sparse functions on two-layer neural networks Ba, et al. 2022. High-dimensional asymptotics of feature learning: How one gradient step improves the representation. Ben Arous, et al. 2021. Online stochastic gradient descent on non-convex losses from high-dimensional inference. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We express our gratitude for your time and valuable comments. We also thank you for bringing to our attention the typo in (4) and the unclear statement in Lemma B.3 (we intended to claim the “existence” of such $\gamma_s^{(t)}, \rho_s^{(t)}$ for each training method). We will fix/clarify this in our next revision. In the following, we address the points raised by the reviewer. ## **W1. Motivation for data** We provide motivations with examples for our framework in our global response. We would appreciate it if you could refer to it. ## **W2, Q2&3. Choice of hyperparameters** We assume the number of data points $n$ is much smaller than the dimension $d$, which corresponds to the over-parameterization regime common in modern neural networks. This allows us to apply high-dimensional probability theory effectively, as you mentioned. The assumptions on hyperparameters, such as the strength of noise and the frequencies of features, are designed to highlight the distinctions between the three training methods by satisfying the inequalities outlined below line 280. The choices for other hyperparameters are made to ensure convergence within our specific setting. These types of assumptions are also considered in several related works [1,2,3]. Also, we do not impose a strict condition on the total number of features. What matters more is the frequency of each feature rather than the total number of features. Any value of $K$ is possible as long as each $\rho_k$ satisfies the given conditions. Even though most choices of hyperparameters arise from technical elements of our proof, we can provide intuition on the choice of the cutting size $C$ in Cutout. Our intuition behind Cutout is that it outperforms by removing dominant noise, which does not generalize to unseen data. Thus, $C\geq 1$ is necessary to ensure that the analysis of training dynamics differs from that with $C=0$. The proof strategy for Theorem 3.2 heavily relies on the condition $C\geq1$.
Hence, setting $C=0$ in Theorem 3.2 does not directly yield Theorem 3.1. In practice, image data contains features and noise across several patches. A larger cutting size can be effective in removing noise but may also remove important features that the model needs to learn. Thus, there is a trade-off in choosing the optimal cutting size. From this intuition, we believe that for images with a larger portion of background (noise), a larger cutting size is likely to be more effective. ## **W3. The computational costs** Since our work does not focus on proposing novel algorithms, we believe that a discussion on computational and memory costs is not essential and should not be considered a weakness of our work. However, we address this concern here for the reviewer's benefit. In the practical implementation of Cutout, a squared region is randomly sampled and this same region is cut from all images within a batch. Similarly, in the implementation of CutMix, a squared region is randomly sampled and then used to cut and paste parts between pairs formed by a data batch and its random permutation. Consequently, the computational and memory costs involved in each iteration of both Cutout and CutMix are comparable to those of vanilla training, differing only by a constant factor. In our theoretical setting, we consider training using full-batch gradient descent for ERM, and gradient descent on the expected loss for Cutout and CutMix. These idealized training methods involve higher computational and memory costs per epoch since they require more augmented data. For example, Cutout requires $\binom{P}{C} n$ data points and CutMix requires $2^{P-1} n^2$ data points. However, we would like to emphasize that this setting is designed for a theoretical understanding; as outlined above, the practical versions are not computationally burdensome. ## **Q1&3.
Recent works on feature learning theory** Thank you for suggesting recent works on feature learning theory and for raising questions related to comparisons with this literature. We were not familiar with these works and have noted that they are somewhat orthogonal to our work due to differences in the definition of “features” compared to our approach and other related previous works. The notion of features in our work refers to label-relevant information contained in the input that is useful for generalization to unseen data. In contrast, the notion of features in the works you suggested seems to relate to the outputs of the last hidden layer (i.e., the representation of a data point learned by the network). This distinction suggests that the concepts of "features" in these studies differ from those in our approach. We believe that a large learning rate has less impact within our notion of feature learning since it does not affect the trend in the learning speeds of features and noise, which is essential as described in Section 4.1. ## **Q5. The proof strategy for CutMix** The proof strategy for CutMix cannot be applied to ERM and Cutout because their training loss lacks a global minimum due to the exponential tail of the logistic loss ($\lim_{z \rightarrow \infty} \ell(z) = 0$). In contrast, the CutMix loss has a global minimum due to its use of mixed labels ($\lim_{z \rightarrow \infty } \ell(z)+ \ell(-z) = \infty$). Since the main idea of our strategy is to consider the training loss as a composition of reparameterization and convex functions, we believe that this technique could be extended to a broader class of architectures, datasets, and training methods involving mixed labels. However, the exact characterization of the global minimum in different settings would require techniques beyond those we have used. Thanks for your time and consideration.
Best regards, Authors --- Rebuttal Comment 1.1: Comment: Dear hATA, What are your thoughts after reading the rebuttal and other reviews? Best, AC
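The loss-tail argument in the Q5 response above, namely that the logistic loss vanishes one-sidedly ($\lim_{z \to \infty} \ell(z) = 0$, so ERM/Cutout loss has no finite minimizer) while the mixed-label combination $\ell(z) + \ell(-z)$ diverges and hence is coercive, can be checked numerically. This sketch is ours, not part of the paper:

```python
import math

def logistic_loss(z):
    """Numerically stable log(1 + exp(-z))."""
    return math.log1p(math.exp(-abs(z))) + max(-z, 0.0)

# One-sided loss vanishes as z grows: pushing margins to infinity keeps
# decreasing the loss, so there is no finite global minimizer.
tail = logistic_loss(100.0)
# Symmetrized (mixed-label) loss grows roughly like |z|: coercive,
# so a finite global minimizer exists.
coercive = logistic_loss(100.0) + logistic_loss(-100.0)
```

The stable form follows from `log(1 + e^{-z}) = log1p(e^{-|z|}) + max(-z, 0)`, which avoids overflow for large negative `z`.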
null
null
null
null
null
null
Graph Diffusion Policy Optimization
Accept (poster)
Summary: The paper studies the problem of learning graph diffusion generative models on arbitrary non-differentiable objectives using policy gradients. Authors argue that the recently proposed DDPO technique doesn't work well on the discrete, graph-related learning tasks and consider a modified objective and a corresponding gradient estimate which they refer to as Graph Diffusion Policy Optimization (GDPO). They show that GDPO performs significantly better on a number of reward functions and datasets. Strengths: ## GDPO seems to be efficient Discrete diffusion models are a valuable class of machine learning models, and being able to train them on non-differentiable objectives might unlock interesting applications, in particular, in optimizing molecular graphs as the authors demonstrated. ## The paper is well-written and easy to follow I can clearly answer most of my own questions about how GDPO works and it should be easy to reproduce main results. Weaknesses: ## Underexplored potential of modern RL techniques While the proposed GDPO is valuable regardless, it is unclear to me that the biased objective / gradient estimate of eager gradients is necessary. Neither DDPO nor this paper explores even the simplest techniques such as actor-critic, PPO / TRPO or more sophisticated versions of importance sampling. DDPO showed good performance on image tasks without such advances and it could well be that it would still work well on graph tasks. ## Lack of bias-variance analysis Authors acknowledge that eager gradients is a biased version of the standard policy gradients but don't provide either a theoretical or empirical analysis of the bias-variance tradeoff between the two methods. In the appendix (line 660) the authors make a strange statement that importance sampling is used to reduce variance. It is not obvious to me at all. Importance sampling can both reduce or increase variance depending on the proposal or the policy generating the trajectory.
What it achieves is that it allows training on experiences generated by policies other than the current policy being optimized. Technical Quality: 3 Clarity: 3 Questions for Authors: My main question is what is the nature of the bias the GDPO introduces? Is there a reasonable objective that is followed by eager gradients? Can we still interpret them as policy gradients in some kind of modified MDP? Figure 4: what is $D_G$ and $D_I$ concretely? Does it make sense to assess $L_2$ distances on discrete objects with categorical features? What happens if you apply GDPO to image diffusion? Does it still work or work better? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See above, I believe the paper needs a clearer discussion of the bias introduced by policy gradients. At the moment, it is not clear what is the connection of GDPO to the reverse diffusion MDP. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
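For reference, the vanilla REINFORCE gradient and the importance-sampled off-policy variant this review refers to can be sketched on a toy categorical policy; everything below is illustrative and unrelated to any implementation in the paper:

```python
import numpy as np

def reinforce_grad(logits, actions, rewards):
    """Vanilla policy gradient: average of R * grad log pi(a) for a
    categorical policy parameterized by logits."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    grad = np.zeros_like(logits)
    for a, r in zip(actions, rewards):
        g = -probs.copy()
        g[a] += 1.0          # grad_logits log pi(a) = onehot(a) - pi
        grad += r * g
    return grad / len(actions)

def is_grad(logits, old_logits, actions, rewards):
    """Off-policy variant: actions sampled under old_logits are reweighted
    by pi_new(a)/pi_old(a), allowing multiple updates per batch; whether
    this raises or lowers variance depends on the proposal."""
    p_new = np.exp(logits - logits.max()); p_new /= p_new.sum()
    p_old = np.exp(old_logits - old_logits.max()); p_old /= p_old.sum()
    grad = np.zeros_like(logits)
    for a, r in zip(actions, rewards):
        w = p_new[a] / p_old[a]
        g = -p_new.copy(); g[a] += 1.0
        grad += w * r * g
    return grad / len(actions)
```

With `old_logits` equal to `logits` the importance weights are all 1 and the two estimators coincide, matching the review's point that importance sampling is about reusing off-policy experience rather than reducing variance per se.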
Rebuttal 1: Rebuttal: Thank you for your insightful review and valuable questions. Below, we respond to the comments in ***Weaknesses (W)*** and ***Questions (Q)***. ***W1: Other RL techniques*** Thank you for your suggestions. Due to the multi-step generation process characteristics of DPMs, directly estimating model gradients from rewards is very challenging. While we acknowledge the theoretical limitations of GDPO compared to DDPO, our results suggest that GDPO is an effective method for discrete DPMs and fills a gap in related research. Exploring other RL techniques, such as actor-critic, PPO, TRPO, or more sophisticated versions of importance sampling, to further enhance GDPO should be an interesting extension, which we leave to future work. --- ***W2&Q1: Analysis on bias*** Thank you for highlighting these limitations. The main bias of GDPO arises from modifying the "weight" term in Eq. (9), which shifts the model's focus more towards the generated results rather than the intermediate process, thereby reducing potential noise. Due to the discrete nature of Graph DPMs, the $x\_0$-prediction and $x\_\{t-1\}$-prediction formulations cannot be related through denoising objectives as in continuous DPMs. This issue also complicates the connection between DDPO and GDPO. We have not yet identified a relevant solution and are still working on it. In our empirical study, we do not observe significant performance variance and tradeoff for GDPO given the current scale of experiments. This may be due to the graph sizes we explored not being sufficiently large. In future implementations, we will incorporate support for sparse graphs to assess GDPO's performance on larger graph datasets and investigate the tradeoff more thoroughly. Regarding the statement about importance sampling, we apologize for any misunderstanding. In line 660, we did not claim that importance sampling techniques can reduce variance. 
Instead, we stated that they can be used to update DPMs multiple times with the same batch of trajectories, which aligns with your understanding, i.e., training on experiences generated by policies other than the current policy being optimized. We will include the above discussion in the final version. --- ***Q2: Clarification on Figure 4*** We apologize for the lack of clarity. $D_I$ and $D_G$ represent the feature dimensions of images and graphs, respectively. For example, if an image has a size of $3 \times 32 \times 32$, then $D_I = 3072$. For a graph, $D_G$ is the product of the number of nodes and the feature dimension of the nodes. Since the L2 norm sums over the feature dimensions, we average over these dimensions to eliminate the influence of dimensionality. We choose the L2 norm to maintain consistency in the metric for comparisons. While we acknowledge that graphs and images reside in different spaces and typically have different representations, we believe the comparison with L2 distance can provide valuable insights into the differences between graph and image DPMs. --- ***Q3: GDPO on images*** Thank you for your insightful question. Our research primarily focuses on graph DPMs, so we did not include experiments on Image DPMs. We attempted to adapt GDPO to Image DPMs but observed no significant advantages. We believe that for continuous DPMs, the policy gradient noise of DDPO is already sufficiently small, and there is no need to adjust the weight term. We will include this discussion in the final version. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response which I found helpful but ultimately not convincing enough for me to raise my score. --- Reply to Comment 1.1.1: Comment: We appreciate your valuable feedback. We will further polish the paper and incorporate the rebuttal discussions into the final revision. Thank you!
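The dimension-averaged L2 distance described in the Q2 response above might look as follows; the RMS-style normalization is our assumption, since the rebuttal only states that the norm is averaged over feature dimensions:

```python
import numpy as np

def dim_avg_l2(a, b):
    """L2 distance normalized by the feature dimension, putting image
    tensors (D_I = C*H*W) and graph tensors (D_G = #nodes * node_dim)
    on a comparable scale regardless of dimensionality."""
    diff = a.ravel() - b.ravel()
    return float(np.linalg.norm(diff) / np.sqrt(diff.size))

# Example: a 3x32x32 image pair flattens to D_I = 3072 dimensions.
d_img = dim_avg_l2(np.zeros((3, 32, 32)), np.ones((3, 32, 32)))
```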
Summary: This paper introduces graph diffusion policy optimization (GDPO), a policy gradient method for optimizing graph diffusion probabilistic models with respect to non-differentiable reward signals. By establishing the connection between a T-step denoising process and a T-step Markov Decision Process (MDP), policy gradient methods can be applied to graph diffusion models. While a previous work (DDPO) reports competitive generation quality for image diffusion models trained with the classical REINFORCE algorithm, the authors empirically observe a convergence issue when applying REINFORCE to graph diffusion models. This issue is possibly due to the increasingly vast space constituted by discrete graph trajectories as the number of nodes in the graph increases. To address this issue, the authors propose GDPO, a modified policy optimization objective. Empirical studies demonstrate the effectiveness of the proposed modified objective. Strengths: **S1.** The proposed methodology is neat and well-motivated. **S2.** The paper is well-written and easy to follow. **S3.** The empirical studies demonstrate the effectiveness of the proposed approach. Weaknesses: **W1.** Some experiment settings are not completely clear from the description. See the questions below. **W2.** The proposed modification coincides with the idea of training a denoising network to predict the original uncorrupted graph rather than perform one-step denoising. Some discussions are expected. **W3.** Classifier-based and classifier-free guidance are two popular approaches for training conditional diffusion models and have been previously explored for graph diffusion models. Some discussions on the potential pros and cons of the RL approach against them are expected. Technical Quality: 3 Clarity: 3 Questions for Authors: **Q1.** For the graph diffusion baselines considered, are they trained using classifier-free or classifier-based guidance?
**Q2.** A key question in understanding the pros and cons of the different conditional generation approaches is sample efficiency. If ground truth reward functions are employed for GDPO training, then similarly we can label the samples generated by non-RL conditional graph diffusion models as extra training samples. The key question is then how the performance of the different models changes against the number of reward function evaluations. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper considers discrete time graph diffusion models. Whether the proposed approach is effective for continuous time graph diffusion models remains unexplored. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful review and valuable suggestions. Below, we respond to the comments in ***Weaknesses (W)*** and ***Questions (Q)***. --- ***W1: Experimental settings*** Thanks for your suggestions. We will continue to polish our introduction to the experimental settings, moving important details from the Appendix to Sec. 6 and providing additional experimental details for clarity. --- ***W2: Comparison with the $x\_0$-prediction formulation*** Thank you for pointing this out. Indeed, our eager policy gradient in Eq. (10), compared to the policy gradient of REINFORCE in Eq. (8), resembles the idea of training a denoising network to predict the original uncorrupted graph rather than performing one-step denoising. However, we note that training a denoising network to predict the original data is fundamentally a matter of *parametrization* of one-step denoising. Specifically, the one-step denoising $p\_\\theta(x\_\{t-1\}|G\_t)$ is parameterized as a weighted sum of $x\_0$-predictions $p\_\\theta(x\_\{0\}|G\_t)$, as described in Eq. (1). The eager policy gradient in Eq. (10) is motivated differently, focusing on *addressing the variance issue* as detailed in Sections 4.2 and 4.3. We will include this discussion in the final version. --- ***W3: Discussion on the potential pros and cons of the RL approach against classifier-based and classifier-free guidance for graph diffusion models*** Compared to graph diffusion models using classifier-based and classifier-free guidance, RL approaches such as GDPO have at least two main advantages: - Compatibility with discrete reward signals and discrete graph representations (L24-30): As guidance for diffusion models is based on gradients, a differentiable surrogate (e.g., property predictors [29, 37]) is needed for non-differentiable reward signals (e.g., results from physical simulations). RL approaches naturally accommodate arbitrary reward functions without the need for intermediate approximations.
- Better sample efficiency: For graph diffusion models with classifier-based or classifier-free guidance, labeled data are required at the beginning and are collected independently of the graph diffusion models. In contrast, RL approaches like GDPO collect labeled data during model training, thus allowing data collection from the current model distribution, which can be more beneficial. We also empirically observe a significant gap in sample efficiency. Please also see our response to ***Q2***. Potential cons of GDPO are discussed in the Limitations, see Appendix A.2. We will include the above discussion in the final version. --- ***Q1: Are the graph diffusion baselines trained using classifier-free or classifier-based guidance?*** The graph diffusion baselines in our experiments, i.e., MOOD and DiGress-guidance, are both classifier-based methods. For these methods, additional regressors for guidance (referred to as property predictors) are trained on graph samples with ground truth rewards. Specifically, noise is added to the input molecular graph so that the predictors can provide correct guidance at all timesteps during the denoising process. For graph diffusion models using classifier-free guidance, we are currently not aware of comparable work and will be happy to include this in future work. --- ***Q2: Sample efficiency and performance comparison*** In our experiments on ZINC250k, we used 100,000 extra samples with ground truth rewards to train property predictors to provide gradient guidance for the MOOD and DiGress-guidance baselines. Note that for GDPO, the total number of reward queries is only 20,000, which is much smaller than that used for guidance. Nonetheless, a significant performance gap remains between these methods and GDPO. This demonstrates that GDPO has much better sample efficiency compared to graph diffusion models based on guidance. We will highlight these results in the revision.
--- Rebuttal Comment 1.1: Comment: Thank you for your response, which has addressed most of my questions. I've increased the "Soundness score" from 2 to 3 and the rating from 5 to 6. --- Reply to Comment 1.1.1: Title: Thank you for your support and raising the score Comment: We greatly appreciate your valuable feedback and the score improvement. We will further polish the paper and incorporate the rebuttal discussions into the final revision. Thank you!
Summary: This paper introduces Graph Diffusion Policy Optimization (GDPO), a novel approach to optimize graph diffusion models for arbitrary objectives using reinforcement learning. The key contributions are: 1. Formulating the denoising process of graph diffusion probabilistic models (DPMs) as a Markov decision process and proposing an "eager policy gradient" method tailored for graph DPMs to address high variance issues in standard policy gradient approaches. 2. Demonstrating state-of-the-art performance on various graph generation tasks, including general graph generation and molecular graph generation with complex objectives. The authors show that GDPO significantly outperforms baseline methods, including other graph generation techniques and adaptations of diffusion model optimization approaches from the image domain (e.g., DDPO). Strengths: 1. Novelty and Originality: Introduction of the "eager policy gradient" method, to address the high variance issues encountered with standard policy gradients in graph diffusion models. Despite the lack of theory supporting it, it's a clever solution to a significant challenge in optimizing these models for arbitrary objectives. 2. Clarity: - Clear problem formulation: The authors provide a well-structured explanation of the challenges in optimizing graph DPMs and why existing methods fall short. - Effective visualization: Figure 1 offers a clear overview of the GDPO method, aiding understanding of the approach. - Detailed ablation studies: The paper includes thorough analyses of different components and configurations of GDPO, which helps clarify the contribution of each aspect of the method. 3. Significance: - Strong performance improvements: GDPO demonstrates substantial gains over state-of-the-art baselines across various graph generation tasks. For example, in molecular graph generation, it achieves up to a 19.31% improvement in hit ratio for generating effective drugs. 
- Sample efficiency: The method achieves good results with relatively few queries (e.g., 1/25 of the training samples), which is crucial for applications where reward evaluation may be computationally expensive, such as drug discovery. - Broad applicability: GDPO is flexible and can be applied to a wide range of graph generation tasks with complex, multi-objective reward functions. This versatility enhances its potential impact on the field. 4. Technical Quality: - Thorough experimentation: The authors provide extensive experiments on both general graph generation and molecular graph generation tasks, lending credibility to their claims. - Careful analysis of baseline methods: The paper includes a detailed study of DDPO (a related method for image diffusion models) failing on graph DPMs, which strengthens the justification for GDPO. - Consideration of practical aspects: The authors address important practical considerations such as the impact of different reward weightings and the number of trajectories used for gradient estimation. Weaknesses: - Limited theoretical analysis: While the eager policy gradient is empirically effective, the paper lacks a rigorous theoretical treatment of its properties, particularly regarding the bias-variance trade-off. - The paper would benefit from a comparison to other RL-utilizing graph generation methods, particularly MolGAN https://arxiv.org/pdf/1805.11973 , which also applies RL techniques to molecular graph generation. - Scalability concerns: The paper does not explore the method's performance on very large graphs (e.g., 500+ nodes), leaving questions about its scalability unanswered. - Limited exploration of failure cases: While the authors provide a failure case related to novelty optimization, a more comprehensive exploration of scenarios where GDPO struggles would provide valuable insights into its limitations. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How sensitive is GDPO to the choice of reward function? 
Are there certain types of rewards or objectives that are particularly challenging for the method (e.g., very sparse, high dynamic range, noisy...)? 2. I'd like to suggest an important ablation study on highly ambiguous graphs to better understand the limitations and robustness of GDPO. Specifically, I propose constructing a "noisy-tree-with-planted-motifs" dataset with the following process: --- 1. Generate a base tree structure with parameters: - Tree depth - Minimum fan-out factor - Maximum fan-out factor - Binned beta variation across the fan-out for a given node 2. Plant a tree of cliques as an expansion of nodes at a chosen height in the base tree. This planted structure should have parameters: - Number of rings of cliques - Ring size and clique count 3. Apply a noisy rewiring process towards an Erdős-Rényi random graph with the same number of nodes and edges as the base graph. This process should take as parameters: - Number of rewiring steps - starting layer of the subgraph to be noised (starting at a given layer of the tree) The noisy rewiring process would involve: a) Randomly selecting an edge b) Checking whether to remove it based on the probability of an edge existing in the target Erdős-Rényi graph c) Selecting a random node pair to potentially add an edge d) Repeating this process for the specified number of steps I suggest ablating over: 1. The number of rewiring steps 2. The size of the subgraph to be noised 3. The overall graph size 4. The height at which the tree of cliques is planted 5. (optional) clique size and ring size My hypothesis is that GDPO would struggle to maintain the planted tree of cliques structure and the overall tree structure as the noise level and graph size increase, as it would slowly fill up the capacity of the model with combinatorial structures. This ablation study would provide valuable insights into GDPO's performance on more complex and ambiguous graph structures. 
Questions I'd like answered through this ablation study: 1. How does GDPO's performance change as the noise level (number of rewiring steps) increases? 2. What is the impact of graph size on GDPO's ability to maintain the planted tree of cliques and the overall tree structure? 3. Is there a critical point in the noise level or graph size where GDPO's performance significantly degrades? 4. How does the height at which the tree of cliques is planted affect GDPO's ability to maintain this structure? (i.e., how well does it maintain long-range dependencies) 5. How does GDPO compare to baseline methods on these more challenging graph structures? 6. Can you identify specific types of motifs or structures within the planted tree of cliques that GDPO struggles to maintain or generate in these noisy environments? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I think the limitations in the appendix are acceptable Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments and valuable suggestions. Below, we respond to the concerns raised in ***Weaknesses (W)*** and ***Questions (Q)***. --- ***W1: Theoretical Analysis, Scalability, and Failure Cases*** Thank you for highlighting these points. Despite considerable efforts, theoretical analysis of the eager policy gradient remains challenging due to the discrete nature of Graph DPMs. We will continue to address this in future work. While GDPO shows significant efficiency, its scalability to larger graphs is constrained by the inherent limitations of graph DPMs, particularly concerning GPU memory. In future implementations, we aim to introduce support for sparse graphs to enhance scalability. Regarding failure cases, in addition to the case mentioned in Section 6.3, we believe that potential failure cases likely arise from tasks with very sparse reward signals. We will add further discussion to the final paper. --- ***W2: Comparison with RL-based graph generation methods*** Thank you for your suggestion. We discussed MolGAN in the Introduction (line 32). We have also compared GDPO with several representative and leading RL-based Graph Generation Methods, such as GCPN, REINVENT, and FREED. These methods optimize graph neural networks to generate molecules with specific properties by designing molecule encoding schemes and utilizing modern RL techniques. --- ***Q1: Discussion on reward selection*** In Appendix A.5, Table 5, we investigated the sensitivity of GDPO to different rewards. The results demonstrate that adjusting the weights of sub-rewards does not significantly impact model performance, indicating that GDPO has a certain degree of robustness to reward variation. However, very sparse rewards generally pose a challenge. As detailed in Algorithm 1 in Appendix, GDPO normalizes the received rewards. 
For very sparse rewards (e.g., most rewards are zero and a few are one), the normalized values can become very large, leading to instability in optimization. To mitigate this issue, we follow DDPO in clipping gradients and discarding gradients generated by extremely large rewards. However, for extreme cases, these techniques may also fail. We will include further discussion on this topic in the final version. --- ***Q2: Ablation study on noisy tree with planted motifs*** Thank you for sharing the idea. Based on your proposed setting, we conduct additional experiments. Due to time constraints, we do not perform extensive parameter exploration. Additionally, we made some simplifications to facilitate implementation and better explore the key factors. We first generate a tree and then connect a clique to the nodes of the tree, performing a specified number of rewrite operations as suggested. Based on the number of rewrite steps, graph size, and clique position, we generate multiple datasets, each containing 400 samples. Of these, 256 samples are used for training Graph DPMs, with the remaining samples allocated for validation and testing. In $\\textrm{\\color{blue}Figure A}$ of the rebuttal PDF, we present some examples. $\\textrm{\\color{blue} Figure A(a)}$ illustrates a tree structure with a clique of size 4. When the number of rewrite steps is 3, $\\textrm{\\color{blue} Figure A(d)}$ demonstrates that the overall structure of the samples is disrupted. After training the Graph DPMs, we apply GDPO. The model receives a reward of 1 when it generates a tree with a clique; otherwise, the reward is 0. **Rewrite Steps** In $\\textrm{\\color{blue} Figure B(a)}$, we demonstrate GDPO's performance across different rewrite steps, with four curves representing steps ranging from 0 to 3. Despite a notable decrease in the initial reward as the number of rewrite steps increases, GDPO consistently optimizes the Graph DPMs effectively to generate the desired graph structure. 
**Graph Size** In $\\textrm{\\color{blue} Figure B(b)}$, we gradually increase the number of nodes from 16 to 40. The results show that graph size affects the initial reward but does not impact GDPO’s optimization performance. **Critical Point** In the current Ablation Study, we do not observe any clear change points. This may be due to the limited range of parameters explored. We will include additional experiments in the final version to address this. **Clique Position** We experiment with inserting the clique at different levels of the tree but find no significant difference. We believe this is because the position of the clique does not affect the initial reward of the Graph DPMs, leading to similar optimization results with GDPO. **Comparison with Baseline** In $\\textrm{\\color{blue} Figure B(c)}$, we compare GDPO with DDPO. The results, consistent with those in Figure 2 of the paper, reveal a clear distinction between GDPO and DDPO in handling challenging data generation tasks. **Challenging Motifs** Based on the current results, we do not observe such behaviors. However, as indicated by the results in Table 1 of the paper, generating planar graphs with specific macro-topological features is quite challenging for GDPO. This is due to the model’s need to perform global summarization of the graph structure, and the rewards are typically sparse. Thank you very much for your suggestions. We will include the above discussions in the final version of the paper. --- Rebuttal Comment 1.1: Comment: Thank you for performing the ablation study and addressing my other points. I am surprised and intrigued by the results of the ablation study and am now curious how it would look when investing a larger amount of compute in deeper/wider graphs, but I fully understand this is not feasible in this round of investigation.
Since I already had the highest score across reviewers, I will retain my score as is for now, but I think this and the other additions have made the paper stronger and I'm now quite comfortable with my score. --- Reply to Comment 1.1.1: Title: Thank you for your feedback Comment: Thank you for your timely response and for acknowledging our efforts. We appreciate your positive feedback, especially regarding the strengthened contribution of the ablation study. We will further explore deeper/wider graphs and incorporate these findings, along with the rebuttal discussions, into the final revision. Thank you once again!
null
null
Rebuttal 1: Rebuttal: Dear Reviewers, We thank all reviewers for their constructive feedback, and we have responded to each reviewer individually. We have also uploaded a **Rebuttal PDF** that includes: $\\textrm{\\color{blue}Figure A}$: Some visualizations of graph data generated based on Reviewer gTVc's suggestions. $\\textrm{\\color{blue}Figure B}$: Results of the ablation study on the synthetic data. Pdf: /pdf/7c55b784ab439788c9545f16efccfad0c66e5dc7.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Finding Transformer Circuits With Edge Pruning
Accept (spotlight)
Summary: The paper proposes Edge Pruning, a novel algorithm for circuit finding. They claim that it compares favourably to prior methods on GPT-2 small in terms of circuit metrics like faithfulness. They also claim it scales to the 13B model size. Finally, they apply their model to circuit-finding in a 13B model, and provide preliminary analysis of instruction-following and in-context-learning capabilities. Strengths: Originality: Excellent. The approach taken to introduce learnable parameters to determine whether to include edges in a circuit is a significant departure from prior work, and opens up exciting new avenues for future research in circuit discovery. Quality: Good. The experiments conducted are reasonable and the analysis supports the claims made. In particular, it's highly promising that Edge Pruning can recover ground-truth TracR circuits, and outperforms ACDC and EAP by a significant margin on both KL-divergence and logit difference. Some exceptions are discussed in "Weaknesses" below. Clarity: Fair. The writing was overall clear and flowed well. Specific technical details are not present at time of review, discussed in "Questions" below. Significance: Excellent. Scalable and effective circuit discovery methods open the door towards accurate interpretability for commercial-sized language models, bringing us closer towards useful applications of interpretability at large. Weaknesses: 1. Circuits are trained to minimize KL divergence, but evaluated using the logit difference. It would be useful to discuss the differences between KL divergence and logit difference, and comment on why they are chosen for training / evaluation respectively. 2. When evaluating baselines, this paper used KL divergence as the objective for both EAP and ACDC. However, both EAP and ACDC originally use logit difference as the optimization objective. Therefore, I am concerned that this is not an apples-to-apples comparison with prior work. 3. 
The claim that Edge Pruning outperforms EAP / ACDC in GPT-2 small is based primarily on circuit metrics. However, circuit metrics may be misleading. It would be useful to directly compare the nodes and edges found by each method, and report some graph metrics such as node / edge IoU. I would be particularly interested in whether Edge Pruning recovers components of an IOI circuit previously found through manual analysis: https://arxiv.org/abs/2211.00593 4. Similar to the above, there is little / no analysis of circuits found in the 13B model. While it is true that a smaller circuit is generally more interpretable, it is unclear to what extent we can understand the circuits found in the 13B model. It would be useful to highlight specific nodes / edges in the 13B model that seem human-interpretable. As a stretch goal, it would also be useful to formulate and verify hypotheses via standard circuit analysis techniques. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Do you see edge pruning as a suitable technique for unsupervised circuit discovery? 2. Were there other tasks you tried in the CodeLlama-13B model? 3. Other than interchange interventions from a corrupt prompt, did you try other kinds of ablations? (e.g. mean ablation, random-sample ablation, zero ablation) 4. Did circuit-discovery in the 13B model lead you to novel insights about how the model was performing computation? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: The authors have adequately acknowledged the limitations: 1. Requiring much more GPU memory (32 H100s for Edge Pruning) 2. Being slower than EAP for small dataset sizes The authors have also acknowledged that circuit metrics may be misleading. It would be good if they could address this limitation as discussed in "Weaknesses". Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review and feedback. We are happy to find that you found our method original, our experiments convincing and the potential impact high. We respond below to some of the points raised in the review. > KL divergence v/s Logit Difference for evaluation We actually evaluate all methods on both KL divergence and Logit Difference, as well as additional metrics. Specifically, we evaluate the methods on KL divergence in Figure 2 and Logit Difference in Figure 3. Additional faithfulness (Exact Match, Figure 5) and accuracy (Figure 6) metrics are provided in Appendix C. > Training with KL divergence instead of Logit Difference for ACDC and EAP The ACDC algorithm actually uses KL divergence as the target metric. In Appendix C of the ACDC paper [1], they justify this choice by showing that Logit Difference (or any task-specific metric) can be over-optimized. EAP acknowledges that [1] recommends using KL divergence as the metric as well (Section 3.3 of [2]). However, as 0 is the global optimum for KL divergence, they (EAP) found that the gradient and thus their scores were the zero vector at that point, leading them to use task-specific metrics. It is therefore preferable to use KL divergence as the target when possible, as Edge Pruning does. > KL divergence v/s circuit overlap metrics Our choice of evaluation was motivated by [3], which discusses at length why faithfulness metrics are more robust and preferable as evaluation metrics as compared to circuit overlap metrics. In addition, overlap metrics are obtained against manually reverse-engineered circuits, which might not exist yet (e.g. for GP), be incomplete (e.g. IOI – only attention heads were analyzed), not be fine-grained enough (both IOI and GT manual circuits only find important nodes, not edges between them) or even inaccurate (it is difficult to evaluate second order effects like the impact of removing A on the importance of B). 
Nonetheless, we agree that including these metrics can offer a more rounded comparison. We show the ROC plots for Edge Pruning and ACDC on IOI and GT in the PDF attached with the global response. Edge Pruning achieves a slightly higher AUC than ACDC on IOI, and a slightly lower AUC on GT. We will also include these results in the next draft. > Analysis of circuits in the 13B model / Novel insights thereof Thanks for your suggestion! Please refer to our global response for our additional analysis of the circuit in the 13B model. **Questions** > Do you see Edge Pruning as a suitable technique for unsupervised circuit discovery? We are not quite sure what kind of unsupervised circuit discovery you are referring to in this context. It would be great if you could clarify! In the paper, our method is trained with the KL loss and technically does not need labels during training. > Did you try other tasks with CodeLlama-13B? Originally, we also performed experiments with Dyck Languages and Object Counting from BBH with CodeLlama-13B and Llama-2 13B. However, we found that (1) the instruction-prompted performance was usually very low for these tasks, and (2) Llama-2 generally performed worse than CodeLlama on the algorithmic BBH tasks. Hence, we settled upon Boolean Expressions and CodeLlama for the case study. > Did you try other ablations (mean, zero, random sampling, …)? That is a good question. We settled on interchange ablations relatively early, and haven’t run the main experiments with other forms of ablations yet. We will include more results in a future version. [1] “Towards Automated Circuit Discovery for Mechanistic Interpretability”, Conmy et. al., NeurIPS 2023 [2] “Attribution Patching Outperforms Automated Circuit Discovery”, Syed et. al., NeurIPS 2024 ATTRIB Workshop [3] “Have Faith in Faithfulness: Going Beyond Circuit Overlap When Finding Model Mechanisms”, Hanna et.
al., COLM 2024 --- Rebuttal 2: Comment: Thank you for your detailed responses and for including additional metrics, which have addressed most of my concerns with the paper. I still think it's important to elucidate and analyze an actual circuit using the proposed methodology, as quantitative measures could lead to illusions. (see: https://transformer-circuits.pub/2024/qualitative-essay/index.html). Nonetheless, I understand that circuit analysis is difficult, and based on the empirical evidence available and intuition, my expectation would be quite high that this method leads to interpretable circuits. I strongly recommend the authors to publish the circuit found in Llama-13b in order to aid reproducibility and future work. Overall, I have decided to keep my current score of 7, recommending acceptance to NeurIPS.
Summary: In this paper, the main focus lies in finding "transformer circuits" to perform a variant of structured pruning on transformers. While generic structured pruning removes neurons, the authors claim that it is a too-coarse approach and propose "edge pruning", where one output wired to more layers can be wired just to a subset of them. The method to achieve this is borrowed from a famous paper in the pruning community [Louizos, 2018]. The empirical results show the effectiveness of the method. Strengths: - The approach proposed is simple and I can not find a reason why it should not work in general, even beyond transformer networks. - The presentation is overall clean and fluid, despite a very limited related works section. Weaknesses: - The authors miss a relevant segment of the structured pruning variant named "channel pruning". Essentially, instead of pruning filters, you can prune input channels to a (convolutional) neural network. Of course, in a feedforward model this results in filter pruning from the previous layer, but already in residual neural networks this is not necessarily true and discovers "circuits". The key factor that allows the proposed approach to work lies in the massive presence of residual branches in modern architectures (in this case, transformers). I reference [A] as one of the most representative works, but a huge effort has been conducted by the pruning community around this topic in the last 6 years. As such, the novelty claims should be massively downscaled. - Besides, it is unclear what would prevent any practitioner to re-adapt any structured pruning approach along the input dimension. As such, many more comparisons in the experiment section (with either channel pruning approaches or re-adapted structured pruning ones) should be included. - Furthermore, the proposed approach is very similar to a NAS approach where sub-networks are extracted from a supernet (in your case, the full transformer). 
A positioning with respect to these methods, and an empirical comparison in terms of search complexity and training cost, are also missing. - Following from the previous points, given that the regularizer function used to determine the subnet is borrowed from [Louizos, 2018], I fail to see the technical novelty in this work. - Besides, across the paper the way the authors address "structured" sparsity is too vague and generic, as is the fact that they refer to their approach as "Edge Pruning", which should be "different" from unstructured pruning. In reality, unstructured pruning does remove edges (though not in the sense of the paper), as each parameter models a connection between the previous and current layer. [A] He, Yihui, Xiangyu Zhang, and Jian Sun. "Channel pruning for accelerating very deep neural networks." Proceedings of the IEEE international conference on computer vision. 2017. Technical Quality: 3 Clarity: 2 Questions for Authors: The authors are invited to provide a commentary on the differences with the aforementioned literature and to comment on the technical novelty of their work, besides providing answers to the raised points. Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 1 Limitations: Limitations are properly discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for pointing us to the literature on Channel Pruning and NAS! We agree this is relevant and will include a discussion of these papers in our next draft. However, we believe there has been a misunderstanding regarding the goal and key contributions of our paper, which we would like to clarify below. To summarize, our goal is not to develop a novel pruning method. Our main contribution is to *adapt* existing pruning techniques to a problem in transformer interpretability (circuit finding). In particular, to our knowledge, we are the first to prune edges rather than nodes in the computational graph to find circuits, and to scale automated circuit discovery to 10B+ models. As per these goals, the focus of our experiments is in comparing a standard pruning approach to established baseline methods for the circuit finding problem, not in comparing different pruning methods. Our method is tailored to a particular formulation of “transformer circuits” that has been widely used in the interpretability community [1,2,3,4] and is based on cross-layer edges between interpretable components. As you noted, common pruning techniques produce other kinds of “circuits”, but we highlight below why such circuit definitions have not been as widely adapted in mechanistic interpretability. > Channel Pruning with residual connections discovers “circuits”... Any practitioner can re-adapt any structured pruning approach along the input dimension. In the presence of residual connections, pruning the channels of the input would only allow one to either remove *all* the edges from that channel (remove channel) to future layers or *none* of them (keep channel). Edge Pruning isolates the exact contribution between any two layers, which gives interpretability practitioners a much more precise signal for understanding the internal control flow of the model. 
Another difficulty with pruning the channel or hidden size dimension is that the resulting subspaces are difficult to interpret and suffer from “polysemanticity” [5]. Therefore, interpretability research tends to focus on larger components such as attention heads and retain all hidden dimensions, which can be made more interpretable with tools such as LogitLens [6, 7]. In the common response, we discuss how Edge Pruning could be extended to prune more semantic features in the hidden states of the model. > Given that the regularizer function to undermine the subnet is borrowed by [Louizos, 2018], I fail to see the technical novelty in this work. By adapting pruning techniques to the circuit finding problem, we contribute several technical changes to prior methods. These include defining masks for edges between components across layers; disentangling the residual stream from each component to each downstream component; and replacing pruned components with counterfactual activations, rather than 0's. Note that these contributions would not make sense in the context of the standard pruning literature. > [The authors] refer to their approach as "Edge Pruning" that should be "different" from unstructured pruning. Unstructured pruning does remove edges (not in the sense of the paper), as each parameter modelizes a connection between the previous and current layer We can see how the name “edge pruning” could technically apply to many well-known techniques commonly referred to as “unstructured pruning”. However, we hope that our method “Edge Pruning” will be primarily associated with transformer interpretability, and therefore not cause any confusion in the pruning/model compression community. [1] “Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 Small”, Wang et. 
al., ICLR 2023 [2] “How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model”, Hanna et al., NeurIPS 2023 [3] “Towards Automated Circuit Discovery for Mechanistic Interpretability”, Conmy et. al., NeurIPS 2023 [4] “Circuit Component Reuse Across Tasks in Transformer Language Models”, Merullo et. al., ICLR 2024 [5] “Feature visualization”, Chris Olah, Alexander Mordvintsev, and Ludwig Schubert, Distill 2017 [6] “Interpreting GPT: The logit lens”, nostalgebraist, LessWrong 2020 [7] “Eliciting Latent Predictions from Transformers with the Tuned Lens”, Belrose et al., 2023 --- Rebuttal Comment 1.1: Comment: After reading the other reviewer's comments, and the author's rebuttal, I still feel that this work has very limited technical novelty (also including the further details provided, which can be simply summarized as codework + replacement with counterfactual activations, that are also borrowed from another work!). I disagree with the authors when they say that structured pruning "either removes all the edges from that channel (remove channel) to future layers or none of them (keep channel)" - it just depends on whether it is applied at the output of a layer (in that case, the authors are right) or at the *input* of a layer (in that case, you would obtain *exactly the same effect as observed here*). A comparison with these approaches would be a must. In general, I also find the claims about interpretability, as the same authors declare, difficult to handle, due to the largeness of the models under exam. All of the issues are in my opinion unaddressable at this stage. Although raising my score, my evaluation is still a reject. --- Reply to Comment 1.1.1: Title: Author Response Comment: Thank you for responding! 
> Comparison with [structure pruning] is a must The suggested baseline of applying structured pruning to the inputs of a layer is part of our proposed method, which also separates the contribution from each previous layer. We re-iterate that pruning individual neurons/channels would not be a useful comparison, since the resulting circuits would not be desirable for mechanistic analysis of the model due to the sheer volume and polysemanticity of neurons, and lack of meaningful "counterfactual" channel features. > I also find the claims about interpretability, as the same authors declare, difficult to handle, due to the largeness of the models under exam. It seems to us that this is a judgement on the whole research area rather than on the merit of our work and, as such, we would ask the reviewer to re-consider the confidence score of their review. We point to the published and widely cited ACDC [1] - a direct predecessor to our method which produces circuits of the same granularity - to argue that automatic circuit finding is indeed a useful and promising approach in transformer interpretability. [1] "Towards Automated Circuit Discovery for Mechanistic Interpretability", Conmy et. al., NeurIPS 2023
Summary: The authors propose "Edge Pruning" as an effective and scalable method for automatic circuit discovery. Edge Pruning consists of learning a binary mask over the edges of the computation graph in a transformer neural network. Edge pruning performs favorably compared to the prior art and the authors demonstrate how it can successfully be used to find new circuits in a case study. Strengths: - The proposed Edge Pruning method is novel and a worthy contribution to the literature. - The comparisons to the prior are are thorough. - The paper is well-written and well-organized. The details of the Edge Pruning method are explained well. - The case study in which the authors find a circuit in CodeLlama-13B provides excellent evidence of the utility of Edge Pruning. The circuit that they uncovered will be able to be used as an extra "ground truth" circuit that all future work can use. Weaknesses: - It would have been useful to see whether or not any prior methods for "automatic" circuit discovery could have found the same or a similar circuit in CodeLlama on the same task. Technical Quality: 3 Clarity: 3 Questions for Authors: - Did you attempt to use any other circuit discovery methods on the CodeLlama task? - You acknowledge in the limitations that it is difficult for a human to fully interpret these computational graphs with hundreds of edges, but (and this is not meant as a criticism) how hard have you tried? - Could Edge Pruning scale to neuron or subspace-level components? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: The outputs of circuit discovery techniques such as Edge Pruning are large computation graphs that are not, in and of themselves, interpretable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for leaving a thoughtful and constructive review. We are delighted to learn that you found our method novel, and our case study a good demonstration of its utility. We take this opportunity to respond to some of your questions here. > Other methods on CodeLlama We couldn’t run ACDC on CodeLlama as it was prohibitively expensive. Although EAP can work with CodeLlama in theory, we struggled to run it as its current implementation does not interface easily with scalable frameworks like FSDP/DeepSpeed. We expect that an appropriate implementation of EAP will remain efficient for CodeLlama 13B. > Human interpretation While it is challenging to reverse-engineer a large circuit, we believe parts of it can still be understood with enough effort. We discuss this point further in the global response. > Can Edge Pruning scale to neuron or subspace-level components? Thanks for the question! Please refer to the global response for how Edge Pruning may be used in this setting. --- Rebuttal Comment 1.1: Comment: Thank you for your responses. I remain excited about this submission and am keeping my score.
Summary: The paper proposes a method to automatically discover circuits within trained transformer models that are responsible for the model's behavior on a specific task. The automatic discovery of those circuits forms the first step to interpreting the trained model. The transformer is visualized as a graph with each attention head containing 4 nodes: 3 nodes for the key-, query-, and value-producing neurons with incoming edges, and 1 node for the output with outgoing edges. For a specific task, the optimization problem is formulated as minimizing the difference between the output of the full model and the output of the circuit given original and corrupted inputs. The use of corrupted inputs acts as a means to identify and suppress network components that are not important for the eventual circuit output. Previous methods include greedy search and linear approximation-based edge scoring for circuit discovery. The authors propose an edge pruning based method where each edge is assigned a binary mask value, relaxed to be continuous for optimization. This mask value for a specific edge is applied to the original input, while the corrupted input receives the complement, before being passed on to the node. The loss used for the output is the KL divergence between the circuit output and the output of the full model. After optimizing, the continuous mask values are converted into binary values by simple thresholding. Thus, a sparse graph within the overall graph is discovered that faithfully captures the behavior of the full model on the specific task. The authors evaluate the proposed method on a wide range of tasks and metrics. The results show that the circuits uncovered are more faithful to the full model and perform better as compared to circuits discovered through previous methods. Additionally, the circuits are far sparser as compared to the previous methods. Finally, the authors show that the method can be scaled to a 13-billion-parameter model, showcasing the applicability of the method.
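The masking scheme described in this summary can be illustrated with a tiny, self-contained sketch (plain Python, not the authors' implementation; the real method learns the relaxed masks by gradient descent under a sparsity penalty):

```python
import math

def edge_mix(z, clean, corrupt):
    # Per-edge interpolation: a pruned edge (z -> 0) feeds the *corrupted*
    # activation to the downstream node, rather than a zero vector.
    return [[zi * a + (1 - zi) * b for a, b in zip(ca, co)]
            for zi, ca, co in zip(z, clean, corrupt)]

def kl(p, q):
    # Training objective: KL divergence between the circuit's and the
    # full model's output distributions.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def binarize(z, thresh=0.5):
    # After optimization, threshold the relaxed masks into a discrete circuit.
    return [1 if zi >= thresh else 0 for zi in z]

z = [0.9, 0.1, 0.5]                   # relaxed masks for three edges
clean = [[1.0, 1.0]] * 3              # activations on the original input
corrupt = [[0.0, 0.0]] * 3            # activations on the corrupted input
mixed = edge_mix(z, clean, corrupt)   # what each downstream node receives
circuit = binarize(z)                 # -> [1, 0, 1]
```

The point the summary makes is visible here: a pruned edge contributes the corrupted activation rather than nothing, which keeps the ablated computation on-distribution.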
Strengths: 1. The paper proposes an interesting and efficient method on an important problem in transformer interpretability. 2. The experiments conducted and results shown are convincing of the method's superiority in terms of faithfulness and scalability. Weaknesses: 1. The proposed method and direction are useful for discovering circuits for specific tasks. However, components of the circuits uncovered may have varied behavior across tasks or even input datapoints not considered in the dataset. This makes the overall goal of mechanistically understanding each component dependent on the specific task and the corresponding data available. 2. The use of KL divergence is prominent in the literature. Do you also consider the possibility that multiple circuits may emerge in the model with similar output behavior? It is unclear whether the discovered circuits are consistent across models trained using different seeds. 3. One of the major concerns is that the circuit is selected based on the sparsity level. Multiple circuits can be selected from the same optimization run depending on the sparsity and faithfulness. How do the authors decide the final circuit in terms of sparsity to be used for the next step of reverse engineering? Technical Quality: 3 Clarity: 3 Questions for Authors: The experiments conducted show that circuits discovered are faithful and sparse. However, the authors should also consider reverse engineering the circuits uncovered to showcase that the circuits are not only faithful to the overall model behavior but also interpretable. Since the circuits are discovered at a component level and not at a neuron level, would interpreting the resulting circuits be a time-consuming avenue? How would the method work with a weight-level mask? How do the learning rates for different component masks affect the final circuit? It is unclear what would be a good choice for the hyperparameters. How are the mask values initialized?
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: The authors discuss the limitations of their method in a sufficient level of detail. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for providing valuable feedback and suggestions. We are happy to read that you found our method interesting and useful, and the methods convincing. We would like to respond below to some of the questions and concerns raised. > Circuits may have varied behavior on tasks or input datapoints not considered in the dataset Circuit finding is indeed sensitive to the task and which datapoints are selected. Our formulation of circuit finding as loss minimization makes this dependence explicit, and we evaluate the “in-distribution” generalization of a circuit to unseen examples from the same task distribution. It would be interesting for future work to explore more intuitive ways of defining tasks and evaluate the “out-of-distribution” behavior of circuits, e.g., on task compositions. > Could there be multiple circuits with a similar KL? Excellent question! We explore this by running Edge Pruning with 12 different seeds and a fixed sparsity target across three tasks and measuring their pairwise similarity via Intersection-over-Union (IoU). Please refer to the global response, which shows that the KL values are consistent across runs. However, the IoU values are in the range of 0.52–0.64, indicating the existence of many partially overlapping circuits with similar KL. We argue that this does not represent a weakness of our method, as all these circuits are valid solutions to the problem statement. However, an analysis of their commonalities and differences could motivate future work, and we will include this discussion in the next draft. > How to select the sparsity level to be used for the next steps of reverse engineering? In circuit finding, there is a trade-off between circuit sparsity and faithfulness to the full model: larger circuits are harder to interpret, but more faithful to the full model. One option is to find circuits with a range of target sparsities and identify inflection points in the KL-sparsity curve. 
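The pairwise Intersection-over-Union similarity used above is, in essence, set overlap over the circuits' edge sets. A minimal sketch, with edges represented as hashable pairs (the function name is hypothetical, not the authors' code):

```python
def circuit_iou(edges_a, edges_b):
    """IoU between two circuits, each given as a collection of edges."""
    a, b = set(edges_a), set(edges_b)
    return len(a & b) / len(a | b)
```

An IoU of 1.0 means two runs recovered identical circuits; values around 0.5 to 0.6, as reported above, mean the circuits share roughly half of their combined edges.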
**Questions** > Interpreting some of the circuits discovered Great suggestion! We plan to provide a detailed analysis in the next draft of the paper. We outline our ongoing efforts in the global response. > Component vs. neuron level We discuss the challenges of extending Edge Pruning to neural circuits in the global response. > Learning rates Our method requires a high learning rate. Our experiments use a learning rate of 0.8 for all masks (log alphas) and all regularization lambdas, which we found to work well across many batch sizes and training steps. > Initialization of masks Following [1], we initialize our log-alphas to a normally distributed tensor with each entry having a mean of 10.0 and a standard deviation of 0.01 (corresponding to initial masks close to 1). The regularization lambdas start at 0. [1] “Structured Pruning Learns Compact and Accurate Models”, Xia et al., ACL 2022 --- Rebuttal 2: Comment: Thanks for the responses to my reviews. I have also read the comments by the other reviewers. I agree with some of the points made by the more negative reviewer (qgos) but I think those can be addressed with writing/presentation changes.
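The mask initialization described in the rebuttal above (log-alphas drawn from N(10.0, 0.01), regularization lambdas at 0) can be sketched as follows; the function name and the use of a list rather than a tensor are illustrative assumptions:

```python
import math
import random

def init_log_alphas(num_edges, mean=10.0, std=0.01, seed=0):
    """Log-alphas drawn near 10 so every initial mask sigmoid(alpha) ~ 1,
    i.e. training starts from the full, unpruned model."""
    rng = random.Random(seed)
    return [rng.gauss(mean, std) for _ in range(num_edges)]

log_alphas = init_log_alphas(8)
masks = [1.0 / (1.0 + math.exp(-a)) for a in log_alphas]
lambdas = 0.0  # the regularization multipliers start at zero
```

Starting with all masks near 1 means the circuit initially matches the full model, and sparsity is then driven by the regularization during optimization.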
Rebuttal 1: Rebuttal: We thank the reviewers for their insightful comments. We are happy to see that the reviewers found our method interesting, the experiments convincing, and the potential impact high. At the same time, two questions were raised by multiple reviewers. We would like to address them here. > Can Edge Pruning work with neurons? Individual neurons are typically polysemantic; e.g., [1] show how a single neuron responds to academic citations, English dialogue, HTTP requests and Korean text. Recent work suggests that Sparse Auto-Encoders (SAEs) trained to reconstruct the hidden states of a language model learn sparse features that correlate with distinct, interpretable concepts [2]. Recently, [3] have adapted EAP to work with “feature circuits” over such SAE features rather than heads/MLPs. The number of all feature edges is prohibitively high (quadratic in #features), and [3] used a two-stage approach where they first found important features (nodes), and then found important edges between them. Edge Pruning could be similarly adapted in this fashion: we can first learn masks over features to find important features, and then learn masks over pairs of them. Since the former step only involves as many masks as the number of features, its memory overhead should be a constant factor. > How hard is it to interpret the features of the 13B model / Did you find anything interesting? Interpreting circuits with >1000 edges remains difficult, but we have made progress understanding parts of the circuit. For example, we have found the following circuit of two composed heads: L8.H16 attends from operations (and/or) to the previous token (i.e. from op to a in “a op b”); L10.H24 attends from an operand to a previous operation (i.e. from b to op in “a op b”) and reads the result from L8.H16. This suggests that this duo computes the value of the expression. 
Interestingly, the attention pattern also holds when a is not a literal like “True” but an arbitrarily nested subexpression. A hypothesis here is that the model could deal with arbitrary-depth expressions by guessing the value of “a”—allowing it to proceed with the second step—and later verifying the guess. This would also allow the model to parallelize an otherwise sequential computation. Nonetheless, further study and careful interventions are required to verify this hypothesis. We will include these preliminary findings in the next draft and hope that they may inspire future work to study these circuits in more detail. Finally, we have attached a PDF with this common response containing figures that we referred to in some of the responses. The first figure runs Edge Pruning with multiple seeds and finds that we discover multiple partially overlapping circuits with similar (good) faithfulness at the same sparsity. The second demonstrates that Edge Pruning is competitive on circuit overlap metrics when compared to ACDC. [1] “Towards Monosemanticity: Decomposing Language Models With Dictionary Learning”, Bricken et al., https://transformer-circuits.pub/2023/monosemantic-features [2] “Sparse Autoencoders Find Highly Interpretable Features in Language Models”, Cunningham et al., ICLR 2024 [3] “Sparse Feature Circuits: Discovering and Editing Interpretable Causal Graphs in Language Models”, Marks et al., arXiv 2024 Pdf: /pdf/3fd6cd9810a75d12db332d5a1375533afcc45d56.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Temporal-Difference Learning Using Distributed Error Signals
Accept (poster)
Summary: In the paper, the authors propose a novel reinforcement learning algorithm. The algorithm is based on Q-learning and uses a biologically inspired design to work with local error signals and updates, eliminating the need for biologically implausible backpropagation. The proposed algorithm, Artificial Dopamine, is motivated by the dopamine-based learning system within the brain, concretely the ventral tegmental area (VTA) and the nucleus accumbens (NAc). It is realized with a novel recurrent layer/cell type that predicts Q values from local error signals with an attention-like mechanism. The algorithm is evaluated against several baselines on a set of established RL benchmarks, and additional ablations add further insights into the algorithm. --- **Post-Rebuttal/Discussion**: During discussion with the authors and fellow reviewers, open questions were tackled, and additional results were presented. I've increased my score to 8 due to this. Strengths: The paper tackles an interesting and important research direction with biologically plausible reward-based learning, linking computer science advancements and neuroscientific findings nicely. The approach is motivated nicely, both through the background in neuroscience as well as through the connection to the recently proposed forward-forward algorithm. Similar to that, the proposed algorithm aims at training artificial neural networks without biologically implausible backpropagation. The method is introduced and explained understandably, with the details necessary to understand the algorithm conceptually as well as to implement it. The included figures are of high quality and are helpful illustrations. The presented evaluations are comprehensive, solid, and appropriate for the motivating research question. Especially the provided ablations proactively answer questions that occur while reading the paper. 
While not covering all aspects of biologically plausible learning (as mentioned by the authors), the paper shows an elegant and competitive alternative to backpropagation that is grounded in neuroscientific insights on biologically plausible reward-based learning. Weaknesses: While the majority of the paper is well written and understandable, one particular detail is unclear, and different sections seem to contradict each other. The AD cells/layers use multiple tanh layers, one for each action, as stated and visualized. In Section 3.1, however, there is only one weight matrix W_att mentioned and used. Also, in Figure 3 there is only the Q_t value, independent of an action. Inside each AD cell, are there multiple layers, one for each action, and if so, how are they used together for Q? Or is it one layer/matrix, with one dimension per action? (The first seems more plausible, but clarification and unification of notation and description would be beneficial.) While computationally demanding and, hence, understandable, 10 runs is quite low for comparing RL algorithms (a known problem in the community, although often neglected/ignored). Recent frameworks and metrics have been proposed to tackle these problems related to a low number of runs and comparing overall algorithm performance [1]. It would be beneficial to add such metrics (e.g., performance profiles and IQM/IQR) to the paper for a better comparison of the proposed algorithm. ------------ [1] Agarwal, Rishabh, et al. "Deep reinforcement learning at the edge of the statistical precipice." Advances in neural information processing systems 34 (2021) Technical Quality: 4 Clarity: 4 Questions for Authors: The algorithm is based on Q-learning; could the main ideas be transferred to other RL backbones? The reward needs to be globally available to all layers to compute the local prediction errors. Which implications does this global target induce? Any theoretical or practical drawbacks? 
How crucial is the choice of the two activation functions in the AD cell? Typo (?): Line 101 mentions 15 tasks, while the rest of the paper mentions 14. Additionally, the main paper actually ‘only’ uses 10 tasks; the additional 4 are only provided in the Appendix for comparison with an additional baseline. Can this be clarified or, better, unified in the main paper? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Limitations are discussed thoroughly in a separate section. While detailed, they focus solely on the biological background, not mentioning technical/practical limitations. Some such limitations, like only working with discrete/discretized action spaces and the difficulty of scaling to larger action spaces, are mentioned in other parts of the paper; it would be good to add such practical limitations and assumptions to the dedicated limitations section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their constructive feedback and detailed questions, and we’re glad to hear that you appreciate the novelty, clarity, and elegance of our work. To address your specific questions and concerns: **Clarifications on Section 3.1.** We understand how this may be confusing. Conceptually, the Q_t function takes two parameters, the state and the action, and outputs the value for that state-action pair. There exist multiple tanh layers, one for each action; W_att represents the weights of one of these layers. In practice, for parallelization, like [1], the Q_t function is implemented to output a vector of |A| values (where A is the action space), i.e. one value for each possible action. Each tanh layer is used to compute the value of one corresponding action. We then take the argmax over that vector to determine the action to take. We see how Figure 3 is misleading; we will update it to show that the cell computes multiple Q_t values, one for each action. **10 runs are quite low for comparing RL algorithms; additional metrics like IQM/IQR would be beneficial.** While we agree that our results can be further bolstered with more runs, RL experiments are indeed computationally demanding (Appendix H), and like most researchers, we are constrained by available resources and environmental concerns. However, we hope that the number of tasks we evaluate our algorithm on helps alleviate some of your concerns; AD is evaluated on 14 standard RL tasks, with 10 runs on each task. Further, we agree that additional metrics like IQM/IQR are beneficial for comparing the performance of RL algorithms. We’re working on additional figures that show the IQM and IQR of our data for our main results and ablation study, and will upload these before the end of the discussion period. 
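The parallelized form described in the clarification above (one tanh head per action, producing a |A|-vector of Q values followed by an argmax) could be sketched as follows. The shapes and names are hypothetical; this is not the authors' exact AD cell, just an illustration of the per-action-head idea:

```python
import math

def q_vector(h, per_action_weights):
    """One linear + tanh head per action; the layer outputs one Q value
    per action in a single pass (scalar heads for simplicity)."""
    return [math.tanh(sum(w_i * h_i for w_i, h_i in zip(w, h)))
            for w in per_action_weights]

def greedy_action(h, per_action_weights):
    """Pick the action whose head outputs the largest Q value."""
    qs = q_vector(h, per_action_weights)
    return max(range(len(qs)), key=qs.__getitem__)
```

In this reading, W_att is the weight of one such head, and the apparent contradiction dissolves: there is one head per action, stacked into a single output vector for efficiency.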
**The algorithm is based on Q-learning, could the main ideas be transferred to other RL backbones?** Yes, we believe they can, although we haven't yet experimented with this. We chose Q-learning as it's one of the most supported hypotheses for how the brain performs reward-based learning; however, the same learning principles should transfer to other RL paradigms like actor-critic models. **The reward needs to be globally available to all layers to compute the local prediction errors. Which implications does this global target induce? Any theoretical or practical drawbacks?** Just making the reward signal globally available to all layers does not incur any notable drawbacks we are aware of; there is evidence that this occurs in biological learning (Section 2.1), and providing such a signal to each layer does not add any significant computational costs on modern hardware (in fact, it improves parallelization, as each layer’s update can be computed in parallel, Section 3). However, there is a significant drawback to replacing the sequentially propagated error signal with just per-layer reward and prediction errors, namely that the layers can no longer directly coordinate their learning. Specifically, the upper layers of the network can't influence the learned representations of the lower layers via sequentially propagated gradients. The problem with gradient propagation is that there is no evidence of it in the brain, despite its known importance in deep learning. This is partially why we believe our results are surprising and significant: we show that distributed, local prediction errors and updates may already be sufficient to learn many complex reward-based learning tasks, without the need to directly coordinate updates across layers during learning. 
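The contrast drawn above, a globally broadcast reward with purely local per-layer prediction errors rather than sequentially propagated gradients, could be sketched as follows. Names and shapes are hypothetical, and the paper's AD cells are more elaborate than this:

```python
def local_td_errors(reward, layer_qs, layer_next_qs, action, gamma=0.99):
    """Every layer receives the same broadcast reward and forms its own
    TD error from its own Q estimates; no error is passed between layers."""
    return [reward + gamma * max(q_next) - q[action]
            for q, q_next in zip(layer_qs, layer_next_qs)]
```

Each layer would then update its own weights from its own error alone, which is precisely what removes the need to backpropagate an error signal across layers.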
**How crucial is the choice of the two activation functions in the AD cell?** We experimented with alternative choices for these two activation functions, including replacing the relu function with a leaky relu, and replacing the tanh function with a sigmoid and with a second relu function. Most combinations of different activations did not have a significant impact on performance, with the exception of using two relu functions, which made learning unstable. For transparency, our experiments on activation function choice were not extensive; there could certainly be combinations of other activation functions that lead to better performance. However, as our primary goal is not to achieve state-of-the-art performance, we believe that any potentially unrealized gains in performance do not interfere with our contributions, and thus leave further hyperparameter tuning of AD cells to future work. **Line 101 mentions 15 tasks, while the rest of the paper mentions 14; the main paper uses 10 tasks.** We agree this can be further clarified. Line 101 is indeed a typo; we evaluated AD on 14 tasks, including 5 tasks from the MinAtar testbed, 5 tasks from the DeepMind Control Suite, and 4 classic control tasks. We relegated the 4 classic control tasks to Appendix C because they are relatively less challenging compared to the MinAtar and DMC tasks (despite still being an informative starting point for evaluating the performance of reward-based learning algorithms). As we achieved consistently strong performance on each of these 4 tasks, to avoid cluttering the main paper body (given the space limit), we moved them to the appendix. To address this issue, we have fixed the typo, and updated the Experiments section (Section 4) to explicitly state the number of tasks. 
**While technical and practical limitations are mentioned in other sections, the Limitations section only discusses limitations related to biology.** We will add a dedicated paragraph in the Limitations section to discuss technical and practical limitations, including discretized action spaces, current difficulties with scaling, and the number of runs per task. --- **References** [1] Mnih, Volodymyr, et al. "Human-level control through deep reinforcement learning." *nature* 518.7540 (2015): 529-533. --- Rebuttal Comment 1.1: Title: Thanks. Comment: Thank you for your detailed answer to my review and questions. The answers clarified the open questions and I hope to see the mentioned text changes in a potential final paper. Also waiting for "We’re working on additional figures that show the IQM and IQR of our data for our main results and ablation study, and will upload these before the end of the discussion period." for a better comparison of the methods.
Summary: This paper addresses the distributed credit assignment problem in biological reinforcement learning, where a naive implementation of backpropagation is implausible. The authors show that it is possible to update action values using a variant of the forward-forward algorithm specialized for RL. They then validate that this algorithm is effective on a range of standard benchmarks. Strengths: - The problem (how to implement scalable deep RL in a biologically plausible way) is important. - The paper is clearly written, with a good review of both the biology and AI background. - The results are impressive given that the system does not use backprop. - I appreciated the careful ablation analysis. Weaknesses: - p. 5: The review by Glimcher (2011) is cited to support the claim that NAc (ventral striatum) neurons signal action value. In fact, there is a long tradition arguing that it is actually the dorsal striatal subregions (caudate and putamen) which signal action values, whereas ventral striatum signals state value (i.e., a form of actor-critic architecture; see Joel et al., 2002; O'Doherty et al., 2004). Most studies I'm aware of have shown that signals related to action value are relatively weak or absent in ventral striatum (see for example Ito & Doya, 2015). - Overall I felt that the biological evidence for the modeling framework was weak. What is the evidence that dopamine updates values locally in the NAc? This is physiologically tricky because dopamine acts postsynaptically through volume transmission, diffusing broadly in the extracellular space. What is the evidence for the particular form of hierarchical message passing architecture proposed here? Basically, I felt that the ideas here are interesting in principle but as presented are rather disconnected from empirical data. Minor: - p. 
2: "connection ," -> "connection," Technical Quality: 4 Clarity: 3 Questions for Authors: - I was unsure whether the claim here is that the NAc is a multi-layer network or one layer in a multi-layer network. - If the authors would like their model to be taken seriously by neurobiologists, they need to do more to directly link it to empirical evidence. - Ideally the authors could make some neural predictions based on the model. It would also be helpful to understand how the predictions are different from those of other models. Rebuttal update: I have raised my score from 6 to 7 following the authors' responses to my comments. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The authors extensively discuss limitations in section 6. There are no potential negative societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their insightful feedback, and we’re grateful for the opportunity to improve and clarify our work through answering your detailed questions and concerns. **There is a long tradition arguing that it is actually the dorsal striatal subregions (caudate and putamen) which signal action values, whereas ventral striatum signals state value.** We acknowledge that there are multiple theories that provide different mappings of reinforcement learning frameworks to the brain’s reward circuitry. We want to clarify that our assumptions and findings are not in conflict with the actor-critic models by Joel et al. (2002) and O’Doherty et al. (2004). Actor-critic mappings of the reward circuitry, like Joel et al. (2002) and O’Doherty et al. (2004), argue that the dorsal striatum is responsible for action selection, i.e. policy learning, whereas the ventral striatum is responsible for value learning. On the other hand, Q-learning is a simpler model that combines policy and value learning. The main difference (with respect to our work) is whether action selection occurs separately from or together with value learning. AD does not make the argument that action selection need happen in the NAc. Rather, we just take the assumption that the NAc computes the Q-value function, which takes in the state and action as parameters, and performs Q-learning. The action can be provided from another region like the dorsal striatum. This view is consistent with the findings of works by Roesch et al. (2009), Mannella et al. (2013), and Ashton et al. (2024). On the other hand, we acknowledge that there exists research like Ito and Doya (2015) and Weglage et al. (2021) that suggests the competing theory that the dorsal striatum is responsible for signaling action values, whereas the ventral striatum signals state values. For completeness and openness, we will include this in the Limitations section. 
**The biological evidence for the modeling framework was weak. What is the evidence that dopamine updates values locally in the NAc, and for the particular form of hierarchical message passing architecture proposed here?** The theory that dopamine updates values locally in the NAc is supported by Wightman et al. (2007), who found spatial and temporal heterogeneity of dopamine release within the NAc. This suggests that while dopamine concentration is homogeneous within each microenvironment containing a local subpopulation of NAc neurons, it can vary across different subpopulations. The hierarchical message passing architecture is a design choice inherited from current deep learning practices, reflecting the passing of information between subpopulations of neurons. Our specific architecture is not intended to be a fully mechanistically accurate model of the NAc, but rather a computational model to demonstrate that it is possible to learn complex reward-based tasks using synchronously distributed, locally homogeneous error signals. The actual implementation in the NAc may differ. This counterintuitive and surprising finding potentially opens new avenues for explaining biological reward-based learning, which is why we argue that we make a valuable contribution. **I was unsure whether the claim here is that the NAc is a multi-layer network or one layer in a multi-layer network.** Each layer in our model represents a local subpopulation of NAc neurons that receive the same signals from a subpopulation of dopamine neurons. However, as our model is not intended as a mechanistically accurate representation of the NAc, we do not claim that the NAc is a multi-layer network. **If the authors would like their model to be taken seriously by neurobiologists, they need to do more to directly link it to empirical evidence.** We agree that additional biological connections can strengthen our work. 
However, the main contribution of our work is not to propose a biological model, but to show that synchronously distributed, locally homogeneous error signals observed in the mesolimbic dopaminergic system may be sufficient to teach neurons complex reward-based tasks. While we make some biological assumptions for modeling (e.g., NAc neurons encode approximations of the value function, and VTA projections may compute errors specific to these approximations), we do not claim our work proves these assumptions; they are part of our premise, not our hypothesis. We demonstrate that if these assumptions hold true, such conditions may be sufficient for complex reward-based learning by neural networks, without the need for other mechanisms to explicitly coordinate credit assignment. Previously, it was believed that solving complex nonlinear tasks required neurons in different regions to sequentially pass error signals and explicitly coordinate their learning. This assumption is held by both biologically plausible local-learning algorithms and backpropagation. We show that this assumption may not be necessary. **Ideally the authors could make some neural predictions based on the model. It would also be helpful to understand how the predictions are different from those of other models.** We agree that making neural predictions based on our model is an important direction. As an initial step towards better alignment with neurobiology, we performed experiments showing agreement with the distributional dopamine coding model by Dabney et al. (2020) in Section 5. We plan to further develop this aspect of our work in the future. --- Rebuttal Comment 1.1: Title: thanks Comment: Thanks for your thorough response to my comments. I hope that at least some of this will be reflected in the paper. --- Reply to Comment 1.1.1: Comment: Thanks for acknowledging our rebuttal. We appreciate your feedback and are committed to reflecting your suggestions in the revised paper. 
To address your concerns, we plan to make the following updates: 1. Clarify that AD does not require action selection to occur in the NAc and acknowledge competing theories (Ito & Doya, 2015; Weglage et al., 2021) in the Limitations section. 2. In the background section, clarify that dopamine providing local signals for updates is supported by the findings of Wightman et al. (2007). 3. Add to the Limitations section that our hierarchical message passing architecture is a design choice from deep learning practices and not a fully mechanistic model of the NAc. We'd value your feedback on whether these updates address your concerns and if there are any other critical points we should prioritize to potentially raise the score. Thank you again for your constructive feedback. --- Rebuttal 2: Comment: **References** Roesch, M. R., Singh, T., Brown, P. L., Mullins, S. E., & Schoenbaum, G. (2009). Ventral striatal neurons encode the value of the chosen action in rats deciding between differently delayed or sized rewards. *Journal of Neuroscience*, *29*(42), 13365-13376. Mannella, F., Gurney, K., & Baldassarre, G. (2013). The nucleus accumbens as a nexus between values and goals in goal-directed behavior: a review and a new hypothesis. *Frontiers in behavioral neuroscience*, *7*, 135. Ashton, S. E., Sharalla, P., Kang, N., Brockett, A. T., McCarthy, M. M., & Roesch, M. R. (2024). Distinct Action Signals by Subregions in the Nucleus Accumbens during STOP–Change Performance. *Journal of Neuroscience*, *44*(29). Weglage, M., Wärnberg, E., Lazaridis, I., Calvigioni, D., Tzortzi, O., & Meletis, K. (2021). Complete representation of action space and value in all dorsal striatal pathways. *Cell reports*, *36*(4). Wightman, R. M., Heien, M. L., Wassum, K. M., Sombers, L. A., Aragona, B. J., Khan, A. S., ... & Carelli, R. M. (2007). Dopamine release is heterogeneous within microenvironments of the rat nucleus accumbens. *European Journal of Neuroscience*, *26*(7), 2046-2054.
Summary: In this work, the Authors aimed to propose a computational model that is consistent with classical functions assigned to select regions in the reward system in the brain. To this end, they build upon George Hinton’s Forward-Forward algorithm, extending it with the ability to robustly generate continuous (instead of categorical) values and endowing it with the DQN loss. The Authors show that the proposed algorithm reaches competitive scores on a panel of standard RL benchmarks, and discuss its potential mapping onto the brain circuitry. Strengths: The text is clear and almost always straightforward. Many key statements are accompanied by reasonable disclaimers; as a result, the claims are seldom overstated. The paper has a reasonable description of the related (used) algorithms. The figures are high-quality, which makes the paper stand out among many other submissions. The experiments are performed on a sizeable panel of benchmarks. Wherever the baseline design had to be altered to ensure a fair comparison with the proposed model, correct adjustments are performed, including the (hyper)parameter search for the altered models. There may be a subcommunity interested in these results. After George Hinton’s successful presentation of his Forward-Forward algorithm at NeurIPS 2022, many people present at this keynote lecture got interested in the topic. Weaknesses: The paper’s neuro premise (Sections 1; 2.1) reads as if the roles (or even activities) of the brain regions involved in the brain circuit are known and undebated. Even though this impression is then refuted in Section 6 (which is commendable), most of the paper operates under this assumption. While the paper correctly cites some classical literature on the topic, the debates over the brain region roles proposed in these works led to more nuanced models, e.g. 
the mappings of the actor-critic model onto the reward circuit, and then to an overall notion that such mappings represent a bird's-eye view of the problem and may not necessarily be mechanistically accurate. It would be nice to explore and discuss this uncertainty, as it may alter the ground truth for this work. The paper’s comp neuro premise limits itself to the Forward-Forward algorithm. Although some of the other related work is mentioned in Section 7, these and other works have not made it into the model’s design. Meanwhile, the field of biologically plausible learning, including RL, is immense. There’s a host of work on predictive coding and local learning rules; new works are submitted to (almost) every major ML conference. There are noteworthy works on the computational role of dopamine and RL in the brain, including the works on "Backpropamine" and DeepMind’s modeling of distributional dopamine coding. In that sense, it’s unclear why the current submission focuses on a single model, mostly validated within the ML community, while ignoring the host of models validated within the computational neuroscience community. This may be mitigated by the Authors’ comment that their goal was to present a model consistent with neuroanatomy, which is valid, but brings me to the next point. A single model consistent with data doesn’t preclude the existence of multiple other models, potentially with different mechanics, leaving the chance that the proposed model has little to do with the biological reality. Indeed, a Monte Carlo – or even brute force – model will eventually learn an approximately correct Q-function and won’t need backprop / a coordinated feedback signal to do so. 
While – and I would like to stress that – the results in this paper are correct and the wording is correct as well, arguably, to learn something from this result, one would need to perform multiple additional comparisons between the (versatile, detailed) neuroanatomical structure of the reward circuit and a panel of prominent computational models. As such a comparison is not performed within the scope of this work, it is arguably too early to recommend this paper for publication in NeurIPS at this point. Technical Quality: 2 Clarity: 4 Questions for Authors: N/A Confidence: 5 Soundness: 2 Presentation: 4 Contribution: 2 Limitations: Related literature on neuroanatomy and computational models of the reward system is not considered. The proposed model is not contrasted against other candidate models and thus may be uninformative. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review. We noticed that this review is nearly identical to an earlier review we received at a different conference. Building on the previous feedback, we have significantly updated our manuscript with additional experiments, ablation studies, and expanded discussions, including new experiments and algorithmic updates aligning with DeepMind’s distributional dopamine coding (Section 5). We are thus surprised to receive the same review with a decreased score. Assuming you are the same reviewer, we wonder if there might have been a transpositional error or if the old version of our manuscript was reviewed by mistake? In either case, we’d like to directly respond to your concerns and highlight the relevant updates we have made since our last submission that aim to address these concerns. **The neuro premise in Sections 1 and 2.1 reads as if the roles of the brain regions involved in the brain circuit are known and undebatable. This is refuted in Section 6, but the discussion is inadequate.** We acknowledge this was a weakness in our previous submission. We expanded our Limitations section to explicitly state the assumptions we make regarding the biological mapping and included additional discussions on alternative mappings of RL frameworks to the reward circuit. We additionally discussed works by Takahashi et al. (2008), Chen & Goldberg (2020), Morris et al. (2006), Niv (2009), Roesch et al. (2007), Akam & Walton (2021), and Coddington et al. (2023). We highlighted that these theories are high-level mappings and may not be mechanistically accurate. We also added a footnote and forward reference to ensure this discussion is not overlooked. **The paper's comp neuro premise limits itself to the Forward-Forward algorithm, and ignores the host of other models in the field of biologically plausible learning.** Our main investigations revolve around an architecture inspired by the Forward-Forward algorithm.
This does not mean we disregard other models in biologically plausible learning. Instead, our focus was to address a specific problem in biological reward-based learning: how neurons can coordinate learning to solve complex, nonlinear RL tasks using regionally homogeneous, synchronously distributed error signals. While there are many related methods, they address different aspects of biologically plausible learning. Taking a reductionist approach allows us to tackle more complex modern RL tasks, highlighting what TD-learning with distributed error constraints can achieve relative to current, unconstrained deep RL algorithms. This approach helps attribute performance differences to distributed errors and the downsides of our algorithm, without being confounded by other shortcomings in current biologically plausible deep learning. We agree that integrating our model with other established methods, like DeepMind’s model of distributional dopamine coding (Dabney et al., 2020), is important. Since our last submission, we designed an extension of our algorithm based on the quantile regression method to learn distributions over values (Section 5). Our goal is to evaluate whether AD can learn such distributions, aligning with Dabney et al.’s work suggesting the brain’s value predictors learn distributions over values, rather than the mean value. Like Dabney et al., we use quantiles to reflect differences in optimism/pessimism across dopamine neurons. This distributional version of AD consistently achieved strong performance on DMC tasks, though it was slightly less sample efficient than the standard version of AD. We have not yet integrated with Backpropamine (Miconi et al., 2020), which focuses on neuromodulated plasticity and is less directly relevant to our problem, but we look forward to potentially exploring this direction in future work. Finally, we expanded our related work section (Appendix N) to include additional works in biologically plausible learning.
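For readers unfamiliar with the quantile-regression method referenced here, a minimal sketch of a QR-DQN-style quantile Huber loss (Dabney et al., 2018) may help. This is a generic illustration with hypothetical names, not the authors' implementation:

```python
import numpy as np

def quantile_huber_loss(theta, targets, taus, kappa=1.0):
    """QR-DQN-style quantile regression loss (generic sketch).

    theta:   (N,) predicted quantile values of the return distribution
    targets: (M,) sampled TD targets, e.g. r + gamma * max_a Q(s', a)
    taus:    (N,) quantile levels in (0, 1), e.g. midpoints (2i + 1) / (2N)
    """
    u = targets[None, :] - theta[:, None]              # pairwise TD errors, shape (N, M)
    huber = np.where(np.abs(u) <= kappa,
                     0.5 * u ** 2,
                     kappa * (np.abs(u) - 0.5 * kappa))
    # Asymmetric weighting drives each theta_i toward the tau_i-quantile
    # of the target distribution, rather than toward its mean.
    weight = np.abs(taus[:, None] - (u < 0).astype(float))
    return (weight * huber).mean()
```

Minimizing this loss over `theta` yields estimates of the quantiles of the return distribution; differing quantile levels can then model the optimism/pessimism spectrum across value predictors mentioned above.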
**A single model consistent with data doesn’t preclude the existence of multiple other models, potentially with different mechanics, leaving the chance that the proposed model has little to do with the biological reality.** We agree that our model solving the credit assignment problem in reward-based learning doesn't preclude other models from doing so. Our goal is not to claim our model is the only explanation, but to provide one potential solution to this problem, which, to our knowledge, has been lacking. To our knowledge, no published works consistently solve all MinAtar tasks or our subset of DMC tasks without backpropagation; we are the first to do so. Ororbia & Mali (2022) proposed a similar backprop-free Q-learning algorithm, but their evaluations are limited to simpler tasks. We present results on 3 of these tasks in Appendix C (the fourth was not publicly available). We also want to address the biological reality issue, and explain why we believe our work is biologically relevant. The central hypothesis of our work is that synchronously distributed, locally homogeneous error signals, observed in the mesolimbic dopaminergic system, may already be sufficient to teach neurons to solve complex reward-based learning tasks. While we assume that NAc neurons encode approximations of the value function (and that VTA projections may compute errors specific to these approximations) in our experiments, we do not claim that our work proves this to be true; this assumption is part of our premise, not our hypothesis (we agree that alternative theories exist). Rather, our findings suggest that if our assumptions hold, such conditions may be sufficient for complex reward-based learning by neural networks without the need for mechanisms explicitly coordinating credit assignment. This is counterintuitive and challenges established beliefs, potentially opening new avenues for explaining biological reward-based learning, and thus we believe it is a meaningful contribution. 
--- Rebuttal Comment 1.1: Comment: I would like to thank the Authors for their detailed response. Below, I address the Authors' specific concerns and elaborate on my report. Regarding the similarity to the previous review, I would like to describe the way I've prepared this one, to address any possible concerns in this regard. First, I've compared the new version of the paper with the old one. To do that, I went through the main text of each paper, comparing them paragraph by paragraph. What I found is that, besides the minor formatting changes aimed at accommodating the NeurIPS format, there were three paragraphs added: the first paragraph in Section 4, the last paragraph in Section 5, and the middle paragraph in Section 7. Second, I went through my old review to see the points that I still view as valid, considering the added paragraphs and the extended discussion with the Authors last time, which has provided me with a lot of important details regarding this work (the Authors' engagement was much appreciated). The added experiments did not contrast the proposed model with other existing models, and the disclaimers on the biological plausibility, while more abundant now, were still sparse. Finally, to determine my score, I've factored in my previous assessment of this work and the scope of the changes made between the two versions. Specifically, while the previous rebuttal period was short, limiting the scope of changes that could be introduced during it, there's much more time between the submissions, so typically substantial revisions of the papers are expected. Now to the score part. This was (and is) one of the harder submissions I've had to review (and score) for the following reasons. On one hand, the paper is technically correct. All appropriate disclaimers are made regarding the biological relevance of the work.
On the other hand, the biological relevance is constantly alluded to throughout the text, potentially leading a reader to the idea that the paper explains how the brain works, and likely leading the reader to the conclusion that the paper has a high impact on the field of biology. Notably, most of the disclaimers and additional considerations are provided at the end of the main text and in the Appendix, way after the readers' opinions on the work are formed. This concern is consistent with the reports of all other Reviewers here who have expressed concerns regarding the biological relevance of this work and/or the language describing such. What I would see as a proper way to present this work in its current scope is to be more upfront about the limitations, such that the reader would immediately know that the mechanistic modeling of the brain was not a goal or a result of this work, but rather a new computational architecture was tested, inspired by some neuroscience works. Then, the paper could be assessed based on its direct contribution rather than on what some may see as an alluded promise. The algorithm itself is a valid contribution and, as such, should absolutely be published somewhere. Not that it matters, but I would be happy to revise my score to borderline, contingent on the changes in the message from biology-focused to algorithm-focused. I'm happy to re-read the updated anonymous PDF with such changes. Please also let me know if there are other changes or considerations that I may have overlooked. --- Reply to Comment 1.1.1: Comment: Thank you for engaging. We have made several substantive improvements in the paper since our last submission that we’ll highlight below, and we’ll edit the paper in revision to make these changes more clear. Some additional figures were in the new appendices, but if important can be moved into the main text. 1.
**Algorithm Extension**: We designed and evaluated an extension of our algorithm to learn distributions of values, aligning our work with Dabney et al. (2020) (Section 5 & Appendix M). 2. **Sample Efficiency**: We significantly improved the sample efficiency of our algorithm in the DMC environments (Section 5). 3. **Ablation Studies**: We conducted additional ablation studies on DMC and MinAtar environments, analyzing the effects of varying layer sizes in a single-layer AD network (Section 5 & Appendix L). 4. **Biological Assumptions**: We expanded our discussion on biological assumptions and alternative hypotheses, citing eight additional neuroscience works (Section 6), and added a forward reference to address your concern about potentially misleading readers. 5. **Hypothesis Falsifiability**: We defined the criteria of sufficiency to explicitly make our hypothesis falsifiable (Section 4). We agree that our work is more algorithm-focused, and we do not claim that our model is a realistic biological reconstruction of specific brain mechanisms. However, our motivation for developing this algorithm is rooted in biological observations, and we believe our contribution holds relevance for the field. Specifically, our paper focuses on an algorithmic insight relevant to biology—demonstrating that distributed error signals, as observed in the brain, can be sufficient for reward-based learning—rather than on biological modeling per se.
Summary: A novel, biologically plausible RL algorithm called Artificial Dopamine, based on DQN and Forward-Forward, is introduced. The algorithm is inspired by dopamine pathways in the human brain and uses synchronous, locally computed per-layer TD errors for training. Additionally, a novel recurrent network architecture is proposed where layer outputs are used as inputs to the respective preceding layer at the next time step. The approach is evaluated on a range of discrete and continuous RL tasks. Strengths: Investigating biologically plausible learning is an interesting field of research and surely significant. The manuscript is well written and reads very smoothly. Explanations of biological phenomena are clear and informative, and foremost rigorously cited. The authors clearly distinguish which parts are inspired by biology and which are deliberate choices due to empirical reasons. Limitations and societal impact are discussed thoroughly. Weaknesses: Minor Issues: - L 116: "and is" -> "and are" - L134: "in many practical applications, such as TD learning in the brain". Makes it sound like it is a proven fact that there is TD learning taking place in the brain... Technical Quality: 3 Clarity: 4 Questions for Authors: - L252: "Since the local updates are distributed and per-layer, they can be computed in parallel." The temporal dependency between time-steps due to the recurrence is still present. So, does this actually translate to improved runtimes? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: Limitations such as the use of a layered structure and the limited evidence that the implemented method (Q-learning) occurs in the brain are discussed in detail. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their constructive feedback, and we’re glad to hear that you appreciate the significance, scientific rigor, and clarity of our work. We have fixed the typo on L116, and reworded L134 as “…in many practical applications with large or continuous state spaces…” to avoid confusion; thank you for pointing out these issues. The reviewer also raised an insightful question: whether the temporal dependency between time-steps due to recurrence would prevent parallelization (and thus improved runtimes). For most recurrent architectures trained with backpropagation-through-time, this is indeed the case; the updates of each layer at each time step must be sequentially computed. However, in AD, unlike backpropagation-through-time, we do not propagate any error signals back along the recurrent connections, thus no such dependencies exist during learning. To elaborate, the forward-in-time recurrent connections only serve to pass activations from upper layers to lower layers; no error information is passed back through these connections. Therefore, the recurrent connections do not prevent parallelization. --- Rebuttal Comment 1.1: Comment: Thank you for the answer. However, it is still unclear to me how the model could be parallelized. Assuming you are training weights that process the last timestep's input, you will need to know the input before updating those. I guess you could buffer the intermediate layer's activations during rollout for later training, entailing a bias in the gradients when you update the model more than once on the same batch of collected data? --- Reply to Comment 1.1.1: Comment: Thanks for the follow-up question. You’re right—we can save the last timestep’s activations along with the sequences into the replay buffer and use these values for training, but there would be a bias in the gradients as the model parameters are updated.
Instead, to avoid this, we store short sequences during rollout into the memory buffer, then replay the sequences during training, zero-initializing the first "recurrent" state. This follows established practice in deep RL for training RNNs (Hausknecht & Stone, 2015; Kapturowski et al., 2018). Does that answer your question? --- **References** Hausknecht, M., & Stone, P. (2015, September). Deep recurrent Q-learning for partially observable MDPs. In *2015 AAAI Fall Symposium Series*. Kapturowski, S., Ostrovski, G., Quan, J., Munos, R., & Dabney, W. (2018, September). Recurrent experience replay in distributed reinforcement learning. In *International Conference on Learning Representations*.
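The sequence-replay scheme described above can be sketched as follows. This is a generic illustration with hypothetical names, in the spirit of Hausknecht & Stone (2015), not the authors' code:

```python
import random
from collections import deque

class SequenceReplayBuffer:
    """Stores short transition sequences collected during rollout.

    No intermediate activations are saved; the recurrent state is
    re-initialized to zeros at replay time, avoiding stale-activation bias.
    """

    def __init__(self, capacity=10_000, seq_len=8):
        self.buffer = deque(maxlen=capacity)
        self.seq_len = seq_len
        self._current = []            # sequence being assembled during rollout

    def push(self, transition, done):
        self._current.append(transition)
        if done or len(self._current) == self.seq_len:
            self.buffer.append(list(self._current))
            self._current = []

    def sample(self, batch_size):
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))

def replay_sequence(network_step, sequence, state_dim):
    """Unroll a (hypothetical) network over a stored sequence,
    zero-initializing the 'recurrent' state at the start."""
    h = [0.0] * state_dim             # zero-initialized recurrent state
    outputs = []
    for transition in sequence:
        out, h = network_step(transition, h)
        outputs.append(out)
    return outputs
```

Because each stored sequence is replayed from a zero state, training never depends on activations computed under old parameters, which is the point made in the reply above.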
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Learning Cut Generating Functions for Integer Programming
Accept (poster)
Summary: The paper is concerned with the interplay of learning theory and the branch-and-cut algorithm for solving mixed-integer programs (MIPs). Concretely, the authors analyse the problem of cut selection. In general, cut selection asks for a given (class of) MIPs: What cut(s) should be added to the LP relaxation, to speed up the branch-and-bound algorithm as much as possible? The authors limit the scope of this broad question to: How should one parameterize a concrete cut-generating function, to generate a single cut, which, when added to the problem at the root node, minimizes the number of branch-and-bound nodes the solver has to create? For the selection of a cut generating function, the authors list three criteria: 1: There should be an efficient way to compute the cutting planes 2: One should be able to prove concrete sample complexity bounds 3: One should be able to demonstrate that the cutting planes are “significantly better in practice than classical cuts” The authors then prove sample complexity bounds for three different settings. 1) One wants to find the parameters for the Gomory and Johnson [2003] cut generating function (which generates cuts from a single row of the simplex tableau) minimizing the expected tree size for instances drawn from $D$. 2) One wants to find the parameters for a subfamily of cut generating functions studied by Basu and Sankaranarayan [2019] (which generate cuts from k many rows of the simplex tableau) again minimizing the expected tree size for instances drawn from $D$. 3) One wants to train a neural network to choose a good instance (not distribution) dependent cut generating function. In their numerical experiments, the authors investigate the effect of the cut generating functions on some generated knapsack and partition problems. The goal is to find evidence for better performance of the chosen cut generating functions when compared to classical (here GMI only) cuts.
For 1-d knapsack, the cut generating function is shown to clearly outperform GMI cuts in the chosen setting. For all other instances, GMI performs worse than at least one cut generating function, but the difference is smaller, especially for the packing instances. Strengths: The paper is well-structured and easy to read. The provided code allows for the reproducibility of nearly all experiments (only for knapsack_50 I get “index 0 is out of bounds for axis 0 with size 0” and no results, even if I run the code to regenerate the instances). Weaknesses: The main weakness I see is the little empirical evidence for some strong claims about the Gomory and Johnson [2003] cut generating functions. See e.g. line 220 and 251: “and they result in significantly smaller tree sizes compared to traditional cutting planes”. This statement seems over the top. 1) the numerical experiments do not always indicate “significant” 2) the only tested classical baseline are GMI cuts, which are by far not the only class of cuts doing heavy lifting in modern branch-and-cut solvers 3) for the instances on which the impact is high (1-d knapsack), I think the combination of setting and instance class does not allow general claims, see below. L 316: “Instances were generated using the distribution proposed by Chvátal in Chvátal [1980], also used in Balcan et al. [2021b].” -> this is not correct. The authors use the same parameters as Balcan et al., but they are at best related to Chvátal instances. Chvátal explicitly designs instances which are provably hard to solve for certain branch-and-bound algorithms. To achieve this, all constraint coefficients $a_{ij}$ are sampled from $[1, 10^{n/2}]$ and $b_j = \lfloor \sum_i a_{ij} / 2 \rfloor$. Balcan et al. sample $a_{ij}$ from a normal distribution $N(50, 2)$. This leads to extremely similar $a_{ij}$ values (so quite far from Chvátal's distribution) and makes the instances extremely easy in practice.
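To make the contrast between the two distributions concrete, here is a small sketch using the parameters quoted above (the helper names are hypothetical; for the Balcan et al. scheme only the coefficient distribution is specified in the text, so only that part is sketched):

```python
import numpy as np

rng = np.random.default_rng(0)

def chvatal_knapsack_row(n):
    """Chvátal [1980]-style hard instance: coefficients spread over a
    huge range [1, 10^(n/2)], with b = floor(sum(a_i) / 2)."""
    a = rng.integers(1, int(10 ** (n / 2)) + 1, size=n)
    return a, int(a.sum() // 2)

def balcan_knapsack_coefficients(n):
    """Balcan et al. [2021b]-style coefficients: drawn from N(50, 2),
    hence nearly identical to one another."""
    return rng.normal(50.0, 2.0, size=n)
```

The relative spread (standard deviation over mean) of the coefficients is enormous in the first scheme and around 4% in the second, which is the point of the remark above: the latter instances are far from Chvátal's provably hard construction.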
For 1-dimensional knapsack, the original constraint of the problem formulation is tight. When turning on primal heuristics (which are turned off by the authors, but not by Balcan et al.), Gurobi solves all instances at the root node, before even solving the relaxation. From my perspective, there is little to learn from experiments on these types of instances (1-d knapsack). I am well aware that the authors focus on theoretical aspects of machine learning and cut selection, but when explicitly listing “3) we should be able to demonstrate that these new cutting planes are significantly better in practice than classical cuts” (line 109) as one of three requirements for the cut generating functions, I expect stronger empirical evidence. Technical Quality: 2 Clarity: 3 Questions for Authors: Additional and minor remarks: - L 6: “optimal cutting planes”. “Optimal” is very global and strict. Maybe “good” - L 24: “small representative sample”. Representative seems subjective - L 36: “we understand many of their theoretical properties”. “We” sounds ambiguous here. Maybe “many theoretical properties are understood” - L 53: “with several excellent insights”. “Excellent” is quite subjective - L 144: Z^n should be Z^k - Figure 1: The use of the label “y” for the y-Axis is confusing to me - Figure 1: For r = 1 the graph has a red circle, indicating it is not defined here. The same should hold for r=0? - Figure 1: The font seems unnecessarily small - L 296: If “their” in “A direct applications of their main theorem” refers to Cheng et al. [2024] from line 278, I feel like the reference is too far away for the reader to parse “their” in the context of Cheng et al. - L 314: The two types of instances under consideration are quite similar (in referenced Tang 2020: “A knapsack problem is a binary packing problem”.)
I think it might make sense to mention their similarity and the fact that one is integer and one is binary - Table 1: For $d$-dimensional knapsack, there are $d$ constraints significantly different from the remaining $n$. Are these constraints considered first, for k-row? Does this explain why e.g. 2-row is best for 2-d knapsack? - Small and capital letter inconsistency in references. E.g. 352 & 358: “cambridge university press” and “Cambridge University Press” Confidence: 2 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The author's discussion of their limitations is fine, apart from the part about the numerical experiments (see weaknesses). Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate your thorough review and the thoughtful feedback. Regarding the experimental evidence, we acknowledge that our paper focuses primarily on theoretical aspects, and thus, the experimental section is somewhat preliminary. First, we agree that it was poor scholarship on our part to call our distribution the Chvátal distribution. Our distribution is indeed the same as what was done in the Balcan paper. We will reword to something like "what Balcan et al call the Chvátal distribution". However, we believe that even these basic experiments lend support to our claim that some of the CGFs we consider can result in significantly smaller tree sizes compared to some traditional cutting planes. Our reasoning is as follows: 1. Even though the 1D knapsack problem is very simple, this does not detract from the theoretical interest of the result. Specifically, when using only branch-and-bound for certain specific distributions (such as the "Chvátal distribution" considered here), there are indeed some CGFs that perform significantly better than GMI cuts. 2. Even if we disregard the 1D knapsack results, we believe the other experiments on multidimensional knapsacks and packing problems also give some support to our claim: - Regardless of the problem type or the number of rows used to generate cutting planes, we selected a parameter from only 121 random candidate CGF parameters, and for one-row cuts, we only considered the simple case of $p = q = 2$, resulting in only two parameters. Despite this, some CGFs performed better, which demonstrates the potential of CGFs. - In the last column, we provided the "best 1-row cut" for the one-dimensional CGF, obtained by enumerating and selecting the best 1D CGF for each instance. We believe this result shows significant improvement over GMI cuts, even for the packing problem settings. While this method is not really "learning," we believe it does demonstrate the potential of CGFs. 
Also, for this method, in Section 5 we proved the learnability and rigorous sample complexity for solving the problem of "selecting good CGFs for each instance" using neural networks, which may provide some insights to practitioners to develop neural-network-based instance-specific selection methods. 3. We compare with GMI cuts because this is considered a "gold standard" in evaluating cutting planes in the IP computational literature. While there are indeed several other cutting plane families that are used in modern solvers, the first test for any new family of cutting planes (from a practical perspective) is usually to see how well they do in comparison to GMIs. They are arguably the most important and popular *general purpose* cutting planes, adopted by commercial solvers for almost thirty years now (Conforti et al., 2014). Additionally, GMI cutting planes are generated by an extreme CGF (Section 2.1 has the corresponding CGF definition). Given that the main focus of the paper is on the theory developed, we wanted to keep the message of our experimental section crisp and streamlined. Comparisons with GMI seemed like a good way to achieve this. We do agree that perhaps our choice of wording in the experimental section (and in references to it) is overenthusiastic. We will reword and tone down some of the language. The reason behind our excitement is that we have not seen such promising results with CGFs in the computational IP literature. We concede that some of the instances are not the most challenging ones for modern solvers. Nevertheless, we do not know of any published (or folklore) cases of instances where CGFs have previously been demonstrated to have such stark improvements over GMI cuts. And this is even more true for the Gomory-Johnson family of CGFs.
While there are some computational results showing some promise with CGFs coming from lattice-free sets and their liftings, to the best of our knowledge, the Gomory-Johnson families have not been shown to do better than GMIs (beyond some artificial instances) in any previous study. Observing an improvement with these CGFs that stood in such contrast to previous computational experience was the basis for our enthusiastic wording. One reason could indeed be the choice of instances. Nevertheless, we think another reason is that previous work focused more on the gap closed at the root, as opposed to the overall tree size. We hope that our preliminary results convince the community to revisit these CGFs, especially with the view to investigate their effect on the overall tree sizes (both theoretically and practically). We will also expand our computational investigation to look at other families of instances to see if the improvement in tree size is as dramatic as we observed on these knapsack instances. To summarize, our hope with the experimental section is to "whet the IP community's appetite" and rejuvenate its interest in CGFs. We believe they can be useful, and using learning techniques may be a good way to unlock their true potential. Some responses to your more detailed questions: - > Z^n should be Z^k It should be $\mathbb{Z}^n$ since $\mathbf{r}^1,...,\mathbf{r}^n \in \mathbb{R}^k$ are vectors. - > Figure 1: For $r=1$ ... should hold for $r=0$? The plots in Figure 1 are on the interval $[0,1)$, so $r = 1$ is not included, which is necessary because the generating function of the Chvátal-Gomory cut, $\text{CG}_f(r)$, is not continuous at $r \in \mathbb{Z}$. However, $r = 0$ is defined, and by periodicity, for these functions, the value at $r = 1$ is the same as the value at $r = 0$. - > Table 1: For $d$-dimensional ..., 2-row is best for 2-d knapsack? Yes, this is a very insightful point. We share the same intuition.
Indeed, for $k$-row cuts, we selected the first $k$ rows from the simplex tableau to generate a cut, so it makes sense that the 2-row cut works best for the 2D knapsack problem. We will address the other minor remarks in the final version of the paper.
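As a concrete reference point for the CGF discussion above, the GMI cut generating function has a simple closed form: the standard two-slope, periodic function (cf. Conforti et al., 2014). This sketch and its helper name are ours, not code from the paper:

```python
def gmi_cgf(r, f):
    """Gomory mixed-integer (GMI) cut generating function for a tableau row
    whose right-hand side has fractional part f in (0, 1).

    It is periodic with period 1, piecewise linear with two slopes, and an
    extreme function in the Gomory-Johnson model.
    """
    r = r % 1.0                                   # periodic extension to all of R
    return r / f if r <= f else (1.0 - r) / (1.0 - f)
```

The two slopes meet at $r = f$, where the function attains its maximum value 1; the periodicity explains the remark above that the value at $r = 1$ equals the value at $r = 0$.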
Summary: This paper studies the learning of generic classes of cut generating functions, which can be used as an algorithmic tool for solving integer programming problems. The paper presents a handful of cut generating function families, studies the learning complexity of these functions, and presents a computational study of those CGF families on standard integer programming problems. Strengths: The setting is of clear interest to the integer programming (IP) community. I cannot vouch for the novelty or correctness of the learning-theoretic content of the paper, but the IP content is crisp and clean, and the high-level idea and computational setup all make sense to me. Weaknesses: I feel like there is some connective tissue missing from the paper, particularly joining the computational study with the rest of the paper. Section 5 seems like an interesting connection to draw, but does not appear in the computational study and feels a bit disjointed from the rest of the paper. And while the paper spends much of its effort to show that the cut generating functions _can_ be learned, the computational study is relatively rudimentary and does not really attempt to develop very sophisticated techniques to actually learn the CGFs in practice (to be clear, this is a reasonable tradeoff to make for a theoretical paper). Technical Quality: 3 Clarity: 4 Questions for Authors: None. Confidence: 2 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for the thoughtful and encouraging review. Section 5 is meant to illustrate that our analysis extends to the setting where one wishes to select cutting planes tailored to instances using, say, a neural network mapping from instances to cutting planes. This was done in a more general algorithm design setting by Cheng et al (2024), but the analysis can be easily adapted to CGFs. This can potentially lead to stronger gains since the CGFs are tailored to specific instances, as opposed to using the same CGF for all instances.
Summary: This work presents sample complexity results for learning parameters for certain classes of cut generating functions, along with some numerical experiments. These are functions that determine coefficients of cutting planes to help solve mixed-integer programming problems, and some of the most effective cutting planes in practice (e.g. GMI cuts) can be expressed as cut generating functions. More specifically, this paper proves pseudo-dimension bounds on an established class of one-dimensional cut generating functions, and on a class of k-dimensional functions, both of which generalize GMI cuts. This latter function can be (non-trivially) computed efficiently. Furthermore, the authors provide pseudo-dimension bounds for these functions for the case where the parameters are learned by a neural network. Finally, the paper provides numerical experiments to highlight the learnability of parameters for these families of cuts on small instances. Strengths: This work adds an interesting and novel learning-theoretical perspective to the cut generating function literature. Making general cut generating functions practical has always been challenging, and this paper suggests potential in learning them in a principled and theoretically grounded fashion. While we are still far from being able to use cut generating functions in the way that the paper promotes, this is a valuable step that advances the field further and I believe that this can inform future learning-based cut generation methods. The paper is overall written in a clear way. I checked the proofs at a high level but could not do so in detail since my familiarity with learning theory is limited. The computational results, while focused on small cases, show good potential for learning-based methods. Weaknesses: None of the weaknesses that I see are particularly major. 
I would have liked to see further experiments on more realistic scenarios, but I understand that we can derive insights from a very focused experimental setup and the theoretical contributions are the main focus of this paper. I leave specific issues for the Questions section below. Technical Quality: 3 Clarity: 3 Questions for Authors: These are all minor comments. 1. It would be nice to add the geometrical interpretation of the parameters of the functions to provide a quicker understanding for the reader. It takes a little while of looking at the functions and the examples (which are helpful) to understand them, and for me it was easier to understand some of the proofs after I understood what those parameters were doing. You can for example add them with Figure 2 in the Appendix. 2. I am not sure I can visualize the k-dimensional function very well. Would you be able to explain what motivates this particular function, and perhaps include 3D examples for the case where k = 2 (like you did with k = 1)? 3. Is the definition of tree size here the worst-case tree size across variable and node selection? If relevant, could you add a precise definition of tree size to the paper? 4. Could you add to the paper why these cut generating functions are extreme? I believe those come from previous work (e.g. the 1-row one has two slopes)? 5. I am confused as to how you have 5-row cuts and 10-row cuts for knapsack with 2 and 3 rows. If I missed something, please add the explanation to the paper. 6. This question is mainly out of curiosity (though could be interesting to discuss in the paper if you have a good answer): I see that increasing the number of rows sometimes helps, but generally does not seem to be too helpful. It may be more difficult to find a good multi-row cut with more rows but a fixed number of samples. Do you have any sense on how the quality of your cuts would improve with more samples? 7. Please fix the citation types (those missing parentheses). 
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: There is not much discussion on limitations, though the paper is mostly theoretical. Perhaps the paper could discuss some more what these results could lead to in future work, and limitations of interpreting these small-scale experiments. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for the insightful review and for providing detailed feedback. Regarding your questions: 1. This is a good point. The two families of cut-generating functions considered in this paper are indeed parametrized geometrically. We will include additional explanations and figures to clarify this further. 2. The $k$-dimensional cut-generating function in this paper is a subfamily of those studied by Basu and Sankaranarayanan (2019). Specifically, it is the trivial lifting of the gauge function of a family of maximal $(\mathbf{f} + \mathbb{Z}^k)$-free simplices, defined as $\mathbf{f} + \lbrace \mathbf{x} \in \mathbb{R}^k : \sum_{i=1}^{k} \mu_i \mathbf{x}_i \leq 0, \mathbf{x}_1 \geq -1, \dots, \mathbf{x}_k \geq -1 \rbrace$, parametrized by $\mu$ restricted to the standard $k$-dimensional simplex. We indeed have some 3D-printed models of these generating functions, as shown in our global response (see the attached PDF). We will include these figures in the paper. 3. Our main results are based on the piecewise structural results of the branch-and-cut tree size studied by Balcan et al. (2022), so we adhere to the same methods and assumptions for building branch-and-cut trees as in their paper. Therefore, the results apply to the product scoring policy or the lexicographic policy for variable selection and the depth-first search policy for node selection. Using the same technique, one can prove similar results for other selection policies, such as the best-bound policy for node selection. We will clarify this further in the paper. 4. Gomory and Johnson (2003) proved that the one-dimensional cut-generating functions (CGFs) considered in this paper are extreme. For $k$-dimensional CGFs, they are minimal valid functions as they are the trivial lifting of the gauge function of maximal $(\mathbf{f} + \mathbb{Z}^k)$-free convex sets with the covering property (Basu et al., 2013; Conforti et al., 2014; Averkov and Basu, 2015). 
Then, these $k$-dimensional CGFs are extreme by the $(k+1)$-slope theorem (Basu et al., 2011). We will include these explanations in the appendix of the paper. 5. There are some trivial constraints for those $0/1$ knapsack problems restricting the decision variables to be no larger than 1. We do not explicitly handle these upper bounds differently, and consequently, they contribute to the simplex tableaux. 6. This is a very interesting and important question. There are two related issues here. One is the classical bias-variance/bias-complexity/overfitting-underfitting tradeoff in any statistical method such as ours. The second issue is the difficulty in finding the optimal solution to the ERM problem $\min_{{\mu}} \frac{1}{N}\sum_{i=1}^N T^k(I_i, {\mu})$, where $T^k(I_i,{\mu})$ is the branch-and-cut tree size of the instance $I_i$ after adding a cutting plane induced by a $k$-dimensional CGF parameterized by ${\mu}$ (i.e., $k$ is the number of rows used to generate the cutting plane). Increasing the number of rows reduces the bias of the model, i.e., we expect the overall expected error to go down with the optimal choice of parameters ${\mu}$ (for minimizing the overall expected error). However, it increases the variance/complexity of the learning procedure. As you point out, if we fix the number of samples, increasing the number of rows leads to weaker guarantees on the expected error of the learned parameters, i.e., the ERM solution ${\hat\mu}$. Sample complexity tracks this trade-off quantitatively: it tells us how the sample size should grow if we increase the complexity of our model (i.e., increase the number of rows $k$ used to generate the cutting plane), if we wish to keep the same error and high probability guarantees. From an empirical perspective of solving the ERM problem, the phenomenon you mentioned is the following. 
In the experiments of this paper, regardless of the value of $k$, we uniformly sampled a constant number of CGF parameters on the $k$-dimensional standard simplex and chose the best one. Therefore, for higher dimensions, the probability is lower that some of these randomly sampled points are good enough to solve the ERM problem. In this setting, increasing the number of sampled candidate parameters as $k$ increases is a reasonable way to improve the performance of the $k$-dimensional CGF. However, the number of candidate parameters might grow exponentially with $k$ for a provable guarantee. We have not theoretically or computationally explored this dependence. Therefore, with more IP instance samples, we might consider moving away from this simple yet robust enumeration method and adopting heuristic algorithms to solve the ERM problem. For instance, as suggested by Cheng et al. (2024) in their paper that also involves the optimization of a similar ERM problem, we could use some reinforcement learning (RL) algorithms, treating the CGF parameters as continuous actions in the RL setting, to find a relatively good parameter setting. 7. Thank you for catching this. We will fix the citation types in the paper. --- Rebuttal Comment 1.1: Comment: Thank you for the response. This answers all my questions and I appreciate the changes to the paper. I have read all reviews and rebuttals and I will keep my score. I am not too concerned with the limited computational experiments given the nature of this paper, and while I agree with the main concern of 6RAE, in my opinion it should be sufficient to change the language so that it is clearer that this is more of a scientific study rather than a practical one.
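For concreteness, the uniform-enumeration baseline described in point 6 of this rebuttal can be sketched as follows. This is a hypothetical illustration, not the paper's code: `tree_size` stands in for the branch-and-cut tree size $T^k(I, \mu)$, the candidate count and the toy quadratic surrogate are our own choices, and we treat $\mu$ simply as $k$ nonnegative coordinates summing to one.

```python
import numpy as np

def best_cgf_by_enumeration(instances, tree_size, k, n_candidates, rng):
    """Best-of-N enumeration over CGF parameters: sample candidates
    uniformly on the simplex (a Dirichlet(1, ..., 1) draw) and keep
    the one minimizing the average tree size over training instances.
    `tree_size(I, mu)` is a stand-in for T^k(I, mu)."""
    candidates = rng.dirichlet(np.ones(k), size=n_candidates)
    avg_cost = [np.mean([tree_size(I, mu) for I in instances])
                for mu in candidates]
    return candidates[int(np.argmin(avg_cost))]

# Toy usage: a quadratic surrogate standing in for the true tree size.
rng = np.random.default_rng(0)
target = np.array([0.5, 0.3, 0.2])
toy_tree_size = lambda I, mu: float(np.sum((mu - I) ** 2))
mu_hat = best_cgf_by_enumeration([target], toy_tree_size,
                                 k=3, n_candidates=500, rng=rng)
assert mu_hat.shape == (3,) and abs(mu_hat.sum() - 1.0) < 1e-9
```

As the rebuttal notes, the number of candidates needed for a fixed quality guarantee may grow quickly with $k$, which is why gradient-free enumeration of this kind becomes expensive in higher dimensions.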
null
null
Rebuttal 1: Rebuttal: The attached PDF includes figures of a 2-dimensional cut generating function, related to reviewer yCxh's question (point 2). Pdf: /pdf/ac03b8b7624a88381b4ac2449a951c09eece8a19.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Any2Policy: Learning Visuomotor Policy with Any-Modality
Accept (poster)
Summary: This paper aims to enable robots to understand tasks and their environments using multiple modalities such as text, audio, images, video, and point clouds. To accomplish this, the authors introduce Any2Policy, a versatile framework designed to process multi-modal inputs at the instruction and observation levels. Within this framework, they developed an embodied alignment module that effectively integrates multi-modal features. Additionally, the use of ImageBind and TokenLearner helps to mitigate tedious encoder design and enhance training efficiency, respectively, which appears to be a practical approach. The paper's thorough evaluations, conducted on 30 real-world tasks and three distinct simulation benchmarks, convincingly demonstrate the effectiveness of the proposed method. Strengths: 1. This paper addresses a pivotal topic in robot learning, focusing on enabling robots to understand tasks and interpret the environment through multi-modal inputs, which is essential for developing generalist agents. 2. The manuscript is well-written and well-motivated, presenting its ideas and research goals effectively. 3. The methods introduced in this paper effectively integrate multi-modal inputs to substantially improve policy learning. Comprehensive evaluations conducted across both real-world tasks and simulated environments demonstrate the effectiveness of the Any2Policy framework. Weaknesses: However, there are still some concerns regarding the experiments. 1. While the authors have conducted extensive ablations on real-world experiments, replicating these results in practice may be challenging for follow-up researchers. To facilitate better benchmarking, it would be advantageous if the effectiveness of each proposed method could also be demonstrated in simulated experiments. 2. Due to the high cost associated with evaluating real-world tasks across different random seeds, reporting variance is difficult. 
Nonetheless, it would be beneficial if the authors could assess the impact of multiple random seeds in simulation tasks to provide more robust statistical insights. 3. The authors appear to have omitted the results from the Meta-World evaluations. These results were neither found in the main text of the paper nor in the appendix, despite the authors' claims that they were reported. 4. MUTEX [1] is another significant work that explores task specifications using multi-modal inputs. While Any2Policy incorporates a broader range of observation modalities (image, point cloud), including the comparison with MUTEX could enrich the analysis, especially given the limited range of baseline methods evaluated in this paper. [1] Mutex: Learning unified policies from multimodal task specifications Technical Quality: 3 Clarity: 3 Questions for Authors: 1. I'm curious to know whether the video instructions were recorded using the same embodiment or if they include cross-embodiment examples, such as demonstrations by humans? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have properly discussed the limitations and societal impacts in the Appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and reference the reviewer with identifier 5TLy as R4. Comment n of reviewer m is denoted as RmCn. **[R4C1]** > While the authors have conducted extensive ablations on real-world experiments, replicating these results in practice may be challenging for follow-up researchers. To facilitate better benchmarking, it would be advantageous if the effectiveness of each proposed method could also be demonstrated in simulated experiments. We share the reviewer's sentiment that building a simulation environment is a paramount topic that could improve productivity and enhance future research. We are currently developing a simulation based on Isaac Sim, which we hope will facilitate further research in this domain. **[R4C2]** > Due to the high cost associated with evaluating real-world tasks across different random seeds, reporting variance is difficult. Nonetheless, it would be beneficial if the authors could assess the impact of multiple random seeds in simulation tasks to **provide more robust statistical insights**. Since our simulation environments were not ready, we performed a robustness study on random seeds in real-world experiments. We evaluated four tasks in the real world, running 3 seeds for each experiment. For each seed, we conducted 10 evaluations across the four real-world tasks and report the average success rate. | Seed | PlaceBread | CloseLaptop | InsertPlug | PlaceCan | | ------- | ------- | ------- | ------- | ------- | | 1 | 100 | 100 | 50 | 50 | | 2 | 90 | 100 | 30 | 50 | | 3 | 100 | 90 | 50 | 60 | | Average | 96.6 | 96.6 | 43.3 | 53.3 | Our experiments indicate that the training of our approach is stable, and the results are consistent across different seeds. **[R4C3]** > Missing MetaWorld Results. We thank Reviewer 4 for the careful review and for pointing out the missing experimental results. 
We sincerely apologize for this oversight. Here is a brief summary of the experimental results. We present the results based on three task levels, following the settings in Masked World Model (CoRL'23). We report the average success rate across three task levels: easy, medium, and hard. In total, there are 45 tasks. All experiments were trained with 20 demonstrations, evaluated with 3 seeds, and for each seed, the success rate was averaged over five different iterations. We used all available modalities for our method. The experimental results are shown below: | Method | Easy (28) | Medium (11) | Hard (6) | | ------- | ------- | ------- | ------- | | Diffusion Policy (RSS'23) | 80.4 | 32.6 | 9.4 | | IBC (CoRL'21) | 68.3 | 15.2 | 10.6 | | EmbodiedGPT (NeurIPS'23) | 60.1 | 24.5 | 7.9 | | **Ours** | **89.6** | **65.3** | **29.6** | The numbers in parentheses indicate the number of tasks for each specific task level. The experimental results support our conclusion that Any2Policy is able to achieve better performance than the baselines across different levels of task difficulty. **[R4C4]** > Comparison with Mutex. We thank the reviewer for raising this point. Mutex is indeed a significant work that explores multimodal task specification. To strengthen our work, we conducted a comparative study with Mutex. Specifically, we selected four real-robot tasks and compared them with Mutex using the same training data. Both methods were evaluated 10 times, and we report the average success rate of each method. | Method | PlaceBread | CloseLaptop | InsertPlug | PlaceCan | | ------- | ------- | ------- | ------- | ------- | | Mutex | 80 | 90 | 20 | 10 | | **Ours** | **100** | **100** | **50** | **50** | Our experimental results indicate that Any2Policy performs better than Mutex. We believe that the performance gain comes from the additional modalities introduced by our method. 
**[R4C5]** > I'm curious to know whether the video instructions were recorded using the same embodiment or if they include cross-embodiment examples, such as demonstrations by humans? Yes, the video instructions were recorded through human demonstration. --- Rebuttal 2: Comment: Thanks for the careful comments. My major concerns are well-resolved. So, I'm happy to increase my confidence to reflect this (from 3 to 4).
Summary: This paper proposes the simultaneous fusion of image, text, point cloud, video, and audio—five modalities—in robotic manipulation tasks, while also considering the integration of information from both instruction and observation. Through the transformer architecture and cross-attention mechanism, embodied alignment of multiple modalities is achieved. Furthermore, this paper constructs a real multimodal dataset and conducts a detailed experimental analysis of the impact of additional modality information on performance. Strengths: 1. This paper investigates the impact of simultaneously fusing five modalities: image, text, point cloud, video, and audio, on robotic manipulation tasks. Although there is already a considerable amount of work that fuses two or three of these modalities, this paper argues that it is still meaningful to consider the simultaneous fusion of different modalities from both instruction and observation. 2. This paper constructs a real robotic dataset that includes information from all the aforementioned modalities. 3. The experimental section of this paper provides a detailed analysis of the impact of multimodal information fusion on robotic manipulation, confirming that the involvement of additional modalities in training or inference can lead to performance improvements. Weaknesses: 1. The technical contribution of this paper is limited. Although the paper analyzes some difficulties in the process of fusing multimodal information and embodied alignment, the overall model architecture is still relatively straightforward and simple. 2. The scale of the dataset is relatively small compared to the widely used robotic operation datasets currently available. Considering that the model architecture and training methods for multimodal information fusion typically require a large amount of data, it may be necessary to have a larger dataset to verify stronger generalizability or to draw more robust conclusions on multimodal information fusion. 
Additionally, some existing datasets are already capable of including modalities such as text, image, video, and point cloud simultaneously. The main contribution of this paper seems to lie in the addition of rich paired audio data. If one considers expanding the scale of the dataset as well as the costs, would it be a better choice to augment the existing datasets with information from modalities such as audio? 3. The experimental section does not yield particularly new conclusions. Generally speaking, it is not surprising that the addition of new modalities, especially for robotic manipulation tasks, typically leads to better performance. Technical Quality: 3 Clarity: 3 Questions for Authors: Considering the excellent development of current multimodal pre-trained models, and that the models in this paper are trained from scratch, is it feasible to incorporate existing pre-trained models such as text-image, audio-image, image-point cloud, and text-point cloud models into the current training framework? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors have adequately addressed the limitations and societal impact of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and reference the reviewer with identifier RRf1 as R3. Comment n of reviewer m is denoted as RmCn. **[R3C1]** > The technical contribution of this paper is limited. Although the paper analyzes some difficulties in the process of fusing multimodal information and embodied alignment, the overall model architecture is still relatively straightforward and simple. We thank the reviewer for raising this point. While the technical novelty may seem limited, our primary contribution lies in providing a straightforward yet effective solution that, for the first time, successfully integrates multiple modalities for both task specification and observation, enabling seamless robotic manipulation. Our method demonstrates significant performance gains over the baselines in new settings, underscoring the necessity of our approach. **[R3C2]** > The main contribution of the proposed dataset seems to lie in the addition of rich paired audio data? Would it be a better choice to augment the existing datasets with information from modalities such as audio? We appreciate the reviewer's concern about our dataset. Notably, our dataset goes beyond merely adding rich paired audio data. We offer a comparison with existing datasets in the following table. We focus on the types of modalities in existing data on both observation and task specification. | Dataset | Observation | Task Specification | | ------- | ----------- | ------------------ | | Open-X Embodiment | point cloud, image | text | | Droid | point cloud, image | text | | RH20T | point cloud, image | video, text | | **Ours** | point cloud, image, video | text, image, video, audio | Most previous datasets provide point clouds and images for observation, and text for task specification. There are other modalities, such as image goals and audio, for task specification. 
Despite being smaller than Open-X and other datasets, our dataset is more comprehensive in the domain we aim to study, specifically aligning multiple modalities for both instruction and observation to facilitate the training of robotic manipulation. It highlights the unique value our dataset brings to the community. Secondly, while it is possible to expand existing datasets with this additional information, it would require significant effort and time. This presents a valuable future research direction for expanding datasets to include richer multimodal information. **[R3C3]** > The experimental section does not yield particularly new conclusions. Generally speaking, it is not surprising that the addition of new modalities, especially for robotic manipulation tasks, typically leads to better performance. We agree with the reviewer's point. Intuitively, adding new modalities should improve model performance for manipulation tasks. However, prior works have not fully explored this area. The main contribution of our work lies in building a robust system for multimodal settings, providing a benchmark to evaluate methods in this context, and demonstrating the value and potential of this approach for future research. **[R3C4]** > Considering the excellent development of current multimodal pre-trained models, and that the models in this paper are trained from scratch, is it feasible to incorporate existing pre-trained models such as text-image, audio-image, image-point cloud, and text-point cloud models into the current training framework? The challenge here is that an embodied agent needs to _observe_ and _interact_ with the world using different modalities. Not only do we need to manage diverse modalities for tasks such as observation and task specification, but we also need to align these two components together. Currently, no work addresses this comprehensive challenge, and it is infeasible to incorporate existing pre-trained models to resolve it. 
--- Rebuttal Comment 1.1: Comment: Thanks for the authors' sincere response. Despite the objective shortcomings that I mentioned, I generally recognize the contribution and solidity of this paper. Therefore, I maintain a positive attitude towards this paper overall, and I will discuss the review results with other reviewers in the follow-up.
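As background for the cross-attention-based embodied alignment discussed in these reviews, here is a generic single-head cross-attention sketch. This is our own illustration; the actual Any2Policy module, its token shapes, and its query/key/value assignment may differ.

```python
import numpy as np

def cross_attention(q_tokens, kv_tokens):
    """Generic single-head scaled dot-product cross-attention
    (hypothetical sketch; not the paper's implementation). One token
    stream provides queries, the other provides keys and values."""
    d = q_tokens.shape[-1]
    scores = q_tokens @ kv_tokens.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # stable softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)       # rows sum to one
    return attn @ kv_tokens                        # attended features

# Toy usage: 5 instruction tokens attending over 12 observation tokens,
# both embedded in a shared 64-dimensional space (hypothetical sizes).
rng = np.random.default_rng(0)
out = cross_attention(rng.normal(size=(5, 64)), rng.normal(size=(12, 64)))
assert out.shape == (5, 64)
```

Because the output keeps the query stream's token count and the shared embedding width, a module of this form can fuse two differently sized token sequences into one fixed-shape representation.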
Summary: This submission develops a new model that can handle many different modalities as input for instruction and observations (video, text, image, pointcloud etc.). Different modalities are encoded via different frozen encoders (with projection layers kept trainable) to a shared representation to then be passed into a policy that then generates actions. The submission constructs a multi-modal dataset for several evaluation benchmarks (3 simulated, 1 real), and trains a policy on this multi-modal data and shows it outperforms policies trained on a specific modality of data. While work in the past has built multi-modal models in language/vision, this is the first to do so in robotics. Cross attention is used where the observation sequence of tokens are keys and the instruction sequence of tokens are values. Strengths: - This appears to be the first work that combines video, image, pointcloud, and audio together into one multi-task model for robotics, demonstrating some improvements in success rates when multiple modalities are used in training instead of just one modality. - The release of a highly multi-modal annotated dataset is great to see. However, more details about the data would be greatly appreciated. Weaknesses: - The performance of AnyPolicy looks very similar to EmbodiedGPT in the FrankaKitchen setting in Figure 4. The Figure 4 caption appears to also be wrong as it seems EmbodiedGPT outperforms AnyPolicy on both Knobs Left and Light Left, not just Knobs Left. Moreover, no error bars are shown and with results so similar over just 100 evaluations in simulation, it is hard to say if AnyPolicy outperforms EmbodiedGPT. - FrankaKitchen, while a usable benchmark, has almost no initial randomization and is incredibly easy given objects in the environment are not randomized, making results reported on it not mean that much. 
It would be great to see comparisons on harder manipulation benchmarks although I realize it might not be possible to run experiments now. - Very few details about the real world dataset are provided. What are the 30 tasks? What are some example instructions? What are some example audio? Technical Quality: 3 Clarity: 3 Questions for Authors: Questions: - Are there meta-world results? I can't find any figures/tables on it, only see results on ManiSkill2 and Franka Kitchen. - The name of the dataset this paper curated, RoboAny, is mentioned once in the paper in passing and never again. Is there a reason for that? - Can there be more clarification around how the ablation of no embodied alignment module works? My understanding is if you remove it you simply directly use the encoded tokens and put them all in one big sequence as input to the transformer. However the original encoders might output different shapes for the tokens so I am not sure how this is consolidated. - Is AnyPolicy trained on data from all benchmarks or is a separate policy trained per benchmark. In other words, is the multi-task policy multi-task over all tasks in a benchmark, or all tasks in all benchmarks? - How does the modality-specific model work, what is its architecture compared to AnyPolicy? - How come StackCube has such a high success rate but PickCube has such a low success rate? My understanding is in ManiSkill2 StackCube is much harder than PickCube because it requires careful placement on a cube and releasing the cube, whereas in PickCube you simply grab the cube and move to a random goal. - How many demonstrations are used for ManiSkill2? This detail could not be found anywhere. - How is R3M used in the Video-Image result in table 3? How does it process video input and produce video tokens? Typo: - "We further conduct experiments on two simulated robotics benchmarks to reinforce the strong generalizability of our approach compared to existing methods" in section 1. 
I think three benchmarks were tested, not two. - Figure 4 shows 110% on the y-axis. It should be clamped to [0, 100]. Unfortunately, given the significant lack of details on data and how some ablations work (see questions below for more), I recommend a reject in the current state. Moreover, the performance of the model after combining several new modalities does not appear to outperform other baselines from other papers that much to warrant the amount of extra work on a simple task like FrankaKitchen. I am happy to raise my score to the accept range if these weaknesses and questions are addressed. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Limitations are provided in the appendix and discuss limitations from the perspective of additional things that could be done (more modalities, more robots etc.). I think limitations around data efficiency could be important to bring up. Just how many demonstrations are needed? Real-world imitation learning suffers greatly due to the expensive cost of teleoperation (especially if high success rates are desired). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and reference the reviewer with identifier QY6x as R2. Comment n of reviewer m is denoted as RmCn. **[R2C1]** > The performance of AnyPolicy looks very similar to EmbodiedGPT in the Franka kitchen setting in Figure 4. The Figure 4 caption appears to be wrong. Thank you for pointing out our mistake. The caption is indeed incorrect, as Any2Policy performs slightly worse than (or on par with) EmbodiedGPT on two tasks. However, we respectfully disagree that the overall performance of Any2Policy is similar to EmbodiedGPT. For the Franka Kitchen experiments, we demonstrate that: 1. For challenging tasks, such as Micro Right and Cabinet Right, our method significantly outperforms EmbodiedGPT. 2. In low-data scenarios (e.g., 10 demonstrations), Any2Policy achieves much better performance than EmbodiedGPT and other baselines. 3. Despite being developed to handle multiple modalities, our method outperforms those trained specifically for a single modality. **[R2C2]** > FrankaKitchen is an easy benchmark. We agree that FrankaKitchen is an easy benchmark and may introduce bias in evaluation. Therefore, we have provided further evaluation on Meta-World (results are included in the rebuttal), ManiSkill-2, and real robots. To further address the reviewers' concerns, we conducted additional experiments on RLBench and Adroit, which include more challenging tasks with a 24 DoF ShadowHand and a 4 DoF arm. All experiments were conducted with 200 demonstrations. We provide multiple baselines, including Diffusion Policy, IBC, EmbodiedGPT, and also present error bars. We ran each experiment with 3 seeds and evaluated 200 episodes every 200 training epochs. We computed the average of the highest 5 success rates, which is a typical setting in this simulation. Below are the experimental results. 
| Method | Hammer | Door | Pen | | ------- | ------- | ------- | ------- | | Diffusion Policy | 52 $\pm$ 15 | 56 $\pm$ 7 | 17 $\pm$ 5 | | IBC | 0 $\pm$ 0 | 1 $\pm$ 1 | 0 $\pm$ 0 | | EmbodiedGPT | 35 $\pm$ 4 | 49 $\pm$ 10 | 21 $\pm$ 7 | | **Ours** | **64 $\pm$ 13** | **65 $\pm$ 8** | **29 $\pm$ 6** | We demonstrate that even on challenging benchmarks, our method consistently outperforms state-of-the-art baselines. **[R2C3]** > MetaWorld results are missing. We thank the reviewer for the careful review and for pointing out the missing experimental results. We sincerely apologize for this oversight. Due to limited space, please see **[R4C3]** for details. **[R2C4]** > The name RoboAny is not used anywhere else. We thank the reviewer for the careful review and for pointing out this **typo**. We will include this in the revision. **[R2C5]** > Can there be more clarification around how the ablation of no embodied alignment module works? We appreciate the reviewer's concern regarding the details of our ablation study. We have provided a **detailed illustration of our methodology in lines 254-258**. Specifically, we use MLP layers to align the observation tokens with the instruction tokens, ensuring the dimensions are correct. Additionally, we employ TokenLearner to reduce the number of tokens and maintain consistency, even when some modalities are missing. **[R2C6]** > Is AnyPolicy trained on data from all benchmarks or is a separate policy trained per benchmark. In other words, is the multi-task policy multi-task over all tasks in a benchmark, or all tasks in all benchmarks? In our experiments, for each benchmark, including real-world scenarios, our model is trained on all collected tasks to perform multi-tasking. This is a typical setting in robot learning, as seen in works like Diffusion Policy (RSS '23) and EmbodiedGPT (NeurIPS '23). **[R2C7]** > How does the modality-specific model work, what is its architecture compared to AnyPolicy? 
Due to our systematic design, our method can naturally handle missing modalities. The architecture of our modality-specific model is identical to AnyPolicy. During inference, we pass empty tokens to account for the missing modalities.

**[R2C8]** > Since **PickCube only needs to place a cube on a random place**, which is much easier than StackCube, why does StackCube have a much higher success rate than PickCube?

We thank the reviewer for their careful review and for pointing out this observation. To explain: first, PickCube requires picking up a cube and placing it at a **specific goal position**, so this task is not easier than StackCube. For instance, as shown in Table 3 of the ManiSkill-2 paper, **the success rate for StackCube is higher than for PickCube when using point cloud observations**. Second, since our method is trained in a multi-task fashion (as opposed to the single-task training in ManiSkill-2), the learning behavior could differ from that of a single-task policy.

**[R2C9]** > How many demonstrations are used for ManiSkill2?

We used 1K demonstrations for all ManiSkill2 tasks.

**[R2C10]** > How is R3M used in the Video-Image results?

For R3M, we select the keyframe, use images of resolution $224 \times 224$ as input, and pass them to R3M. All other settings are kept the same as the default R3M.

We thank you for your valuable review time and comments. We have provided corresponding responses, which we believe cover your concerns. We hope to further discuss with you whether your concerns have been addressed. Please let us know if any part of our work remains unclear.

---

Rebuttal Comment 1.1: Title: Response Comment: Thanks for the comprehensive response.

R2C2: I'm aware you test on other benchmarks, but am surprised you keep FrankaKitchen in the main text and the harder ManiSkill 2 benchmark in the supplemental (many people often miss the supplemental). Is there a reason for this?
At minimum, since there are MetaWorld results, you could include those in the main text (they are far more impressive and significant: larger differences in success rates on harder tasks with a bit more diversity than Franka Kitchen).

R2C10: What do you mean by keyframe, and how are keyframes selected? If these are hand-picked or chosen by some algorithm, could it be that poor keyframe selection leads to worse results?

All my concerns are essentially covered. My main remaining concern is that the paper does not seem polished, given that some key figures were missing to begin with (and are only present in OpenReview comments, not in the revised PDFs submitted here). I have raised my score to 6 to reflect that I do wish to accept the paper. It is straightforward and fairly simple, easy to understand, and even if the results are highly expected, someone had to do the experiments and I really appreciate the effort; I imagine a good amount of engineering is necessary to get this kind of project working. But I think its presentation and organization could be better. A big presentation issue, for example, is the choice to include only Franka Kitchen in the main text, without MetaWorld or ManiSkill2.
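For concreteness, the evaluation protocol described in this rebuttal thread (3 seeds, periodic 200-episode evaluations during training, reporting the average of the 5 highest success rates per seed) could be computed as in the following sketch. The function names and example numbers are ours, not from the paper:

```python
import numpy as np

def top_k_success_rate(eval_success_rates, k=5):
    """Average of the k highest per-checkpoint success rates (in %)
    for one seed, where each entry is the success rate measured at a
    periodic evaluation checkpoint (e.g. every 200 epochs)."""
    rates = np.sort(np.asarray(eval_success_rates, dtype=float))
    return float(rates[-k:].mean())

def aggregate_over_seeds(per_seed_histories, k=5):
    """Mean and standard deviation over seeds of the per-seed
    top-k success rate, matching a "mean +/- std" table entry."""
    scores = [top_k_success_rate(h, k) for h in per_seed_histories]
    return float(np.mean(scores)), float(np.std(scores))
```

For example, `aggregate_over_seeds([[10, 20, 30, 40, 50, 60], [0, 10, 20, 30, 40, 50]])` averages the top five checkpoints of each seed (40 and 30 here) and reports their mean and spread.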
Summary: The paper aims to enhance the generalizability of robotic agents by enabling them to handle tasks using diverse modalities such as text, audio, images, and point clouds. The authors introduce a multi-modal system named Any-to-Policy, which utilizes a versatile modality network to adapt various inputs and connects with policy networks for effective control. To support this approach, they assembled a comprehensive dataset covering 30 robotic tasks annotated across multiple modalities. Extensive experiments demonstrate the system's capability to adapt to diverse multi-modal inputs in both simulated and real-world settings, showcasing its potential to improve the generalizability and performance of embodied agents in various scenarios when compared against various baselines.

Strengths: The paper introduces a novel approach to multi-modal robotic learning with the Any2Policy framework. This framework allows for the integration and processing of diverse modalities such as text, audio, images, and point clouds in a unified manner. In my opinion, the originality seems to lie in the seamless combination of these modalities, enabling robots to handle a wide variety of tasks with greater adaptability. The assembly of a comprehensive real-world dataset annotated across multiple modalities is also a significant and innovative contribution, addressing the scarcity of such datasets in the field. Moreover, the authors provide detailed descriptions of their framework, including the use of multi-modal encoders and embodied alignment techniques. The experimental setup is relatively comprehensive, encompassing both simulated benchmarks and real-world scenarios, which validates the effectiveness of the proposed approach. Finally, the paper addresses an important challenge in robotic learning: the ability to process and integrate information from multiple modalities.
By demonstrating the capability of the Any2Policy framework to generalize across various tasks and modalities, the paper opens up new possibilities for the development of more generalizable and adaptive robotic systems. The release of the multi-modal dataset will likely push further research in this area, which is important for future work along this direction.

Weaknesses:
1. While the assembly of a multi-modal dataset is a significant contribution, the paper could improve by providing more detailed information on the diversity and representativeness of the tasks and scenarios included in the dataset. For instance, it would be beneficial to know more about the variation in objects, environments, and task complexities. This information would help to assess whether the dataset sufficiently covers the wide range of scenarios the framework is intended to handle.
2. Although the paper compares the Any2Policy framework with several state-of-the-art models, it could benefit from a more extensive comparative analysis. For instance, including a wider variety of baseline models (e.g., RT-1, RT-2, RT-X, Octo, OpenVLA) and providing a detailed discussion of the differences in performance could help to better highlight the strengths and weaknesses of the proposed approach. Additionally, a qualitative comparison showing example outputs or behaviors from different models could provide valuable insights.
3. The paper includes some ablation studies to assess the impact of different components of the framework. However, these studies could be expanded to provide a deeper understanding of the contribution of each component, given that Any2Policy is a fairly complex system. For example, the authors could investigate the impact of different encoder architectures or the role of specific modalities in greater detail. This would help to identify the most critical elements of the framework and guide future improvements.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the section above.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and refer to the reviewer with identifier h8z5 as R1. Comment n of reviewer m is denoted as RmCn.

**[R1C1]** > More details on the datasets.

We concur with the reviewer's suggestion to provide additional details in our paper. To address this, we have included in the Appendix examples of 8 tasks from our real-world dataset. This dataset comprises 20 short-horizon tasks, such as pick-and-place [item], and 10 medium-to-long-horizon tasks, which involve multiple steps to complete (e.g., opening a drawer, retrieving an item, placing it back, and closing the drawer). Our setup features a variety of real-world objects, including microwaves, drawers, laptops, boxes, cubes, bottles, etc. These objects range from deformable to rigid and articulated, encompassing a wide array of real-world scenarios. We will provide a detailed summary of our dataset in the revised manuscript.

**[R1C2]** > Extensive comparative analysis would be beneficial, such as RT-1/RT-2/RT-X, Octo, OpenVLA.

We appreciate the reviewer for highlighting these issues. Since RT-2/RT-X are not open-sourced and OpenVLA was released only recently (leaving us no time to implement it in our environment), we provide additional results for Octo and RT-1 in our real-world experiments. Given the limited time available for the rebuttal, we conducted experiments on four selected tasks. The same training data were used for all methods. For Octo, we present two versions: one that uses pretrained weights from the Open-X framework, and another that is trained from scratch using our dataset. Both versions were evaluated 10 times to calculate the average success rate for each method. Consistent training hyperparameters, including learning rate, learning rate scheduler, and training epochs, were maintained across all methods.
| Method | PlaceBread | CloseLaptop | InsertPlug | PlaceCan |
| ------- | ------- | ------- | ------- | ------- |
| Octo (pretrained) | **100** | **100** | 40 | 20 |
| Octo (train-from-scratch) | 50 | 70 | 0 | 0 |
| RT-1 | 60 | 80 | 0 | 0 |
| **Ours** | **100** | **100** | **50** | **50** |

We demonstrate that Octo, pre-trained on the Open-X framework, achieves results comparable to our method on two tasks. However, for more challenging tasks, our method outperforms the pre-trained Octo. When Octo is trained with the same amount of data as our method, our approach achieves a significantly higher success rate.

**[R1C3]** > The authors could investigate the impact of different encoder architectures or the role of specific modalities in greater detail.

We thank the reviewer for raising these points. Unfortunately, due to the limited time available during the rebuttal period, we were unable to complete these experiments. However, we plan to conduct additional experiments in the future, focusing on using different encoders and replacing the policy head.
NeurIPS_2024_submissions_huggingface
2024
Improving Neural Network Surface Processing with Principal Curvatures
Accept (poster)
Summary: The paper proposes using principal curvatures as a surface representation better suited to modern neural network architectures. To support this hypothesis, the paper compares three different representations: HKS, the SHOT descriptor, and extrinsic coordinates, as used by PointNet++, Delta Net, and Diffusion Net. Across all the provided experimental figures, principal curvatures give the best performance.

Strengths:
- The paper provides an overview of the classic surface representations; it also provides an overview of the popular architectures for geometric deep learning in the existing body of work.
- The paper works on a relatively meaningful topic in geometric deep learning, as it analyzes different surface representations to be used by some popular surface-processing architectures.

Weaknesses:
- Quality:
  - Although Section 3 provides the way to calculate the principal curvatures, it takes up too much space but does not add equivalently significant value to the paper, and thus appears to be a bit redundant.
  - The mathematical presentation in the paper is quite disorganized. Most of the formulations are not indexed. The definitions are given without a term [line 104, line 118].
- Incompleteness:
  - Section 6, Acknowledgements, is empty, which becomes another major drawback regarding presentation quality.
- Limited Contributions:
  - The surface representations analyzed in the paper are relatively limited. The task and the methods are not new, nor does the paper provide an intelligence-stimulated combination of the existing methods.

Technical Quality: 2
Clarity: 2
Questions for Authors: Refer to the weaknesses.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: Refer to the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the time they have spent reviewing our work. However, we are puzzled by several statements they have made, making it difficult to give a meaningful rebuttal. We will respond, point by point, as best as we can:

> "... [section 3] takes up too much space but does not add equivalently significant value to the paper, and thus appears to be a bit redundant."

This is a value judgement and not a scientific judgement. Since curvature and the shape operator are the main tools used in the work, it is expected to state and define what these tools are, and give references. We believe this is normal scientific practice. If the reviewer believes otherwise, we ask that they be more specific about what they expect or would like to see instead.

> "The mathematical presentation is quite disorganised. Most of the formulations are not indexed. The definitions are given without a term."

The mathematical presentation is confined to two and a half pages, and is little more than a sequence of definitions introducing the main mathematical objects we use. The definitions are clearly labelled, and since we make no explicit use of any particular formulas in later sections there is no real need to number them. We feel the reviewer is voicing, once again, a value judgement, which can be interpreted as deliberately obstructive and combative.

> "Section 6 Acknowledgements is empty, which becomes another major drawback regarding the presentation quality."

This is perhaps the most puzzling statement made by the reviewer. This being a blind peer-review process, we believe it is common knowledge that acknowledgements should be left empty at this stage, to avoid compromising our anonymity. Using this argument to say our paper is incomplete seems obstructive and combative.

> "The surface representations being analyzed in the paper are relatively limited.
The task and the methods are not new, nor does the paper provide an intelligence-stimulated combination of the existing methods."

The belief that the analyzed representations (i.e., data) are limited is a value judgement, not a scientific judgement. The work addresses shape data, which is used in numerous broad application domains, including medical imaging, biology, architectural artifacts, etc. We have already addressed the point of novelty in our general answer; however, we do not understand the problem the reviewer raises with the tasks themselves not being new. They are commonly used state-of-the-art benchmarking tasks, appropriate for comparing methods, with completely open-sourced data and varying levels of difficulty. We are unsure what you mean by the final clause of the second statement.

---

Rebuttal 2: Title: Additional comments for the review? Comment: Dear ifJ1, the end of the discussion period is close. It looks like your review contains many general comments concerning the structure of the text, but not about the technical properties of the proposed method. Could you please provide in more detail any additional technical comments concerning the proposed method itself?

---

Rebuttal 3: Comment: ## I. Regarding the novelty...

I spent quite a long time reading this paper and trying to understand the ideas it conveys. I sincerely apologize for making the authors think the review does not contribute much to a constructive discussion; it is also because shape descriptors are an unfamiliar topic to me. During the whole reviewing period, I kept thinking about how principal curvature contributes to the geometric learning scheme. There has been intensive research on mesh and point cloud Laplacians [1][2], where the Laplace-Beltrami operator, via its discrete matrix forms, contains important curvature and angle information.
Also, a concurrent work, "Manifold Diffusion Fields" [3], uses the first k eigenvectors of the Laplace-Beltrami operator as intrinsic coordinates to replace x, y, z on mesh vertices or point clouds. They use these as a positional encoding and apply a transformer directly, obtaining great improvements. This line of research is also classified under spectral methods in geometric deep learning; see [4] (Section V).

Another line of geometric deep learning, on processing and learning manifold surfaces, utilizes the tangent bundle of surfaces. One of the most recent works in this line of research is [5]. Although their task is learning texture generation on a single mesh, surface geometry is still being learned and can also be generalized to mesh deformation tasks.

It could be because my knowledge base is dominated by this evidence that I did not fully recognize the contribution of submission 17689. It appears to me that the paper performs experiments using two surface representations with different neural networks. I recognize the overall experiments conducted in the paper and the effort made by the authors, but I think the contribution is limited; also, a major part of Section 3 is vague and hinders me from following the paper. When the authors state they are filling research gaps, I was rather expecting to see the aforementioned research fields addressed. It could also be because of my limited knowledge of surface processing and shape operators that I did not realize the significance as much as the authors and reviewer qKsN did.

- [1] Sorkine, Olga. "Laplacian mesh processing." Eurographics (State of the Art Reports) 4.4 (2005): 1.
- [2] Lévy, Bruno, and Hao Zhang. "Spectral mesh processing." ACM SIGGRAPH 2010 Courses. 2010. 1-312.
- [3] Elhag, Ahmed A. A., et al. "Manifold Diffusion Fields." The Twelfth International Conference on Learning Representations (2023).
- [4] Bronstein, Michael M., Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. "Geometric deep learning: going beyond Euclidean data." arXiv preprint arXiv:1611.08097 (2016).
- [5] Mitchel, Thomas W., Carlos Esteves, and Ameesh Makadia. "Single Mesh Diffusion Models with Field Latents for Texture Generation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.

## II. Confusing points in the paper, especially the Section 3 operator

Carrying these questions, I kept reading the paper's method part, hoping to develop a better understanding.
- (1) Two definitions are given without a title, so I guess definition 1 (line 104) defines the shape operator and definition 2 (line 118) defines the principal curvatures. If so, it would be better if the authors could root the definitions in some references.
- (2) It is also confusing when the paper mentions (e1, e2) in line 115; the paper does not clearly state their relationship to (k1, k2), leaving a non-rigorous impression of this part.
- (3) Line 137 (the equation index is not given): I cannot find an explanation of the operator DF used in the inner product <.,.>, nor of the subscript p.
- (4) Line 138: what is TS, the tangent bundle of the surface S? Also, S_hat is not properly introduced.
- (5) Following (3) and (4), the equations after line 140 are vague in meaning; again, the unindexed equations are a big problem. Does g_s refer to Gaussian curvature? What is g_s(.)? A definition or description is not clearly given or easily found.
- (6) Line 139: telling the difference between \eta and F requires prior knowledge of Euclidean spaces and manifolds when reading.

In a nutshell, Section 3, especially Section 3.1, is not rigorous in its writing.

## III. Finally...
After reading the reviews and the reply from reviewer qKsN, I realize that I did not fully recognize the contribution submission 17689 has made to surface processing, and I was not very confident about my understanding of the paper, while I also agree with reviewer LGgy on the weaknesses. The paper's writing in Section 3 is still a big source of confusion to me; this could either be due to my rather limited understanding, or the paper may need a revision written in a more rigorous way to better inform the audience. Thus, I will adjust my rating to borderline reject and lower my confidence score, as discussed.

---

Rebuttal Comment 3.1: Comment: We thank the reviewer for this additional feedback. Their concerns now appear clearer, although we might have misunderstood some claims. We are happy to address them below:

Novelty and contributions of the work:
- Laplacian mesh processing has been addressed in the paper; in fact, the HKS representation is the most successful representation stemming from this line of work, and in our paper we show that the principal, Gaussian, and mean curvatures outperform the HKS representation.
- While the Laplacian is linked to curvature (albeit through an equality relating two 2nd-order differential operators), we are specifically suggesting in our work that curvature alone is a better-suited input to neural networks processing shapes, which is strongly backed by our experiments. To the best of our knowledge, this has never been addressed in the existing literature. Therefore (this could be a misunderstanding on our part), we do not agree with the claim that our work has been done entirely in the realm of spectral mesh processing. On the contrary, we believe our work is novel and contributes to the field of shape processing.
- We thank the reviewer for the references they have provided. We will be happy to extend the contextualization of our work by adding a paragraph on transformer-based methods and including the suggested references.
Notation and the mathematical aspect of the paper:
- The concerns of the reviewer about notation are now clearer. We will be happy to address them here (and in the revision of our paper) to facilitate the understanding of the mathematical aspects of our paper. Nonetheless, we would like to point out that we have used common notation surrounding the well-studied subject of curvature, and have also suggested Guggenheim [15], Olver [26], and O'Neill [27] for further reading. These are classical texts on the subject and are widely used in undergraduate and graduate courses. The definitions used are standard and can be found in many textbooks, including the three previously mentioned. This also makes it hard for us to understand what the reviewer means by "lack of rigor" -- the mathematical section does not contain heuristics or sketches; we have given precise definitions which serve simply to define our tools.

To answer the specific questions of the reviewer:
- (1) We believed definitions 1 and 2 did not need to be "titled", as their content is rather explicit and straightforward: def. 1 introduces the shape operator, while def. 2 introduces the principal, Gauss, and mean curvatures, which are nothing more than eigenvalues, determinants, and traces.
- (2) k1 and k2 are the eigenvalues associated with the orthonormal eigenvectors e1, e2. While this is explicitly mentioned in definition 2, we will specify these terms directly at the end of definition 1 in the revision, to avoid any confusion.
- (3) DF is a standard notation denoting the differential of a map F. We use the same notation throughout the paper; likewise, p is a point in S, and the subscript p refers to the map at the point p -- standard notation. However, we are willing to explicate this in revisions of our paper.
- (4) TS is the standard notation for the tangent bundle of S. We are willing to explicate this in the revisions of our paper.
S_hat is introduced as one of the two surfaces referred to on the same line 139. We struggle to see how else it could be introduced.
- (5) Although we fail to see how the indexing of equations is a big problem (given that no equation references another), we will address this in revisions of our work to facilitate reading. Regarding g_s, it is introduced and defined on line 138 as the Riemannian inner product induced on TS by the Euclidean inner product.
- (6) We believe F and eta have been introduced in a way that makes the difference between them clear: F is an isometry of R^3, while eta is an isometry between surfaces. We already introduce them both in writing and in formulas -- we struggle to understand what more is needed here.

Overall, we believe all objects have been properly introduced. However, we will happily explicate some common notations used in our work to help readers less familiar with them. We hope we have answered the reviewer's questions more specifically in this comment.
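The definitions debated in this exchange (def. 1: the shape operator; def. 2: the principal, Gauss, and mean curvatures as its eigenvalues, determinant, and half-trace) can be illustrated numerically. The following sketch is ours, not from the paper; it assumes the 2x2 matrix of the shape operator is given in an orthonormal tangent frame, where it is symmetric:

```python
import numpy as np

def curvatures_from_shape_operator(S):
    """Given the 2x2 matrix of the shape operator in an orthonormal
    tangent frame, return (k1, k2, K, H): the principal curvatures
    (eigenvalues of S, ascending), the Gaussian curvature
    K = det S = k1*k2, and the mean curvature H = (1/2) tr S."""
    S = np.asarray(S, dtype=float)
    k1, k2 = np.linalg.eigvalsh(S)  # S is self-adjoint in this frame
    return k1, k2, k1 * k2, 0.5 * (k1 + k2)
```

For a sphere of radius r, the shape operator in such a frame is (1/r) times the identity, giving k1 = k2 = 1/r, K = 1/r^2, and H = 1/r; for a unit cylinder, diag(0, 1) gives K = 0 and H = 1/2.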
Summary: This paper proposes to use surface curvatures as input to neural networks to improve performance on 3D tasks. The main hypothesis is that, as the curvatures are intrinsic properties of the surface, they will enable more effective learning for the relevant tasks. The paper carries out an evaluation comparing the performance of networks with curvatures as input against existing approaches that use point coordinates or other intrinsic properties as input. The proposed approach achieves better results on a few categories of tasks.

Strengths:
- The paper carries out evaluation with some existing methods to demonstrate the benefit of using surface curvature as input.
- The paper is overall well-written and easy to understand.

Weaknesses:
- The use of surface curvature in neural networks is not really new, so the novelty is limited.
- The paper fails to discuss some inherent limitations of using curvatures as input, e.g.:
  *) Surface curvature requires a 2D manifold structure. In a lot of applications, we are dealing with point clouds rather than a well-defined surface structure such as meshes. On point clouds with thin structures, some points not in the manifold neighborhood of a point may actually be close to the point in 3D space, causing difficulty in correctly identifying the neighbors and affecting the accuracy of curvature computation.
  *) As a high-order differential property, curvature is sensitive to noise.
  *) The signs of curvatures depend on the orientation of the surface (i.e., which of the two possible normal directions points outside and which points inside). Given a point cloud, it is non-trivial to properly orient the normals to enforce a globally consistent orientation. If some points on the point cloud have an incorrect orientation, their curvatures will have negated signs and may mislead the neural network.
These real-world scenarios can cause difficulties when using curvature rather than point coordinates as input to neural networks. There is insufficient evaluation on such real-world cases.
- In recent years, transformers have shown better performance than convolutional networks on many tasks. There is a growing body of transformer-based networks for 3D tasks (see https://github.com/lahoud/3d-vision-transformers), but there is no comparison with such methods in this paper.

Technical Quality: 2
Clarity: 3
Questions for Authors:
- How well does the method perform when dealing with noisy data?
- How well does the method perform when compared against transformer-based architectures?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: There is no discussion of limitations. In particular, the dependence on orientation, the difficulty of orienting the surface globally, and the sensitivity to noise are not discussed sufficiently.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their work and time. We clarify misunderstandings and respond to the questions raised below:

> "The main hypothesis is that as the curvatures are intrinsic properties of the surface, they will enable more effective learning for the relevant tasks."

The idea that curvature should be the representation of choice is not, as stated by the reviewer, based solely on its intrinsic properties. Many other representations are intrinsic, such as HKS, which we show is outperformed by curvature.

> "The proposed approach achieves better results on a few category of tasks"

The inflection of this statement is misleading, especially since we achieve better results on *all* categories of tasks, competing against the state-of-the-art neural network architectures with state-of-the-art surface representations.

> "The use of surface curvature in neural networks is not really new, so the novelty is limited."

Our methodology and results *are* new and were missed by the rest of the community. What is important is novelty of results, not novelty of subject matter, since otherwise this point of view shuts down almost all avenues of scientific inquiry. We also make sure to reference papers that use curvature in our work, and explain clearly how we differ from them.

> "In recent years, transformers have shown better performance than convolution networks on many tasks..."

Although they have shown tremendous improvements in the NLP domain, one could argue the verdict on transformers outperforming CNNs in image processing is not out yet. Moreover, regarding shape data, which is the context of our work and results, we have selected three architectures that are the most used in the domain and can undeniably be considered the state of the art in neural networks for shapes.
Nonetheless, we are open to including a comparison with one of the emerging 3D transformer architectures in revisions of our work; since we show that the curvature representation outperforms other methods across a variety of architectures, we believe a transformer model would also benefit from such a representation.

> "The paper fails to discuss some inherent limitations when using curvature as input... In a lot of applications we are dealing with point clouds rather than a well-defined surface structure such as meshes."

It is true that point clouds are different from meshes, but there are several things to consider here: (1) In our paper, and within the domain of surface processing, we are focusing on shapes whose standard representation is a mesh, as mentioned explicitly in our paper. Application domains making use of mesh shape data are numerous and large, including medical imaging, biology, and architectural artifacts, so the context in which our results are valid is not at all limited. (2) We believe curvature as a shape representation could be extended to noisy point cloud data, perhaps more so than other representations, since discrete curvature has a long history of research specialized in noisy data. In the paper we reference methods (that have been implemented and tested) from geometric measure theory that can be used to define curvature for point clouds without an obvious orientation of normals everywhere.

---

Rebuttal Comment 1.1: Comment: The rebuttal has addressed my concerns. I am happy to raise my rating to 6.

---

Reply to Comment 1.1.1: Comment: We thank the reviewer for taking our rebuttal into consideration, and are happy they raised their rating.

---

Rebuttal 2: Title: Any comments? Comment: Dear LGgy, the end of the discussion period is close. I would be grateful if you could provide feedback regarding the authors' answers to your review.
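The orientation issue debated in this thread (curvature signs flipping when normals are inconsistently oriented) can be checked concretely: flipping the unit normal n to -n negates the shape operator, so the principal and mean curvatures change sign, while the Gaussian curvature, being the determinant of a 2x2 matrix, is orientation-invariant. This illustrative snippet, with hypothetical values, is ours, not from the paper or the review:

```python
import numpy as np

# Shape operator of some oriented surface patch, in an orthonormal
# tangent frame (hypothetical values for illustration).
S = np.array([[2.0, 0.0],
              [0.0, 0.5]])
S_flipped = -S  # choosing the opposite unit normal negates S

# Principal curvatures (eigenvalues) and mean curvature (half trace)
# change sign under the flip ...
assert np.allclose(np.linalg.eigvalsh(S_flipped),
                   -np.linalg.eigvalsh(S)[::-1])
assert np.isclose(0.5 * np.trace(S_flipped), -0.5 * np.trace(S))

# ... but the Gaussian curvature det(S) is orientation-invariant,
# since det(-S) = (-1)^2 det(S) for a 2x2 matrix.
assert np.isclose(np.linalg.det(S_flipped), np.linalg.det(S))
```

This is consistent with the reviewer's point that inconsistently oriented normals negate some curvature signs, and with the rebuttal's suggestion that orientation-free (e.g. Gaussian-curvature-based or geometric-measure-theoretic) quantities are more robust for raw point clouds.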
Summary: The paper proposes to use curvature instead of previous surface descriptors in neural networks that process shapes. The authors show that the principal curvatures and/or mean curvature are better surface descriptors for three very different neural network pipelines, and are much faster to compute. They point out previously known properties of curvature to explain why curvature makes for such a good shape representation.

Strengths:
- While there is clearly a performance improvement, it is impressive that the benefit is clear on three very different neural network pipelines.
- Furthermore, the simplicity and low dimension of the proposed descriptors really demonstrate their effectiveness.
- The reduced computation-time scaling means the approach can be scaled to very large samplings.
- "The performance of each representation is strongly dependent on the chosen implementation. We have tried to be as fair as possible by not developing our own implementations of existing work and instead using implementations which have already been tried, tested, and validated in the literature" -- I am very happy with this mentality; this is very important and a lot of papers in the literature do not do this.

Weaknesses:
- The reduction in performance for the principal curvatures on classification is a bit puzzling, especially since Gaussian curvature can trivially be computed from the principal curvatures. Surely the complexity of Delta Net and Diffusion Net can cope with that, but there is a large performance reduction compared to HKS. The paper mentions that this indicates that Gaussian curvature interacts better with pooling, but that is not really justified. In this task HKS performs a lot better than shot16/64, indicating that there is some difference in the task. Given that this paper is more of a discovery of the benefit of a well-known quantity, I would expect more analysis of this.
Especially since for the other two tasks, Gaussian curvature performs worse than the principal curvatures, and intuitively this made sense as it decreased the amount of information; but then discarding this argument for the classification task is not appropriate. - It would be nice to also benchmark against the pipeline PointNet++ recommended, the "linear combination of HKS, WKS and Gaussian curvature, followed by a PCA projection, leading to a 64 dimensional feature per point" Technical Quality: 3 Clarity: 3 Questions for Authors: - Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: None given, and in the checklist the authors have said it is not necessary given it is a comparative study. However, given they also try to motivate why curvature is appropriate theoretically, and it depends whether to use principal or Gaussian curvature, it would be nice for them to try to identify when to use principal or Gaussian curvature(s) (theoretically or hypothesis-wise) Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive comments, and for their very valuable remarks. We acknowledge that our results raise some interesting and important questions that are not answered in the work, as pointed out in the 'weaknesses' section. However, we do not see these as weaknesses of the work: discovery has always preceded explanation in science, and it is important that discoveries are shared with the community and circulated. The fact that questions arising from the discovery are not easily answered is an invitation to investigate our constructions further, which we believe to be a scientific strength. Nonetheless, we are grateful to the reviewer for their suggestions, which represent excellent directions for future work. We will make sure to include these discussion points in revisions of our work. --- Rebuttal Comment 1.1: Title: Keeping my original score, but hope the authors are interested in improving their paper. Comment: I'm not sure I agree with the rebuttal from the authors in regards to my weaknesses. Yes, it is important that discoveries are shared with the community and circulated, such as the one presented in the paper, but that does not mean only a surface-level investigation of the discovery is needed before circulation, and not being interested in investigating or at least discussing further is not a scientific strength. While I did not expect the authors to necessarily run more experiments based on the weaknesses I listed, I was hoping for at least some discussion or thoughts about them. In regards to the first weakness, do you think that a) Delta Net and Diffusion Net do not have the capacity to compute Gaussian curvature from principal curvatures, or b) they do have the representation ability but they never find Gaussian curvature or some equally useful information derived from principal curvatures? I also had a look at the other reviews.
Reviewer LGgy has a point about the difficulty of estimating curvature robustly, and it makes sense that at least a short discussion about this would be beneficial to the paper, and I hope the authors add this. However, the experimental results show that, at least for the experiments given, current standard methods (like the one from [30] used in the paper) are sufficient for various interesting datasets. I don't think that it is necessary to specifically give results with noisy data, and especially not necessary to give results with transformers. I am keeping my original score of 7, but hope the authors are interested in improving their paper. --- Reply to Comment 1.1.1: Title: Response and additional comment Comment: We thank the reviewer for this additional feedback. First, we would like to apologize if the reviewer was under the impression that the questions they have raised have not been considered. On the contrary, they have stimulated discussion amongst us, and we agree the suggestions will improve the paper. We will be happy to include a discussion section on the points raised by the reviewer and summarised below: Benefits of Gaussian curvature: - We believe the initial comments made about better understanding why Gaussian curvature could be better than principal curvature are interesting and relevant; it is an important point that will add value to the paper's thesis, and we thank you for highlighting it. - We could also hypothesize that the size of meshes - rather than the task of classification - could be an important factor, since the SHREC dataset we have used in the paper also happens to be the one that contains the smallest shapes.
- As we do not have a strong analytical understanding of neural network behaviour, let alone an understanding of their action on shapes, it is hard to give a guideline as to when to use Gaussian versus principal curvature, or to give a scientific discussion that would not be purely hypothetical/speculative, without any experiments and hypothesis testing as support. - However, we will be happy to discuss potential experiments that could help tackle this question, e.g. benchmarking both curvatures on different classification datasets with varying sample sizes and sizes of meshes. Representation ability of the neural networks: - We believe that, at the minimum, Delta Net and Diffusion Net are flexible enough to "easily reach" Gaussian curvature from principal curvature. In fact, they can probably reach it from the extrinsic coordinates representation: in terms of shape information, each representation is complete in a sense; that is, they all contain the geometric information needed to fully describe the shape. We do not create more information when using curvature; it is just "presented" in a compact, yet information-rich, form which the neural network uses to process the shape. In addition, principal curvature and Gaussian curvature are indeed very close to each other, but so is the HKS: it is essentially computed from the shape Laplacian, which is related to curvature. Noisy data: - First, we thank the reviewer for emphasizing both the fact that added experiments on noisy data would be beside the point (since we work with known shapes), and that there is no need for comparing with transformer-based architectures. - As for integrating curvature in a noisy data scenario: curvature has a long history of research, with numerous discretisation methods and implementations, each designed to be robust to different scenarios.
We believe that noisy data mostly implies a scenario where the orientation of the shape is missing, and in this case we point out that curvature as defined, and implemented, from geometric measure theory should be the best option, as it doesn't rely on the normals. We have only slightly touched upon this point in the paper but agree that elaborating on it, specifically in section 3.2, would improve our paper. We hope we answered the reviewer’s questions more specifically in this comment.
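As a concrete footnote to the discussion above (a trivial numpy illustration, not code from the paper): Gaussian and mean curvature are elementary functions of the principal curvatures, which is why one would expect networks taking principal curvatures as input to be able to reach them.

```python
import numpy as np

def gaussian_curvature(k1, k2):
    # Gaussian curvature K is the product of the principal curvatures.
    return k1 * k2

def mean_curvature(k1, k2):
    # Mean curvature H is their average.
    return 0.5 * (k1 + k2)

# Sphere of radius r: both principal curvatures equal 1/r everywhere,
# so K = 1/r^2 and H = 1/r.
r = 2.0
k1 = k2 = np.full(5, 1.0 / r)  # per-vertex values on 5 sample points
print(gaussian_curvature(k1, k2))  # [0.25 0.25 0.25 0.25 0.25]
print(mean_curvature(k1, k2))      # [0.5 0.5 0.5 0.5 0.5]
```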
Rebuttal 1: Rebuttal: We would like to first thank the reviewers for their time, and for their thoughtful comments and questions that will certainly help improve the revised paper. In particular, we thank the reviewers for acknowledging the qualities of our work: >"... it is impressive that the benefit is clear on three very different neural network pipelines." >"... a lot of papers in the literature do not do this." >"The paper is overall well written and easy to understand" >"The paper works on a relatively meaningful topic in geometric deep learning..." We are particularly happy with the comments of reviewer qKsN, who perfectly grasped the main message of our work and highlighted some interesting questions. Although reviewers LGgy and ifJ1 have pointed out that the use of curvature in surface processing is not new, this is specifically what makes our contribution all the more valuable and all the more publishable. Despite the existence of related work using curvature, our results were missed by the rest of the community, and the questions our work raises are novel and non-trivial. Finally, we believe some comments made by reviewer ifJ1 are, unfortunately, value judgements and not scientific judgements, and we are concerned about the extent to which they actually engaged with the work. We respond point by point to each reviewer's remarks in individual comments below.
NeurIPS_2024_submissions_huggingface
2024
Efficient Sketches for Training Data Attribution and Studying the Loss Landscape
Accept (poster)
Summary: The paper proposes efficient sketching algorithms designed to address memory constraints in large-scale models. The key idea is to eliminate the dense projection matrix multiplication (large matrix materialization) found in existing sketching algorithms such as FJL and FastFood. The authors provide theoretical guarantees for their proposed algorithms: AFFD, AFJL, and QK. They showcase the advantages of these methods in three different contexts: training data attribution, intrinsic dimension estimation, and eigenvalue estimation of the Hessian. Strengths: - The paper addresses an important topic that is likely to be of interest to the NeurIPS community, with potential applications across various domains. - The authors provide a well-motivated problem statement, clearly articulating the need for efficient sketching algorithms. - The results presented are comprehensive and interesting, covering multiple aspects of the proposed algorithms and demonstrating their effectiveness across several use cases. - While the authors have not provided code with the submission, the appendix contains sufficiently detailed information to enable the reproduction of the results. Weaknesses: - While the paper is generally well-written, there are several areas that require improvement: a) Some notations are not clearly explained. b) Certain statements lack precision, potentially leading to ambiguity (specific examples are provided in the detailed comments section). - The authors do not sufficiently address the limitations of the proposed approach. The brief mention in the conclusion does not adequately address the potential drawbacks or constraints of the method (e.g., the limitations of QK). - The paper would benefit from a more comprehensive comparative analysis, such as against the sketching algorithms used in TDA [1, 2]. [1] Park, S. M., Georgiev, K., Ilyas, A., Leclerc, G., & Madry, A. (2023). Trak: Attributing model behavior at scale.
arXiv preprint arXiv:2303.14186. [2] Xia, M., Malladi, S., Gururangan, S., Arora, S., & Chen, D. (2024). Less: Selecting influential data for targeted instruction tuning. arXiv preprint arXiv:2402.04333. Technical Quality: 2 Clarity: 2 Questions for Authors: I have also included suggestions in addition to questions. - The authors should be more specific when discussing Training Data Attribution (TDA) methods. Rather than using the broad term "TDA," they should explicitly mention that their work focuses on gradient-based TDA methods, such as TracIn [3] and GradDot [4]. TDA is a wide-ranging concept that encompasses various approaches, some of which do not align with the description provided in line 22. - In the introduction, the authors state that random projections have been implemented using dense matrices for TDA. Several works perform random projections without explicitly materializing dense matrices. The same holds for the experiments in Section 5. - The statement “memory constraints limited their investigation of ID in generative tasks to 500k” in line 98 is unclear. What does ID stand for? - In line 103, the authors claim that their work makes influence function computations more efficient. Please clarify this point, considering that influence functions typically require inverse Hessian-vector product (IHVP) estimation, which isn't directly addressed in Section 5.1. Is this claim primarily based on improving methods like the Arnoldi iteration? - In Section 5.1, the conclusion that "layer selection coupled with dense projections faces severe scalability limitations" relative to AFFD, QK, and AFJL is not clear. Consider including tabular results (similar to Table 5) in the main paper to support this claim. - Do you anticipate any challenges in applying these techniques to even larger language models? Minor Comments - Line 129: Replace "D" with "$D$" for consistent mathematical notation. - Line 150: Clarify that $B$ and $H_D$ now have dimensions $D \times D$.
- Line 172: Specify the relevant section when mentioning "unacceptable early TPU results" for clearer cross-referencing. - Line 227: Add a period at the end of the sentence. - In the case of TDA, the rankings between the gradient dot products are important. Are there any reasons why the authors do not use the Spearman correlation? - In line 311, it would be helpful if the authors provided a direct explanation of why explicit sketching provides a substantial speed-up. - Figure 1 has low resolution, making it difficult to read when printed. [3] Pruthi, G., Liu, F., Kale, S., & Sundararajan, M. (2020). Estimating training data influence by tracing gradient descent. Advances in Neural Information Processing Systems, 33, 19920-19930. [4] Charpiat, G., Girard, N., Felardos, L., & Tarabalka, Y. (2019). Input similarity from the neural network perspective. Advances in Neural Information Processing Systems, 32. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: The paper would benefit from a more comprehensive discussion of limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their careful reading of our paper and the many valuable suggestions to improve the presentation. We also appreciate the reviewer's acknowledgment that this work is likely to be of interest to the NeurIPS community, with potential applications across various domains. **Answers to Questions** ```The authors should be more specific ... description provided in line 22.```: We acknowledge the need for greater specificity when discussing Training Data Attribution (TDA) methods. We commit to revise the manuscript to explicitly mention that our work focuses on gradient-based TDA methods, such as TracIn [3] and GradDot [4]. We recognize that TDA is a broad concept encompassing various approaches, and we commit to clarify that our work is not applicable to all TDA techniques. We commit also to clarify that we consider methods built on estimating the gradient (e.g., TracIn, GradDot) or quantities derived by preconditioning the gradient with the inverse Hessian ([10, Koh et al.], [21, Schioppa et al.]). ```In the introduction, the authors state that random projections ... experiment section in 5.```: We acknowledge that some works perform random projections without explicitly materializing dense matrices. However, the methods we are aware of (e.g., TRAK [ https://arxiv.org/pdf/2303.14186, B.1]) still need to temporarily materialize chunks of the dense projection on-the-fly. This approach has two substantial disadvantages: (1) runtime grows linearly in the target dimension (they trade-off memory with compute), and (2) there is a need for specialized kernels for efficient implementation, and it's thus unclear how to implement this on TPUs. We have included a plot of runtimes (Figure 1) and a table (Table 1) in the rebuttal PDF to illustrate these points. 
We emphasize that implementation difficulty is a drawback of these methods, as evidenced by our Triton implementation of TRAK not outperforming the original CUDA kernel released by the TRAK authors. We also emphasize that in the case of using pure JAX we were not able to get a satisfactory implementation, because one cannot control how the XLA compiler will deallocate these temporary arrays or their placement in the memory hierarchy. On the other hand, our methods can be written in pure JAX and have constant runtime in the target dimension. ```The statement ... What does ID stand for?```: *ID* stands for Intrinsic Dimension (line 90). We meant that [15, Liu et al.] could only work with a target sketching dimension <= 500k, which prevented them from searching for the true value of the intrinsic dimension (ID), as we demonstrate on a summarization task where ID approaches the model dimension. ```In line 103, ... like the Arnoldi iteration?```: Let us start with an observation from [7, Guo et al; Sec 3, eq(4)]: by symmetry of the Hessian, the iHVP can be applied either to the train or the test points; however, given that the train data is much bigger, the iHVP should be applied just to the test point. Therefore the cost of an iHVP is limited only to test points. Now, our claim about improving influence function computations is based on two different approaches: 1. *Sketching the output of an iterative iHVP solver applied to the gradient.* This doesn't change the iHVP computation itself, but makes storage and the search for influential examples more efficient. 1. *Applying the Arnoldi iteration [21, Schioppa et al.] to the sketched Hessian, then using the sketched eigenvalues and eigenvectors to apply their method.* This is more efficient as it operates on vectors of dimension $D$ (sketching dimension) instead of the original model dimension $N$. Moreover, as we demonstrate in 5.4, we can scale the Arnoldi iteration well beyond what was done in [11, Krishnan et al.]
and [21, Schioppa et al.]. In both cases storage is reduced from $N$ to $D$; in the second case the step of applying $k$ Arnoldi projectors can be performed very efficiently if $kD$ is sufficiently small to fit in the GPU/TPU memory, as the computation can be parallelized across projectors. ```In Section 5.1 ... to support this claim.```: We commit to add this table. ```Do you anticipate ... larger language models?```: We focused on models that don't require partitioning weights across devices. For larger models, sketching can be applied to individual partitions. We commit to include an example in Appendix B demonstrating how to lift single-device code to multi-device code in JAX using shard-map to create sharded versions of sketching algorithms. ```In the case of TDA, ... Spearman correlation?```: We use Pearson correlation because we are comparing sketched or layer-selected dot products against full dot products, where a linear relationship is expected. Spearman correlation is more appropriate when the relationship is assumed monotonic but not necessarily linear. **Other** We commit to incorporate the suggested minor fixes and appreciate the reviewer pointing them out. Regarding limitations, we are open to suggestions for expanding the discussion in Section 6 and would welcome specific areas the reviewer believes warrant further elaboration. At the moment we can think of emphasizing that QK might require higher values of the target dimension than AFFD. **Additional Clarifications Regarding LESS** We provide further clarification on the comparative analysis with LESS (TRAK is discussed above). LESS builds upon the sketching of TRAK by applying it on top of LoRA. However, as noted by reviewer kKVg, *full fine-tuning often outperforms LoRA fine-tuning in LLMs, a finding corroborated by our intrinsic dimension analysis*. Additionally, LESS focuses on data selection, which is orthogonal to the core insights of our work.
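To make the contrast with chunked dense projections concrete, here is a toy numpy sketch (illustrative only; this is a generic subsampled randomized Fourier transform, not the paper's AFFD/AFJL/QK): the vector is sign-flipped, mixed with an FFT, and subsampled, so the projection is applied in O(n log n) time without ever materializing a D x N matrix.

```python
import numpy as np

def make_srft(n, target_dim, seed=0):
    """Subsampled randomized Fourier transform: an n -> target_dim sketch
    applied without materializing a target_dim x n matrix."""
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1.0, 1.0], size=n)              # random sign preconditioner
    idx = rng.choice(n, size=target_dim, replace=False)  # coordinate subsampling
    scale = np.sqrt(n / target_dim)                      # keeps squared norms unbiased

    def sketch(v):
        # Sign flip + orthonormal FFT spreads the energy, then subsample.
        return scale * np.fft.fft(signs * v, norm="ortho")[idx]

    return sketch

n, d = 4096, 512
sketch = make_srft(n, d)
x = np.random.default_rng(1).standard_normal(n)
sx = sketch(x)
# Squared norm is approximately preserved (Johnson-Lindenstrauss-style);
# the relative error is typically a few percent at this target dimension.
rel_err = abs(np.vdot(sx, sx).real / (x @ x) - 1.0)
print(rel_err < 0.2)
```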
--- Rebuttal Comment 1.1: Title: Reply Comment: I thank the authors for their reply and acknowledge reading their response. As the authors mentioned, it would be helpful to be more explicit about the TDA in the motivation of the paper (and give a more detailed description of why this also applies to quantities derived by preconditioning the gradient), add explicit comparisons to Trak's random projection (as the authors did for the general rebuttal and the response to my review), and address other details the authors mentioned they will fix/add in the next revision of the manuscript. In general, it seems that most reviewers, including myself, had trouble following the notations used in the paper, which made the presentation of the work on the weaker side. Relatedly, while minor, I agree with reviewer 9kYE that it was difficult to follow all the acronyms in the paper (e.g. AFFD, AFJL, and QK), and it would be helpful to fix issues with \citep vs \citet. It would make the paper clearer if the authors could address these issues. Given that these issues will be resolved, as the authors promised, I have increased my score (but it was difficult for me to give a higher score since the current process does not allow the reviewers to see the final manuscript).
Summary: The authors present a framework for scalable gradient sketching and HVP sketching. They introduce three algorithms, AFFD, AFJL, and QK, and provide guarantees for sketching. The paper focuses on three applications: training data attribution, intrinsic dimension computation, and Hessian spectra analysis. These are all implemented with a focus on pre-trained language models. Strengths: * The scalable approaches to sketching are well-motivated and interesting. It seems important to figure out useful sketching routines that scale to the high-dimensional parameter spaces that we are seeing in pre-trained language models. * The theory appears to be a strength of the paper. It appears that there are two approaches taken in the paper, the direct approach that implements a gradient in the original parameter space and the implicit approach that implements a gradient in the sketched dimension. It is interesting to see the comparison made between these two different approaches. * The introduction of the FFT as a faster pre-conditioner appears novel. Weaknesses: * Presentation is a weakness of the paper. Since the paper is proposing three different algorithms along with three different applications, it is challenging for a reader to follow the contributions and comparisons being made. For example, there seem to be two separate topics being discussed: 1) New theory/algorithms for scalable sketching, and 2) Implications for pre-trained large language models. Additionally, 2) aims to cover three different insights including layer selection, intrinsic dimension, and LLM spectra. It is a challenge to cover all of the aforementioned points in enough detail in a 9 page paper. * It is not clear how Table 1 relates to the proposed algorithms. In particular, TDA (Section 5.1) just writes in bold “Our findings indicate the unreliability of layer selection (Table 1)”. Layer selection does not seem to be well defined, nor related to AFFD, AFJL, QK. * Table 3 is also unclear.
Which algorithm should one use in practice? * Minor: Figure 1 is hard to read and is too small. A single sentence in the caption telling the reader why these plots are significant (rather than just a description) would help link it back to the contributions. * Minor: Related to presentation is the use of citations and acronyms. It seems like many acronyms are not defined, e.g. AFFD, AFJL, and QK. Also, citations should not read “While [11] developed…”. It should be “While X et al. 2024 [11] developed…”. There is a command for this in LaTeX. Technical Quality: 3 Clarity: 2 Questions for Authors: * Which components of the 6 contributions in the introduction are the main contributions of the paper? * Just above equation (2), “A sketch of the HVP can be obtained as…”: Is this a contribution of the paper, or is there a reference to this available? * In practice, which of the algorithms should one use and why? * Could the authors define what the meaning of an influence score is for this paper? Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors focus on scaling their approach to GPUs and TPUs and adequately talk about the memory constraints and improvements made by each approach. No code was included with the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to read the paper, for their feedback on the presentation, and for appreciating the theoretical part of the paper. **Answers to Questions** ```Which components of the 6 contributions in the introduction are the main contributions of the paper?```: The main contributions are 1-3 (lines 45-54). The other 3 bullet points in the intro (59-68) are novel insights enabled by contributions 1-3. ```In practice, which of the algorithms should one use and why?``` The choice depends on the target dimension and hardware, with AFFD excelling in accuracy for smaller dimensions, and QK/AFJL offering speed advantages on TPUs and GPUs, respectively. Let us look at this more closely. If the target dimension $D$ is constrained (<4k), we recommend AFFD (see Sec 5.2) for its superior accuracy. For larger $D$, QK on TPU and AFJL (with the FFT preconditioner) on GPU V100 are the fastest options. Refer to Table 2 for QK on TPU performance. For AFJL, the vanilla version takes 82ms (Table 2), and the FFT preconditioner provides a 64\% speed-up on top of that (Table 3). We believe our analysis offers a valuable set of adaptable options rather than a single ``best'' algorithm. ```Could the authors define what the meaning of an influence score is for this paper?``` We define the influence score as the gradient inner product $\nabla_{\theta}L(\theta,x) \cdot \nabla_{\theta}L(\theta,z)$ (line 279). We focus on this for three reasons: 1. Practicality: In the short term, it correlates with loss changes when adjusting the weighting of specific data points [Schioppa et al., https://arxiv.org/pdf/2305.16971 ]. 1. Foundation for Advanced Methods: It serves as a building block for methods utilizing Hessian pre-conditioners or relying on gradient sketches across multiple model runs [TRAK, https://arxiv.org/pdf/2303.14186 ]. 1.
Clarity: The definition is straightforward and avoids additional hyper-parameters to tune (e.g., number of models or size of removed datasets in [TRAK]). **Clarifications related to weaknesses** We hope to address here additional concerns raised in the review. ```Minor: Figure 1 is hard to read and is too small ... There is a command for this in Latex.```: We commit to incorporate these suggestions to improve clarity and presentation. ```Since the paper is proposing three different algorithms along with three different applications... It is a challenge to cover all of the aforementioned points in enough detail in a 9 page paper.```: We acknowledge the challenge of covering this material comprehensively in a 9-page paper. We believe there is value in presenting both (1) new theory/algorithms and (2) their applications to LLMs. Our strategy has been to split theory (Sec 3) from applications (Sec 4 & 5); each subsection of Sec 5 corresponds to a different application, so the reader can directly jump to the parts they are more interested in. We also aimed to provide sufficient references for interested readers to delve deeper. Additionally, we have included supplementary material in the Appendix. We welcome suggestions for further content to enhance the Appendix. ```No code was included with the paper.```: Appendix B contains a step-by-step tutorial with Python code. --- Rebuttal Comment 1.1: Title: Thanks for the Rebuttal Comment: In light of the additional results and the comments above, I will increase my score. I am still concerned with the overall presentation, but I am hopeful the authors will be able to adjust this.
Summary: This paper introduces new methods for sketching high-dimensional gradients and HVPs. These are important building blocks for tools like training data attribution and Hessian spectrum analysis. Their methods introduce both empirical and theoretical improvements over prior methods, and are used to demonstrate new insights about properties of pre-trained LMs. Strengths: - Clearly written and organized - Addresses a practically relevant and timely technical problem (projecting high-dim gradients and HVPs) - Provides interesting new insights (e.g., that intrinsic dimension is not that low for LMs) Weaknesses: Nothing major, but: - Misses some important related work ([1], [2]) - but gives good coverage otherwise - Gradient inner product is not a good TDA estimate (many papers have found this now, e.g., [1][3]) - Estimating the inner product is fine, but be clear that's the goal then [1] TRAK: Attributing model behavior at scale https://arxiv.org/abs/2303.14186 [2] Faithful and Efficient Explanations for Neural Networks via Neural Tangent Kernel Surrogate Models https://arxiv.org/abs/2305.14585 [3] Training Data Attribution via Approximate Unrolled Differentiation https://arxiv.org/abs/2405.12186 Technical Quality: 3 Clarity: 3 Questions for Authors: - Would benefit from better exposition of the different methods in Section 3. Some type of figure would be more useful to get intuition for differences between different methods (it's a bit hard to keep track with so many different symbols). Happy to revisit if there's a newer version. - Some missing text, e.g., there seems to be no transition at the start of S4 Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: none Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their insightful feedback and positive assessment of our work. We commit to include [1 & 2] suggested in the review in the Related Work. **Answers to Questions** ```Would benefit from ... there's a newer version.```: We acknowledge the need for improved clarity in Section 3. While we cannot upload a revised paper during the rebuttal phase, we have included a proposed diagram (Figure 2) in the rebuttal PDF. We believe this visual aid will significantly enhance understanding of the various methods. We would greatly appreciate it if you could take a moment to review the diagram. ```Some missing text, ... at the start of S4```: Thank you for pointing this out. We commit to ensure that a smooth transition is added at the beginning of Section 4. **A clarification on Weaknesses** ```Gradient inner product is not a good TDA estimate```: We agree that high correlation with full gradient dot products alone does not guarantee optimal TDA performance in the long run. However, it serves as a practical metric for short-term evaluations and is a foundational component in more computationally demanding methods like TRAK. For instance, in the short time range, gradient dot products correlate with loss changes and are useful for example selection in error correction [Schioppa et al. https://arxiv.org/pdf/2305.16971 ]. Notably, TRAK itself relies on accurate gradient sketches as measured by dot products [TRAK, https://arxiv.org/pdf/2303.14186, C.2], with the authors stating: *as long as we preserve the inner products to sufficient accuracy, the resulting system has approximately the same evolution as the original one.* If layer selection is employed for computational efficiency, any adverse impact on the estimation of gradient inner products would have consequences for TDA even in the long run. We commit to revise the paper to make it more transparent that our method's goal is to estimate inner products.
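As a toy illustration of the inner-product-preservation metric discussed above (generic numpy code, not the paper's implementation, with synthetic stand-ins for gradients): when per-example gradients share low-dimensional structure, even a modest random projection preserves pairwise gradient dot products well, as measured by Pearson correlation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, m, rank = 2000, 256, 50, 10  # model dim, sketch dim, #examples, shared rank

# Stand-in per-example gradients with shared low-rank structure
# (real gradients are often far from isotropic).
G = rng.standard_normal((m, rank)) @ rng.standard_normal((rank, N))

# A dense JL projection, used here only because it is the simplest sketch;
# the whole point of fast sketches is to avoid materializing P.
P = rng.standard_normal((D, N)) / np.sqrt(D)
S = G @ P.T

full = G @ G.T    # full pairwise gradient dot products
approx = S @ S.T  # their sketched estimates

iu = np.triu_indices(m, k=1)  # off-diagonal pairs only
pearson = np.corrcoef(full[iu], approx[iu])[0, 1]
print(pearson > 0.9)  # sketched dot products track the full ones closely
```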
Summary: Gradient information is useful for various tasks such as training data attribution and intrinsic dimension analysis, but often suffers from huge compute/memory costs, limiting its practical utility. This paper focuses on several popular sketching approaches (e.g., FJL, FFD), points out lookup-based memory as the major bottleneck in enabling efficient computations on modern accelerators, and proposes an improvement using Kronecker products, with theoretical guarantees. Their scalable sketching algorithms are applied to downstream applications including training data attribution, intrinsic dimension estimation, and Hessian analysis, and provide various useful insights, some of which are contradictory to existing common beliefs. Strengths: I agree that gradient information is useful in understanding important characteristics of neural networks, and sketching is a promising approach to reduce the cost of storing/computing gradient information. Indeed, (random) low-rank gradient projection has been studied in several prior works (e.g., Arnoldi IF, TRAK), with similar downstream applications in mind. That being said, I believe one of the main contributions of this work is in improving the efficiency of existing sketching algorithms including FJL and FFD. Their solution of using the Kronecker product is, in my opinion, a smart strategy. Furthermore, their theoretical analyses are very interesting and make their arguments more rigorous. With scalable sketching algorithms, they were able to scale various analyses that were only applicable to small-scale networks to much larger networks, and provided many useful insights. For instance, many people have previously observed that full fine-tuning often achieves better performance than LoRA fine-tuning in LLMs, and the intrinsic dimension analysis in this paper can experimentally corroborate this observation.
Overall, I believe the proposed algorithms would be valuable tools for understanding various phenomena in neural networks at scale. Weaknesses: Overall, I believe this paper is a strong paper *if* I limit the scope to gradient sketching. However, I am confused by several arguments. 1. TDA (Sec 5.1): The authors claimed that layer-selection for TDA is inadvisable by pointing out the low (Pearson) correlation with the full gradient dot product between (x, z) pairs. While a high correlation with full gradient dot products could indicate accurate sketching, it doesn't necessarily imply better TDA performance. There are better options for TDA evaluation such as the linear data modeling score from TRAK, mislabel detection, data subset selection, brittleness tests, etc. 2. Sketching for HVP: While the authors cited matrix sketching for dealing with higher-order gradient sketching, I am unsure if theoretical guarantees apply in this case. For instance, if we adopt the Fisher information approximation of the Hessian, we can understand the Hessian as the gradient covariance matrix. If the vector in HVP is some sort of gradient, we can understand HVP, especially in influence functions, as a whitening operation that makes all components equally important. In this case, information loss from sketching is not negligible, and the general sketching argument (i.e., Eq. (1)) doesn't really hold? 3. Clarity: I find the notation in the paper to be a bit confusing. For instance, in Eqs. (2), (3), and (6), they used respectively P, F, \Phi to indicate the sketching process. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. I wonder how easy it is to combine the proposed sketching algorithms with (data-parallel) distributed computing. While I appreciate the general efficiency improvements in AFJL and AFFD, I believe distributed computing becomes necessary at some point for scaling these analyses to large-scale networks and datasets.
Can you enable distributed computing simply by wrapping the model or do you need special implementation tricks? 2. How are ground-truth eigenvalues obtained in Table 1? If those eigenvalues are not exact (e.g., approximated using some iterative methods), then being close to approximate eigenvalues does not necessarily imply better accuracy? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors have appropriately discussed limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
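The Kronecker-product strategy the review highlights can be illustrated with the generic identity $(A \otimes B)\,\mathrm{vec}(G) = \mathrm{vec}(B G A^{\top})$ (column-stacking vec), which applies a Kronecker-structured sketch without ever materializing it. A minimal NumPy sketch with illustrative dimensions; the paper's AFJL/AFFD constructions are more structured, but exploit the same kind of factorization:

```python
import numpy as np

rng = np.random.default_rng(0)

m, n, ka, kb = 12, 8, 5, 4
G = rng.standard_normal((m, n))   # gradient reshaped as an m-by-n matrix
A = rng.standard_normal((ka, n))  # right factor of the sketch
B = rng.standard_normal((kb, m))  # left factor of the sketch

# (A ⊗ B) vec(G) = vec(B G Aᵀ): the Kronecker-structured sketch is applied
# with O(mn(ka + kb)) work, never forming the (ka*kb)-by-(m*n) matrix A ⊗ B.
fast = (B @ G @ A.T).flatten(order="F")
slow = np.kron(A, B) @ G.flatten(order="F")  # explicit map, for comparison only
```

For realistic gradient shapes (m and n in the thousands), only the `fast` path is affordable; the explicit Kronecker matrix would not fit in memory.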
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments, taking the time to read the paper, and finding the insights provided by our sketching algorithms valuable. **Answers to Specific Questions** 1. We focused on models that don't require partitioning weights across devices. For larger models, sketching can be applied to individual partitions. We commit to including an example of lifting single-device code to multi-device code in JAX in Appendix B. Specifically, if the JAX model is sharded using jit with sharding specifications, shard-map can lift sketching algorithms to a sharded version. 2. We follow [11, Krishnan et al.], as exact eigenvalues are infeasible due to model size. This estimation is accurate, with error [11, eq(7)] decreasing exponentially in the number of iterations. **Clarifications related to weaknesses** We hope these replies help to clarify points raised in the weaknesses. 1. ``` TDA (Sec 5.1): The authors ... brittleness tests, etc.```: We agree that high correlation with full gradient dot products doesn't measure TDA performance in the long time range; however it is a practical metric in the short time range and a building block of more computationally intensive methods like TRAK. For example, in the short time range, gradient dot products correlate with loss changes and are relevant to select examples for error correction [Schioppa et al. https://arxiv.org/pdf/2305.16971 ]. Evaluating with LDS from TRAK would introduce more hyperparameters and computation (>50 models on different subsets of the data); TRAK itself relies on good gradient sketches measured by dot products [TRAK, https://arxiv.org/pdf/2303.14186, C.2], and the TRAK authors make the following point: *as long as we preserve the inner products to sufficient accuracy, the resulting system has approximately the same evolution as the original one [they refer to tracing the training dynamics, our note]. 
This justifies replacing the gradient features with their random projections* 2. ```Sketching for HVP: ... (ie Eq(1)) doesn't really hold?```: Theoretical guarantees for matrix sketching apply here. The Hessian has limited bulk [11, Krishnan et al.], exploited by Arnoldi-based influence functions [21, Schioppa et al. ]. Sketching provides guarantees for approximating matrix spectrums [22, Swartworth et al.] and solving linear systems [20, Sarlos, Thm 12]. 3. ```Clarity: I find notations... to indicate the sketching process.```: Would replacing P, F, and $\Phi$ with $\Phi$ throughout the paper address this concern? --- Rebuttal Comment 1.1: Title: Response Comment: > We focused on models that don't require partitioning weights across devices. For larger models, sketching can be applied to individual partitions. We commit to including an example of lifting single-device code to multi-device code in JAX in Appendix B. Thanks. This would be helpful. > We follow [11, Krishnan et al.], as exact eigenvalues are infeasible due to model size. This estimation is accurate, with error [11, eq(7)] decreasing exponentially in the number of iterations. Thanks for the answer. > Theoretical guarantees for matrix sketching apply here. The Hessian has limited bulk [11, Krishnan et al.], exploited by Arnoldi-based influence functions [21, Schioppa et al. ]. Sketching provides guarantees for approximating matrix spectrums [22, Swartworth et al.] and solving linear systems [20, Sarlos, Thm 12]. I agree with the authors that the theoretical guarantees for sketching and apply, **if** we separately consider the Hessian and the gradient. However, when sketching is applied to both the Hessian and train/test gradients simultaneously in influence computations, there would be an inherent information loss due to the whitening effect of the inverse Hessian---all components in gradients become equally important from the whitening effect of the inverse Hessian. 
> TRAK authors make the following point: as long as we preserve the inner products to sufficient accuracy, the resulting system has approximately the same evolution as the original one [they refer to tracing the training dynamics, our note]. This justifies replacing the gradient features with their random projections This is true if we only consider the short horizon. In the long horizon, however, small components that contribute minimally to the naive dot product may still have a non-negligible influence on the overall training dynamics. > Would replacing P, F, and $\Phi$ with $\Phi$ throughout the paper address this concern? I believe this would improve the readability. Considering all these, I am willing to increase my score to 6. Thanks again for your comment.
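The iterative eigenvalue estimation discussed in this exchange (Lanczos-type methods from [11, Krishnan et al.], whose error decreases exponentially in the number of iterations) can be illustrated minimally with power iteration as a stand-in, on a synthetic symmetric matrix with a known spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric "Hessian" with a known spectrum. Power iteration (a simple
# stand-in for the Lanczos-type estimator) converges geometrically, with error
# shrinking like (lambda_2 / lambda_1) ** t, so iterative "ground-truth"
# eigenvalues can be accurate to near machine precision.
eigs = np.array([5.0, 4.0, 1.0, 0.5])
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
H = Q @ np.diag(eigs) @ Q.T

v = rng.standard_normal(4)
for _ in range(200):
    v = H @ v
    v /= np.linalg.norm(v)
top_estimate = v @ H @ v  # Rayleigh quotient for the leading eigenvalue
```

After 200 iterations the residual factor is roughly $(4/5)^{200}$, far below floating-point precision, which is why such iterative estimates can serve as ground truth in Table 1.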
Rebuttal 1: Rebuttal: We extend our gratitude to the reviewers for their insightful comments and suggestions. We particularly appreciate the encouraging feedback, acknowledging the practical relevance and timeliness of our work, as well as its potential interest to the NeurIPS community. In this rebuttal, we would like to highlight two key points: the use of gradient dot products and a comparison to the dense projection implementation in TRAK [ https://arxiv.org/pdf/2303.14186 ]. **Gradient Dot Products**: We concur that high correlation with full gradient dot products may not be the definitive measure of long-term TDA performance; however it is a practical metric in the short time range and a building block of more computationally intensive methods like TRAK. For example, in the short time range, gradient dot products correlate with loss changes and are relevant to select examples for error correction [Schioppa et al. https://arxiv.org/pdf/2305.16971 ]. Evaluating with LDS from TRAK would introduce more hyperparameters and computation (>50 models on different subsets of the data); TRAK itself relies on accurate gradient sketches, as measured by dot products, and the authors emphasize that preserving inner products to sufficient accuracy results in a gradient-descent system that approximately preserves the same evolution as the one corresponding to model re-training [TRAK, https://arxiv.org/pdf/2303.14186, C.2]. **Random projections like in TRAK**: While TRAK avoids materializing the full random projection, it still requires the temporary materialization of chunks of the dense projection. This leads to two significant drawbacks: (1) runtime scales linearly with the target dimension (memory traded off with compute), and (2) specialized kernels are necessary for efficient implementation, with unclear applicability to TPUs. Our attempts to implement a competitive version using pure JAX were unsuccessful due to the lack of control over memory allocation and placement. 
We have included a plot (Figure 1) and a table (Table 1) in the rebuttal PDF demonstrating the linear runtime growth of TRAK compared to the constant runtimes of AFFD and QK. The implementation challenges of TRAK are further highlighted by the fact that our Triton implementation does not outperform the original CUDA kernel released by the TRAK authors, underscoring the difficulty of efficiently implementing random projections in chunks. Pdf: /pdf/bb48a6e8d498dc74763f810f77c337bcacde2dcf.pdf
NeurIPS_2024_submissions_huggingface
2024
Fast Channel Simulation via Error-Correcting Codes
Accept (poster)
Summary: Inspired by the duality between source and channel coding, the authors use polar codes to develop a channel simulation algorithm for binary output channels. Notably, the authors' scheme scales as $O(n \log n)$ where $n$ is the channel dimension, providing an example of a channel simulation algorithm whose runtime scales sub-exponentially in $n$ that doesn't rely on dithered quantization. The authors conduct some toy experiments to showcase the behaviour of their algorithm. Strengths: Overall, I am very excited about this paper. The general idea of exploiting the duality between source and channel coding to develop fast channel simulation algorithms could significantly impact the field and thus have far-reaching implications for neural compression. The authors' concrete scheme is relatively simple, leaving little doubt about its correctness and efficiency. However, they do back things up with appropriate theoretical results. I have checked the proofs of the statements and can confirm that they are correct. Finally, the authors provide further discussion in Appendices A and B on possible future directions / concrete examples of channel codes that could be used to develop new channel simulation algorithms. Weaknesses: I should note that while I have expertise with channel simulation algorithms and have some basic familiarity with channel codes, I am not an expert in the latter. Given the above, I found the paper's biggest high-level weakness to be its non-replicability, though most of it should be easily fixable. The biggest issue is that the description of the experimental setup for the comparison study between PolarSim, GPRS and PFR is missing, which means that Figure 3 and the last paragraph in the section are not interpretable. While the first set of experiments in Section 3.2 is better explained, the authors should also include the analytic form of the mutual information (the green curves in Figure 2) to make the results more interpretable. 
Similarly, the contents of Appendix A are quite high-level, with most of the experimental details missing; hence, again, Figure 4 is not interpretable. These problems should be easily fixable by providing the essential details of the experiments in the main text and an additional section in the appendix that describes the precise setup; I am happy to increase my score once the authors address this. I saw that the authors provided their code in the supplementary material; hence, the paper's contents are technically reproducible. Still, ideally, the paper should have sufficient detail so that someone can reproduce the experiments without the authors' code. The second weakness of the paper is that it is closer to a position paper both in terms of content and impact. As the authors explain, "The aim of the paper, however, is not to show that polar codes are useful for simulation per se. Rather, we seek to make the larger point that ideas from the field of error-correcting codes are useful for the simulation problem." Now, I believe the paper should be accepted just based on the strength of the idea (given that the authors address my first concern). However, the paper could have been significantly stronger if the authors considered some practical applications of their scheme and carried out more extensive experiments to demonstrate its usefulness. ## Typos & Miscellaneous: - line 89: should be $h_B^{-1}(h_B(p)) = p$ - line 99: "$m + n$y" instead of $m + n$. - line 120: $F^{\otimes n}$ undefined - I believe the authors mean Kronecker product - line 125: "thatl" - line 231: "mutual information lower " - the word "bound" is missing - line 479: incorrect reference to figure 7, should be figure 8; the contents of figure 7 are never referenced - lines 502-503: "has the correct binomial distribution." 
- I believe it should be Bernoulli - Eqs (21) and (22): Indicator has $Z_i$ instead of $Z_1$ and $Z_2$ - Figs 1a & b are difficult to read; please increase the font size of the axis, tick and legend labels - I think the left and right panels of Figure 2 could be merged - Please add an explicit reference to the proof of Theorem 1 in Appendix C. - Eq (28): I believe the expression should be $I[U_2 ; Y_1, Y_2 \mid U_1]$, unless $U_1$ is independent of $U_2$. Given Eq (17), it makes sense, but the authors should mention this explicitly. - Regarding the proof of Theorem 1 in Appendix C: - Please restate the theorem in the appendix. - Why does eq 53 mix the two probability notations? - I believe Eq (54) is an equality rather than an inequality. Technical Quality: 3 Clarity: 2 Questions for Authors: Not so much a question but a suggestion: In Figure 3, the authors note that "Data for GRPS is only plotted for parameters where the algorithm consistently terminates in a reasonable amount of time." If I understand correctly, the authors are simulating a vector of 8 iid Bernoulli channels. Since, in this case, the sample space is finite and has few elements (only 256), the implementations of PFR and GPRS can be simplified. To see this, note that if we draw the same sample twice, the one with the later arrival time will never be accepted. This motivates a simple strategy: simulate the arrival time for each element of the sample space and perform a brute-force search to find the element that fulfils the acceptance criterion. In the case of PFR, this is equivalent to just performing the Gumbel-max trick. It would be good if the authors could improve their implementation using the above solution (or otherwise) to provide a more complete comparison of the methods in Figure 3. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: While the authors clearly state that polarization occurs for large $n$, providing a plot of this phenomenon would be valuable. 
For a fixed amount of mutual information (e.g., uniformly distributed across the dimensions), the authors could include a plot of the rate of PolarSim versus the problem dimensionality. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
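The reviewer's suggestion that, on a small finite sample space, PFR reduces to the Gumbel-max trick can be sketched in a few lines of NumPy. A three-outcome target stands in for the 256-element space of the experiment; the point is only that $\arg\max_i(\log Q_i + G_i)$ with i.i.d. Gumbel noise is exactly $Q$-distributed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Exact sampling from a finite target Q via the Gumbel-max trick:
# argmax_i (log Q_i + G_i) with i.i.d. standard-Gumbel G_i is Q-distributed.
# (Equivalently: sort exponential arrival times as in PFR and keep the first
# accepted index, brute-forcing the whole sample space.)
Q = np.array([0.6, 0.3, 0.1])
n_trials = 50_000
gumbels = rng.gumbel(size=(n_trials, Q.size))
samples = np.argmax(np.log(Q) + gumbels, axis=1)
freqs = np.bincount(samples, minlength=Q.size) / n_trials
```

With the whole (small) sample space enumerated, no rejection loop is needed, which is exactly why the simplified implementation always terminates.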
Rebuttal 1: Rebuttal: Thank you for your encouraging feedback, and for suggesting the improvement to our comparison plot. > ... description of the experimental setup ... We will describe the experimental setup for Figure 3 here, which we will also include in the camera-ready version of the paper. This also takes into account the reviewer's suggestion for speeding up the implementation of GPRS and PFR. The updated plot is included in the attached PDF (please refer to Figure 2). 1. $\mathtt{PolarSim}$: We use the same setup as in the BSC plots for Figure 2 (i.e. the top plots). We plot the average rate over 200 simulation runs. 2. **GPRS**: We implement [7, Alg. 3]. The proposal distribution $P$ is chosen to be i.i.d. Bernoulli($1/2$) with $n = 8$. Given the input $X^n = x^n$, the target distribution $Q(y^n)$ is chosen to be $\prod_{i = 1}^n \mathsf{BSC}_p(y_i|x_i)$, where $p$ ranges over $(0,1/2)$. The stretch function $\sigma$ was derived using the definitions provided in [7]: $w_P(h) = F\left(\frac{\log h - n( \log(1-p) + 1 )}{\log p - \log(1 - p)}, n, \frac{1}{2}\right)$, $w_Q(h) = F\left(\frac{\log h - n( \log(1-p) + 1 )}{\log p - \log(1 - p)}, n, p\right)$, and $\sigma(h) = \int_0^h \frac{1}{w_Q(\eta) - \eta w_P(\eta)}d\eta$ where $F(\cdot,n,p)$ is the CDF of a $\text{Binomial}(n,p)$ random variable. The algorithm outputs a positive integer index $N$, which is entropy coded using the Zeta distribution in [(151), 7]. The number of bits is divided by $n = 8$ to obtain the rate. For each $p$ on a grid in $(0, \frac{1}{2})$, we run the simulation $200$ times and plot the average rate obtained against the channel mutual information $1 - H(p)$. We observe that the selection rule used by the algorithm can overlook repetitions --- if the first occurrence of an output sequence in the randomly generated codebook is rejected, all its subsequent occurrences are also rejected. 
This simplification helps speed up the execution significantly and also reduces the rate as repetitions need not be indexed. 3. **PFR**: We use the algorithm described in [Sections II-III, 1] with $P_Y$ chosen to be i.i.d. Bernoulli($1/2$) with $n = 8$ and $P_{Y|X}$ chosen to be i.i.d. BSCs with crossover probability $p$. We use the same idea as outlined before in the GPRS implementation to eliminate repetitions from the codebook, speeding up the algorithm and improving the rate. The selected index $N$ is compressed using the Zipf distribution given in [Sec. III, 1]. We simulate the channel $200$ times for uniformly randomly chosen input sequences and plot the average rate obtained. > ... contents of Appendix A ... Proposed algorithm: 1. Let $X^n \sim \mathcal{N}(0,\sigma^2)$ be the input at the encoder, and $f^*(\cdot, R)$ be a rate-$R$ trellis-coded quantizer that obtains an average distortion value of $D$. We use a $256$-state trellis with each state having exactly two branches leaving it, along with a codebook of size $2^{R+1}$ which is partitioned into 4 subsets, each of size $2^{R-1}$. Each branch of the trellis is then associated with one of the $4$ subsets (see [Figure 3.15, 14] for the mapping used). The trellis is initialized with randomly generated codewords from a standard normal and trained using the Lloyd-Max algorithm. Please refer to [14] for a detailed description of the trellis construction and training. 2. Using the common randomness, select a uniformly random rotation matrix $\Pi$ at both the encoder and decoder. 3. At the encoder, compute the randomly rotated source $\tilde{X}^n = \Pi X^n$ and the reconstruction $\hat{\tilde{X}}^n = f^*(\tilde{X}^n, R)$. Transmit the reconstruction to the decoder by specifying its trellis path. 4. Compute the scaling factor $a = \frac{\sigma^2}{\sigma^2 - D}$ and the reconstruction $\tilde{Y}^n = a\hat{\tilde{X}}^n$. 5. Finally, compute $Y^n = \Pi^{-1}\tilde{Y}^n$ as the output of the scheme. 
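Steps 2–5 of the scheme above can be sketched in NumPy. A uniform scalar quantizer (with an illustrative step size) stands in for the trained trellis quantizer $f^*(\cdot, R)$ of step 1, which is the only component not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

n, sigma2 = 1000, 1.0
X = rng.normal(scale=np.sqrt(sigma2), size=n)   # i.i.d. N(0, sigma^2) source

# Step 2: shared uniformly random rotation (orthogonal matrix via QR),
# derived from the common randomness at both encoder and decoder.
Pi, _ = np.linalg.qr(rng.standard_normal((n, n)))

# Step 3: rotate and quantize (uniform scalar quantizer as a stand-in for
# the trellis-coded quantizer; `step` is an illustrative resolution).
step = 0.25
X_rot = Pi @ X
X_rot_hat = step * np.round(X_rot / step)
D = np.mean((X_rot - X_rot_hat) ** 2)           # empirical distortion

# Steps 4-5: scale by a = sigma^2 / (sigma^2 - D), then rotate back
# (Pi^{-1} = Pi^T since Pi is orthogonal).
a = sigma2 / (sigma2 - D)
Y = Pi.T @ (a * X_rot_hat)
```

The random rotation spreads the quantization error isotropically, which is what makes the residual $X^n - Y^n$ behave approximately like AWGN in Figures 4 and 5.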
In Figure 4, we generate $100$ independent realizations of $X^n$ with $n=1000$ and source power $\sigma^2 = 1$. The scheme outlined above is then used to generate the corresponding $Y^n$ realizations. The quantiles of the sample noise $D = \| X^n - Y^n \|^2$ are plotted against the theoretical quantiles obtained by assuming $p_{Y|X}$ to be AWGN with noise power $\frac{1}{n}\sum_{i=1}^{n}( X_i - Y_i )^2$. In Figure 5, we plot the rate of our approximate scheme against the mutual information of the simulated channel. Bootstrapped $95\%$ percentile confidence intervals for the mutual information are plotted to account for the error in estimating the noise power. > ... analytic form of the mutual information ... 1. $p_{Y|X}$ is $\mathsf{BSC}(p)$, $p_X$ is $\text{Unif}(\{0,1\})$: $I(X;Y) = 1 - H(p)$ 2. $p_{Y|X}$ is $\mathsf{BEC}(\epsilon)$, $p_X$ is $\text{Unif}(\{0,1\})$: $I(X;Y) = 1 - \epsilon$ 3. $p_{Y|X}$ is $\mathsf{AWGN}(\sigma^2)$, $p_X$ is $\text{Unif}(\{-1,1\})$: There is no known closed-form expression for the mutual information in this case. We note that $p_Y(y) = \frac{1}{2}\mathcal{N}(y|-1, \sigma^2) + \frac{1}{2}\mathcal{N}(y|1, \sigma^2)$, $p_{Y|X}(y|x = 1) = \mathcal{N}(y|1, \sigma^2)$, and $p_{Y|X}(y|x = -1) = \mathcal{N}(y|-1, \sigma^2)$. We can then calculate $h(Y)$ using numerical integration and observe that $h(Y|X) = \frac{1}{2}h(Y|X=1) + \frac{1}{2}h(Y|X=-1) = \frac{1}{2\ln 2} \left[ 1 + \ln( 2\pi\sigma^2 ) \right]$ bits. From these, we can compute $I(X;Y) = h(Y) - h(Y|X)$. >...polarization occurs for large $n$, .. To clarify, the claim in the paper was that, for a fixed channel, polarization increases with the number of i.i.d. copies, $n$ (Figure 1 of uploaded PDF). We did not intend to suggest that polarization occurs as $n$ increases if the channel varies with $n$, say by holding $nI(X;Y)$ fixed. In our experience, noisier channels polarize more slowly. 
If one increases $n$ and the channel noise simultaneously, it is unclear which of these effects will dominate. Thank you for highlighting various typos and errors. We will correct them in the final draft. --- Rebuttal Comment 1.1: Comment: I thank the authors for their rebuttal. They have addressed my concerns, and I have updated my score accordingly.
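The numerical-integration recipe for the BPSK-over-AWGN mutual information given in the rebuttal above can be sketched directly. The grid half-width and resolution below are illustrative choices, and entropies are in bits:

```python
import numpy as np

def bpsk_awgn_mi(sigma, half_width=10.0, points=200_001):
    """I(X;Y) in bits for X uniform on {-1, +1} over AWGN with std sigma."""
    y = np.linspace(-1.0 - half_width * sigma, 1.0 + half_width * sigma, points)
    dy = y[1] - y[0]

    def gauss(mu):
        return np.exp(-(y - mu) ** 2 / (2.0 * sigma**2)) / np.sqrt(2.0 * np.pi * sigma**2)

    # h(Y): numerically integrate the two-component Gaussian mixture density.
    p_y = 0.5 * gauss(-1.0) + 0.5 * gauss(1.0)
    h_y = -np.sum(p_y * np.log2(np.maximum(p_y, 1e-300))) * dy
    # h(Y|X) in closed form: (1/2) log2(2*pi*e*sigma^2) bits.
    h_y_given_x = 0.5 * np.log2(2.0 * np.pi * np.e * sigma**2)
    return h_y - h_y_given_x
```

As sanity checks, the result approaches 1 bit for small noise (well-separated mixture components) and 0 for large noise.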
Summary: The manuscript considers the design of algorithms for channel simulation. This topic has been extensively explored in the information theory literature, under various names including 'Reverse Shannon Theorems', 'Channel Synthesis', and 'Channel Simulation'. The primary concern is that the problem formulation, proposed solution, and underlying applications do not align with the core focus areas of this conference, making it more appropriate for information theory and theoretical computer science venues. Additionally, the writing and presentation require significant improvements. The problem formulation is not rigorously provided, and many of the statements and proofs are incomplete and inaccurate. The proposed approach lacks significant novelty. Furthermore, the scope is restricted to the synthesis of symmetric binary-input channels. Strengths: The channel simulation problem is of significant interest in the information theory community both in the classical and quantum settings. Weaknesses: - The problem formulation, proposed solution, and underlying applications do not align with the core focus areas of this conference, making it more appropriate for information theory and theoretical computer science venues. - The IID assumption on $X^n$ and $Y^n$, and the fact that only simulation of symmetric channels with binary-input is considered in the main body of the manuscript significantly limits the scope and applicability of the proposed algorithms. - It is not clear why the main focus of the paper is on polar codes. As mentioned in the appendices, the ideas presented in the paper (which have roots in source coding literature) are applicable to linear codes in general. The authors provide examples of Trellis codes and general linear codes and lattices in the appendices, whereas a significant portion of the paper is focused on discussing the properties of polar codes. 
- Common randomness is used to generate uniform variables $Z_i$ which are then used in the simulation protocol. It is not explained how the uniformly distributed $Z_i$ are produced. For instance, if the randomness is communicated through a discrete noiseless channel as in [1], or is available as a binary string [2], then the users cannot produce completely uniform $Z_i$ with limited computational complexity, which makes exact channel simulation using the proposed protocol impossible. This is not an issue in evaluating the fundamental limits of channel simulation as in [1,2] since computational complexity is not the focus of those works, and $Z_i$ can be made as close to uniform as necessary. However, in this work, which considers efficiency and algorithm design, the computational complexity of producing $Z_i$ must be discussed, and the model for common randomness available to the users be described in more detail. That is, given a fixed n, how close does $Z_i$ need to be to a uniform variable, and what is the complexity of producing such $Z_i$ from a common random string? - Section III is not well-organized. The ideas are presented loosely and without formal introduction of the underlying concepts. The simulation of the reverse channel P_X|Y is described in detail, and it is stated that the scheme is used to simulate the actual channel $P_{Y|X}$ without further explanation. It is not clear why the description of the simulation protocol for $P_{Y|X}$ is not provided directly instead. Minor Comments: P.3 Line 99 -> blocklength m+n Equations (21) and (22) -> $Z_i$ should be $Z_1$ and $Z_2$, respectively. [1] Cuff, Paul. "Communication requirements for generating correlated random variables." 2008 IEEE International Symposium on Information Theory. IEEE, 2008. [2] Li, Cheuk Ting, and Abbas El Gamal. "Strong functional representation lemma and applications to coding theorems." IEEE Transactions on Information Theory 64.11 (2018): 6967-6978. 
Technical Quality: 2 Clarity: 1 Questions for Authors: - In Section III, the simulation of the reverse channel $P_{X|Y}$ is described in detail, and it is stated that the scheme is used to simulate the actual channel $P_{Y|X}$ without further explanation. Please explain the simulation procedure for the direct channel. - Please explain why the paper focuses on polar codes as opposed to other linear codes, given the comparatively high decoding complexity of polar codes. The alternative approach, which is applicable to other linear codes is only discussed in the appendix. How do the approaches compare and why is preference given to the one utilizing polar codes in this work? Confidence: 4 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: As mentioned in the manuscript, a major limitation of the approach utilizing polar codes is the restriction to binary-input symmetric channels. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. We address your comments and questions below: > ... This topic has been extensively explored in the information theory literature ... We disagree with this description. The information theory literature on this problem has predominantly focused on fundamental limits (i.e. theoretically achievable rates) without regard to computational complexity. Practically implementable schemes have mostly emerged from the ML literature. See, e.g., [2-8]. > ... the problem formulation, proposed solution, and underlying applications do not align with the core focus areas of this conference ... Again, we disagree: * The *problem formulation* is drawn directly from the above papers * The *proposed solution* is more of interest to the ML community because it proposes a fast, implementable scheme * The *applications* mentioned in the second paragraph of the introduction are all drawn from machine learning, with citations to papers in ML venues. We are not aware of any work on channel simulation in the theoretical computer science community outside of [9], which is a decade old and focuses on fundamental limits, not practical schemes. > The problem formulation is not rigorously provided We feel the problem formulation in Section 2.1 is comparably rigorous to various papers in the ML literature on this exact problem, such as [3,4,7]. It also matches that given in a more explicit form in many papers, e.g., [Section III, 10]; we can include such a formulation in the camera-ready version. > ... many of the statements and proofs are incomplete and inaccurate. There is only one short proof in the paper (Theorem 1 in Appendix C), which, in our estimation, is entirely rigorous and correct. We would appreciate it if the reviewer could substantiate this comment by noting specific assertions that are incomplete or inaccurate. > The proposed approach lacks significant novelty. 
We view the use of error-correcting codes to achieve exact channel simulation as new. Prior attempts have taken the form of hand-crafted solutions for small $n$ or variations on information-theoretic schemes --- completely different from PolarSim. We would appreciate it if the reviewer could substantiate this comment by indicating prior works with significant overlap. > ... restricted to the synthesis of symmetric binary-input channels The channel must be binary output, not binary input. Otherwise, we agree that this is a limitation of the paper. But, we note that 1. No existing scheme can achieve subexponential complexity for any nontrivial class of channels. 2. Other coding schemes, including variations of polar codes, can handle nonbinary channels. In particular, see Appendix A for a scheme that simulates a continuous channel. 3. For compression applications, binary-output channels are not unreasonable. Indeed, various papers have considered VAE-type architectures with discrete latents, including [11-13]. In channel coding, no one code works optimally for all channels; different channels require different designs. It is natural to expect this to hold for channel simulation as well. Also, some existing schemes can only simulate a restricted class of channels [6]. > The IID assumption ... As noted in the introduction, several applications of interest require the simulation of an i.i.d. channel with sizable $n$. Arguably one of the limitations of prior schemes is that they seek to simulate i.i.d. channels but fail to capitalize on the structure that this assumption entails. Far from being a weakness, we view one of the contributions of the paper as pointing out that an i.i.d. channel assumption is both reasonable and valuable. > Please explain why the paper focuses on polar codes ... The focus on polar codes is made clear in the introduction of the paper: "First, they [polar codes] have excellent channel coding performance, both theoretically (Mondelli et al. 
[2016]) and in practice (Egilmez et al. [2019]). Second, their complexity scales as $n \log n$. Third, they require no manual tuning. Fourth, they are simple to describe, requiring minimal background in coding theory." PolarSim relies on certain unique properties of polar codes; it does not extend to arbitrary linear codes. The schemes presented in the appendix have limitations: The dither-based approach in Appendix B can only simulate the BSC (because the input and output alphabets must be the same), and the trellis-based approximate AWGN simulator has no theoretical guarantees. Also, note that the decoding complexity of polar codes is $n \log n$. It is low, not high. > Common randomness ... The availability of infinite common randomness is the prevailing assumption in the ML literature on this problem. Determining the degree of "exactness" needed and the adequacy of pseudo-randomness are important and difficult problems, but they transcend this work and merit a separate paper. > Section III ... The beginning of Section 3 is focused on providing intuition for the main algorithm and hence written in an expository style while still preserving mathematical correctness. For a formal description, one should rely on Algorithms 1 and 2, Theorem 1, and the experimental results. > The simulation of the reverse channel ... The inherent duality between channel coding and channel simulation means that the way to use channel codes for simulation is to swap the roles of the encoder and the decoder, i.e., the channel coding decoder becomes the channel simulation encoder and vice versa. Thus, using the channel code for a channel $p_{Y|X}$, we can simulate the reverse channel $p_{X|Y}$. Given its fundamental nature, we chose not to hide this swap in our exposition. However, given that it has caused some confusion, in the final version we will develop the PolarSim scheme without highlighting this. Thank you for bringing this to our attention. 
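The $n \log n$ complexity cited above stems from the butterfly structure of the polar transform $F^{\otimes m}$ with kernel $F = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$. A minimal sketch over GF(2) follows; bit-reversal ordering conventions vary between papers and are omitted here:

```python
import numpy as np

def polar_transform(u):
    """Apply u -> u F^{⊗m} over GF(2) for a length-n = 2^m vector u.

    In-place butterfly: stage `step` XORs the first half of each length-2*step
    block with its second half; log2(n) stages of n/2 XORs each => O(n log n).
    """
    x = np.asarray(u, dtype=np.uint8).copy() % 2
    n = x.size
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            x[i:i + step] ^= x[i + step:i + 2 * step]
        step *= 2
    return x

F = np.array([[1, 0], [1, 1]], dtype=np.uint8)  # the 2x2 polarization kernel
```

The butterfly agrees with explicit multiplication by the Kronecker power $F^{\otimes m}$ (mod 2), while avoiding the $O(n^2)$ cost of forming that matrix.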
We also thank you for pointing out typos and errors; we will correct them in the final draft.

---

Rebuttal Comment 1.1:

Comment: I thank the authors for their comprehensive response and their effort in addressing the comments from the previous round. Unfortunately, the response does not alleviate the concerns raised in the previous round. In particular, the following are my main remaining concerns:

- The authors cite references [2-8] as examples of related prior works published in similar venues. While these works indeed focus on quantization, compression, and sampling—topics directly relevant to the NeurIPS audience—and use statistical techniques such as concentration of measure, which are of great interest in this literature, the relevance of this paper to machine learning applications remains unclear. Although the authors argue that the channel simulation problem explored here can be viewed as a generalization of quantization, the applications of this generalization for ML scenarios are unclear. The paper's core methodologies, including polar coding and trellis codes, are more familiar to the information theory and communication theory communities. Consequently, experts from these fields would be better positioned to evaluate the work's quality and novelty. The results would also be appreciated more if published in venues which focus on related research problems.
- Many of the statements, including the problem formulation, are not presented precisely. While the authors assert that Section 2.1 presents a rigorous problem formulation, it actually provides only a high-level introduction to the problem and existing literature. Definitions of concepts such as code, rate, and other relevant parameters should be given rigorously (see [10], Section III as an example). There are many other imprecise statements. For example, Equation (31) uses terms like "$\approx$" and "for most $i$" without clear definitions or quantifiable bounds.
In a theoretical paper, such ambiguities significantly impact the work's rigor and overall quality, and make it difficult to verify the assertions. There are also typographical mistakes and undefined variables, which make the paper less readable; for instance, in Section 2.1, 'm and n can be combined to obtain a scheme for block length m + ny', the y should be removed. In Algorithm 2, `N' is not defined, etc.
- The novelty of the work is not clear. The use of randomly generated codebooks for channel simulation is not novel (although using polar codes has not been specifically considered). For instance, please refer to the following paragraph in the literature review provided in [10]: `[prior] schemes follow the same general architecture—the common randomness is used to generate a large i.i.d. codebook containing different reconstruction strings, and the encoder stochastically selects a codeword and indexes it to the decoder.' Of course, the focus of those prior works was on deriving the fundamental performance limits, rather than constructive algorithms, thus they have focused on large i.i.d. codebooks. The use of well-studied codes such as polar codes, in place of the large i.i.d. codebooks considered in those works, is an interesting direction, with limited novelty.
- My concerns regarding the organization, especially the presentation of Section III, have not been fully addressed. The authors mention that 'Section 3 is focused on providing intuition for the main algorithm and hence written in an expository style' and that modifications will be made in the final version of the work to improve some aspects of the presentation. However, given the significant changes required, I believe that the changes need to undergo a further round of review once they are made, which is not possible given the current review process.

---

Reply to Comment 1.1.1:

Title: Thank you for your response. We respond to the concerns raised in your comment below.

Comment:

> ... related prior works ...

Our point in the rebuttal was not that works [2-8] are examples of tangentially related papers. Our point was that those papers (especially [3-8]) consider exactly the problem considered in this work: practically implementable schemes for channel simulation. In our view, this problem originated in the ML community, and most of the subsequent work on it has appeared in ML venues.

> ... the applications ...

The second paragraph of the introduction provides applications in machine learning, including model compression [Havasi et al., 2018], federated learning [Shah et al., 2022], image compression with realism constraints [Theis et al., 2022], and VAE-based compression [Balle et al., 2020]. All of these papers apply channel simulation to an ML task and would benefit from improved channel simulation methods. We could have provided many more papers in this list, but we felt that the channel simulation problem is now so firmly established in the ML community (see [3-8] above and the references therein) that this was unnecessary.

> ... methodologies ...

The paper's methodologies would indeed be more familiar to someone in information theory or communications. But the problem it is solving and the applications it impacts are more familiar to the ML community. We therefore believe that an ML venue is more appropriate for this paper. Indeed, many ML papers apply methodologies from statistics, information theory, or other disciplines. We view this as one of the strengths of ML as a field. One specific argument in favor of choosing an ML venue is that the prior state-of-the-art work [7] was published at last year's NeurIPS. The earlier state-of-the-art methods were also published in ML venues. It is not clear that reviewers of information theory or communications conferences would be aware of [7]. In contrast, some of the present reviewers are clearly familiar with that work.
We also claim no contributions to the understanding of polar codes, which might be expected of an information theory paper. The coding theory background required to understand and evaluate our work is minimal, and we provide a self-contained description in our paper.

> Many of the statements, ...

We believe the formulation in Section 2.1 is perfectly rigorous. It is simply less explicit than, e.g., [10, Section III]. We provided the formulation in condensed form because we noticed that the problem had appeared in so many different ML papers over the years that some recent ML papers had described the problem in a similarly condensed form (as in, e.g., [7]). In any event, we believe there is no dispute about what the problem formulation is. The question is simply whether to state the problem with the level of detail of [7] (which appeared in NeurIPS last year) or [10] (which appeared in an information theory venue). Given the venue, we chose the former, but it is trivial to replace this with the latter.

> ... `N' is not defined, etc.

The beginning of Section 3, as noted earlier, is meant to be expository and provide intuition. We have presented a formal statement and proof of correctness and optimality later in Section 3 and in Appendix C, which quantify all of the approximations mentioned. To verify the correctness of the assertions, one should rely on the theorem and the proof, not the informal discussion. We felt that some readers would find the paper more accessible if an intuitive discussion was provided in addition to the formal proof, especially since the typical NeurIPS reader might be unfamiliar with polar codes. Readers who are uncomfortable with this informality can proceed directly to the theorem and proof. 'N' is supposed to be 'n'; this is a typographical error. We thank the reviewer for pointing out this and the two typographical errors mentioned in the original review.

> The novelty ...

We agree that most past works have used large i.i.d. codebooks.
This is why they have exponential complexity. We believe that our polar-code-based scheme entails significant novelty for the following reasons:

* Existing schemes have a complexity that scales exponentially with $n$. Ours has $n \log n$ complexity.
* Our scheme is fundamentally different from past works. It has no notion of acceptance or rejection of samples. There is no codebook per se. Unlike past works, it does not aim for good performance for small $n$; instead, it aims for scalability in $n$.
* Our scheme significantly outperforms existing schemes in terms of rate.

> ... the organization ...

We said in the rebuttal that the beginning of Section 3 is written in an expository style. This serves as a warmup for the formal description that follows later in the section. We do not believe this section requires significant changes. We offered to edit the informal exposition to circumvent the channel "flip." This essentially amounts to swapping the roles of X and Y in the discussion, and we do not believe it merits additional rounds of review.
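The complexity gap discussed above can be made concrete with a back-of-the-envelope comparison. The numbers below are our own illustration, assuming an i.i.d.-codebook scheme must handle on the order of $2^{nR}$ codewords while a polar-code scheme needs about $n \log_2 n$ operations; the rate $R = 0.5$ is arbitrary.

```python
import math

# Rough scaling comparison: codebook size of an i.i.d.-codebook scheme
# (~2^(nR), exponential in n) versus the ~n log2 n operations of a
# polar-code-based scheme. Illustrative numbers only.
R = 0.5
for n in (8, 64, 1024):
    codebook_entries = 2 ** int(n * R)   # exponential in n
    polar_ops = n * math.log2(n)         # quasi-linear in n
    print(f"n={n:5d}  codebook entries ~ {codebook_entries:.2e}  polar ops ~ {polar_ops:.0f}")
```

At $n = 1024$ the hypothetical codebook already has on the order of $2^{512}$ entries, while the quasi-linear scheme needs only about ten thousand operations.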
Summary: This paper considers the scalability problem in channel simulation. Channel coding, specifically polar coding, is introduced to improve the performance of channel simulation. The topic is interesting and the work is valuable.

Strengths: This paper uses error-correcting codes to improve channel simulation, with significant results.

Weaknesses: The paper stresses "fast" in the title, but the corresponding justification, or even a statement of it, is missing in the main body of the paper. Some notations are not well defined. For example, what is the meaning of performance? It seems to be the rate, but it also seems to be speed (according to the title of this paper). A table is needed to compare the results with the other works.

Technical Quality: 3
Clarity: 2
Questions for Authors: What is the meaning of the performance? How to justify "fast" in the title?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: Thank you for your feedback and comments. We have addressed specific comments and questions below.

> The paper stresses "fast" in the title, but the corresponding justification or even a statement is missing in the main body of the paper. Some notations are not well defined. For example, what is the meaning of performance? It seems to be the rate, but it also seems to be speed (according to the title of this paper).

> What is the meaning of the performance?

> How to justify "fast" in the title?

We are interested in both (data) rate and (execution) speed, and, per (1) and (3) in the introduction, the two are coupled: a faster algorithm allows one to handle larger $n$, which improves the rate. "Performance" refers to the rate $R_n$ defined in the introduction, i.e., the average number of bits transmitted divided by the number of times the channel is being simulated. We felt that the term "fast" in the title was self-explanatory given that $\mathtt{PolarSim}$ has $n \log n$ complexity while (quoting from the paper) "there are currently no known schemes that simulate any nontrivial class of channels with even subexponential complexity in $n$." The speed of the scheme is what enables the improved rates that we observe in Figure 3 (see Figure 2 of the attached PDF for the updated version).

---

Rebuttal Comment 1.1:

Title: I thank the authors for their comprehensive response.

Comment: The contributions should be clearly and explicitly stated and compared in the main body of the paper, rather than left self-explanatory, to be deduced by the readers. The efficiency should be compared with the other works in detail, in terms of both theoretical analysis and experiments.

---

Reply to Comment 1.1.1:

Title: Runtime Comparison

Comment: Thank you for your comment. The paper does include a theoretical comparison of the complexity: existing schemes such as PFRL and GPRS have complexity that is exponential in $n$, whereas PolarSim's is $n \log n$.
We did not compare the runtimes experimentally because we felt the vast gulf in their theoretical complexities rendered this unnecessary. But it is simple to perform such a comparison. For the setup in Fig. 3, these are the runtimes in seconds, averaged over 50 runs:

| p | GPRS | PolarSim |
|-------|--------|----------|
| 0.441 | 40.570 | 0.098 |
| 0.299 | 57.623 | 0.120 |
| 0.225 | 61.514 | 0.110 |
| 0.171 | 69.175 | 0.110 |
| 0.127 | 75.816 | 0.114 |
| 0.091 | 67.527 | 0.122 |
| 0.061 | 72.374 | 0.123 |
| 0.035 | 71.960 | 0.118 |
| 0.015 | 64.417 | 0.116 |
| 0.000 | 59.206 | 0.115 |

Here $p$ is the crossover probability of the channel. PolarSim operates directly on the $n = 1024$ copies of the channel (and takes about $0.1$ seconds per run to do so). For GPRS, we run the algorithm separately on $128$ blocks of size $8$ (so that effectively $n = 8$). This takes about $70$ seconds. Thus, PolarSim is over $600$x faster than GPRS while its data rate is significantly better, as shown in Fig. 3. The GPRS implementation has been optimized according to the suggestion from reviewer w9ei. For this particular setup, the GPRS implementation could be parallelized by running the $128$ blocks (or a subset thereof) simultaneously, which would reduce its runtime. On the other hand, the complexity of GPRS is still exponential in $n$, so running on $114$ blocks of size $9$ would result in a substantial slowdown. Such a table could easily be included in Fig. 3, alongside the existing plot. We hope this addresses the concern about the efficiency comparison.
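For completeness, the per-row speedups implied by the wall times in the table above can be tallied directly (a quick sanity check on the reported numbers, nothing more):

```python
# Average wall times in seconds from the table above (50-run averages).
gprs  = [40.570, 57.623, 61.514, 69.175, 75.816, 67.527, 72.374, 71.960, 64.417, 59.206]
polar = [ 0.098,  0.120,  0.110,  0.110,  0.114,  0.122,  0.123,  0.118,  0.116,  0.115]

# Per-configuration speedup of PolarSim over GPRS.
speedups = [g / p for g, p in zip(gprs, polar)]
print(f"min speedup: {min(speedups):.0f}x, max: {max(speedups):.0f}x")
```

Every configuration shows a speedup of several hundred times, consistent with the rough per-run figures quoted above.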
Summary: The paper studies the channel simulation problem, whose goal is to minimize the number of transmitted bits so that the decoder can generate an output according to a target distribution given the encoder's input. The paper proposes a channel simulation method, called PolarSim, based on polar codes. The rate of PolarSim approaches the mutual information lower bound by scaling up the block length, as theoretically proven and demonstrated by experiments.

Strengths:
1. The authors propose a novel method for channel simulation.
2. The paper is technically sound (to the best of my knowledge) and well-written.
3. The performance of the proposed method is impressive: the experiment results demonstrate that the proposed method achieves a rate approaching the theoretical bound, outperforming a quite recent method.

Weaknesses:
1. The smallest $n$ used in the experiments in the paper is $1024$. How would PolarSim perform for smaller $n$? Could the authors provide some figures in the supplementary material or at least comment on this?
2. In Figure 3, the authors mention that "Data for GRPS is only plotted for parameters where the algorithm consistently terminates in a reasonable amount of time." It is unclear what the authors mean by "a reasonable amount of time." Can the authors be more explicit about what they mean here?

Minor comments and typos:
1. The figures would be easier to read if the colors were included in a legend within the plot.
2. Line 55: Seems like "means that" should be "this means that."
3. In line 118, I would suggest the authors define the $F_2$ notation.
4. Line 124: "likeihood" should be "likelihood."
5. Line 125: "thatl" should be "that."
6. Line 227: ")" is missing.

Technical Quality: 4
Clarity: 4
Questions for Authors:
- Could the authors repeat their contributions in Conclusions so the contributions are clearer?
- Please see Weaknesses for a few more questions.
Confidence: 1
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors have explicitly addressed the limitations of their method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: Thank you for your encouraging assessment of our work and for the comments and feedback you have provided. We have addressed specific comments and questions below.

> The smallest $n$ used in the experiments in the paper is 1024. How would $\mathtt{PolarSim}$ perform for smaller $n$? Could the authors provide some figures in the supplementary material or at least comment on this?

In our rebuttal PDF, we have provided a plot of the redundancy (meaning the rate minus the mutual information) versus $n$ for the $\mathsf{BSC}$, $\mathsf{BEC}$, and the $\mathsf{AWGN}$ (please refer to Figure 1 of the attached PDF). There we see that the $\mathtt{PolarSim}$ scheme performs well even for small values of $n$. For comparison, we also plot the achievable rate from [Eq. (1), 1] (i.e., the upper bound), labeled in the plots as "PFR UB". We will include this in the camera-ready version of our paper, likely in the appendix.

> In Figure 3, the authors mention that "Data for GRPS is only plotted for parameters where the algorithm consistently terminates in a reasonable amount of time." It is unclear what the authors mean by "a reasonable amount of time." Can the authors be more explicit about what they mean here?

We were facing issues with getting the GPRS algorithm to terminate in high mutual information regimes during our experiments. However, we have followed the suggestions of reviewer w9ei to speed up the implementation of the GPRS algorithm, and as a result, this issue is no longer a concern. We have presented the updated plot for Figure 3 (please refer to Figure 2 in the attached PDF), which we will include in the camera-ready version of the paper.

> Could the authors repeat their contributions in Conclusions so the contributions are clearer?

Thank you for this suggestion. We will take this into consideration when preparing the camera-ready version.

> The figures would be easier to read if the colors were included in a legend within the plot.
Our concern here was potentially overcrowding our plots. That said, we will try our best to make the suggested change work and include it in the camera-ready version. We also thank you for pointing out certain errors and typos in our manuscript. We will be sure to incorporate these changes in our camera-ready version.

---

Rebuttal Comment 1.1:

Comment: Thank you for addressing my comments.
Rebuttal 1:

Rebuttal: We thank all the reviewers for taking the time to review our paper. Their feedback and suggestions will be valuable in helping us refine our work. Based on their comments, these emerged as the main areas for improvement for our paper:

* Studying the performance of $\mathtt{PolarSim}$ for small $n$.
* Simplifying our description in Section 3 to avoid any potential confusion regarding the directionality of the simulated channel.
* Our comparison plot (Figure 3 in the paper).
* Providing full details of our experiments for greater reproducibility.
* Investigating/fixing the convergence issues for GPRS.
* Providing full experimental details for the scheme presented in Appendix A.

We have detailed our strategy to tackle these issues in our responses to the individual reviewers. We have also addressed other questions and clarifications that were raised in the reviews.

# Supplementary PDF

We have also included a one-page PDF attachment that contains the following figures:

* Figure 1 contains plots of the normalized redundancy $R_n - I(X;Y)$ vs. the blocklength $n$ for three different channels.
* Figure 2 contains an updated plot of Figure 3 from the paper, based on the suggestions offered by Reviewer w9ei.

# References

1. Cheuk Ting Li and Abbas El Gamal, "Strong functional representation lemma and applications to coding theorems," *IEEE Transactions on Information Theory*, 2018.
2. Chris J. Maddison, Daniel Tarlow, and Tom Minka, "$A^*$ sampling," *NeurIPS*, 2014.
3. Eirikur Agustsson and Lucas Theis, "Universally quantized neural compression," *NeurIPS*, 2020.
4. Gergely Flamich, Stratis Markou, and José Miguel Hernández-Lobato, "Fast relative entropy coding with $A^*$ coding," *ICML*, 2022.
5. Lucas Theis and Noureldin Yosri, "Algorithms for the communication of samples," *ICML*, 2022.
6. Gergely Flamich, Stratis Markou, and José Miguel Hernández-Lobato, "Faster relative entropy coding with greedy rejection coding," *NeurIPS*, 2023.
7. Gergely Flamich, "Greedy Poisson rejection sampling," *NeurIPS*, 2024.
8. Buu Phan, Ashish Khisti, and Christos Louizos, "Importance matching lemma for lossy compression with side information," *AISTATS*, 2024.
9. Braverman and Garg, "Public vs private coin in bounded-round information," *ICALP*, 2014.
10. Sharang M. Sriramu and Aaron B. Wagner, "Optimal redundancy in exact channel synthesis," *arXiv preprint*, 2024.
11. J. T. Rolfe, "Discrete Variational Autoencoders," *ICLR*, 2017.
12. J. Fajtl, V. Argyriou, D. Monekosso, and P. Remagnino, "Latent Bernoulli Autoencoder," *ICML*, 2020.
13. E. Özyilkan, J. Ballé, and E. Erkip, "Learned Wyner–Ziv compressors recover binning," *ISIT*, 2023.
14. David S. Taubman and Michael W. Marcellin, "JPEG2000: Image Compression Fundamentals, Standards and Practice," *Springer*, 2002.

Pdf: /pdf/a6ffed38e2648bf9ca623504c900c4d50b82b74e.pdf
NeurIPS_2024_submissions_huggingface
2024
Efficient Adversarial Training in LLMs with Continuous Attacks
Accept (spotlight)
Summary: Large language models (LLMs) are vulnerable to adversarial attacks that can bypass their safety guardrails. However, current adversarial training methods for LLMs are hindered by the high computational costs required to perform discrete adversarial attacks at each training iteration. To solve this problem, the paper instead calculates adversarial attacks in the continuous embedding space of the LLM, which is orders of magnitude more efficient.

Strengths: The presentation of the article is clear. The proposed method requires less computational cost to obtain LLMs that are robust against adversarial attacks.

Weaknesses:
1. The main claim of the paper is that the proposed method can use less computational cost to obtain a robust LLM, and regarding this claim, only the experimental setting presented in Table 1 is provided, which may be insufficient. I believe that some more intuitive experiments should be added, such as comparative experiments on the real-time cost for a single example and for the entire process.
2. Although the paper is overall clear and the problem it intends to solve is well-defined, some details are not clear enough, especially the description of the proposed method's pipeline. For example, 'IPO' keeps appearing in the first two pages, but only in Section 3 are some descriptions of 'IPO' provided.
3. The hardware description in Section 4 is also unclear. I understand that this does not impact the main contributions of the paper, but a more explicit explanation of each experiment's settings would provide a clearer understanding for the readers. Additionally, totaling GPU hours across different GPUs may be unreasonable, as 1904 GPU hours on a V100 differ from those on 80GB A100 GPUs. Therefore, I hope the authors can further refine these details.
4. As LLAMA-2 is one of the most mainstream open-source LLMs, why have the experiments not been conducted on LLAMA-2 to evaluate the effectiveness of the proposed method?
5. Currently, the defense mechanisms for LLMs are not limited to R2D2, nor even to AT methods. I'm curious about how the proposed method compares with other methods, such as some methods mentioned in the related works.

Technical Quality: 2
Clarity: 3
Questions for Authors: See Weaknesses.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors have discussed limitations of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: We thank the reviewer for their insightful comments and address each point/question in turn:

**W1: The authors should conduct a controlled experiment to measure runtime differences between the approaches:**

**A1:** This is a valid concern, and we address it by measuring the wall time it takes to run $15$ steps of the algorithms in both cases, extrapolating from there also the total experiment time, which would have been prohibitively expensive for us to run in full. The wall times are in the same ballpark as our forward-pass comparison: on an A100, a single step of R2D2 took 489.7 times longer than a single step of C-AdvIPO. Extrapolating this result to the total number of steps considered for training R2D2, its complete training would have taken 1991 times longer with the implementation provided in the repository.

**W2: Important notation and methods are not always introduced at the right point in the paper:**

**A2:** Thank you for pointing this out! We will improve the writing further, for instance by better defining the term IPO earlier in the manuscript and in more detail.

**W3: The hardware descriptions are imprecise:**

**A3:** Roughly 800 hours were on 80GB A100s; the remainder were on the 40GB GPUs. Only 11.6 hours were used on V100 GPUs, which were only used for debugging.

**W4: Llama2 should be added as a baseline:**

**A4:** We thank the reviewer for the suggestion, as Llama2 is a popular model. We have trained Llama2 with the CAT loss successfully, which considerably improves the robustness of the model with minor degradations in utility. See the overall response and Figure 2 of the attached PDF for the results.

**W5: More defenses should be included in the evaluation:**

**A5:** We'd be happy to include any defenses the reviewer finds relevant. Could you point us to which defenses you would like to see?
In the meantime, let us discuss some that come to mind and why we didn't include them: many of the other defenses mentioned are orthogonal to our paper, such as perplexity filters on the input or toxicity filters on the output. Other papers, such as Rainbow Teaming, are both orthogonal and not reproducible, as the authors do not share the code/data/models necessary to do so (Rainbow Teaming, for instance, uses a Meta-internal helpful-only Llama model). Other works only apply to BERT-style models or don't focus on generation. We also want to note that it is standard practice to compare adversarial training approaches with each other, as the vast majority of empirical approaches have been broken by third-party evaluations, and adversarial training remains one of the only exceptions [4].

[4] Tramer et al., "On Adaptive Attacks to Adversarial Example Defenses," NeurIPS, 2020.

---

Rebuttal Comment 1.1:

Comment: Thanks for the responses. Most of my concerns have been addressed. However, I still have concerns about W5: the authors shared a series of similar works in the related work section, but failed to compare against them in the experimental section. This can easily lead readers to question the accuracy of the stated limitations of related works. Of course, I agree with the authors' explanation that some methods are orthogonal or irreproducible. Nevertheless, this does not substantially solve the above problem, although I acknowledge your explanation. I hope the authors can handle this issue more appropriately in the future. Hence, I maintain my score.

---

Reply to Comment 1.1.1:

Comment: Thank you for your feedback on our paper. We'd like to clarify our position: the primary contribution of our paper is to address a specific research question:

- Does adversarial training with continuous attacks in the token embedding space of an LLM provide robustness to discrete natural language attacks?
Prior works on adversarial training in NLP target different objectives, such as improved generalization or adversarial robustness in sentiment classification. These methods differ in several key aspects: previous work didn't consider threat models where the attacker has complete control of the input; instead, it generally enforced similarity to the initial text. This was done to keep the perturbation adversarial in nature (i.e., not semantically meaningful to a human observer). However, we focus on alignment and not classification, which results in substantially different threat models. Further, not all methods directly apply to decoder-only LLMs, and some only apply to pretraining, demanding unrealistic amounts of compute in the LLM setting. We want to emphasize that the scope of our paper is focused first on answering our research question and second on adversarially robust alignment algorithms. We believe the prior work mentioned above cannot be considered as baselines that should be compared against our algorithms, as extending such prior work to our setting would be a contribution by itself. Instead, we adopted two of the most common losses used in LLM training. However, we acknowledge that leveraging such prior work to improve the robustness of alignment is an interesting avenue for future work, and we will discuss this option in the outlook section of the final manuscript.
Summary: This paper introduces adversarial attacks in continuous space in the context of LLMs. In addition, it utilizes continuous attacks for adversarial training to robustify LLMs and demonstrates that this efficient training algorithm can indeed protect LLMs against various attacks, including discrete ones, while maintaining utility.

Strengths:
1. The proposed method in this paper is easy to understand, easy to implement, and generally applicable.
2. The proposed method dramatically improves the efficiency of adversarial training in the context of LLMs.
3. The proposed method does not hurt the utility of the model too much.

Weaknesses:
1. The experiments should be more comprehensive: (1) In Figure 2, the comparison between R2D2 and the proposed methods is only conducted on the model ZEPHYR-7B; comparisons on more models would be preferred. (2) The authors only test one safe answer, ''Sorry, I can't do that''. The safe answer $y$ is a very important variable in the loss objective function. Tests on more ''safe answers'' would make the results more convincing and more generally applicable. (3) The attack success rates are based on GCG, AutoDAN, and PAIR. Recently, there have been some stronger attacks proposed, such as LLM-adaptive attacks (https://github.com/tml-epfl/llm-adaptive-attacks); the authors should include more attacks for a more comprehensive evaluation. (4) Ablation studies should be conducted to support why IPO is better than DPO, as lines 161 to 162 claim.
2. The proposed methods introduce more hyper-parameters to tune, hindering their application for practitioners.
3. The presentation of the paper can be improved; for example, the formulation in lines 154 to 156 is very confusing, as $\mathcal{L}'$ is not defined.

Technical Quality: 2
Clarity: 2
Questions for Authors: The major concerns have been pointed out in the weakness part; the authors should first answer the questions in that section. In addition, we have some minor questions:
1. In lines 210 to 211, why do bigger models have smaller values of $\epsilon$? Is there any intuition?
2. Similar to TRADES [Zhang et al. 2019], in addition to the value of $\epsilon$, is it possible to add a coefficient to the last term of Equation (4) to balance the trade-off between robustness and utility?

Overall, the paper tackles an interesting problem. However, due to the weaknesses and the questions above, I believe the manuscript needs further editing to improve. I welcome the authors to address my concerns during the rebuttal and will re-evaluate the manuscript after the rebuttal.

> Post Rebuttal: I improved my rating to 5 after reading the authors' rebuttal.

Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The limitations and the societal impact are discussed at the end of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: We thank the reviewer for their insightful comments and address each point/question in turn:

**W1: R2D2 should be evaluated on more models.**

**A1:** Unfortunately, Zephyr R2D2 is the only available model trained with R2D2, and training more base models with R2D2 is prohibitively expensive (see Table 1 in the paper and the overall response). We have thus focused on showing that our method is able to achieve a good robustness/utility trade-off and compared it where we could to R2D2, outperforming it.

**W1.1: The authors should test their approach on more diverse safe answers:**

**A1.1:** We explored generating diverse safety answers to increase the utility-robustness trade-off in preliminary experiments but did not see any improvements. The common hypothesis is that a model is considered safe as long as it provides any safe answer to a harmful query [Zou et al., 2023]. As diverse toward answers did not improve our results, we opted for the simpler approach. Using multiple away targets (harmful affirmative responses) for every query appeared to be more impactful, which is why we trained with 12 different possible away responses $\hat{y}$ for every given query. Note that the safety response is never used during evaluation and is just a training objective in the algorithm.

**W1.2: Stronger attacks should be used for evaluation:**

**A1.2:** We thank the reviewer for the remark. These attacks were not in the original submission because they are contemporary to our submission (NeurIPS guidelines suggest that any paper published on arXiv less than 2 months before the submission deadline should be considered contemporary work). However, we believe the reviewer's suggestion to include these new attacks is a fantastic opportunity to showcase the breadth of the robustness of our model.
We added results for the simple adaptive attack and an ICL attack for as many models as we could run during the rebuttal and will complete the remainder for the final paper. Both attacks are highly effective against the base models but cannot break the adversarially trained versions, even when using $10000$ attack iterations and random restarts. We thank the reviewer for their suggestion, which we believe further strengthens our method's empirical performance. Please see the attached PDF Table 2 and overall response for the results. **W1.3: Ablation studies on the choice of the preference optimisation algorithm should be conducted**: **A1.3:** As the reviewer points out, several preference optimization (PO) losses could be used in our adversarial training approach. We conducted preliminary experiments using DPO and found that IPO leads to less overfitting to the toward response (where the model rejects every request). A full ablation of all the preference losses seems to us somewhat outside the scope of this paper, whose key contribution is to show that continuous robustness can robustify models against discrete attacks. However, we will add our results for DPO in the camera-ready version. **W2: The proposed methods introduce hyperparameters that need to be tuned**: **A2:** It is true that our method introduces a few more hyperparameters (epsilon, attack iterations, beta), but we believe tuning them is worth the improved robustness-utility trade-off. We acknowledge this limitation in the paper and also point out that the main baseline (R2D2) suffers from the same issue, with GCG having many more additional hyperparameters to tune. Moreover, it is common for adversarial training algorithms to require at least tuning of the epsilon and attack-iteration parameters. Most methods, such as TRADES, additionally include a regularization parameter to control the robustness-accuracy trade-off (beta in our case).
Lastly, due to the cost of fine-tuning LLMs, we were not able to conduct more than a few training runs for each model, which was sufficient to make the method work. Thus, we conclude that our approach is reasonably robust to hyperparameter choices. **W3: The presentation of the paper can be improved** **A3:** We thank the reviewer for their observation and added a description of $\mathcal{L}'$ (the actual loss used for optimization after applying the cutoff) to the paper. We also reviewed every equation and made sure all the variables are defined. **Q1: Why do bigger models require smaller values of $\epsilon$?** **Q-A1:** We thank the reviewer for pointing this out and are happy to provide a hypothesis for this observation. We argue that larger models have been trained for longer, which leads to token embeddings that contain more information. Thus, smaller perturbations have a larger effect on the model. To investigate this theory, we conducted a continuous $\epsilon$-ball attack ($\epsilon = 0.05$) on the Phi and Gemma base models. Even though the initial robustness of the Phi model is larger, its robustness decreases substantially faster under attack. These findings seem to support our intuition. See the attached PDF for loss curves. We will add these experiments to the final paper. **Q2: Is it possible to add a regularization parameter to the adv. training algorithm like in TRADES?** **Q-A2:** Thanks for the nice question. Controlling the trade-off between robustness and utility is indeed crucial. For C-AdvIPO, we ablate how robustness and utility depend on the $\beta$ parameter in Figure 3a of the paper. We demonstrate that small $\beta$ values increase robustness and decrease utility, which can be used to control the trade-off. Similarly, the trade-off can be controlled for C-AdvUL by choosing different weightings for the toward, away, and utility loss. Here, a lower utility loss weight positively impacts robustness while decreasing utility.
We included the weighting parameters in Table 2 of Appendix A but did not clearly define them in the paper. We apologize for this oversight and will include the parameters in Equation (4) of the camera-ready version. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed feedback. Most of my concerns have been addressed, so I am willing to improve my rating to 5. The revised manuscript should include stronger attacks (adaptive attacks) and more ablation studies. I agree with Reviewer Z2aE that the authors should highlight the differences between their proposed defence method and existing ones. --- Rebuttal 2: Comment: Thank you for your reply and feedback. We chose to evaluate our method with a subset of the strongest attacks available at the time of submission (according to HarmBench [1]). Beyond the ICL attack and the adaptive attack that we performed for the rebuttal, we will conduct additional evaluations with the approaches proposed in [2, 3] for the final manuscript. These attacks have shown high ASR against robust models. We will add our ablation study on the behavior of $\epsilon$ for models of different sizes to the final manuscript. We are open to any further suggestions on ablation studies that would further improve our work. We will add a more thorough discussion of the differences between previous approaches for encoder-decoder models and the presented adversarial training algorithm. More specifically, we will: - Elaborate on the different training settings, i.e., the post-training stage for LLMs vs. pre-training in encoder-decoder models - Discuss differences in the optimization target, i.e., robustness in classification settings vs. preference optimization or alignment fine-tuning - Discuss differences in the optimization goal, i.e., improved generalization vs. adversarially robust alignment. We want to emphasize that robustness in classification problems differs substantially from adversarially robust alignment.
In the classification setting, models are trained to be robust to minor input perturbations with respect to their prediction. In contrast, in the alignment setting, we are not concerned with the stability of predictions. Rather, we train the model to refrain from generating "toxic" outputs altogether. To achieve this, losses need to be adjusted for this task (such as combining different objectives or using preference optimization algorithms). None of the previous methods were designed for LLM alignment, and they cannot be employed for this task without further changes. Lastly, the majority of these approaches were used during the pre-training stage of encoder-decoder models. Applying these algorithms during the pre-training stage of an LLM exceeds our computational budget by orders of magnitude. [1] Zou et al., "HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal" (Feb. 2024) [2] Thompson T. and Sklar M., "Fluent Student-Teacher Redteaming" (July 2024) [3] Liao et al., "AmpleGCG: Learning a Universal and Transferable Generative Model of Adversarial Suffixes for Jailbreaking Both Open and Closed LLMs" (May 2024)
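The rebuttal thread above repeatedly refers to continuous $\epsilon$-ball attacks on token embeddings. As a minimal illustrative sketch (pure Python, not the authors' implementation; the function and variable names are hypothetical), the projection step that keeps an accumulated embedding perturbation inside an L2 ball of radius $\epsilon$ could look like:

```python
import math

def project_l2_ball(delta, eps):
    """Project a perturbation vector onto the L2 ball of radius eps.

    If the perturbation already lies inside the ball, it is returned
    unchanged; otherwise it is rescaled onto the boundary.
    """
    norm = math.sqrt(sum(d * d for d in delta))
    if norm <= eps:
        return list(delta)
    scale = eps / norm
    return [d * scale for d in delta]

# After each gradient ascent step on the embeddings, an epsilon-ball
# attack would re-project the accumulated perturbation:
delta = [0.3, 0.4]  # current perturbation, norm 0.5
projected = project_l2_ball(delta, 0.05)  # rescaled to norm 0.05
```

An unconstrained continuous attack, as in the sanity check discussed in the thread, would simply skip this projection step.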
Summary: This paper proposes adversarial training for LLMs, in which perturbations are created in the continuous embedding space rather than by finding discrete suffixes. The proposed fast adversarial training algorithm (AdvUL) consists of two losses: the first strengthens the model against continuous embedding attacks computed on an adversarial behavior dataset, and the second ensures the final model's usefulness by fine-tuning on utility data. Furthermore, the authors introduce C-AdvIPO, an adversarial variant of IPO that does not rely on utility data for adversarially robust alignment. The empirical evaluation of four models, Gemma, Phi3, Mistral, and Zephyr, at various scales, reveals that both algorithms improve LLM robustness against discrete attacks (GCG, AutoDAN, PAIR). Strengths: The strengths of this paper include: - The writing of this paper is clear, with neat formulas that quickly explain the proposed AT methods. Using the form of DPO/IPO in Eq. (5) seems reasonable to me. - The experimental settings are described in detail, and experiments are conducted comprehensively on different LLMs and datasets. - The empirical improvements in robustness against GCG, AutoDAN, and PAIR appear significant. Weaknesses: The weaknesses of this paper include: - The main difference between adversarial attacks/defenses on traditional models (e.g., CNNs) and those on LLMs is that there is no explicitly defined *threat model* when jailbreaking LLMs. Namely, the strategy to jailbreak an LLM can be quite flexible, and there is usually no constraint on human imperceptibility. AT is a strong defense under a given threat model (e.g., $8/255$, $\ell_{\infty}$), but AT is observed to generalize badly to unseen attacks. While I'm experienced with AT in traditional settings, I'm still not very convinced that AT could be a good solution for LLMs. - While GCG, AutoDAN, and PAIR are commonly evaluated attacks, they are relatively weak nowadays.
The authors are encouraged to evaluate their models against more advanced and diverse attacks, such as ICL-based [1,2] and/or decomposing-based [3]. - To investigate the limits of C-AdvUL and C-AdvIPO, there should be a sanity check that directly performs attacks in the continuous embedding space. - From my perspective, what C-AdvUL and C-AdvIPO do is connected to machine unlearning (i.e., unlearning the harmful knowledge), where [4,5] have used similar learning objectives. Their connections should be discussed and empirically ablated in more depth. Refs:\ [1] Many-Shot Jailbreaking\ [2] Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses\ [3] A False Sense of Safety: Unsafe Information Leakage in 'Safe' AI Responses\ [4] Negating Negatives: Alignment without Human Positive Samples via Distributional Dispreference Optimization\ [5] Negative Preference Optimization: From Catastrophic Collapse to Effective Unlearning Technical Quality: 3 Clarity: 3 Questions for Authors: Why are there no experiments done on the Llama series of models? Is there any reason for this design choice? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Please see Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
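The review above mentions the DPO/IPO form used in Eq. (5). As background, the plain (non-adversarial) IPO objective on a single preference pair, per the IPO paper by Azar et al., regresses the policy-vs-reference log-ratio margin onto a target of $1/(2\beta)$. A sketch with illustrative variable names (this is the standard IPO loss, not the paper's C-AdvIPO variant):

```python
def ipo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta):
    """IPO loss on one preference pair: the margin between the chosen and
    rejected responses' log-probability ratios (policy relative to a frozen
    reference model) is regressed onto the target 1 / (2 * beta)."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return (margin - 1.0 / (2.0 * beta)) ** 2

# A margin exactly at the target gives zero loss:
loss = ipo_loss(-1.0, -2.0, -2.0, -2.0, beta=0.5)  # margin 1.0, target 1.0
```

A smaller $\beta$ enlarges the target margin, pulling the policy harder toward the preferred (safe) response, which is consistent with the $\beta$ ablation discussed in the rebuttals.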
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments and address each point/question in turn: **W1: Is adversarial training suitable to robustify LLMs in the context of diverse threats?** **A1**: Thanks for initiating this discussion. We agree that the perturbation set in LLMs is much less constrained than in computer vision. However, both past work in computer vision [1, 2] and existing work on LLMs [3] indicate that variations of latent adversarial training can improve robustness against diverse threats. Our experimental results demonstrate that latent-space adversarial training extrapolates robustness well beyond the continuous attack we train on, e.g., to jailbreaking attempts and suffix attacks. Moreover, adversarial training is one of the only methods that has stood the test of time and delivered reliable robustness improvements. In contrast, the majority of heuristic preprocessing approaches and other techniques were later broken. Overall, we don't claim that AT can solve the problem entirely, but it presents a piece of the puzzle in robustifying the current generation of LLMs and is, therefore, worth studying. We are happy to engage in further discussions regarding this point. **W2: Could the authors include stronger attacks in their evaluation?** **A2:** Thanks for the references. These attacks were not in the original submission because they are contemporary with our submission (NeurIPS guidelines suggest that any paper published on arXiv less than 2 months before the submission deadline should be considered contemporary work). However, we believe the reviewer's suggestion to include these new attacks is an excellent opportunity to showcase the breadth of our model's robustness. We agree that a thorough evaluation is crucial in this domain. We have added two attacks: an adaptive attack (as proposed by **TJPv**, which has shown very strong results on open-source and proprietary models) and an ICL attack.
Our models increase robustness against both attacks considerably compared to the base models, with most adversarially trained models achieving 100% robustness. See the PDF Table 2 and the overall response for the full results. **W3: A sanity check should be performed with continuous embedding space attacks to explore the limits of the robustness of the models**: **A3:** We agree with the reviewer that this is an important sanity check and have added it as an experiment. Against unbounded attacks, all models exhibit 0% robustness; against epsilon-ball attacks, the adversarially trained models show higher robustness than the base models (see also the PDF and the general comment). **W4: The connection of adversarial training to machine unlearning should be highlighted**: **A4:** We thank the reviewer for the great suggestion! Machine unlearning in the face of adversarial attacks is indeed closely related to the setting we consider here, and we will amend the related work to highlight this. However, while the unlearning literature has proposed many new losses, such as NPO, they alone do not provide robustness to adversarial attacks (see the paper's Appendix B.2 Table 6 and the attached PDF Table 1). We do believe that future work may look at how our adversarial training method may be used in machine unlearning as well, but a full evaluation of this is beyond the scope of this paper. **Q1: Why are there no experiments done on the Llama series of models?**: **Q-A1**: No particular reason, except that its utility performance is worse than that of the models we considered; however, we have added Llama2 with our CAT method, which considerably increases the robustness of Llama2 (see the overall response and attached PDF Figure 2). (1) Casper et al., "Defending Against Unforeseen Failure Modes with Latent Adversarial Training" 2024 (2) Laidlaw et al., "Perceptual Adversarial Robustness: Defense Against Unseen Threat Models." ICLR, 2021 (3) Dai et al., "Formulating robustness against unforeseen attacks."
NeurIPS, 2022 --- Rebuttal Comment 1.1: Comment: I thank the authors for their responses and additional experiments, which addressed the majority of my concerns. I understand that stronger attacks like [1,2,3] are concurrent with the submission, so I didn't take this as a negative point. Nonetheless, I encourage the authors to conduct more comprehensive evaluations in their paper revision, particularly against attacks that differ from the (implicit) assumptions of C-AdvUL and C-AdvIPO. In conclusion, I believe this paper provides a good defense against jailbreaks, supported by extensive experiments. So I'd like to raise my score to 7. --- Rebuttal 2: Comment: We thank the reviewer for their quick response and for increasing their score! We agree that further attack evaluations would improve our work and will look into adding decomposition attacks. We welcome any further feedback to improve our paper and are thankful for your input.
null
null
Rebuttal 1: Rebuttal: We thank the reviewers for their efforts and suggestions. We included a PDF, which provides an overview of the new results. The following experiments have been added to the paper: 1. Training and evaluation of Llama2-C-AdvUL, which results in considerable robustness improvements. 2. Adaptive attack [1] and In-Context-Learning (ICL) attack. We demonstrate that our models are robust to both attacks. For the adaptive attack, we use the evaluation commands proposed in their GitHub repository and gpt-4o as a judge. 3. Continuous attack sanity check. An unconstrained continuous attack breaks all our models. Adversarially trained models are more robust against $\epsilon$-ball attacks. 4. NPO baseline. We added two NPO baselines and one additional IPO baseline without adversarial training. None of these models is more robust to adversarial attacks than its base model counterpart. 5. R2D2 vs. ours, wall time. On an A100, a single step of R2D2 took 489.7 times longer than a single step of C-AdvIPO. The complete training of R2D2 would have taken 1991 times longer with the implementation provided in the official repository. As a result, we are unable to run R2D2 on other models. **Open questions:** If we have failed to address any remaining concerns, we will be happy to engage in more discussion. (1) Andriushchenko, Maksym, et al. "Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks" (2024) Pdf: /pdf/f3b4b07cb786379818436ef3617552beb7172f91.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
BricksRL: A Platform for Democratizing Robotics and Reinforcement Learning Research and Education with LEGO
Accept (spotlight)
Summary: The paper presents a framework for training reinforcement learning policies on LEGO robots in the real world. BricksRL integrates the LEGO robotics hub, via PyBricks, with the reinforcement learning library TorchRL, providing an easy interface to implement and deploy RL algorithms. A robust infrastructure for robot-environment-algorithm communication is implemented for real-world deployment. A camera is integrated into the system beyond LEGO's standard sensor set, which expands the platform's capabilities. Experiments on various tasks with three LEGO robots (2Wheeler, Walker, and RoboArm) are demonstrated, validating the system's capabilities for real-world robotics applications. Ablations compare sim2real transfer with learning in the real world, and with or without camera sensor inputs. Strengths: Constructing robots with LEGO parts is low-cost and widely accessible. This paper demonstrates deploying reinforcement learning on LEGO robots, which is very inspiring for low-cost robot research. LEGO parts are modular and reusable, allowing the user to tailor robot designs to specific tasks. This provides a nice platform for cross-embodiment policy research, task-specific robot design, etc. The paper provides an effective solution for deploying reinforcement learning algorithms on low-cost robots by integrating PyBricks and TorchRL and implementing robot-environment-algorithm communication via a Bluetooth connection. The system is proven to be robust by deploying real-world reinforcement learning policies. Detailed experimental results on three LEGO robots are presented, showcasing the system's capability to train real-world RL policies on low-cost robots. Weaknesses: Because of the communication overhead of integrating the LEGO hub with BricksRL, the system frequency is only 11 Hz, which limits the system's ability to perform certain tasks such as dynamic manipulation.
It would be insightful for the authors to provide more discussion and ablations on the robustness of the system. The paper notes that the system has low communication speed, a lack of millimeter-precise construction, backlash, and noisy sensors. But does the policy learn to be robust to these noises during training? Or are there certain strategies the authors employ to deal with these issues? Technical Quality: 3 Clarity: 2 Questions for Authors: Which simulator is used? Is RoboArm-mixed-v0 trained only using real-world data? Perhaps because sim2real transfer with images is hard? Is the trained policy robust to slight deviations of the robot hardware during exploration, such as shifted parts, sensor drift, and system latency? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes, the authors addressed the limitations well. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Response Weaknesses: Thank you for your assessment of the weaknesses; we are happy to address these points to provide more clarity. We recognize the potential benefits of increased communication speed and are willing to improve this. We've contacted PyBricks directly to collaborate on enhancing communication rates, and they report having better protocols almost ready. Once implemented, they will be made available in BricksRL immediately. However, we hardly had any problems due to the slower communication speed. As described in the paper, the lower communication rate even led to more stable and faster learning for the walker robot. The policies learned were robust despite existing backlash and noise. For the real-world training, we did not take any measures to stabilize the learning, as it was not necessary. However, as mentioned in the paper, for the sim2real transfer with policies trained in simulation, we added noise to the actions to simulate backlash and sensor noise. This was sufficient to learn stable policies that could be evaluated successfully in the real world. Response Questions: - The robots were mainly trained online in the real world. We only developed two simulations to demonstrate the sim2real capabilities. Our approach to simulating LEGO robot models in BricksRL is straightforward but effective. In the two simulation environments, we simulate the transition dynamics by directly applying the actions to the simulated motor angles. This works well because our action space represents the delta angles for robot movement, allowing for a simple yet accurate simulation of the robot's behavior. - Yes, RoboArm-mixed-v0 is fully trained in the real world; we did not develop a simulator for this environment. The fact that LEGO can be used in this setting proves the value of the LEGO platform for RL research. - Yes, the trained policies are robust to different sensor noises and backlash.
Some sensor noises/deviations are more harmful than others. For example, we noticed that during training of RoboArm-mixed-v0 with image inputs, lighting conditions are very important and need to be stable during the whole training. However, this matters more specifically for the contour detection of the ball for reward calculation and not so much for policy learning. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my comments and questions. The idea and system setup of the paper are novel and inspiring, with sufficient experiments demonstrated. I have raised my score.
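The rebuttal above describes the simulation approach: actions are delta angles applied directly to the simulated motor angles, with noise added (for sim2real training) to mimic backlash and sensor inaccuracy. A minimal sketch of such a transition step, purely illustrative; the function name, joint limits, and noise model are assumptions, not BricksRL's actual code:

```python
import random

def sim_step(motor_angles, delta_angles, noise_std=0.0,
             limits=(-180.0, 180.0), rng=None):
    """One simulated transition: add delta-angle actions to the current
    motor angles, optionally perturb with Gaussian noise to mimic backlash
    and encoder inaccuracy, and clamp to (hypothetical) joint limits."""
    rng = rng or random.Random(0)
    lo, hi = limits
    next_angles = []
    for angle, delta in zip(motor_angles, delta_angles):
        a = angle + delta
        if noise_std > 0.0:
            a += rng.gauss(0.0, noise_std)
        next_angles.append(max(lo, min(hi, a)))
    return next_angles

# Noise-free step: each motor moves by exactly its delta angle.
next_state = sim_step([0.0, 90.0], [10.0, -5.0])  # → [10.0, 85.0]
```

Setting `noise_std > 0` during simulated training corresponds to the noise injection the authors describe for bridging the reality gap.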
Summary: This paper presents BricksRL, a system for using reinforcement learning within the context of LEGO robotics. The paper provides an overview of the setup and how TorchRL and PyBricks are used to interface with the LEGO robots. It also provides results that show the feasibility of using this system to train off-policy algorithms in the real world. Strengths: Originality: From my understanding, this is the first paper to combine the LEGO robotics setup that has been popular in education for a while with RL. The authors cite other efforts to create low-cost robotics, but even those are significantly more expensive than a simple LEGO setup, which further improves the originality in my mind as a first-of-its-kind, very low-cost, and flexible setup. Quality: The main claim the authors make in this paper is that their method is able to train agents on LEGO robots that perform well in the real world, which they do. It would have been nice to mix both off- and on-policy algorithms in the results, as well as maybe some offline RL, just to clearly show the versatility of the method. That being said, the claim that "agents can learn for LEGO RL" is clearly answered affirmatively (at least for off-policy algorithms) by their results. Significance: Major. On the surface, this paper might seem simple, as it is just combining LEGO with RL, but in my opinion the significance of low-cost robotics for RL is massive. Researchers with less funding could use this system for their work to test different algorithms on low-cost robots that are extremely easy to repair. The results are good enough that this system is clearly working and has the potential to address many accessibility issues with RL in robotics. Weaknesses: Clarity: The paper has no major errors but can be confusing, as it states things in the main paper that feel like implementation details that should likely go in the appendix, which made it harder to read.
Possible improvements: - I think the PyBricksHubClass paragraph feels out of place and a little confusing. This could probably be just a simple "PyBricks provides a class that..." and simply state the benefits. I don't think the fact that it uses BLE is important. - The BaseEnv paragraph also feels out of place. I think that putting it in there and saying that you can use a BaseEnv to create custom environments is redundant. The real benefit here is the second paragraph about environment transforms and other models. - The usage of TensorDict feels like just an implementation detail; while it might be critical, these three paragraphs in 3.2 all feel like they could be one or two sentences, and then you can move on and save more room for results. - A similar comment applies to Client Script and Robot-Environment Communication. They seem like implementation details that might even be completely fine in the appendix. But Communication Speed is a very interesting section that I really like (it addresses a possible limitation). Technical Quality: 3 Clarity: 2 Questions for Authors: Some of these questions are going to be in the unanswered limitations as well. - How long do these robots last? Do the motors wear down during training? Are they expensive to replace? - Could you use a wired connection for training? I realize it causes some issues, but would it increase the training Hz? - How much did the setups you used cost? Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 4 Limitations: The authors don't address many limitations in their paper, which I do wish they did more clearly. A few limitations, like communication frequency, are addressed here and there, but I think the limitations could be more clearly identified and discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Response Weaknesses: We greatly appreciate your valuable input and suggestions for improving the clarity of our paper. We are pleased to incorporate the proposals as far as possible in the final version of our paper. Response Questions: We're happy to provide more details about our LEGO robot setups: Robot longevity and motor wear: - Battery life varies depending on the robot's complexity and usage intensity. For example, our walker and RoboArm models typically operate for 3-4 hours on a single charge, but it is possible to provide power directly via a USB connection to a larger battery pack. - Static robots like the RoboArm can be connected directly to a power supply via USB for extended operation in training and execution. - Throughout all our experiments and tests over a couple of years, we haven't experienced any part failure, indicating good durability. If replacement is needed, individual motors cost between 40-60€. Wired connections for training: - This is possible for stationary robots using the USB connection. - We recognize the potential benefits of increased communication speed. PyBricks is already experimenting with better communication channels, and they will be made available in BricksRL immediately. Cost of setups: - Our setups utilize components that can largely be found in the LEGO Education kit and extension kit. The current price range for these sets is approximately 700€. Response Limitations: The main limitations of LEGO components are the tolerance of the gears, the precision of the encoders, and the torque of the motors. However, in our experiments, we have shown that BricksRL is capable of working around those. Our extensive testing and experiments with the LEGO robots have yielded very robust results. We further show this robustness in the sim2real experiments, where policies trained in idealized simulations were directly transferable to real robots with minimal issues.
Our approach of adding small amounts of noise to simulated observations to mimic backlash and sensor inaccuracies proved effective in bridging the reality gap. Regarding the limitation of communication speed, as mentioned above, PyBricks is working on better communication channels, and once available, those will be immediately included in BricksRL. --- Rebuttal Comment 1.1: Comment: Thank you for answering my questions and comments. I think your work provides novelty and impact for a broad range of researchers without access to expensive equipment. I'm going to leave my rating unchanged.
Summary: This work proposes a flexible and cost-effective platform named BricksRL based on LEGO builds, aiming to lower the cost of RL research and education. Experiments also demonstrate that LEGO robots can be trained within 120 minutes on normal computers to achieve simple tasks such as moving, walking, and grasping. An offline dataset is also provided for offline RL or imitation learning. Strengths: - Practical utility has been demonstrated by showing the deployment results of RL methods such as SAC, TD3, and DroQ on three LEGO robots. - This LEGO robot platform is low-cost and flexible, with clear instructions and open-source code. Weaknesses: - BricksRL's value as an educational platform is clear, but the scientific contribution is limited because no algorithmic innovation is proposed. - A dataset constructed with BricksRL might be of greater value for offline reinforcement learning and imitation learning. - The tasks that BricksRL supports are relatively simple, such as walking, spinning, and reaching. Technical Quality: 3 Clarity: 2 Questions for Authors: - In what way can LEGO robot models be converted to models that can be used in simulation? - Does BricksRL also support URDF model construction or integration into mainstream simulators like MuJoCo or Isaac Sim/Gym? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: As mentioned in Weaknesses, the scientific contribution of this work is limited. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Response Weaknesses: Thank you for your thoughtful feedback; we appreciate your recognition of the platform's practical utility and educational value. However, we respectfully disagree with the assessment that the scientific contribution is limited. We would like to emphasize several key points that demonstrate the significant scientific value of BricksRL: - BricksRL is the first platform to seamlessly integrate state-of-the-art reinforcement learning algorithms with hardware that is economical, modular, mass-produced, and easy to use, like LEGO. This allows everybody in the world to focus on RL algorithms while testing their ideas on real robots straightaway. There are over 1M LEGO hubs sold worldwide that are compatible with PyBricks and BricksRL as of today. We cannot emphasize enough how transformative this aspect is in the field of AI for robotics. - By enabling the creation of low-cost, customizable robots, BricksRL addresses a critical challenge in RL research: the ability to conduct reproducible experiments on multiple real robots at once. This scalability and reproducibility is crucial for advancing RL algorithms in real-world settings. - We have shown with BricksRL that complex RL algorithms like SAC, TD3, and DroQ can be successfully applied to train LEGO robots end-to-end in the real world within reasonable time frames. This is a necessary validation and demonstration of the platform towards making real-world RL more accessible and practical. - We have also demonstrated the ability to incorporate non-LEGO sensors to increase the platform's potential for broader applications in robotics research, beyond just LEGO components. LEGO itself might provide more sensors in the future. - By providing build plans and training results, BricksRL contributes to the reproducibility of RL experiments in robotics, a crucial aspect of scientific research that is often challenging in real-world settings.
BricksRL has clear educational benefits, but we have also created it for its research potential. We believe that BricksRL represents a significant step towards democratizing real-world RL research in robotics, addressing key challenges in the field such as cost, scalability, and the reality gap. Offline datasets: We have already generated valuable offline datasets with our LEGO robots. As part of our revised paper submission, we are making these datasets publicly available in the code repository, directly loadable in TorchRL, to facilitate research in offline RL. We have curated datasets for the three robot configurations: 2Wheeler, Walker, and RoboArm. These datasets encompass both expert and random data, specifically tailored to the tasks explored in our experiments. For the 2Wheeler, we offer datasets for the 'RunAway-v0' and 'Spinning-v0' environment tasks. The Walker robot dataset focuses on the 'Walker-v0' task, while the RoboArm robot dataset addresses the 'RoboArm-v0' task. The expert data for each dataset was collected as follows: we first trained a Soft Actor-Critic (SAC) agent to successfully complete the specific task. Upon achieving satisfactory performance, we evaluated the agent over 100 episodes per task, recording all transitions. For example, the evaluation process took approximately one hour for the Walker robot and yielded 10,000 expert transitions. To complement the expert data, we also generated random datasets by executing a random policy in the environment, similarly for 100 episodes per task. This process resulted in eight distinct datasets:
1. RunAway-v0-expert
2. RunAway-v0-random
3. Spinning-v0-expert
4. Spinning-v0-random
5. Walker-v0-expert
6. Walker-v0-random
7. RoboArm-v0-expert
8. RoboArm-v0-random
We successfully trained offline RL algorithms, including BC, TD3+BC, IQL, and CQL, on these datasets. 
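The expert/random collection procedure described above can be sketched as a simple rollout loop. This is an illustrative sketch, not the BricksRL code: `env` is assumed to follow the classic Gym `reset`/`step` API, and `policy` is any callable mapping observations to actions.

```python
def collect_transitions(env, policy, n_episodes=100):
    """Roll out a policy and record (obs, action, reward, next_obs, done)
    tuples, as done for the expert and random datasets described above."""
    transitions = []
    for _ in range(n_episodes):
        obs, done = env.reset(), False
        while not done:
            action = policy(obs)
            next_obs, reward, done, info = env.step(action)
            transitions.append((obs, action, reward, next_obs, done))
            obs = next_obs
    return transitions

# Expert data would use a trained SAC policy; random data a policy that
# samples uniformly from the action space, e.g. (hypothetical names):
#   expert_data = collect_transitions(env, trained_sac.act)
#   random_data = collect_transitions(env, lambda obs: env.action_space.sample())
```

Recording both expert and random rollouts with the same loop keeps the two dataset variants directly comparable for offline RL benchmarking.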
This allows researchers to compare the performance of various RL algorithms, both offline and online, when trained on expert demonstrations versus random interactions. Such comparisons can provide valuable insights into the effectiveness and adaptability of different RL methods across various data quality levels and robot configurations. Response Questions: Currently, our approach to simulating LEGO robot models in BricksRL is straightforward but effective. In our experiments, we simulated the transition dynamics by directly applying the actions to the simulated motor angles. This works well because our action space represents the delta angles for robot movement, allowing for a simple yet accurate simulation of the robot's behavior. BricksRL does not yet support URDF model construction, which would allow for more sophisticated simulations and easier integration with mainstream simulators like MuJoCo or Isaac Sim. However, a few LEGO CAD tools (e.g., https://www.leocad.org/) are available, with which one could build completely virtual robots using only LEGO parts and obtain the complete parts list. The LEGO ecosystem is, perhaps unsurprisingly, very rich. It would therefore be time-consuming but relatively straightforward to build simulated LEGO robots. We are considering this avenue of developing MuJoCo-based LEGO simulations in the future. This is, however, secondary to the outcome of this work: proving that RL research for robotics can be done with LEGO at all. --- Rebuttal Comment 1.1: Comment: Thank you for answering my previous doubts about the scientific contributions and simulation integration. The offline datasets you additionally provide will be of great value to the public for basic RL research. The real-world experiments are sufficient to prove the scalability and reproducibility of LEGO robots. I have re-evaluated this work and will raise my score accordingly. 
However, I still hope this work can be extended to support more complex manipulation tasks involving interaction with objects, to bring broader impact.
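The delta-angle transition dynamics the authors describe in the rebuttal above can be illustrated with a minimal sketch. All names and limits here are hypothetical, chosen only to show the idea of applying delta-angle actions directly to simulated motor angles; this is not the BricksRL API.

```python
class DeltaAngleSim:
    """Toy simulator: actions are per-motor delta angles (degrees) that are
    applied directly to the simulated joint state, clipped to a motor range.
    Illustrative only; the limits and naming are assumptions."""

    def __init__(self, n_motors=4, angle_limits=(-180.0, 180.0)):
        self.n_motors = n_motors
        self.angle_limits = angle_limits
        self.angles = [0.0] * n_motors

    def reset(self):
        self.angles = [0.0] * self.n_motors
        return list(self.angles)

    def step(self, delta_angles):
        # The "dynamics" reduce to a clipped delta update on each motor angle.
        lo, hi = self.angle_limits
        self.angles = [min(max(a + d, lo), hi)
                       for a, d in zip(self.angles, delta_angles)]
        return list(self.angles)

sim = DeltaAngleSim(n_motors=2)
sim.reset()
obs = sim.step([10.0, -200.0])  # second motor is clipped at the lower limit
```

Because the action space is already the state update, such a simulator stays close to the real robot's behavior, which is what makes the direct sim2real transfer reported in the paper plausible.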
Summary: This paper demonstrates that cheap LEGO robots can be trained in under 120 minutes on a laptop to perform simple tasks using sim2real approaches. The paper is motivated by making robot learning research more accessible for educational settings. A software framework called BricksRL is released as open source; it integrates PyBricks and TorchRL with Gym environments for controlling modular LEGO-based components, motors, and sensors. Strengths: * The paper is clearly written, with a strong motivation section and a sound implementation of RL algorithms. * The paper is accompanied by instructions on how to build LEGO robots paired with BricksRL for exploring RL/control/robot learning algorithms much more cheaply than with industrial robots. * The finding that simple tasks can be trained in a couple of hours on a normal laptop is compelling for educational experimentation. * LEGO robots are easily extensible, so one may envision quickly designing higher-degree-of-freedom robots for research. Weaknesses: * The core contributions of this paper are not technically novel: no new algorithms are introduced, and BricksRL is a standard Gym framework for RL training. * The tasks demonstrated are relatively simple (walking, spinning, reaching). Hence, it is not clear how much value this framework provides beyond educational experimentation in simulation. * The RL algorithms benchmarked may not reflect current state-of-the-art approaches, though this is not the prime focus of this paper. The variance in the plots seems to be extremely high. Technical Quality: 3 Clarity: 3 Questions for Authors: - The paper is currently lacking details on the policy architectures used for training. Please add them. - Are any of the policies vision-based? As the number of parameters increases, the training time will also increase. This will stress the reliability of the LEGO hardware, possibly causing breakages that may complicate learning on more complex tasks. 
Some expanded discussion on this would be helpful. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, to some extent. One issue with low-cost robots is easy wear and tear, precluding large-scale data collection. The presence of backlash and noisy sensing may also limit experimentation for more precise tasks. Please expand more on the limitations in this regard. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Response Weaknesses: - Our primary contribution lies in providing accessible, low-cost hardware integration for real-world robotics experimentation. Our goal is to lower the barrier to entry for robotics research and education, enabling a wider audience to engage with physical robots beyond simulations. LEGO parts are widely available, robust, and reproducible because they are mass-produced. We are not aware of any other robotic platform with such characteristics, making this, to our knowledge, the first platform of its kind. - The demonstrated tasks serve as proof-of-concept examples to validate the framework's functionality. It is not at all obvious that LEGO can be used for RL tasks, as it was not designed for that: the gears have generous tolerances, the stepper motors offer encoders with a precision of only about 1 degree, and structurally the robots are made of bricks. In this paper, we show that all this hardware can be used effectively for RL. - Our focus is on demonstrating the framework's compatibility with a range of algorithms rather than pushing the boundaries of RL performance, given that this is a totally new platform. The outcome is a validated platform for the democratization of robotics. We made sure to implement the RL framework using the official RL library from PyTorch, which allows direct usage of all state-of-the-art RL algorithms. Since submission, we have already expanded our example experiments to include several offline RL algorithms, such as BC, TD3+BC, IQL, and CQL, trained on collected datasets. We have also integrated pre-trained foundation models like the VIP transform from TorchRL, which can be used to provide reward signals for sparse-reward tasks and to label hand-collected datasets that lack a reward signal. VIP can enable simple, few-shot offline RL on real-world robotic tasks with low data quantities. We are making these examples and the offline datasets available with the revised version of the paper. 
- The high variance in the plots likely stems from the noisy nature of real-world robotics. This variability is itself an interesting area for future research, investigating robust learning methods in noisy, real-world environments. Response Questions: We apologize for the lack of details on policy architectures in the current version of the paper. We will add this information in the appendix; all examples are also available in full detail in the GitHub repository. Our default architecture for both policies and Q-functions is a multilayer perceptron (MLP) with three linear layers and ReLU activation functions. We also incorporate dropout and layer normalization specifically for the DroQ algorithm. - For the vision-based environment (roboarm-mixed-v0), we employ a hybrid architecture that combines image observations with state observations of the robot angles. Specifically, image observations are processed through a convolutional neural network (CNN), while state observations are encoded using an MLP. The resulting image and state embeddings are concatenated and passed through a final output head. We have added a detailed description of this architecture in the appendix of the paper. - Regarding the concern about increased training time and potential stress on LEGO hardware: we have run the same LEGO components for almost two years while preparing this paper, with zero faulty parts. LEGO parts are designed for children and mass-produced, so they do not break easily. Response Limitations: Thank you for raising these important points about the limitations of low-cost robots. The main limitations of LEGO components are the tolerance of the gears, the precision of the encoders, and the torque of the motors. However, in our experiments, we have shown that BricksRL is capable of working around those. Our extensive testing and experiments with the LEGO robots have yielded very robust results. 
Throughout our trials, we did not encounter a single failure of any motor or gear, which speaks to the reliability of these components even under repeated use. Regarding backlash and sensor noise, these factors are certainly present in our setup; however, their impact on policy learning and execution has not been significant. This is evidenced by our successful sim2real experiments, where policies trained in idealized simulations were directly transferable to real robots with minimal issues. Our approach of adding small amounts of noise to simulated observations to mimic backlash and sensor inaccuracies proved effective in bridging the reality gap. The architecture ensures that the computational load of training and inference remains separate from the physical robot, mitigating concerns about hardware reliability or potential breakages due to increased model complexity. The LEGO components are primarily involved in the physical execution of actions and the collection of observations, tasks which are not affected by increases in model size. We have added this information to the main manuscript.
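The hybrid vision/state architecture described in the rebuttal above (a CNN for image observations, an MLP for joint-angle observations, concatenated embeddings fed to a shared output head) can be sketched in PyTorch. Input resolution, layer widths, and the tanh action squashing are assumptions for illustration; the exact BricksRL architecture is in the paper's appendix and repository.

```python
import torch
import torch.nn as nn

class HybridPolicy(nn.Module):
    """Sketch of a hybrid image+state policy. Shapes are assumed:
    64x64 RGB images plus a small vector of joint angles."""

    def __init__(self, n_angles=4, n_actions=4):
        super().__init__()
        # CNN branch for image observations.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # Infer the CNN embedding size with a dummy forward pass.
        cnn_out = self.cnn(torch.zeros(1, 3, 64, 64)).shape[1]
        # MLP branch for the joint-angle state observations.
        self.state_mlp = nn.Sequential(nn.Linear(n_angles, 64), nn.ReLU())
        # Shared head over the concatenated embeddings; tanh keeps the
        # (delta-angle) actions in [-1, 1] before environment scaling.
        self.head = nn.Sequential(
            nn.Linear(cnn_out + 64, 128), nn.ReLU(),
            nn.Linear(128, n_actions), nn.Tanh(),
        )

    def forward(self, image, angles):
        z = torch.cat([self.cnn(image), self.state_mlp(angles)], dim=-1)
        return self.head(z)

policy = HybridPolicy()
action = policy(torch.zeros(2, 3, 64, 64), torch.zeros(2, 4))
```

Computing the CNN embedding size from a dummy forward pass lets the sketch adapt to other image resolutions without hand-computing convolution output shapes.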
Rebuttal 1: Rebuttal: We have added this subsection (PDF) to the paper. Pdf: /pdf/1b89065b0855f7f68d433abcc5f55ea1967373e8.pdf
Dataset source: NeurIPS_2024_submissions_huggingface
Conference year: 2024